Automate DAX UDF With Semantic Link Labs
DAX UDF + Semantic Link Labs = 🚀🚀🚀
Principal Program Manager, Microsoft Fabric CAT, helping users and organizations build scalable, insightful, secure solutions. Blog posts and opinions are my own and do not represent my employer.
DAX user-defined functions (UDFs), announced at FabCon, are one of the biggest updates to the DAX language in recent years. You can read more about them in the official Microsoft docs and on SQLBI. In this blog, I share how you can use Semantic Link Labs (SLL) to automate the process of defining and centralizing UDFs. Note that DAX UDFs are still in preview, so read the official documentation for all the details and limitations.
Defining a UDF
The latest SLL release, 0.12.4, includes set_user_defined_function (thanks to my colleague Michael Kovalsky), which defines a UDF via the Tabular Object Model (TOM). Here is how you use it:
# %pip install semantic-link-labs --q  # upgrade to 0.12.4

import sempy_labs as labs
from sempy_labs.tom import connect_semantic_model

dataset = ''      # name or ID of your semantic model
workspace = None  # name or ID of the workspace in which the semantic model resides

name = 'AddTax'                                    # name of the user-defined function
expression = "(amount : NUMERIC) => amount * 1.1"  # expression logic of the user-defined function

with connect_semantic_model(dataset=dataset, readonly=False, workspace=workspace) as tom:
    tom.set_user_defined_function(name=name, expression=expression)

# List user-defined functions
df = labs.list_user_defined_functions(dataset=dataset, workspace=workspace)
display(df)
Using DAX Lib
SQLBI hosts an open-source collection of UDFs at DAX Lib, submitted by users and community members. You can apply any of these UDFs via TMDL in Power BI Desktop or Tabular Editor 3 (TE3). We can also use SLL to pull any of those functions straight from DAX Lib's repo. Below I use the Mode function my friend Kerry submitted.
In the repo, the functions are defined in TMDL, so the function below (written with an LLM's help) extracts the name and the expression using regex.
import re
from typing import Tuple, Optional

# thanks to an LLM for the below
def extract_dax_function(text: str) -> Tuple[Optional[str], Optional[str]]:
    """
    Returns (function_name, expression_block).
    - function_name: between 'function' and '='
    - expression_block: after '=' up to (but not including) the first 'annotation'
    """
    udf_name = re.search(r"function\s+['\"]?\s*([^'\"=\s]+)\s*['\"]?\s*=", text, re.IGNORECASE)
    if not udf_name:
        return None, None
    func_name = udf_name.group(1).strip()

    udf_expr = re.search(r"=\s*(.*?)\bannotation\b", text, re.IGNORECASE | re.DOTALL)
    if udf_expr:
        expr = udf_expr.group(1).rstrip()
    else:
        m_after_eq = re.search(r"=\s*(.*)$", text, re.DOTALL)
        expr = m_after_eq.group(1).rstrip() if m_after_eq else None

    return func_name, expr
import requests

url = "https://raw.githubusercontent.com/daxlib/daxlib/refs/heads/main/packages/k/kolosko.summarystats/0.1.0/lib/functions.tmdl"
mode_udf = requests.get(url).text
print(extract_dax_function(mode_udf)[1])
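As a quick local sanity check, the same two regexes can be applied to a minimal inline TMDL snippet (the function name and body below are made up for illustration):

```python
import re

# A minimal, made-up TMDL snippet shaped like the DAX Lib entries
tmdl = """function 'Demo.AddTax' =
        (amount : NUMERIC) => amount * 1.1

    annotation DAXLIB_PackageId = Demo.AddTax
"""

name_match = re.search(r"function\s+['\"]?\s*([^'\"=\s]+)\s*['\"]?\s*=", tmdl, re.IGNORECASE)
expr_match = re.search(r"=\s*(.*?)\bannotation\b", tmdl, re.IGNORECASE | re.DOTALL)

print(name_match.group(1))          # → Demo.AddTax
print(expr_match.group(1).strip())  # → (amount : NUMERIC) => amount * 1.1
```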
Next we take that and apply it to a published semantic model.
name, expression = extract_dax_function(mode_udf)

dataset = "Sales_udf_sandeep"  # dataset name/ID
workspace = None  # None if the notebook is in the same workspace as the dataset, else workspace name/ID

with connect_semantic_model(dataset=dataset, readonly=False, workspace=workspace) as tom:
    tom.set_user_defined_function(name=name, expression=expression)

udf_df = labs.list_user_defined_functions(dataset=dataset)
display(udf_df)

Let's check if it actually works:
import sempy.fabric as fabric

dax = """
EVALUATE
{ Kolosko.SummaryStats.MODE(Sales, Sales[Net Price]) }
"""
fabric.evaluate_dax(dax_string=dax, dataset=dataset)


Centralizing UDFs
We can extend this further by automating the process. If you maintain one or more semantic models with all of your UDFs defined, SLL can read those UDFs and apply them to a list of target models.
In the function below:
It checks the compatibility level first
If it's below 1702, it upgrades it to 1702
It waits 5 seconds for the update to go through
It applies the UDFs from the source semantic model to the target semantic model(s)
import time

def set_udf_with_compatibility_check(tom, name, expression, max_retries=1):
    """
    Set a UDF, automatically upgrading the compatibility level if needed.
    """
    for attempt in range(max_retries + 1):
        try:
            tom.set_user_defined_function(name=name, expression=expression)
            print(f"Successfully set UDF: {name}")
            return True
        except ValueError as e:
            if "compatibility level of at least 1702" in str(e) and attempt == 0:
                print(f"Compatibility level error for {name}. Upgrading to 1702...")
                tom.set_compatibility_level(1702)
                time.sleep(5)  # wait for the upgrade to go through
                continue  # retry
            else:
                print(f"Failed to set UDF {name}: {e}")
                return False
        except Exception as e:
            print(f"Unexpected error setting UDF {name}: {e}")
            return False
    return False
dataset = 'order_analysis'
workspace = None

with connect_semantic_model(dataset=dataset, readonly=False, workspace=workspace) as tom:
    for _, row in udf_df.iterrows():
        set_udf_with_compatibility_check(tom, row['Function Name'], row['Expression'])
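Because set_udf_with_compatibility_check only needs an object exposing set_user_defined_function and set_compatibility_level, you can exercise the retry path locally before pointing it at a real model. FakeTom below is a hypothetical stand-in (not part of SLL), and the function is repeated here with a shortened sleep so the snippet runs standalone:

```python
import time

class FakeTom:
    """Hypothetical stand-in for the TOM wrapper, for local testing only."""
    def __init__(self, level=1700):
        self.level = level
        self.udfs = []

    def set_user_defined_function(self, name, expression):
        if self.level < 1702:
            raise ValueError("This operation requires a compatibility level of at least 1702.")
        self.udfs.append(name)

    def set_compatibility_level(self, level):
        self.level = level

def set_udf_with_compatibility_check(tom, name, expression, max_retries=1):
    # Same retry logic as above, with the 5-second sleep shortened for testing
    for attempt in range(max_retries + 1):
        try:
            tom.set_user_defined_function(name=name, expression=expression)
            return True
        except ValueError as e:
            if "compatibility level of at least 1702" in str(e) and attempt == 0:
                tom.set_compatibility_level(1702)
                time.sleep(0.1)
                continue
            return False
    return False

tom = FakeTom()
ok = set_udf_with_compatibility_check(tom, "AddTax", "(amount : NUMERIC) => amount * 1.1")
print(ok, tom.level, tom.udfs)  # → True 1702 ['AddTax']
```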
This could be a powerful pattern. You could take several approaches to centralizing the UDFs:
Define a set of golden/centralized semantic models in a workspace with UDFs defined.
Bring these models into other models as composite models.
Or use notebooks to continuously update and apply the UDFs to downstream models.

Below is another approach:
Using notebooks for cataloging
I wrote a blog a while ago on getting daylight-saving-adjusted US time. I converted that logic to a UDF. The use case here is to make it available to all users so that, based on their specified time zone, they get the daylight-saving-adjusted US time.
( tz: STRING ) =>
VAR _utcnow = UTCNOW()
VAR _todayUTC = TRUNC(_utcnow)
VAR _y = YEAR(_todayUTC)
/* US DST window: 2nd Sun Mar to 1st Sun Nov (UTC-date granularity) */
VAR _mar1 = DATE(_y, 3, 1)
VAR _nov1 = DATE(_y,11, 1)
VAR _secondSunMar = (_mar1 + MOD(8 - WEEKDAY(_mar1), 7)) + 7
VAR _firstSunNov = _nov1 + MOD(8 - WEEKDAY(_nov1), 7)
VAR _isDST = _todayUTC >= _secondSunMar && _todayUTC < _firstSunNov
/* offsets (hours to ADD to UTC) */
VAR _std =
SWITCH(TRUE(),
tz="useastern", -5,
tz="uscentral", -6,
tz="usmountain", -7,
tz="usarizona", -7,
tz="uspacific", -8,
tz="usalaska", -9,
tz="ushawaii", -10,
-8 // default
)
VAR _dst =
SWITCH(TRUE(),
tz="useastern", -4,
tz="uscentral", -5,
tz="usmountain", -6,
tz="usarizona", -7, /* no DST */
tz="uspacific", -7,
tz="usalaska", -8,
tz="ushawaii", -10, /* no DST */
-7 // default
)
VAR _usesDST = NOT (tz IN {"usarizona","ushawaii"})
VAR _offsetHours = IF(_usesDST && _isDST, _dst, _std)
RETURN
_utcnow + (_offsetHours / 24.0)
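The 2nd-Sunday-of-March / 1st-Sunday-of-November arithmetic is easy to get wrong, so here is a small cross-check of the same window computation in plain Python (standard library only, nothing Fabric-specific):

```python
from datetime import date, timedelta

def us_dst_window(year: int):
    """Mirror the DAX MOD/WEEKDAY arithmetic: returns the start (2nd Sunday
    of March) and end (1st Sunday of November) of the US DST window."""
    def first_sunday_on_or_after(d: date) -> date:
        # Python: Monday=0 .. Sunday=6, so (6 - weekday) % 7 days until Sunday
        return d + timedelta(days=(6 - d.weekday()) % 7)

    second_sun_mar = first_sunday_on_or_after(date(year, 3, 1)) + timedelta(days=7)
    first_sun_nov = first_sunday_on_or_after(date(year, 11, 1))
    return second_sun_mar, first_sun_nov

print(us_dst_window(2025))  # → (datetime.date(2025, 3, 9), datetime.date(2025, 11, 2))
```

For 2025 this yields March 9 and November 2, matching the actual US DST transition dates.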
You can use Fabric notebooks for documentation and embed them into an org app so self-service users can browse the list of available UDFs. Users can copy and paste the UDFs they need. In the catalog you can add all the relevant details users need to understand and use the functions. If you want to make it fancy, you can also include working examples, thanks to Semantic Link Labs!
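As a minimal sketch of what such a catalog page could look like, the rows returned by list_user_defined_functions can be rendered as a markdown table (the sample rows below are made up; the column names match the DataFrame shown earlier):

```python
# Hypothetical rows, shaped like the output of labs.list_user_defined_functions
udfs = [
    {"Function Name": "AddTax", "Expression": "(amount : NUMERIC) => amount * 1.1"},
    {"Function Name": "Kolosko.SummaryStats.MODE", "Expression": "..."},
]

def to_markdown_catalog(rows) -> str:
    """Render UDF rows as a markdown table for a notebook/org-app catalog page."""
    lines = ["| Function | Expression |", "|---|---|"]
    for r in rows:
        lines.append(f"| {r['Function Name']} | `{r['Expression']}` |")
    return "\n".join(lines)

print(to_markdown_catalog(udfs))
```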


These are just some ideas. DAX UDFs are super powerful, and automating and centralizing them will make them even more useful and powerful.