Saturday, February 7, 2026

Have you ever tried to figure out which cloud flow or workflow updated a record, but the Modified By field just says a user name (or service account)?

In my applications, I will regularly have over a hundred flows, business rules, integrations, canvas pages, and plugins that update records in Dataverse. Up until a few years ago, it was frustrating when a client would call because some odd behavior was occurring and they could not find the source. Sometimes you could go to Make.powerapps.com and open the default solution to see if the field in the table had a dependency, but that is...not dependable. 

My solution is that I created a "Fingerprint" field in every table, and when any automation or integration creates or updates a record, it also populates the Fingerprint field with its name. Now I can look at the Audit History of any record and see what process is making updates. If you have a better option, please share it!
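
For example, here is a minimal sketch of what the canvas-app side could look like. The table, control, and column names (including ecc_fingerprint) are hypothetical placeholders for your own schema; each flow, plugin, or integration would stamp the same column with its own name.

// Canvas app: stamp the Fingerprint column with the app/control name on every update.
// "Orders", Gallery1, ecc_status, and ecc_fingerprint are hypothetical - adjust to your schema.
Patch(
    Orders,
    Gallery1.Selected,
    {
        ecc_status: "Approved",
        ecc_fingerprint: "Canvas: Order Approval / btnApprove.OnSelect"
    }
)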


I have been learning how to build Power Platform Canvas apps and have some things I would like to share. 

I have a client that needs a model-driven-like view inside a canvas page. After poking around, I found that the PowerCAT Creator Kit contains a "DetailsList" control, which is about as close to the user experience of a view (sort/filter/drill down) in a model-driven app as I have found to date (Releases · microsoft/powercat-creator-kit).

First, thanks to Scott Durow (https://www.youtube.com/scottdurow) for making some nice videos on the topic - highly recommend. 

My use case is a bit complex, but here is my cut-down version: I have several tables in Dataverse that need some fancy join logic to get the right values, so I decided to use Power Automate and custom FetchXML against Dataverse to get my data into a collection in my canvas page (story for another day). I need to present the data to the user as a view and let them click on selected cells to pass to another process. The DetailsList can format selected columns as "links", where I can customize the behavior when a user clicks one.
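
As a rough sketch of the canvas side of that pattern: the flow runs the FetchXML, returns the joined rows as a JSON string, and the page shapes them into a typed collection. The flow name (GetJoinedRows), its output property (rowsjson), and the column names below are hypothetical.

// Run the (hypothetical) flow and capture its JSON text output.
Set(varRows, GetJoinedRows.Run().rowsjson);

// Convert the untyped JSON array into a typed collection the DetailsList can bind to.
ClearCollect(
    colResults,
    ForAll(
        Table(ParseJSON(varRows)),
        {
            ecc_name: Text(ThisRecord.Value.ecc_name),
            ecc_25: Value(ThisRecord.Value.ecc_25)
        }
    )
)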

My collection has text and many columns of numbers that range from 0-15 with up to 3 decimal places. The client wants to see the numbers with all 3 decimal places and right justified. The only way to do that (that I have found so far) is to convert the displayed data with an expression like Right("_" & Text(value, "#0.000"), 6). However, that breaks the OOTB column sorting feature in DetailsList, because it sees the column values as text, so numeric values appear to sort lexicographically (e.g., "1", "10", "2", "20") rather than numerically (and, if you pad with spaces instead of an underscore, the values still end up left justified because the leading spaces appear to be trimmed).

My workaround is to add a 'formatted' column (using the above expression) for each numeric column. For example, if I have a numeric column named "ecc_25", then I will do this:

AddColumns( collection, ecc_25_formatted, Right("_" & Text(ecc_25,"#0.000"),6)) 

So now there is one numeric column in the collection that can be sorted correctly, and a formatted version that I added to the fields of the DetailsList control. In my case, all my values are links, so they appear underlined when rendered, and my "_" prefix is not obvious to the user.
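
For reference, here is roughly how the formatted column gets registered in columns_Items as a link cell. This is a sketch based on my setup; double-check the exact column property names (ColName, ColCellType, and so on) against the Creator Kit documentation for your version.

Table(
    {
        ColName: "ecc_name",
        ColDisplayName: "Name",
        ColWidth: 150
    },
    {
        // The formatted text copy is what the user sees and clicks as a link;
        // the underlying numeric ecc_25 column stays in the collection for sorting.
        ColName: "ecc_25_formatted",
        ColDisplayName: "Score",
        ColWidth: 80,
        ColCellType: "link",
        ColSortable: true
    }
)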

Then in the DetailsList OnChange event I have this:

If(
    Self.EventName = "Sort",
    UpdateContext(
        {
            ctxSortCol: If(("formatted" in Self.SortEventColumn), Left(Self.SortEventColumn,6), Self.SortEventColumn),
            ctxSortAsc: If(
                Self.SortEventDirection = 'PowerCAT.FluentDetailsList.SortEventDirection'.Ascending,
                true,
                false
            )
        }
    )
);

Note that ctxSortCol expects the name of the sort column as a string, so I trim off the "_formatted" part of the column name, and then the underlying numeric column is used and the DetailsList sorting comes out correctly.
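
To close the loop, the Items property of the DetailsList is typically where the sort actually gets applied, using those context variables. Here is a minimal sketch of what that could look like, reusing the AddColumns expression from above and assuming ctxSortAsc is true when the sort should be ascending:

// Items property: add the formatted column, then sort by whichever column the user clicked.
// ctxSortCol holds the real (numeric) column name, so the sort is numeric, not lexicographic.
SortByColumns(
    AddColumns(collection, ecc_25_formatted, Right("_" & Text(ecc_25, "#0.000"), 6)),
    ctxSortCol,
    If(ctxSortAsc, SortOrder.Ascending, SortOrder.Descending)
)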

I posted a feature request on GitHub asking them to add a text format property to each column (defined in columns_Items) and a justification property (left/right/center). If you like this idea, please add your comments: DetailsList column sorts formatted numeric values lexicographically: workaround + feature request · Issue #389 · microsoft/powercat-code-components

Monday, September 16, 2024

Power Platform (MS CRM/CE) Business Rule Best Practices

Here are the top things I would do when creating a business rule:
  1. Set the Scope to a specific form (unless there is a really good reason to make it Entity scope)
  2. Make it affect only one field on a specific form
  3. Give it a name using the one form / field it is tied to
  4. Give it a description that explains WHY it is necessary, not a description of what it does
  5. Include the inverse action (ex: lock AND unlock) after the condition
I recommend using a scope that is specific to a form, because it is common to have multiple forms, and if you set the scope to All Forms or Entity, then you run the risk of the logic producing unexpected results.

The reason to make it affect only one field is that it is very common for someone to remove a field from a form without realizing that the field is part of a business rule, which causes the entire business rule to stop running. If you have a complex business rule that updates multiple fields, then you run the risk that future form maintenance will break the rule. By making a separate business rule for each field, you can very clearly "declare" that one field's behavior separately from every other field on the form, so removing a field from the form will not break all the other logic on the form. This is the same way that Canvas apps operate: a declarative approach instead of a procedural one.

Naming the business rule something like "Main Form Account Sync locked" is more meaningful than trying to invent a name that describes the logic that affects 10 fields. 

A description should inform the person that is maintaining the code "why" you have the business rule, for example "After the account is synchronized, this field is read-only so that users don't try to change it back to No". Imagine trying to write a useful description for multiple fields. 

The reason to cover both branches of the condition is that (in most cases) a user will start to enter a value in a field, which triggers the business rule, and then change their mind and want to choose a different option; if there is no alternate action to 'undo' what was set in the Yes condition, then the user is stuck.

Wednesday, August 21, 2024

Power Automate Flows Trigger Multiple Times - a proposed workaround



Revised 2/7/26

Every now and then, I get a Power Automate flow that seems to trigger more than once. Most of the time it is not a problem, but sometimes the runs execute in parallel and end up creating duplicate records, which is undesirable. It happens most often when a Dataverse trigger's Change Type is "Added or Modified". I suspect that something in the API is updating the record right after creating it.

There is a "Concurrency Control" feature under the trigger action settings. If you turn this on, you cannot turn it off (not even by deleting the trigger action), so make a backup first and consider your options. What this feature does is make your flow runs execute one after the other, rather than at the same time (in parallel). Set the trigger's concurrency control to 1, and revise the flow to run a query first (a List Rows action) to see if another flow run has already finished the work.

While it can solve the problem of flows running in parallel, I believe it will be a performance bottleneck; I would appreciate it if someone could confirm this. Further, I don't want to do this to very many flows, and I cannot always tell when it will be a problem. Should I just settle for possible bottlenecks and make all of them single-threaded? Nah, I like getting the maximum performance possible.

A better solution:
I created an "Event Tracking" table with an alternate key. If more than one flow run tries to add a row to that table with the same key value, every insert after the first one will fail.

The new strategy is this: using the ID of the triggering record plus the flow name as the key value, insert a row into the Event Tracking table. If the insert succeeds, continue; otherwise, terminate. At the end of the flow, delete the row that was created. You could also use the Event Tracking table to surface an admin dashboard that monitors the health of your system, presents error messages, or highlights long-running processes.

I have noticed that the OOTB Power Automate Automation Center will show errors for actions that fail, even if the action is followed by a "Run After" branch to catch the failure. For that reason, using this strategy could make it look like more failures are happening; if you create your own dashboard, you can get better information. I use Run After all over my flows (because I am used to using Try/Catch in other programming languages), and the Automation Center is not helpful for me. I would like to hear your thoughts on the subject.

Sunday, April 7, 2019

Making PowerBI Easier

I recently learned about a cool tool in XrmToolBox called Power Query M Builder (PQMB) by Mohamed Rasheed (ITLec) and Ulrik "CRM Chart Guy" Carlsson (eLogic LLC), and I love the tool. It lets you generate Power Query code using Dynamics 365 views as your starting point; you can then tweak them using FetchXML Builder to get more complex queries, and then generate a full query that produces the field names using the labels from your CRM metadata. This process is much faster than starting from scratch with the Power Query tool to build up a query to find the right data. Plus, it has the added advantage that you can easily change the connection URL for all the queries from a single place, which is really helpful if you have a solution you move from dev to production.

I left a message on Ulrik's blog about one improvement I would like to see: the ability to save/recover views more easily. For example, I have a report that I know needs several different CRM views or custom FetchXML queries. After I have created all the queries in Power Query and started relating them together in Power BI, I may find that I am missing some columns, so I need to start from scratch on one or more of my views in PQMB.

To save some time, I started saving all my FetchXML queries embedded in comments at the end of the Power Query Advanced Editor. Here is what I do:

  1. While in PQMB, open FetchXML Builder to modify a query.
  2. Copy the FetchXML to the clipboard and paste it at the end of the Power Query (in the Advanced Editor) inside a /* comment */ block.

Now if I have to revise my query at a later date, I can quickly get back to what I had without having to recreate a complex query from scratch.

Saturday, April 6, 2019

How do I hide App navigation in Dynamics 365 CE?

My favorite secret weapon is a Chrome extension called Stylebot. I can change navigation, colors, and even compress whitespace!

After you install it, refresh your screen and open it using the CSS icon in Chrome. You can click on the element you want to change (or hide), and it has friendly buttons to make quick changes. It will save the style changes using the URL of the site. You can export and import the style mods and share them with other users.

Wednesday, February 27, 2019

What is the solution version number used for?

A recent thread of discussion on the CRMUG forums about this topic got me thinking – here is my take on the subject.

For smaller clients with no integration, I think there is value in using the current date as the solution version number, so that when you export it you can easily keep track of which file in your Downloads folder is the right one. A quick acknowledgement to @Gus Gonzalez for making that suggestion. End users can easily comprehend it, and it adds value for them. A solution file can have literally anything in it, and the (typical) version number does nothing to help you figure out what is major/minor/patch or to resolve dependencies. By using Gus's technique, the date of the solution is built into the exported file name, and you avoid having multiple copies of a file called "SolutionName_1_0_0_0.zip" and trying to figure out which one is relevant. I would argue that for the people creating solutions for small(er) organizations, it is not easy to make the distinction between major and minor. Can anyone here define how many fields on a form must move before you cross the threshold from a patch to a minor release, or how many business rules must change before it becomes a major release?


But for a company that has a staff of developers, depends on integration with other systems, and has governance policies, we should consider the benefits of a traditional approach to the version number. For many of my past projects, I have clearly agreed with @Ben Bartle's position because I come at this as a developer. The reason has to do with traditional software development practices: when writing applications that depend on a multitude of libraries (DLLs), keeping track of the version number helps you identify whether an update to a library will significantly impact your application. With a DLL it is easy to define what is major, minor, or a patch based on best practices.

For a consultant that is configuring systems for many clients, I would recommend using the client's name as the solution name, putting the D365 version in the first two positions of the version number, and putting the date in YYMM.DD format in the last two positions (for example, a solution exported from D365 version 9.2 on February 7, 2026 would be 9.2.2602.07). This makes it easy to tell which version of D365 you exported the solution from (which could be significant if the development environment is not in sync with staging and production) and the date it was exported.

To bring this point back into the context of this discussion: when you modify a D365 solution and then promote it to production, you are actually updating the API endpoints that integrations depend on. Consider what happens when you alter the behavior of workflows, deprecate entities or fields, or shorten a field's length: when you promote that solution into production, you run the risk that integration code will break.

Personally, I have been putting the date in the solution's Information/Description field. While this is not as pretty as using the version number as a proxy for a date, it serves its purpose, because you can see that the solution description gets updated in the production system. The only downside with that approach is that you have to either import the solution file to see the description containing the date, or rip apart the zipped solution file and read the XML files.

With respect to keeping track of whether the solution file contains a major/minor/patch change to a system, that is something everyone should be documenting anyway, regardless of whether you go with Gus's or Ben's recommendation. Whether you keep a log of changes or implement a full version control system between development/staging/production, you must communicate your changes to your testing staff and your end users, no matter how small the change is. If users detect a change that you were not in control of, they might think the system is unstable, and they will lose confidence in your system.