Thursday, October 31, 2019

Pushing, Dragging or Pulling an Industry Forward

Quite a few years ago (2014), in my previous job when I was still an MVP, I did an online webinar for the AXUG called "Putting Software Engineering back in Dynamics AX". Admittedly it was somewhat of a rant / soapbox type of talk. I guess "food for thought" would be a more optimistic characterization. I did try to inject some humor into the otherwise ranty slides by adding some graphics. At the time we were building out our X++ development team and were heavily invested in TFS and automation, and I was very keen on sharing our lightbulb moments and those "why did we only start doing this now" revelations.

Fast forward 5 years to a new generation of technology and a shift to the cloud. In fairness, many more people are engaged in some of these topics now because the product finally has features out of the box to do builds, use source control without tons of headaches and setup, etc. But contrary to the advice on one of the original slides from 2014 that said "Bring software engineering into AX, not the opposite" - it sort of feels the opposite is exactly what has happened: people projecting their AX processes onto software engineering processes. Sometimes they end up with procedures and technology for the sake of it, not actually solving any problems, and sometimes even creating more. But they can say they ticked another checkbox on the list. I have stories of customers with messed-up code in production because someone set up branching after being told that's a good thing to have, yet nobody knew what that meant or how to use it. So code was being checked into branches left and right, and merged in whichever direction. Chaos. A perfect example of implementing something without a good reason or understanding. On the flip side, we have customers calling us up because they "redeployed" their dev VM and want to know how they can get a clean copy of their code from production back into their VM. Now, part of that is legacy thinking and not understanding the technology change. But honestly, that was never a good thing in older versions either.

Anyway, that brings us to my topic du jour. As you may have heard or read, we're working on elevating the developer tools further: we'll move to more standard Visual Studio and standard Azure DevOps. This is all great news, as it will allow X++ developers to use more of the existing tools out there built for any standard .NET language. The problem is not that we'll be forcing people to use advanced tools they don't know how to use - they can still choose not to use source control or build automation. I'm more worried about the people using all these new tools without understanding them. What if in the future we start supporting Git? Will our support team be overwhelmed with people stuck on branching, merging, PRs, rebasing, and all the great but complex features of decentralized source control? We've never dealt with situations where we "support" the technology (i.e. the tools are compatible) but won't support the use of that technology (sorry your production code got messed up, go get some training on Git branching, and good luck recovering your production environment). In the history of our product, we've never drawn a big line between technical compatibility and supported usage. But we will have to. How about other areas, like PowerBI, PowerApps, etc.? Yes, they are supported and will be integrated further, but will Dynamics 365 support answer your usage questions?

I've had frank discussions with developers I personally know, where I basically tell them: "the fact that you're asking me these questions tells me you shouldn't be doing this". But that's not an attitude we can broadly apply to our customer base.

So I ask YOU, dear audience. Where and how can we draw a line of supportability?

Friday, October 11, 2019

Debugging woes with symbols: bug or feature?

I've struggled with this myself for a while during the betas of "AX7". Sometimes, symbols aren't loaded for your code and your breakpoints aren't hit. It's clear that the Dynamics 365 option "Only load symbols for your solution" has something to do with it, but there's still strange behavior. It took a few years at Microsoft before someone explained the exact logic to me. Since I've been sitting on this knowledge for a while and have recently run into some customer calls where debugging trouble was brought up, I realized it's overdue for me to share it.

Summary: it's in fact a feature, not a bug. But I would like to see this behavior changed, assuming we don't introduce performance regressions.

There's a piece of compiler background information that is not well understood, and it's actually at the root of this problem. We all know there are two ways to compile your code: from the Dynamics 365 "Full build" menu, or from the project. The project build, when you right-click on your project, has two options: build and rebuild. Now, the "rebuild" option does NOT do the same thing as the full build menu - and that is the crux of the issue here. Both build and rebuild from the project only compile the objects in your project; rebuild merely forces a build of everything in your project, not the whole package it belongs to. To make this work, our Visual Studio tools and the compiler make good use of netmodules for .NET assemblies. Think of a netmodule as a sub-assembly of an assembly, I guess.

Now, the point is this. The "load symbols only for your solution" option only loads the symbols of the binaries for the objects in your project - i.e. the netmodules. But when you do a full build from the Dynamics 365 menu, no binaries exist for just the objects in your project (only the full binary of the package). As a result, after doing a full build and debugging with the "symbols for solution only" option turned on, your breakpoints will NOT be hit, because the symbols were never loaded.

I think we should change this option to work more like "load symbols for the packages containing your solution's objects", or something to that effect. We'll have to see whether that significantly affects performance for large packages, since it would now load all the symbols for those packages - which is ultimately why this feature was introduced in the first place (see? it's a feature!). Worst case, we may need a new option so you can choose between the old behavior and the more inclusive one...

I'd love to hear your thoughts on this, here or on Twitter @JorisdG.

Monday, March 25, 2019

Repost: Pointing Build Definitions to Specific VMs (agents)

Since the AXDEVALM blog has been removed from MSDN, I will repost the agent computer name post here AS-IS, until we can get better official documentation.
Original post: October 20, 2017

----

We've recently collaborated with some customers who are upgrading from previous releases of Dynamics 365 to the recent July 2017 application. These customers typically have to support their existing live environment on the older application, but also produce builds on the newer application (with newer platform).

Currently the build agent is not aware of the application version available on the VM. As a result, Visual Studio Team Services (VSTS) will seemingly randomly pick one or the other VM (agent) to run the build on. Obviously this presents a challenge if VSTS compiles your code on the wrong VM - and thus the wrong version of application and platform. We are reviewing the best way to support version selection, but in the meantime there is an easy way to tie a build definition to a specific VM.

First, in LCS go to your build environment and on the environment details page, find the VM Name of the build machine. In this particular example below, the VM Name is "DevStuffBld-1".

[Screenshot: LCS environment details page showing the VM Name "DevStuffBld-1"]

Next, go to VSTS and find the build definition you wish to change. Note that if you're building for more than one version, you will want more than one build definition - and point each to its respective VM. To tie a build definition to a specific VM, edit the build definition and find the Options tab. Under Options you will find a section of parameters called Demands. Demands are matched against agent capabilities, which are either specific values set up on the agent in VSTS (you can do this in the Agent Queues settings) or the environment variables the agent picks up from the VM it runs on. You will notice that all build definitions already check for a variable called DynamicsSDK to be present, to ensure the build definition runs only on agents where we have set this "flag", if you will. Since each VM already has an environment variable called COMPUTERNAME, we can add a demand for COMPUTERNAME to equal the name of our build VM. So for the example build VM from above, we can edit our build definition and add the following demand by clicking +Add:

[Screenshot: the Demands list with the added entry "COMPUTERNAME equals DevStuffBld-1"]

Save your build definition and from now on your build will always run on the right VM/agent.

Tuesday, February 19, 2019

Repost: Enabling X++ Code Coverage in Visual Studio and Automated Build

Since the AXDEVALM blog has been removed from MSDN, I will repost the code coverage blog post here AS-IS (other than fixing wrong capitalization in the XML code), until we can get better official documentation. Note that after this was published, I received a mixed response from developers: for many it worked, for others it did not work at all, no matter what they tried... I have not been able to spend more time investigating why it doesn't work for some people.
Original post: March 28, 2018

----

To enable code coverage for X++ code in your test automation, a few things have to be set up. Typically some extra tweaking is needed, since you will likely be using some platform/foundation/appsuite objects and code, and don't want code coverage to show up for those. Additionally, the X++ compiler generates some extra IL to support certain features, which can be ignored. Unfortunately there is one feature that may throw off your results; we'll talk about this further down.

One important note: Code Coverage is a feature of Visual Studio Enterprise and is not available in lower SKUs. See this comparison chart under Testing Tools | Code Coverage.

To get started, you can download the sample RunSettings file here: CodeCoverage. You will need to update this file to include your own packages (= "modules" in IL terminology). At the top of the file, you will find the following XML:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackageName.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackageNameTest.*</ModulePath>
    </Exclude>
</ModulePaths>

You will need to replace "MyPackageName" with the name of your package. You can add multiple lines here and use wildcards, of course. You could add Dynamics.AX.*, but that would include any and all packages under test (including Application Suite, for example). This example also shows how to exclude a package explicitly - in this case the test package itself. If you have multiple packages to include and exclude, you would enter them this way:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackage1.*</ModulePath>
        <ModulePath>.*MyPackage2.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackage1Test.*</ModulePath>
        <ModulePath>.*MyPackage2Test.*</ModulePath>
    </Exclude>
</ModulePaths>

To enable code coverage in Visual Studio, open the Test menu, select Test Settings, and click Select Test Settings File. Select your settings file. You can then run code coverage from the menu Test > Analyze Code Coverage, selecting All Tests or Selected Tests (this is your selection in the Test Explorer window). You can open the code coverage results and double-click any of the lines, which will open the code and highlight the coverage.

To enable code coverage in the automated build, edit your build definition. Click on the Execute Tests task, and find the Run Settings File parameter. If you have a generic run settings file, you can place it in the C:\DynamicsSDK folder on the build VM, and point to it here (full path). Optionally, if you have a settings file specific for certain packages or build definitions, you can be more flexible here. For example, if the run settings file is in source control in the Metadata folder, you can point this argument to "$(Build.SourcesDirectory)\Metadata\MySettings.runsettings".

The biggest issue with this is the extra IL code that our compiler generates, namely the pre- and post-handler code. This is placed inside every method, and is thus evaluated by code coverage even though your X++ source doesn't contain it. As such, most methods will never get 100% coverage. If a method has the [Hookable(false)] attribute (which tells the X++ compiler not to add the extra IL), or if the method actually has pre/post handlers, the coverage will be correct. Note that the Chain-of-Command logic the compiler generates is nicely filtered out.
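
To illustrate, here's a minimal sketch of that attribute in use (the method itself is hypothetical; only the attribute is the point):

// The Hookable(false) attribute tells the X++ compiler not to emit the
// pre/post handler IL, so code coverage numbers for this method are accurate.
[Hookable(false)]
public void calculateLineTotals()
{
    // ... method logic under test ...
}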

Friday, January 18, 2019

Azure DevOps Release Pipeline

Welcome to 2019, the year of the X++ developer!

Today marks a great day with the release of the first Azure DevOps task for D365 FinOps users. Since documentation is still underway, I wanted to supplement the official blog post with some additional info to help guide you through the setup.
The extension can be installed from here: https://marketplace.visualstudio.com/items?itemName=Dyn365FinOps.dynamics365-finops-tools

The LCS Connection
- If your LCS project is hosted in the EU, you will need to change the "Lifecycle Services API Endpoint". By default it points to https://lcsapi.lcs.dynamics.com, but if you log into LCS and the URL for your project shows "https://eu.lcs.dynamics.com", you will need to change this API URL to also include EU, like so: https://lcsapi.eu.lcs.dynamics.com
- App registration: I encourage you to use the preview setup experience ("App registrations (Preview)"). Add a "new registration" for a native application; I selected "accounts in this organizational directory only (MYAAD)". For the redirect URI you can put anything for a native application, typically http://localhost, and in the preview experience select "Public client (mobile & desktop)" to indicate this is a native application.

Thanks to Marco Scotoni for pointing out that to find the API to give permissions to, you just go to the "APIs my organization uses" tab.

The Task
- Create the new connection using the app registration as described above
- LCS Project Id is the "number" of your project. You can see this in the URL when you go to your project on the LCS website, for example https://lcs.dynamics.com/V2/ProjectDashboard/1234567. I'm hoping this can eventually be made into a dropdown selection.
- File to upload... The build currently produces a ZIP file with a name that contains the actual build number, and that is not configurable there (you'd have to edit the PowerShell for that). Until that is changed, there's an easy way to deal with it. Since your release pipeline has the build pipeline's output as an artifact, you can actually grab the build's build number. So, use the BROWSE button to select the build drop artifact, but then replace the build number with the $(Build.BuildNumber) variable. For example, on my test project this resulted in the following file path: $(System.DefaultWorkingDirectory)/BuildDev/Packages/AXDeployableRuntime_7.0.4641.16233_$(Build.BuildNumber).zip
If your AX build is not your primary artifact, you can use the artifact alias, like $(Build.MyAlias.BuildNumber). You can find this info in the release pipeline variables documentation.
- LCS Asset Name and Description are optional, but I would recommend setting at least the name. For example, I set the following:
LCS Asset Name: $(Release.ReleaseName)
LCS Asset Description: Uploaded from Azure DevOps from build $(Build.BuildNumber)
- If using a hosted agent, make sure to use the latest host ("Hosted VS2017").

Happy uploading!!

Monday, August 13, 2018

Query Functions: SysQueryRangeUtil Extensions

Now that overlayering is a thing of the past, how does one add methods to SysQueryRangeUtil as explained in my old post from back in 2013?

Simple. Create your own class and name it whatever you want. Add a static method as described in the old post. The only difference is that you put an attribute on top of it indicating that it's a query range utility method. So the method used in the old post would now look like this:

[QueryRangeFunctionAttribute()] // registers this method as a query range function
public static str customerTest(int _choice = 1)
{
    AccountNum accountNum;
    
    // Return a different customer account depending on the choice parameter
    switch(_choice)
    {
        case 1:
            accountNum = '1101';
            break;
        case 2:
            accountNum = '1102';
            break;
    }
    
    return accountNum;
}
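
You can then use the function in a query range just as before. A quick sketch of what that looks like in code (using the method above; the table and field are just an example):

// Wrap the function call in parentheses in the range value; the kernel then
// evaluates customerTest(2) instead of filtering on the literal text.
Query query = new Query();
QueryBuildDataSource custDatasource = query.addDataSource(tableNum(CustTable));
custDatasource.addRange(fieldNum(CustTable, AccountNum)).value('(customerTest(2))');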


If you're looking for an example in the standard code, you can find class "SysQueryRangeUtilDMF" in the AOT.

Wednesday, June 20, 2018

Moving to Extensions is Changing Your Mindset

There's a lot of change that has come with AX7 (Dynamics 365 for Finance and Operations), ranging from plentiful technical changes to the way implementations are being run. Any large change is usually disruptive to existing users of the product, and it takes time to learn the changes and get used to them. There have been plenty of changes in some major releases of AX, but one type of change has now come around that we haven't had to deal with before: rethinking how we design customizations.

Now, I'm not talking about technology or code, but rather the DESIGN of customizations. As I've stated before, the extension paradigms in AX7 weren't invented for the sake of new technology, but as an enabler of quicker and easier updates and upgrades. What's largely overlooked in discussions is that in many cases this requires not just using extensions in your code, but also designing your customization to allow for easy updates. As a result, some customizations cannot be ported from over-layering to extensions just like that. But that doesn't mean you can't give the user the easy button she was asking for.
Now that version 8.0 of the Application is released, the era of NO over-layering is upon us. To illustrate the points above, I'd like to share an example of a customization I dealt with at a customer who came from AX 2012 with plenty of existing code, much of it "intrusive" (as in, not easily upgradable because it changes existing code). When the platform packages were originally sealed in Platform Update 2, this particular customization posed a problem for this customer. Rather than being just one customer asking for a specific hook point (delegates, etc.) needed for one customization, we went back to the drawing board to discuss the original requirement and see if we could reimplement it in a "softer" way.

Here's the customization. The customer in question sometimes has large purchase orders, and department heads need to approve the specific lines that apply to their departments. They typically filter the lines to check what applies to them, and then just want to approve the whole lot with a click of a button as opposed to each line individually - straight from the purchase order screen where they're reviewing all the details. In AX 2012, a customization was made to the workflow framework (red flag) that adds an option to the approval ribbon: instead of just cancel/reject/approve, a new option was added to approve all lines. It would then approve all the lines in the PO assigned to the approving user. The framework is locked in the platform, so this change could not stay and couldn't be done with extensions...
Although perhaps not as nice as the original, the solution was relatively simple. Instead of changing the framework itself, we made a customization on the approval step: when the user approves a line, we can prompt her that there are other lines and ask if she wants to approve them all. This has several clear advantages: no more framework change (which would have to be merged/changed on the next upgrade), plus we use the public API to approve an item (as opposed to adding logic inside the approvals themselves). So when the framework changes its logic or adds new features, our customization will just take advantage of those automatically.
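
To give an idea of the shape of such a solution, here's a rough sketch. All class and method names below are hypothetical - the actual approval step class and workflow API calls will differ - but it shows the pattern of wrapping the approval action rather than changing the framework:

// Hypothetical Chain-of-Command extension of a purchase line approval step.
[ExtensionOf(classStr(PurchLineApprovalManager))] // hypothetical class being extended
final class MyPurchLineApproval_Extension
{
    public void approveLine(PurchLine _purchLine)
    {
        next approveLine(_purchLine); // run the standard single-line approval first

        if (Box::yesNo("Approve all other lines assigned to you on this order?", DialogButton::No) == DialogButton::Yes)
        {
            this.approveRemainingLines(_purchLine);
        }
    }

    private void approveRemainingLines(PurchLine _purchLine)
    {
        // Loop the user's other pending lines and call the same public
        // approval API for each one - details omitted in this sketch.
    }
}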

This goes back to a principle I always like to adhere to in AX customizations as much as possible: don't change the process, but automate existing processes. When a request comes in, I like to ask the simple question: how would you do this manually, regardless of how convoluted that process would be? For example, the customer needs an extra financial transaction to happen somewhere in existing logic. Question: how would you do this manually? It makes them think about the process. Typically they would create some journal somewhere with references to the original transaction. OK, so can we now just automate creating/posting that journal and make sure it can be tied back? This as opposed to creating individual extra ledger transactions inside the original process, which would be very intrusive, error-prone, and would have to be reviewed with each upgrade to make sure we're still in line with the original code or any changes to the transaction frameworks.

I realize there will be examples where even this doesn't apply. But I challenge you to ask the question whether that means the customization is really a good idea at all, and how it would be impacted if the Application Suite changed some of the details of the implementation. Customers upgrading from previous releases will face these dilemmas for sure, but now is the time to rethink the design, or perhaps even question the existence of the customization entirely.
In other cases, some of these intrusive customizations are done to correct strange or incorrect behavior in the application. Most of us have been there: some XPO sitting on a local share that you can reuse at every customer. And therein lies another mindset change: please file support requests! Don't ask for hook points so you can correct wrong behavior; rather, ask Microsoft to fix the behavior! I realize and know from experience that the "by design" answer gets very tiring very quickly, but things are changing rapidly and this is one way everyone can improve the product and reduce friction. Your fellow AX users will thank you, and your organizations/customers will thank you on the next upgrade.