I'm Joris "Interface" de Gruyter. Welcome To My

Code Crib

Microsoft Dynamics 365 / Power Platform


Feb 12, 2022 - 20 Years of Dynamics

Filed under: #daxmusings #bizapps

Almost 20 years ago, in July of 2002, Microsoft acquired Dynamics AX. Several months later, in October 2002, I landed my first job and started learning X++. AX 3.0 had just dropped, but the company I worked for in Belgium still had some AX 2.5 implementations going on. Among the unforgettable ones: an implementation that used Oracle as its database, and another that implemented the AX 2.5 “enterprise portal”, which at the time was an ASP site calling X++ classes over a COM connector, with the X++ classes returning strings of HTML.

  Read more...

Nov 22, 2021 - Changes to the Internal Access Modifier in X++

Filed under: #daxmusings #bizapps

In the upcoming April 2022 release (version 10.0.25/PU49), some changes will be introduced to how the internal access modifier works in X++. Although X++ will still not be as strict as C#, the changes at least fix a few issues with how the modifier works. Note that the InternalUseOnly attribute is a different feature altogether, and still only generates warnings.

Before we dive into the changes, let’s understand why and when internal is used. Essentially, marking something as internal allows only code in the same assembly to access it. In X++, an assembly is a package, so only models in that package can access the internal code. The reason to use this isn’t much different from using private methods or members, except that the allowed scope is the whole assembly rather than just the class. In the context of Finance and Operations apps and X++, marking code as internal lets engineers build the code while keeping the option to change its signatures later, without worrying about breaking any code that extends it.
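As a minimal sketch of that rule (the class, method, model, and package names here are invented for illustration):

// In model ModelA, part of package PackageA
internal final class MyInternalHelper
{
    internal static void doWork()
    {
    }
}

// In model ModelB, also part of package PackageA: this compiles,
// because internal code is visible to the whole package (assembly).
public final class SamePackageCaller
{
    public static void call()
    {
        MyInternalHelper::doWork();
    }
}

// In a model belonging to a DIFFERENT package, the same call would not
// compile: MyInternalHelper is not accessible outside PackageA.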


Summary

  • If you make use of the internal access modifier in your own code (declare methods or classes internal) you likely want to review the details below and you may have to fix some issues in your code. Since you’re in full control of your own code, this should not present any problems.
  • For all other code, the only potential issue arises when your code uses INHERITANCE on a Microsoft class and tries to override an internal method and/or change its access modifier.
  • Code marked with the InternalUseOnly attribute still only generates warnings.

Details

I’d like to thank the team, and specifically Laurent, for not only working on these compiler issues but also providing the following detailed examples of the changes.

The issues with internal that were corrected are largely around how inheritance works and what the compiler does or does not allow you to do. Let’s go right into the details. I encourage you to also read Peter Villadsen’s blog post on the matter.

A public class shouldn’t be allowed to extend an internal class

internal class ClassA {}
public class ClassB extends ClassA {} // ClassB is in the same model as ClassA

This scenario was changed from a warning to an error (the message remains the same: InconsistentAccessibilityInheritance: Base class 'ClassA' is less accessible than class 'ClassB').

Since customer code cannot extend our internal classes to begin with, this will not impact customer code that extends Microsoft code. Customer code will be impacted if it declares its own internal class and publicly derives from it.

An internal class couldn’t be used as an internal member, and the error message was misleading

internal class ClassA {}
public class ClassB // ClassB is in the same model as ClassA
{
        internal ClassA a;
        public ClassA b;
        protected ClassA c;
        private ClassA d;
}

This scenario used to result in the following errors:
Error: InconsistentFieldAccessibility: The type 'ClassA' of the member variable 'a' is internal to Model 'MyModel' and is not accessible from model 'MyModel'.
Error: InconsistentFieldAccessibility: The type 'ClassA' of the member variable 'b' is internal to Model 'MyModel' and is not accessible from model 'MyModel'.
Error: InconsistentFieldAccessibility: The type 'ClassA' of the member variable 'c' is internal to Model 'MyModel' and is not accessible from model 'MyModel'.

With the change, this results in the following errors:
Error: InconsistentAccessibility: field type 'ClassA' is less accessible than field 'b.ClassB'
Error: InconsistentAccessibility: field type 'ClassA' is less accessible than field 'c.ClassB'

This is mostly LESS restrictive than before. Customers COULD be impacted if their code exploits the compiler gap that allowed private use of non-accessible internal types.
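To make that gap concrete, here is a hypothetical sketch (all names invented) of the kind of code that used to slip through:

// Shipped in another package, e.g. by Microsoft:
internal final class TheirInternalClass {}

// In your own package:
public class MyConsumer
{
    // This private field used to compile even though TheirInternalClass is
    // not accessible from this package; it is now diagnosed as an error.
    private TheirInternalClass theirs;
}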

Overriding an internal method in the same package and increasing visibility was not diagnosed as an error

class Base { internal void method() {} }

// in the same model as Base
class LocalA extends Base { public void method() {} }
class LocalB extends Base { internal void method() {} }
class LocalC extends Base { protected void method() {} }
class LocalD extends Base { private void method() {} }

This used to result in the following errors:
LocalC.xpp(3,5): OverrideMoreRestrictive: OverrideMoreRestrictive: Method 'LocalC.method' cannot have a more restrictive visibility than the method 'Base.method' which it overrides.
LocalD.xpp(3,5): OverrideMoreRestrictive: OverrideMoreRestrictive: Method 'LocalD.method' cannot have a more restrictive visibility than the method 'Base.method' which it overrides.

This will now result in the following errors:
LocalA.xpp(3,5): CannotChangeAccess: 'LocalA.method' cannot change access modifiers when overriding inherited method 'Base.method'
LocalC.xpp(3,5): CannotChangeAccess: 'LocalC.method' cannot change access modifiers when overriding inherited method 'Base.method'
LocalD.xpp(3,5): CannotChangeAccess: 'LocalD.method' cannot change access modifiers when overriding inherited method 'Base.method'

Customer code will only be impacted if, within its own model, it has an internal method and overrides it publicly.

The equivalent scenario, with the derived classes in a different model than Base:

class Base { internal void method() {} }

// in a different model than Base
class DerivedA extends Base { public void method() {} }
class DerivedB extends Base { internal void method() {} }
class DerivedC extends Base { protected void method() {} }
class DerivedD extends Base { private void method() {} }

This used to result in the following errors:
DerivedB.xpp(3,5): InvalidOverrideIntenalMethod: Method 'method' in class 'Base' is internal and is not allowed to be overriden.
DerivedC.xpp(3,5): OverrideMoreRestrictive: OverrideMoreRestrictive: Method 'DerivedC.method' cannot have a more restrictive visibility than the method 'Base.method' which it overrides.
DerivedD.xpp(3,5): OverrideMoreRestrictive: OverrideMoreRestrictive: Method 'DerivedD.method' cannot have a more restrictive visibility than the method 'Base.method' which it overrides.

It is now diagnosed as:
DerivedA.xpp(3,5): CannotChangeAccess: 'DerivedA.method' cannot change access modifiers when overriding inherited method 'Base.method'
DerivedB.xpp(3,5): InvalidOverrideIntenalMethod: Method 'method' in class 'Base' is internal and is not allowed to be overriden.
DerivedC.xpp(3,5): CannotChangeAccess: 'DerivedC.method' cannot change access modifiers when overriding inherited method 'Base.method'
DerivedD.xpp(3,5): CannotChangeAccess: 'DerivedD.method' cannot change access modifiers when overriding inherited method 'Base.method'

Customer code will only be impacted if it is exploiting the existing gap in the compiler where publicly overriding a non-accessible internal method wasn’t diagnosed.

  Read more...

Feb 27, 2021 - Use The New Packaging in the Legacy Build Pipeline

Filed under: #daxmusings #bizapps

The legacy pipeline from the build VM has its own PowerShell script that generates the packages. However, it always puts the F&O platform version into the package file name, which can make it more difficult to use release pipelines or to include ISV licenses in your packages: the version number changes with each update, requiring you to update your pipeline settings (and to find out the actual build number to use). Continue reading below or watch the YouTube video to learn how to swap the packaging step from the legacy pipeline for the Azure DevOps task, which lets you specify your own name for the deployable package zip file. You can find the official documentation on the packaging step here.

  Read more...

Jan 27, 2021 - ISV Licenses in Packages

Filed under: #daxmusings #bizapps

ISV licenses for Dynamics 365 F&O can only be applied using deployable packages. There are ISV license packages that contain only a license, and there are combined packages that contain both the binaries and the license. But now, with all-in-one packages on self-service environments, you can only apply the license as part of an all-in-one package. So what are your options? Check out my YouTube video and/or read on for more details.

  Read more...

Jan 23, 2021 - Updating The Legacy Pipeline for Visual Studio 2017

Filed under: #daxmusings #bizapps

With the upcoming April 2021 release, support for Visual Studio 2015 will be dropped. If you’re building your code using a build VM deployed from LCS, you’re using the legacy pipeline. You will have to manually update your build pipeline tasks to use the new version. The steps are fairly simple and outlined in this official docs article. I have a quick video on YouTube to walk you through this as well. There is one little flag that could trip you up, however.

  Read more...

Jan 18, 2021 - Including ISV Binaries in Your Package

Filed under: #daxmusings #bizapps

Many ISVs supply their Dynamics 365 Finance / Supply Chain solutions in a deployable package, which only contains binaries. With the current enforcement (“all-in-one package”) of a long-standing best practice to deploy all code together all the time, some customers are only now faced with figuring out how to “repackage” an ISV’s binaries into their own package. In this post I will outline a few gotchas in addition to the official documentation, for both the legacy build pipeline and the new build pipeline. You can also watch a quick overview video I made here on YouTube.

  Read more...

Oct 31, 2019 - Pushing, Dragging or Pulling an Industry Forward

Filed under: #daxmusings #bizapps

Quite a few years ago, in my previous job when I was still an MVP, I did an online webinar for the AXUG (in 2014) called “Putting Software Engineering back in Dynamics AX”. Admittedly it was somewhat of a rant / soapbox type of talk. I guess “food for thought” would be a more optimistic characterization. I did try to inject some humor into the otherwise ranty slides by adding some graphics. At the time we were building out our X++ development team, we were heavily invested in TFS and automation, and I was very keen on sharing our lightbulb moments and those “why did we only start doing this now” revelations.

Fast forward 5 years to a new generation of technology and a shift to cloud. In fairness, many more people are engaged in some of these topics now because the product finally has features out of the box to do builds, use source control without tons of headaches and setup, etc. But contrary to the advice on one of the original slides from 2014 that said “Bring software engineering into AX, not the opposite” - it sort of feels that is exactly what has happened: people projecting their AX processes onto software engineering processes, sometimes ending up with procedures and technology for their own sake that don’t actually solve any problems, and sometimes even create more. But, they can say they ticked another checkbox on the list. I have stories of customers with messed-up code in production because someone set up branching after being told that’s a good thing to have, yet nobody knew what that meant or how to use it. So code was being checked into branches left and right, merged in whichever direction. Chaos. A perfect example of implementing something without a good reason for, or understanding of, doing so. On the flip side, we have customers calling us up because they “redeployed” their dev VM and want to know how they can get a clean copy of their code from production back into their VM. Now, part of that is legacy thinking and not understanding the technology change. But honestly, that was never a good thing in older versions either.

Anyway, that brings us to my topic du jour. As you may or may not have heard and read, we’re working on elevating the developer tools further. We’ll become more standard Visual Studio, standard Azure DevOps. This is all great news, as it will allow X++ developers to use more of the existing tools out there that work with any standard .NET language. The problem is not that we’ll be forcing people to use advanced tools they don’t know how to use; they can still choose not to use source control or build automation. I’m more worried about the people using all these new tools without understanding them. What if in the future we start supporting Git? Will our support team be overwhelmed with people stuck on branching, merging, PRs, rebasing, and all the great but complex features of distributed source control? We’ve never dealt with situations where we “support” the technology (i.e. the tools are compatible) but won’t support the use of that technology (sorry your production code got messed up, go get some training on Git branching and good luck recovering your production environment). In the history of our product, we’ve never had to draw a big line between technical compatibility and supporting the usage of it. But we will have to. How about other areas, like Power BI, Power Apps, etc.? Yes, they are supported and will be integrated further, but will Dynamics 365 support answer your usage questions?

I’ve had frank discussions with developers (that I personally know), where I basically tell them “the fact you’re asking me these questions tells me you shouldn’t be doing this”. But that’s not an attitude we can broadly apply to our customer base.

So I ask YOU, dear audience. Where and how can we draw a line of supportability?

  Read more...

Oct 11, 2019 - Debugging woes with symbols: bug or feature?

Filed under: #daxmusings #bizapps

I’ve struggled with this myself for a while during the betas of “AX7”. Sometimes symbols aren’t loaded for your code and your breakpoints aren’t hit. It’s clear that the Dynamics 365 option “Only load symbols for your solution” has something to do with it, but there is still strange behavior. It took a few years at Microsoft before someone explained the exact logic to me. Since I’ve been sitting on this knowledge for a while and have recently run into some customer calls where debugging trouble came up, I realized it’s overdue for me to share it.

Summary: it’s in fact a feature, not a bug. But I would like to see this behavior changed, assuming we don’t introduce performance regressions.

There’s a piece of compiler background information that is not well understood which is actually at the root of this problem. We all know there are two ways to compile your code: from the Dynamics 365 “Full build” menu, or from the project. The project build, if you right-click on your project, has two options: build and rebuild. Now, the “rebuild” feature does NOT do the same thing as the full build menu - and that is the crux of the issue here. Both build and rebuild from the project only compile the objects in your project. Rebuild will force a build of everything in your project but not the whole package it belongs to. To do this, our Visual Studio tools and the compiler make good use of netmodules for .NET assemblies. Think of a netmodule as a sub-assembly of an assembly, I guess.

Now, the point is this: the “load symbols only for your solution” option only loads the symbols of the binaries for the objects in your project - aka the netmodules. A full build from the Dynamics 365 menu does not produce those netmodules; it only produces the full binary of the package, so there are no symbols scoped to just your project’s objects. As a result, after doing a full build and debugging with the “symbols for solution only” option turned on, your breakpoints will NOT be hit because those symbols never load.

I think we should change this option to work more like “load symbols for the packages containing your solution’s objects”, or something to that effect. We’ll have to see if that affects performance for large packages in a significant way, since it would now load all the symbols for those packages. Performance is ultimately why this feature was introduced (see? it’s a feature!). Worst case, we may need a new option so you can choose between the old behavior and the more inclusive behavior…

I’d love to hear your thoughts on this, here or on Twitter @JorisdG.

  Read more...

Mar 25, 2019 - Repost: Pointing Build Definitions to Specific VMs (agents)

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the agent computer name post here AS-IS, until we can get better official documentation. Original post: October 20, 2017


We’ve recently collaborated with some customers who are upgrading from previous releases of Dynamics 365 to the recent July 2017 application. These customers typically have to support their existing live environment on the older application, but also produce builds on the newer application (with newer platform).

Currently the build agent is not aware of the application version available on the VM. As a result, Visual Studio Team Services (VSTS) will seemingly randomly pick one VM (agent) or the other to run the build on. Obviously this presents a challenge if VSTS compiles your code on the wrong VM - and thus the wrong version of the application and platform. We are reviewing what would be the best way to support version selection, but in the meantime there is an easy way to tie a build definition to a specific VM.

First, in LCS go to your build environment and on the environment details page, find the VM Name of the build machine. In this particular example below, the VM Name is “DevStuffBld-1”.

Next, go to VSTS and find the build definition you wish to change. Note that if you have more than one version you’re building for, you will want more than one build definition - and to point each to its respective VM. To make sure a build definition points to a specific VM, edit the build definition and find the Options tab. Under Options you will find a section of parameters called Demands. Demands are matched against specific values set up on the agent in VSTS (you can do this in the Agent Queue settings); the agent also picks up all environment variables on the VM it runs on. You will notice that all build definitions already check for a variable called DynamicsSDK to be present, to ensure the build definition runs only on agents where we have set this “flag”, if you will. Since each VM already has an environment variable called COMPUTERNAME, we can add a demand for COMPUTERNAME to equal the name of our build VM. So for the example build VM above, we can edit our build definition and add the following demand by clicking +Add:
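COMPUTERNAME equals DevStuffBld-1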

Save your build definition and from now on your build will always run on the right VM/agent.

  Read more...

Feb 19, 2019 - Repost: Enabling X++ Code Coverage in Visual Studio and Automated Build

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the code coverage blog post here AS-IS (other than wrong capitalization in the XML code), until we can get better official documentation. Note that after this was published, I received a mixed response from developers: for many it worked, for others it did not work at all no matter what they tried… I have not been able to spend more time investigating why it doesn’t work for some people. Original post: March 28, 2018


To enable code coverage for X++ code in your test automation, a few things have to be set up. Typically, more tweaking is needed since you will likely be using some platform/foundation/appsuite objects and code, and don’t want code coverage to show up for those. Additionally, the X++ compiler generates some extra IL to support certain features, which can be ignored. Unfortunately there is one feature that may throw off your results; we’ll talk about this further down.

One important note: Code Coverage is a feature of Visual Studio Enterprise and is not available in lower SKUs. See this comparison chart under Testing Tools | Code Coverage.

To get started, you can download the sample RunSettings file here: CodeCoverage. You will need to update this file to include your own packages (“modules” in IL terminology). At the top of the file, you will find the following XML:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackageName.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackageNameTest*.*</ModulePath>
    </Exclude>
</ModulePaths>

You will need to replace “MyPackageName” with the name of your package. You can add multiple lines here and use wildcards, of course. You could add Dynamics.AX.* but that would then include any and all packages under test (including Application Suite, for example). This example also shows how to exclude a package explicitly - in this case, the test package itself. If you have multiple packages to include and exclude, you would enter them this way:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackage1.*</ModulePath>
        <ModulePath>.*MyPackage2.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackageName1Test*.*</ModulePath>
        <ModulePath>.*MyPackageName2Test*.*</ModulePath>
    </Exclude>
</ModulePaths>

To enable code coverage in Visual Studio, open the Test menu, select Test Settings, and then Select Test Settings File to pick your settings file. You can then run code coverage from the Test > Analyze Code Coverage menu and select All Tests or Selected Tests (the latter uses your selection in the Test Explorer window). You can open the code coverage results and double-click any of the lines, which will open the code and highlight the coverage.

To enable code coverage in the automated build, edit your build definition. Click on the Execute Tests task, and find the Run Settings File parameter. If you have a generic run settings file, you can place it in the C:\DynamicsSDK folder on the build VM, and point to it here (full path). Optionally, if you have a settings file specific for certain packages or build definitions, you can be more flexible here. For example, if the run settings file is in source control in the Metadata folder, you can point this argument to “$(Build.SourcesDirectory)\Metadata\MySettings.runsettings”.

The biggest issue with this is the extra IL that our compiler generates, namely the pre- and post-handler code. This is placed inside every method and is thus evaluated by code coverage, even though your X++ source doesn’t contain it. As such, most methods will never get 100% coverage. If a method has the [Hookable(false)] attribute (which makes the X++ compiler not add the extra IL), or if the method actually has pre/post handlers, the coverage will be fine. Note that the Chain-of-Command logic the compiler generates is nicely filtered out.
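For reference, a minimal sketch of how that attribute is applied (the class and method names are mine):

public class CoverageSample
{
    // The compiler does not emit the pre/post handler IL for this method,
    // so its coverage numbers reflect only the X++ source you wrote.
    [Hookable(false)]
    public void computeTotals()
    {
    }
}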

  Read more...

 
