Thursday, January 9, 2014

How We Manage Development - Automated Builds

It's been a while so it's about time I continue the "How We Manage Development" articles. If you missed it, I wrote about our architecture of TFS and Dev machines and how we organize TFS projects. Although we continuously adjust our methods and some of the details in those posts have changed, the essence still holds and I'm hoping you'll find those articles a good read.

In this post I will focus on our build process and some specific issues I'm sure anyone who's tried to do automated builds has encountered at some point.
To get started, I need to explain that we have two different build workflows. One is for test environments, where we need to update an existing environment. The other is for models that need to be deployed to customers. The difference is that for a test environment you can't remove the existing code first, since that would result in data loss. The release workflow does remove the existing code, which is the equivalent of the "clean" action that every build process should do. Ultimately, the test build produces a model store export file (we don't actually build in the test environment; we build somewhere else and then deploy the new model store), while the release build produces a model file.

Our current build setup has changed from what we used to have. We used to have a separate (dedicated) AOS on each customer's development VM that was used just for builds (we still use this process for AX 2009 environments). Today, for AX 2012 environments, we have just two dedicated build machines: one for 2012 RTM and one for 2012 R2. Yes, we support clients on all sorts of update levels, but consider that for compiles the only thing that matters is the code. So we always build with the latest kernel version (for example, R2 CU7), but we use the exact application patch level for each customer (for example, R2 CU5 with some custom hotfixes). To support this, the first step of our build process is to restore the correct database for the version we need to build (more on this later).

Now, since we are using the latest kernel version, if we were to export a model store or model file we would have a version issue. So, we keep a repository of AXUTIL versions for the different kernel versions of AX, and the build uses the matching axutil version when it exports the model or model store. This sounds like a hack, and I guess it sort of is, but it has worked perfectly so far. If we ever run into a CU that is somehow not compatible, we'll have to set up a different build server for that specific CU going forward. Again, we have all levels of CUs across our customers and so far we haven't had any compatibility problems. And the nice thing is, we can compile our <CU7 clients using the newer CU7 axbuild process ;-)

So what are the steps in our build process? Considering we are using the same physical machine and the same AOS instance for multiple clients on multiple versions with multiple sets of code, we have some precautions and failsafes in place.

1. Setup the AOS to use the right database.
We used to flip the AOS configuration to point to a different database. We've changed this step now to just restore the database we need. This has the advantage of not needing the database already on the build machine's SQL server (meaning we can setup new build machines and they'll just pick up databases from a shared drive to restore). And it also saves space on the local SQL server on the build machines since we keep overwriting the same DB.
This also has an advantage that we don't run into any issues removing code first, as the database we're restoring will be in working order. Sometimes removing models prior to starting the build can cause issues with synchronize or other things.
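As a rough illustration of this restore step (not our actual tooling), a small Python helper could compose and run the restore through `sqlcmd`. The server, database, backup path, and logical file names here are all hypothetical:

```python
# Hypothetical sketch of step 1: restore the version-specific database
# backup over the build database before each build.
import subprocess

def build_restore_sql(db_name, backup_file, data_dir):
    """Compose T-SQL that restores a model store backup over the build DB.
    The logical file names (<db>_data / <db>_log) are an assumption."""
    return (
        f"ALTER DATABASE [{db_name}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; "
        f"RESTORE DATABASE [{db_name}] FROM DISK = N'{backup_file}' WITH REPLACE, "
        f"MOVE N'{db_name}_data' TO N'{data_dir}\\{db_name}.mdf', "
        f"MOVE N'{db_name}_log' TO N'{data_dir}\\{db_name}.ldf'; "
        f"ALTER DATABASE [{db_name}] SET MULTI_USER;"
    )

def restore_build_database(server, db_name, backup_file, data_dir):
    """Run the restore through sqlcmd (assumes sqlcmd is on the path)."""
    sql = build_restore_sql(db_name, backup_file, data_dir)
    subprocess.run(["sqlcmd", "-S", server, "-b", "-Q", sql], check=True)
```

Because the backups live on a shared drive, any new build machine can pick them up without the database ever having existed on its local SQL server.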

2. Do a "Cleansing" of the solution.
Since the previous build may have been for a totally different application version or code base, we don't want to have to deal with any remnants. So, we delete the XPPIL artifacts, VSAssemblies, Appl files such as labels, etc. Also on the client-side, we clean the VS Assemblies folder for the build user and delete AUC files.
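A minimal sketch of this cleansing step, with every path passed in explicitly since the exact folder layout depends on the AOS instance install (treat all names here as assumptions):

```python
# Sketch of step 2: remove leftovers from a previous build.
import shutil
from pathlib import Path

def clean_build_artifacts(xppil_dir, server_vsassemblies_dir, appl_dir, user_local_dir):
    # Server side: wipe the XppIL output and the server VSAssemblies folder
    for d in (Path(xppil_dir), Path(server_vsassemblies_dir)):
        if d.exists():
            shutil.rmtree(d)
            d.mkdir()
    # Server side: delete label artifacts (.ald/.alc/.ali) from the appl folder
    appl = Path(appl_dir)
    if appl.exists():
        for pattern in ("ax*.ald", "ax*.alc", "ax*.ali"):
            for f in appl.glob(pattern):
                f.unlink()
    # Client side: the build user's VSAssemblies folder and AUC caches
    local = Path(user_local_dir)
    user_vs = local / "Microsoft" / "Dynamics Ax" / "VSAssemblies"
    if user_vs.exists():
        shutil.rmtree(user_vs)
        user_vs.mkdir()
    for auc in local.glob("ax_*.auc"):
        auc.unlink()
```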

3. Combine all the XPOs from source into 1 big XPO.
You can use the standard Microsoft one provided on InformationSource, write your own, or use our simple open source one. At this point I'm guessing the Microsoft one has some benefits, but we still use our own simple one and it works great.
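For illustration only, a naive combiner might look like the sketch below; our real tool is the open source one mentioned above. The header and end-marker strings follow the usual XPO file layout:

```python
# Illustration of step 3: merge every .xpo under a source tree into one
# importable file, keeping a single header line and a single end marker.
from pathlib import Path

HEADER = "Exportfile for AOT version 1.0 or higher"
FOOTER = "***Element: END"

def combine_xpos(source_dir, output_file):
    out = [HEADER]
    for xpo in sorted(Path(source_dir).rglob("*.xpo")):
        # XPOs are typically UTF-8 with a BOM; decode leniently for the sketch
        lines = xpo.read_text(encoding="utf-8-sig", errors="ignore").splitlines()
        out.extend(l for l in lines if l.strip() not in (HEADER, FOOTER))
    out.append(FOOTER)
    Path(output_file).write_text("\n".join(out) + "\n", encoding="utf-8")
```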

Now, here's where we need some explanation of a model store build versus a model build. In the model store situation we are not going to uninstall the existing code. However, if we import the XPO with all the code, do we really want it to delete "sub-elements"? What if code from another model in the same layer adds a method to the same class, for example? If we import the XPO for our model, it would delete that method. We need the import to be model-aware, and XPOs just aren't.
So, to work around that issue, we use a temporary database: we create our model, import the XPO, extract the model, and then import that model into the actual model store. The model import takes care of deleting sub-elements, which is cleaner and handles the model specifics. To save time we don't compile or synchronize anything in the temporary database; we're just happy if the code is there.

4. Uninstall all models from all custom layers.
Now that we restore the database from scratch, we could skip this step. I guess we still have it in there :-) It doesn't add much overhead, except for the next step (5), which should be done if we do this one. For our temporary database, or our actual database if we're just building a model, we clean out all models in the custom layers (from ISV all the way up to USP). This is technically also part of cleansing the solution, as it makes sure there are no weird remaining artifacts from a previous build that would skew the compiler results. If we have dependencies on other models, we'll reinstall those later. In some cases there may be several ISV models that can't be installed cleanly together without merging. We have an option to exclude certain layers from the cleanup, so that we can create a base database containing ISV products, restore it, and then not remove those products. These should be exceptions, as we want to start the build as close to standard AX as possible. Again, if we restore the database we can assume there's nothing in it that needs to be removed...
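The layer wipe is essentially a loop over the custom layers calling axutil. A sketch of generating those calls (`delete /layer:` is standard axutil syntax, but treat the exact flags here as assumptions):

```python
# Sketch of step 4: build the axutil calls that remove every model from the
# custom layers, optionally skipping layers we want to keep (e.g. a
# pre-installed ISV product in the restored base database).
CUSTOM_LAYERS = ["ISV", "ISP", "VAR", "VAP", "CUS", "CUP", "USR", "USP"]

def model_cleanup_commands(axutil_exe, config, skip_layers=()):
    skip = {l.upper() for l in skip_layers}
    cmds = []
    for layer in CUSTOM_LAYERS:
        if layer in skip:
            continue
        cmds.append([axutil_exe, "delete", f"/layer:{layer}",
                     f"/config:{config}", "/noprompt"])
    return cmds
```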

5. Start AOS and synchronize.
If we removed code, we want to synchronize before we continue. If we don't synchronize and then re-import the code, IDs will be different but the DB will still have the old artifacts and IDs, resulting in synchronization errors later on. For a temporary database import (as explained above) we have an option to skip this step, since we don't care about synchronizing there (and it saves a bit of time).

6. Stop AOS, deploy references, start AOS.
For temporary databases, we skip this step for now and perform it later on the actual model store database.
Code often depends on external assembly DLLs (references). Since we are using the same machine to build all sorts of different environments and versions, we shouldn't (and can't) actually install the software or DLLs in their regular places. But since we need them to compile the code, and the compile runs on the client under a specific user, we copy all the needed DLLs into the VSAssemblies folder for the build user. We store the correct DLLs, in the correct versions, with the project's code in the source control repository. It makes sense: you version your dependent binaries as well. And that's how we can get to them from any build machine.
Also, our code may depend on third-party models. Since we deleted all models, we have to re-import them unless we have them pre-installed on the DB backup and have set the build to skip cleaning that layer. So, same as the DLLs, we have the dependent models in the source control tree, so they get pulled onto the build machine and we just install them into the model store we're about to build.
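The reference deployment boils down to a copy from the source tree into the build user's VSAssemblies folder; a minimal sketch (folder names hypothetical):

```python
# Sketch of step 6: copy the versioned reference DLLs stored alongside the
# code in source control into the build user's client VSAssemblies folder.
import shutil
from pathlib import Path

def deploy_references(references_dir, vsassemblies_dir):
    target = Path(vsassemblies_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for dll in sorted(Path(references_dir).glob("*.dll")):
        shutil.copy2(dll, target / dll.name)
        copied.append(dll.name)
    return copied
```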

7. Import labels.
We import label files using the client executable and -StartupCmd=aldimport_filename, but I think there are ways to do it with autorun as well. Now, a lot of people (ourselves included) have had numerous problems getting the AOS to pick up new labels or create labels. Labels don't show up, or old ones do but new ones don't, etc. Additionally, sometimes they do show up, but if you export the model it doesn't contain them. So, here's the scoop: 1) make sure you have deleted the old label files from the server's appl folders; 2) (super-secret trick) after importing the labels, we use an autorun XML file to call Label::flush to make sure the client/AOS flushes the labels down into the model store so the export works.
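For illustration, the flush trick as an autorun file might look like this. The `AxaptaAutoRun`/`Run` element shapes follow the SysAutoRun class, but the log path, label file ID, and language are made up (the exact parameters string we use is shown in the comments below the post):

```xml
<?xml version="1.0" encoding="utf-8"?>
<AxaptaAutoRun exitWhenDone="true" logFile="C:\Builds\Logs\LabelFlush.log">
  <!-- Wrap Label::flush in an infolog call so the autorun log shows it ran -->
  <Run type="class" name="Global" method="info"
       parameters="strFmt(&quot;Flush label XYZ language en-us: %1&quot;, Label::flush(&quot;XYZ&quot;, &quot;en-us&quot;))" />
</AxaptaAutoRun>
```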

8. Import the combined XPO file.
Now, the combined XPO file doesn't contain the VS projects, we deal with those separately. To import the XPO we use autorun. We used to use the client's import xpo startup command but autorun has some advantages (including logging) and seems more stable.
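As a sketch, the autorun file for the XPO import can be as small as this; SysAutoRun supports an `XpoImport` element, and the paths here are hypothetical:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AxaptaAutoRun exitWhenDone="true" logFile="C:\Builds\Logs\XpoImport.log">
  <XpoImport file="C:\Builds\CombinedProject.xpo" />
</AxaptaAutoRun>
```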

9. Import VS projects.
Technically you can convert VS project files into an XPO and import that. I believe the standard Combine XPOs tool doesn't handle this, and we have had unreliable results importing VS project XPOs. So, instead, we use autorun to call SysTreeNodeVSProject::importProject. Now, Microsoft just told me about another trick where you can use the msbuild process to call "Add to AOT" on the project, as you would manually from VS. I have to figure out how to do this, as it would probably solve a few remaining issues with importing projects. But for any normal VS project, the static call to importProject should work great, and that's what we currently use successfully.
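A sketch of that static call through autorun; as with the other autorun examples, the attribute shapes follow SysAutoRun's `Run` command and the project path is made up (the `@` makes it a verbatim X++ string):

```xml
<?xml version="1.0" encoding="utf-8"?>
<AxaptaAutoRun exitWhenDone="true" logFile="C:\Builds\Logs\VSImport.log">
  <Run type="class" name="SysTreeNodeVSProject" method="importProject"
       parameters="@'C:\Builds\VSProjects\MyProject\MyProject.csproj'" />
</AxaptaAutoRun>
```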

If we're doing a temporary database to create a model to update a model store, this is where we stop: we just export the model as-is without compiling. We then switch to the actual model store and import the model we just exported. Note that this will also correctly update the version number of the existing model.

10. Compile etc
Now we're back in sync with both types of builds. We run the X++ compile, generate CIL and run a sync. Here we just added the option to compile "traditionally" using the client, or using the multi-threaded axbuild utility in CU7. If any of the steps (compile, CIL) log any errors in their log file, we fail the build. If the sync fails somehow, we generate an error in the TFS build log, which results in a "partially succeeded" build.
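Failing the build on logged errors boils down to scanning the compiler output. A minimal sketch; the `Errors: n` summary pattern is an assumption about the log format:

```python
# Sketch of the step-10 gate: fail the build if the X++ compile or CIL
# generation logged any errors in its log file.
import re

def check_compile_log(log_text):
    m = re.search(r"Errors?\s*:\s*(\d+)", log_text, re.IGNORECASE)
    if m and int(m.group(1)) > 0:
        raise RuntimeError(f"compile reported {m.group(1)} error(s)")
    return True
```

A sync failure would instead log a warning-level message to the TFS build log, giving the "partially succeeded" result rather than a hard failure.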

11. Extract the code
Now we can extract either the model store or the model file. Note that you never want to extract the model store from the build that cleaned out all the code first, since that will have all new IDs for all tables. That's the exact reason why we have two distinct build workflows.
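This is also where the per-version AXUTIL repository from earlier comes in. A sketch of picking the matching axutil and building the export call; the repository layout and config names are hypothetical, while `exportstore` and `export` are standard axutil verbs:

```python
# Sketch of step 11: use the axutil matching the customer's kernel version
# to export either the full model store or a single model file.
from pathlib import Path

def export_command(axutil_repo, kernel_version, config,
                   modelstore_file=None, model=None):
    axutil = str(Path(axutil_repo) / kernel_version / "axutil.exe")
    if modelstore_file:
        return [axutil, "exportstore", f"/file:{modelstore_file}",
                f"/config:{config}"]
    return [axutil, "export", f"/model:{model}",
            f"/file:{model}.axmodel", f"/config:{config}"]
```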

Obviously we use TFS, but it should be clear that these steps can be incorporated into PowerShell scripts and run manually without using TFS build. All of the code we use for this is stored in a class library, which has a TFS Workflow Activities front-end as well as a PowerShell front-end. We are close to finalizing the CU7 axbuild pieces, and then we can do a major release of our utilities. But you can already get the code from our source repository on CodePlex.

Wow, what a wall of text. Hope it makes sense to someone :-) And for your reference, with the CU7 optimization this whole process (db operations / import code / compile X++ / generate CIL / synchronize / export model) runs in less than 40 minutes.


  1. Excellent post Joris! Yes it made sense for me very well, thanks! :)

    We're using InRelease from InCycle Software (now part of VS2013 as Release Management) for our deployment solutions to several AX 2012 R2 CU7 servers, all included in automated builds and using some very useful PowerShell scripts to do so. I think someone like you would (or might at least) be very interested to see how it works... I will be posting something on that subject soon (I hope) and then see how people like it (or not)...

    1. Sounds great. When you've posted, ping me on twitter or shoot me an email (my first name at this blog's domain -

  2. I have a little question for you Joris... Oh and by the way, I just changed my profile nickname, so Neo's comments above are from me! Just so you know, not to be confused ;)

    So my question is: when you talk about XPO imports, do you sometimes have issues importing XPOs on the build server? I mean, sometimes we have issues (it seems to happen more frequently with CU7) when synchronizing the AOS with TFS: the XPOs are effectively dropped physically on the server (GetLatest), but the AOS does not seem to "see" that it should import those XPOs, so some AX objects remain unchanged in the AOS when they shouldn't. We're using the Synchronize-AXVCS function. But it's all fine when using ax32.exe with the Synchronize VCS menu with the "Force" option... although some people recently told me that even that way it sometimes fails...

    What are we doing wrong??

    1. We never have and never will use the synchronize feature in AX. And from what I gather everyone says it doesn't work well at all, so I'm glad we're not using it. As mentioned in the post, we combine all the XPO's into one big one, then use command line to import that. Never had an issue (except for VS projects, as explained in the post).
      In a build, that synchronize would be a dependency I'm not willing to take anyway. The less high-level functionality I have to use, the better.

  3. Alright thank you Joris. We'll try and do what you said in your post :)

  4. Hello Joris,
    another excellent and useful post. Thanks a lot.
    Just a little comment regarding CombineXPO. You don't have to take it from InformationSource anymore. This version is the beta. The final version is available with other management utilities (same path as axbuild).

  5. Hi there,

    Just so you know, we just opened a ticket with MS regarding the Synchronize with TFS command, to get it fixed so that it does what it's meant to do, every time, at the first and only call.

    I'll keep you informed... :)

  6. Out of curiosity, what sort of environment are you running on that has this running in 40 minutes?

    1. Dedicated i7 laptops.

  7. I have a question regarding your "super secret trick". You said that you are using an autorun xml to call the Label::Flush system class method. I am wondering if this is custom code. I have been looking through the SysAutoRun and SysStartupCommandAutoRun classes and I don't see anything that looks like it accomplishes this. I have no problem modifying these classes to do this but I would rather use existing functionality if it is already there. Thanks

    1. Ok, so I went back and looked through the CodeCrib project and was able to figure out the AutoRun xml that is necessary. You are using the Run command.

      Can you explain to me why writing a method call to the Infolog causes that method to run?

    2. Sorry for the late reply. Yes, we're using the Run method. I'm forgetting the reason we wrap it in an infolog, but it had to be wrapped in something; for some reason Label::flush with 2 parameters wouldn't work on its own (I'd have to go back and see why). Anyway, wrapping it in infolog also outputs something into the autorun log, so you know it ran...

    3. Ok, I attempted to put the infolog call in my build process and I was able to make it work but unfortunately it wasn't enough to actually fix the labels in my build model. I debugged calling the Label::flush() method directly from Run, and the issue is that it looks for the class in the Classes node of the AOT instead of the System Classes. I am getting around this by creating a custom class, DDCAutoRunHelper. I have added a method to this class called LabelFlush() that calls the system class method. I then call my custom class to flush the labels. This also means that you can't flush the labels until after your model has been imported. This process appears to work but I will double check it this weekend with my next build push.

    4. So, the call we make from Run, without any customization in AX:

      Parameters=string.Format("strFmt(\"Flush label {0} language {1}: %1\", Label::flush(\"{0}\",\"{1}\"))", labelFile, labelLanguage)

      Of course the parameters are a bit obnoxious, but since it's a nice string, that infolog will appear in the autorun log so we can make sure it ran correctly. Hope that helps.

    5. I was attempting the infolog call from Powershell. It does complete for me but doesn't fix my label file issues. I was able to validate that my custom helper class call for Label::flush() does fix it for me. When I pushed my build model this weekend, my labels came over correctly. If you set the logFile parameter of the autorun xml, you can get confirmation of the method call as well. Hopefully this approach will help someone that is having issues making the infolog call work. Thanks for your help.

  8. Has anyone had a chance (and been successful) using the technique for VS project import documented by MS in "Change management and TFS integration for multi-developer projects (white paper) [AX 2012]"?

    1. Ryan,

      I'm unfamiliar with that whitepaper but I'll look it up. Our technique for VS projects works fine for 95% of VS projects. The only sure-fire way is to use msbuild to import the projects, a technique that Microsoft has in their PowerShell scripts for builds. I'm hoping to include that in our open source project soon.

  9. I am trying to use auto run to invoke the SysTreeNodeVSProject::importProject method. I am seeing the following output in my log file...any ideas?
    Class name: SysTreeNodeVSProject - Method: importProject
    Failed to compile run buffer.

    Failed to execute command: Run

    1. I believe I received this error when passing in parameters that didn't work with the method call. What does the Autorun.xml look like?

  10. Any ideas how to target a specific model when importing the labels, combined xpos etc? Originally I had an empty model spec'd in my template database with this model identified in the "Startup model" part of dev options. I would like to remove this empty model from my template database and have the build create the model and subsequently 'populate it'. The challenge I am encountering is that when you create the model on the fly like this, the "Startup model" isn't identified. This causes any 'import' commands to import into the wrong Model in the target layer.

    I've tried the following with no luck:
    ax32.exe [AXCFILE] -Internal=noModalBoxes -Minimize -StartupCmd=aldimport_[ALDLOC] -logdir=[LOGDIR] -model=@MyModel

    ax32.exe [AXCFILE] -Internal=noModalBoxes -Minimize "-StartupCmd=aldimport_[ALDLOC]" -logdir=[LOGDIR] "-model=@MyModel"

    ax32.exe [AXCFILE] -Internal=noModalBoxes -Minimize -StartupCmd=aldimport_[ALDLOC] -logdir=[LOGDIR] "-model=@MyModel"

    1. Figured this out. Sometimes when the documentation says "Model" it really means the model's name; other times (like in this case) it means the integer ID of the model, as seen when you run
      Select * from SysModelManifest against the model DB.

    2. Ryan, have you tried loading and setting the ax configuration to the appropriate layer on the machine that the AxBuild.exe is running?

      See this:

    3. You can also add -aol and -aolcode to your command line. We pass all three for a build (layer, layercode and model).

  11. Hello Joris,

    I was wondering if you handle the set of activated licences, configuration keys, aol codes in AX on your build machine to be the same as your targeted customer setup. Or it doesn't matter?


    1. For builds, we don't care. We usually just use our own license and AOL codes, that way it's easy to copy/paste build definition parameters between builds :-)
      There's some exceptions if we need a special database for a customer's build (eg if they have patches or nasty ISV products that won't install properly) - those usually contain the customer's license but that's more coincidence than intentional.

  12. How are object IDs controlled from the build environment to other environments when builds are done periodically? If a new table is introduced that sorts alphabetically before an existing table, the IDs of the existing table will change when a new build is created. Does moving this build model to the staging environments cause any ID sync issues?

    Please advise.

    1. IDs are only part of a model store, not of model files. The IDs are also only important between environments that will be sharing data - as in, you'll restore a database backup from one to the other.
      So in that respect, your TFS repository and the model file coming from your build won't have IDs at all. Yes, once you import the model it will generate IDs as you described - but at that point you'll deal with it the way you are today, assuming you have a strategy.
      I don't think there's a one-size-fits-all way of doing code deployments. IDs play a role but you need to look at your overall strategy to manage your environments. A build strategy should just fit in perfectly with what you're already doing.
      Hope that helps.