Tuesday, September 20, 2011

Deploying your AX 2012 Code

Yesterday I talked about deploying .NET assemblies and how they get automatically deployed for any Visual Studio projects you have. At the end, I briefly touched on X++ code and the CIL assemblies. I will take that a bit further today. If you haven't read the Microsoft whitepaper "Deploying Customizations Across Microsoft Dynamics AX 2012 Environments" yet, I suggest you take a look; it's a good read.

We had a gathering with some other MCTs at Microsoft's offices in Fargo, and there was a big discussion around deploying customizations, and around the "high availability" scenario where you want to deploy code while avoiding too much downtime. But let's not get ahead of ourselves.

[Editing note: I admit, I like to rant. Please bear with me, I promise it will get interesting further down :-)]

I made this statement to one of our clients a few months ago, and I'd like to share it with you: moving XPOs is barbaric. Yes, barbaric. As AX developers, administrators, implementers and customers, we need to move on. Yes, moving XPOs has some benefits, but they are all benefits that circumvent proper procedure. The benefit of moving one piece of code? That's introducing risk and circumventing proper code release management (branching and merging, for the version control aficionados). The benefit of no downtime? That is bad practice; what about users who have cached versions of your code running in their sessions? Not to mention data dictionary changes. The benefit of not having to recompile the whole codebase? Talk about bad practice again, and all the possible issues that come out of that. Sure, you say, but I am smart and I know what I can move without risk. Famous last words. And that's not a reason to circumvent proper code release procedures anyway. Here's my take:

1) Code should be released in binary form. For AX 2009, that means layers. In AX 2012, we have models.
Have you ever received a .cpp or .cs code file from Microsoft containing a patch for Windows or Office? Didn't think so.

2) Released code should be guaranteed to be the same as the tested code (I'm sure your auditors will agree).
When importing an XPO, there is an overwrite and/or merge step, which is prone to user error.
Also, who is importing the XPO? If there really were no chance of doing anything wrong or different, couldn't you just have a user do it? Yeah, didn't think so either.

3) You should schedule downtime to install code updates.
Sure, AX doesn't mind a code update while that code is still running. A binary update? Not so much; you need to restart the AOS.
Remember Windows asking to restart the system after an update was installed? Yes, there's a good reason for that: to replace files that are in use.

I can keep going for a while, but those are my main points. So, back to AX 2012. I will relate this to the points above.

1) Models are binary files. The .axmodel files are technically assemblies; if you open them up in Reflector or ILSpy you won't see too much, though: they contain the model manifest (XML) as a resource, and a BinaryModel resource. A model is a subset of a specific layer, so it's a more granular way to move code than moving layers was in AX 2009. But, just like in AX 2009, since this is not the full application, after installing the model one should compile the application.

An additional reason to do so in AX 2012 is the fact that the model also contains compiled X++ p-code. Any references to other classes, fields, etc. get compiled into IDs, and in AX 2012 those IDs are installation-specific. That means your code using field ABC may have been compiled to use field ID 1, but in the new environment you're importing this model into, field ABC may really be field ID 2. So your code, when run, will reference the wrong field (you can actually test this; it's freaky). Compiling is a necessity. Also think of situations where your model modifies a base class: the whole application needs to be recompiled to make sure all derived classes get your update compiled into them (yes, you can use compile forward instead, assuming the person installing the model is aware of what is actually in the model).
You can replace an existing model. This avoids the typical XPO issue where you need to import with "delete elements" enabled, which you can't always do since you may not want to remove fields that are not in your XPO but belong to another project. The model knows what it contained previously and what it contains now, and will only delete the elements that were removed from your model. One thing to remember: you should never uninstall the old model and then install the new one as a replacement. That will likely cause havoc in your object IDs, which means tables will get dropped and recreated rather than updated!
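For reference, this is roughly what a model move looks like with the AxUtil command-line tool (a minimal sketch; the model name, file name and conflict option are examples, so check axutil's built-in help for your scenario):

    :: On the source system: export the tested model to a binary .axmodel file
    axutil export /model:"MyCustomizations" /file:MyCustomizations.axmodel

    :: On the target system: import it, replacing the existing version of the model.
    :: The /conflict option controls what happens when element IDs collide.
    axutil import /file:MyCustomizations.axmodel /conflict:overwrite

After the import, start the client and do the full compile (and CIL generation) discussed above.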

2) Obviously, by exporting the model from the environment where you tested it, the code will be the same. That's the same confidence as moving layers in previous versions of AX. Another method is moving the code using branch merging in a source environment, and creating a binary build out of that. In this case you are branching a known and tested set of code, and you have full traceability of the code as it progresses through your testing and release processes.

3) OK, this is all great, but... compiling in AX 2012 takes a LONG time compared to AX 2009. And then we need to generate the CIL as well (which doesn't take that long, in my experience)! So you catch my drift on binary deployment, and you see what I mean about matching released code to tested code... but restarting the AOS is one thing; having to recompile and wait for that to complete? That could be some serious downtime!
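As a side note, the compile and the CIL generation can at least be scripted through client startup commands, so a staging system can be prepped unattended overnight. A sketch (I'm writing the command names from memory, so verify them against your installation):

    :: Full X++ compile; the client exits when the compile finishes
    ax32.exe -startupcmd=compileall

    :: Generate CIL from the freshly compiled X++
    ax32.exe -startupcmd=compileil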

Enter... the model store! Think about this. The model store today is what the APPL directory used to be in previous releases: it contains EVERYTHING. The layers, which contain the code, the labels, etc. So it contains the compiled code as well; in fact, the model store even contains the generated CIL code. So, going back to the "Deploying Customizations Across Microsoft Dynamics AX 2012 Environments" whitepaper: it has a section called "Import to the target system". To avoid downtime, you should have a code-copy of your production environment ready, where you can replace models, do whatever you need, and then compile the application and generate the CIL for X++. This is what the whitepaper means by the "source system". When that system is fully prepped, you can export the FULL model store. Again, this contains source code, compiled code, CIL and even your VS projects' built assemblies!
To further avoid downtime, you can import the model store into production (the "target" system in the whitepaper) using a different schema. This basically lets you "import" the model store without disturbing the production system. Then, whenever you are ready, you can stop the AOS and apply your temporary schema, effectively replacing the code in production with the code you built. The only things you still have to do after starting the AOS are synchronizing the database (and deploying web and reporting artifacts), and of course cleaning up the model store's old schema.
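Putting it all together, the flow looks something like this with AxUtil (again a sketch: the schema and file names are placeholders, the AOS service name depends on your instance number, and you should double-check the exact syntax against the whitepaper):

    :: On the prepped source/staging system: export the full model store
    axutil exportstore /file:MyBuild.axmodelstore

    :: On production, with the AOS still running: import into a temporary schema
    axutil importstore /file:MyBuild.axmodelstore /schemaname:TempSchema

    :: During the scheduled downtime: stop the AOS, swap the schema in, restart
    net stop AOS60$01
    axutil importstore /apply:TempSchema
    net start AOS60$01

    :: Then synchronize the database, redeploy web and reporting artifacts,
    :: and drop the old schema once you're satisfied everything works.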

So, when you start the AOS, it will recognize the changed CIL in the model store and download it to its bin\VSAssemblies directory (see my previous article).

Now that's what I call high availability. So... no more reason not to move code in binary form! No more barbarism.

10 comments:

  1. Interesting post - I assume you're talking about maintaining a staging environment (final integration test prior to LIVE deployment). What about moving code from, say, a DEV to a TEST environment though? I'm not convinced that models are the way to go, as there will inevitably be code conflicts that need to be merged manually. Tedious, but necessary, and in that respect the project import/merge functionality is still the best option.

  2. Yes, I'm talking about a staging environment to minimize production downtime.

    From DEV to TEST, think about this: you are probably talking about a complete customer scenario. Unless you make each spec a separate model, which is serious overkill, I'm guessing people will just have one model for the whole customer solution, which boils down to moving a whole layer. No merging there, of course.
    If you DO end up using a logical grouping of development into more than one model, I do believe it is possible to code in such a way that you avoid code conflicts. Mainly the eventing story, as well as the more granular forms in the AOT, will make it possible to segregate the code properly. There will be some types of customizations, the more "destructive" ones that change existing functionality more deeply, that will still need to modify existing code, which gives the potential for merge issues. However, in my experience they rarely overlap on the same objects, and if they do, they will probably belong in the same logical model anyway.

  3. I agree with you in principle, but in an implementation where there is heavy modification, events aren't going to help a great deal. In a situation where it's "we need to do xxx after an invoice has been posted", then no problem. But what about an instance where "we need to check x prior to posting an invoice, and if condition y is true then we need to post the invoice but modify the structure of the resulting GL postings"? I've heard another developer refer to 'event spaghetti' (for ASP.NET development), where yes, we segregate the code out, but you lose a great deal of visibility as to what the program is doing (or will do). In addition, as far as I can tell the debugger support for events isn't great, in that it doesn't provide a full stack trace to see where events have been executed from. Might be a minor point, but could be a real headache when debugging complex logic.

    That may not be a great example, but there is no way around the fact that (I would say more often than not) we need to *change* the code as opposed to *extend* it via things like events.

    Of course this comes down to the nature of the project (and the team, and the development cycle). Just my two cents. Good blog, by the way.

  4. You're absolutely right, and trying to do a mod as you describe (i.e. GL postings) with events would be a square-peg-in-a-round-hole type of situation. In my experience, though, those types of mods are a minority, and for something like that I try to negotiate with the finance consultants to see if we can do some type of extra journal posting (perhaps automated) rather than making those drastic code changes, which in the end also helps future upgrades.
    Anyway, I do agree with you: events are not a cure-all, and there will always remain instances where you'll have to just change the code and be stuck with merges and upgrades.

    Also, glad you're enjoying the blog. Spread the word ;-)

  5. Amen to that!

    There are still people debating this topic, even though the answer should be abundantly clear.

    We only use XPO files when performing builds, and even then they cause trouble.

  6. This comment has been removed by the author.

  7. I totally agree on not using XPOs anymore.

    However, ultimately I'm not the one who makes the final decision about this.
    Do you have any recommendations on how to cope with managers, customers and non-technical colleagues who are glued tight to XPO deliveries?

    The most common reasons given to stick with XPOs:
    - We only want adjustments x and z in production, but not y
    - We must be able to deliver hotfixes to production, regardless of pending approvals in the acceptance environment
    - XPOs are faster to deploy

    Especially the reason about hotfixes is hard to debunk.

  8. The only safe way to do "code promotion" deliveries (i.e. move x and y, but not z) is through a version control system that supports branching. At least it will keep track of differences and conflicts (object X has been touched by both FDD a and FDD b, and you're only moving one of them), and there's no manual merging going on that could result in issues - the branch/merge system will do it for you, or at the very least alert you to the exact issue. In such a case, the proper way would be to then promote the code and test it one last time - since "integrated testing" implies testing everything, moving only part of it invalidates your testing. Hotfixes can work the same way; with a version control system you can promote these along.

    XPOs are not faster to deploy. If you're not doing a full compile after importing an XPO, then you're doing it wrong, and the compile is what takes the time, so XPOs are not faster to move. With a staging environment to do the compile of the model store, and then moving that model store, you can at least reduce downtime.

    Here in the USA, companies with auditors have a hard time with all this code promotion stuff in AX. The easiest way to mitigate most of these issues is to have separate TEST and UAT environments. You can move and test all code in TEST. When you're ready to move specific pieces, that's when you move them into UAT, where you sign off on the whole thing (auditors REQUIRE that you prove what got moved is what you tested - which you cannot do if you're not moving the whole thing). For production hotfixes, I've seen approaches where there is a separate UAT that's an exact copy of PROD without any pending approvals, so you can rush a hotfix through. You wouldn't need to keep that environment around (unless you do a lot of hotfixes, which is not a good sign :-).

    All in all, the only reason you have to explain this is that people are used to the scripting nature of X++ and its ability to compile just one object, etc. New customers will not make a point of this if you tell them how you deploy code, because that is probably how *any* software they have dealt with in the past was deployed. With CIL that game has somewhat changed, since everything needs to compile without errors before the CIL can be generated, so I can only speculate about what the next release of AX will bring. But I'm sure that for anyone unwilling to come up with new procedures now and find a good way of doing it, AX "7" (or another version down the line) may turn out to be a hard landing.

  9. Following your recommended best practice of exporting models from a staging environment into production, can you achieve this while using SQL Server 2012 "AlwaysOn" availability groups, given the containment constraints? Thoughts?

    Replies
    1. I would avoid any magic on the SQL side like SQL replication, SSIS or AlwaysOn. You can do a full backup/restore, but I wouldn't use any other feature on the SQL end to move code. It's too risky, and there aren't enough failsafes.
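      For what it's worth, the backup/restore route is just standard SQL Server commands, along these lines (the server, database and path names are placeholders; also note that in AX 2012 RTM the model store shares the database with the business data, so a full restore moves data along with code, which is fine for creating a staging copy but not for pushing code into a live system):

        sqlcmd -S SourceSql -Q "BACKUP DATABASE [MicrosoftDynamicsAX] TO DISK = N'D:\Backup\AX.bak' WITH INIT"
        :: copy the .bak file to the target server, stop the target AOS, then:
        sqlcmd -S TargetSql -Q "RESTORE DATABASE [MicrosoftDynamicsAX] FROM DISK = N'D:\Backup\AX.bak' WITH REPLACE"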
