Wednesday, February 27, 2013

How We Manage Development - Architecture

With all the posts about TFS and the details on builds, I thought it was time to take a step back and explain how we manage development in general. What infrastructure do we have, and how does this whole TFS business work in practice?! Let me take you through it.

First, let's talk about our architecture. We currently have two Hyper-V servers, which host a development server per client. These development servers contain almost everything except the AX database; we use a physical SQL server for that. So the development servers contain the AOS, SSRS/SSAS, SharePoint/EP, Help Server, and IIS/AIF (where needed). Each development server is tied to a specific TFS project for that client.
In addition to the main AOS used for development (yes, we still do the "multiple developers - one AOS" scenario), there is a secondary AOS on each machine that is used exclusively for builds. Developers typically do not have access to this AOS, or at least never go in; in fact, it is usually shut off when it's not building, to save resources on the virtual machine.
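The start-build-stop cycle for that dedicated build AOS can be sketched roughly as follows. This is a hypothetical illustration, not our actual tooling: the service name `AOS60$02` is made up (real AOS Windows service names vary per installation), and the build step itself is a placeholder.

```python
import subprocess

# Hypothetical name for the dedicated build AOS Windows service;
# real names look like "AOS60$01" and vary per install.
BUILD_AOS_SERVICE = "AOS60$02"

def run(cmd, dry_run=False):
    """Run a command, or just record it when dry_run is set."""
    if dry_run:
        return "DRY-RUN: " + " ".join(cmd)
    subprocess.run(cmd, check=True)
    return "OK: " + " ".join(cmd)

def nightly_build(dry_run=False):
    """Spin up the dedicated build AOS, run the build steps, and shut
    the AOS back down so it doesn't consume VM resources in between."""
    log = [run(["net", "start", BUILD_AOS_SERVICE], dry_run)]
    try:
        # Placeholder for the real work: import XPOs, compile, etc.
        log.append(run(["echo", "build steps here"], dry_run))
    finally:
        log.append(run(["net", "stop", BUILD_AOS_SERVICE], dry_run))
    return log

if __name__ == "__main__":
    for line in nightly_build(dry_run=True):
        print(line)
```

The `finally` block is the point of the sketch: the build AOS always gets stopped again, even when a build step fails, so the VM isn't left paying for an idle AOS.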

There are several advantages to having a virtual development server per client:
- Each server can have whichever version of AX kernel we need (potentially different versions of SQL etc)
- If clients wish to perform upgrades we can do that without potentially interfering with any other environments
- Any necessary third-party software can be installed without potentially interfering with any other environments
- The virtual servers can be shut down or archived when no development is being done

I'm sure there is more, but those are the obvious ones.

Traditionally, VARs perform the customer development on a development server at the customer site. We used to do this as well (back in the good ol' days). There are several drawbacks to that:
- Obviously it requires an extra environment that, if the client is not doing their own development, may exist solely for the consultant-developers. With AX 2012, resource requirements per AOS server have skyrocketed, and some clients may not have the resources or budget to support another environment.
- We need source control. This is the 21st century; developing a business-critical application like AX without some form of source control is irresponsible. MorphX source control is easy to use (although it has limited capabilities), but any other source control software that integrates with AX requires another license of sorts; in the case of TFS you also need a SQL database, and so on. Especially for clients not engaging in development efforts, this is a cost solely to support the consultant-developers.
- Segregation of code and responsibilities between the VAR and clients that do their own development. Developing in multiple layers is the way to do that, but working with multiple developers in multiple layers (or multiple models) with version control enabled is a nightmare.
- No easy way to add more resources (= developers) at critical times, since remote developers need remote access, user accounts, VPNs, etc. Definitely doable, but not practical.

As with most development topics in the Dynamics AX world, any new customer has no issue with this and finds it logical, accepting the workflow of remote development, deploying through builds, etc. It is always the customers that have implemented AX the traditional way (customers upgrading, etc.) that have a hard time changing. It is interesting how this conflicts with a lot of VARs switching to off-shore development, essentially the ultimate remote development scenario. At our AX practice within Sikich we only use internal developers that work out of our Denver office. (Want to join us in Denver?)

We don't have all of our clients up on this model, but all new clients are definitely set up this way. As of this writing, we have 11 AX 2009 client environments and 9 AX 2012 client environments running on our Hyper-Vs and tied into and managed by TFS.

With that out of the way, we'll look at our TFS setup in the next post.


  1. This is good stuff! Thanks for sharing, Joris!

  2. Joris,
    How do you handle the inevitable 'quick fix', and how do you manage the downtime associated with a model move (deployment)? Especially in a scenario where you have a multi-person functional team working on multiple (possibly unrelated) customizations that are completed and refined over a longer period of time?


  3. Rob,

    Stay tuned for the next few posts. :-)
    But in general, your questions show "your age" in the AX world - the things you mention are all historical AX artifacts and are purely mindset issues. There are plenty of other business critical applications where none of this would work anyway. Changing our AX model to be like any normal software engineering practice doesn't change a whole lot, just your way of thinking and working with AX. And in the end it improves your overall quality of delivery, there is no question about that.
    We do this with AX 2009 as well, but AX 2012 forces you more in that direction. We'll see where the next release goes, but I can guess.

    1. Hey! Are you calling me old? ;)

      When you say "There are plenty of other business critical applications where none of this would work anyway.", are you referring to the scenario I describe, or to the 'build deployment' scenario?

      I'm all for the 'build/deploy' process, but unless ERP implementation methodologies significantly change, I don't see this viable in the near future.

      I'd love to see you prove me wrong. And would be very interested in seeing how you implement this in a new 2012 implementation where there is ongoing solution design.

    2. No, not calling you old, but calling you old school :-)

      I'm talking about both your scenario and the build deployment. Think about this: what if AX were one giant C# project for the business logic that compiled into a DLL (we're already part-way there with 2012)? You would need to compile the whole thing every time (one error in any piece would fail your DLL build). To deploy, you have to stop your application to replace the DLL, since it is locked. How would you manage that downtime? How different is that really, except for some procedural or architectural details? The only "wrench" today is the time it takes to compile AX - on our slowest VMs it's 50 minutes for AX 2009, 3 hours for AX 2012, and 5 hours for AX 2012 R2. That really is the only problem - but it's a problem all around for AX, whether you do build deployments or not.

    3. Gotcha.
      I have no doubt that eventually we will get to the point where individual changes are no longer possible.

      It is going to be interesting to see how that will change the implementation (and development) process - can't wait :)

    4. Good post!
      Have you seen any issues with TFS synchronization failing (not synchronizing) with the architecture you described above running under AX2012 R2?

    5. Hi Todd,

      We don't use TFS synchronization so I can't say I have. Our builds import XPOs and especially VS projects, and we had plenty of issues with that originally when 2012 just came out. Perhaps the synch has similar problems.

    6. Meant to say, we especially had issues with VS projects.

    7. I'm sure this will be in one of your follow-up posts... I'll ask it anyway:
      How do you efficiently manage the synchronization of multiple developer instances? I'm working on a project (similar architecture) with 12+ developers in the US, Romania and India.

    8. We actually only have one AOS that is shared by all developers. There are some minor issues with CIL in that setup, but it hasn't been a big deal so far; granted, we don't have 12+ developers working on the same instance at the same time. So all developers use remote desktop and log into the Hyper-V image for that particular customer.

      To support that one-AOS-multi-developer scenario we have had to make customizations to the TFS integration, though. I will indeed cover that in more detail.

      I agree our scenario is not ideal, and somewhat contradictory to my pleas for more software engineering and fewer AX cowboys. Unfortunately neither scenario (one AOS or multi-AOS) works well at all right now, so we took the lesser of two evils, in my opinion.

    9. Good post, Joris. Sounds interesting using TFS in a shared development environment and as you don't have to do TFS synchronization I guess a lot of the overhead is gone. I look forward to your next post on customizing TFS.

      I am not a fan of TFS, and my experience with isolated development environments and TFS synchronization is that it causes about a 20 percent overhead on development time. I often ask my customers whether they think using TFS is worth an increase in development time of about 20 percent.

      I advise AX customers to have a system that keeps track of the full history of which mods have been done, by whom, and when, so any code changes are documented and logged; TFS is just not what I think of first. I guess it becomes more and more difficult to avoid TFS, so it's good to hear how you cope with the evil :-)

    10. You have no idea how sad you make me when you call TFS evil and overhead :-(
      I do hope I can change your mind throughout these posts. Your new blog says you want to help improve quality and speed of AX dev projects, so hopefully you'll eventually agree you should start with TFS! :-)

  4. Thanks for the good post, Joris!

    Todd, Rob and Steen - small world. Glad to see I'm not the only geek out here. :)

  5. Hi Joris,

    Thanks for sharing

    I'd have 2 questions about your isolated environments for developers:
    - What are the minimum hardware requirements of your virtual machines? (CPU, RAM)
    - In these environments, how do you manage the business data? Does each developer have a copy of the full customer database, or do they work with test data? How do you ensure the AX module setup is correct across the environments and suitable for efficient development?

    Thanks !

    1. With isolated environments you can basically give everyone a clone of one original image. As usual, the virtual machines should have at least 8 GB of RAM, more where possible (16 GB or so?). I believe Microsoft has some recommendations for demo-environment virtual machines, and those of course can apply here as well.
      Data is trickier. If we can get the customer's database that's of course preferred, but that's not always possible (DoD clients, SEC restrictions, etc.). We work with the Contoso data set where needed. As you pointed out, that may make the setup trickier, but we deal with it. Worst case, most of the testing has to happen on the client's test environment.
      Note that we use shared environments, not isolated ones per developer, but the same things apply.
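The clone-one-golden-image approach can be sketched like this. `Export-VM` and `Import-VM` are real Hyper-V PowerShell cmdlets, but everything else here is an assumption for illustration: the VM name and export directory are made up, this script only builds the command strings rather than executing them, and in practice `Import-VM` wants the path to the exported VM's configuration file, whose exact location depends on the Hyper-V version.

```python
# Rough sketch: export a golden Hyper-V image once, then import it
# N times as independent copies with new VM IDs. Only builds the
# PowerShell command strings (a dry run); paths are schematic.

def clone_commands(vm_name, export_dir, copies):
    """Return the PowerShell commands for one export and N imports."""
    cmds = ['Export-VM -Name "{0}" -Path "{1}"'.format(vm_name, export_dir)]
    for _ in range(copies):
        # -Copy -GenerateNewId imports a duplicate instead of
        # registering the exported VM in place.
        cmds.append(
            'Import-VM -Path "{0}\\{1}" -Copy -GenerateNewId'.format(
                export_dir, vm_name))
    return cmds

for cmd in clone_commands("AX2012-Golden", r"D:\Exports", 2):
    print(cmd)
```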

  6. @Joris, nice post.

    Did you guys have to customize the TFS workspace into multiple workspaces (per user)? For me that's the main bug when doing "multiple devs - one instance" with TFS enabled. Any other customizations?

    1. Yes, we customized it so the local repository folder has the user ID appended at the end. This can cause other issues, so we make those different ID folders NTFS junctions of the same base folder, so that XPOs are always present everywhere. This was an issue in 2009; we kept the customization in 2012, though I'm not sure whether it's still an issue there.
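The junction trick can be sketched as follows. This is a hypothetical illustration, not our actual setup script: the base path and user IDs are invented, and the script only builds the command strings. `mklink /J` is the real Windows way to create an NTFS junction, and it is a `cmd.exe` builtin, hence the `cmd /c` prefix.

```python
# Hypothetical sketch of the per-user workspace folders described
# above: TFS wants a distinct workspace folder per user, so each
# user-suffixed folder is made an NTFS junction pointing at one
# shared base folder (so XPOs are always present everywhere).
BASE = r"C:\TFS\Contoso\Repo"

def junction_command(user_id, base=BASE):
    """Build the Windows command that turns the per-user folder into
    an NTFS junction of the shared base folder."""
    per_user = "{0}_{1}".format(base, user_id)
    # mklink /J <link> <target>; mklink is a cmd.exe builtin.
    return 'cmd /c mklink /J "{0}" "{1}"'.format(per_user, base)

for user in ("alice", "bob"):
    print(junction_command(user))
```

Note that `mklink /J` requires the link path not to exist yet, so in practice the TFS-created per-user folder would have to be removed (or never created) before the junction is made.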

  7. Hi Joris,

    Can you please share how you customized TFS for a shared environment? What problems did you encounter without these customizations? Thanks
