Tuesday, November 5, 2013

What You Need To Know About the 15-minute CU7 Compiler

If you haven't heard yet, CU7 has been released for AX 2012 R2. There are lots of things to get excited about (comparing reports, anyone?), but the thing I'm most excited about is the new compile optimization. In the past we did a lot of work to identify the compile bottlenecks and speed things up by minimizing them; if you're on AX 2012 RTM, that's still your best bet. However, the main issue remained: the compiler didn't use more than one core and so didn't scale with hardware.

So, no more! CU7 includes a new way to compile the X++ code (CIL compilation is unchanged). First, it eliminates the client altogether, taking the 32-bit client (and its communication with the AOS) entirely out of the picture and removing that bottleneck. An added advantage is that the server component is a 64-bit process, allowing access to more memory. Secondly, it spins up multiple processes to compile pieces of the X++ code, effectively using multiple cores.

There are a few gotchas that you need to be aware of, though.

Because the compile now runs in a 64-bit process, there are limitations on loading certain dependent client-side (32-bit) components, such as ActiveX controls, COM objects, or even DLLs and assemblies. To work around this, the compiler uses reflection on these objects, effectively allowing it to check everything without actually loading the components.
Obviously, to check all these dependencies correctly, the compiler needs access to client-side components it typically doesn't have installed. To that end, the compiler needs the path to a folder containing all the DLLs it may need; typically, pointing it to the folder of an installed AX client is enough.
When calling COM objects in X++, you can "chain" method calls (for example: COM.method1().method2();), relying on the fact that the first call returns an object of a certain type on which you can call the second method. With the server-side compile, this chaining cannot be checked, so such code has to be refactored into multiple calls (for example: o = COM.method1(); o.method2();). Note that you will run into this when compiling a model store version prior to CU7 with the new compiler (SysInetHTMLEditor has calls like this that need to be fixed).
Finally, because it starts multiple instances of the server process to do the compile, hotswapping needs to be turned OFF.

So, how does it work?

Well, first: check and make sure hotswapping is turned OFF! Next, open an elevated command prompt ("run as administrator") and navigate to the server/bin folder, which contains the axbuild.exe program. There are several parameters you need. An optional one controls the number of parallel processes it uses (aka "workers"). By default axbuild creates roughly 1.4 workers per core: I have 4 cores in my laptop (2 physical, 4 logical), so axbuild starts 6 worker processes. I found that reducing that to 4 workers on my machine gained me about 10 minutes, so play with this a bit to find the optimal setting. Additionally, you need to specify the AOS you are compiling for, using the AOS "instance number" on the machine you are on. On my laptop I'm testing this against my primary AOS, which is "01" (yes, you need the leading zero). Finally, as mentioned earlier, you need to provide the path to a binary folder containing client-side DLLs, so pointing it to the client/bin folder works great.
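For reference, the "roughly 1.4 workers per core" default works out like this (a back-of-the-envelope Python sketch; the 1.4 factor and the rounding up are my observation, not documented axbuild internals):

```python
import math

def default_workers(cores, factor=1.4):
    # roughly 1.4 workers per logical core, rounded up
    return math.ceil(cores * factor)

print(default_workers(4))  # 4 logical cores -> 6 workers
```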
My final command ends up being:

cd \Program Files\Microsoft Dynamics AX\60\Server\<YOUR AOS NAME>\bin
axbuild.exe xppcompileall /aos=01 /altbin="C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin" /workers=4

I'm currently running a laptop with a 3 GHz i7 (2 physical cores / 4 logical), 16 GB of memory and an SSD, running both the AOS and SQL Server on this machine (with SQL constrained to 4 GB of memory). I consistently hit a 15-minute compile time.


As mentioned earlier, if you are planning to use this on CU6 or earlier, you will have to clean up some COM calls. You will see these as compile errors in your compile log output, which is located in C:\Program Files\Microsoft Dynamics AX\60\Server\<YOUR AOS NAME>\Log (you can specify an alternative path on the command line).

For more details on command line arguments and architecture, check out these two Microsoft links:
AX Tools Blog about CU7 Parallel compiler
MSDN Article on AXBuild


Thursday, September 26, 2013

Linq to AX Example using WPF

Today I decided to investigate and blog about a feature I haven't tried since the beta of AX 2012 R2: the Linq connector. I do quite a bit of C# work regularly, not related to AX, and Linq (along with WPF) is one of my favorite frameworks. So combining that with AX seems like a perfect match to me.

For this example, I decided to do something basic. WPF allows easy binding, and Linq allows querying and returns an IQueryable that you can bind to. In the past I've blogged about using WPF to bind to AIF services in my 10-minute app series. This time, however, we'll call our code from within AX and stay inside the AX client process and connection, which is where the Linq connector works. You could use this over the business connector as well; just keep in mind that Microsoft has announced that the BC technology will be deprecated in the next release.

Unfortunately, there are several limitations and issues using Linq, and I'll talk about those here.

So, let's dive right into it. Open Visual Studio 2010 (with the VS extensions for AX 2012 R2 installed); again, this ONLY works on R2 and higher. In Visual Studio, create a new class library project; I've called my example "AX62Linq". Once it has opened, add the project to the AOT as shown below.


Once it's added to the AOT, right-click on the project and select properties. In the properties window, make sure to set the "Deploy to Client" property to "Yes". If you want to run and debug directly from Visual Studio, set the "Debug Target" to "Client".


Next, we'll create a WPF window control which we'll call from within AX. Right-click on the project and click Add > New Item. Select WPF and create a new User Control (WPF) - don't pick the Windows Forms user control! I named my user control "CustomerSearch".


Instead of a user control, though, we'll make this a full-on window. In the CustomerSearch.xaml.cs code-behind (expand CustomerSearch.xaml in your Solution Explorer and double-click the CustomerSearch.xaml.cs file), change the inheritance from UserControl to Window.

Original:
public partial class CustomerSearch : UserControl
{
    public CustomerSearch()
    {
        InitializeComponent();
    }
}

New:
public partial class CustomerSearch : Window
{
    public CustomerSearch()
    {
        InitializeComponent();
    }
}


To support this, you also need to add a reference to System.Xaml.


In the designer of the CustomerSearch.xaml, let's change the "UserControl" tag to "Window".

Original:
<UserControl x:Class="AX62Linq.CustomerSearch"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
             mc:Ignorable="d" 
             d:DesignHeight="300" d:DesignWidth="300">
    <Grid>
            
    </Grid>
</UserControl>

New:
<Window x:Class="AX62Linq.CustomerSearch"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
             mc:Ignorable="d" 
             d:DesignHeight="300" d:DesignWidth="300">
    <Grid>
            
    </Grid>
</Window>


Next, we'll add the references to the AX Linq libraries. Right-click References again and click "Add Reference", but this time we'll have to browse to the DLLs we need. These DLLs are in the AX client's bin directory, which by default on a 64-bit system is C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin. Add the files matching Microsoft.Dynamics.AX.Framework.Linq*.dll. On my current test system, which is AX 2012 R2 CU6, I have three files; other versions may have different files or a different number of them. The reason I say this is that the code example on MSDN seems to show a difference from my system.


Alright, that's a lot of "blah" for the tiny bit of code we're going to write, but here goes. First, we need to instantiate a query provider for AX. Then, we create a query collection object for the table we want to query. That table should be a proxy to the table we're interested in, so first, open the Application Explorer toolbar (from the Visual Studio menu: View > Application Explorer). Expand Data Dictionary / Tables and find CustTable. Right-click on CustTable and select "Add to project". That creates the proxy for you. You'll need to declare some using statements at the top. Again, I have a difference on my system versus the code example from MSDN referenced above. On my 2012 R2 CU6, here are the using statements I added:

using Microsoft.Dynamics.AX.Framework.Linq.Data;
using Microsoft.Dynamics.AX.Framework.Linq.Data.Common;
using Microsoft.Dynamics.AX.Framework.Linq.Data.ManagedInteropLayer;
using Microsoft.Dynamics.AX.ManagedInterop;


That allows us to declare the query provider and query collection:

QueryProvider provider = new AXQueryProvider(null);
QueryCollection<CustTable> custTableCollection = new QueryCollection<CustTable>(provider);


Next, we can perform our query. If you're familiar with Linq, it's pretty much regular Linq, though there are a few restrictions. At least the basics work. For example, here I'm querying for all customers in a given customer group (specified in a string variable named "customerGroup"):

var customers = from c in custTableCollection where c.CustGroup == customerGroup select c;


This is where it gets a little bad. There's an issue when both a method and a field have the same name. For example, the CustTable table we are using has both a field named "Blocked" and a method named "Blocked". Now, the proxy generator for AX avoids this issue by naming the field "Blocked_" with an underscore. However, the Linq provider seems to not pick up on this correctly. So as soon as you try to use the customers list from the Linq query, you will receive an exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: The supplied method arguments are not valid. If you look through the stack, you'll see it go down to the interop layer and linq libraries, starting with Microsoft.Dynamics.AX.ManagedInterop.Record.createFieldExpressionNode(String fieldName).
Luckily, C# features anonymous types, and as handy as they are with Linq in general, they are a must to solve our problem here.
So, let's change our Linq query to not return CustTable objects, but rather a new anonymous type containing only the fields we want (and... only fields that don't have a method with the same name). As for the ugly part: if anyone ever decides to add a method with the same name as one of your fields, I guess you're done.
Below I create a new anonymous type containing the account number and delivery mode fields.

var customers = from c in custTableCollection where c.CustGroup == customerGroup select new { c.AccountNum, c.DlvMode };
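To see why the projection sidesteps the problem, here's the same idea over a plain in-memory list (a pure Python stand-in, not the AX Linq provider; the CustTable class and its exploding Blocked member are made up for illustration):

```python
class CustTable:
    """Hypothetical stand-in for the generated proxy class."""
    def __init__(self, account_num, dlv_mode, cust_group):
        self.AccountNum = account_num
        self.DlvMode = dlv_mode
        self.CustGroup = cust_group

    @property
    def Blocked(self):
        # stand-in for the field/method name collision blowing up on access
        raise RuntimeError("The supplied method arguments are not valid.")

rows = [CustTable("C-001", "Air", "10"), CustTable("C-002", "Truck", "20")]

# project only the fields we need; the problematic member is never touched
customers = [{"AccountNum": c.AccountNum, "DlvMode": c.DlvMode}
             for c in rows if c.CustGroup == "10"]
print(customers)  # [{'AccountNum': 'C-001', 'DlvMode': 'Air'}]
```

Materializing the full row would touch every member (and blow up); projecting into a narrow shape only ever reads the safe fields, which is exactly what the anonymous type does in the C# query.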


An alternative to this whole mess would be to remove the conflicting methods from the generated proxy file. Unfortunately, by default a rebuild of your project will regenerate the proxy code. If you want to try that out anyway, right-click a "CustTable" declaration in your code (for example in the QueryCollection declaration we have) and click "Go To Definition".

Anyway, back to the good! Let's create our XAML window and try this out, shall we? Open the CustomerSearch.xaml file by double-clicking it. In the XAML code, inside our currently empty <Grid> tag, we'll define some rows and columns for layout; most importantly, we'll then add a TextBox named "CustGroup" and a button to perform the search. Finally, we add a ListView named "CustomerList" with a GridView inside it. The GridView binds to "AccountNum" and "DlvMode", the two fields returned by our Linq query. The final XAML looks like this (note that if you named your project and XAML control differently, you should preserve the x:Class declaration at the top of the Window tag as you have it):

<Window x:Class="AX62Linq.CustomerSearch"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
             mc:Ignorable="d"
             Title="Customer Search"
             Width="300"
             Height="300"
             d:DesignHeight="300" d:DesignWidth="300">
    <Grid Margin="5">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="Auto" />
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        
        <TextBlock Grid.Row="0" Grid.Column="0" Text="Customer Group Search: " />
        <TextBox Grid.Row="0" Grid.Column="1" Name="CustGroup" />
        <Button Grid.Row="0" Grid.Column="2" Content="Search" Name="SearchBtn" Click="SearchBtn_Click" />

        <ListView Name="CustomerList" Grid.Row="1" Grid.ColumnSpan="3">
            <ListView.View>
                <GridView>
                    <GridViewColumn Header="Account Number" DisplayMemberBinding="{Binding AccountNum}" />
                    <GridViewColumn Header="Delivery Mode" DisplayMemberBinding="{Binding DlvMode}" />
                </GridView>
            </ListView.View>
        </ListView>
    </Grid>
</Window>


Now, note that the SearchBtn button has a click event handler called SearchBtn_Click, which we need to create. So let's move on to the code. Open up CustomerSearch.xaml.cs again. First, we'll add a method that takes a customer group string, performs the Linq query, and sets the result set as the item source for our list view (the control we named "CustomerList").
public void LoadCustomers(string customerGroup)
{
    QueryProvider provider = new AXQueryProvider(null);
    QueryCollection<CustTable> custTableCollection = new QueryCollection<CustTable>(provider);

    var customers = from c in custTableCollection where c.CustGroup == customerGroup select new { c.AccountNum, c.DlvMode };

    CustomerList.ItemsSource = customers;
}


Finally, we create another method for the button click, which must have the name we put in the XAML: "SearchBtn_Click" in the example above. All we do is grab the text from the "CustGroup" textbox and pass it into our Linq method:

private void SearchBtn_Click(object sender, RoutedEventArgs e)
{
    LoadCustomers(CustGroup.Text);
}


At this point, we're all done. You can do Build > Rebuild Solution and Build > Deploy Solution, and then we can go into AX to call our code. To speed things up, I just created a quick and dirty job:

static void Job1(Args _args)
{
    AX62Linq.CustomerSearch myWindow;

    myWindow = new AX62Linq.CustomerSearch();
    myWindow.ShowDialog(); // this waits for exit
}


If all went well, here's what you should get. I entered "10" as a filter, which should give you some customers in the standard CU6 demo data. If you want to get advanced, you can go back to Visual Studio and from your Application Explorer make the job you created your "Startup Object" (and make sure you have the debug target "Client" set on your project, as explained in the beginning). This allows you to just hit F5 from within Visual Studio, which will start AX and run the code (and lets you debug easily without manually attaching the debugger to the AX client). You can go back to my Developer Resources page and find some of the Visual Studio articles if you want to know more about those features.



OK, so what have we learned today?

The good:
Getting the Linq queries to work is pretty easy: just add the references and a proxy, and get started. Basic Linq queries work well and perform as expected. Since the result is IQueryable, you can use the Linq results as data sources for binding, etc. Although I didn't demonstrate it here, joins between tables work just as well; look at the MSDN code example if you'd like to see that.

The bad:
Some Linq query syntax is not available. For example, looking for customer account numbers that contain "abc" doesn't work: in Linq you would filter in the where clause with c.AccountNum.Contains("abc"), but that won't fly for the AX proxies. Normal filters, where a field equals a string or where a number is larger or smaller than another number, work just fine.

The ugly:
A lot of standard tables have fields and methods with the same names. This causes major issues, and you won't be able to query these fields at all unless you copy/paste the proxy code into your own CS file (and remove the proxy) and strip out all the methods that cause the issues. Of course, then you're forfeiting the benefit of a rebuild refreshing your proxy with new code and fields, so you're on your own for maintaining the proxy code.

Tuesday, August 20, 2013

XLNT - A Most "Excellent" Framework for X++

Although demonstrated and praised quite a few times by Master Obi-Wan Villadsen and his team, the XLNT framework is still a little-known framework that contains a lot of power. So much so that the static code analysis (aka "Customization Analysis") in Dynamics Lifecycle Services is almost solely based on the XLNT framework.

So what is the XLNT framework exactly? XLNT basically allows us to hook into the X++ compiler to get the parsed code and run analysis and diagnostics.
How do you get XLNT? There's no XLNT download in itself, but the XLNT framework is used by the Code Upgrade Service. You can download the Code Upgrade Service from InformationSource (you need to be a Dynamics customer or partner) and browse to the SERVICES page, or you can just click this link. When you've downloaded the package, you can open the executable with your favorite compression utility (or you can run the installer); the only files you really need for this article are:

Microsoft.Dynamics.AX.Framework.Xlnt.XppParser.dll
Microsoft.Dynamics.AX.Framework.Xlnt.XppParser.Pass2.dll

Feel free to play with the rest, but it requires models to be installed, etc., to run the code upgrade service. So for the purposes of this article, those two DLLs are all we need.

So, to make both the coding and this article a bit easier, I'm going to stay entirely in Visual Studio. XLNT does need the AX "context", so since we won't be running this in AX but entirely in .NET from Visual Studio, we'll use a business connector session to provide the context we need. I love XAML as a UI for test apps, but considering this is a blog and not everyone is familiar with XAML, I will just use a console app to avoid distracting from what we're trying to do with XLNT. Once you create your project, right-click on the project node in the Solution Explorer and select Properties.


Next, we need to change the Target framework to .NET Framework 4 (default is client profile 4.0).


Finally, the AX assemblies are built against .NET Framework 2.0. If you try to run the app as-is, you'll get an error saying that "Mixed mode assembly is built against version v2.0.50727 of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information". The "additional configuration information" has to go in the app.config of your console app. If you're making a class library for use inside the AX process, this shouldn't be an issue at all. But in a standalone app like this console app, open the app.config and add the attribute useLegacyV2RuntimeActivationPolicy="true" to the startup node, so your app.config XML file should look like this:

<?xml version="1.0"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>


Let's get down to it. I'm working in VS 2010 for this example; this would allow me to add the code to the AOT later if I wanted to use it from within AX. Create a new C# console application; I'm calling it XLNTConsoleApp. In the Solution Explorer, right-click References and first add references to the XLNT DLLs mentioned before. It doesn't matter where you have them stored; Visual Studio will copy them into your solution by default.


Additionally, we'll need to reference the business connector so that we can provide the AX session context needed for XLNT to work. Assuming you have the business connector installed (if not, please do so :-)) you can find the DLLs in your client/bin folder (default is C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin). The files you need are:
Microsoft.Dynamics.AX.ManagedInterop.dll
Microsoft.Dynamics.BusinessConnectorNet.dll


After that, we're ready to write some code. First, we'll load up a connection to AX using the business connector. As an alternative to all of this, you could just create a class library project, add it to the AOT, and then call the code from an X++ job. So, to account for both, let's try to detect whether we already have an AX session context (i.e., we're running from within AX) and, if not, create one using BC.Net. In your Program.cs, locate the static Main method and add the following code, which looks for a runtime context and checks whether you're logged in; if not, it makes a BC connection. Of course, we do some basic exception handling.

static void Main(string[] args)
{
    Microsoft.Dynamics.BusinessConnectorNet.Axapta ax = null;

    try
    {
        // Check for a RunTimeContext.Current, which will exist if we're running this
        // from within the AX process
        if (Microsoft.Dynamics.AX.ManagedInterop.RuntimeContext.Current == null
            || Microsoft.Dynamics.AX.ManagedInterop.RuntimeContext.Current.isLoggedOn() == false)
        {
            ax = new Microsoft.Dynamics.BusinessConnectorNet.Axapta();
            // Supply all nulls to use the Client Configuration settings for Business Connector
            ax.Logon(null, null, null, null);
            // alternatively, use the line below to specify a config file
            //ax.Logon(null, null, null, @"C:\path_to_my_config\configfile.axc");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message); // output the exception message
        Console.ReadLine(); // pause execution before we quit the console app
        return;
    }

    try
    {
        // [CODE WILL GO HERE]
    }
    catch(Exception ex)
    {
        Console.WriteLine(ex.Message); // output the exception message
    }

    if (ax != null)
    {
        ax.Logoff();
        ax.Dispose();
    }

    Console.ReadLine();
}


Note the line that says // [CODE WILL GO HERE]. That is where we will introduce our experiments with XLNT. For now, let's create one basic method for you to experiment with until the next article :-)

static void BasicTest(string sourceCode)
{
    ProxyMetadataProvider metaData = new Microsoft.Dynamics.AX.Framework.Xlnt.XppParser.Pass2.ProxyMetadataProvider();
    MultipassAdministrator multipassAdmin = new MultipassAdministrator(metaData);

    Method method = multipassAdmin.CompileSingleMethod(sourceCode) as Method;

    if (method != null)
    {
        foreach (Statement statement in method.Statements)
        {
            Console.WriteLine(statement.ToString());
        }
    }
}


This creates a metadata provider and a "multipass administrator", which basically lets you compile things. We give it some source code to compile, passed in as a string (note that you can also point it at AOT objects). Finally, we loop over the statements in the method. Note that statements can be nested: the Statement class is a base class for different types of statements, which all have different properties for sub-statements (e.g., the if statement, class "IfStmt", has an expression and a "consequent" property).
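This kind of nested statement tree exists in most compilers. As a rough analogy (using Python's own ast module as a stand-in, not the XLNT API), recursively walking nested statements looks like this:

```python
import ast

def walk_statements(stmts, depth=0):
    """Recursively list statement node types, indenting nested statements."""
    lines = []
    for stmt in stmts:
        lines.append("  " * depth + type(stmt).__name__)
        # e.g. an If node keeps its "consequent" statements in .body
        for field in ("body", "orelse"):
            lines.extend(walk_statements(getattr(stmt, field, []), depth + 1))
    return lines

tree = ast.parse("if True:\n    print('hello, world')")
print("\n".join(walk_statements(tree.body)))
# If
#   Expr
```

The XLNT Statement objects play the same role as Python's ast nodes here: a base class plus typed subclasses, each exposing its nested statements through its own properties.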

You can explore the statements by adjusting the sourceCode input and putting a breakpoint inside the foreach loop, for example.
Of course, you still need to actually call this new method, so in the [CODE WILL GO HERE] section you can put the following:

Program.BasicTest("void test() { if (true) { info('hello, world'); } }");


Note that a compile error will not result in an exception but rather it will just return null instead of an actual method instance.

Alright, now you're ready to do something interesting with XLNT! We'll explore more in the next article, but if you're doing some exploratory work of your own, please post in the comments!

Wednesday, July 31, 2013

Auto-Deploying DLLs and Other Resources - Part 2

In the first post of this series on deploying resources, I discussed the framework and some of its issues we'll have to deal with. In this article, we'll actually write the code to support that article.

Note that I also posted an "intermission" to that blog post based on some reader feedback. The article explains how to use a manually edited (aka hack :-)) Visual Studio project to have AX deploy resources through the AOT VS project framework. It works great, but there's always the possibility that an undocumented "feature" like that may be killed in an update.

So, back to the file deployer framework. We'll create a new class called "CodeCribDeploy" and we'll extend "SysFileDeployment".


As soon as you save the code, you'll notice 4 compile errors complaining you need to implement a few of the abstract methods:
- filename
- parmClientVersion
- parmServerVersion
- destinationPath

You can right-click the class and override each of these. They will still error out on the super() call since that would be calling an abstract method. Just get rid of the super() call for now if (like me) the errors bother you.
Let's start with the method "destinationPath". This indicates where you will store the files you're deploying, and it requires some consideration. Users may not be local admins on their machines and may not have enough privileges to put the files just anywhere. On the other hand, for DLLs you want to make sure they are in a path where AX will look to load assemblies from. As an alternative to client/bin, I like to use the same folder that AX deploys AOT VS project artifacts to, which is in the user's appdata folder as explained in this article. Feel free to change it, but for this example that's where we'll put it. So ultimately, my destinationPath method looks like this:

protected FilenameSave destinationPath()
{
    return strFmt(@'%1\%2',
        CLRInterop::getAnyTypeForObject(System.Environment::GetEnvironmentVariable('localappdata')),
        @'Microsoft\Dynamics Ax\VSAssemblies\');
}


I ask .NET for the "localappdata" environment variable and append the VSAssemblies folder. Interestingly, the sourcePath() method is not abstract and doesn't need to be overridden. Unfortunately, although it returns the path to the include folder, it runs on the client tier and so returns the wrong value. We'll therefore write a method that grabs the server include folder on the server tier, and change sourcePath to return that value. Note I'm using the server include folder (default location: C:\Program Files\Microsoft Dynamics AX\60\Server\[YOURAOSNAME]\bin\Application\Share\Include) because I think that makes sense, but feel free to change this. So this is what we're adding to our CodeCribDeploy class:

protected static server FilenameOpen serverIncludePath()
{
    return xInfo::directory(DirectoryType::Include);
}

protected FilenameOpen sourcePath()
{
    return CodeCribDeploy::serverIncludePath();
}


Next, the filename. Since there's only one filename, this implies you need a class for each file you wish to deploy. I've personally created a base class with all the overrides, and then inherit from it for each file, changing only the filename method's return value. Here, we'll just enter the filename; in this case I'll deploy "MyDLL.dll".

public Filename filename()
{
    return 'MyDLL.dll';
}


The next two methods to override are "parmClientVersion" and "parmServerVersion". Interestingly, these don't seem to be used much by the framework at all; in fact, the only references are from SysFileDeployment.getClientVersion() and SysFileDeployment.getServerVersion(), which just get the version from their parm method. The framework does call the isClientUpdated() method, but by default that only checks whether the file exists on the client side. Not helpful. So, let's implement these methods to return some actually useful version information, then fix isClientUpdated to use those versions properly. There are different things you can do here, including using the .NET framework to get actual assembly version numbers from your DLL, but we'll go with the cheap version and just check the timestamps of the files.
Note that we need to run these checks on their respective tiers, i.e., we get the server version by running code on the server tier and the client version by running the check on the client tier. Since we're just checking file properties (timestamps), we can use the WinAPIServer class on the server side. Unfortunately, that class demands FileIOPermission, which means we have to assert that permission on the server tier prior to the WinAPIServer calls. Since our class will be running client-side, we'll create a static server method which we can call from parmServerVersion.

protected server static anytype ServerFileVersion(str filename)
{
    date serverDate;
    TimeOfDay serverTime;

    new FileIOPermission(filename, 'r').assert();
    
    if (WinAPIServer::fileExists(filename))
    {
        serverDate = WinAPIServer::getFileModifiedDate(filename);
        serverTime = WinAPIServer::getFileModifiedTime(filename);
    }

    return strFmt('%1T%2', serverDate, serverTime);
}

public anytype parmServerVersion()
{
    str filename = strFmt(@'%1\%2', this.sourcePath(), this.filename());

    return CodeCribDeploy::ServerFileVersion(filename);
}

public anytype parmClientVersion()
{
    str filename = strFmt(@'%1\%2', this.destinationPath(), this.filename());
    date clientDate;
    TimeOfDay clientTime;

    if (WinAPI::fileExists(filename))
    {
        clientDate = WinAPI::getFileModifiedDate(filename);
        clientTime = WinAPI::getFileModifiedTime(filename);
    }
    
    return strFmt('%1T%2', clientDate, clientTime);
}


So now we'll override the "isClientUpdated" method to actually perform a version check:

public boolean isClientUpdated()
{
    return this.parmClientVersion() == this.parmServerVersion();
}


Note that here I'm checking whether the client and server versions are equal. So if the server version is older, this will return false and prompt the client to download the older version. That may or may not be what you want.
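The timestamp-as-version pattern itself is easy to sanity-check outside AX. Here's a small Python sketch of the same logic (the file names and helper names are mine for illustration, not the SysFileDeployment API):

```python
import os
import shutil
import tempfile

def file_version(path):
    # cheap "version": last-modified timestamp, or 0 if the file is missing
    return os.path.getmtime(path) if os.path.exists(path) else 0

def is_client_updated(server_file, client_file):
    # equality, not >=: an *older* server file also triggers a re-deploy
    return file_version(client_file) == file_version(server_file)

server_dir, client_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
server = os.path.join(server_dir, "MyDLL.dll")
client = os.path.join(client_dir, "MyDLL.dll")

with open(server, "w") as f:
    f.write("binary payload")

print(is_client_updated(server, client))  # False: client copy missing
shutil.copy2(server, client)              # copy2 preserves the timestamp
print(is_client_updated(server, client))  # True: versions now match
```

Note that a plain copy (without preserving the modified time) would make the timestamps differ again, so whatever actually deploys the file must keep the timestamp intact for this comparison to hold.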

We also need to make sure the framework picks up on our file to be "checked". It unfortunately doesn't look at subclasses of the base class to determine that automatically; you're supposed to add your class number to the return value of the filesToDeploy() method. If you're reading this and want to implement it for AX 2009, you need to over-layer that method and add your class. If you're on 2012, you have a better option: events!
Right-click on your CodeCribDeploy class and click New > Pre- or post-event handler. Rename the new method to "filesToDeployHandler". We'll get the method's return value, add our class ID to the container, and set the return value back.

public static void filesToDeployHandler(XppPrePostArgs _args)
{
    container filesToDeploy = _args.getReturnValue();

    filesToDeploy += classNum(CodeCribDeploy);

    _args.setReturnValue(filesToDeploy);
}


Finally, we just drag&drop this new method onto the filesToDeploy method of the SysFileDeployer class. Make sure to give the new subscription a meaningful and unique name (or you'll defeat the whole purpose of using events in the first place). Also make sure to set the CalledWhen property of the event subscription (right-click your new subscription node, select Properties) to "Post".


Great, all set, right?! Well, there's one more "fix" we have to perform, as discussed, to make sure our file versions are always checked. To do this, either change the code in the "parmUpToDate" method to always return false, or if you're on AX 2012, again you can use events. By making parmUpToDate return false we force AX to check the versions, as it should. This can be as easy as adding another pre/post handler as we did before, and changing the return value to false.

public static void parmUpToDateHandler(XppPrePostArgs _args)
{
    _args.setReturnValue(false);
}


And obviously we need to drag&drop this onto the parmUpToDate method of the SysFileDeployer class, and set the CalledWhen property to Post.


Make sure to save the whole lot.
Now, when you open a new AX client, you should get the following dialog:


If you don't see it, make sure you put your DLL to be deployed in the right folder, for the right AOS. Yeah, that's what I did.

Tuesday, July 30, 2013

Custom Query Range Functions using SysQueryRangeUtil

You've probably seen these requests before. Users want to submit some report or other functionality to batch, and the query should always be run for "yesterday". It's a typical example where, as a user, it would be handy to be able to use functions in your query range. Well, you can. And in fact, you can make your own, very easily!

Enter the class "SysQueryRangeUtil". All it contains is a bunch of public static methods that return query range values. For example, there is a method called "day" which accepts an optional integer called "relative days". So, in our example of needing a range value of "yesterday" regardless of when the query is executed, you could use day(-1) as a function. How do you use this in a range? Just open the advanced query window and enter your function call within parentheses.

Let's make our own method as an example. Add a new method to the SysQueryRangeUtil class, and enter the following, most interesting code you've ever encountered.

public static str customerTest(int _choice = 1)
{
    AccountNum accountNum;
    
    switch(_choice)
    {
        case 1:
            accountNum = '1101';
            break;
        case 2:
            accountNum = '1102';
            break;
    }
    
    return accountNum;
}


So, this accepts an optional parameter for the choice. If choice is 1 (or not specified), the function returns account 1101; if it's 2, it returns 1102. Save this method and open a table browser window on the CustTable table. Press CTRL+G to open the grid filters. In the filter field for the AccountNum field, enter: (customerTest(1)).


So, the string returned from the method is put directly into the range. You could do all sorts of interesting things with this, of course. Check out some of the methods in the SysQueryRangeUtil class as examples.
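For what it's worth, the same range expression works on a query built in code - here's a hedged X++ sketch, assuming the customerTest method above has been added to SysQueryRangeUtil:

```x++
// Sketch: using the range function on a query built in code.
// The surrounding parentheses tell AX to evaluate the expression
// through SysQueryRangeUtil instead of treating it as a literal value.
Query                   query = new Query();
QueryBuildDataSource    qbds  = query.addDataSource(tableNum(CustTable));

qbds.addRange(fieldNum(CustTable, AccountNum)).value('(customerTest(1))');
```

Running this through a QueryRun should then filter CustTable to account 1101, just like the table browser filter does.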

Thursday, July 18, 2013

Auto-Deploying DLLs and Other Resources - Intermission

I posted part 1 of the auto-deploying DLLs and other resources article last month. Although I will finish the part 2 article as promised, an interesting comment and subsequent email discussion / testing has prompted me to include this "intermission".

The deployment framework has existed throughout quite a few versions of AX. When AX 2012 was released and we were all drooling over the Visual Studio projects in the AOT, one thing became clear: DLLs referenced within a project are not deployed like the DLL built from the project itself. I tried quite a few options in the properties of the references to get the DLLs copied to the output folder, but nothing worked. Additionally, deploying other files from your project (images etc.) doesn't work either.
But one attentive reader of this blog, Kyle Wascher, pointed out a way to edit your Visual Studio project file to have it deploy files to the output folder. Interestingly, AX honors these settings, as opposed to the regular properties in the VS project. So, here's how you do it!


First, let's create a new project in Visual Studio 2010. I'm choosing the Class Library project type, and I'm naming it "DeploymentProject".



Once created, right-click the new project and select "Add DeploymentProject to AOT".



Right-click on your project and select "Properties". Make sure to set "Deploy to client" (or deploy to server or EP, or all of them, depending on your scenario). For this test I'll just set Deploy to client to YES.



Of course, we need a DLL to deploy. I'm going to create a new project/solution, but of course that is NOT a requirement; you can pick any DLL you have created anywhere or downloaded from third parties. Shift-click on Visual Studio in your Windows taskbar to start another instance of Visual Studio. Create a new project; again I pick the Class Library project type, and I'm naming it "MyDLL". After this, my project looks like this. Again, creating this new project is just an illustration of a DLL we'll deploy; it's not needed to make this work. As an illustration for later, MyDLL contains a public class MyClass with a public static method "Message" that returns the string "Hello, world!". Since the code is irrelevant I'm just putting a screenshot up here. On a side note, it seems that if you create another project within the solution that contains the AX VS project, the new project will also be added to the AOT, which of course defeats what we are trying to do here.




Make sure to build this DLL so we can use it.

Ok, so now there are two ways to have the DLL be part of your project. One, you add it as an actual reference. Or two, you just add it to your project as a regular file. In this example, I will add the DLL as a reference in the project. This will allow me to actually also use the DLL in the project itself, which I will use further down as an example. This is also the most common scenario where one needs to deploy an extra DLL.
So, go back to your AX VS Project "DeploymentProject", right click the references node in your deployment project, and click "Add reference". On the "Add Reference" dialog click the Browse tab and navigate to the MyDLL.dll we built in the other project. You'll find that DLL file in your project's folder under bin/debug or bin/release depending on which configuration you used to build.




Ok, open the File menu and select "Save all" to make sure we've saved our project. Time to get our hands dirty and "hack" the Visual Studio project :-) Right-click on your project and select "Open folder in Windows Explorer" (or manually browse to your project folder). Find your .csproj file (in my case it's DeploymentProject.csproj) and open it in Notepad or your favorite text editor. (Depending on your OS you may or may not have an "Open with" option; you may have to right-click or shift-right-click, it all depends... if all else fails, just open Notepad and open the file from within Notepad.) Find the XML nodes called ItemGroup and add your own ItemGroup as follows:
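The screenshot of the edited project file didn't survive in this copy. Reconstructed from the description (VSProjectOutputFiles is the item name mentioned later in this post, and $(TargetDir) is the path discussed next; the exact shape is a best guess), the added ItemGroup looks something like this:

```xml
<ItemGroup>
  <VSProjectOutputFiles Include="$(TargetDir)MyDLL.dll" />
</ItemGroup>
```

AX's project import appears to honor VSProjectOutputFiles items when deploying, which is what the rest of this walkthrough relies on.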



A few things to note there. By using $(TargetDir) as the path, we're telling Visual Studio/AX to find our extra DLL in the folder where this current project's DLL is being built. This is important, since it makes sure that wherever the project is compiled, MyDLL.dll will always be found correctly. By default, when you add a reference, VS sets the "Copy Local" flag to yes, which makes sure the referenced DLL is available there. Save the .csproj file and switch back to Visual Studio. You should get the following dialog:



This is exactly what we need. Click "Reload". Now, we've edited the VS project on disk but it's not in the AOT yet. Normally, VS updates the AOT any time you click save. Unfortunately, in this case you just reloaded the project from disk so VS doesn't actually do anything when you hit save, as it doesn't think you've changed anything. So, the easiest thing you can do is just click the "Add DeploymentProject to AOT" option again, as we did in the beginning. This will force an entire update of the project to the AOT.
Ok, so now (remember I had "Deploy to Client" set to yes) I will just open the AX client. And as explained in my article on VS Project DLL deployment, you should see both the DeploymentProject.dll and the MyDLL.dll files appearing in your Users\youruser\AppData\Local\Microsoft\Dynamics Ax\VSAssemblies.

Now, as for using the code in MyDLL: remember we added a class and method there. The DLL is referenced in the VS project, but not in the AOT. So, code inside DeploymentProject.dll can use that class, but your X++ can only use the code from DeploymentProject.dll. If you need access to the MyDLL.dll code from X++, you still need to manually add a reference to that DLL in the AOT. At that point you point it to the location of the DLL, but at runtime (or after you deploy the code to a test or production environment using models) AX will just try to load the DLL from its known search paths, which include the VSAssemblies folder in your AppData. So as long as you include the AOT reference as part of your model, this trick will work everywhere.

As a final note, you can use this to deploy ANY file. You can right-click your project, select Add > Existing Item, and select a JPG file, for example. In the properties of the file, make sure to set the "Copy to Output Directory" flag to "Copy always". Then just add another VSProjectOutputFiles node in your .csproj file.

Friday, June 28, 2013

R2 Hotfix for Compile Time needs Schema Update

Just a quick note on the hotfix that was released quite a while ago to improve compile times on R2. Many blogs, including the official Microsoft one, linked directly to the hotfix, and many people installed it immediately with no result. What many people don't seem to know (and honestly, in my own haste to try it out, I did the same thing at first) is that you need to update your model store schema to benefit from the improvements, which include new indexes in the model store.
So, if you have installed the hotfix (KB2844240), make sure to run "axutil schema" on the model store to actually make the changes take effect!
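As a hedged example (the exact connection parameters depend on your environment; check axutil's help for the options your version supports), the schema update would look something like:

```shell
axutil schema /s:<servername> /db:<modelstoredatabase>
```

Run this against the model store database of the environment you patched, then restart the AOS.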

Thursday, June 20, 2013

Dynamics AX Admin Tools - CodeCrib.AX.Config

Yesterday I released a code library and a wrapper PowerShell cmdlet library to automate installs of AX and maintain client and server configurations for AX. I also blogged an example installer script for an automated install of an AX 2012 AOS.
The download links for both libraries are:

CodeCrib.AX.Setup
CodeCrib.AX.Config

Today I will give you an example of the Config library and how we're using it. You can find reference documentation of the commands for the Config library here.

The point of the config library is to create and maintain configuration files. For example, when we auto-deploy an AOS for development, we can run a script that will change the AOS to have debugging and hot-swapping enabled. For the client side, we can generate client configuration files to log into the development workspace and in the correct layer. Both server and client config objects expose all the properties that you see on the configuration utilities. Before anyone comments, the big missing piece here is the "Refresh configuration" that exists on the client configuration utility. I'm working on finding out how to get that configuration easily.

So this one script takes care of both AOS and client configs. The first part of the script takes in the PowerShell parameters and loads the library. Next, it gets the active configuration (after an initial install this is the "original" configuration). It changes the configuration name to equal the AOS name (I like this as a convention on our VMs), enables breakpoints on the server, enables hot-swapping, and saves the configuration back (by piping the config object into the Save-ServerConfiguration cmdlet). Finally, it uses Set-ServerConfiguration to make that new configuration the active one for our AOS instance.
Param(
    [parameter(mandatory=$true)][string]$instancename,
    [parameter(mandatory=$true)][string]$VARcode,
    [parameter(mandatory=$true)][string]$CUScode,
    [parameter(mandatory=$true)][string]$configfolder
    )
import-module ((Get-Location).Path + "\CodeCrib.AX.Config.PowerShell.dll")

$config = Get-ServerConfiguration -aosname $instancename -active
$config.Configuration=$instancename
$config.BreakpointsOnServer=1
$config.HotSwapping=1
$config | Save-ServerConfiguration -aosname $instancename
Set-ServerConfiguration -aosname $instancename -config $config.Configuration


Next, we move on to the client configuration. Just like the server configuration, initially you are stuck with the "original" configuration. We just retrieve that one (it's the active one), set user and global breakpoints, and save out the config three times (for three layers: USR, VAR, CUS).
After that we repeat the process but we add the -Development startup command and create config files for each layer to log into the development workspace.
$config = Get-ClientConfiguration -active
$config.UserBreakPoints=1
$config.GlobalBreakPoints=1

$config.Layer="usr"
$config.LayerCode=""
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_usr.axc")

$config.Layer="var"
$config.LayerCode=$VARcode
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_var.axc")

$config.Layer="cus"
$config.LayerCode=$CUScode
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_cus.axc")


$config.KernelStartupCommand = "-Development"

$config.Layer="usr"
$config.LayerCode=""
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_usr_Development.axc")

$config.Layer="var"
$config.LayerCode=$VARcode
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_var_Development.axc")

$config.Layer="cus"
$config.LayerCode=$CUScode
$config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_cus_Development.axc")


We can probably shorten this into a loop of sorts, but this is easy to read and understand at this point.
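For what it's worth, a hedged sketch of what that loop could look like (assuming the same $config, $VARcode, $CUScode, $configfolder and $instancename variables from the script above):

```powershell
# Sketch: write the three layer configs (and their -Development variants) in a loop
$layers = @{ usr = ""; var = $VARcode; cus = $CUScode }

foreach ($suffix in @("", "_Development"))
{
    # second pass adds the development workspace startup command
    if ($suffix -eq "_Development") { $config.KernelStartupCommand = "-Development" }

    foreach ($layer in $layers.Keys)
    {
        $config.Layer = $layer
        $config.LayerCode = $layers[$layer]
        $config | Save-ClientConfiguration -filename ($configfolder + "\" + $instancename + "_" + $layer + $suffix + ".axc")
    }
}
```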



Bonus round:

You could ask: how about actually creating a shortcut that starts AX and passes the config file? I haven't worked out that code yet (I'll leave it as "an exercise for the reader" :-) but basically you can use WScript.Shell for that. I haven't gotten past one issue (just haven't had the time): the TargetPath property validates the path's existence. If you add the configuration file as a parameter in there, it fails to validate the whole string (including config file) as a valid target path. Either way, you can play with this; the following PowerShell script is where I left it last time I considered it:
$shell = New-Object -COM WScript.Shell
$shortcut = $shell.CreateShortcut("C:\Users\Public\Desktop\Powershell Link.lnk")
#$shortcut.TargetPath=('"c:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\Ax32.exe" "' + $configfolder + "\" + $instancename + '_cus_Development.axc"')
$shortcut.TargetPath="c:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\Ax32.exe"
$shortcut.WorkingDirectory="c:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\"
$shortcut.Description="AX link with config file"
$shortcut.Save()

Note how the commented line causes the error. So this will now create a shortcut to AX without the config file. I'll let you know when I figure this out :-)
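One avenue I'd try (untested here, so treat this as an assumption): the shortcut object WScript.Shell creates exposes a separate Arguments property, so the config file may not need to be part of TargetPath at all:

```powershell
# Sketch: keep TargetPath a plain exe path and pass the .axc file via Arguments
$shortcut.TargetPath = "c:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\Ax32.exe"
$shortcut.Arguments = '"' + $configfolder + "\" + $instancename + '_cus_Development.axc"'
```

That way TargetPath only ever has to validate the executable's path.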


For now, that's it on the admin tools. I'm actively working on this code base, so expect more updates in the next weeks!

Wednesday, June 19, 2013

Dynamics AX Admin Tools - CodeCrib.AX.Setup

Long overdue for release, I'm glad to announce the first beta of my admin tools. These tools are still a work in progress, but you can start taking advantage of these right away. As you probably know, we have open sourced our TFS build scripts for Dynamics AX, and ever since these were released I've received quite a few emails and messages from people asking how to automate deployment etc outside of TFS. Obviously we do some of that already inside the build scripts, and there's some code sharing that can be done. Additionally, we've been exploring SCVMM (System Center Virtual Machine Manager) for which we would like to automate a lot of things (such as installing AX, deploying code, etc). So, in an effort to refactor and support TFS builds as well as automated scripts or even your own tools (UI?), I embarked on a mission to create a set of admin tools. This first beta release features less than half of the final product, but it's a good start and it's what we've been using for SCVMM so far (more on that in another post).

So, today's release includes a code library (which you can use to create your own tools) and a wrapper PowerShell cmdlet library to automate installs of AX and maintain client and server configurations for AX. The downloads are:

CodeCrib.AX.Setup
CodeCrib.AX.Config


Today I will give you an example of the Setup library and how we're using it. You can find reference documentation of the commands for the Setup library here.



Dynamics AX has always allowed silent installs using parameter files which you can pass to the setup executable. For our VMM setup I wanted to make this even more generic, and needed some good tools to support parameterized, automated installs. Additionally, the log file generated during an install of AX actually leaves you with most of the parameters you used (the exception being that passwords are not stored in the log file).
All of this is captured in the library CodeCrib.AX.Setup and the PowerShell cmdlets CodeCrib.AX.Setup.PowerShell. The download also contains a sample UI which lets you load a log file and write it out as a parameter file, or load a parameter file and manipulate it. Note that the UI is just an example of how to use the class library in your own projects; I'm not planning on maintaining it much, but will instead focus on the library and PowerShell cmdlets. The following is an example of the PowerShell script we currently use for installing an AOS:
Param(
    [parameter(mandatory=$true)][string]$setuppath,
    [parameter(mandatory=$true)][string]$databaseserver,
    [parameter(mandatory=$true)][string]$instancename,
    [parameter(mandatory=$true)][string]$aosaccount,
    [parameter(mandatory=$true)][string]$aosaccountpw
    )
import-module ((Get-Location).Path + "\CodeCrib.AX.Setup.PowerShell.dll")

$env:instance = $instancename

$setupparams = get-parameters -filename ((Get-Location).Path + "\AX2012 Server.txt")
$setupparams | set-parameter -environmentvariables

$setupparams | set-parameter -parameter "DbSqlServer" -value $databaseserver
$setupparams | set-parameter -parameter "AosAccount" -value $aosaccount
$setupparams | set-parameter -parameter "AosAccountPassword" -value $aosaccountpw

$setupparams | start-axsetup -setuppath $setuppath


Basically, the PowerShell script accepts some basic information such as the path to the setup executable, the SQL Server name, a name for the new AOS instance (which is reused as the name of the database, assuming you follow convention and want to keep those the same), and the account and password to use for the AOS service. Obviously this is abbreviated, and it's specific to just installing an AOS. I will post more examples in future posts.
But basically, this loads the PowerShell cmdlets, loads the parameter file (AX2012 Server.txt), replaces the %instance% environment variable, sets the database / AOS account / password in the parameter object, and starts the AX setup.

Tomorrow I will show you an example PowerShell script for the CodeCrib.AX.Config.PowerShell library, to create some standard configuration files to get into layers, development workspace, etc. Enjoy!

Tuesday, May 28, 2013

Auto-Deploying DLLs and Other Resources - Part 1

In my article on .NET Assembly Deployment in AX 2012 we reviewed how assemblies are deployed for Visual Studio projects as well as the CIL generated from X++. However, there are several scenarios where you may want to deploy files outside of that. For example, you are releasing a DLL but don't want to provide the source, in which case you can't add the VS project to the AOT. Other scenarios involve files not related to code execution, for example icons or other resource files. In this article we'll look at a framework in AX that supports this, and which has actually existed for multiple versions of AX already: SysFileDeployer.

Let's start with a scenario. We have a .NET assembly (DLL) we need for use on the client side. We could optionally copy this DLL file into every user's client/bin folder, but that's not very convenient. If we need to make an update, we'll need to update all the clients as well. So, we want to auto-deploy these files to the client. Additionally, the question is WHERE do we put the files on the client side? Putting it in the client/bin would be one option, but there's a few potential issues. For example, what if the user doesn't have write privileges to that folder? (it's in program files after all). For auto-deploying VS projects, AX has created a VSAssemblies folder in each user's directory, and AX actually looks there to load DLL files. So we can exploit that and put our DLLs there as well. I'll go with that in this example, but of course you're free to do what you want.
Second decision is, where do we put the files to begin with? The best way in my opinion is the share/include folder on the AOS. Each AOS bin directory has an Application\Share\Include folder which already contains some images and other things to be shared. For example, my default AOS "AX60" has those files in C:\Program Files\Microsoft Dynamics AX\60\Server\AX60\bin\Application\Share\Include . We'll have the AOS load the files from there, and transfer them to the user's AppData\Local\Microsoft\Dynamics AX\VSAssemblies folder.

To start off, I'll create a new X++ project called FileDeployer and add some of the existing AX classes in there. I'll add classes SysFileDeployment, SysFileDeploymentDLL, SysFileDeploymentFile and SysFileDeployer.



Now, if we debug this framework (for example, put a breakpoint in the main method of the SysFileDeployer class and restart your AX client) we can figure out how it works. Unfortunately, you'll soon find that this framework has an issue right from the start - but of course nothing we can't fix. Anyway, the SysFileDeployer class has a static method called "filesAndVersions" which gets a list of classes (that have to inherit from SysFileDeployment) telling the framework which files we wish to deploy. Obviously that will be the first thing we need to customize. Next, it loops over that list of classes, instantiates each class, and calls the "getServerVersion" method. The end result is the list of classes with each file's version on the server side. This method is called from the "isUpToDate" method on the file deployer class, which then creates an instance of each class again - this time on the client side - sets the server version it got earlier, and calls the "isClientUpdated" method. The idea is that isClientUpdated checks the version on the client and compares it with the server version that was retrieved earlier. It all makes sense. Then, from the main method in the file deployer, it calls the run method on each file deployment class if it determined any file was out of date.
So, a few issues here. One, if one file needs to be updated, it downloads all of them. I don't think that's a big issue considering these files are typically not large (and if they are, you may want to reconsider how you're deploying them). The biggest issue, though, is the check of the parmUpToDate() method in that main method. It's basically checking a stored flag from SysLastValue. Any time files are updated, that flag is set to true and stored for next time. Unfortunately, the check for that flag in main() is at the beginning of the IF statement, meaning this thing will only run once in its lifetime and then never again. Without customizing this framework, the easiest thing I could think of to get around this (in AX 2012 anyway; you're stuck with customizing in AX 2009) is to add our "is updated" logic as handlers on the parmUpToDate method and change the return value if we need to update.
If anyone has any better ideas or solutions to this issue, please let me know (put in comments or contact me).

Alright, in the next article we'll start the code.

Tuesday, April 30, 2013

Mixing Dynamic and Static Queries with System Services in AX 2012

In the "old" blog post about using WPF connected to the query service in AX, we talked about consuming a static query and displaying the data set in a WPF grid. The goal there was to be able to whip this up very quickly (10 minutes?!) and I think that worked pretty well.
In this post I'd like to dig a bit deeper. I've received some emails and messages asking for examples on how to use the dynamic queries. Well, I will do you one better and show you how to use both interchangeably. We will get a static query from the AOT, change a few things on it, and then execute it. All with the standard system services!

So, we will first use the metadata service to retrieve the AOT query from AX. In this example, I will be using Visual Studio 2012, but you should be able to run through this using Visual Studio 2010 just fine. We start by creating a new Console Application project. Again, we'll be focused on using the system services here, but feel free to use a WPF app instead of a console app (you can merge with my previous article for example). I'm in Visual Studio 2012 so I'm using .NET 4.5, but you can use .NET 4.0 or 3.5 as well.



Next, right-click on the References node in your solution explorer and select Add Service Reference.



This will bring up a dialog where you can enter the URL of your metadata service. Enter the URL and press the Go button. This will connect and grab the WSDL for the metadata service. Enter a namespace for the service reference proxies (I named it "AX" - don't add "MetaData" to the name; you'll soon find out why). I also like to go into Advanced and change the Collection Type to List. I love Linq (which also works on arrays, though) and lists are just nicer to work with, I find. Click OK on the advanced dialog and OK on the service reference dialog to create the reference and proxies.




Ok, now we are ready to code! We'll get metadata for a query (we'll use the AOT query "CustTransOpen" as an example) and print the list of datasources in this query, and print how many field ranges each datasource has. This is just to make sure our code is working.

static AX.QueryMetadata GetQuery(string name)
{
    AX.AxMetadataServiceClient metaDataClient = new AX.AxMetadataServiceClient();

    List<string> queryNames = new List<string>();
    queryNames.Add(name);

    var queryMetaData = metaDataClient.GetQueryMetadataByName(queryNames);

    if (queryMetaData != null && queryMetaData.Count() > 0)
        return queryMetaData[0];

    return null;
}

Very simple code: we create an instance of the AxMetadataServiceClient and call the GetQueryMetadataByName operation on it. Note that we have to wrap our query's name in a list of strings, because we can fetch metadata for multiple queries at once. Similarly, we have to convert the returned list back into one query metadata object (assuming we got one); we return null if we didn't get anything back. If you left the service reference Collection Type as Array, either change this code to create an array of strings for the query names instead of a List, or right-click the service reference, select "Configure Service Reference" and change the Collection Type to List at this point.
We'll make a recursive method to traverse the datasources and their children, and print out the ranges each datasource has, like so:
static void PrintDatasourceRanges(AX.QueryDataSourceMetadata datasource)
{
    Console.WriteLine(string.Format("{0} has {1} ranges", datasource.Name, datasource.Ranges.Count()));
    foreach (var childDatasource in datasource.DataSources)
    {
        PrintDatasourceRanges(childDatasource);
    }
}

I'm using a console application so I'm using Console.WriteLine, and I have a Main method for the rest of my code. If you're doing a WPF app, you may want to consider outputting to a textbox, and adding the following code somewhere it's relevant to you, for example under the clicked event of a button. Here we call our GetQuery method, and then call the PrintDatasourceRanges for each datasource.

static void Main(string[] args)
{
    AX.QueryMetadata query = GetQuery("CustTransOpen");

    if (query != null)
    {
        foreach (var datasource in query.DataSources)
        {
            PrintDatasourceRanges(datasource);
        }
    }

    Console.ReadLine();
}

Note the Console.ReadLine at the end, which prevents the console app from closing until we press the ENTER key. When we run this project, here's the output:



Ok, so we're getting the query's metadata. Note that the classes used here (QueryMetadata, QueryMetadataRange, etc.) are the exact same classes the query service accepts. However, if we add a new service reference for the query service, Visual Studio will ask for a new namespace and will not re-use the proxy objects already created for the metadata service. And if we give it a new namespace, we can't pass the query object received from the metadata service back into the query service. Of course, I wouldn't bring this up if there wasn't a solution!
In your solution explorer, right-click on your project and select "Open Folder in File Explorer".



In the Explorer window, there will be a folder called "Service References". Inside you'll find a sub-folder named after the namespace you gave your service reference - in my case "AX". The folder contains XML schemas (.xsd), datasource files, the C# files with the proxy code, etc. One particular file is of interest to us: Reference.svcmap. This file contains the URL for the service, the advanced settings for the proxy generation, etc. (you can open it with Notepad; it's an XML file). The node called MetadataSources contains only one subnode, with the service URL. If we add a second subnode referencing our second URL, we can regenerate the proxies for both URLs within the same service reference, effectively forcing Visual Studio to reuse the proxies across the two services. So, let's change the XML file as follows. Note that XML is case sensitive and the tags must match, so make sure you have no typos. Also make sure to increment the SourceId attribute.

Original:
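The screenshot is missing in this copy; reconstructed from the description, the original node looks something like this (the Address value is hypothetical - yours will show the metadata service URL you entered, and the exact attributes may differ between Visual Studio versions):

```xml
<MetadataSources>
  <MetadataSource Address="http://yourserver:8101/DynamicsAx/Services/MetadataService" Protocol="mex" SourceId="1" />
</MetadataSources>
```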



New:
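And the edited version, with a second MetadataSource added for the query service (again, hypothetical addresses; note the incremented SourceId):

```xml
<MetadataSources>
  <MetadataSource Address="http://yourserver:8101/DynamicsAx/Services/MetadataService" Protocol="mex" SourceId="1" />
  <MetadataSource Address="http://yourserver:8101/DynamicsAx/Services/QueryService" Protocol="mex" SourceId="2" />
</MetadataSources>
```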



Again, I can't stress enough, don't make typos, and make sure you use upper and lower case correctly as shown. Now, save the Reference.svcmap file and close it. Back in Visual Studio, right-click your original service reference, and click "Update Service Reference".



FYI, if you select "Configure Service Reference" you'll notice that, compared to when we opened this dialog from the Advanced button while adding the reference, there is now a new field at the top that says "Address: Multiple addresses (editable in .svcmap file)".

If you made no typos, your proxies will be updated and you are now the proud owner of a service reference for metadata service and query service, sharing the same proxies (basically, one service reference with two URLs). First, let's create a method to execute a query.
static System.Data.DataSet ExecuteQuery(AX.QueryMetadata query)
{
    AX.QueryServiceClient queryClient = new AX.QueryServiceClient();

    AX.Paging paging = new AX.PositionBasedPaging() { StartingPosition = 1, NumberOfRecordsToFetch = 5 };

    return queryClient.ExecuteQuery(query, ref paging);
}

Note that I use PositionBasedPaging to fetch only the first 5 records. You can play around with the paging; there are different types of paging you can apply.

So now for the point of this whole article. We will change our Main method to fetch the query from the AOT, then execute it. For good measure, we'll check if there is already a range on the AccountNum field on CustTable, and if so, set it. Here I'm doing a little Linq trickery: I select the first (or default, meaning it returns null if it can't find it) range with name "AccountNum". If a range is found, I set its value to "2014" (a customer ID in my demo data set). Finally, I execute the query and output the returned dataset's XML to the console.
static void Main(string[] args)
{
    AX.QueryMetadata query = GetQuery("CustTransOpen");

    if (query != null)
    {
        var range = (from r in query.DataSources[0].Ranges where r.Name == "AccountNum" select r).FirstOrDefault();
        if (range != null)
        {
            range.Value = "2014";
        }

        System.Data.DataSet dataSet = ExecuteQuery(query);
        Console.WriteLine(dataSet.GetXml());
    }

    Console.ReadLine();
}

And there you have it. We retrieved a query from the AOT, modified it by setting one of its range values, then executed it. Anything goes here: the metadata you retrieve can be manipulated just like you would manipulate a Query object in X++. You can add more datasources, remove datasources, etc.
For example, before executing the query, we can remove all datasources except "CustTable". Make sure to also clear the order-by fields, since they may reference the removed datasources. Again, some Linq trickery achieves that goal.
// Delete child datasources of our first datasource (CustTable)
query.DataSources[0].DataSources.Clear();
// Remove all order by fields that are not for the CustTable datasource
query.OrderByFields.RemoveAll(f => f.DataSource != "CustTable");
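Incidentally, if the FirstOrDefault call earlier comes back null because no "AccountNum" range exists yet, you can add one instead of skipping the assignment. A hedged sketch - this assumes the generated proxy exposes QueryMetadataRange (the class name mentioned at the start of this post) as a constructible type, and that Ranges was generated as a mutable collection (a List, depending on your proxy collection settings):

```csharp
var range = (from r in query.DataSources[0].Ranges
             where r.Name == "AccountNum"
             select r).FirstOrDefault();
if (range == null)
{
    // Assumption: the proxy-generated range type can be created and added client-side
    range = new AX.QueryMetadataRange { Name = "AccountNum" };
    query.DataSources[0].Ranges.Add(range);
}
range.Value = "2014";
```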

Thursday, April 25, 2013

Dynamics AX 2012 Compile Times

As I'm sure most of you are aware, compile times in Dynamics AX 2012 are a concern with more functionality being added. Especially on our build environments, which are (non-optimized) virtual machines, we are looking at around 3 hours for AX 2012 RTM/FPK and around 5 hours for R2. There have been discussions on the official Microsoft Dynamics AX Forums about this very topic, and there seem to be huge differences in experiences of compile times. After a lot of discussion with other people on the forums, and consequent chats with Microsoft people that are "in the know", I think it's pretty clear which areas one needs to focus on to optimize compile times.

1) The AX compiler was originally built when multi-core hardware wasn't a consideration. As a result, you've probably noticed a compile only uses one thread. With today's trend of more cores at lower clock speeds, an "older" machine (CPU type) may perform better than a new one, and a desktop machine may perform better than a high-end server.
2) The communication between the AOS and the model store is critical: the AOS reads the source code from the model store, compiles it, and puts the binaries back into the model store.
3) The model store lives in SQL Server, so SQL has to perform optimally.

To this end, I set out to build one of our customer's code bases (AX 2012 RTM CU3, no feature pack) on an "experimental" build machine. This code base has consistently taken 3 to 3.2 hours to compile on our virtual AOS connected to physical SQL.


The new setup? A Dell Latitude E6520 laptop:

* Core i7-2760QM CPU @ 2.4GHz, 4 Cores, 8 Logical Processors
* 8 GB memory
* High performance SSD (Samsung 840 Pro), 256GB
* Windows Server 2012, SQL 2012, AX 2012 RTM CU4

Besides this hardware (2.4 GHz clock speed - the number of cores doesn't matter for the single-threaded compile - and an SSD to maximize SQL throughput), the key elements of our setup are putting the AOS and SQL Server on the same machine, and disabling TCP/IP in the SQL Server protocols so that it uses shared memory instead. This is the least overhead you can possibly get between the AOS and SQL.
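To confirm SQL Server is actually using shared memory, you can check the transport of a connection via the standard sys.dm_exec_connections DMV (run this from a connection made the same way the AOS connects, i.e. locally):

```sql
-- net_transport reports 'Shared memory' when the shared memory protocol is in use
SELECT session_id, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```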

The difference in compile time is staggering. I actually ran it multiple times because I thought I had done something wrong. However, since this is an automated build using TFS, I know the steps and code and everything else is EXACTLY the same by definition. So.... drumroll!
(Note I didn't list some extra steps being done in the automated build explicitly so that's why it may not seem to add up...)


Step                                         Old Build Server   New Build Server
Remove old models                            00:00:27           00:00:03
Start AOS                                    00:01:26           00:00:25
Synchronize (remove old artifacts from DB)   00:06:52           00:05:57
Import XPOs from TFS                         00:13:17           00:03:55
Import VS Projects                           00:00:29           00:00:11
Import Labels                                00:00:22           00:00:08
Synchronize (with new data model)            00:05:42           00:01:55
X++ Compile                                  02:29:36           00:41:28
CIL Generation                               00:13:41           00:05:29
Stop AOS                                     00:00:10           00:00:03
Export Built Model                           00:00:42           00:00:12
Total Build Time                             03:14:43           01:00:59

So yes, the compile time got down to 41 minutes! We've actually switched to using this machine more or less permanently for a few customers, and we'll be switching more. Now I need another machine for R2 compiles :-) I will post the compile times for R2 when I get to those.

Happy optimizing! :-)

Monday, April 15, 2013

Exception Handling in Dynamics AX

Exception handling in Dynamics AX is a topic that is not discussed too often. I figured I would provide a quick musing about some of my favorite exception handling topics.

Database Transactions
Exception handling while working with database transactions is different, and unfortunately not a lot of people realize this. Most exceptions cannot be caught within a transaction scope. If you have a try-catch block within the ttsBegin/ttsCommit scope, your catch will not be used for most types of exceptions thrown. What happens instead is that AX automatically causes a ttsAbort() call, then looks for a catch block outside of the transaction scope and executes that if there is one.

There are, however, two exception types you CAN catch inside a transaction scope: Update Conflict and Duplicate Key (so don't believe what this MSDN article says). The reason is that this allows you to fix the data issue and retry the operation. You see this pattern in AX every now and then: you have a maximum retry count for these exceptions, after which you throw the "Not Recovered" version of the exception.

The job below shows a generic X++ script that loops through each exception type (defined by the Exception enumeration), throws it, and tries to catch it inside a transaction scope. The output shows whether each exception is caught inside or outside the transaction scope.

static void ExceptionTest(Args _args)
{
    Exception exception;
    DictEnum dictEnum;
    int enumIndex;

    dictEnum = new DictEnum(enumNum(Exception));
    for (enumIndex=0; enumIndex < dictEnum.values(); enumIndex++)
    {
        exception = dictEnum.index2Value(enumIndex);
        try
        {
            ttsBegin;

            try
            {
                throw exception;
            }
            catch
            {
                info(strFmt("%1: Inside", exception));
            }

            ttsCommit;
        }
        catch
        {
            info(strFmt("%1: Outside", exception));
        }
    }
}
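The retry pattern mentioned above for Update Conflict exceptions typically looks something like this. This is a sketch modeled on the pattern found throughout standard AX code; the #OCCRetryCount macro supplies the #RetryNum constant, and the CustTable update is just an example operation:

```
static void UpdateWithRetry(Args _args)
{
    #OCCRetryCount
    CustTable custTable;

    try
    {
        ttsBegin;

        // Example update that may hit an optimistic concurrency conflict
        select forUpdate custTable;
        custTable.CreditMax += 100;
        custTable.update();

        ttsCommit;
    }
    catch (Exception::UpdateConflict)
    {
        if (appl.ttsLevel() == 0)
        {
            if (xSession::currentRetryCount() >= #RetryNum)
            {
                // Give up and escalate to the non-recoverable version
                throw Exception::UpdateConflictNotRecovered;
            }
            else
            {
                retry;
            }
        }
        else
        {
            // Still inside an outer transaction; let the outer scope handle it
            throw Exception::UpdateConflict;
        }
    }
}
```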


Fall Through
Sometimes you just want to catch the exception but not do anything. However, an empty catch block will result in a compiler warning (which of course we all strive to avoid!). No worries, you can put the following statement inside your catch block:

Global::exceptionTextFallThrough();

Of course, you're assuming the exception that was thrown already provided an infolog message of some sort. Nothing worse than an error without an error message.
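For example, a minimal sketch (tryPostJournal is a hypothetical method that throws via error() with a proper message):

```
try
{
    this.tryPostJournal(); // hypothetical method that throws error("...")
}
catch (Exception::Error)
{
    // Intentionally swallow the exception - the message is already in the infolog.
    // This call avoids the compiler warning for an empty catch block.
    Global::exceptionTextFallThrough();
}
```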


.NET Interop Exceptions
When a .NET exception is thrown, it is typically a "raw" exception, compared to our typical informative 'throw error("message here")' exceptions. I've seen quite a lot of interop code that does not even try to catch exceptions from .NET calls, let alone handle them. The following examples show different tactics for showing the actual .NET exception message. Note that not catching the error (here: trying to parse "abcd" into an integer) does not result in ANY error message, meaning a user wouldn't even know an error happened at all.

Strategy 1: Get the inner-most exception and show its message:
static void InteropException(Args _args)
{
    System.Exception interopException;
    
    try
    {
        System.Int16::Parse("abcd");
    }
    catch(Exception::CLRError)
    {
        interopException = CLRInterop::getLastException();
        while (!CLRInterop::isNull(interopException.get_InnerException()))
        {
            interopException = interopException.get_InnerException();
        }
        
        error(CLRInterop::getAnyTypeForObject(interopException.get_Message()));
    }
}


Strategy 2: Use ToString() on the exception which will show the full stack trace and inner exception messages:
static void InteropException(Args _args)
{
    System.Exception interopException;
    
    try
    {
        System.Int16::Parse("abcd");
    }
    catch(Exception::CLRError)
    {
        interopException = CLRInterop::getLastException();
        
        error(CLRInterop::getAnyTypeForObject(interopException.ToString()));
    }
}


Strategy 3: Get all fancy and catch on the type of .NET exception (in this case I get the inner-most exception as we previously have done). Honestly I've never used this, but it could be useful I guess...
static void InteropException(Args _args)
{
    System.Exception interopException;
    System.Type exceptionType;
    
    try
    {
        System.Int16::Parse("abcd");
    }
    catch(Exception::CLRError)
    {
        interopException = CLRInterop::getLastException();
        while (!CLRInterop::isNull(interopException.get_InnerException()))
        {
            interopException = interopException.get_InnerException();
        }
        
        exceptionType = interopException.GetType();
        switch(CLRInterop::getAnyTypeForObject(exceptionType.get_FullName()))
        {
            case 'System.FormatException':
                error("bad format");
                break;
            default:
                error("some other error");
                break;
        }
    }
}




Wednesday, March 27, 2013

Performance Info from Convergence 2013

This week at Convergence, I attended several sessions (and actually moderated an interactive discussion session) on performance in Dynamics AX. As expected, customers and partners showed up with specific questions on issues they are facing. Out of the three different sessions I was a part of, two major topics came up over and over again: how do we troubleshoot performance issues and how do we handle virtualization?

Microsoft has done benchmarking on virtualization using Hyper-V, starting from an all-physical setup and iteratively benchmarking and virtualizing pieces of the Dynamics AX setup. The big picture reveals (not surprisingly) that the further a virtual component is from SQL, the less impact it has. I've been asked to wait to blog about this further, as new benchmarks are being made on Hyper-V 2012, which has an enormous amount of performance improvements. But for more information and the currently available benchmark, you can download the virtualization benchmark whitepaper (requires CustomerSource or PartnerSource access).

The next obvious question around performance was how to troubleshoot it. Naturally there are multiple reasons one could have performance issues. It could be SQL setup, AOS setup, client setup, poorly designed customizations, etc. Each area has its tools to troubleshoot, so a gradual approach may be in order.
First, the Lifecycle Services offering that Microsoft will release in the second half of 2013 will include a system diagnostics tool. This addresses what Microsoft calls "the low-hanging fruit": obvious setup or configuration issues that can be easily fixed and may make a significant difference in performance - things like buffer sizes, SQL tempdb location, debugging enabled on a production environment, etc. The tool will only support AX 2012 and up; I have not heard of any plans to support earlier versions of AX. Pricing for the service, which includes a whole range of tools (more on this in another post), has not been released, but in general lines it will be tied to the level of support plan a customer has with Microsoft.

Next up is the Trace Parser. The trace parser will allow you to trace a particular scenario and give feedback on the code being called, calls between client and server, time spent on every call, etc. This will give you a good idea on what is going on behind the covers of a specific process. Besides its use in troubleshooting performance issues, this is a really good tool during development to do code tracing in the standard application or augmenting other debugging efforts.
You can download the Trace Parser (for free) here (requires CustomerSource or PartnerSource access).

Finally, there is the DynamicsPerf tool. This will do a deep dive on your SQL Server, giving you things like top-10 queries, index usage (or lack thereof), etc. This is knee-deep SQL troubleshooting: it provides a broad range of statistics and suggestions to optimize your SQL setup, and it can identify issues with missing indexes or poorly performing queries in your application code (note this is strictly a SQL tool, so you won't see any tracing back to the source code).
You can find the DynamicsPerf tool (for free) here.

Somewhat surprisingly, a show of hands in my session revealed most customers were unaware of the availability of these tools and what they could do, so spread the word!

And last but not least there were some announcements on a load testing / benchmarking tool that we should see released in the next month or so. It will allow you to setup scenarios such as entering sales orders, creating journals etc, and then allow for mass-replay of these scenarios in AX to test performance. The tools are based on the Visual Studio load testing tool and basically provide an integration with AX. I will make sure to keep you updated on more details and release dates when that information becomes available.

All in all a lot of learning went on at Convergence on the performance topic. Also visit the blog of the Microsoft Dynamics AX Performance Team to stay current on any other tools or whitepapers they are releasing.

Thursday, March 14, 2013

Convergence 2013

For those of you waiting on the continuation of the TFS series - please be patient. Next week is Convergence, and a lot of time and energy is being put into preparing both our Sikich company booth (#2437) and material, as well as the Interactive Discussion session I will be leading there. The topic at hand in my session is optimizing performance in code and SQL: session IDAX07 in the session catalog, held Thursday from 11am to 12pm. This is an interactive discussion, so the idea is that I talk as little as possible and have you, the audience, ask questions, answer questions, and share experiences with your peers. I will have some Microsoft performance experts (Christian Wolf and Gana Sadasivam) there as well, for those ultra-tough questions that your peers or I don't know how to answer :-)

If you are unable to attend, I do recommend following the virtual conference and attend any keynotes/sessions you deem relevant. It's always great to find out about features you didn't know of, or get a feel for where your favorite Dynamics product is headed. As far as social media you can also follow the "Convergence Wall". I will be tweeting and blogging about Convergence next week. You can follow me on Twitter @JorisdG.

If you are attending Convergence 2013, come see me at my session on Thursday, or visit me at the Sikich booth Monday afternoon or early Tuesday afternoon. Otherwise, look for me walking around. Like last year, I will be sporting my fedora (see below).

This is what I look like... sort of

Thursday, February 28, 2013

How We Manage Development - Organizing TFS

In the first post, I presented an overview of how we have architected our development infrastructure to support multiple clients. In this article, we will look at how we organize TFS to support our projects.

Team Foundation Server contains a lot of features to support project management. We have chosen to keep Team Foundation Server a developer-only environment and manage the customer-facing project management somewhere else, using SharePoint. In fact, even our business consultants and project managers do not have access to (and rarely see) this magical piece of software we call TFS. (I bet you some are reading these articles to find out!)
There are several reasons for this, and these are choices we have made that I'm sure not everyone agrees with.
- Developers don't like to do bookkeeping. We want to make sure the system is efficient and only contains information that is relevant to be read or updated by developers.
- We need a filter between the development queue and the user/consultant audience. Issues can be logged elsewhere and investigated. AX is very data and parameter driven, so a big portion of perceived "bugs" end up being data or setup related issues. If anyone can log issues or bugs in TFS and assign them to a developer, the developers will quickly start ignoring the lists and email alerts because there are too many of them. TFS is for actual development work only; any support issues or investigative tasks for developers/technical consultants are outside of our TFS scope.
- Another reason to filter is project scope and budget. Being agile is hot these days, but there is a timeline and budget so someone needs to manage what is approved for the developers to work on. Only what is approved gets entered in TFS, so that TFS is the actual development queue. That works well, whether you go waterfall or agile.
- Finally, access to TFS means access to a lot of data. There are security features in TFS, of course, but they are nothing like XDS for example. This means check-in comments, bug descriptions, everything would be visible. There are probably ways to get this setup correctly, but for now it's an added bonus of not having to deal with that setup.

Each functional design gets its own requirement work item in TFS. Underneath, as child work items, we add the development "tasks". Initially, we just add one "initial development" task. We don't actually split out pieces of the FDD, although we easily could, I guess. When we get changes, bugs - basically any change after the initial development - we add another sub-task we can track. So, for example, the tasks could look like this in TFS:



This allows us to track any changes after the initial development/release of the code for the FDD. We can use this as a metric for how good we are doing as a development group (number of bugs), how many change requests we get (perhaps a good measure of the quality of the FDD writing), etc. It also allows us to make sure we don't lose track of loose ends in any development we do. When a change is agreed upon, the PM can track our TFS number for that change. If the PM doesn't have a tracking number, it means we never got the bug report.
It is typically the job of the project's development lead to enter and maintain these tasks, based on emails, status calls with PMs, etc. I will explain more about this in the next article.

On to the touchy topic of branching. There are numerous ways to do this, all have benefits and drawbacks. If you want to know all about branching and the different strategies, you can check out the branching guide written by the Microsoft Visual Studio ALM Rangers. It is very comprehensive but an interesting read. There is no one way of doing this, and what we chose to adopt is loosely based on what the rangers call the "code promotion" plan.



Depending on the client we may throw a TEST branch in between MAIN and UAT. We have played with this a lot and tried different strategies on new projects. Overall this one seems to work best, assuming you can simplify it this way. Having another TEST in between may be necessary depending on the complexity of the implementation (factors such as client developers doing work on their systems as well - needing a merge and integrated test environment, or a client that actually understands and likes the idea of a pure UAT - those are rare).
One could argue that UAT should always be moved as a whole into your production environment. And yes, that is how it SHOULD be, but few clients and consultants understand and want that. For that very reason, and due to the diversity of our clients' wants and needs, we decided on newer projects to make our branches a bit more generic, like this:



This allows us to be more consistent in TFS across projects, while allowing a client to apply these layers/models as they see fit. It also allows us to change strategies as far as different environment copies at a client, and which code to apply where. All without being stuck with defining names such as "UAT" or "PROD" in our branch names.

So, the only branch that is actually linked to Dynamics AX is the MAIN branch. Our shared AOS works against that branch, and that is it. Anything beyond that is ALL done in Visual Studio's Source Control Explorer. We move code between branches using Visual Studio. We do NOT make direct changes in any branch other than MAIN. Moving code through branch merging has a lot of benefits.

1) We can merge individual changes forward, such as FDD03, but leave FDD04 in MAIN so it is not moved. This requires developers to ALWAYS do check-ins per FDD, never check-in code for multiple FDDs at the same time. That is critical, but makes sense - you're working on a given task. When you go work on something else (done or not), you check in the code you're putting aside, and check out your next task.
2) By moving individual changes, TFS will automatically know about any conflicts that arise. For example, FDD02 may add a field to a table, and the next FDD03 may also add a field to that same table. But both fields are in the same XPO in TFS - the table's XPO. Now, TFS knows which change you made when... so whether we want to move only FDD02 or only FDD03, TFS knows which code changes belong to which. (More on this in the next article.) This removes the messy, manual step of merging XPOs on import.
3) The code move in itself becomes a change as well. For clients requiring strict documentation for code moves (SOX, for example), the TFS tracking can show that no code was changed in the code move (i.e. proving that UAT and PROD have the same code). We also get time stamps of the code move, user IDs, etc. Perfect tracking, for free.
4) With the code move itself being a change set, we can also easily REVERT that change if needed.
5) It now becomes easy to see which pieces of code have NOT been moved yet. Just look for pending merges between branches.

So, I can look at my history of changes in MAIN, and inquire into the status of an individual change. In the first screenshot, you see a changeset that was merged all the way into RELEASE. My mouse is hovered over the second branch (test) and you can see details. The second screenshot shows a change that has not been moved into RELEASE yet.




Obviously you can also get lists of all pending changes etc. We actually have some custom reports in that area, but you can get that information out of the box as well.

So, for any of you that have played with this and branching, the looming question is: how can you merge all changes for one particular FDD when there are multiple people working on multiple FDDs? TFS does not have a feature to "branch by work item" (if you check in code associated with a work item, one could in theory decide to move all changes associated with that work item, right?). Well, the easiest way we found is to make sure the check-in comment is ALWAYS preceded by the FDD number. So, when I need to merge something from MAIN to TEST, I just pick all the changesets with that FDD in the title:



Yes, you may need several merges if there are a lot of developers doing a lot of changes for a lot of FDDs. However, it forces you to stay with it, and it actually gets better: if you merge 5 changesets for the same FDD at the same time, the next merge further down to the next branch will only be for the 1 merge-changeset (which contains all 5). Also consider the alternative of you moving the XPOs manually and having to strip out all the unrelated code during the import. After more than 2 years of using TFS heavily, I know what I prefer :-) (TFS, FYI).
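For what it's worth, the same cherry-picking can also be scripted with tf.exe, the standard TFS command-line client. A sketch - the branch paths and changeset numbers below are made-up examples:

```
rem Merge only the changesets for FDD03 (say, changesets 101 and 105) from MAIN to TEST
tf merge /recursive /version:C101~C101 $/AXProject/MAIN $/AXProject/TEST
tf merge /recursive /version:C105~C105 $/AXProject/MAIN $/AXProject/TEST
tf checkin /comment:"FDD03 merge MAIN -> TEST"
```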

Once code is merged, all that needs to happen is a right-click by the lead developer to start a new build - no further action required. This will spin up TFS build using our workflow steps, which imports all the code, compiles, and produces the layer (ax 2009) or model (ax 2012). More details on this in the next article.



Once the build is completed (successfully or not), we get an alert via email. If the build was successful, the model or layer will be on a shared folder - ready to be shipped to the client for installation, with certainty that it compiles. We actually have a custom build report we can then run which, depending on the client's needs, will show which FDDs and which tasks/bugs were fixed in this release, potentially a list of the changed objects in the build, etc. Without the custom report, here's what standard TFS will give you:



As you can tell by the run time (almost 3 hours), this is AX 2012 :-)

This has become a lengthy post, but hopefully interesting. Next article, I will show you some specific examples (including some conflicts in merging etc) - call it a "day in the life of a Sikich AX developer"...