Microsoft Dynamics AX Development Cookbook

Solve real-world Dynamics AX development problems with over 60 simple but incredibly effective recipes. As a Dynamics AX developer, your responsibility is to deliver all kinds of application customizations. First published in December; the second edition followed in May.
In December, he released his first book, Microsoft Dynamics AX Development Cookbook, published by Packt Publishing. Packt offers eBook versions of every book published, with PDF and ePub formats available.
The company specializes in providing development, consulting, and training services for Microsoft Dynamics AX resellers and customers.
Over the years, Mindaugas has participated in over 15 Dynamics AX implementations, ranging from small local projects to large international ones. He has a wide range of experience in development and consulting, and has played leading roles in projects while always staying hands-on as a business application developer. During numerous Dynamics AX implementations, Mindaugas noticed similarities in application development patterns.
He noticed that different customers request very similar modifications. Since then, he has been building a list of the most frequently used modifications and their implementation principles for future reuse.
Dynamics AX provides a standard Rename function, which is invaluable if a record was saved by mistake or simply needs renaming. The function ensures data consistency; that is, all related records are renamed too. It can be accessed from the Record information form, shown in the following screenshot, which can be opened by selecting Record info from the right-click menu on any record. An alternative way of doing that is to create a job that automatically runs through all required records and calls this function.
This recipe will explain how the record primary key can be renamed through the code. As an example, we will create a job that renames a customer account.
Open Accounts receivable | Common | Customers | All customers and find the account that has to be renamed. Click on Transactions in the action pane to check the existing transactions. Create a job that renames the previously selected account. Run the job and check whether the renaming was successful by navigating to Accounts receivable | Common | Customers | All customers again and finding the new account.
The new account should have retained all its transactions and other related records, as shown in the following screenshot. Click on Transactions in the action pane to see that the existing transactions are still in place. In this recipe, we first select the desired customer account. We could easily modify the select statement to include more accounts for renaming, but for demonstration purposes, let's keep it simple. Note that only fields belonging to a table's primary key can be renamed in this way.
Then we call the table's renamePrimaryKey method, which does the actual renaming. The method finds all the related records for the selected customer account and updates them with the new account.
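As a sketch, a job performing such a rename might look like the following; the account numbers used here are examples, not values from the text:

```
static void CustAccountRename(Args _args)
{
    CustTable custTable;

    // Select the account to be renamed, for update;
    // '1102' and '1102-1' are example account numbers
    custTable = CustTable::find('1102', true);

    if (custTable)
    {
        // Assign the new primary key value, then rename
        // the record and all of its related records
        custTable.AccountNum = '1102-1';
        custTable.renamePrimaryKey();
    }
}
```
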
The operation might take a while depending on the volume of data, as the system has to update multiple records located in multiple tables.

Merging two records

For various reasons, data in the system, such as customers, ledger accounts, configuration settings, and similar data, may become obsolete.
This could be because of changes in the business, or it could simply be a user input error. For example, two salespeople could create two records for the same customer, start entering sales orders, and post invoices. One way to solve this is to merge both records into a single one. In this recipe, we will explore how to merge one record into another, including all related transactions. For this demonstration, we will merge two ledger reason codes into a single one.
Open General ledger | Setup | Ledger reasons to find two reason code records to be merged. Run the job to merge the records. Open the Ledger reasons form again and notice that one of the reasons was deleted and all related transactions have been updated to reflect the change.

How it works

First, we retrieve both records from the database and prepare them for updating.
The key method in this recipe is the merge method. It will ensure that all data from one record will be copied into the second one and all related transactions will be updated to reflect the change. Such a technique could be used to merge two, or even more, records of any type.
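A minimal sketch of such a merge job follows. The reason codes are examples, and the call direction assumed here is that merge is called on the record to be deleted, with the surviving record passed as the argument; verify this against your application version:

```
static void LedgerReasonMerge(Args _args)
{
    ReasonTable reasonDelete;
    ReasonTable reasonKeep;

    ttsBegin;
    // The reason codes are example values
    select firstOnly forUpdate reasonDelete
        where reasonDelete.Reason == 'BADDEBT';
    select firstOnly forUpdate reasonKeep
        where reasonKeep.Reason == 'WRITEOFF';

    // Merge the first record into the second one; related
    // transactions are updated to point to the survivor
    reasonDelete.merge(reasonKeep);

    reasonKeep.doUpdate();
    reasonDelete.doDelete();
    ttsCommit;
}
```
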
Adding a document handling note

It is good practice to add some kind of note to a record when doing data renaming, merging, or any other data manipulation task, whether it is manual or automatic. Dynamics AX allows adding a note or a file to any record by using the so-called Document handling feature. By default, it is enabled for all tables, but it can be restricted to fewer tables by changing its configuration parameters. Document handling can be accessed from the form's action pane by clicking on the Attachments button, by choosing Document handling from the File | Command menu, or by selecting the Document handling icon from the status bar.
Document handling allows adding text notes or files to any currently selected record. Dynamics AX also allows adding document handling notes from code, which helps developers or consultants add additional information when doing various data migration or conversion tasks.
In this recipe, we will add a note to a vendor account. Open Accounts payable | Common | Vendors | All vendors and locate the vendor account that has to be updated. Then create a job that inserts a DocuRef record for the selected vendor, setting its RefCompanyId, RefTableId, and RefRecId fields to the vendor's company ID, table ID, and record ID, respectively. Run the job to create the note. Click on the Attachments button in the form's action pane, or select Document handling from the File | Command menu, to view the note added by our code.
Next, we will set note type, name, and description, and insert the document handling record. In this way, we will add a note to the record. The code in this recipe could also be added to a separate method for further reuse.
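Putting the steps together, the job might look like the following sketch; the vendor account and the 'Note' document type are example values that must exist in your application:

```
static void VendAddNote(Args _args)
{
    VendTable vendTable;
    DocuRef   docuRef;

    vendTable = VendTable::find('1001'); // example vendor account

    // Point the note at the vendor record
    docuRef.RefCompanyId = vendTable.dataAreaId; // company of the record
    docuRef.RefTableId   = vendTable.TableId;    // table of the record
    docuRef.RefRecId     = vendTable.RecId;      // the record itself

    // Note type, name, and description
    docuRef.TypeId = 'Note';     // an existing document type is assumed
    docuRef.Name   = 'Imported';
    docuRef.Notes  = 'This vendor was imported.';

    docuRef.insert();
}
```
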
Using a normal table as a temporary table

Standard Dynamics AX contains numerous temporary tables, which are used by the application and can be used in custom modifications too. Although new temporary tables can also be easily created using the AOT, sometimes that is not effective. One such case is when the required temporary table is very similar, or identical, to an existing one.
The goal of this recipe is to demonstrate an approach for using standard non-temporary tables to hold temporary data. As an example, we will use the vendor table to insert and display a couple of temporary records without affecting the actual data. Run the class and check the results. The key method in this recipe is the setTmp method. It is available on all tables, and it declares the current table instance to behave as a temporary table in the current scope.
So in this recipe, we will first call the setTmp method on the vendTable table to make it temporary in the scope of this method. That means any data manipulations will be lost once the execution of this method is over and actual table content will not be affected.
Next, we will insert a couple of test records. Here, we use the doInsert method to bypass any additional logic, which normally resides in the table's insert method. The last thing to do is to check for newly created records by listing the vendTable table.
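A minimal sketch of the technique might look like this; the account numbers are made-up test values:

```
static void VendTableAsTmp(Args _args)
{
    VendTable vendTable;

    // Make this buffer temporary; the real table data is untouched
    vendTable.setTmp();

    // Insert test records with doInsert, bypassing insert() logic
    vendTable.AccountNum = 'TEST-1';
    vendTable.doInsert();

    vendTable.clear();
    vendTable.AccountNum = 'TEST-2';
    vendTable.doInsert();

    // Only the two temporary records are listed
    while select vendTable
    {
        info(vendTable.AccountNum);
    }
}
```
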
We can see that although the table contains many actual records, only the ones we inserted were displayed in the Infolog. Additionally, the two we inserted do not appear among the actual table records.

Copying a record

Record copying is one of the most common data manipulation tasks.
For various reasons, an existing record needs to be modified and saved as a new one. The most obvious example is when a user requires a function that allows him or her to quickly duplicate records on any of the existing forms. In this recipe, we will explain the usage of the table's data method and the global buf2Buf function, and their differences. As an example, we will copy one of the existing ledger account records into a new one. Open General ledger | Common | Main accounts, and find the account to be copied.
Open General ledger | Common | Main accounts again, and notice that there are now two identical records. In this recipe, we have two variables: mainAccount1 for the original record and mainAccount2 for the new one.
Next, we will copy it to the new one. Here, we will use the data table member method, which copies all data fields from one variable to another.
After that, we will set a new ledger account number, which is a part of a unique table index and must be different. Finally, we call the insert method on the table, if validateWrite is successful. In this way, we have created a new ledger account record, which is exactly the same as the existing one apart from the account number.

There's more

As we saw before, the data method copies all table fields, including system fields such as record ID, company account, created user, and so on.
Most of the time, it is OK because when the new record is saved, the system fields are overwritten with the new values.
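A sketch of such a copy job follows; the account numbers are examples, and the lookup method used to find the original record is assumed:

```
static void MainAccountCopy(Args _args)
{
    MainAccount mainAccount1;
    MainAccount mainAccount2;

    // Example account number; the lookup is an assumption
    select firstOnly mainAccount1
        where mainAccount1.MainAccountId == '110110';

    // Copy all fields, including system fields, to the new buffer
    mainAccount2.data(mainAccount1);

    // The unique index field must differ from the original
    mainAccount2.MainAccountId = '110110-1';

    ttsBegin;
    if (mainAccount2.validateWrite())
    {
        mainAccount2.insert();
    }
    ttsCommit;
}
```
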
However, this function may not work for copying records across companies. In this case, we can use another function called buf2Buf. It is very similar to the table's data method with one major difference.
The buf2Buf function copies all data fields excluding the system ones. Looking at its code, we can also see that this function is slower than the internal data method, as it checks and copies each field individually. In order to use the buf2Buf function, the code of the MainAccountCopy job could be amended accordingly.

Normally, queries are stored in the AOT, but they can also be created dynamically from code.
This is normally done when visual tools cannot handle complex and dynamic queries. In this recipe, we will create a query dynamically from code to retrieve project records from the Project management module. We will select only the projects of type fixed price, whose number starts with 2, and which contain at least one hour transaction. Run the job, and a screen listing the matching projects should appear. First, we create a new query object. Next, we add a new ProjTable data source to the query object by calling its addDataSource member method. The method returns a reference to the QueryBuildDataSource object, qbds1. Here, we call the addSortField method to enable sorting by project name. The following two blocks of code create two ranges. The first shows only projects of type fixed price, and the second lists only records where the project number starts with 2.
Those two filters are automatically combined using the SQL and operator. The range value is set by calling value on the QueryBuildRange object itself. It is good practice to use the queryValue function to process values before applying them as a range. More functions, such as queryNotValue, queryRange, and so on, can be found in the Global application class. Note that these functions are actually shortcuts to the SysQuery application class, which in turn has even more interesting helper methods that might be handy for every developer.
Adding another data source to an existing one connects both data sources using the SQL join operator. In this example, we are displaying projects that have at least one posted hour line. We start by adding the ProjEmplTrans table as another data source. Next, we need to add relations between the tables. If relations are not defined on tables, we will have to use the addLink method with relation field ID numbers.
In this example, relations on the tables are already defined, so it is enough to enable them by calling the relations method with true as an argument. Calling joinMode with JoinMode::ExistsJoin as a parameter ensures that a record from the parent data source will be displayed only if the relation exists in the attached data source.
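The whole dynamic query described above can be sketched as follows; the range values are the ones named in the text:

```
static void ProjTableQuery(Args _args)
{
    Query                query = new Query();
    QueryBuildDataSource qbds1;
    QueryBuildDataSource qbds2;
    QueryBuildRange      qbr1;
    QueryBuildRange      qbr2;
    QueryRun             queryRun;
    ProjTable            projTable;

    qbds1 = query.addDataSource(tableNum(ProjTable));
    qbds1.addSortField(fieldNum(ProjTable, Name));

    // Only fixed-price projects
    qbr1 = qbds1.addRange(fieldNum(ProjTable, Type));
    qbr1.value(queryValue(ProjType::FixedPrice));

    // Only project numbers starting with 2
    qbr2 = qbds1.addRange(fieldNum(ProjTable, ProjId));
    qbr2.value(queryValue('2*'));

    // Exists join: at least one posted hour transaction
    qbds2 = qbds1.addDataSource(tableNum(ProjEmplTrans));
    qbds2.relations(true);
    qbds2.joinMode(JoinMode::ExistsJoin);

    queryRun = new QueryRun(query);
    while (queryRun.next())
    {
        projTable = queryRun.get(tableNum(ProjTable));
        info(strFmt('%1 - %2', projTable.ProjId, projTable.Name));
    }
}
```
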
The last thing to do is to create and run the queryRun object and show the selected data on the screen. It is worth mentioning a couple of specific cases when working with query objects from code. One of them is how to use the or operator, and the other is how to address array fields.

Using the OR operator

As you may have already noted, regardless of how many ranges are added, all of them are combined using the SQL and operator.
In most cases it is fine, but sometimes complex user requirements demand ranges to be added using SQL or. There might be a number of workarounds, such as using temporary tables or similar tools, but we can use the Dynamics AX feature that allows passing a part of raw SQL string as a range. In this case, the range has to be formatted in a similar manner as a fully qualified SQL where clause, including field names, operators, and values. The expressions have to be formatted properly before using them in a query.
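As a sketch, a raw expression range combining two conditions with or might look like this; the project number is an example value, and enum values are passed as integers:

```
static void ProjTableQueryOr(Args _args)
{
    Query                query = new Query();
    QueryBuildDataSource qbds  = query.addDataSource(tableNum(ProjTable));
    QueryBuildRange      qbr   = qbds.addRange(fieldNum(ProjTable, Type));
    QueryRun             queryRun;
    ProjTable            projTable;

    // A raw SQL-like expression: fixed-price projects OR one
    // specific project. Note the double parentheses and the ==
    // operator required by the query expression syntax.
    qbr.value(strFmt('((%1 == %2) || (%3 == "%4"))',
        fieldStr(ProjTable, Type),
        any2int(ProjType::FixedPrice),
        fieldStr(ProjTable, ProjId),
        queryValue('200001')));  // example project number

    queryRun = new QueryRun(query);
    while (queryRun.next())
    {
        projTable = queryRun.get(tableNum(ProjTable));
        info(projTable.ProjId);
    }
}
```
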
There are some formatting rules to follow; for example, field names should be qualified, string values quoted, and the whole expression enclosed in parentheses. With such an expression, we replace the simple range value used in the previous example.

Using array fields

Some table fields in Dynamics AX are based on extended data types that contain more than one array element.
An example in a standard application could be project sorting, which is based on the ProjSortingId extended data type. Although such fields are very much the same as normal fields, in queries they have to be addressed in a slightly different manner. In order to demonstrate the usage, let us modify the example by filtering the query to list only those projects containing the value South in the field labelled Sort field 2, which is the second value in the array. First, we declare a new QueryBuildRange object, qbr3, in the variable declaration section. Next, we add the code that creates the new range, right after the qbr2 range is set up.
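A sketch of the additional range follows. The global fieldId2Ext function addresses an array element by its index; the ProjTable field name used here (Sorting) is an assumption, so check the actual field name in your application:

```
    QueryBuildRange qbr3;

    // Address the second element of the array-based sort field;
    // the field name Sorting is an assumption
    qbr3 = qbds1.addRange(
        fieldId2Ext(fieldNum(ProjTable, Sorting), 2));
    qbr3.value(queryValue('South'));
```
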
This function can also be used anywhere that addressing dimension fields is required. Now we can run the job; the project list based on the previous criteria will be reduced even more, to match only projects having a specific Sort field 2.

In the next recipe, we will create a small macro holding a single where clause to display only active vendor records. Then we will create a job that uses the created macro for displaying a vendor list.
Run the job and check the results. First, we define a macro that holds the where clause.
Normally, the purpose of defining SQL in a macro is to reuse it a number of times in various places. More arguments could be added here. Next, we create a job with the select statement.
Here, we use the previously created macro in a where clause and pass vendTable as an argument. The query works like any other query, but the advantage is that the code in the macro can be reused elsewhere.
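Sketched in X++, the macro and the job might look like this; the Blocked field check used as the "active vendor" condition is an assumption:

```
static void VendTableMacroSelect(Args _args)
{
    VendTable vendTable;

    // A local macro holding a reusable where clause; %1 is the
    // table buffer passed as an argument. Treating Blocked == No
    // as "active" is an assumption.
    #localmacro ActiveVendorOnly
        %1.Blocked == CustVendorBlocked::No
    #endmacro

    while select vendTable
        where #ActiveVendorOnly(vendTable)
    {
        info(vendTable.AccountNum);
    }
}
```
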
Note that although using a macro in an SQL statement can reduce the amount of code, too much code inside a macro might decrease the SQL statement's readability for other developers. So keep it balanced.
Sometimes it is necessary to execute SQL statements directly against the database. One such case is when we run data upgrade tasks during an application version upgrade. The standard application contains a set of data upgrade tasks to be completed during the version upgrade. If the application is highly customized, then most likely the standard tasks have to be modified to reflect data dictionary customizations, or even a new set of tasks has to be created to make sure data is handled correctly during the upgrade.
Additionally, running direct SQL statements dramatically increases data upgrade performance because most of the code is executed on the database server where all data resides. This is very important while working with large volumes of data. Another case when we would need to use direct SQL statements is when we want to connect to an external database using the ODBC connection. This recipe will demonstrate how to execute SQL statements directly. We will connect to the current Dynamics AX database directly using an additional connection and will retrieve the list of vendor accounts.
Run the class to obtain the list of vendors retrieved directly from the database. We start the code by creating DictTable and DictField objects for handling the vendor table and its fields used later in the query.
The DirPartyTable table is used to get additional vendor information. A new SqlSystem object also has to be created. Next, we set up an SQL statement with a number of placeholders for table or field names and field values, to be filled in later.
The main query creation happens next, when the query placeholders are replaced with the right values. Here we use the previously created DictTable and DictField type objects by calling their name methods with the DbBackend::Sql enumeration as an argument. This ensures that we pass the name exactly as it is used in the database; some of the SQL field names are not necessarily the same as the field names within the application.
We also use the sqlLiteral method of the previously created sqlSystem object to properly format SQL values to make sure they do not have any unsafe characters. The results are returned into the resultSet object, and we get them by using the while statement and calling the next method until the end of the resultSet object.
Note that we create an sqlPermission object of type SqlStatementExecutePermission here and call its assert method before executing the statement. This is required in order to comply with Dynamics AX trustworthy computing requirements. Another thing to mention is that when building direct SQL queries, special attention has to be paid to license, configuration, and security keys: some tables or fields might be disabled in the application and may contain no data in the database. The code in this recipe can also be used to connect to external ODBC databases.
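A simplified sketch of the direct SQL approach follows. For brevity, only one field is selected here, and the physical names come from the DictTable and DictField objects as described above:

```
static void VendDirectSql(Args _args)
{
    Connection connection = new Connection();
    Statement  statement  = connection.createStatement();
    ResultSet  resultSet;
    DictTable  tblVendTable  = new DictTable(tableNum(VendTable));
    DictField  fldAccountNum = new DictField(
        tableNum(VendTable), fieldNum(VendTable, AccountNum));
    str        sqlStatement;
    SqlStatementExecutePermission sqlPermission;

    // Build the statement using the physical (SQL) names, which
    // may differ from the AOT names
    sqlStatement = strFmt(
        "SELECT %1 FROM %2 WHERE DATAAREAID = '%3'",
        fldAccountNum.name(DbBackend::Sql),
        tblVendTable.name(DbBackend::Sql),
        curext());

    // Comply with trustworthy computing requirements
    sqlPermission = new SqlStatementExecutePermission(sqlStatement);
    sqlPermission.assert();

    resultSet = statement.executeQuery(sqlStatement);
    CodeAccessPermission::revertAssert();

    while (resultSet.next())
    {
        info(resultSet.getString(1));
    }
}
```
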
Dynamics AX also provides a set of SQLBuilder classes. By using those classes, we can create SQL statements as objects, as opposed to text. Next, we will demonstrate how to use the SQLBuilder classes by creating the same SQL statement as before.
First, we create the selectExpr object, which represents the SQL statement itself. Next, we add the VendTable table to it by calling its member method addTableId. We also add DirPartyTable as a joined table. Then, we create a number of field objects of type SQLBuilderFieldEntry to be used later, and two ranges to show only the current company account and only active vendor accounts.
We use addSelectFieldEntry to add two fields to be selected. Here we use the previously created field objects. The SQL statement is generated once the getExpression method is called, and the rest of the code is the same as in the previous example. Running the class gives us exactly the same results as we got before.
Enhancing the data consistency check

It is highly recommended to run the standard Dynamics AX data consistency check from time to time. It is located under System administration | Periodic | Database | Consistency check, and it validates the system's data integrity.
This function finds orphaned data, validates parameters, and does many other things, but it does not do everything. The good thing is that it can easily be extended to match different scenarios. In this recipe, we will see how we can enhance the standard Dynamics AX consistency check to include more tables in its data integrity validation. Open Fixed assets | Setup | Fixed asset posting profiles and, under the Ledger accounts group, create a new record with the newly created value model for any of the posting types. Now we have a non-existing value model in the fixed asset posting settings.
Open System administration | Periodic | Database | Consistency check, select the newly created Fixed assets option in the Module drop-down, and click OK to run the check. The message displayed in the Infolog should now complain about the missing value model in the fixed asset posting settings. The consistency check in Dynamics AX validates only a predefined list of tables for each module. The system contains a number of classes derived from SysConsistencyCheck.
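A custom check class might look roughly like the following sketch. The SysConsistencyCheck subclass pattern requires several more methods (description, module selection, and so on), which are omitted here; the table being validated is the fixed asset posting setup:

```
class AssetPostingConsistencyCheck extends SysConsistencyCheck
{
    // Other mandatory methods of the SysConsistencyCheck pattern
    // (description, executionOrder, and so on) are omitted from
    // this sketch.
    public void run()
    {
        // kernelCheckTable validates the given table's references,
        // reporting records that point to non-existing setup data
        this.kernelCheckTable(tableNum(AssetLedgerAccounts));
    }
}
```
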
For example, the CustConsistencyCheck class is responsible for validating the Accounts receivable module, LedgerConsistencyCheck for General ledger, and so on. In our new methods, we use the kernelCheckTable member method, which validates the given table. The classes we just mentioned can only be executed from the main Consistency check form. Individual checks can also be invoked as stand-alone functions; we just need to create an additional method to allow running the class directly.

XML allows the creation of all kinds of structured documents for exchanging data between systems.
For example, user profiles can be exported as XML files.
Dynamics AX also provides the Application Integration Framework (AIF), an infrastructure that allows exposing business logic or exchanging data with other external systems. The communication is done by using XML-formatted documents.
By using the existing XML framework application classes prefixed with Axd, you can export or import data from or to the system in an XML format to be used for communicating with external systems. It is also possible to create new Axd classes using the AIF Document Service Wizard from the Tools menu to support the export and import of newly created tables.
Dynamics AX also contains a set of Xml application classes, such as XmlDocument, XmlElement, and XmlNode. Basically, those classes are wrappers around the System.Xml namespace in the .NET framework.
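As an illustration, those wrapper classes can be used to build a small document like this; the output path is an example:

```
static void CreateXmlFile(Args _args)
{
    XmlDocument doc  = XmlDocument::newBlank();
    XmlElement  root = doc.createElement('xml');
    XmlElement  nodeTable;
    XmlElement  nodeAccount;
    MainAccount mainAccount;

    doc.appendChild(root);

    while select mainAccount
    {
        // One element per record, named after the table
        nodeTable = doc.createElement(tableStr(MainAccount));
        root.appendChild(nodeTable);

        // A field element with a text node holding its value
        nodeAccount = doc.createElement(
            fieldStr(MainAccount, MainAccountId));
        nodeAccount.appendChild(
            doc.createTextNode(mainAccount.MainAccountId));
        nodeTable.appendChild(nodeAccount);
    }

    doc.save(@'C:\Temp\accounts.xml'); // example path
}
```
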
In this recipe, we will create a new simple XML document by using the latter classes, in order to show the basics of XML. We will create a document with the data from the chart of accounts table and will save it as an XML file. Run the class; the XML file accounts.xml is created. First, we create a new XmlDocument. Then we create its root node, named xml, using the createElement method, and add the node to the document by calling the document's appendChild method. Next, we go through the MainAccount table and do the following for each record: create a new XmlElement node, which is named exactly as the table, and add this node to the root node; then create a node representing the account number field and its child node representing its value.
The account number node is created using createElement, and its value is created using createTextNode. We add the account number node to the table node, then create a node representing the account name field and its child node representing its value, and add the account name node to the table node. Finally, we save the created XML document as a file. In this way, we can create documents having virtually any structure.

In the next recipe, we will continue using the System.Xml wrapper application classes. We will create a new class that reads XML files and displays their content onscreen. As a source file, we will use the previously created accounts.xml document. Run the class using the document created in the previous recipe; the Infolog should display the contents of the accounts.xml file.
In this recipe, we first create a new XmlDocument. We create it from the file and hence we have to use its newFile method. Then we get all the document nodes of the table as XmlNodeList. We also get its first element by calling the nextNode method. Next, we loop through all the list elements and do the following: Get an account number node as an XmlElement. Get an account name node as an XmlElement.
Display the text of both nodes in the Infolog, and get the next list element. In this way, we retrieve the data from the XML file. A similar approach could be used to read any other XML file.

Although modern systems nowadays use XML formats for data exchange, CSV files are still popular because of the simplicity of their format. Normally, the data in the file is organized so that one line corresponds to one record, and each line contains a number of values, normally separated by commas.
Record and value separators could be any other symbols, depending on the system requirements. In this recipe, we will learn how to create a custom comma-separated file from code. We will export a list of ledger accounts in the CSV format. Run the class; a new file named accounts.csv is created. Open that file with Notepad or any other text editor to view the results. In the variable declaration section of the main method of the newly created CreateCommaFile class, we define a name for the output file, along with other variables.
Normally, the hardcoded file name should be replaced with a proper input variable. Then we create a new file object; its constructor accepts two parameters, the file name and the mode. If a file with the given name already exists, it will be overwritten.
In order to make sure that a file is created successfully, we check if the file object exists and its status is valid, otherwise we show an error message.
In multilingual environments, it is better to use the CommaTextIo class. It behaves the same way as the CommaIo class does plus it supports Unicode, which allows us to process data with various language-specific symbols.
Finally, we loop through the MainAccount table, store each account number and name in a container, and write them to the file using the writeExp method. In this way, we create a new comma-separated value file with the list of ledger accounts. You have probably already noticed that the main method has the client modifier, which forces its code to run on the client. When dealing with large amounts of data, it is more effective to run the code on the server.
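A sketch of the client-side version might look like this; the output path is an example:

```
static client void CreateCommaFile(Args _args)
{
    #File
    CommaTextIo file;
    MainAccount mainAccount;
    FileName    fileName = @'C:\Temp\accounts.csv'; // example path

    // #io_write ('W') opens the file for writing, overwriting
    // any existing file with the same name
    file = new CommaTextIo(fileName, #io_write);

    if (!file || file.status() != IO_Status::Ok)
    {
        throw error("File cannot be opened");
    }

    while select mainAccount
    {
        // Each container becomes one comma-separated line
        file.writeExp([mainAccount.MainAccountId, mainAccount.Name]);
    }
}
```
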
In order to do that, we need to change the modifier to server. The following class generates exactly the same file as before, except that the file is created in a folder on the server's file system. Here, we have to assert a FileIoPermission object before creating the file, and finally we call CodeAccessPermission::revertAssert to revert it.

Exchanging CSV files is probably the simplest integration approach: one system generates CSV files in some network folder, and another one reads those files at specified intervals. Although this is not very sophisticated real-time integration, in most cases it does the job and does not require any additional components, such as the Dynamics AX Application Integration Framework or something similar.
Another well-known example is when external companies are hired to manage the payroll. On a periodic basis, they send CSV files to the finance department, where they are loaded into the General journal in Dynamics AX and processed as usual. In this recipe, we will learn how to read a CSV file from code. As an example, we will process the file created in the previous recipe. Run the class to view the file's content. As in the previous recipe, we first create a new file object using the CommaTextIo class.
We also perform the same validations to make sure that the file object is correctly created, otherwise we show an error message. Finally, we read the file line by line until we reach the end of the file.
We keep reading while the file status is IO_Status::Ok; any other status means we have reached the end of the file. Inside the loop, we call the read method on the file object, which returns the current line as a container and moves the internal file cursor to the next line.
File data is then simply output to the screen using the standard global info function in conjunction with the con2Str function, which converts a container to a string for displaying. The last element of code, where the data is output, should normally be replaced by proper code that processes the incoming data.
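The reading loop described above can be sketched as follows; the input path is an example:

```
static client void ReadCommaFile(Args _args)
{
    #File
    CommaTextIo file;
    container   line;
    FileName    fileName = @'C:\Temp\accounts.csv'; // example path

    // #io_read ('R') opens the file for reading
    file = new CommaTextIo(fileName, #io_read);

    if (!file || file.status() != IO_Status::Ok)
    {
        throw error("File cannot be opened");
    }

    // read() returns one line as a container and advances the cursor
    line = file.read();
    while (file.status() == IO_Status::Ok)
    {
        // In a real task, this output would be replaced by
        // code that processes the incoming data
        info(con2Str(line, ', '));
        line = file.read();
    }
}
```
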
File reading could also be executed on the server, in a similar way as file writing, to improve performance. The modifier client has to be changed to server, and code with the FileIoPermission class has to be added to fulfil the code access security requirements.

Date ranges are used for defining record validity between specified dates, for example, for defining employee contract dates.
This date effectiveness feature significantly reduces the amount of time that developers spend writing code, and it also provides a consistent approach to implementing date range fields. This recipe will demonstrate the basics of date effectiveness; we will implement date range validation on the standard E-mail templates form. In the AOT, set the SysEmailTable table's ValidTimeStateFieldType property to Date, and notice the two new fields that are automatically added to the table. Add the newly created ValidFrom and ValidTo fields to the existing emailIdIdx index and adjust its properties. Then, in the AOT, find the SysEmailTable form and refresh it using the Restore command, which can be found in the form's right-click context menu.
Then, locate its data source named SysEmailTable and change its properties accordingly. In order to test the results, navigate to Organization administration | Setup | E-mail templates and notice the newly created Effective and Expiration columns. Try creating records with the same E-mail ID and overlapping date ranges; you will notice how the system proposes to maintain valid date ranges. Enabling date effectiveness automatically creates the two new fields, ValidFrom and ValidTo, that are used to define a date range.
Next, we add the created fields to the primary index where the EmailId field is used, and adjust the index's date effectiveness properties. The mode property can also be set to Gap, allowing non-continuous date ranges.
Finally, we adjust the SysEmailTable form to reflect the changes, changing a few properties of its SysEmailTable data source. The default AsOfDate value can be used if we want to display only the records for the current period. Forms with date effective records can be enhanced with an automatically generated toolbar for filtering the records. This can be done with the help of the DateEffectivenessPaneController application class.
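As a sketch, the form's init method might look as follows. The constructWithForm method name follows the pattern used by the standard application, but it is an assumption here, so verify it in your environment:

```
public void init()
{
    super();

    // Adds the date-effectivity filter toolbar to the form;
    // the method name is an assumption based on the standard pattern
    DateEffectivenessPaneController::constructWithForm(
        this, SysEmailTable_ds);
}
```
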
In order to demonstrate that, let's modify the previously used SysEmailTable form and add the relevant code to the bottom of the form's init method.

Forms are the main graphical interface for users working with Dynamics AX data. They are also used for running reports, executing user commands, validating data, and so on.
Normally, forms are created using the AOT by creating a form object and adding form controls such as tabs, tab pages, grids, groups, data fields, images, and others. Form behavior is controlled by its properties or the code in its member methods. The behavior and layout of form controls are also controlled by their properties and the code in their member methods.
Although it is relatively rare, forms can also be created dynamically from code. We start with building a Dynamics AX dialog and explaining how to handle its events. The chapter will also show how to build a dynamic form, how to add a dynamic control to existing forms, and how to make a modal form. This chapter also discusses the usage of a splitter and a tree control, how to create a checklist, how to save the last user selections, and other things.
Creating a dialog

Dialogs are a way to present users with a simple input form. They are commonly used for small user tasks, such as filling in report values, running batch jobs, or presenting only the most important fields to the user when creating a new record. The application class Dialog is used to build dialogs.
A common way of using dialogs is within the RunBase framework classes, where user input is required. In this example, we will demonstrate how to build a dialog from the code using the RunBase framework class.
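A minimal sketch of a RunBase-based dialog follows. The class name, labels, and fields are examples, and the state-persistence methods are stubbed out for brevity:

```
class CustCreateDialog extends RunBase
{
    DialogField fieldAccount;
    DialogField fieldName;

    // pack/unpack are required by RunBase; stubbed for this sketch
    public container pack()
    {
        return conNull();
    }

    public boolean unpack(container _packed)
    {
        return true;
    }

    public Object dialog()
    {
        Dialog dialog = super();

        dialog.caption("Create a customer");

        // Controls are laid out in tab pages and groups
        dialog.addTabPage("General");
        dialog.addGroup("Identification");
        fieldAccount = dialog.addField(
            extendedTypeStr(CustAccount), "Account");

        dialog.addTabPage("Details");
        fieldName = dialog.addField(
            extendedTypeStr(Name), "Name");

        return dialog;
    }

    public void run()
    {
        // In a real task, a customer record would be created here
        info(strFmt('%1 - %2',
            fieldAccount.value(), fieldName.value()));
    }

    public static void main(Args _args)
    {
        CustCreateDialog custCreate = new CustCreateDialog();

        if (custCreate.prompt())
        {
            custCreate.run();
        }
    }
}
```
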
The dialog will contain customer table fields, shown in different groups and tabs, for creating a new record. There will be two tab pages, General and Details.

Managing your data and functions will become easier with this book. You will also get numerous development tips and tricks from a Dynamics AX development expert. Most of the recipes are presented using real-world examples in a variety of Dynamics AX modules. The step-by-step instructions, along with many useful screenshots, make learning easier.

Who this book is written for

This book is for Dynamics AX developers, and is primarily focused on delivering time-proven application modifications. So, some Dynamics AX coding experience is expected.