Having worked a lot with Dynamics CRM/365 over the last few years, I thought it would be interesting to discuss a common use case and some of the architecture patterns you might consider when implementing a solution.
Let's imagine a scenario where the business requirement is as follows:
- The user will be updating a customer's record in Dynamics 365
- When the user saves the change we need the change to be synchronised with the billing system
Now at this point I am going to deliberately avoid fleshing out these requirements too much. Any experienced integration person will already be thinking of a number of functional and non-functional questions they would want more information about, but the above is the typical first statement of a requirement. We will use this vagueness to explore some of the considerations behind each of the options available to solve the problem. One thing to note is that I am going to treat this as a one-way interface for this discussion.
Option 1 – CRM Custom Plugin – Synchronous
In option 1 the CRM developer would use the extensibility features of Dynamics, which allow you to write C# code that executes within the CRM runtime environment as a plugin. When registering a plugin you can configure when the code will execute. Options include things like:
- When an entity is updated but before the save is made
- When the entity is updated but after the save is made
- As above, but on other messages such as Create and Delete
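To make this concrete, a synchronous plugin along these lines might look roughly like the sketch below. This is a sketch only, assuming the standard Microsoft.Xrm.Sdk plugin model; the billing endpoint URL and payload shape are hypothetical.

```csharp
using System;
using System.Net.Http;
using System.Text;
using Microsoft.Xrm.Sdk;

// Sketch of a synchronous plugin for option 1, registered on the Update
// message of the customer entity. The billing endpoint and payload
// shape here are hypothetical.
public class SyncCustomerToBillingPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        // The Target holds the record being saved
        if (!context.InputParameters.Contains("Target") || !(context.InputParameters["Target"] is Entity target))
            return;

        tracing.Trace("Synchronising {0} {1}", target.LogicalName, target.Id);

        // Blocking HTTP call to the billing system - this is exactly the
        // coupling and performance concern discussed below; a failure here
        // bubbles up to the CRM user.
        using (var client = new HttpClient())
        {
            var payload = new StringContent(
                "{\"customerId\":\"" + target.Id + "\"}", Encoding.UTF8, "application/json");
            client.PostAsync("https://billing.example.com/api/customers", payload)
                  .Result.EnsureSuccessStatusCode();
        }
    }
}
```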
The below picture shows what this scenario will look like.
- This is probably the quickest way you can get the data from the commit in CRM to the other application
- This is probably the simplest way you can do this integration with the minimum number of network hops
- This solution probably only needs the skill set of the CRM developer
Things to consider:
- You would be very tightly coupling the two applications
- You would have some potential challenges around error scenarios
- What happens if the save to the other app works but the save to CRM doesn't, or vice versa?
- The custom plugin will probably block the CRM user's thread while it makes the external call, which is asking for performance issues
- You would need to consider if you would do the call to the other application before or after saving the data to CRM
- You would need to consider where to store the configuration for the plugin
- There would be error and retry scenarios to consider
- There would be the typical considerations of tightly coupled apps
- What if the other app is broken
- What if it has a service window
- Errors are likely to bubble up to the end user
- You will have OOTB (out of the box) CRM plugin tracing diagnostics, but you may need some custom code to ensure appropriate diagnostic information is logged
Option 1.5 – CRM Custom Plugin – Asynchronous
In this option the solution is very similar to the above, except that the developer has chosen to take advantage of the asynchronous system jobs feature in CRM. The plugin is probably the same code, but this time its registration in CRM indicates that it should execute out of process from the transaction in which the user saves the change. The commit of the change triggers a system job which is added to the processing queue; when it executes, the plugin sends the data to the other application.
The below picture illustrates this option.
- The synchronisation will no longer block the user's thread when they save data
- System jobs give a degree of troubleshooting and retry options if the other system is down, compared to option 1
- This only requires CRM developer skills
Things to consider:
- There may be other things on the processing queue so there is no guarantee how long it will take to synchronize
- You may get race conditions if another transaction updates the entity and you haven’t appropriately covered these scenarios in your design
- Also think about the concurrency of system jobs and other plugins
- I have seen a few cases where option 1 was implemented and then flipped to this asynchronous option as a workaround for performance concerns
- This needs to be thought about upfront
- You may struggle to control the load on the downstream system
- Again there is tight coupling of the systems; CRM has explicit knowledge of the other application and a heavy dependency on it
- What if the app is down
- What if there are service windows
- Error scenarios are highly likely and there could be lots of failed jobs
Option 2 – CRM out of the Box Publishing to Azure Service Bus
Options 1 and 1.5 are common ways a CRM developer will attempt to solve the problem. Typically they have a CRM toolset and try to use a tool from it, because bringing in other technologies was traditionally a big deal.
With the wide adoption of Azure we are starting to see a major shift in this space. Many Dynamics projects now include Azure in their toolset by default, which means CRM developers are gaining experience with Azure tooling and have a wider set of options available. This allows a shift in mindset: not everything has to be solved in CRM, and doing things outside of CRM offers many more opportunities to build better solutions while keeping the CRM implementation pure and focused on its core aim.
In this solution the CRM developer has chosen to add an Azure Service Bus instance to the solution. This means they can use the OOTB service endpoint plugin (not a custom one) in CRM, which will publish messages from CRM to a queue or topic when an entity changes. From there the architect can choose other tools to get messages from Service Bus to the destination application. For simplicity, in this case I might choose an Azure Function, which would allow me to write a simple bit of C# to do the job.
The below picture illustrates this solution.
- No custom coding in CRM
- The Service Bus plugin will be much more reliable than the custom one
- The Service Bus plugin will get a lot of messages out to Service Bus very quickly compared to the custom plugin in option 1.5, which will probably bottleneck on the downstream system
- Service Bus supports pub/sub, so you can plug in routing of messages to other systems
- The Azure Function could be developed by the CRM developer quite easily with a basic C# skillset
- Service Bus offers lots of retry capabilities
- The queue offers a buffer between the applications so there is no direct runtime dependency between them
- The function could be paused in downtime so that CRM can keep pumping out changes and they will be loaded when the other app is back online
- The solution will be pretty cheap: you will pay a small cost for the Service Bus instance and per execution for the function. Unless you have very high load this should be a cheap option
Things to consider:
- The key thing to remember is that this solution is near real-time, not an instant sync. In most cases the sync is likely to happen very quickly, but CRM System Jobs could be a bottleneck if you have lots of changes or jobs in CRM. The capability of the downstream system may also be a bottleneck, so you may need to consider how fast you want to load changes
- The only downside is that there are quite a few moving parts in this solution, so you will want to ensure you are using appropriate management and monitoring. In addition to CRM System Jobs, you might consider Serverless360 to manage and monitor your queues and Application Insights for your Azure Functions
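As an illustration, the Azure Function side of this option could be as small as the sketch below, assuming the classic WebJobs-style Service Bus trigger binding; the queue name, connection setting name and billing endpoint are all hypothetical.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CrmToBillingFunction
{
    private static readonly HttpClient Client = new HttpClient();

    // Fires for each message the CRM service endpoint plugin publishes to
    // the queue. If the call below throws, Service Bus redelivers the
    // message and eventually dead-letters it - the retry capability
    // mentioned above.
    [FunctionName("CrmToBilling")]
    public static async Task Run(
        [ServiceBusTrigger("crm-customer-changes", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Forwarding CRM change to billing");

        var response = await Client.PostAsync(
            "https://billing.example.com/api/customers",
            new StringContent(message, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```

Pausing this function during a downstream service window lets the queue buffer the changes, which is the decoupling benefit listed above.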
Option 3 – Logic App Integration
In option 3 the developer has chosen to use a Logic App to detect changes in CRM and push them over to the other application. This means the CRM solution is very vanilla; it doesn't even really know that changes are going elsewhere. In the above options a change in CRM triggered a process to push the data out. In this option the Logic App sits outside CRM, periodically checking for changes and pulling them out.
Typically the Logic App will check on an interval, for example every 3 minutes (this is configurable). It will pull out a collection of changes, and one instance of the Logic App will be triggered for each change detected.
The logic app will then use an appropriate connector to pass the message to the downstream application.
The below picture shows what this looks like.
- There is nothing to do in CRM
- The Logic App will need monitoring and managing separately from CRM
- Logic Apps are not part of the CRM developer's core skill set, but they are very simple to use, so it should be easy to pick them up
- The Logic App has a lot of features if you run into more advanced scenarios
- The Logic App has connectors for lots of applications
- You may be able to develop the solution with no custom code
- The Logic App has some excellent diagnostics features to help you develop and manage the solution
- The Logic App has retry and resubmit capabilities
- The solution will be pretty cheap with no upfront capital cost. You just pay per execution. Unless you have very high load this should be a cheap option
- This option can also be combined with Service Bus and BizTalk Server for very advanced integration scenarios
Things to consider:
- Is the polling interval going to be frequent enough?
- Only the most recent change will be extracted; if a particular row has been updated three times since the last trigger, you will only get the latest state
- It may require some more advanced patterns to control the load if the downstream system is a bottleneck; this may be beyond the CRM developer's Logic App skills
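Under the covers a Logic App is a JSON workflow definition. A heavily simplified sketch of the trigger and action for this option might look like the following; the connection name, dataset, and downstream URL are illustrative only, and a real definition carries considerably more metadata.

```json
{
  "triggers": {
    "When_a_record_is_updated": {
      "type": "ApiConnection",
      "recurrence": { "frequency": "Minute", "interval": 3 },
      "splitOn": "@triggerBody()?['value']",
      "inputs": {
        "host": { "connection": { "name": "@parameters('$connections')['dynamicscrmonline']['connectionId']" } },
        "method": "get",
        "path": "/datasets/myorg.crm/tables/accounts/onupdateditems"
      }
    }
  },
  "actions": {
    "Send_to_billing": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "POST",
        "uri": "https://billing.example.com/api/customers",
        "body": "@triggerBody()"
      }
    }
  }
}
```

The `splitOn` property is what causes one Logic App run to be triggered per changed record, as described above.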
Option 4 – SSIS Integration
The next option to consider is an ETL-based approach using SSIS. This approach is quite common on CRM projects because they often have people with SQL skills. The solution would involve setting up an SSIS capability and purchasing the 3rd-party KingswaySoft SSIS connectors, which include support for Dynamics.
The solution would pull data out of CRM via the API using a FetchXML or OData query, and then push the changes to the destination system. SSIS is often integrating at the database level, which is its sweet spot, but it does have the capability to call HTTP endpoints and APIs.
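For example, a FetchXML query to pull recently modified customer records might look roughly like this; the entity, attributes and timestamp value are hypothetical, and in practice the package would supply the watermark value from its last run.

```xml
<!-- Hypothetical FetchXML pulling accounts changed since the last run -->
<fetch>
  <entity name="account">
    <attribute name="accountid" />
    <attribute name="name" />
    <filter>
      <condition attribute="modifiedon" operator="ge" value="2018-01-01T00:00:00Z" />
    </filter>
  </entity>
</fetch>
```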
Although the diagrams look similar, the big difference between the Logic App approach and SSIS is that SSIS is treating the records as a batch of data which it is attempting to process in bulk. The Logic App is attempting to execute a separate transaction for each row it pulls out from the CRM changes. Each solution has its own way of dealing with errors which makes this comparison slightly more complex, but typically think of the idea of a batch of changes vs individual changes.
In the SSIS solution it is also very common to include a staging database between the systems, where the developer will attempt to create some separation of concerns and build deltas to minimise the size of the data being sent to downstream systems.
- You can process a lot of data very quickly
- Common approach on CRM projects
- Kingswaysoft product is mature
- Predominantly configuration based solution
- Sometimes error scenarios can be complex
Things to consider:
- Capital cost for 3rd party software and probably maintenance too
- Need to consider where to host SSIS (Azure VM or On Premise VM) – Cost associated with this
- Possible license cost for SQL depending on organisation setup
- You will sync on a schedule; how often does it need to run?
- The more frequent the sync, the less data each time
- It can't be too frequent
- How will you monitor and schedule the SSIS package?
I guess, in addition to the main options discussed above, you could also do other things utilising the extensibility of Dynamics and Azure. Some of those might include:
- Create a plugin that will publish changes to an Azure Event Hub
- Create a plugin that will publish changes to an Azure Event Grid
I think these would both be very interesting options to have as future out-of-the-box capabilities. You can achieve them at present with a little custom work, such as a custom plugin, or by using an intermediary: publishing to a queue and then forwarding to Event Grid or Event Hub.
Event Hub would have the benefit of a stream of changes which could be read multiple times, so you have a change history.
Event Grid would have the benefit of a fire-and-forget event system for broadcasting changes, and an event consumption model that makes it easy to act on those changes.
Both of these would be good candidates for extending the CRM Service Endpoint feature which has worked with Service Bus for years.
There is no right or wrong answer based on the original two-line requirement we got, but you can see each solution has a lot to think about.
This emphasises the importance of asking questions, elaborating on the requirements and working out the capabilities of the applications you will integrate with before choosing which option to take. As a general rule I would recommend not jumping too quickly to option 1 or 1.5. As an integration guy I usually frown upon these kinds of options because of the way they couple applications and create long-term problems, even though they might work initially. I think the other options (2-4) will be relatively easy to choose between once the requirements are elaborated, but I would only choose option 1 or 1.5 in niche cases, and only with full buy-in from your architecture team and a justifiable reason that has been documented well enough to explain later when someone comes along and asks WTF?
One other factor to consider which we didn't touch on much above: I have kind of assumed you have an open toolset on today's typical CRM and Azure project. It may be that your project has constraints which influence your decision to choose one option over another. I hope in those cases the above considerations will help you validate the choice you make, or give you some ammunition if you feel you should challenge the constraint and consider another option.