Thursday, December 30, 2021

Virtual tables in Microsoft Dataverse and Dynamics 365 Customer Engagement

Virtual entities in Dynamics CRM have been around for more than 4 years and were introduced with the 9.x release of Dynamics CRM Online. Virtual tables (entities) allow you to integrate external data into Dataverse without having to replicate the data and without having to write client-side script to visualize it. Virtual tables provide an abstraction layer that allows for real-time, on-demand access to data wherever it resides; for end users it will look as if the data resides in Dataverse.

To be honest, I hadn't paid a lot of attention to virtual entities due to their limitations and the fact that there weren't a lot of improvements from Microsoft before 2021 - some of these limitations still exist, such as support for organization-owned tables only, no auditing or change tracking, no queue support, and the fact that a virtual table cannot represent an activity. But a lot has changed in the last 12 months - for example, CRUD support for virtual table providers went GA in March 2021 - so I decided to take a look at it in the last couple of days. Another trigger was a blog post from @jukkan on Virtual Dataverse tables with no code, via Connectors, which I had already marked for review a couple of months ago.

Virtual Table Data Providers act as the glue between the external data and your virtual tables in Dataverse. There are currently four different flavors of Virtual Table Data Provider: OData, custom-built using plugins, Cosmos DB and the Power Platform connector for SQL, with a number of them still in preview.


Dataverse ships with an OOTB OData v4 Data Provider but I personally haven't seen any practical use case for this specific connector. So let's take a look at the current status of the three other data providers.

Custom Virtual Data Provider
This is really a pro-developer story: .NET developers have the option of creating custom virtual table data providers to integrate external data source types that are not supported by an existing data provider. Sample: Custom virtual table provider with CRUD operations contains the sample code for the plugins that you need to implement as well as how to register them in Dataverse, but I found the walkthrough - Custom Virtual Table provider with CRUD operations: Introduction and scenario - part 1 of 6 from @capuanodanilo a lot more helpful since it outlines all of the different steps in a more elaborate manner.

Azure Cosmos DB SQL API Data Provider
The Azure Cosmos DB Data Provider has been in preview for quite a while already and you will need to install it from AppSource - Azure Cosmos DB for DocumentDB API Data Provider. Based on a recent blog post from @temmy_raharjo - [Virtual Entity - Preview] Setup Azure Cosmos DB SQL API Data Provider - the current release seems to be working again (I got stuck with the setup but will add more details when I find time to get it working).

Virtual Connector Data Provider
The Virtual Connector Provider (VCP) for tabular connectors entered public preview in August 2021 (no GA date mentioned). The VCP preview currently supports only SQL Server (both on-premises and Azure SQL), but the release plan mentions that it will allow you to use Microsoft Dataverse virtual tables with multiple supported tabular connectors (e.g. SQL Server, OneDrive, Excel, SharePoint, etc.). Similar to the Cosmos DB Data Provider, you will first need to install the solution from AppSource.



I got everything working using the walkthrough mentioned in Create virtual tables using the virtual connector provider (preview), but overall the configuration experience is still confusing because you have to switch between the new and old solution designer experience (I think it is even safest to just do everything using the old solution designer).

I also noticed that when you use Azure SQL with the Virtual Connector Provider you might need to add the IP addresses of the Dataverse back-end servers to the Azure SQL firewall whitelist - maybe because my Azure SQL is in a different tenant/subscription than the Dataverse environment.

After completing the configuration, the virtual tables initially did not work and when looking at the plugin trace logs I noticed the exception below. After adding both IP addresses (probably two back-end servers) to Azure SQL, everything started working.


Even though the article Create virtual tables using the virtual connector provider (preview) mentions that you can make the configuration solution aware, I think you first need to test everything thoroughly, especially if you want to automate solution deployment and have specific naming conventions around entities/tables.

The fact that you need to share a connection with all users in your organization is also something that Microsoft will change - a concern which is also raised in the blog post Dataverse virtual connector provider for SQL Server. Awesome?, which also provides some interesting workarounds. I think that in a standard configuration only the service principal used in the initial setup should be given access to SQL, without a need to explicitly share the connection with everyone in the organization.



Summary
Virtual tables indeed show great promise but the Virtual Connector Provider is still a little too rough around the edges to use with confidence. As we near GA, I hope additional improvements will be made available, especially around the security aspect.



Sunday, December 26, 2021

Random notes on the Microsoft Dataverse Web API - background and historical perspective

For those new to Dataverse - previously known as the Common Data Service (CDS) - it might not seem so obvious, but the modern Dataverse Web API was one of the big features of the Dynamics CRM 2016 release. So it has seen a bit of mileage already.

The goal of the D365 Web API was to make it easier to create applications across a wide variety of platforms, devices and programming languages. The Web API is extremely well suited for developing client-side components using either JavaScript or TypeScript, but it was also meant for building applications or services in other programming languages such as Python, Ruby or Java, or for development on non-Microsoft platforms such as mobile (Android or iOS) or cloud applications which need to interact with Dynamics CRM Online or on-premises.

It is interesting to see that the Web API still relies on the Organization Service, as the expectation at the time was that the Web API would fully replace the Organization Service quite soon. As mentioned before, the Web API was released together with CRM 2016, which had version number 8.0. For an overview of the different on-premises version numbers see Microsoft Dynamics 365 on-premises cumulative updates, and to understand the way that version numbers work in the cloud see Dynamics 365 Customer Engagement release version transparency. With Microsoft sticking to the 9.x version for a number of releases now, the statement that the CRM 2011 SOAP Service Endpoint will be turned off as of v10 might still hold true.


The original OData v2 CRM endpoint, which is now deprecated, only supported CRUD operations, whereas the new Web API also supports the special messages and functions typical for Dynamics 365/CRM. To see the full list of actions available you can take a look at the metadata document https://[yourenvironment].crm4.dynamics.com/api/data/v9.0/$metadata - below is a screenshot with the QualifyLead action.

One of the interesting functions that you can call in this way is RetrieveTotalRecordCount, e.g. https://[yourorg].crm4.dynamics.com/api/data/v9.1/RetrieveTotalRecordCount(EntityNames=['account','contact']), which provides the quickest way to retrieve the number of records for an entity/table.
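As an illustration, a minimal client-side sketch of calling this function could look like the snippet below - the organization URL and the bearer token are placeholders you need to replace with your own values.

// Minimal sketch: call the RetrieveTotalRecordCount function through the Web API.
// Assumes you already obtained an OAuth bearer token for your environment.
var orgUrl = "https://yourorg.crm4.dynamics.com";
var token = "<access token obtained via Azure AD>";

fetch(orgUrl + "/api/data/v9.1/RetrieveTotalRecordCount(EntityNames=['account','contact'])", {
    headers: {
        "Authorization": "Bearer " + token,
        "Accept": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0"
    }
})
    .then(function (response) { return response.json(); })
    // The response contains an EntityRecordCountCollection with the record count per table.
    .then(function (result) { console.log(result.EntityRecordCountCollection); });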

OAuth authentication also became the standard authentication model for the Dynamics 365/Dataverse Web API. For code running within the context of Dataverse/CRM - e.g. web resources or PCF controls - you don't need to handle authentication separately since the authentication context is passed on. For back-end applications you will need to register your application with Azure Active Directory to use the new authentication model.
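For example, from within a web resource or form script you can simply use the built-in Xrm.WebApi client API and let the platform handle the token - a minimal sketch:

// Minimal sketch: inside a model-driven app the authentication context is handled for you.
Xrm.WebApi.retrieveMultipleRecords("account", "?$select=name&$top=3").then(
    function (result) {
        // Log the name of each returned account record.
        result.entities.forEach(function (account) { console.log(account.name); });
    },
    function (error) { console.log(error.message); }
);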





Thursday, December 16, 2021

Azure Well Architected Framework for Dynamics 365 and Power Platform solution architects

As a Dynamics 365 CE solution architect, a basic understanding of the Azure cloud platform components is a must. Because Dynamics CRM Online (and Dataverse) is a shared resource operated by Microsoft, there are limitations on how much time and resources your data processing and business logic can consume. By offloading your code to the Azure cloud, you remove these resource limits and you can also benefit from specific innovations available in the Azure cloud (e.g. AI capabilities in Azure Cognitive Services). Microsoft offers a number of standard integrations which enable these scenarios - one example is the ability of plugins to pass their execution context to one or more Azure components.

As a solution architect you will be working with developers and technical architects to translate your proposed solution architecture into an integrated system, so a good initial understanding of the Azure cloud is essential. If you have already implemented a number of solution architectures combining Azure resources and Dynamics CE Online, it might make sense to revisit these setups once in a while since things change rapidly in the cloud. This is where the Azure Well-Architected Framework comes in.

The Azure Well-Architected Framework is a set of guiding principles that can be used to improve the quality of a workload hosted in Microsoft Azure. The framework consists of five pillars of architecture excellence: 

  • Cost optimization
  • Operational excellence
  • Performance efficiency
  • Reliability
  • Security

Incorporating these pillars helps produce high-quality, stable, and efficient cloud architecture.


Friday, December 03, 2021

Dataverse search: the good, the bad and the ugly

The enhanced search experience for model-driven apps (initially called relevance search) on Dataverse and Dynamics 365 CE was announced in November 2020 and you could start using it a couple of months later. Recently Microsoft announced some changes (and improvements) for relevance search in Goodbye Relevance search, hello Dataverse search!

When you plan to use Dataverse search, there are a number of things that you need to keep in mind which are not so obvious from the Microsoft documentation, so let's dive right in and look at the good, the bad and the ugly of Dataverse search.


Dataverse search - The Good

The search box has moved in the UI and is now very easy for end users to find. When you start typing text in the Dataverse search box, you will see a list of suggested results as well as previous queries. The way that the suggested results are shown can even be slightly changed as outlined in this blog post - Quick tip - Data in the inline suggestion view of relevance search in Dynamics 365 CRM. Dataverse search is powered by Azure Cognitive Search so you get support for fuzzy search and advanced capabilities like word stemming and word breaking. For more documentation on the end-user experience see Search for tables and rows by using Dataverse search.



With the new solution explorer update it is also a lot easier to configure which entities/tables you want to be indexed for search. The settings are solution aware, so if you have multiple solutions installed in your Dataverse environment be aware of solution layering. To get an overview of all indexed tables, select the Default solution in the maker portal and click Manage Search Index - for more details see Configure Dataverse search for your environment.


Dataverse search is also available through a REST API, meaning that you can also use it in canvas apps and Power Automate (see Use Relevance Search API in Power Automate & Canvas App - Part 1). To begin using Dataverse search, your application simply needs to send POST requests to the /api/search/v1.0/query endpoint - for the complete documentation see Search across table data using Dataverse search in Microsoft docs.
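A minimal sketch of such a query is shown below - again with a placeholder organization URL and bearer token, and only the search term itself (optional parameters such as entities, filter and top are described in the Microsoft docs):

// Minimal sketch: query the Dataverse search endpoint with a search term.
var orgUrl = "https://yourorg.crm4.dynamics.com";
var token = "<access token obtained via Azure AD>";

fetch(orgUrl + "/api/search/v1.0/query", {
    method: "POST",
    headers: {
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json"
    },
    body: JSON.stringify({ search: "contoso" })
})
    .then(function (response) { return response.json(); })
    // The response contains the matching rows together with their relevance score.
    .then(function (results) { console.log(results); });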

Dataverse search - The Bad

Although there is no specific license required for Dataverse search - and it works both for environments with and without Dynamics 365 apps enabled - there might be a cost associated with it.

When you enable Dataverse search, additional storage is required for creating the indexes in the Dataverse database storage. You will probably not notice this in your development environments since they might not contain sufficient data. But in production environments the increase can be quite significant - I have seen Dynamics 365 Customer Service environments where as much as 16% of the database usage is taken by the RelevanceSearch table.

In my opinion it would be better to add a more prominent warning about the possible storage increase to the main documentation pages about Dataverse search - there is however a mention in the Free up storage space documentation - remove unrequired tables and columns from Dataverse search.


With the 2021 release wave 2 (after October 2021) Microsoft enabled Dataverse search by default for all production environments, so you probably want to find out what the impact is on your storage consumption.

I also did not expect the extra storage to be counted as database storage, which at roughly €40 per GB per month for additional database storage does not come cheap.

Dataverse search - The Ugly

Now suppose that you are suddenly faced with an increase in storage in one of your production environments after you activated Dataverse search. Unfortunately, just disabling Dataverse search again is probably not an option because your end users probably like the functionality. So your only option is to start reviewing which tables and which fields to add to the index. The entities which are enabled for search are clearly visible from the administration screen, but unfortunately finding out which fields are added to the search index is a little more involved.



So if we take the example of the "accounts" table and look at the "Quick find" view - out of the box there will be 8 fields marked as added to the index - account name, account number, primary contact (lookup), address 1 city, main phone and e-mail. If you want to modify this, e.g. remove account number, you create a new solution, add the "accounts" table and "quick find" view to the solution (via add existing components) and remove the column/field from the Quick find view columns and from the "Find by" settings. After you make these changes, you will notice that the number of indexed fields decreases.
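If you want to check this programmatically, you can also retrieve the Quick Find view definition through the Web API and inspect its view columns and find criteria - a minimal sketch, assuming a valid bearer token and relying on querytype 4 corresponding to Quick Find views:

// Minimal sketch: retrieve the Quick Find view for the account table and inspect
// the layout (view columns) and fetch XML (find/filter criteria).
var orgUrl = "https://yourorg.crm4.dynamics.com";
var token = "<access token obtained via Azure AD>";

fetch(orgUrl + "/api/data/v9.1/savedqueries?$select=name,layoutxml,fetchxml" +
      "&$filter=querytype eq 4 and returnedtypecode eq 'account'", {
    headers: { "Authorization": "Bearer " + token, "Accept": "application/json" }
})
    .then(function (response) { return response.json(); })
    .then(function (result) { console.log(result.value); });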



Unfortunately in bigger organizations, you are not allowed to make changes in production environments without following a strict process. 

So you end up iteratively removing tables from Dataverse search and modifying the fields selected for each table to assess the impact - finding the balance between storage increase and functionality is not always easy. It would be a lot easier if you could see how much each table, and preferably also each field, contributes to the RelevanceSearch table storage size so that you know which changes are required.

The amount of storage required depends on the number of rows in the table, the number of indexed columns/fields and the amount of data stored in a specific column - a text field/column with many characters will consume more space than a text field with only a few characters. So if you are indexing e-mail activities, adding the description field might cause a significant increase. The indexed fields include Find columns (isSearchable), View columns (isRetrievable) and Filter criteria (isFilterable).



Tuesday, November 09, 2021

Dynamics 365 and Power Platform monthly reading list October 2021

Power Platform and Dynamics 365 release 2021 wave 2

Technical topics (Configuration, customization and extensibility)


Topics for Dynamics 365 Business Applications Platform consultants, project managers and power users

Tuesday, August 10, 2021

Dynamics 365 and Power Platform monthly reading list August 2021

Power Platform and Dynamics 365 release wave

Technical topics (Configuration, customization and extensibility)
Topics for Dynamics 365 Business Applications Platform/Power Platform consultants, project managers and power users

Monday, August 02, 2021

Leverage environment variables in Power Platform and Dynamics 365 projects

Environment variables allow you to easily ship configuration data with Power Platform solutions and use it in different solution components such as canvas apps, Power Automate cloud flows and model-driven apps. Environment variables are a fairly new component type: they were introduced in public preview in November 2019 and released to GA around summer 2020. They are a good alternative to the typical custom "configuration entity/table" that you use in a project - check out Why you should use environment variables for some practical experience around it.


The new solution import experience supports setting environment variables during the import of the solution, but it will only prompt for a value when you did not set a default value or a current value, so don't forget to remove the current value (click the "..." next to the current value to remove it before exporting your managed solution) as documented in Enter new values while importing solutions. On the target environment where you import your managed solution, you will see a red bar appear if there are environment variables missing values.



You will be able to use the created environment variables within all the different types of Power Platform components, but so far I have used them mainly in Power Automate cloud flows, where you can use environment variables in expressions (see screenshot below) - see the references section for other example use cases.
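If you need to read an environment variable from custom code (for example a web resource in a model-driven app), one option is to query the underlying Dataverse tables - a minimal sketch, where "new_MyApiUrl" is a hypothetical schema name and the relationship name used in the expand is an assumption based on the standard environment variable data model:

// Minimal sketch: read the current value of an environment variable, falling back to its default value.
// "new_MyApiUrl" is a hypothetical schema name - replace it with your own variable.
Xrm.WebApi.retrieveMultipleRecords(
    "environmentvariabledefinition",
    "?$select=schemaname,defaultvalue" +
    "&$filter=schemaname eq 'new_MyApiUrl'" +
    "&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)"
).then(function (result) {
    var definition = result.entities[0];
    var values = definition.environmentvariabledefinition_environmentvariablevalue;
    var currentValue = (values && values.length > 0) ? values[0].value : definition.defaultvalue;
    console.log(currentValue);
});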





References:






Thursday, July 22, 2021

Dynamics 365 and Power Platform monthly reading list July 2021

Power Platform and Dynamics 365 release wave 2
Technical topics (Configuration, customization and extensibility)
Topics for Dynamics 365 Business Applications Platform/Power Platform consultants, project managers and power users

Monday, June 21, 2021

Quick tip: don't forget to unblock downloaded source code files when using Visual Studio

A couple of weeks ago, I suddenly received the exceptions below when trying to build a VS2019 solution that I downloaded from GitHub.


The solution is indicated in the exception message - you need to right-click the file, open its properties and tick the "Unblock" checkbox at the bottom of the dialog. But when you have a lot of files, this is quite tedious. Luckily there is a PowerShell cmdlet, Unblock-File, to help you solve this. Navigate to the folder where your source code is located and just run "dir -recurse | Unblock-File".




Friday, June 18, 2021

Quick tip: submitting web forms to Dataverse using Power Automate Cloud Flows

You can quite easily integrate data coming from forms on your website with Dataverse using some simple JavaScript and Power Automate cloud flows.

The first step is to create a Power Automate flow with a "When a HTTP request is received" trigger and add the form fields that you want to capture as the JSON payload. Remember to start creating your flow from within a solution so that you can leverage the ALM capabilities of Dataverse and use the new Dataverse connector (current environment).

The request body JSON schema should contain all the web form fields that you want to send to Dataverse/Dynamics 365 - below is the JSON schema that I generated from a sample payload.

{
    "type": "object",
    "properties": {
        "firstname": {
            "type": "string"
        },
        "lastname": {
            "type": "string"
        },
        "phone": {
            "type": "string"
        },
        "email": {
            "type": "string"
        },
        "message": {
            "type": "string"
        }
    }
}

After you save the flow, the "HTTP POST URL" will be generated automatically - this is the URL that you will use in the JavaScript code that you embed on your website to send the web form data to Dataverse. You might want to consider using Azure API Management to protect the flow and expose the API indirectly.

 

The last step is to add the JavaScript code snippet below to your website to submit the form data to the Power Automate cloud flow endpoint or the Azure API Management endpoint.

// Reads the form field values and posts them as JSON to the cloud flow endpoint.
function submitform() {
    var emailvalue = document.getElementById('emailvalue').value;
    var firstnamevalue = document.getElementById('firstnamevalue').value;
    var lastnamevalue = document.getElementById('lastnamevalue').value;
    var phonevalue = document.getElementById('phonevalue').value;
    var messagevalue = document.getElementById('messagevalue').value;

    var xmlhttp = new XMLHttpRequest();
    // Replace with the HTTP POST URL generated for your cloud flow (or your API Management endpoint).
    xmlhttp.open("POST", "yourcloudflowurl");
    xmlhttp.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
    // Only thank the visitor once the request has completed successfully.
    xmlhttp.onload = function () {
        if (xmlhttp.status >= 200 && xmlhttp.status < 300) {
            alert("Thank you");
        }
    };
    xmlhttp.send(JSON.stringify(
        {
            firstname: firstnamevalue,
            lastname: lastnamevalue,
            phone: phonevalue,
            email: emailvalue,
            message: messagevalue
        }
    ));
}

Wednesday, June 16, 2021

Introduction to the Dual Write framework for Dynamics 365

Summary: The Dynamics 365 Business Applications platform consists of a number of different first-party apps, with CRM (Dynamics 365 Sales & Customer Service) and ERP (Dynamics 365 Supply Chain Management) being core components which are typically used across a large number of industries and sectors. The dual-write framework allows for deep integration of Dynamics 365 CE and Finance and Operations (F&O).


In most application architectures at companies we have this classical divide between CRM software on the one hand and ERP software on the other hand. Different vendors and technology stacks make it difficult to bridge the divide between the processes and the data models in these core applications.


But customers don't care what your application architecture looks like or whether your processes need to cross application boundaries (see the simplified schema below).


The dual-write framework in Dynamics 365 is just one of the different integration mechanisms available in the platform, but it is probably the one which provides the deepest level of integration. Dual-write is an out-of-the-box infrastructure that provides near real-time and bi-directional integration between model-driven apps in Dynamics 365 CE (and Dataverse) and Dynamics 365 Finance and Operations. Dual-write provides tightly coupled, bi-directional integration which offers a number of interesting functionalities (but is not necessarily a fit for your scenarios, so it is important to first assess the impact on both sales and customer service processes and financial processes). Changes in CE cause writes to FinOps and changes in FinOps cause writes to CE; this automated data flow provides an integrated user experience across the apps. (See Dual-write overview in Microsoft docs for more detailed information.)

Dual-write was made generally available (GA) in March 2020 and has been evolving at a rapid pace with monthly updates - see What's new or changed in dual-write. It consists of a number of entity maps which are provided out of the box by Microsoft, but you can adapt or even extend these entity maps in Dynamics 365 FinOps. In my opinion, the configuration and setup should typically be done by FinOps consultants/architects since the process and data model from ERP is the most complex one, but a basic understanding of Dynamics 365 Customer Engagement is quite useful.

Enabling dual-write for existing Finance and Operations apps is quite thoroughly documented, but in my personal experience there were still some quirks during the setup that I did not expect (experience from last year).


After installing the Dual Write solution on Dataverse (or Dynamics 365 CE) you will see quite a lot of changes that you need to be aware of as a CE consultant or CE solution architect like:

  • Enhanced currency functions (FinOps and CE need to have the same currency precision - see Currency data-type migration for dual-write)
  • Changes in CRM will be propagated to Dynamics 365 FinOps using a large number of plugins; if a transaction fails in FinOps because of missing data (or wrong configuration), your changes will not be saved in CRM.
  • Introduction of new concepts like company (legal entity in FinOps) & party
  • As part of the integrated customer master, account numbers are now required and need to be filled in. There is no global number sequence across CE and FinOps; you need to handle the autonumbering in both apps and use a common pattern. Account number in CE is mapped to Customer account number in FinOps; it needs to be unique per legal entity and is the primary key of the customer table in FinOps.
  • Exposure to financial and tax data in Dynamics 365 CE, e.g. customer groups, which are used in FinOps to share characteristics for companies such as payment terms, number of days until invoices are due, etc. This is sometimes also used for intercompany invoicing for internal/external customers.
  • Advanced pricing and on hand inventory functions





Thursday, April 15, 2021

Getting started with Azure Data Factory for Dynamics365 CRM consultants

The majority of Dynamics 365 CRM projects include some initial data load/migration activities, as customers don't start from a blank sheet. Besides the initial load of data there might also be a need for data integrations. There are a number of solutions for data migration or data integration with Dynamics 365 CRM - this post will explain why you should strongly consider Azure Data Factory as a possible option.

Besides the obvious option of writing code yourself, you can also consider a third-party solution, and there are many good solutions on the market to meet integration needs, e.g. Tibco Scribe or the KingswaySoft SSIS Integration Toolkit for Dynamics 365. But if you are looking for something with a minimal initial cost investment (no separate licensing fee), a quick start-up time and fully cloud based (no virtual machine/server environments needed), Azure Data Factory is a great fit.

This is why, in my opinion, technical Dynamics 365 CRM consultants should at least be aware of the capabilities of Azure Data Factory and be able to set up simple data pipelines.

Azure Data Factory is a managed cloud service meant for building complex, hybrid ETL/ELT pipelines that integrate data silos and can include big data and machine learning transformations.


Azure Data Factory provides an easy-to-use drag-and-drop interface where you use activities/tasks to transform, move and copy your data from over 100 supported data sources, both within the Microsoft stack and outside it (Salesforce, SAP, etc.). Azure Data Factory also has full support for running existing SSIS packages on its cloud infrastructure, and last but not least it has a flexible pricing model with low initial costs. As expected from an enterprise platform it also provides great DevOps integration - something which I will cover in a future blog post (always set up Git integration for Azure Data Factory - it will make the editing experience a lot easier - see the Tips & tricks section below for more details).

Below is a visualization of the different building blocks in  Azure Data Factory:

  • A data factory can have one or more pipelines which group together activities which define the actions to perform on your data. (See  Pipelines and activities in Azure Data Factory for more details)
  • These activities use datasets as inputs and outputs. A dataset identifies the data in a specific data store. (See Datasets in Azure Data Factory for more details)
  • Before you can create a dataset you need to specify the linked service to link your data store to the data factory. The dataset represents the structure of the data within the linked data store and the linked services defines the connection to the data source. (See Linked services in Azure Data Factory for more details)
  • The pipelines need to run on a compute infrastructure used by Azure Data Factory to allow for the different types of data integration capabilities. There are three types of integration runtime (Azure, self-hosted and Azure-SSIS) and depending on the required capabilities and network support you will need to use one or the other. (See Integration runtime in Azure Data Factory)


Tips & tricks:

  • ADF has a connector which supports both Dynamics 365 (cloud version) and the legacy on-premises versions of Dynamics CRM (version 2015 and higher) - see Copy data from and to Dynamics 365 (CDS/Dataverse) or Dynamics CRM by using Azure Data Factory for more details. To be able to use on-premises versions of Dynamics CRM, the environment needs to be set up with an Internet-facing deployment (IFD) - information on the ADF connector for on-premises CRM (Dynamics 365 and Dynamics CRM on-premises with IFD) is found on the same documentation page.
  • Setup Git integration for Azure Data Factory - by default, the Azure Data Factory user interface experience (UX) authors directly against the data factory service (live mode). Even when you are not deploying Azure Data Factory artifacts from source control you will benefit from the better developer experience when git integration is enabled.
  • For the copy activity, you are required to set the write behavior to Upsert when you use Dynamics 365 CRM as a destination (sink). You will need to provide an alternate key, or you can leave the alternate key blank when you use the internal ID of CRM within your mapping (see Azure Data Factory: Update an existing record using Dynamics 365 Internal Guid for more details).


Monday, April 12, 2021

Dynamics 365 and Power Platform quarterly reading list Q1 2021

Technical topics (Configuration, customization and extensibility)
Topics for Dynamics 365 Business Application Platform consultants, project managers and power users