Thursday, December 30, 2021

Virtual tables in Microsoft Dataverse and Dynamics 365 Customer Engagement

Virtual entities in Dynamics CRM have been around for more than 4 years and were introduced with the 9.x release of Dynamics CRM Online. Virtual tables (entities) allow you to integrate external data into Dataverse without having to replicate the data and without having to write client-side script to visualize it. Virtual tables provide an abstraction layer that allows for real-time, on-demand access to data wherever it resides; to end users it will look as if the data resides in Dataverse.

To be honest, I hadn't paid a lot of attention to virtual entities due to their limitations and the fact that Microsoft made few improvements to them before 2021. Some of these limitations of virtual tables still exist: only organization-owned tables, no auditing or change tracking, no queue support, a virtual table cannot represent an activity, etc. But a lot has changed in the last 12 months - for example, CRUD support for virtual table providers went GA in March 2021 - so I decided to take a look at it over the last couple of days. Another trigger was a blog post from @jukkan on Virtual Dataverse tables with no code, via Connectors, which I had already marked for review a couple of months ago.

Virtual Table Data Providers act as the glue between the external data and your virtual tables in Dataverse. There are currently four different flavors of Virtual Table Data Provider: OData v4, custom-built using plug-ins, Azure Cosmos DB, and the Power Platform connector for SQL - with a number of them still in preview.


Dataverse ships with an OOTB OData v4 Data Provider but I personally haven't seen any practical use case for this specific connector. So let's take a look at the current status of the 3 other data providers.

Custom Virtual Data Provider
This is really a pro-developer story: .NET developers have the option of creating custom virtual table data providers to integrate external data source types that are not supported by an existing data provider. Sample: Custom virtual table provider with CRUD operations contains the sample code for the plug-ins that you need to implement as well as how to register them in Dataverse, but I found the walkthrough - Custom Virtual Table provider with CRUD operations: Introduction and scenario - part 1 of 6 from @capuanodanilo - a lot more helpful since it outlines all of the different steps in a more elaborate manner.
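To give a feel for what these plug-ins look like, below is a minimal sketch of the Retrieve plug-in of such a provider, loosely following the pattern in the Microsoft sample (the column name and the external lookup helper are hypothetical; a full provider also implements RetrieveMultiple, Create, Update and Delete):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Minimal sketch of the Retrieve step of a custom virtual table data
// provider, registered on the Retrieve message of the data provider.
public class RetrievePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // The reference to the virtual row that the platform asks us to retrieve.
        var target = (EntityReference)context.InputParameters["Target"];

        // Map the external record onto the virtual table's columns.
        var entity = new Entity(target.LogicalName, target.Id);
        entity["new_name"] = GetExternalRecordName(target.Id); // hypothetical helper

        // Returning the row through this output parameter is what makes
        // it show up in Dataverse.
        context.OutputParameters["BusinessEntity"] = entity;
    }

    private string GetExternalRecordName(Guid id)
    {
        // Call your external REST service or database here.
        return "external record " + id;
    }
}
```

The key pattern is that the platform hands you the Target reference (or a Query for RetrieveMultiple) and expects the virtual row back in the BusinessEntity output parameter.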

Azure Cosmos DB SQL API Data Provider
The Azure Cosmos DB data provider has been in preview for quite a while already, and you will need to install it from AppSource (Azure Cosmos DB for DocumentDB API Data Provider). Based on a recent blog post from @temmy_raharjo - [Virtual Entity - Preview] Setup Azure Cosmos DB SQL API Data Provider - the current release seems to be working again. (I got stuck with the setup myself but will add more details when I find time to get it working.)

Virtual Connector Data Provider
The Virtual Connector Provider (VCP) for tabular connectors entered public preview in August 2021 (no GA date mentioned). The VCP preview currently supports only SQL Server (both Azure SQL and on-premises), but the release plan mentions that it will allow you to use Microsoft Dataverse virtual tables with multiple supported tabular connectors (e.g. SQL Server, OneDrive, Excel, SharePoint, etc.). Similar to the Cosmos DB data provider, you will first need to install the solution from AppSource.



I got everything working using the walkthrough mentioned in Create virtual tables using the virtual connector provider (preview), but overall the configuration experience is still confusing because you have to switch between the new and old solution designer experience (I think it is even safest to just do everything using the old solution designer).

I also noticed that when you use Azure SQL with the Virtual Connector Provider, you might need to add the IP addresses of the Dataverse back-end servers to the Azure SQL whitelist - maybe because my Azure SQL is in a different tenant/subscription than the Dataverse environment.

After completing the configuration, the virtual tables initially did not work. Looking at the plug-in trace logs, I noticed an exception indicating that the client IP addresses were being blocked. After adding both IP addresses (probably two back-end servers) to the Azure SQL firewall, everything started working.


Even though the article Create virtual tables using the virtual connector provider (preview) mentions that you can make the configuration solution-aware, I think you first need to test everything, especially if you want to automate solution deployment and have specific naming conventions around entities/tables.

The fact that you need to share a connection with all users in your organization is also something that Microsoft will change - a concern which is also raised in the blog post Dataverse virtual connector provider for SQL Server. Awesome?, which also provides some interesting workarounds. I think that in a standard configuration only the service principal used in the initial setup should be given access to SQL, without a need to explicitly share the connection with everyone in the organization.



Summary
Virtual tables indeed show great promise, but the Virtual Connector Provider is still a little bit too rough around the edges to use with confidence. As we near GA, I hope additional improvements will be made available, especially around the security aspect.

Sunday, December 26, 2021

Random notes on the Microsoft Dataverse Web API - background and historical perspective

For those new to Dataverse - previously known as the Common Data Service (CDS) - it might not seem so obvious but the modern Dataverse Web API was one of the big features of the Dynamics CRM 2016 release. So it's seen a bit of mileage already.

The goal of the D365 Web API was to make it easier to create applications across a wide variety of platforms, devices and programming languages. The Web API is extremely well suited for developing client-side components using either JavaScript or TypeScript, but it was also meant for building applications or services in other programming languages such as Python, Ruby or Java, and for development on non-Microsoft platforms such as mobile (Android or iOS) or cloud applications that need to interact with Dynamics CRM Online or on-premises.

It is interesting to see that the Web API still relies on the Organization Service, as the expectation at the time was that the Web API would fully replace the Organization Service quite soon. As mentioned before, the Web API was released together with CRM 2016, which carried version number 8.0. For an overview of the different on-premises version numbers, see Microsoft Dynamics 365 on-premises cumulative updates, and to understand the way that version numbers work in the cloud, see Dynamics 365 Customer Engagement release version transparency. With Microsoft sticking to 9.x version numbers for a number of releases now, the statement that the CRM 2011 SOAP service endpoint will be turned off as of v10 might still hold true.


The original OData v2 CRM endpoint, which is now deprecated, only supported CRUD operations, whereas the new Web API also supports the special messages and functions typical for Dynamics 365/CRM. To see the full list of available actions, you can take a look at the metadata document https://[yourenvironment].crm4.dynamics.com/api/data/v9.0/$metadata, where you will find actions such as QualifyLead.

One of the interesting functions that you can call in this way is RetrieveTotalRecordCount, e.g. https://[yourorg].crm4.dynamics.com/api/data/v9.1/RetrieveTotalRecordCount(EntityNames=['account','contact']), which provides the quickest way to retrieve the number of records for an entity/table.
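As a quick illustration, a minimal sketch of calling this function from .NET is shown below (the org URL and access token are placeholders; see the OAuth notes further down for obtaining a token):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch: call the RetrieveTotalRecordCount function through the Web API.
class RecordCountExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.BaseAddress = new Uri("https://yourorg.crm4.dynamics.com/api/data/v9.1/");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access token>");
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // The response contains an EntityRecordCountCollection with the
        // record count per requested table.
        var response = await client.GetAsync(
            "RetrieveTotalRecordCount(EntityNames=['account','contact'])");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```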

OAuth authentication also became the standard authentication model for the Dynamics 365/Dataverse Web API. For code running within the context of Dataverse/CRM - e.g. web resources or PCF controls - you don't need to handle authentication separately, since the authentication context is passed on. For back-end applications, you will need to register your application in Azure Active Directory to use the new authentication model.
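For example, a back-end (daemon) application could acquire a token with MSAL.NET roughly as follows - a sketch assuming you created an app registration with a client secret and a corresponding application user in Dataverse (all ids below are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

// Sketch of acquiring a Dataverse Web API token for a back-end application.
class TokenExample
{
    static async Task Main()
    {
        var app = ConfidentialClientApplicationBuilder
            .Create("<client id>")
            .WithClientSecret("<client secret>")
            .WithAuthority(AzureCloudInstance.AzurePublic, "<tenant id>")
            .Build();

        // The ".default" scope requests the permissions granted to the
        // app registration for this Dataverse environment.
        var result = await app.AcquireTokenForClient(
                new[] { "https://yourorg.crm4.dynamics.com/.default" })
            .ExecuteAsync();

        Console.WriteLine(result.AccessToken);
    }
}
```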

Thursday, December 16, 2021

Azure Well Architected Framework for Dynamics 365 and Power Platform solution architects

As a Dynamics 365 CE solution architect, a basic understanding of the Azure cloud platform components is a must. Because Dynamics CRM Online (and Dataverse) is a shared resource operated by Microsoft, there are limitations on how much time and resources your data processing and business logic can consume. By offloading your code to the Azure cloud, you remove these resource limits and can also benefit from specific innovations available in Azure (e.g. the AI capabilities in Azure Cognitive Services). Microsoft offers a number of standard integrations which enable these scenarios - one example is the ability of plug-ins to pass their execution context to one or more Azure components, as shown in the sketch below.
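To illustrate that last integration, here is a minimal sketch of a plug-in that forwards its execution context to Azure Service Bus through a service endpoint registered with the Plug-in Registration Tool (the endpoint id is a placeholder):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: post the plug-in execution context to the Azure component(s)
// listening on a registered service endpoint.
public class PostToAzurePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var endpointService = (IServiceEndpointNotificationService)serviceProvider
            .GetService(typeof(IServiceEndpointNotificationService));

        // Forward the full execution context to Azure Service Bus.
        endpointService.Execute(
            new EntityReference("serviceendpoint",
                new Guid("<service endpoint id>")),
            context);
    }
}
```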

As a solution architect you will be working with developers and technical architects to translate your proposed solution architecture into an integrated system, and therefore a good initial understanding of the Azure cloud is essential. If you have already implemented a number of solution architectures combining Azure resources and Dynamics CE online, it might make sense to revisit these setups once in a while, since things change rapidly in the cloud. This is where the Azure Well-Architected Framework comes in.

The Azure Well-Architected Framework is a set of guiding principles that can be used to improve the quality of a workload hosted in Microsoft Azure. The framework consists of five pillars of architecture excellence: 

  • Cost optimization
  • Operational excellence
  • Performance efficiency
  • Reliability
  • Security

Incorporating these pillars helps produce high-quality, stable, and efficient cloud architecture.

Friday, December 03, 2021

Dataverse search: the good, the bad and the ugly

The enhanced search experience for model-driven apps (initially called relevance search) on Dataverse and Dynamics 365 CE was announced in November 2020, and you could start using it a couple of months later. Recently Microsoft announced some changes (and improvements) for relevance search in Goodbye Relevance search, hello Dataverse search!

When you plan to use Dataverse search, there are a number of things to keep in mind which are not so obvious from the Microsoft documentation, so let's dive right in and look at the Good, the Bad and the Ugly of Dataverse search.


Dataverse search - The Good

The search box has moved in the UI and is now very easy for end users to find. When you start typing text in the Dataverse search box, you will see a list of suggested results as well as previous queries. The way that the suggested results are shown can even be slightly changed, as outlined in this blog post - Quick tip - Data in the inline suggestion view of relevance search in Dynamics 365 CRM. Dataverse search is powered by Azure Cognitive Search, so you get support for fuzzy search and advanced capabilities like word stemming and word breaking. For more documentation on the end-user experience, see Search for tables and rows by using Dataverse search.



With the new solution explorer update it is also a lot easier to configure which entities/tables you want to be indexed for search. The settings are solution aware, so if you have multiple solutions installed in your Dataverse environment, be aware of solution layering. To get an overview of all indexed tables, select the Default solution in the maker portal and click Manage Search Index - for more details see Configure Dataverse search for your environment.


Relevance search is also available through a REST API, meaning that you can also use it in Canvas Apps and Power Automate (see Use Relevance Search API in Power Automate & Canvas App - Part 1). To begin using Dataverse search, your application simply needs to send POST requests to the /api/search/v1.0/query endpoint - for the complete documentation see Search across table data using Dataverse search on Microsoft docs.
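A minimal sketch of such a query from .NET could look as follows (the org URL and access token are placeholders; the body follows the documented query contract):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch: run a Dataverse search query through the REST endpoint.
class SearchExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access token>");

        // "search" is the search term; "top" limits the number of results.
        var body = new StringContent(
            "{ \"search\": \"maria\", \"top\": 10 }",
            Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "https://yourorg.crm4.dynamics.com/api/search/v1.0/query", body);
        response.EnsureSuccessStatusCode();

        // The response contains the matching rows plus metadata such as
        // the table name and match highlights.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```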

Dataverse search - The Bad

Although there is no specific license required for Dataverse search - and it works both for environments with and without Dynamics 365 apps enabled - there might be a cost associated with it.

When you enable Dataverse search, additional storage is required for creating the indexes in the Dataverse database storage. You will probably not notice this in your development environments since they might not contain sufficient data. But in production environments the increase can be quite significant - I have seen Dynamics 365 Customer Service environments where as much as 16% of the database usage is taken by the RelevanceSearch table.

In my opinion it would be better to add a more prominent warning about the possible storage increase to the main documentation pages about Dataverse search - there is, however, a mention in the Free up storage space documentation: remove unrequired tables and columns from Dataverse search.


With the Wave 2 2021 release (after October 2021), Microsoft enabled Dataverse search by default for all production environments, so you probably want to find out what the impact is on your storage consumption.

I also did not expect the index to be stored in the database, which at roughly €40 per GB/month for additional database storage does not come cheap.

Dataverse search - The Ugly

Now suppose that you are suddenly faced with an increase in storage in one of your production environments after you activated Dataverse search. Unfortunately, just disabling Dataverse search again is probably not an option because your end users will have come to like the functionality. So your only option is to start reviewing which tables and which fields are added to the index. The entities which are enabled for search are clearly visible from the administration screen, but unfortunately finding out which fields are added to the relevance search index is a little more involved.



So if we take the example of the "accounts" table and look at the "Quick find" view, out of the box there will be 8 fields marked as added to the index, including account name, account number, primary contact (lookup), address 1: city, main phone and e-mail. If you want to modify this - e.g. remove account number - you create a new solution, add the "accounts" table and the "Quick find" view to the solution (via add existing components), and remove the column/field from both the Quick find view and the "Find by" settings. After you have made these changes, you will notice that the number of indexed fields decreases.
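If you want to verify the Quick find definition without clicking through the designer, you can also inspect it through the Web API - a sketch, assuming query type 4 identifies Quick Find views and using placeholder org URL and token:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch: retrieve the Quick Find view of the account table. The fetchxml
// column shows both the view columns and the "Find by" filter columns.
class QuickFindExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.BaseAddress = new Uri("https://yourorg.crm4.dynamics.com/api/data/v9.1/");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access token>");

        var response = await client.GetAsync(
            "savedqueries?$select=name,fetchxml" +
            "&$filter=returnedtypecode eq 'account' and querytype eq 4");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```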



Unfortunately in bigger organizations, you are not allowed to make changes in production environments without following a strict process. 

So you are left with an iterative process of removing tables from Dataverse search and modifying the fields selected for each table to assess the impact, and finding the balance between storage increase and functionality is not always easy. It would be a lot easier if you could see how much each table, and preferably also each field, contributes to the RelevanceSearch table storage size, so that you know which changes are required.

The amount of storage required depends on the number of rows in the table, the number of indexed columns/fields and the amount of data stored in a specific column - a text field/column with many characters will consume more space than a text field with only a few characters. So if you are indexing e-mail activities, adding the description field might cause a significant increase. The indexed fields include Find columns (isSearchable), View columns (isRetrievable) and Filter criteria (isFilterable).
