Tuesday, June 16, 2015

Combining Dynamics CRM Online and Power BI Preview

I am a strong believer in the concept of “in-context analytics” and, as I outlined in Mindful apps – putting people at the center supported by data, I consider analytics and business intelligence essential in providing business value. So I was quite interested when I first learned about the Power BI preview with its built-in support for Dynamics CRM Online (for a great write-up about it check out Previewing the New Power BI Experience with Dynamics CRM).

When I started playing around with it I was surprised that it seemed to do things quite differently from Power BI for Office 365 since I thought it was simply the next release of the existing Power BI for Office 365 offering. Apparently this is not the case.

Power BI Preview seems to be quite different from Power BI for Office 365 – for a detailed description of the differences check out Power BI vs Power BI Preview: what’s the difference. Here’s a quick summary:
  • Power BI for Office 365 is based on technologies such as Excel and SharePoint and is an integrated part of Office 365, whereas Power BI Preview is built on a separate platform. 
  • Power BI Preview uses the browser and the Power BI Designer as design tools for creating dashboards and reports, whereas Power BI for Office 365 mainly relies on Excel as a design tool.
  • Power BI Preview also exposes an API which allows you to push data into the Power BI service – for more information check out the Power BI Developer Center, and for a good introduction check out Developing for Power BI Overview (Video). This is, in my view, the key enabler for real-time analytics on your data. To stay up to date make sure that you follow the Power BI Development blog. (A minimal sketch of pushing rows through this API is shown after this list.)
  • Power BI Preview has some new data visualizations available, such as single-number card tiles, combo charts, funnel charts, gauge charts, filled maps and tree maps (check out Visualization types available in Power BI Reports).
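As mentioned in the API bullet above, pushing data into the Power BI service boils down to an authenticated REST call against a push dataset. Below is a minimal Python sketch using only the standard library; the access token, dataset id and table name are placeholders, and you should verify the exact endpoint and payload against the Power BI Developer Center documentation.

```python
import json
import urllib.request

# Placeholders - replace with an Azure AD access token for the Power BI API,
# the id of a push dataset you created, and the name of a table in that dataset.
ACCESS_TOKEN = "<your-oauth2-access-token>"
DATASET_ID = "<your-dataset-id>"
TABLE_NAME = "SalesFigures"  # hypothetical table name

url = "https://api.powerbi.com/v1.0/myorg/datasets/{0}/tables/{1}/rows".format(
    DATASET_ID, TABLE_NAME)

# Rows are pushed as a simple JSON payload; column names must match the table schema.
payload = {"rows": [{"Region": "EMEA", "Amount": 1250.0},
                    {"Region": "APAC", "Amount": 980.5}]}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"},
    method="POST")

with urllib.request.urlopen(request) as response:
    print("Push returned HTTP", response.status)
```

Once rows arrive this way, tiles built on the dataset can update without a scheduled refresh, which is what makes the API interesting for real-time scenarios.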


If you check out the official documentation, Use Power BI with Microsoft Dynamics Online (TechNet), it seems to focus on the new Power BI Preview, but the Microsoft Dynamics CRM templates for Power BI that you can download for free from PinPoint – listed in the second section of the page – seem to be based on Power BI for Office 365. (Use Google Chrome to see the download link – I did not see it when using Internet Explorer 11.)

When you actually try to use it in practice together with Dynamics CRM Online, however, you will encounter some serious limitations which will hopefully be resolved by the summer release:



My guess is that the way forward will be Power BI Preview (call it Power BI 2.0) and that it will replace Power BI for Office 365 – you can already see it appearing in the license management section of Office 365 (see screenshot below). But it is still a preview and no specific release date has been announced, so for now stick with Power BI for Office 365.



References:

Tuesday, June 09, 2015

Getting to grips with Dynamics CRM releases, updates and build numbers

For the past couple of weeks I have been working with Dynamics CRM. One of the things which is always challenging when starting to learn a new product is getting to understand the different versions and the changes between them. When Microsoft was still on a 3-year release cycle for its products this was quite easy, but most Microsoft products are now on a much more frequent release schedule and Dynamics CRM is no exception.
Updates and improvements to Dynamics CRM are released twice a year – in what is commonly referred to as the spring and fall release – see Microsoft Dynamics CRM – Roadmap for 2015. Given Microsoft’s new “Cloud first” credo, these updates can be a cloud-only release, as was the case with the Spring 2015 (Carina) release. For Dynamics CRM Online you are required to be on the current version (n) or the prior version (n-1), so you have the choice to skip one update – see Manage Dynamics CRM Online Updates. Dynamics CRM on premise follows the standard lifecycle that you are accustomed to (see Microsoft Dynamics Support Lifecycle Policy FAQ and Microsoft Product Lifecycle Search for Dynamics CRM).



To make things a little more interesting, the Dynamics CRM product team seems to have chosen stars and constellations as code names for the different releases. Code names of the same genre are also used for products closely related to Dynamics CRM, such as Dynamics Marketing, Social Engagement and Parature Knowledgebase.
Recently Microsoft also changed the naming conventions for its updates and explained the version/build numbers it is using now and for future releases – check out New naming conventions for Microsoft Dynamics CRM updates. The tables below summarize the versions released so far. As outlined in Greg Olsen’s blog post – Microsoft Dynamics CRM 2015 Roadmap – the next version of Dynamics CRM is code named Ara. Another interesting tidbit: “Not confirmed by Microsoft, but it is likely that On-Premises installations will have to wait for the CRM ‘ARA’ release during the Fall Wave in order to get the Carina new features and others.”
| Product Name | Version description | Version number | Release or Update | Code Name |
| --- | --- | --- | --- | --- |
| Microsoft Dynamics CRM Online | Fall ‘13 | 6.0.0 | Major release | Orion |
| Microsoft Dynamics CRM Online | Fall ‘13 | 6.0.1 | Incremental Update | - |
| Microsoft Dynamics CRM Online | Fall ‘13 | 6.0.2 | Incremental Update | - |
| Microsoft Dynamics CRM Online | Spring ‘14 | 6.1.0 | Minor release | Leo |
| Microsoft Dynamics CRM Online | 2015 Update (Fall ‘14) | 7.0.0 | Major release | Vega |
| Microsoft Dynamics CRM Online | 2015 Update 1 (Spring ‘15) | 7.1.0 | Minor release | Carina |
| Microsoft Dynamics CRM Online | t.b.d. | t.b.d. | t.b.d. | Ara |

Table 1. Releases of Microsoft Dynamics CRM Online
| Product Name | Version description | Version number | Release or Update | Code Name |
| --- | --- | --- | --- | --- |
| Microsoft Dynamics CRM (on premise) | 2013 | 6.0.0 | Major release | Orion |
| Microsoft Dynamics CRM (on premise) | 2013 UR1 | 6.0.1 | Incremental Update | - |
| Microsoft Dynamics CRM (on premise) | 2013 UR2 | 6.0.2 | Incremental Update | - |
| Microsoft Dynamics CRM (on premise) | 2013 SP1 | 6.1.0 | Minor release | Leo |
| Microsoft Dynamics CRM (on premise) | 2015 | 7.0.0 | Major release | Vega |
| Microsoft Dynamics CRM (on premise) | 2015 Update 0.1 | 7.0.1 | Minor release | Carina |
| Microsoft Dynamics CRM (on premise) | t.b.d. | t.b.d. | t.b.d. | Ara |

Table 2. Releases of Microsoft Dynamics CRM (on premise)
References:


Thursday, April 09, 2015

SharePoint Server 2013 and business intelligence scenarios

With all the emphasis on Microsoft Power BI, people seem to forget that there are still other options for setting up a business intelligence solution based on SharePoint, for those of you who can’t go all in on a cloud solution (because of regulations, corporate policies or other reasons). Don’t get me wrong – I do believe that if you are standardized on Microsoft you should follow their “Cloud First” credo. Listed below are a number of links to get you started.

SharePoint Deep Dive exploration: explaining duplicate detection in SharePoint Server 2013

This is the third post in a series which tries to delve a little deeper into the inner workings of SharePoint – for the previous posts check out:



SharePoint Server can detect near duplicates of documents and takes this into account when displaying search results. In this post I will delve a little deeper into the underlying techniques being used. An important thing to keep in mind is that the way duplicate documents are identified has changed across the different versions of SharePoint.

SharePoint Server 2007 detected duplicates using a commonly used technique called "shingling". This is a generic technique which allows you to identify duplicates or near duplicates of documents (or web pages). Shingling has been widely used in different types of systems and software to identify spam, detect plagiarism or enforce copyright protection. A shingle – more commonly referred to as a q-gram – is a contiguous subsequence of tokens taken from a document.
So if you want to see whether two documents are similar, you can look at how many shingles they have in common. You do, however, need to decide how long the subsequence of tokens should be – typically a value of 4 is used. This is formalized as S(d,w), the set of distinct shingles of width w contained in a document d. For the line “a rose is a rose is a rose” with w=4 we get the following shingles: “a rose is a”, “rose is a rose”, “is a rose is”. To compare the similarity of two sets, e.g. S(doc1) and S(doc2), the sets of distinct shingles of document 1 and document 2, you can use the Jaccard similarity index (or resemblance index). A Jaccard index of 0 means that the documents are completely dissimilar, whereas 1 points to identical documents. This would however mean that we would need to calculate the similarity index of each pair of documents – quite an intensive task – so to speed up processing a form of hashing is used (for more details take a look at the explanation about near duplicates and shingling).
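To make the mechanics concrete, here is a small Python sketch that computes the w-shingles and the Jaccard index for the example above; it is a toy illustration of the technique, not the algorithm SharePoint actually ships.

```python
def shingles(text, w=4):
    """Return the set of distinct w-shingles (word q-grams) of a text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity |A & B| / |A | B|: 0 = completely dissimilar, 1 = identical."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

doc1 = "a rose is a rose is a rose"
doc2 = "a rose is a rose"
s1, s2 = shingles(doc1), shingles(doc2)

print(sorted(s1))       # ['a rose is a', 'is a rose is', 'rose is a rose']
print(jaccard(s1, s2))  # 0.666... -> near duplicates
```

In a real system the shingle sets are reduced to a handful of hashes (e.g. via min-hashing) so that candidate duplicates can be found without comparing every pair of full shingle sets.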



As items in SharePoint 2007 were indexed, these hashes were stored in the search database. It is not really clear from the documentation whether these hashes relate only to the content of an item or to the properties as well (although the blog post Microsoft Office SharePoint Server 2007: Duplicate search results states that it is based only on the content of a document). In SharePoint Server 2007 these hashes were stored in the MSSDuplicateHashes table.

In SharePoint Server 2013 these hashes are no longer stored in the MSSDuplicateHashes table but in the DocumentSignature managed property – this is documented in the article Customizing search results in SharePoint 2013. In the next screenshot you will notice that although the document title and some metadata are different for the 5 documents, there are only 2 distinct document signatures. This indicates that the shingles are calculated using only the content of documents and not the metadata or the file name (Content By Search web parts don’t seem to use duplicate trimming). The document signature actually contains 4 checksums and if one of the four matches another document, the document is treated as a duplicate. This also means that when SharePoint search encounters a document for which it is unable to extract the actual contents, it probably cannot do proper duplicate trimming.


Since SharePoint Server 2013 search result web parts have duplicate trimming activated by default, and SharePoint 2013 uses a rather coarse algorithm for determining a duplicate, you will see some unexpected results. Luckily, after installing the SharePoint 2013 Cumulative Update of July 2014 you have the option to deactivate duplicate trimming in the query builder settings.
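The same switch can also be flipped when querying the search REST API directly. The Python sketch below only builds the query URL; the site URL and search term are hypothetical, authentication is assumed to be handled by whatever mechanism your environment uses, and you should verify the trimduplicates parameter against the search REST API documentation for your version.

```python
import urllib.parse

SITE_URL = "https://intranet.contoso.com"  # hypothetical site collection URL
QUERY = "contract renewal"                 # hypothetical search term

# trimduplicates=false asks the query engine not to collapse near duplicates;
# DocumentSignature is requested so you can inspect the checksums yourself.
params = urllib.parse.urlencode({
    "querytext": "'{0}'".format(QUERY),
    "trimduplicates": "false",
    "selectproperties": "'Title,Path,DocumentSignature'",
})
print("{0}/_api/search/query?{1}".format(SITE_URL, params))
# Send this URL with your usual credentials and an
# "Accept: application/json;odata=verbose" header to get the results back as JSON.
```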



Another way to accomplish the same thing is by changing the settings for grouping of results. As outlined in Customizing search results in SharePoint 2013, duplicate removal of search results is part of grouping. So if you specify grouping on DocumentSignature, you can show near duplicates (documents where one of the 4 checksums is different) while still omitting the “complete” duplicates.
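If you prefer to experiment with this grouping behaviour from a query rather than through the web part settings, the sketch below extends the previous query string. Note that collapsespecification being accepted as a plain query string parameter is an assumption on my part – it is exposed as a query property, so verify the exact syntax for your farm.

```python
import urllib.parse

# Collapse results on DocumentSignature instead of relying on built-in duplicate trimming.
# Assumption: collapsespecification is accepted as a query parameter in your environment.
params = urllib.parse.urlencode({
    "querytext": "'contract renewal'",
    "trimduplicates": "false",
    "collapsespecification": "'DocumentSignature'",
    "selectproperties": "'Title,Path,DocumentSignature'",
})
print("/_api/search/query?" + params)
```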



But the most elegant solution is the one outlined by Elio in View duplicate results in SharePoint 2013 Search Center via Javascript, which allows you to change the “duplicate trimming” setting of the web part using JavaScript – allowing your end users to decide for themselves whether or not they want to trust the SharePoint duplicate trimming algorithm.
References:

Thursday, April 02, 2015

Big Data and Internet of Things (IOT) links

 

Just a quick roundup of some interesting links to articles, whitepapers and videos on Big Data and IoT. I would be amazed if you haven’t heard of Big Data, but you might still want to take a look at these introductory blog posts, which mainly cover Big Data from a Microsoft perspective.

Other Big Data and Internet of Things (IOT) links:

Tuesday, March 31, 2015

Overview of Apache Hadoop components in HDInsight, from Ambari to Zookeeper

A couple of months ago I wrote a first post about Microsoft Big Data – Introducing Windows Azure HDInsight. In this post I will delve a little deeper into the different components which are used in HDInsight. This is not an exhaustive list, but it covers a number of components which you might encounter when working on your first big data project using Microsoft Azure HDInsight.


  • Ambari – provides a provisioning, monitoring and management layer on top of Apache Hadoop clusters. It offers a web interface for easy management as well as a REST API.
  • Flume – allows you to collect, aggregate and move large volumes of streaming data into HDFS in a fault tolerant fashion.
  • HBase – provides NoSQL database functionality on top of HDFS. It is a columnar store which provides fast access to large quantities of data. HBase tables can have billions of rows and these rows can have an almost unlimited number of columns.
  • HCatalog – provides a tabular abstraction on top of HDFS. Pig, Hive and MapReduce use this layer to make it easier to work with files in Hadoop. HCatalog has been merged into the Hive project and Hive uses it somewhat like a master database. For more details check out Apache HCatalog – a table management layer that exposes Hive metadata to other Hadoop applications.
  • Hive – allows you to perform data warehouse operations using HiveQL. HiveQL is a SQL-like language and provides an abstraction layer on top of MapReduce. Hive allows you to use Hive tables to project a schema onto the data (schema on read). Through HiveQL you can view your data as a table and create queries just as you would in a normal database, with support for selects, filters, group by, equi-joins, etc. Hive inherits schema and location information from HCatalog and acts as a bridge to many BI products which expect tabular data. One of the recent developments around Hive is the Stinger initiative – its main aim is to deliver performance improvements while keeping SQL compatibility. (A small word count example after this list gives a feel for what the underlying MapReduce jobs look like.)
  • Kafka – a fast, scalable, durable and fault-tolerant messaging system. It is commonly used together with Storm and HBase for stream processing, website activity tracking, metrics collection and monitoring, or log aggregation. It provides functionality similar to AMQP, JMS or Azure Event Hubs.
  • Mahout – the goal of Mahout is to build scalable machine learning libraries. The main machine learning use cases Apache Mahout supports are recommender systems (people who buy x also buy y), classification (assigning data to discrete categories, e.g. deciding whether a credit card transaction is fraudulent or not) and clustering (grouping unstructured data without any training data). For more details take a look at Introducing Mahout (IBM).
  • Oozie – enables you to create repeatable, dynamic workflows for tasks to be performed in a Hadoop cluster. An Oozie workflow can include Sqoop transfers, Hive jobs, HDFS commands, MapReduce jobs, etc. Oozie submits the jobs but MapReduce executes them. Oozie also has built-in callback and polling mechanisms to check the status of jobs.
  • Pegasus – provides large-scale graph mining capabilities by offering important graph mining algorithms such as degree calculation, PageRank calculation, random walk with restart (RWR), etc. Most graph mining algorithms have limited scalability and support up to millions of nodes; Pegasus scales to billion-node graphs. Graphs (also referred to as networks) are everywhere in real life, from web pages and social networks to biological networks and many more. Finding patterns and rules within these networks allows you to rank web pages (or documents), measure viral marketing, discover disease patterns, etc. The details of Pegasus can be found in the white paper Pegasus: a peta-scale graph mining system – implementation and observations.
  • Pig – developed to make data analysis on Hadoop easier. It is made up of two components: a high-level scripting language (which is called Pig Latin, but most people just reference it as Pig) and an execution environment. Pig Latin is a procedural language which allows you to build data flows and contains a number of built-in User Defined Functions (UDFs) to manipulate data. These UDFs allow you to ingest data from files, streams or other sources, make selections and transform the data; finally Pig stores the results back into HDFS. Pig scripts are translated into a series of MapReduce jobs that are run on Apache Hadoop. Users can create their own functions or invoke code in other languages such as JRuby, Jython and Java. Pig gives you more control and optimization over the flow of the data than Hive does.
  • RHadoop – is a collection of R packages that allow users to manage and analyze data with Hadoop in R, including the creation of map-reduce jobs. Check out Step-by-step guide to setting up an R-Hadoop system and Using RHadoop to predict website visitors to get started with some hands-on examples.
  • Storm – a distributed real-time computation system. It supports a set of common stream analytics operations and provides guaranteed message processing with support for transactions. It was originally created by Nathan Marz (see History of Apache Storm and lessons learned) – the guy who came up with the term Lambda architecture for a generic, scalable and fault-tolerant data processing architecture.
  • SQOOP – was built to transfer data from relational structured data stores (such as SQL Server, MySQL or Oracle) to Apache Hadoop and vice versa. Because Sqoop can handle database metadata, it is able to perform type-safe data movement using the data types specified in the metadata.
  • Zookeeper – manages and stores configuration information. It is responsible for managing and mediating conflicting updates across your Hadoop cluster.
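To give a feel for the MapReduce jobs that Hive and Pig generate behind the scenes, here is a classic Hadoop Streaming word count in Python. It is a minimal sketch rather than HDInsight-specific code: the hadoop-streaming jar name and the HDFS paths in the docstring are placeholders for your own cluster.

```python
#!/usr/bin/env python
"""Minimal Hadoop Streaming word count.

Run the same script as mapper and reducer, for example:
  hadoop jar hadoop-streaming.jar \
      -input /example/input -output /example/output \
      -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py
(The jar name and HDFS paths are placeholders for your cluster.)
"""
import sys
from itertools import groupby

def mapper():
    # Emit "word<TAB>1" for every token read from stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print("{0}\t1".format(word.lower()))

def reducer():
    # Hadoop sorts the mapper output by key, so identical words arrive grouped together.
    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print("{0}\t{1}".format(word, sum(int(count) for _, count in group)))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Hive and Pig spare you from writing this kind of plumbing by hand, but the jobs they submit to the cluster follow the same map, shuffle/sort and reduce pattern.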

Thursday, March 26, 2015

People insights – data-driven insights regarding people

Whereas marketing, sales and finance departments have been using advanced analytics for quite a while, HR still seems to be in one of the early maturity phases of analytics usage. This is a view which seems to be shared by CEOs: in a recent study CEOs gave their HR department a 5.9 (out of 10) for its analytical skills (see CEO niet overtuigd van analytische skills HR, in Dutch: “CEO not convinced of HR’s analytical skills”).

While HR controls a lot of data (and needs to keep it up to date), it does not seem able to use this data to provide strategic advice to the board of directors. HR can only deliver truly added value by providing data-driven insights regarding people that are both compelling to business leaders and actionable by HR. This view is also quite nicely outlined by consultancy firm Inostix in their HR Analytics Value Pyramid (see The HR Analytics Value Pyramid (Part 3)). To make sure that the HR team stays current and viable, it will need to adopt a whole new set of skills, of which analytics is just one (see The reskilled HR team – transform HR professionals into skilled business consultants and the capability gap across the 2015 Human Capital Trends).

In a number of upcoming posts I will delve a little deeper into this topic and will show some practical examples of how you can realize some quick wins without a huge upfront investment.

Related links:

SharePoint Saturday 2015 : How to build your own Delve, combining machine learning, big data and SharePoint

BIWUG is organizing the fifth edition of SharePoint Saturday Belgium – this year in Antwerp – for more information check out http://www.spsevents.org/city/Antwerp/Antwerp2015/. Here is the abstract of the session I will be delivering.

How to build your own Delve: combining machine learning, big data and SharePoint

You experience the benefits of machine learning every day through product recommendations on Amazon & Bol.com, credit card fraud prevention, etc. So how can we leverage machine learning together with SharePoint and Yammer? We will first look into the fundamentals of machine learning and big data solutions, and next we will explore how we can combine tools such as Windows Azure HDInsight, R and Azure Machine Learning to extend and support collaboration and content management scenarios within your organization.

Related posts:

Wednesday, March 04, 2015

BIWUG session on advanced integration between SharePoint Online and Yammer

On the 19th of March BIWUG (www.biwug.be) is organizing its next session – don’t forget to register for BIWUG1903 – we have planned a great speaker and an interesting session.

Advanced integration between SharePoint Online and Yammer using Yammer Apps (Speaker: Stephane Eyskens, SharePoint Technical Architect - http://www.silver-it.com/ )

First things first: the session will start by describing the required steps to bind an Office 365 tenant to an enterprise domain, how to federate on-premises users with Office 365 in order to have SSO in place, and how to bind Yammer to the Office 365 tenant. Next, developers will learn how to leverage the Yammer App Model in order to build deeper integration between SPO (+ on-prem) and Yammer. Business scenarios such as leveraging Yammer's Open Graph in SPO workflows and associating Yammer groups to SPO team sites (& groups) will be covered. Security aspects will be discussed as well: from acting on behalf of a user with his consent to impersonating him completely, we'll see how to manage tokens and discuss some best practices.

Intended audience: The session is primarily intended for developers.

Key benefits: After this session, developers should have good visibility into how to go beyond the OOTB Yammer App integration with SPO and what Open Graph is all about.

Also thanks to Xylos for hosting this session.