Thursday, April 09, 2015

SharePoint Server 2013 and business intelligence scenarios

With all the emphasis on Microsoft Power BI, people seem to forget that there are still other options for setting up a business intelligence solution based on SharePoint, for those of you who can’t go all in on a cloud solution (because of regulations, corporate policies or other reasons). Don’t get me wrong – I do believe that if you are standardized on Microsoft you should follow their “Cloud First” credo. Listed below are a number of links to get you started.

SharePoint Deep Dive exploration: explaining duplicate detection in SharePoint Server 2013

This is the third post in a series which tries to delve a little deeper into the inner workings of SharePoint – for the previous posts check out:

SharePoint Server can detect near duplicates of documents and will take this into account when displaying search results. In this post I will delve a little deeper into the underlying techniques being used. An important thing to keep in mind is that the way that duplicate documents are identified has evolved and changed in the different versions of SharePoint.

SharePoint Server 2007 detected duplicates using a commonly used technique called "shingling". This is a generic technique which allows you to identify duplicates or near duplicates of documents (or web pages). Shingling has been widely used in different types of systems and software to identify spam or plagiarism, or to enforce copyright protection. A shingle – more commonly referred to as a q-gram – is a contiguous subsequence of tokens taken from a document.
So if you want to see whether two documents are similar, you can look at how many shingles they have in common. You do, however, need to determine how long your subsequence of tokens should be – typically a value of 4 is used. This is formalized as S(d,w), the set of distinct shingles of width w which are contained in document d. For the line “a rose is a rose is a rose” with w=4, we get the following shingles: “a rose is a”, “rose is a rose”, “is a rose is”. If you want to compare the similarity between two sets, e.g. S(doc1) and S(doc2), the sets of distinct shingles of document1 and document2, you can use the Jaccard similarity index (or resemblance index) to define the degree of similarity. A Jaccard index of 0 means that the documents are completely dissimilar, whereas 1 points to identical documents. This would however mean that we would need to calculate the similarity index of each pair of documents – quite an intensive task – so to speed up processing a form of hashing is used (for more details take a look at the explanation about near duplicates and shingling).
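These two ideas can be captured in a minimal Python sketch (tokenizing on whitespace is a simplification of my own – real implementations normalize case and punctuation first):

```python
def shingles(text, w=4):
    """Return S(d, w): the set of distinct shingles of width w in a document."""
    tokens = text.split()
    return {" ".join(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

# yields the three distinct shingles from the example above
print(shingles("a rose is a rose is a rose"))
```

Running jaccard on the shingle sets of two documents gives 1 for identical content and values close to 1 for near duplicates.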

As items in SharePoint 2007 were indexed, these hashes were stored in the search database. It is not really clear from the documentation whether these hashes related only to the content of an item or to the properties as well (although the blog post Microsoft Office SharePoint Server 2007: Duplicate search results states that it is based only on the content of a document). In SharePoint Server 2007 these hashes were stored in the MSSDuplicateHashes table.

In SharePoint Server 2013 these hashes are no longer stored in the MSSDuplicateHashes table but in the DocumentSignature managed property – this is documented in the article Customizing search results in SharePoint 2013. In the next screenshot you will notice that although the document title and some metadata are different for the 5 documents, there are only 2 distinct document signatures. This indicates that the shingles are calculated using only the content of documents and not the metadata or the file name (Content By Search web parts don’t seem to use duplicate trimming). The document signature actually contains 4 checksums, and if one of the four matches another document’s, the document is treated as a duplicate. This also means that when SharePoint search encounters a document for which it is unable to extract the actual contents, it probably is not able to do proper duplicate trimming.
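That “any one of four matches” rule can be sketched as follows (representing a signature as a 4-tuple of integers is my own simplification for illustration – the real DocumentSignature is an opaque binary property):

```python
def is_duplicate(sig_a, sig_b):
    """Treat two documents as duplicates when any of the four
    positional checksums in their signatures match."""
    return any(a == b for a, b in zip(sig_a, sig_b))

# two documents sharing one checksum get trimmed as duplicates
print(is_duplicate((11, 22, 33, 44), (99, 22, 88, 77)))  # True
print(is_duplicate((11, 22, 33, 44), (55, 66, 77, 88)))  # False
```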

Since SharePoint Server 2013 search result web parts have duplicate trimming activated by default and SharePoint 2013 uses quite a coarse algorithm for determining duplicates, you will see some unexpected results. Luckily, after installing the SharePoint 2013 July 2014 Cumulative Update you have the option to deactivate duplicate trimming within the query builder settings.
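The same setting can also be passed per query to the search REST API via the trimduplicates parameter – a sketch that just builds the request URL, assuming a hypothetical site address:

```python
from urllib.parse import quote

site = "https://intranet.contoso.com/sites/search"  # hypothetical site URL
query = "sharepoint"

# trimduplicates=false asks the query engine to return near duplicates as well
url = (f"{site}/_api/search/query"
       f"?querytext='{quote(query)}'&trimduplicates=false")
print(url)
```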

Another way to accomplish the same thing is by changing the settings for grouping of results. As outlined in Customizing search results in SharePoint 2013, duplicate removal of search results is part of grouping. So if you specify grouping on DocumentSignature, you would be able to show near duplicates (where one of the 4 checksums is different) but still omit the “complete” duplicates.

But the most elegant solution is the one outlined by Elio in View duplicate results in SharePoint 2013 Search Center via Javascript, which allows you to change the “duplicate trimming” setting of the web part using JavaScript – allowing your end users to decide for themselves whether or not they want to trust the SharePoint duplicate trimming algorithm.

Thursday, April 02, 2015

Big Data and Internet of Things (IOT) links


Just a quick roundup of some interesting links to articles, whitepapers and videos on Big Data and IoT. I would be amazed if you haven’t heard of Big Data – but you might still want to take a look at these introductory blog posts, which mainly cover Big Data from a Microsoft perspective.

Other Big Data and Internet of Things (IOT) links:

Tuesday, March 31, 2015

Overview of Apache Hadoop components in HDInsight, from Ambari to Zookeeper

A couple of months ago I wrote a first post about Microsoft Big Data – Introducing Windows Azure HDInsight. In this post I will delve a little deeper into the different components which are used in HDInsight. This is not an exhaustive list, but it covers a number of components which you might encounter when working on your first big data project using Microsoft Azure HDInsight.

  • Ambari – provides a provisioning, monitoring and management layer on top of Apache Hadoop clusters. It provides a web interface for easy management as well as a REST API.
  • Flume – allows you to collect, aggregate and move large volumes of streaming data into HDFS in a fault tolerant fashion.
  • HBase – provides NoSQL database functionality on top of HDFS. It is a columnar store which provides fast access to large quantities of data. HBase tables can have billions of rows, and these rows can have an almost unlimited number of columns.
  • HCatalog – provides a tabular abstraction on top of HDFS. Pig, Hive and MapReduce use this layer to make it easier to work with files in Hadoop. HCatalog has been merged into the Hive project; Hive uses it somewhat like a master database. For more details check out Apache HCatalog – a table management layer that exposes Hive metadata to other Hadoop applications.
  • Hive – allows you to perform data warehouse operations using HiveQL. HiveQL is a SQL-like language and provides an abstraction layer on top of MapReduce. Hive allows you to use Hive tables to project a schema onto the data (schema on read). Through the use of HiveQL you can view your data as a table and create queries just as you would in a normal database, with support for selects, filters, group by, equi-joins, etc. Hive inherits schema and location information from HCatalog. Hive acts as a bridge to many BI products which expect tabular data. One of the recent developments around Hive is the Stinger initiative – its main aim is to deliver performance improvements while keeping SQL compatibility.
  • Kafka – a fast, scalable, durable and fault-tolerant messaging system. It is commonly used together with Storm and HBase for stream processing, website activity tracking, metrics collection and monitoring, or log aggregation. It provides functionality similar to AMQP, JMS or Azure Event Hubs.
  • Mahout – the goal of Mahout is to build scalable machine learning libraries. The main machine learning use cases Apache Mahout supports are recommender systems (people who buy x also buy y), classification (assigning data to discrete categories, e.g. whether a credit card transaction is fraudulent or not) and clustering (grouping unstructured data without any training data). For more details take a look at Introducing Mahout (IBM).
  • Oozie – enables you to create repeatable, dynamic workflows for tasks to be performed in a Hadoop cluster. An Oozie workflow can include Sqoop transfers, Hive jobs, HDFS commands, MapReduce jobs, etc. Oozie will submit the jobs, but MapReduce will execute them. Oozie also has built-in callback and polling mechanisms to check the status of jobs.
  • Pegasus – provides large-scale graph mining capabilities by offering important graph mining algorithms such as degree calculation, PageRank calculation, random walk with restart (RWR), etc. Most graph mining algorithms have limited scalability – they support up to millions of nodes – whereas Pegasus scales to billion-node graphs. Graphs (also referred to as networks) are everywhere in real life, from web pages and social networks to biological networks and many more. Finding patterns, rules, etc. within these networks allows you to rank web pages (or documents), measure viral marketing, discover disease patterns, etc. The details of Pegasus can be found in the white paper Pegasus: a peta-scale graph mining system – implementation and observations.
  • Pig – developed to make data analysis on Hadoop easier. It is made up of two components: a high-level scripting language (which is called Pig Latin, but most people just reference it as Pig) and an execution environment. Pig Latin is a procedural language which allows you to build data flows; it contains a number of built-in User Defined Functions (UDFs) to manipulate data. These UDFs allow you to ingest data from files, streams or other sources, make selections and transform the data. Finally, Pig will store the results back into HDFS. Pig scripts are translated into a series of MapReduce jobs that are run on Apache Hadoop. Users can create their own functions or invoke code in other languages such as JRuby, Jython and Java. Pig gives you more control and optimization over the flow of the data than Hive does.
  • RHadoop – is a collection of R packages that allow users to manage and analyze data with Hadoop in R, including the creation of map-reduce jobs. Check out Step-by-step guide to setting up an R-Hadoop system and Using RHadoop to predict website visitors to get started with some hands-on examples.
  • Storm – a distributed real-time computation system which supports a set of common stream analytics operations and provides guaranteed message processing with support for transactions. It was originally created by Nathan Marz (see History of Apache Storm and lessons learned) – the guy who came up with the term Lambda architecture for a generic, scalable and fault-tolerant data processing architecture.
  • Sqoop – was built to transfer data from relational structured data stores (such as SQL Server, MySQL or Oracle) to Apache Hadoop and vice versa. Because Sqoop can handle database metadata, it is able to perform type-safe data movement using the data types specified in the metadata.
  • Zookeeper – manages and stores configuration information. It is responsible for managing and mediating conflicting updates across your Hadoop cluster.
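Several of the components above (Hive, Pig, Oozie) ultimately translate their work into MapReduce jobs, so it may help to see what the map, shuffle and reduce phases actually do. A toy, single-process Python sketch of the classic word count (real MapReduce distributes these phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # map: emit a (word, 1) pair for every word in a line
    return [(w.lower(), 1) for w in line.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the counts per word
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data is big"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
print(counts)  # {'big': 2, 'data': 1, 'is': 1}
```

Hive and Pig generate equivalent jobs from HiveQL or Pig Latin so you don’t have to write this plumbing yourself.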

Thursday, March 26, 2015

People insights– data driven insights regarding people

Whereas marketing and sales as well as financial departments have been using advanced analytics for quite a while, it seems that HR is still in one of the early maturity phases of analytics usage. This is a view which seems to be shared by CEOs: in a recent study, CEOs gave their HR department a 5.9 (out of 10) for its analytical skills (see CEO niet overtuigd van analytische skills HR).

Whereas HR controls a lot of data (and needs to keep it up to date), it does not seem able to use this data to provide strategic advice to the board of directors. HR can only deliver truly added value by providing data-driven insights regarding people that are both compelling to business leaders and actionable by HR. This view is also quite nicely outlined by consultancy firm Inostix in their HR Analytics Value Pyramid (see The HR Analytics Value Pyramid (Part 3)). To make sure that the HR team stays current and viable, it will need to adopt a whole new set of skills, of which analytics is just one (see The reskilled HR team – transform HR professionals into skilled business consultants and the capability gap across the 2015 Human Capital Trends).

In a number of upcoming posts I will delve a little deeper into this topic and will show some practical examples of how you can realize some quick wins without a huge upfront investment.

Related links:

SharePoint Saturday 2015 : How to build your own Delve, combining machine learning, big data and SharePoint

BIWUG is organizing the fifth edition of SharePoint Saturday Belgium – this year in Antwerp – for more information check out the site. Here is the abstract of the session I will be delivering.

How to build your own Delve: combining machine learning, big data and SharePoint

You are experiencing the benefits of machine learning every day through product recommendations on Amazon, credit card fraud prevention, etc. So how can we leverage machine learning together with SharePoint and Yammer? We will first look into the fundamentals of machine learning and big data solutions, and next we will explore how we can combine tools such as Windows Azure HDInsight, R and Azure Machine Learning to extend and support collaboration and content management scenarios within your organization.

Related posts:

Wednesday, March 04, 2015

BIWUG session on advanced integration between SharePoint Online and Yammer

On the 19th of March BIWUG is organizing its next session – don’t forget to register for BIWUG1903 – we have planned a great speaker and an interesting session.

Advanced integration between SharePoint Online and Yammer using Yammer Apps (Speaker: Stephane Eyskens, SharePoint Technical Architect)

First things first: the session will start by describing the required steps to bind an Office 365 tenant to an enterprise domain, how to federate on-premises users with Office 365 in order to have SSO in place, and how to bind Yammer to the Office 365 tenant. Next, developers will learn how to leverage the Yammer App Model in order to build deeper integration between SPO (+ on-premises) and Yammer. Business scenarios such as leveraging Yammer's Open Graph in SPO workflows and associating Yammer groups with SPO team sites (& groups) will be covered. Security aspects will be discussed as well: from acting on behalf of a user with their consent to impersonating them completely, we'll see how to manage tokens and discuss some best practices.

Intended audience: The session is primarily intended for developers.

Key benefits: after this session, developers should have good visibility on how to go beyond the OOTB Yammer App integration with SPO and what Open Graph is all about.

Also thanks to Xylos for hosting this session

Monday, March 02, 2015

Resetting content index in SharePoint Server 2013: why and how

When you are developing against SharePoint Server 2013 search, you might be forced to reset the search index. You can do this using the SharePoint user interface through the screen shown below, or using PowerShell. I prefer to use PowerShell, since resetting through the user interface seems to give me timeouts, especially when the index is quite large. One of the reasons why you might be required to reset your content index is when your Search Service Application got into an unhealthy state because of insufficient disk space (see Fixing the Search Service after the Index Drive fills), but I have also noticed that when you are working on your development machine and making lots of changes to the search schema, it might be useful to reset the search index for your changes to be picked up. If you want to do it using the user interface, go to the Search Administration screen of the Search Service Application and select the “Index Reset” option underneath the Crawling section of the left menu.

Don’t just reset your search index in a production environment, since this will also impact the analytics processing component (read Reset the index in SharePoint Server 2013). Listed below is the syntax for the PowerShell command (the snippet below assumes that you only have one Search Service Application).


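A minimal sketch of the command (assuming a single Search Service Application and an elevated SharePoint Management Shell):

```powershell
$ssa = Get-SPEnterpriseSearchServiceApplication
# Reset(disableAlerts, ignoreUnreachableServer)
$ssa.Reset($true, $false)
```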
The SearchServiceApplication.Reset method takes two parameters – public void Reset(bool disableAlerts, bool ignoreUnreachableServer) – I would recommend setting disableAlerts to true if you have search alerts enabled. The value for the second parameter will depend on your specific case. If you also get a timeout when using the PowerShell cmdlet, you can use the steps outlined in SharePoint 2013 Content Index Reset Timeout – they worked for me.

Friday, February 13, 2015

Mindful apps – putting people at the center supported by data

When preparing for my session The future of business process apps – a Microsoft perspective last year, I got inspired by the great article The future of enterprise apps: moving beyond workflows to mindflows, which introduced the concept of mindful apps. The core message is that if we want to automate the last mile, we have to analyze how people work day in and day out and start our system/application design with people at the center. One of the quotes mentioned in the article is from Bill Murphy (CTO of Blackstone, one of the largest investment firms worldwide): “We aim to take away as much of the stress as possible from easy stuff, by automating the routine and mundane actions, and give users more time to focus on the higher-end pieces of what they need to do.”

Most of the characteristics outlined in the comparison between traditional and mindful apps are not revolutionary (see table above), but there is one important key message: mindful apps will allow us to assess and compare options in a decision context, they will allow us to quickly respond to events and make the best decision given a specific context, and they will provide us with “extended intelligence” by understanding and recognizing patterns within the data at hand. We as humans are good at problem solving, pattern recognition, identifying outliers, making creative leaps and incorporating new information when making decisions. We should be able to focus on these high-end tasks by being freed from laborious and menial tasks which can be automated.

There are four different trends which will impact how these mindful apps will be shaped:
  • User context matters – make it personal. When we make decisions or work within the context of specific processes, there are a lot of parameters which determine how we react or how we make decisions – these parameters should be integrated into the decision framework driving mindful apps. Our calendar, the availability of colleagues to reach out to, input from communications (e-mail, messaging or other formats), information that we capture from blogs, social networks such as LinkedIn or open data sources, together with available information within your organization, should be filtered and at your fingertips. Machine learning and cognitive algorithms will drive the second machine age (a term coined by Brynjolfsson of MIT), but we are only at the start of how these algorithms can drive the future workplace for information workers.
  • Mobile shapes our expectations. Mobile apps and the user experience they provide are shaping how we see an ideal enterprise application as well. Mindful apps should strive to combine beauty, simplicity and purpose to create an experience that delights us and is effortless to use. Mobile apps are easy to understand: when people use a good app for the first time, they intuitively grasp the most important features – why can’t we do the same for enterprise apps? Simplicity rules. The apps should also incorporate the necessary logic to evolve as the user grows more comfortable with their use and explores more advanced functionality. Apps should learn people’s preferences over time and show the interface which is best suited for the task at hand.
  • (Big) data and advanced analytics are the driving force. There is a lot of hype and confusion around the term Big Data, but one thing is for sure – storage and processing costs have dropped significantly in the last decade. When you combine this with the rise of new storage platforms such as Hadoop, NoSQL datastores such as HBase, Cassandra, etc. and new data processing frameworks such as Apache Drill, Dremel, Spark, etc., new opportunities arise to support users in their decision-making processes. While there is a lot of emphasis on the 4 Vs (Volume, Velocity, Variety and Veracity), there is one more V that you have to think about – Value (also see Big Data beyond the hype, getting to the V that really matters).
  • Cloud will lead the way. A lot of the innovation which will enable this next generation of apps is coming out of the datacenters of Google, Amazon, LinkedIn, Microsoft, Yahoo, etc., but most organizations don’t have the available capacity (nor the same financial resources) as these internet giants. Luckily, the economies of scale offered by the cloud allow solution providers to give you a data infrastructure which can scale from prototype size to production environments able to handle huge amounts of data. The major cloud players – IBM, Microsoft, Amazon and Google – all seem to be making big bets on building out the data analytics platform of the future, and this competition will drive prices further down. It will also force them to focus on more innovative solutions which allow them to differentiate from the competition.
The best examples of where we – as consumers – see the power of Big Data, analytics, machine learning and the cloud appear in mobile. The three major players (Microsoft, Apple and Google) rely quite heavily on cloud computing power and huge data stores to provide the experience of digital assistants. Microsoft is currently working on Cortana (which has been released in a number of countries worldwide), Apple was definitely the trendsetter with Siri, and Google has Google Now.

The future is already here — it's just not very evenly distributed. (William Gibson)