Tuesday, November 18, 2014

Understanding Azure Event Hubs–ingesting data at scale

Azure Event Hubs is an extension of the existing Azure Service Bus that provides hyper-scalable stream ingestion capabilities. It allows large numbers of producers (devices and sensors, possibly in the tens of thousands) to send continuous streams of data without interruption. This kind of streaming sensor data shows up in a number of scenarios: future-oriented ones such as connected cars and smart cities, but also more common ones such as application telemetry or industrial automation.
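For example, a producer can be as simple as the following sketch, which uses the azure-eventhub Python package (a newer SDK than what was available when Event Hubs launched); the connection string, hub name, and sample readings are placeholders for illustration only.

```python
# Minimal producer sketch, assuming the azure-eventhub Python package (v5-style API).
# The connection string and hub name below are placeholders.
from azure.eventhub import EventHubProducerClient, EventData

CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
EVENTHUB_NAME = "telemetry"

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

with producer:
    # Batch events so many small sensor readings share one send operation.
    batch = producer.create_batch()
    for reading in ({"device": "sensor-01", "temp": 21.4},
                    {"device": "sensor-02", "temp": 19.8}):
        batch.add(EventData(str(reading)))
    producer.send_batch(batch)
```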

Event Hub scaling is defined by Throughput Units (TUs), which are essentially a pre-allocation of resources. A single TU can handle up to 1 MB/s or 1,000 events per second for writes, and 2 MB/s for read operations. Load on an Event Hub is spread across partitions, which allow for parallel processing on both the producer and the consumer side. In addition to supporting common messaging patterns such as competing consumers, it also provides data retention policies with up to 84 GB of event storage per day. The current release supports up to 32 partitions, but you can log a support call to increase this to up to 1,000 partitions. Since each partition is allocated at most 1 TU, this would allow for 1 GB/s of data ingress per Event Hub. Messages can be sent to an Event Hub publisher endpoint via HTTPS or AMQP 1.0; consumers retrieve messages using AMQP 1.0.
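As a rough illustration of the HTTPS path, the sketch below posts a single event to a hub's publisher endpoint using a Shared Access Signature. The namespace, hub name, and key values are placeholders, and the URL and content type follow the documented REST send operation, so verify them against the current documentation before relying on this.

```python
# Minimal sketch of sending a single event over HTTPS with a SAS token.
# Namespace, hub name, key name, and key are placeholders; adjust for your setup.
import base64
import hashlib
import hmac
import time
import urllib.parse

import requests

NAMESPACE = "<namespace>"   # namespace name only, e.g. "mynamespace"
HUB_NAME = "telemetry"
KEY_NAME = "<shared-access-key-name>"
KEY = "<shared-access-key>"


def generate_sas_token(uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Build a Shared Access Signature for the given resource URI."""
    encoded_uri = urllib.parse.quote_plus(uri)
    expiry = str(int(time.time()) + ttl_seconds)
    to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )


uri = "https://{}.servicebus.windows.net/{}".format(NAMESPACE, HUB_NAME)
token = generate_sas_token(uri, KEY_NAME, KEY)

# POST a single event to the hub's publisher endpoint.
response = requests.post(
    uri + "/messages",
    data='{"device": "sensor-01", "temp": 21.4}',
    headers={
        "Authorization": token,
        "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
    },
)
print(response.status_code)  # 201 indicates the event was accepted
```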

Building such an application architecture yourself is quite challenging; Event Hubs allows you to leverage the elasticity of the cloud and a pay-per-use model to get started rapidly. Whereas systems of this type currently scale to tens of thousands of units, that number is expected to grow quickly: Gartner expects the number of installed IoT units to reach 26 billion by 2020, and other estimates even point at 40 billion IoT units (Internet of Things by the Numbers: estimates and forecasts).
