Kramer Electronics – AWS Use Case

Kramer Electronics  

For more than 40 years, Kramer has remained a leader at the forefront of the Pro AV information technology industry. With solutions deployed across six continents in more than 100 countries worldwide, Kramer offers an innovative range of hardware, software, and cloud-based AV IT solutions for enterprise, education, government, and military end-users.  
 
Kramer offers infinite ways for people and their organizations to engage by bridging and connecting our Physi-Digi (physical and digital) world. The global company delivers high-value AV IT solutions by developing networked products, integrated in an open ecosystem, and intelligent cloud-based software.  

 

The Challenge 

One of Kramer’s new offerings followed the acquisition of a smart home device – ‘The Brain’ – that controls many IoT devices used by individuals and organizations across corporate, household, and classroom spaces. The Brain provided high-level insights on streaming data but lacked analytical and historical insights, because none of the data was preserved: The Brain issued events every few seconds, the events streamed to the cloud and were ingested by clusters of RabbitMQ brokers, and the messages were then consumed directly by a dashboard application.  
 
One new requirement from Kramer’s team and customers was to gather historical data for more analytics-driven insights to help make better decisions (e.g., when are classrooms used the most? What is the most-requested time of day for videoconferences?). The first step towards an analytics-focused solution was to create a data lake with ad-hoc query capabilities, in order to preserve and explore the raw data.  

 

The Solution 

Fundamentally, CloudZone’s team is heavily invested in open-source solutions. Aside from cloud vendor services and third-party solutions, we also work with many open-source tools, especially in the area of data processing and analytics. This enabled us to quickly choose the right tool to pull data from RabbitMQ and flush it to S3: a scalable Logstash deployment. Logstash landed the raw data in an S3 bucket in the form of compressed JSON files. From there, AWS Glue enhanced the data for better query performance (e.g., conversion to Parquet and compaction). Additionally, Athena provided ad-hoc query capabilities, which were best suited for the first phase of exploration and adhered to the business requirement of preserving the data. 
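To make the ingestion step concrete, the core idea of landing compressed JSON on S3 with date-partitioned keys (so that Glue and Athena can prune partitions later) can be sketched as plain Python. This is a minimal illustration, not Kramer's actual Logstash configuration; the key layout and function names are assumptions for the example.

```python
import gzip
import json
from datetime import datetime, timezone

def make_s3_key(device_id: str, ts: datetime) -> str:
    """Build a date-partitioned S3 key (hypothetical layout) so that
    downstream Athena queries can prune partitions by date."""
    return (f"raw/device_id={device_id}/"
            f"year={ts.year}/month={ts.month:02d}/day={ts.day:02d}/"
            f"events-{int(ts.timestamp())}.json.gz")

def compress_batch(events: list) -> bytes:
    """Serialize a batch of events as gzip-compressed JSON lines,
    mirroring the 'compressed JSON' objects Logstash writes to S3."""
    lines = "\n".join(json.dumps(e, sort_keys=True) for e in events)
    return gzip.compress(lines.encode("utf-8"))

# Example: one batch of Brain events flushed as a single S3 object.
ts = datetime(2021, 3, 5, tzinfo=timezone.utc)
key = make_s3_key("brain-01", ts)
blob = compress_batch([{"device": "brain-01", "state": "on"}])
```

Batching many small events into fewer, larger compressed objects is also what makes the later Glue compaction step cheaper: Athena performs far better over a handful of large Parquet files than over thousands of tiny JSON objects.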
 

The Results 

The entire solution for Kramer (excluding Logstash) is serverless and very cost-effective. It lets the customer easily control the trade-off between data freshness and cost. Also, the rollout from a few customers to all of them was achieved safely and easily using the flexible routing capabilities of RabbitMQ, such as exchange-to-exchange bindings. Moreover, we used Glue dynamic frames to resolve data format conflicts within the payloads.
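The format conflicts mentioned above arise when the same field appears with different types across payloads (e.g., a reading serialized sometimes as a number and sometimes as a string). Glue's DynamicFrame handles this with resolveChoice, e.g. `dyf.resolveChoice(specs=[("temp", "cast:double")])` inside a Glue job. The sketch below illustrates the idea in plain Python (field name and caster are hypothetical), since an actual Glue job only runs inside the Glue runtime:

```python
def resolve_choice(records, field, caster):
    """Conceptual analogue of DynamicFrame.resolveChoice with a
    'cast' spec: coerce a field that appears with mixed types
    across records to a single target type."""
    resolved = []
    for rec in records:
        rec = dict(rec)  # avoid mutating the caller's records
        if rec.get(field) is not None:
            rec[field] = caster(rec[field])
        resolved.append(rec)
    return resolved

# Mixed payloads: "temp" arrives both as a number and as a string.
events = [{"temp": 21}, {"temp": "22.5"}, {"temp": None}]
clean = resolve_choice(events, "temp", float)
```

Without this step, the schema crawler would infer a choice type for the column and Parquet conversion would fail or split the column; casting to one type keeps the Athena table schema stable.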