
Deploy a Data Pipeline in under 10 minutes with DC/OS - Mesosphere


Data pipelines are important architectural components of the fast data infrastructures that allow business leaders to make real-time, data-driven decisions. Data pipelines, the combination of datasets and the processing engines that act on those datasets, are the backbone of many of today's hot technologies such as IoT, real-time recommendation engines, and fraud detection. Data pipelines rely on multiple tightly-coupled technologies in order to be successful. Building a data pipeline involves integrating varied technologies such as distributed analytics engines (Apache Spark), distributed message queues (Apache Kafka), and distributed storage systems (Apache Cassandra). However, many organizations struggle to build and maintain data pipelines because they are exceedingly complex: they comprise multiple components that require careful operation and administration to obtain the maximum value and avoid data loss.
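The three roles described above can be sketched in miniature: a queue stands in for the message layer, a transform function for the analytics engine, and a dict for the storage system. This is a conceptual sketch only; none of the names correspond to real Spark, Kafka, or Cassandra APIs.

```python
# Conceptual data-pipeline sketch: ingest -> process -> store.
# Pure-Python stand-ins; no external systems required.
from collections import deque

queue = deque()   # stand-in for a message queue (e.g., a Kafka topic)
store = {}        # stand-in for a storage system (e.g., a Cassandra table)

def ingest(event):
    """Producer side: append raw events to the queue."""
    queue.append(event)

def process():
    """Processing engine: drain the queue, transform, and persist."""
    while queue:
        event = queue.popleft()
        # Toy transform: keep a running total per sensor id
        store[event["sensor"]] = store.get(event["sensor"], 0) + event["value"]

ingest({"sensor": "a", "value": 2})
ingest({"sensor": "a", "value": 3})
ingest({"sensor": "b", "value": 1})
process()
print(store)  # {'a': 5, 'b': 1}
```

The point of the sketch is the coupling the paragraph warns about: if the processing step falls behind or fails mid-drain, data sits in the queue (or is lost), which is exactly the operational burden real pipelines must manage.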

Flafka: Big Data Solution for Data Silos


From the previous post on "Poor Data Management Practices", the discussion ended with a high-level approach to one possible solution for data silos. Traditional approaches for solving the data silo problem can cost millions of dollars (even for a moderately sized company), and typically require a huge effort in integration work (e.g., data modeling, system engineering, software design, and development). In this post Flafka, the unofficial name for integrating Flume as a producer for Kafka, is presented as another possible big data solution for data silos. Without inundating you with technical jargon, Apache Flume is a distributed service that is very efficient at collecting and moving large amounts of data into Hadoop (e.g., click-stream data, security log files, and application data). Flume provides sinks into the Hadoop ecosystem such as HBase, Solr, HDFS, and Kafka, to name a few.
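A Flafka setup is driven by an ordinary Flume agent configuration in which the sink type is Flume's built-in KafkaSink. The sketch below assumes a single agent tailing an application log into a Kafka topic; the source command, topic name, and broker address are placeholders for illustration.

```properties
# Flume agent "a1": tail a log file and publish each line to Kafka
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

# Source: stream new lines from an application log (example path)
a1.sources.r1.type     = exec
a1.sources.r1.command  = tail -F /var/log/app/app.log
a1.sources.r1.channels = c1

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory

# Sink: Flume's Kafka producer (the "Flafka" integration)
a1.sinks.k1.type                    = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic             = app-logs
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.channel                 = c1
```

With this in place, any downstream consumer of the `app-logs` topic sees the log data without integrating against the application directly, which is the silo-breaking property the post is describing.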

Streamlio, an open-core streaming data fabric for the cloud era


This Tricky aphorism of a song came to mind once more a couple of years back, when Streamlio came out of stealth. Streamlio is an offering for real-time data processing based on a number of Apache open source projects, and it directly competes with Confluent and Apache Kafka, which is at the core of Confluent's offering. In 2017, Apache Kafka was generally considered an early adopter thing: Present in many whiteboard architecture diagrams, but not necessarily widely adopted in production in enterprises. Since then, Kafka has laid a claim to enterprise adoption, and Confluent has acquired open-core unicorn status after its latest funding.

Experimenting with IoT Anomaly Detection at Scale – What an Open Source Test Uncovers


The range and depth of applications dependent on IoT sensors continues to swell, from collecting real-time data on the floors of smart factories, to monitoring supply chains, to enabling smart cities, to tracking our health and wellness behaviors. The networks utilizing IoT sensors are capable of providing critical insights into the inner workings of vast systems, empowering engineers to take better-informed actions and ultimately introduce far greater efficiency, safety, and performance into these ecosystems. One outsized example of this: IoT sensors can support predictive maintenance by detecting data anomalies that deviate from baseline behavior and that suggest potential mechanical failures, thus enabling an IoT-fueled organization to repair or replace components before issues become serious or downtime occurs. Because IoT sensors provide such a tremendous amount of data pertaining to each particular piece of equipment when in good working condition, anomalies in that same data can clearly indicate issues. Looking at this from a data science perspective, anomalies are rare events which cannot be classified using currently available data examples; anomalies can also come from cybersecurity threats or fraudulent transactions.
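The baseline-versus-anomaly idea can be sketched with the simplest possible rule: fit a mean and standard deviation to readings taken in good working condition, then flag new readings that fall far outside that band. This is an illustrative sketch only; the threshold, readings, and function name are invented, and production IoT anomaly detection uses streaming or learned models rather than a batch z-score.

```python
# Toy baseline anomaly detector: flag readings more than `threshold`
# standard deviations away from the baseline mean (z-score rule).
from statistics import mean, stdev

def find_anomalies(baseline, readings, threshold=3.0):
    """Return readings that deviate from the baseline beyond the threshold."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# Sensor readings from a component known to be in good working condition
baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0]

# New readings: two normal values and one spike
print(find_anomalies(baseline, [10.05, 9.95, 14.2]))  # [14.2]
```

Note that the statistics come from the baseline data alone, which mirrors the paragraph's point: it is the abundance of known-good data that makes deviations stand out.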

An introduction to Apache Kafka, including what Kafka is, Kafka architecture, topics, partitions, a Kafka tutorial, a Kafka producer example, and a Kafka consumer example


Kafka is a messaging system used for big data streaming and processing. In this tutorial, we discuss the basics of getting started with Kafka. We'll discuss the architecture behind Kafka and demonstrate how to get started publishing and consuming basic messages. At its core, Kafka safely moves data from system A to system B. Kafka runs on a shared cluster of servers, making it a highly available and fault-tolerant platform for data streaming.
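The core abstractions the tutorial covers (topics, partitions, key-based routing, and per-partition ordering) can be modeled in a few lines of plain Python. This is a toy in-memory model for intuition only; it does not use or resemble the real Kafka client APIs, and all names are invented.

```python
# Toy in-memory model of a Kafka topic: a topic is a set of partitions,
# a produced message is routed to a partition by its key, and each
# partition preserves the order in which messages arrive.

class Topic:
    def __init__(self, name, num_partitions=3):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        """Route a message to a partition by hashing its key; return the partition."""
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, partition, offset=0):
        """Read messages from one partition, in order, starting at an offset."""
        return self.partitions[partition][offset:]

topic = Topic("user-events")
p = topic.produce("user-42", "login")
topic.produce("user-42", "click")   # same key -> same partition
print(topic.consume(p))             # ['login', 'click'] -- order preserved
```

Two properties of real Kafka fall out of even this toy model: messages with the same key always land in the same partition, and consumers reading a partition from a given offset see messages in production order.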