How to Set Up Kafka - DZone Big Data

#artificialintelligence

Kafka is one of the most popular publish-subscribe messaging systems, written in Java and Scala. It was originally developed at LinkedIn and later open-sourced. Kafka is known for handling heavy loads. You can find out more about Kafka here. In this article, I am going to explain how to install Kafka on Ubuntu.
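
Once Kafka is installed, a quick way to confirm the publish-subscribe model works end to end is to send a test message from a small Java producer. This is a minimal sketch, assuming a default local broker on localhost:9092 and a topic named test-topic (both assumptions, not values from the article):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishExample {
    public static void main(String[] args) {
        // Assumes a local broker on localhost:9092 and a topic named "test-topic".
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish a single message; any subscriber on "test-topic" will receive it.
            producer.send(new ProducerRecord<>("test-topic", "hello", "Kafka is up and running"));
        }
    }
}
```

Closing the producer via try-with-resources flushes any buffered records before the process exits.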


An introduction to Apache Kafka, including what Kafka is, Kafka architecture, topics, partitions, a Kafka tutorial, a Kafka producer example, and a Kafka consumer example

@machinelearnbot

Kafka is a messaging system used for big data streaming and processing. In this tutorial, we cover the basics of getting started with Kafka: the architecture behind it and how to start publishing and consuming basic messages. At its core, Kafka safely moves data from system A to system B. It runs on a shared cluster of servers, making it a highly available and fault-tolerant platform for data streaming.
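
To make the producer/consumer walkthrough concrete, here is a minimal consumer sketch in Java that subscribes to a topic and prints each record. The broker address, topic name, and consumer group below are assumptions for a local setup rather than values from the tutorial:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumeExample {
    public static void main(String[] args) {
        // Assumed local broker, topic, and consumer group; adjust for your cluster.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "tutorial-consumers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            while (true) {
                // Poll the broker for new records and print each key/value pair.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s partition=%d%n",
                            record.key(), record.value(), record.partition());
                }
            }
        }
    }
}
```

The loop polls indefinitely; a real application would also handle graceful shutdown.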


Cassandra & Kafka based tweet analysis app on Application Container Cloud

#artificialintelligence

Application Container Cloud provides an out-of-the-box Service Binding for Data Hub Cloud. The Kafka cluster topology used in this case is relatively simple, i.e. a single broker co-located with Zookeeper. You can opt for a topology specific to your needs. Once you're done, please check the Key and Access Tokens section for the required info -- you will use it during application deployment.
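
Since the broker details arrive through the service binding at deployment time rather than being hard-coded, the application typically reads them from its environment. The sketch below assumes the binding surfaces the connection string as an environment variable; the variable name KAFKA_BOOTSTRAP_SERVERS and the tweets topic are placeholders, not values confirmed by the article:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TweetPublisher {
    public static void main(String[] args) {
        // Placeholder variable name: check the actual service binding for the
        // environment variable it injects with the Kafka connection string.
        String brokers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");

        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // "tweets" is a hypothetical topic name for the incoming tweet stream.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("tweets", "example tweet text"));
        }
    }
}
```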


Real-World Data Stream Processing

#artificialintelligence

Using streaming technologies such as Kafka, Spark, and Cassandra to effectively gain insights from data. Applications today consume and create a tremendous stream of data: application logs, event and transaction logs (errors, warnings), batch job data, IoT sensor data, social media, data from other external systems, and much more. All of this data can be piped through pipelines or stages that yield insights and provide tremendous benefits to the organization. As a recent article in the Economist put it, "The world's most valuable resource is no longer oil, but data".
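
As one illustrative stage of such a pipeline, a Spark Structured Streaming job can read records from Kafka before passing them on to a sink such as Cassandra. This is a sketch only, assuming the spark-sql-kafka connector is on the classpath and an "events" topic exists on a local broker (both assumptions); the console sink stands in for the real downstream store:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaPipelineStage {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-pipeline-stage")
                .getOrCreate();

        // Subscribe to the assumed "events" topic on a local broker.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "events")
                .load();

        // Kafka delivers keys and values as bytes; cast the value to a string
        // and print it to the console in place of writing to Cassandra.
        StreamingQuery query = events.selectExpr("CAST(value AS STRING) AS value")
                .writeStream()
                .format("console")
                .start();

        query.awaitTermination();
    }
}
```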


Part I: Develop stream processing apps using Kafka Streams on Oracle Cloud

@machinelearnbot

In simple words, Kafka Streams is a library which you can include in your Java-based applications to build stream processing applications on top of Apache Kafka. Other distributed computing platforms like Apache Spark, Apache Storm, etc. are widely used in the big data stream processing world, but Kafka Streams brings some unique propositions to this area. Kafka Streams provides a State Store feature with which applications can store their local processing results (the state). RocksDB is used as the default state store, and it can be used in persistent or in-memory mode. In our sample application, the state which we care about is the count of occurrences of the keywords which we chose to follow -- how is it implemented? Oracle Application Container Cloud provides access to a scalable in-memory cache, and it is used as the custom state store in our use case. It's possible to scale our stream processing service both ways (details in the documentation), i.e. elastically.
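
For a sense of how the keyword counting fits together, the topology below is a minimal Kafka Streams sketch (assuming a recent Kafka Streams client) that counts word occurrences from an assumed "tweets" topic, materialized in a named state store backed by the default RocksDB. It is not the article's code -- in particular, it does not plug in the Application Container Cloud cache as a custom state store -- and the topic names and tokenization are assumptions:

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

public class KeywordCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "keyword-count-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> tweets = builder.stream("tweets");

        // Split each tweet into words, group by word, and keep a running count.
        // The counts live in a state store named "keyword-counts" (RocksDB by
        // default; the article replaces this with a cache-backed custom store).
        KTable<String, Long> counts = tweets
                .flatMapValues(text -> Arrays.asList(text.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("keyword-counts"));

        // Emit the updated counts to an output topic.
        counts.toStream().to("keyword-counts-output", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```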