Part I: Develop stream processing apps using Kafka Streams on Oracle Cloud

@machinelearnbot

In simple words, Kafka Streams is a library that you can include in your Java-based applications to build stream processing applications on top of Apache Kafka. Other distributed computing platforms like Apache Spark and Apache Storm are widely used in the big data stream processing world, but Kafka Streams brings some unique propositions to this area. Kafka Streams provides a State Store feature with which applications can store their local processing results (the state). RocksDB is used as the default state store, and it can run in persistent or in-memory mode. In our sample application, the state we care about is the count of occurrences of the keywords we chose to follow -- how is it implemented? Oracle Application Container Cloud provides access to a scalable in-memory cache, and it's used as the custom state store in our use case. It's also possible to scale our stream processing service elastically, both up and down (details in the documentation).
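The counting logic described above (a running tally per keyword, backed by a state store) can be sketched as a simple keyed aggregation. The `KeywordCounter` class and sample tweets below are hypothetical illustrations, not the article's actual app; a real Kafka Streams application would express this as a `groupBy(...).count()` with the store managed by the library.

```python
# Minimal sketch of the stateful keyword-count idea: each incoming
# record updates a per-key running count, the way a Kafka Streams
# state store backs a count() aggregation. The class and sample
# data are illustrative only.
class KeywordCounter:
    def __init__(self, keywords):
        self.keywords = set(keywords)
        self.state = {}  # plays the role of the state store

    def process(self, text):
        # Tally every tracked keyword found in the record.
        for word in text.lower().split():
            if word in self.keywords:
                self.state[word] = self.state.get(word, 0) + 1

    def count(self, keyword):
        return self.state.get(keyword, 0)

counter = KeywordCounter(["kafka", "oracle"])
for tweet in ["Kafka streams on Oracle cloud", "kafka is fun"]:
    counter.process(tweet)

print(counter.count("kafka"))   # 2
print(counter.count("oracle"))  # 1
```

Swapping the dict for a shared in-memory cache, as the article describes, changes only where `state` lives, not the processing logic.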


Cassandra & Kafka based tweet analysis app on Application Container Cloud

#artificialintelligence

Application Container Cloud provides out-of-the-box Service Binding for Data Hub Cloud. The Kafka cluster topology used in this case is relatively simple, i.e. a single broker co-located with ZooKeeper. You can opt for a topology specific to your needs. Once you're done, please check the Key and Access Tokens section for the required info -- you will use it during application deployment.
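Service bindings on Application Container Cloud expose connection details to the application as environment variables. As a hedged sketch of that pattern (the variable name `KAFKA_BROKERS` and the local fallback are assumptions for illustration, not the platform's actual binding names), the broker list could be picked up like this:

```python
import os

# Read the broker connect string injected by a service binding;
# fall back to a local single-broker setup for development.
# The variable name is illustrative, not an official binding name.
def broker_list(env=os.environ):
    return env.get("KAFKA_BROKERS", "localhost:9092").split(",")

print(broker_list({"KAFKA_BROKERS": "broker1:9092,broker2:9092"}))
# ['broker1:9092', 'broker2:9092']
print(broker_list({}))
# ['localhost:9092']
```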


Use Apache Kafka to Add Streaming Analytics to Your Application

@machinelearnbot

While relational databases are still the king of transaction processing systems, they have a hard time keeping up with the increasing demand for real-time analytics. In this session we will build and demonstrate an end-to-end data processing pipeline. We will discuss how to turn changes in database state into events and stream them into Apache Kafka. We will explain the basic concepts of streaming transformations using windows and KSQL before ingesting the transformed stream into a dashboard application. Lastly, we will explore the possibilities of adding microservices as subscribers.
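The windowed transformations mentioned above can be illustrated independently of KSQL: the sketch below buckets timestamped events into fixed-size tumbling windows and counts per window. The event format and the 60-second window size are assumptions for illustration.

```python
from collections import defaultdict

# Group (timestamp, key) events into tumbling windows of fixed size
# and count events per (window_start, key) -- the essence of a
# windowed aggregation in KSQL or Kafka Streams.
def tumbling_counts(events, window_size=60):
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "order"), (30, "order"), (65, "order"), (70, "payment")]
print(tumbling_counts(events))
# {(0, 'order'): 2, (60, 'order'): 1, (60, 'payment'): 1}
```

A hopping or session window changes only how `window_start` is derived; the per-window aggregation stays the same.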


10 Big Data Possibilities for 2017 Based on Oracle's Predictions - DZone Big Data

#artificialintelligence

Be it the outer reaches of IoT or the more intricate aspects of cloud computing, enterprise technologies are on the rise, facilitating dramatic transformations. Many companies are embracing Big Data, mainly as an advantage in this competitive era. In this post, we will talk about some of the predictions Oracle has made about Big Data and its future in 2017. Machine learning was previously restricted to data scientists, but 2017 will bring it out into the open. Be it Google's newest ranking algorithm or cutting-edge electronic gadgets, machine learning will find a foothold everywhere.


Apache Kafka Online Training | Kafka Certification Course | Edureka

@machinelearnbot

You have to build a system that is consistent in nature. For example, if you are receiving product feeds, either through flat files or an event stream, you have to make sure you don't lose any events related to a product, especially inventory and price. Price and availability should always be consistent, because there is always the possibility that the product has been sold, the seller doesn't want to sell it anymore, or any other reason. However, attributes like name and description don't cause that much trouble if they aren't updated on time. John wants to build an e-commerce portal like Amazon, Flipkart or Paytm.
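One common way to keep price and availability consistent when events may arrive out of order is a last-write-wins merge keyed by product ID. The sketch below (the event shape and field names are hypothetical) applies an update only when its timestamp is newer than what is already stored, so a stale update can never overwrite a newer one.

```python
# Keep the latest known price/inventory per product by comparing
# event timestamps. Event shape (product_id, timestamp, fields)
# is illustrative, not a real feed format.
def apply_events(events):
    state = {}  # product_id -> (timestamp, fields)
    for product_id, ts, fields in events:
        current = state.get(product_id)
        if current is None or ts > current[0]:
            state[product_id] = (ts, fields)
    return state

events = [
    ("sku-1", 100, {"price": 50, "stock": 3}),
    ("sku-1", 90,  {"price": 55, "stock": 4}),  # stale, ignored
    ("sku-1", 120, {"price": 45, "stock": 0}),  # newest wins
]
state = apply_events(events)
print(state["sku-1"][1])
# {'price': 45, 'stock': 0}
```

In Kafka terms, partitioning the topic by product ID gives per-product ordering, and this merge guards against any remaining reordering across feeds.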