Kafka


Salman Rushdie's Literary Inspirations

The New Yorker

The author of "The Eleventh Hour" looks back on a few works--by Mikhail Bulgakov, Franz Kafka, Voltaire, and E. M. Forster--that have helped him craft his own. Salman Rushdie prefers not to immerse himself in other people's writing when he is working on his own. "When I'm writing fiction, I tend not to read fiction. I actually don't want other people's voices to sneak into my head," Rushdie said recently. That's not to say that other writers' books aren't an important part of his process--posing questions, providing instruction, and offering models of characters.


Playing Kafka review – a well-intentioned but sanitised attempt at adapting the unadaptable

The Guardian

If Franz Kafka had lived to give notes on Playing Kafka, a new video game adaptation of his work, a big one might have been: where's the sex? What this interactive version of The Trial has in branching narrative, it lacks in sexuality: one can imagine the author-cum-playtester apoplectic at the absence of sadomasochism and lust. Overall, the choices made in this literal and lightly interactive adaptation seem calibrated to what is appropriate to leave running on a museum iPad. Simple binary choices and touchscreen controls set the bar to entry low, and there is no imagery to scandalise a visiting classroom. Playing Kafka, released just weeks before the centenary of Kafka's death, is a collaboration between the Goethe-Institut and the developer Charles Games (a studio, not a person).


Machine Learning Tutorial with Python, Jupyter, KSQL and TensorFlow

#artificialintelligence

When Michelangelo, Uber's machine learning platform, started, the most urgent and highest-impact use cases were some very high-scale problems, which led Uber to build around Apache Spark (for large-scale data processing and model training) and Java (for low-latency, high-throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine]. Uber expanded Michelangelo "to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything]." So why did Uber, like many other tech companies, build its own framework-independent machine learning infrastructure? The posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka ecosystem as a central, scalable, and mission-critical nervous system: it allows real-time data ingestion, processing, model deployment, and monitoring in a reliable and scalable way. This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers. By leveraging it to build your own scalable machine learning infrastructure, and to keep your data scientists happy, you can solve the same problems for which Uber built its own ML platform, Michelangelo.
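At its core, the pattern the post describes is a consume-score-produce loop: read raw events from one Kafka topic, apply a trained model, and write enriched events to another topic. The sketch below is a minimal, hypothetical version in plain Python; the topic names, the event schema (`feature_a`, `feature_b`), and the `ThresholdModel` stand-in are assumptions, and the consumer/producer are left abstract so the same logic could be wired to kafka-python or confluent-kafka clients.

```python
import json


def score(model, event: dict) -> dict:
    """Apply a trained model to one event and attach the prediction."""
    features = [event["feature_a"], event["feature_b"]]  # assumed schema
    event["prediction"] = model.predict([features])[0]
    return event


class ThresholdModel:
    """Stand-in for a real scikit-learn or TensorFlow model."""

    def predict(self, rows):
        return [1 if sum(r) > 1.0 else 0 for r in rows]


def run(consumer, producer, model, out_topic="scored-events"):
    # With a real broker, `consumer` would iterate messages from an input
    # topic (e.g. "raw-events") and `producer` would be a Kafka send call.
    # Here they are any iterable of JSON strings and any (topic, bytes)
    # callable, so the scoring logic stays testable without a cluster.
    for msg in consumer:
        event = json.loads(msg)
        scored = score(model, event)
        producer(out_topic, json.dumps(scored).encode("utf-8"))
```

Keeping the scoring function free of Kafka client objects is what lets the same code serve data scientists in a notebook and production engineers on the stream, which is exactly the impedance mismatch the post is about.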


Staff Data Engineer (Kafka, Java) at Visa - Bengaluru, India

#artificialintelligence

Visa is a world leader in digital payments, facilitating more than 215 billion payment transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable and secure payments network, enabling individuals, businesses and economies to thrive. When you join Visa, you join a culture of purpose and belonging – where your growth is a priority, your identity is embraced, and the work you do matters. We believe that economies that include everyone everywhere uplift everyone everywhere. Your work will have a direct impact on billions of people around the world – helping unlock financial access to enable the future of money movement.


Data Engineer at General System - London, England, United Kingdom

#artificialintelligence

The opportunity is for a Data Engineer to play a critical role in architecting and developing the components that form the Analytics platform, whilst implementing new ideas to solve novel challenges in geospatial analytics at scale. The Data Engineer will collaborate with Data Scientists to bring geospatial algorithms into production at scale, and will identify business requirements and opportunities, such as utilising new data sources or new ways to process and store data. Working primarily in Python and Scala, the Data Engineer will gain exposure to a range of technologies including Spark, Kafka, AWS, Airflow, Rust and much more. Our mission is to transform the way humans and machines understand the world. We are doing this by creating a real-time index of reality, enabling billions of machines and trillions of sensors to land, index, share and consume each other's data about the world as they move through it.


Lead Data Engineer, Kafka at ASAPP - Bengaluru

#artificialintelligence

Find open roles in Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Computer Vision (CV), Data Engineering, Data Analytics, Big Data, and Data Science in general, filtered by job title or popular skill, toolset and products used.


Senior Data Engineer at Trainline - London, United Kingdom

#artificialintelligence

But why should you join? You will be working in a high-performing, collaborative, multicultural team. We have over 42 nationalities across our 5 offices in London, Paris, Edinburgh, Barcelona and Milan, all working closely together. We want our people to stretch their minds and abilities, and to share their knowledge. Each year we hold The Trainline Tech Summit, which gives Trainliners an opportunity to stand up and share their story, learnings, or new skills with their colleagues in a safe environment. We've always paid special attention to flexible working, as we value a strong work/life balance. The pandemic has taught us that a balance between remote working and a collaborative office environment leads to productive teams. We prioritise being one team over elevating the heroics of an individual; for us, the true heroes are those who are excellent at nurturing and coaching, and generous in sharing their knowledge with others. Everything we do takes into account the morale of every member of our team, their opportunities for growth, and their participation in exciting challenges.


Senior Data Engineering Consultant at OpenCredo - London, England, United Kingdom

#artificialintelligence

OpenCredo (OC) is a UK-based software development consultancy helping clients achieve more by leveraging modern technology and delivery approaches. We are a community of passionate technologists who thrive on delivering pragmatic solutions to our clients' most complex challenges. Curious and tenacious, but always sensitive to our clients' context, we are not afraid to speak our minds to help steer our clients towards understanding and achieving their key goals. We are looking for a hands-on, senior-level data engineer with experience in dealing with a variety of data-centric problems and challenges. You understand the benefits of event streaming as well as the cases for ETL and batch processing, and have experience bringing these approaches together in a coherent solution.


Machine Learning Streaming with Kafka, Debezium, and BentoML

#artificialintelligence

Putting a machine learning project into production is not a simple task and, like any other software product, it requires many different kinds of knowledge: infrastructure, business, data science, and so on. I must confess that, for a long time, I simply neglected the infrastructure part, letting my projects rest in peace inside Jupyter notebooks. But as soon as I started learning it, I realized that it is a very interesting topic. Machine learning is still a growing field and, in comparison with other IT-related areas like web development, the community still has a lot to learn. Luckily, in recent years we have seen a lot of new technologies arise to help us build ML applications, like MLflow, Apache Spark's MLlib, and BentoML, explored in this post. In this post, a machine learning architecture built with some of these technologies is explored through a real-time price recommender system. To bring this concept to life, we needed not only ML-related tools (BentoML and scikit-learn) but also other software pieces (Postgres, Debezium, Kafka). Of course, this is a simple project that doesn't even have a user interface, but the concepts explored here could easily be extended to many real scenarios. I hope this post helped you somehow; I am not an expert in any of the subjects discussed, and I strongly recommend further reading (see some references below).
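The flow the post describes (a Postgres row change captured by Debezium, published to Kafka, and scored by a BentoML-served model) reduces at its core to unwrapping the Debezium change envelope and scoring the new row. The sketch below is a hypothetical stand-in, not the post's actual code: the table fields (`area`, `rooms`), the linear pricing rule, and the topic name in the comment are all assumptions.

```python
import json


def extract_row(debezium_message: str) -> dict:
    """Pull the post-change row out of a Debezium change-event envelope.

    Debezium wraps each change in an envelope whose payload carries the row
    state "before" and "after" the change; for scoring we want "after".
    """
    envelope = json.loads(debezium_message)
    return envelope["payload"]["after"]


def recommend_price(row: dict) -> float:
    """Stand-in for the BentoML-served model: a trivial linear rule."""
    return 100.0 * row["area"] + 5000.0 * row["rooms"]


def handle(messages):
    # With a real broker this would be a consumer loop over the Debezium
    # topic (e.g. "dbserver1.public.listings"), sending each extracted row
    # to the BentoML prediction endpoint instead of calling it in-process.
    return [recommend_price(extract_row(m)) for m in messages]
```

Because Debezium turns database writes into ordinary Kafka messages, the model never needs to poll Postgres: every insert or update arrives as an event, which is what makes the recommender "real-time".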


DataStax Astra Streaming Goes GA With New Built-in Support for Kafka and RabbitMQ

#artificialintelligence

DataStax, the real-time data company, announced the general availability (GA) of Astra Streaming, an advanced, fully managed messaging and event streaming service built on Apache Pulsar. Now featuring built-in API-level support for Kafka, RabbitMQ and Java Message Service (JMS), Astra Streaming makes it easy for enterprises to get real-time value from all their data-in-motion. "Because business happens in real time, continuously processing streams of data is imperative for enterprises to optimize decisions, actions and experiences. Streaming data can be a game changer for companies to make predictive business decisions and gain competitive advantages. Many enterprises are struggling with fragmented and complex streaming architectures, with most of their data-in-motion still siloed in legacy messaging and queuing middleware like JMS and RabbitMQ," said Chris Latimer, vice president of product management at DataStax.