redis
Novel Architecture for Distributed Travel Data Integration and Service Provision Using Microservices
Barua, Biman, Kaiser, M. Shamim
This paper introduces a microservices architecture designed to enhance the flexibility and performance of an airline reservation system. The design incorporates Redis caching, two messaging systems (Kafka and RabbitMQ), and two types of storage (MongoDB and PostgreSQL). It also introduces authorization techniques, including secure communication through OAuth2 and JWT, which is essential for managing high-demand travel services. According to the selected indicators, the architecture provides data consistency of 99.5% and a data propagation latency of less than 75 ms, allowing rapid and reliable intercommunication between microservices. A system throughput of 1050 events per second was achieved, so an acceptable service level was maintained even at peak times. Redis caching achieved a 92% cache hit ratio, lowering the burden on the database and increasing response speed. The system's scalability was further improved through the use of Docker and Kubernetes, which enabled services to be scaled horizontally to cope with changes in demand. Error rates were very low, at 0.2%, further underscoring the system's efficiency in handling real-time data integration. The proposed approach meets the specific needs of an airline reservation system: it is secure, fast, and scalable, improving both the user experience and operational efficiency. The low latency, high level of data integration, and efficient resource usage demonstrate the architecture's ability to provide continued support under ever-growing demand.
- Asia > Singapore (0.05)
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
- Africa > South Sudan > Equatoria > Central Equatoria > Juba (0.04)
- Transportation > Passenger (1.00)
- Transportation > Air (1.00)
- Information Technology > Security & Privacy (1.00)
- Consumer Products & Services > Travel (1.00)
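The cache-hit figure above comes from the cache-aside pattern a Redis layer typically implements: read from the cache first, and fall back to the database only on a miss. A minimal sketch, with a plain dict standing in for Redis (a real deployment would use redis-py's GET/SETEX against a Redis instance; the class and helper names here are ours):

```python
class CacheAside:
    """Cache-aside read path with hit-ratio bookkeeping."""

    def __init__(self, fetch_from_db):
        self.cache = {}                  # stand-in for Redis
        self.fetch_from_db = fetch_from_db
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.fetch_from_db(key)  # fall back to PostgreSQL/MongoDB
        self.cache[key] = value          # populate the cache for later readers
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Usage: repeated lookups of a hot key drive the hit ratio up.
store = CacheAside(lambda k: f"fare-record-{k}")
for _ in range(10):
    store.get("DAC-SIN")                 # 1 miss, then 9 hits
print(round(store.hit_ratio(), 2))       # 0.9
```

High hit ratios like the paper's 92% arise exactly this way: skewed access patterns mean most reads land on already-cached keys.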
A bug revealed ChatGPT users' chat history, personal and billing data - Help Net Security
A vulnerability in the redis-py open-source library was at the root of last week's ChatGPT data leak, OpenAI has confirmed. Not only were some ChatGPT users able to see what other users have been using the AI chatbot for, but limited personal and billing information ended up getting revealed, as well. ChatGPT suffered an outage on March 20 and then problems with making conversation history accessible to users. "During a nine-hour window on March 20, 2023, another ChatGPT user may have inadvertently seen your billing information when clicking on their own 'Manage Subscription' page," OpenAI notified 1.2% of the ChatGPT Plus subscribers via email. "The billing information another user might have seen consisted of your first and last name, billing address, credit card type, credit card expiration date, and the last four digits of your credit card. The information did not include your full credit card number, and we have no evidence that any customer information was viewed by more than one other ChatGPT user."
- Banking & Finance (1.00)
- Information Technology > Security & Privacy (0.89)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.53)
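The bug class behind this incident is worth illustrating. On a shared connection, Redis replies arrive in request order, so a request that is cancelled after being sent but before its reply is read leaves that reply on the wire for the next caller. A deliberately simplified model (not the actual redis-py code; the class and data here are hypothetical):

```python
from collections import deque

class SharedConnection:
    """Toy model of a pipelined connection: replies are read in send order."""

    def __init__(self):
        self.pending_replies = deque()

    def send(self, command, user):
        # The server answers every request; replies queue up in arrival order.
        self.pending_replies.append(f"data-for-{user}")

    def read_reply(self):
        return self.pending_replies.popleft()

conn = SharedConnection()
conn.send("GET billing", user="alice")
# alice's request is cancelled here: her reply is never read and stays queued.
conn.send("GET billing", user="bob")
reply_bob_sees = conn.read_reply()
print(reply_bob_sees)  # data-for-alice -- bob receives alice's data
```

The fix in real clients is to discard a connection whose request was interrupted mid-flight rather than return it to the pool with an unread reply.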
The AI Chatbot Handbook – How to Build an AI Chatbot with Redis, Python, and GPT
In order to build a working full-stack application, there are so many moving parts to think about. And you'll need to make many decisions that will be critical to the success of your app. For example, what language will you use and what platform will you deploy on? Are you going to deploy a containerised software on a server, or make use of serverless functions to handle the backend? Do you plan to use third-party APIs to handle complex parts of your application, like authentication or payments? Where do you store the data? In addition to all this, you'll also need to think about the user interface, design and usability of your application, and much more. This is why complex large applications require a multifunctional development team collaborating to build the app. One of the best ways to learn how to develop full stack applications is to build projects that cover the end-to-end development process. You'll go through designing the architecture, developing the API services, developing the user interface, and finally deploying your application. So this tutorial will take you through the process of building an AI chatbot to help you learn these concepts in depth. Important Note: This is an intermediate full stack software development project that requires some basic Python and JavaScript knowledge. I've carefully divided the project into sections to ensure that you can easily select the phase that is important to you in case you do not wish to code the full application.
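One core piece of such a chatbot is per-session chat history, which maps naturally onto Redis lists. A minimal sketch (not the tutorial's code): a dict-of-lists stands in for Redis, mirroring RPUSH/LRANGE; with redis-py you would call `r.rpush(key, msg)` and `r.lrange(key, 0, -1)` instead:

```python
import json

class ChatHistory:
    """Append-only per-session message log, Redis-list style."""

    def __init__(self):
        self.store = {}   # stand-in for a Redis instance

    def append(self, session_id, role, content):
        key = f"chat:{session_id}"
        msg = json.dumps({"role": role, "content": content})
        self.store.setdefault(key, []).append(msg)               # like RPUSH

    def messages(self, session_id):
        key = f"chat:{session_id}"
        return [json.loads(m) for m in self.store.get(key, [])]  # like LRANGE 0 -1

history = ChatHistory()
history.append("s1", "user", "Hello")
history.append("s1", "assistant", "Hi! How can I help?")
print(len(history.messages("s1")))  # 2
```

Storing each message as a JSON string keeps the list entries self-describing, so the worker that builds the model prompt can replay the conversation in order.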
Feature Stores for Real-time AI & Machine Learning - KDnuggets
Real-time AI/ML use cases such as fraud prevention and recommendations are on the rise, and feature stores play a key role in deploying them successfully to production. According to popular open source feature store Feast, one of the most common questions users ask in their community Slack is: how scalable / performant is Feast? This is because the most important characteristic of a feature store for real-time AI/ML is the feature serving speed from the online store to the ML model for online predictions or scoring. Successful feature stores can meet stringent latency requirements (measured in milliseconds), consistently (think p99) and at scale (up to hundreds of thousands, and even millions, of queries per second, with gigabyte- to terabyte-sized datasets), while at the same time maintaining a low total cost of ownership and high accuracy. As we will see in this post, the choice of online feature store as well as the architecture of the feature store play important roles in determining how performant and cost effective it is.
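The "p99" the article leans on means the latency below which 99% of requests fall, so a single slow outlier in a hundred requests is exactly what it surfaces. A small sketch of that bookkeeping (the nearest-rank helper below is ours, not from the article):

```python
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile: smallest sample covering p% of requests."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# 98 fast online-store reads and 2 slow outliers:
samples = [1.0] * 98 + [40.0, 40.0]
print(percentile(samples, 50))  # 1.0  -- the median hides the tail entirely
print(percentile(samples, 99))  # 40.0 -- the tail shows up at p99
```

This is why feature-store SLOs are stated at p99 rather than as averages: online inference is only as fast as its slowest common-case read.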
Running Redis on Google Colab - KDnuggets
Google Colab is a popular browser-based environment for executing Python code on hosted Jupyter notebooks and training models for machine learning, including free access to GPUs! It is a great platform for data scientists and machine learning (ML) engineers for learning and quickly developing ML models in Python. Redis is an in-memory open source database that is increasingly being used in machine learning - from caching, messaging and fast data ingest, to semantic search and online feature stores. In fact, NoSQL databases - and specifically Redis - were named by Ben Weber, Director of Applied Data Science at Zynga, as one of the 8 new tools he learned as a data scientist in 2020. Because of the increasing use of Redis for data science and machine learning, it is very handy to be able to run Redis directly from your Google Colab notebook!
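A typical setup for this (our sketch, not necessarily the article's exact cells) installs the Debian Redis package inside the Colab VM and starts it in the background; in a notebook each line would be prefixed with `!`:

```shell
# Environment setup in the Colab VM (hypothetical cells, run with "!"):
apt-get -qq install -y redis-server   # install the packaged Redis build
redis-server --daemonize yes          # start the server in the background
redis-cli ping                        # should reply PONG once it is up
```

Since Colab VMs are ephemeral, this has to be re-run each session; anything stored in Redis disappears when the runtime is recycled.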
Building a Scalable ML Feature Store with Redis
When a company with millions of consumers such as DoorDash builds machine learning (ML) models, the amount of feature data can grow to billions of records with millions actively retrieved during model inference under low latency constraints. These challenges warrant a deeper look into selection and design of a feature store -- the system responsible for storing and serving feature data. The decisions made here can prevent overrunning cost budgets, compromising runtime performance during model inference, and curbing model deployment velocity. Features are the input variables fed to an ML model for inference. A feature store, simply put, is a key-value store that makes this feature data available to models in production. At DoorDash, our existing feature store was built on top of Redis, but had a lot of inefficiencies and came close to running out of capacity. We ran a full-fledged benchmark evaluation on five different key-value stores to compare their cost and performance metrics.
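The "key-value store" framing above maps naturally onto one hash per entity, with feature names as fields, mirroring Redis's HSET/HMGET. A minimal sketch with a nested dict standing in for the store (entity keys and feature names below are illustrative, not DoorDash's schema):

```python
class FeatureStore:
    """One hash per entity: entity key -> {feature name: value}."""

    def __init__(self):
        self.store = {}

    def put_features(self, entity_key, features):
        self.store.setdefault(entity_key, {}).update(features)  # like HSET

    def get_features(self, entity_key, names):
        row = self.store.get(entity_key, {})
        # Missing features come back as None so the model can impute defaults.
        return [row.get(n) for n in names]                      # like HMGET

fs = FeatureStore()
fs.put_features("consumer:42", {"avg_order_value": 23.5, "orders_30d": 7})
print(fs.get_features("consumer:42", ["orders_30d", "avg_order_value", "churn_score"]))
# [7, 23.5, None]
```

Fetching many features for one entity in a single round trip is the operation a benchmark like the one described has to optimize, since model inference blocks on it.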
- Asia > Kazakhstan > West Kazakhstan Region (0.04)
- Africa > Middle East > Egypt > Nile Delta (0.04)
Deep Face Recognition with Redis - Sefik Ilkin Serengil
Key-value databases offer a level of speed and performance that we mostly cannot reach with relational databases. Here, similar to Cassandra, Redis is a fast key-value store solution. In this post, we are going to adopt Redis to build a high-performing face recognition application. On the other hand, this approach could be adapted to NLP studies or any reverse image search case, such as in Google Images. The official Redis distribution is available for Linux and macOS here.
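The recognition step this post builds toward reduces to nearest-neighbor search over face embeddings: each known identity maps to a vector (Redis would hold these as key-to-vector entries), and a query face matches the identity with the highest cosine similarity. A toy sketch with made-up 3-dimensional vectors standing in for real 128-dimensional embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

db = {  # identity -> face embedding (tiny stand-ins for real embeddings)
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.0, 0.8, 0.6],
}

query = [0.85, 0.15, 0.05]          # embedding of the face to recognize
best = max(db, key=lambda name: cosine(query, db[name]))
print(best)  # alice
```

In production one would also apply a similarity threshold, so that faces unlike anyone in the database are rejected rather than matched to the least-bad candidate.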
Deep learning in production with Keras, Redis, Flask, and Apache - PyImageSearch
Shipping deep learning models to production is a non-trivial task. If you don't believe me, take a second and look at the "tech giants" such as Amazon, Google, Microsoft, etc. -- nearly all of them provide some method to ship your machine learning/deep learning models to production in the cloud. Going with a model deployment service is perfectly fine and acceptable…but what if you wanted to own the entire process and not rely on external services? This type of situation is more common than you may think. How would you go about shipping your deep learning models to production in these situations, and perhaps most importantly, making it scalable at the same time?
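The self-hosted answer the article works up to is a queueing pattern: the web tier pushes inference jobs onto a Redis list, and a separate worker process polls the list, runs the model, and writes each result back under the job's id. A sketch of that loop, with a deque and a dict standing in for Redis LPUSH/RPOP and SET/GET, and a placeholder function in place of a real Keras model:

```python
import json
from collections import deque

queue = deque()    # stand-in for the Redis job list
results = {}       # stand-in for Redis keys holding results

def enqueue(job_id, image):
    queue.append(json.dumps({"id": job_id, "image": image}))   # like LPUSH

def model_predict(image):
    return {"label": "beagle", "probability": 0.94}  # placeholder inference

def worker_step():
    if not queue:
        return False
    job = json.loads(queue.popleft())                 # like RPOP in a poll loop
    results[job["id"]] = model_predict(job["image"])  # like SET result:<id>
    return True

enqueue("job-1", "<base64 image bytes>")
while worker_step():
    pass
print(results["job-1"]["label"])  # beagle
```

Decoupling the web tier from inference this way is what makes the setup scalable: workers can be added independently of the Flask frontends, and the queue absorbs bursts.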
An Introduction to Redis-ML (Part Three) Redis Labs
This post is part three of a series of posts introducing the Redis-ML module. The first article in the series can be found here. The sample code for this post requires several Python libraries and a Redis instance running Redis-ML. Detailed setup instructions to run the code can be found in either part one or part two of the series. Logistic regression is another linear model for building predictive models from observed data.
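As the post notes, logistic regression is a linear model: it computes a weighted sum of the inputs and squashes it through the sigmoid to obtain a probability. A self-contained sketch of that prediction step (the weights below are illustrative, not from the post):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, bias, x):
    """P(y=1 | x) for a logistic regression model."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return sigmoid(z)

weights, bias = [1.5, -2.0], 0.25
p = predict_proba(weights, bias, [2.0, 1.0])
print(round(p, 3))  # sigmoid(1.5*2 - 2.0*1 + 0.25) = sigmoid(1.25) ~= 0.777
```

Serving such a model from Redis-ML amounts to storing the weight vector and bias server-side so this dot-product-plus-sigmoid runs next to the data.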