Ray 2.2 boosts machine learning observability and scalability performance
Ray, the popular open-source machine learning (ML) framework, has released version 2.2 with improved performance and observability capabilities, as well as features that help enable reproducibility. Ray is widely used by organizations to scale ML models across clusters of hardware, for both training and inference. Among Ray's many users is generative AI pioneer OpenAI, which uses Ray to scale and enable a variety of workloads, including supporting ChatGPT. The lead commercial sponsor behind the open-source Ray technology is San Francisco-based Anyscale, which has raised $259 million in funding to date.
Ray, the machine learning tech behind OpenAI, levels up to Ray 2.0
Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence (AI) workloads has been the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart. Ray enables machine learning (ML) models to scale across hardware resources and can also be used to support MLops workflows across different ML tools. Ray 1.0 came out in September 2020 and has gone through a series of iterations since. Today, the project reached its next major milestone with the general availability of Ray 2.0, announced at the Ray Summit in San Francisco.