How practical AI prevailed over hype at Red Hat Summit 2025

ZDNet 

At the Red Hat Summit and Ansible Fest in Boston this month, much of the hype and overpromising around generative AI took a back seat to conversations about how organizations can actually build and deploy AI for their own business using their own data. This being a Red Hat Summit, there was plenty of focus on core topics: open source, with the release of Red Hat Enterprise Linux 10, and automation and management with Ansible. But, as everywhere these days, AI took up much of the attention at the conference, and at least much of it was refreshingly practical.

Rather than the more hyped AI areas such as AI assistants, which a recent Aberdeen/ZDNet poll found to be of limited interest to a majority of users, most of the sessions and even the major announcements focused on technologies and strategies that businesses can use today to get the most out of AI while leveraging their own data in a secure and efficient manner.

For example, there was a great deal of focus on inferencing, the process of running a trained AI model on new data to make predictions or decisions. Announcements around technologies such as vLLM and llm-d promise improved scaling and deployment options that simplify the complexities of inferencing while spreading compute loads.
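To make the training-versus-inferencing distinction concrete, here is a minimal sketch in plain Python. It assumes nothing about the products mentioned above: the weights and the tiny classifier are entirely hypothetical stand-ins for a model that has already been trained, and the point is simply that inference applies fixed parameters to new data without updating them.

```python
import math

# Hypothetical weights from an already-trained model. Inference never
# changes these; it only applies them to new, unseen inputs.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1

def predict(features):
    """One inference pass: weighted sum of inputs plus bias,
    squashed through a sigmoid into a probability-like score."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

# A new data point arrives; inferencing turns it into a prediction.
score = predict([1.0, 0.2, 0.5])
print(round(score, 3))
```

Serving frameworks like vLLM operate on this same principle at vastly larger scale: the expensive part is running billions of such fixed-weight computations per request, which is why scheduling and spreading that compute load is where the engineering effort goes.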
