Will generative AI in the cloud become affordable?

InfoWorld News

Try reading any tech or business article these days without finding a mention of generative AI. However, the computing and infrastructure costs of running generative AI models in the cloud are a barrier for many businesses. In a recent PwC study, 59% of leaders said they will invest in new technologies, and 46% said they will invest in generative AI specifically in the next 12 to 18 months. The most significant hurdle is securing enough cloud bandwidth and computing power to accommodate usage and enable scalability, which means coming to terms with how much money can be spent on new generative AI systems and generative AI enablement.


Google indemnifies generative AI customers over IP rights claims

InfoWorld News

Google announced on Thursday that it will protect its generative AI customers against intellectual property claims made over the data used by, or the output served from, Google-hosted AI models. By extending this protection across its cloud and Workspace environments, Google joins the technology firms that have recently announced IP support for their own generative AI tools, including IBM, Microsoft, Amazon, and Adobe. Google said the protection would span all Google environments that use the Duet AI collaborator as well as Vertex AI, the company's homegrown generative AI engine. Indemnity clauses from leading technology companies will likely offer reassurance as concerns over generative AI's privacy, security, and intellectual property risks peak.


IBM takes on AWS, Google, and Microsoft with Watsonx

InfoWorld News

IBM is taking on the likes of Microsoft, AWS, and Google by introducing Watsonx, a new generative AI platform that will help enterprises design and tune large language models (LLMs) for their operational and business requirements. Watsonx comes with a suite of tools for tuning LLMs, a data store built on lakehouse architecture, and an AI governance toolkit, the company said. Watson AI is IBM's artificial intelligence engine, built on a range of machine learning algorithms along with question analysis, natural language processing, feature engineering, and ontology analysis; Watsonx can be seen as its evolution. With the Watsonx platform, the company said it is trying to meet enterprises' requirements in five areas: interacting and conversing with customers and employees, automating business workflows and internal processes, automating IT processes, protecting against threats, and tackling sustainability goals.


Semantic Kernel: A bridge between large language models and your code

InfoWorld News

At first glance, building a large language model (LLM) like GPT-4 into your code might seem simple. The API is a single REST call, taking in text and returning a response based on the input. In practice, though, things get much more complicated than that. The API is perhaps better thought of as a domain boundary, where you deliver prompts that define the format the model uses for its output. And that's a critical point: LLMs can be as simple or as complex as you want them to be.
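
To make that concrete, here is a minimal sketch of the single REST call described above, written in Python with the requests library. It assumes an OpenAI-style chat completions endpoint and an API key in an OPENAI_API_KEY environment variable; the model name and prompts are placeholders, and a framework like Semantic Kernel wraps calls of this shape behind higher-level abstractions.

# Minimal sketch: one REST call in, one text response out.
# Assumes an OpenAI-style chat completions endpoint and OPENAI_API_KEY set in the environment.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [
            # The prompt defines the format the model should use for its output.
            {"role": "system", "content": "Answer in one sentence of plain text."},
            {"role": "user", "content": "What does a semantic kernel do?"},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Everything beyond that single call, such as prompt templates, chaining calls together, and grounding the model in your own data, is where a framework like Semantic Kernel comes in.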


New HPE offerings aim to turbocharge machine-learning implementation

InfoWorld News

HPE has released a pair of systems designed to broaden the uptake and speed deployment of machine learning among enterprises. Swarm Learning is aimed at bringing the wisdom of crowds to machine learning modeling without sacrificing security, while the Machine Learning Development System is meant to offer a one-box training solution for companies that would otherwise have had to design and build their own machine learning infrastructure. The Machine Learning Development System is available in physical footprints of several different sizes. The company says a "small configuration" uses an Apollo 6500 Gen10 compute server to provide the horsepower for machine learning training; HPE ProLiant DL325 servers and Aruba CX 6300 switches for management of system components; and NVIDIA's Quantum InfiniBand networking platform, along with HPE's specialist Machine Learning Development Environment and Performance Cluster management software suites. According to IDC research vice president Peter Rutten, it's essentially bringing HPC (high performance computing) capabilities to enterprise machine learning, something that would usually require enterprises to architect their own systems. "It is the kind of system that businesses are really looking for, now that AI is more mature," he said.
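
The "wisdom of crowds" framing behind Swarm Learning is, at its core, the swarm/federated learning idea: each site trains on its own private data and shares only model parameters, which are averaged into a shared model. The toy Python sketch below illustrates that idea in the abstract; it is not HPE's implementation, and the data, model, and training loop are invented purely for illustration.

# Toy sketch of the swarm/federated learning idea (not HPE's Swarm Learning implementation):
# each site trains locally on its private data and shares only its model weights.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, steps=50):
    """Run a few steps of linear-regression gradient descent on one site's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three sites, each holding private data drawn from the same underlying relationship.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

# Swarm rounds: only weight vectors cross site boundaries; raw data never leaves a site.
shared_w = np.zeros(2)
for _ in range(5):
    local_updates = [local_train(shared_w, X, y) for X, y in sites]
    shared_w = np.mean(local_updates, axis=0)  # average the local updates ("wisdom of crowds")

print(shared_w)  # close to true_w, learned without ever pooling the sites' raw data

The security claim rests on that data flow: model weights are exchanged and averaged, while the training data itself stays local to each participant.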


Swift for TensorFlow project shuts down

InfoWorld News

Swift for TensorFlow, a Google-led project to integrate the TensorFlow machine learning library and Apple's Swift language, is no longer in active development. Nevertheless, parts of the effort live on, including differentiable programming support for the Swift language. The GitHub repo for the project notes it is now in archive mode and will not receive further updates. The project, the repo notes, was positioned as a new way to develop machine learning models: "Swift for TensorFlow was an experiment in the next-generation platform for machine learning, incorporating the latest research across machine learning, compilers, differentiable programming, systems design, and beyond."