GPT-4's Multimodal Features: The Next Frontier in AI?

#artificialintelligence

An announcement by Microsoft Germany has revealed that the latest version of the Generative Pre-trained Transformer language model, GPT-4, will be released in the coming week. The upcoming version is set to be more advanced than its predecessor, #GPT3: it will be multimodal, meaning it will be able to process and comprehend various types of data, including text, images, and audio. This new feature is expected to make #GPT4 a more versatile and powerful language model, with potential applications in fields such as natural language processing, advanced voice recognition, and image analysis and understanding. Clemens Siebler of #Microsoft presented several real-life examples of existing AI capabilities.


GPT-4 is coming next week – and it will be multimodal, says Microsoft Germany

#artificialintelligence

GPT-4 is coming next week: at an approximately one-hour hybrid information event entitled "AI in Focus - Digital Kickoff" on 9 March 2023, four Microsoft Germany employees presented Large Language Models (LLMs) such as the GPT series as a disruptive force for companies, and detailed their Azure-OpenAI offering. The kickoff event was held in German; the news outlet Heise was present. Rather casually, Andreas Braun, CTO of Microsoft Germany and Lead Data & AI STU, mentioned what he said was the imminent release of GPT-4. That Microsoft is fine-tuning multimodality with OpenAI should no longer have been a secret since the release of Kosmos-1 at the beginning of March. "We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities – for example videos," Braun said.