Drilling into Einstein GPT - is generative AI trustworthy enough for enterprise use cases?

Salesforce is making a big deal this week of building OpenAI's GPT-3 technology, which powers ChatGPT, into a broad swathe of its products, describing its Einstein GPT offering as "the world's first generative AI CRM technology." But as I explored in an interview published yesterday with Emergence Capital's Jake Saper, there are big risks in using these Large Language Models (LLMs) in a business context. I spent the day investigating whether Salesforce is cognizant of those risks, and what steps it is taking to ensure its customers don't fall foul of them when implementing solutions based on Einstein GPT.

On the face of it, generative AI looks like it can bring a massive boost to business productivity: it makes it easier to summarize information from unstructured data stored in documents, knowledge bases and message streams; to prepare ready-made drafts of messages, emails and web content for sales, service and marketing; and to generate chunks of code and test routines for developers.

But in more than twenty-five years of writing about and reporting on technology, I've seen enough to know that it's always sensible to look behind the hype and the enthusiastic demos to figure out the hidden downsides. Where could it all go wrong?
