
Inworld closes $50M Series A for its realistic NPC generator

#artificialintelligence

Inworld, a Disney-backed startup using AI to create realistic non-playable characters (NPCs), has closed a $50 million Series A funding round. The startup has attracted interest for its ability to design and deploy interactive characters with more realistic interactions across the metaverse and other virtual worlds such as video games. In current video games, NPCs have pre-scripted responses. AI-powered virtual characters like those Inworld is developing can offer dynamic responses to general questions about the local area or the wider world. While graphics have generally become more immersive over the years, interactions have largely remained the same.


Disney-backed Inworld raises cash for AI-powered characters – TechCrunch

#artificialintelligence

If software is eating the world, AI isn't far behind. AI-powered text-, art- and audio-generating systems will soon make -- and already are making -- their way into the tools people use every day, from programming environments and spellcheck plugins to concept art creation platforms. The video game industry is no exception to this, and that hardly comes as a surprise. As illustrated by games like AI Dungeon, AI -- while imperfect -- can inject surprising creativity and novelty into branching narrative storytelling. Inworld AI was founded on this premise.


New-and-Improved Content Moderation Tooling

#artificialintelligence

We are introducing a new-and-improved content moderation tool: the Moderation endpoint improves upon our previous content filter and is available for free today to OpenAI API developers. To help developers protect their applications against possible misuse, the faster and more accurate Moderation endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content -- an instance of using AI systems to assist with human supervision of these systems. We have also released a technical paper describing our methodology and the dataset used for evaluation. Given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm -- content prohibited by our content policy.
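As a rough illustration of how such an endpoint is typically called, the sketch below assembles an HTTP request for the Moderation endpoint. The URL and `input` field follow OpenAI's publicly documented REST API; the API key and input text are placeholders, and the request is only constructed here, not sent.

```python
import json

MODERATION_URL = "https://api.openai.com/v1/moderations"

def build_moderation_request(api_key: str, text: str) -> dict:
    """Assemble the pieces of a POST to the Moderation endpoint."""
    return {
        "url": MODERATION_URL,
        "headers": {
            "Content-Type": "application/json",
            # Standard OpenAI API bearer-token authentication.
            "Authorization": f"Bearer {api_key}",
        },
        "body": json.dumps({"input": text}),
    }

# Placeholder key and sample user-generated text.
request = build_moderation_request("sk-...", "Some user-generated text")
```

The response contains an overall `flagged` boolean plus per-category scores covering the policy areas named above (sexual, hateful, violent, self-harm content), which an application can use to block or escalate the input.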