The language industry is having a moment. The ongoing global health crisis has forced organizations to break down borders and support a global remote workforce, requiring more cross-language interaction and coordination than ever before. At the same time, technological innovation in the language translation industry is at an all-time high. We've never before had access to such sophisticated tools for managing translation processes. I predict it's going to be an exciting year in the industry, with an unprecedented level of innovation.
You can get the book for 37% off by entering fccmunro into the discount code box at checkout at manning.com. One of the most important questions in technology today is how humans and machines can work together to solve problems. More than 90% of applications that use artificial intelligence improve with human feedback. For example, autonomous vehicles get smarter the more they observe human drivers; smart devices get smarter as they hear more voice commands; and search engines get smarter by observing which sites people actually click on for each search term. Human-in-the-Loop Machine Learning details the process of optimizing the interaction between machine learning algorithms and the humans who create the data that powers those algorithms.
Thanks to breakthroughs in natural language processing (NLP), machines can generate increasingly sophisticated representations of words. Every year, research groups release ever more powerful language models -- such as the recently announced GPT-3, M2M-100, and mT5 -- that can write complex essays or translate text between multiple languages with better accuracy than previous iterations. However, because machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably pick up the human biases that exist in language data itself. This summer, GPT-3 researchers discovered inherent biases in the model's output related to gender, race, and religion. The gender biases included associations between gender and occupation, as well as gendered descriptive words.
During a recent press briefing, a Facebook spokesperson said that the social media giant would be redoubling its efforts to counter 'harmful content' on its platform using artificial intelligence. Reportedly, Ryan Barnes, Facebook's Product Manager of Community Integrity, said that the company would use AI to prioritise harmful content. The move is aimed at helping its more than 15,000 human reviewers and moderators deal with reported content. Barnes said during the briefing, "We want to make sure we're getting to the worst of the worst, prioritising real-world imminent harm above all." That said, there have been numerous attempts in the past to bring AI into the content moderation process on Facebook's platforms. However, not all of them have met with success.
Remember Facebook's automated personal assistant, M, which was released in a bid to compete with Alexa and Siri? After a series of embarrassing mishaps caused by poorly trained algorithms, Facebook abruptly pulled the plug. Facebook wasn't alone; chatbots are infamous for putting their metaphorical feet in their mouths. While these debacles are tough to watch, the underlying problem is not artificial intelligence (AI) itself. AI succeeds when it is underpinned by sound strategy and well-trained models.