Entertainment companies are entering the Age of Data, in which they'll have access to more information than ever about their products, their audiences, and how to create, market, and distribute the former to the latter. Those companies and their leadership now have to be ready to embrace the enormous opportunities ahead, especially as data-driven competitors such as Netflix, MoviePass, and Amazon transform the industry. That was one message this morning from Stephen F. DeAngelis, CEO and founder of AI provider Enterra Solutions, speaking before a group of Hollywood technology executives in Beverly Hills. He noted wryly that Hollywood has portrayed AI technologies in dark, or at least complicated, ways over the years, from the murderous HAL 9000 in 2001: A Space Odyssey to the world-ending SkyNet in the Terminator films to the runaway AIs of Ex Machina and Her. We're still quite a ways from AI with that kind of power and autonomy, DeAngelis said, but he cautioned that people think of AI tools in overly limited ways.
"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction," writes the renowned theoretical physicist Stephen Hawking along with his colleagues Stuart Russell, Max Tegmark, and Frank Wilczek. "But this would be a mistake, and potentially our worst mistake in history." [The Independent, 1 May 2014] Anything "sensational" that Hawking writes concerning science and technology garners headlines, and most news sources have trumpeted his and his colleagues' warnings about the dangers of developing artificial intelligence (AI) while ignoring what they had to say about its benefits. "The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history."
Alexander Eule writes, "First Google's self-driving car crashed, then Microsoft's Twitter bot started spewing inappropriate tweets. I'm sure there are lots of lessons, but one big lesson is that we need to program ethics into artificial intelligence (AI) systems. Last year an article from Taylor & Francis posed the question: Is it time we started thinking about programming ethics into our artificial intelligence systems?" That article observes, "Naturally, to be able to create such morally autonomous robots, researchers have to agree on some fundamental pillars: what moral competence is and what humans would expect from robots working side by side with them, sharing decision making in areas like healthcare and warfare."
"Nothing ever comes to one, that is worth having," Booker T. Washington once remarked, "except as a result of hard work." I'm a firm believer that worthwhile work is good for the body and the soul; however, I also believe that tedious, repetitive work can be soul-crushing. Rachel King (@sfwriter) reports, "Over the past year, some workers at AT&T have begun to automate the boring, repetitive parts of their jobs by using software bots." Historically, technology has been developed to handle repetitive tasks because most people hate doing them and because bored workers are prone to making mistakes. In spite of concerns about automation eliminating jobs, technology's advance is not going to be halted.