To live in harmony with AI we must create a modern Magna Carta

#artificialintelligence

We stand at a watershed moment for society's vast, unknown digital future. A powerful technology, artificial intelligence (AI), has emerged from its own ashes, thanks largely to advances in neural networks modeled loosely on the human brain. AI can find patterns in massive unstructured data sets, improve its performance as more data becomes available, identify objects quickly and accurately, and make ever more numerous and better recommendations and decisions, all while minimizing interference from complicated, political humans. This raises major questions about the degree of human choice and inclusion in the decades to come. How will humans, across all levels of power and income, be engaged and represented?


Hult reviews the future of artificial intelligence with TEDx Talks Hult Blog

#artificialintelligence

This article was originally published by BusinessBecause on October 18th, 2017, as "Hult Reviews The Future Of Artificial Intelligence With TEDx Talks." Olaf Groth, digital expert and professor at Hult International Business School, wants a new, digital Magna Carta to help humans harness the potential of AI. Right now, in China, Communist party officials are working with big data analytics and artificial intelligence (AI) experts to launch a new, multi-billion-dollar tool to assign a 'Trust Rating' to each of China's citizens. Designed to clamp down on corruption, the plan is to generate a social credit score for each citizen based on how trustworthy they are. Automatically, through AI technology, citizens with good scores will benefit; citizens with bad scores could face punishments, blacklisting, and restrictions.


Ethics and Emotional Intelligence in a Future of AI

#artificialintelligence

There's no doubt Artificial Intelligence (AI) – machines that reproduce human thought and actions – is on the rise, both in the scientific community and in the news. And along with AI comes "emotional AI": from systems that can detect users' emotions and adjust their responses accordingly, to learning programs that provide emotional analysis, to devices, such as smart speakers and virtual assistants, that mimic human interactions. As the pace of AI development and implementation accelerates – with the potential to change the ways we live and work – the ethics and empathy that guide those designing the technology of our future will have far-reaching consequences. It is this moral dimension that concerns me most: do the organizations and software developers creating these programs have an ethical rudder? Long before the concept of AI became commonplace, science fiction writer Isaac Asimov introduced the "Three Laws of Robotics" in his 1942 short story "Runaround" (later included in his 1950 collection, I, Robot). Much of Asimov's robot-based fiction hinges upon robots finding loopholes in their interpretations of the laws, which are programmed into them as a safety measure that cannot be bypassed.


On the Brink of an Artificial Intelligence Arms Race

#artificialintelligence

This article was originally published by the World Economic Forum. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – it's difficult to visualize them as serious threats. Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real-time; algorithms beat us at Go; robots become better at getting up when they fall over. It's obvious how developing these technologies will benefit humanity. But, then, don't all the dystopian sci-fi stories start out this way?


Are we on the brink of an artificial intelligence arms race?

#artificialintelligence

There is a need for a new global platform to monitor, consider, and make recommendations about the implications of emerging technologies in general, and AI more specifically, for international security. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – it's difficult to visualize them as serious threats. Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real-time; algorithms beat us at Go; robots become better at getting up when they fall over. It's obvious how developing these technologies will benefit humanity. But, then – don't all the dystopian sci-fi stories start out this way?