Intelligence


Human-machine collaboration and the future of work

#artificialintelligence

We naturally think of "intelligence" as a trait belonging to individuals. As students, employees, soldiers, artists, and athletes, we are all regularly evaluated in terms of personal accomplishment, and "lone hero" narratives prevail in accounts of scientific discovery, politics, and business. Similarly, artificial intelligence is typically defined as a quest to build individual machines that possess different forms of intelligence, even the kind of general intelligence measured in humans for more than a century. Yet focusing on individual intelligence, whether human or machine, can distract us from the true nature of accomplishment. As Thomas Malone, professor at MIT's Sloan School of Management and director of its Center for Collective Intelligence, notes: "Almost everything we humans have ever done has been done not by lone individuals, but by groups of people working together, often across time and space." Malone, the author of 2004's The Future of Work and a pioneering researcher in the field of collective intelligence, is in a singular position to understand the potential of AI technologies to transform workers, workplaces, and societies. In this conversation with Deloitte's Jim Guszcza and Jeff Schwartz, he discusses a vision outlined in his recent book Superminds: a framework for achieving new forms of human-machine collective intelligence, and its implications for the future of work.

Can you tell us what a "supermind" is, and how you define collective intelligence?

Thomas Malone, director, MIT Center for Collective Intelligence: A "supermind" is a group of individuals acting collectively in ways that seem intelligent, and collective intelligence essentially has the same definition. For many years, I defined collective intelligence as groups of individuals acting collectively in ways that seem intelligent.


Is AI as smart as a chimp or a lab rat? The Animal-AI Olympics is going to find out.

MIT Technology Review

In one of Aesop's fables, a thirsty crow finds a pitcher with a small amount of water beyond the reach of its beak. After failing to push the pitcher over, the crow drops pebbles in one by one until the water level rises, allowing the bird to have a drink. For Aesop, the fable showed the superiority of intelligence over brute strength. Two and a half millennia later, we may get to see whether an AI can pass Aesop's ancient intelligence test. In June, researchers will train algorithms to master a suite of tasks that have traditionally been used to test animal cognition.


AIs go up against animals in an epic competition to test intelligence

New Scientist

Some artificial intelligences can perform tasks with superhuman ability, but just how clever are they overall? A competition called the Animal-AI Olympics will pit AIs against tests normally used to study animal intelligence. From April, AIs will battle it out in a virtual playground for a $10,000 prize pool. All the tasks involve retrieving a piece of food, but the skills needed to succeed vary in complexity.


How Human Intelligence Differs From Artificial Intelligence

#artificialintelligence

Through my Twitter and LinkedIn feeds I see a lot of posts about technology. Many people (primarily technology experts) write about the massive potential of technologies such as Artificial Intelligence (AI), Blockchain, Cloud, the Internet of Things (IoT), and mobile. In this blog post I will refer specifically to AI, not to the other technologies. Other people write about AI in a way that suggests they fear it: that AI is a risk, perhaps more than an opportunity. Articles with titles like "Robots will take our jobs. We'd better plan now, before it's too late" can create fear, especially when non-experts read the headline on Twitter and absorb the implication that robots endanger their jobs, without reading the full article or researching the topic further.


Combining AI's Power with Self-centered Human Nature Could Be Dangerous

#artificialintelligence

If we could shrink the entire history of our planet into a single year, humans would have shown up at roughly 11pm on 31 December. In the grand scheme of things, we are insignificant. Yet if we widen our view to the entire observable universe, our evolutionary success is a stroke of near-impossible luck, the product of all the biological conditions and chance events required for us to become the dominant species on this planet. Of the roughly 300 billion star systems in the Milky Way, Earth is the only planet on which we know life exists. And of the estimated 8.7 million species on Earth, we are the first to develop general intelligence.


AI explained

#artificialintelligence

In this lecture, I will offer you a definition of artificial intelligence, or AI, and give you a brief overview of its history from its inception in the 1950s. Let's start by saying what AI isn't. AI is not machines that think, or even computers that work the way the brain works. AI is what machines do, not how they do it. The authors of a leading textbook on AI have offered eight possible definitions of the term.


The truth about artificial intelligence in medicine

#artificialintelligence

For many months, artificial intelligence has been in my peripheral vision, just sitting there, ignored by me because it seemed too far in the future to be interesting now. And then there were all these terms -- Big Data, machine learning, data science -- which circled the subject and, frankly, gave me a bit of a headache. Artificial intelligence is upon us, unleashed and unbridled in its ability to transform the world. If, in the previous technological revolution, machines were invented to do the physical work, then in this revolution, machines are being invented to do the thinking work. And no field involves more thinking than medicine.


We're Living in the Last Era Before Artificial General Intelligence

#artificialintelligence

An artificial general intelligence is coming, and we have no idea how Homo sapiens might be affected. When we thought about preparing for the future, we used to think about going to a good college and moving for a good job, one that would put us on a solid career trajectory toward a stable life in a free-market meritocracy where we compete against fellow humans. Over the next few decades, however, Homo sapiens, including Generations Z and Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world. Considering the exponential technological progress expected in the next 30 years, that is hard to put into words or even into historical context: there is no historical precedent, and no vocabulary, for what next-generation AI might become.


Artificial General Intelligence: Did it gain traction in research in 2018? (Packt Hub)

#artificialintelligence

In 2017, we predicted that artificial general intelligence would gain traction in research and that certain areas of research would contribute toward AGI systems. The prediction appeared alongside other AI predictions in an article titled "18 striking AI Trends to watch in 2018". Let's see how 2018 went for AGI research. Artificial general intelligence, or AGI, is an area of AI that aims to build machines whose intelligence comes closer to the complex nature of human intelligence. Such a system could, in theory, perform the tasks that a human can, with the ability to learn as it progresses through tasks and collects data and sensory input.


Artificial intelligence: Can ethics keep pace with technological progress? (Apolitical)

#artificialintelligence

This piece was written by Nicoletta Iacobacci, global ethics catalyst, adjunct professor at Webster University and Jinan University, and author of Exponential Ethics. Science fiction is becoming science fact as exponential growth in technology happens all around us. Ethics, however, has a hard time keeping pace. While new moral guidelines are defined for existing anomalies, technology surges ahead, giving rise to new ethical debates and making it increasingly difficult to keep up with the paradigm shifts of our own innovations.