human-like artificial intelligence
A social path to human-like artificial intelligence
Duéñez-Guzmán, Edgar A., Sadedin, Suzanne, Wang, Jane X., McKee, Kevin R., Leibo, Joel Z.
Traditionally, cognitive and computer scientists have viewed intelligence solipsistically, as a property of unitary agents devoid of social context. Given the success of contemporary learning algorithms, we argue that the bottleneck in artificial intelligence (AI) progress is shifting from data assimilation to novel data generation. We bring together evidence showing that natural intelligence emerges at multiple scales in networks of interacting agents via collective living, social relationships and major evolutionary transitions, which contribute to novel data generation through mechanisms such as population pressures, arms races, Machiavellian selection, social learning and cumulative culture. Many breakthroughs in AI exploit some of these processes, from multi-agent structures enabling algorithms to master complex games like Capture-The-Flag and StarCraft II, to strategic communication in Diplomacy and the shaping of AI data streams by other AIs. Moving beyond a solipsistic view of agency to integrate these mechanisms suggests a path to human-like compounding innovation through ongoing novel data generation.
- Leisure & Entertainment > Games (1.00)
- Education (1.00)
Why making human-like artificial intelligence may be 'a trap': AI expert
As companies such as Alphabet (GOOG, GOOGL) and Microsoft (MSFT) tussle to make the best artificial intelligence technology, one expert questioned whether they are going about it in the right way. "Alan Turing famously proposed that the test for intelligence, what we later called the Turing Test, was 'how similar can an AI be to a human?' Trying to mimic humans has been kind of a goal of a lot of computer scientists ever since," Stanford Digital Economy Lab Director Erik Brynjolfsson said on Yahoo Finance Live. "Can we fool humans so you can't tell the difference?" he continued. "I think it's a very evocative goal, but it's also a trap. The reason it's a trap is that if we make AI that mimics humans, it actually destroys the value of human labor and it leads to more concentration of wealth and power."
Facebook is now developing a human-like artificial intelligence called Ego4D
Facebook announced a research project Thursday that aims to develop an artificial intelligence capable of perceiving the world like a human being. The project, titled Ego4D, aims to train an artificial intelligence (AI) to perceive the world in the first person by analyzing a constant stream of video from people's lives. This type of data, which Facebook calls "egocentric" data, is designed to help the AI perceive, remember and plan like a human being. "Next-generation AI systems will need to learn from an entirely different kind of data -- videos that show the world from the center of the action, rather than the sidelines," Kristen Grauman, lead AI research scientist at Facebook, said in the announcement. The project aims to improve AI's capacity to replicate human cognitive processes by setting five key benchmarks: "episodic memory," in which the AI ties memories to specific locations and times; "forecasting"; "social interaction"; "hand and object manipulation"; and "audio-visual diarization," in which the AI ties auditory experiences to specific locations and times.
2021 could bring us the first human-like artificial intelligence
Truthfully, a singularity of some form is most definitely due to arrive; it already has within the gaming world and in professional fields like health care. That being said, some humans may struggle with the reality of such a time arriving, and some may ignore it altogether (while still using a mobile phone or calculator, ignorantly). While both of these approaches will most definitely fall disastrously behind, others will realise that the path ahead relies on increasing collaboration between humankind and computers. I argue that the dawn of the singularity is here, possibly that it arrived decades ago, and that only in hindsight will we recognise this point in time as dramatic.
Eight Burning Questions About AI, Answered By The Experts.
Artificial intelligence and robotics have enjoyed a resurgence of interest, and there is renewed optimism about their place in our future. But what do they mean for us? How plausible is human-like artificial intelligence? It is 100% plausible that we'll have human-like artificial intelligence. I say this even though the human brain is the most complex system in the universe that we know of. But there are also no physical laws we know of that would prevent us from reproducing or exceeding its capabilities. Popular depictions of AI, from Isaac Asimov to Steven Spielberg, are plausible.
- North America > United States (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
Protecting Humanity In The Face Of Artificial Intelligence
The evolution of artificial intelligence (AI) -- from artificial narrow intelligence (ANI), through artificial general intelligence (AGI), to artificial super intelligence (ASI) -- is on its way to changing everything. It's expected that soon, artificial intelligence will combine the intricacy and pattern-recognition strength of human intelligence with the speed, memory and knowledge sharing of machine intelligence. As the rise of AI continues, AI is challenging and changing not only the way humans live, learn and work, but also how entities across nations, including government, industries, organizations and academia (NGIOA), construct their commercial and economic industries and markets. With this technology-driven growth of artificial intelligence, the need to do most manual, mathematical and mundane work is already in decline and will likely be greatly diminished in the coming years. Moreover, with all these new digital assistants and decision-making algorithms assisting and directing humans, more complex day-to-day work for humans is being greatly lessened.
Unity tweaks AI training tools, makes bid for academic respect
Unity Technologies on Monday released version 0.5 of its ML-Agents toolkit to make its Unity 3D game development platform better suited for developing and training autonomous agent code via machine learning. Initially rolled out a year ago in beta, version 0.5 comes with a few improvements. There's a wrapper for Gym (a toolkit for developing and testing reinforcement learning algorithms), support for letting agents make multiple action selections at once and for preventing agents from taking certain actions, and a refurbished set of environments called Marathon Environments. In these virtual spaces, AI researchers can teach software agents to perform certain tasks by rewarding them for correct actions. This sort of reinforcement learning can be limited to digital environments like video games or mapped to software-driven machines in the real world. Through its latest code update, Unity is making the case for Unity 3D as a key tool for AI research, a goal that company code boffins describe in a preprint paper titled, "Unity: A General Platform for Intelligent Agents."
- Leisure & Entertainment > Games > Computer Games (0.58)
- Information Technology (0.52)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.97)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.32)
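The reward-driven training the Unity snippet describes ("teach software agents to perform certain tasks by rewarding them for correct actions") can be sketched as a minimal, self-contained toy. Nothing below uses the actual ML-Agents or Gym APIs; the corridor environment and the tabular Q-learning agent are illustrative stand-ins:

```python
import random

random.seed(0)  # deterministic toy run

# A 1-D corridor where the agent earns reward only by reaching the last cell.
N_STATES = 5        # corridor cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: reinforce actions that lead toward reward."""
    # Optimistic initialization (1.0) nudges the agent to try every action.
    q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the best-known action
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned greedy policy should step right (+1) from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

The features the article mentions (a Gym wrapper, multiple simultaneous action selections, action masking) generalize this same loop: richer observation and action spaces, and constraints on which actions the argmax may consider.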
How far away are we really from artificial intelligence?
By the mid-1950s, the world realized that computers were going to play a major role in future technology. Military, business and educational entities began investing heavily in computers, and rapidly advancing hardware meant that the potential for computing seemed endless. Artificial intelligence, perhaps more than any other aspect of computing, captured the public's imagination, and predictions of a future ruled by computation and robots were common in news stories and throughout science fiction literature and cinema. To understand why early experts were so optimistic about artificial intelligence, it's important to understand Moore's Law. Computers developed rapidly through the 1950s and early 1960s, and Gordon Moore, a co-founder of computing giants Fairchild Semiconductor and Intel, predicted that the number of transistors in a given area on a circuit board would double every year, leading to exponential growth in processing power.
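A quick numerical illustration of the doubling claim above (the 1965 baseline of 50 transistors is an arbitrary assumption for the example, not a figure from the article):

```python
# Moore's original 1965 observation: transistor counts double roughly every
# year. Compounding annual doublings from an assumed baseline shows why the
# growth is exponential rather than linear.
def transistors(year, base_year=1965, base_count=50, doubling_years=1):
    """Projected transistor count under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Ten doublings in a decade: 50 * 2**10
print(int(transistors(1975)))  # 51200
```

Setting `doubling_years=2` gives the revised 1975 version of the law; the exponential character, which drove the era's optimism, is the same either way.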
How Today's Jungle of Artificial Intelligence Will Spawn Sentience
From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It's usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published August 10, 2010. We hope you enjoy it!