Artificial General Intelligence

Artificial General Intelligence (AGI) vs. Narrow AI


Vendors and theorists can promise the world, but it's essential to differentiate between artificial general intelligence (AGI) and narrow AI in order to make informed decisions. In the simplest terms, all contemporary AI is narrow, or weak, AI. Even the smartest systems cannot exercise common sense comparable to human intelligence. While computers can outperform humans at specific tasks such as chess, Jeopardy, or predicting the weather, they still cannot think abstractly, interpret recollections, or solve creative problems that demand complex solutions. To develop narrow AI, data scientists define which data to incorporate, determine the appropriate algorithm(s), and specify the best models to apply.

Microsoft wants to build artificial general intelligence: an AI better than humans at everything


A lot of startups in the San Francisco Bay Area claim that they're planning to transform the world. San Francisco-based, Elon Musk-founded OpenAI has a stronger claim than most: It wants to build artificial general intelligence (AGI), an AI system that has, like humans, the capacity to reason across different domains and apply its skills to unfamiliar problems. Today, it announced a billion-dollar partnership with Microsoft to fund its work -- the latest sign that AGI research is leaving the domain of science fiction and entering the realm of serious research. "We believe that the creation of beneficial AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity," Greg Brockman, chief technology officer of OpenAI, said in a press release today. Existing AI systems beat humans at lots of narrow tasks -- chess, Go, StarCraft, image generation -- and they're catching up to humans at others, like translation and news reporting.

Microsoft invests $1 billion in OpenAI, vows to build AI tech platform of 'unprecedented scale'


Microsoft will invest $1 billion in OpenAI and work with the San Francisco-based artificial intelligence powerhouse to create a computational platform of "unprecedented scale" to accelerate the development of advanced forms of AI. The expanded partnership gives Microsoft and its Azure cloud platform an influential ally in its competition with Google, Amazon and other rivals in the high-stakes race to develop next-generation AI platforms and technologies. Microsoft CEO Satya Nadella has called out AI as a pivotal area for the future of the company. OpenAI was formed in 2015 by leaders including Elon Musk, the Tesla and SpaceX CEO, and Sam Altman, former president of the Y Combinator startup accelerator. Musk, who has sounded the alarm over the risks of AI, said in May that he was no longer involved in OpenAI.

With $1 billion from Microsoft, an AI Lab wants to mimic the brain - Times of India


As the waitress approached the table, Sam Altman held up his phone. That made it easier to see the dollar amount typed into an investment contract he had spent the last 30 days negotiating with Microsoft. The investment from Microsoft, signed early this month and announced Monday, signals a new direction for Altman's research lab. In March, Altman stepped down from his daily duties as the head of Y Combinator, the startup "accelerator" that catapulted him into the Silicon Valley elite. Now, at 34, he is the chief executive of OpenAI, the artificial intelligence lab he helped create in 2015 with Elon Musk, the billionaire chief executive of the electric carmaker Tesla.

Microsoft invests $1 billion in artificial intelligence lab co-founded by Elon Musk

USATODAY - Tech Top Stories

Microsoft has agreed to invest $1 billion in and partner with research company OpenAI, co-founded by Elon Musk, to develop artificial general intelligence, a technology that could have human-level intellectual capacity. The companies said Monday that they will build a hardware and software platform of "unprecedented scale" within Microsoft's Azure cloud service that will train and run increasingly advanced AI models. Microsoft will also become OpenAI's preferred partner for selling its technologies, and the two will jointly develop Azure's supercomputing technology. "By bringing together OpenAI's breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratize AI -- while always keeping AI safety front and center -- so everyone can benefit," said Microsoft CEO Satya Nadella in the statement.

Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence


Microsoft is investing $1 billion in OpenAI, a San Francisco-based research lab founded by Silicon Valley luminaries, including Elon Musk and Sam Altman, that's dedicated to creating artificial general intelligence (AGI). The investment will make Microsoft the "exclusive" provider of cloud computing services to OpenAI, and the two companies will work together to develop new technologies. OpenAI will also license some of its tech to Microsoft to commercialize, though when this may happen and what tech will be involved has yet to be announced. OpenAI began as a nonprofit research lab in 2015 and was intended to match the high-tech R&D of companies like Google and Amazon while focusing on developing AI in a safe and democratic fashion. But earlier this year, OpenAI said it needed more money to continue this work, and it set up a new for-profit firm to seek outside investment.

How artificial intelligence will do our dirty work, take our jobs and change our lives


At its crudest, most reductive, we could sum up the future of artificial intelligence as being about robot butlers v killer robots. We have to get there eventually, so we might as well start with the killer robots. If we were to jump forward 50 years to see what artificial intelligence might bring us, would we – Terminator-style – step into a world of human skulls being crushed under the feet of our metal and microchip overlords? No, we're told by experts. It might be much worse.

Opinion: 'There's Just No Doubt That It Will Change the World': David Chalmers on V.R. and A.I.


Over the past two decades, the philosopher David Chalmers has established himself as a leading thinker on consciousness. He began his academic career in mathematics but slowly migrated toward cognitive science and philosophy of mind. He eventually landed at Indiana University working under the guidance of Douglas Hofstadter, whose influential book "Gödel, Escher, Bach: An Eternal Golden Braid" had earned him a Pulitzer Prize. Chalmers's dissertation, "Toward a Theory of Consciousness," grew into his first book, "The Conscious Mind" (1996), which helped revive the philosophical conversation on consciousness. Perhaps his best-known contribution to philosophy is "the hard problem of consciousness" -- the problem of explaining subjective experience, the inner movie playing in every human mind, which in Chalmers's words will "persist even when the performance of all the relevant functions is explained."

When Will We Reach the Singularity? – A Timeline Consensus from AI Researchers


AI is applicable in a wide variety of areas -- everything from agriculture to cybersecurity. However, while most of our work has focused on the short-term impact of AI in business, here we look further out: not at next quarter, or even next year, but at the decades to come. As AI becomes more powerful, we expect it to have a larger impact on our world, including your organization. So, we decided to do what we do best: a deep analysis of AI applications and implications.

An AGI with Time-Inconsistent Preferences Artificial Intelligence

This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting. The trap is implicitly and falsely assuming that a rational AGI would have time-consistent preferences. An agent with time-inconsistent preferences knows that its future self will disagree with its current self concerning intertemporal decision making. Such an agent cannot automatically trust its future self to carry out plans that its current self considers optimal. Economists have long used utility functions to model how rational agents behave (see Mas-Colell et al., 1995).
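The distinction between time-consistent and time-inconsistent preferences can be made concrete with a small numerical sketch. The snippet below contrasts standard exponential discounting with quasi-hyperbolic (beta-delta) discounting, a common model of time inconsistency in behavioral economics; the specific rewards, delays, and parameter values (beta=0.5, delta=0.9) are illustrative assumptions, not from the paper.

```python
def exponential_value(reward, delay, delta=0.9):
    """Standard exponential discounting: preferences are time-consistent."""
    return reward * delta ** delay

def quasi_hyperbolic_value(reward, delay, beta=0.5, delta=0.9):
    """Quasi-hyperbolic (beta-delta) discounting: immediate rewards get
    no extra penalty, but all future rewards are scaled down by beta,
    which produces time-inconsistent preferences."""
    if delay == 0:
        return reward
    return beta * reward * delta ** delay

# Illustrative choice: a reward of 100 on day 10 vs. 120 on day 11.

# Viewed from day 0, both options lie in the future:
far_small = quasi_hyperbolic_value(100, delay=10)   # ~17.4
far_large = quasi_hyperbolic_value(120, delay=11)   # ~18.8
print(far_large > far_small)   # the agent plans to wait for 120

# Viewed from day 10, the smaller reward is now immediate:
now_small = quasi_hyperbolic_value(100, delay=0)    # 100.0
now_large = quasi_hyperbolic_value(120, delay=1)    # 54.0
print(now_small > now_large)   # the agent reverses and takes 100

# Under exponential discounting, no reversal occurs:
print(exponential_value(120, 11) > exponential_value(100, 10))  # prefers 120 at day 0
print(exponential_value(120, 1) > exponential_value(100, 0))    # still prefers 120 at day 10
```

The preference reversal in the quasi-hyperbolic case is exactly the situation described above: the day-0 self's optimal plan (wait for the larger reward) is one the day-10 self will refuse to carry out, so the current self cannot trust its future self.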