artificial general intelligence


Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence

#artificialintelligence

Microsoft is investing $1 billion in OpenAI, a San Francisco-based research lab founded by Silicon Valley luminaries, including Elon Musk and Sam Altman, that's dedicated to creating artificial general intelligence (AGI). The investment will make Microsoft the "exclusive" provider of cloud computing services to OpenAI, and the two companies will work together to develop new technologies. OpenAI will also license some of its tech to Microsoft to commercialize, though when this may happen and what tech will be involved have yet to be announced. OpenAI began as a nonprofit research lab in 2015, intended to match the high-tech R&D of companies like Google and Amazon while focusing on developing AI in a safe and democratic fashion. But earlier this year, OpenAI said it needed more money to continue this work, and it set up a new for-profit firm to seek outside investment.


How artificial intelligence will do our dirty work, take our jobs and change our lives

#artificialintelligence

At its crudest, most reductive, we could sum up the future of artificial intelligence as being about robot butlers v killer robots. We have to get there eventually, so we might as well start with the killer robots. If we were to jump forward 50 years to see what artificial intelligence might bring us, would we – Terminator-style – step into a world of human skulls being crushed under the feet of our metal and microchip overlords? No, we're told by experts. It might be much worse.


Opinion | 'There's Just No Doubt That It Will Change the World': David Chalmers on V.R. and A.I.

#artificialintelligence

Over the past two decades, the philosopher David Chalmers has established himself as a leading thinker on consciousness. He began his academic career in mathematics but slowly migrated toward cognitive science and philosophy of mind. He eventually landed at Indiana University working under the guidance of Douglas Hofstadter, whose influential book "Gödel, Escher, Bach: An Eternal Golden Braid" had earned him a Pulitzer Prize. Chalmers's dissertation, "Toward a Theory of Consciousness," grew into his first book, "The Conscious Mind" (1996), which helped revive the philosophical conversation on consciousness. Perhaps his best-known contribution to philosophy is "the hard problem of consciousness" -- the problem of explaining subjective experience, the inner movie playing in every human mind, which in Chalmers's words will "persist even when the performance of all the relevant functions is explained."


When Will We Reach the Singularity? – A Timeline Consensus from AI Researchers (Emerj)

#artificialintelligence

AI is applicable in a wide variety of areas, from agriculture to cybersecurity. However, most of our work has been on the short-term impact of AI in business. Here, though, we look further ahead: not at next quarter, or even next year, but at the decades to come. As AI becomes more powerful, we expect it to have a larger impact on our world, including your organization. So we decided to do what we do best: a deep analysis of AI applications and their implications.


An AGI with Time-Inconsistent Preferences

arXiv.org Artificial Intelligence

This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting. This trap is implicitly and falsely assuming that a rational AGI would have time-consistent preferences. An agent with time-inconsistent preferences knows that its future self will disagree with its current self concerning intertemporal decision making. Such an agent cannot automatically trust its future self to carry out plans that its current self considers optimal. Economists have long used utility functions to model how rational agents behave (see Mas-Colell et al., 1995).
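To make the trap concrete, here is a minimal sketch (ours, not the paper's; the function names and all numbers are illustrative assumptions) of how hyperbolic discounting produces exactly this kind of preference reversal between an agent's current and future selves:

```python
# Illustrative sketch of time-inconsistent (hyperbolic) discounting.
# Nothing here is taken from the paper; the numbers are made up.

def hyperbolic_discount(delay: float, k: float = 1.0) -> float:
    """Value multiplier for a reward received `delay` steps from now."""
    return 1.0 / (1.0 + k * delay)

def preferred(small, t_small, large, t_large, now=0, k=1.0):
    """Which reward does the agent prefer when evaluating at time `now`?"""
    v_small = small * hyperbolic_discount(t_small - now, k)
    v_large = large * hyperbolic_discount(t_large - now, k)
    return "small-sooner" if v_small > v_large else "large-later"

# Choice: 10 units at t=10 versus 15 units at t=12.
print(preferred(10, 10, 15, 12, now=0))  # large-later:  0.91 < 1.15
print(preferred(10, 10, 15, 12, now=9))  # small-sooner: 5.00 > 3.75
```

Under exponential discounting the ratio of the two discounted values is the same at every evaluation time, so no such reversal can occur; that is why time consistency is so often assumed without comment, and why the assumption is a trap when the agent's discounting is not exponential.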


Information Flow Theory (IFT) of Biologic and Machine Consciousness: Implications for Artificial General Intelligence and the Technological Singularity

arXiv.org Artificial Intelligence

The subjective experience of consciousness is at once familiar and yet deeply mysterious. Strategies exploring the top-down mechanisms of conscious thought within the human brain have been unable to produce a generalized explanatory theory that scales through evolution and can be applied to artificial systems. Information Flow Theory (IFT) provides a novel framework for understanding both the development and nature of consciousness in any system capable of processing information. In prioritizing the direction of information flow over information computation, IFT produces a range of unexpected predictions. The purpose of this manuscript is to introduce the basic concepts of IFT and explore the manifold implications regarding artificial intelligence, superhuman consciousness, and our basic perception of reality.


Modeling AGI Safety Frameworks with Causal Influence Diagrams

arXiv.org Artificial Intelligence

One of the primary goals of AI research is the development of artificial agents that can exceed human performance on a wide range of cognitive tasks, in other words, artificial general intelligence (AGI). Although the development of AGI has many potential benefits, there are also many safety concerns that have been raised in the literature [Bostrom, 2014; Everitt et al., 2018; Amodei et al., 2016]. Various approaches for addressing AGI safety have been proposed [Leike et al., 2018; Christiano et al., 2018; Irving et al., 2018; Hadfield-Menell et al., 2016; Everitt, 2018], often presented as a modification of the reinforcement learning (RL) framework, or a new framework altogether. Understanding and comparing different frameworks for AGI safety can be difficult because they build on differing concepts and assumptions. For example, both reward modeling [Leike et al., 2018] and cooperative inverse RL [Hadfield-Menell et al., 2016] are frameworks for making an agent learn the preferences of a human user, but what are the key differences between them?
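As a rough illustration (our sketch, not the paper's notation; the node names S1, A1, S2, R1 are our own), a single step of the standard RL interaction can be written as a causal influence diagram, i.e., a small DAG with typed nodes:

```python
# A sketch of one step of standard RL as a causal influence diagram:
# chance nodes for states, a decision node for the action, and a
# utility node for the reward. Node names are illustrative.
import networkx as nx

cid = nx.DiGraph()
cid.add_node("S1", kind="chance")    # initial state of the environment
cid.add_node("A1", kind="decision")  # the agent's action
cid.add_node("S2", kind="chance")    # resulting next state
cid.add_node("R1", kind="utility")   # reward the agent optimizes

cid.add_edge("S1", "A1")  # information link: the agent observes S1
cid.add_edge("S1", "S2")  # environment dynamics
cid.add_edge("A1", "S2")
cid.add_edge("S2", "R1")  # reward is determined by the next state

for node, data in cid.nodes(data=True):
    print(f"{node} ({data['kind']}) <- {sorted(cid.predecessors(node))}")
```

Comparing safety frameworks then becomes a matter of comparing graph structure: reward modeling, for instance, would insert a learned reward-model node between the environment and the utility node, while cooperative inverse RL would add the human as a second decision node whose preferences the agent's utility depends on.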


Artificial general intelligence is a Rorschach Test: Perhaps we need orangutans? (ZDNet)

#artificialintelligence

Artificial general intelligence, or "AGI," the idea of a machine that can approach human levels of cognition, is a great topic to get people all worked up. Because no one can really define it, it serves as a Rorschach Test, onto which people can imprint whatever thoughts and feelings they care to. The result was a spirited discussion this past Friday night at John Jay College in Manhattan, site of the World Science Festival, now in its twelfth year.


Global Big Data Conference

#artificialintelligence

These days, when you browse the internet for news on artificial intelligence, you'll find out about new AI that has just managed to do something humans do, only far better. Present-day AI can detect cancers better than human doctors, build better AI algorithms than human developers, and beat the world champions at games like chess and Go. Instances like these may lead us to believe that there's not a whole lot that artificial intelligence cannot do better than humans. The realization of AI's superior and ever-improving capabilities in different fields has evoked both hope and caution from the global tech community as well as the general public. While many believe the rise of artificial general intelligence can massively benefit humanity by raising our standard of living and our status as a civilization, some believe the development may lead to global doom.


Maximizing paper clips

#artificialintelligence

In What's the Future, Tim O'Reilly argues that our world is governed by automated systems that are out of our control. Alluding to The Terminator, he says we're already in a "Skynet moment," dominated by artificial intelligence that can no longer be governed by its "former masters." The systems that control our lives optimize for the wrong things: they're carefully tuned to maximize short-term economic gain rather than long-term prosperity. The "flash crash" of 2010 was an economic event created purely by the software that runs our financial systems going awry. However, the real danger of the Skynet moment isn't what happens when the software fails, but what happens when it is working properly: when it's maximizing short-term shareholder value without considering any other aspects of the world we live in.