If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Human brains are extremely energy-efficient. When a person thinks in a concentrated manner, his or her brain consumes a mere 21 watts of power. An AI doing the same degree of intensive thinking requires over 10,000 times more electricity. If that is the case, the international competitiveness of businesses will depend on the supply and cost of electricity in their home countries. How, then, does Japan stand with regard to power supply and cost?
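To make the scale of that gap concrete, here is a back-of-the-envelope calculation using only the two figures quoted above (21 watts and a 10,000x multiplier); the annualization is an illustrative extrapolation, not a claim from the original text:

```python
# Back-of-the-envelope comparison of power draw, using the figures from the text.
brain_watts = 21          # human brain during concentrated thought (from the text)
ai_multiplier = 10_000    # AI needs "over 10,000 times more" (from the text)

ai_watts = brain_watts * ai_multiplier
print(f"AI power draw: {ai_watts:,} W ({ai_watts / 1000:.0f} kW)")

# Illustrative extrapolation: a year of continuous operation at that draw.
hours_per_year = 24 * 365
annual_kwh = ai_watts / 1000 * hours_per_year
print(f"Annual consumption: {annual_kwh:,.0f} kWh")
```

At 210 kW sustained, a single such workload consumes on the order of 1.8 million kWh per year, which is why electricity supply and price become a competitiveness question.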
Russian scientists are developing an advanced automated submarine that will be powered by an external combustion engine, Igor Denisov, deputy director of the Foundation for Advanced Studies (FPI), revealed in an interview with Interfax, a Russian news agency. "We are planning to create an apparatus that will pass through the Northern Sea Route without floating up and without the use of nuclear power, including under the ice," Denisov said. "In order for this device to accomplish such a 'feat,' its autonomy should be at least 90 days, which is already commensurate with the autonomy of modern submarines." The decision to forgo the nuclear option for powering the underwater vehicle was a conscious one, Denisov said, made in order to make the craft safer. While a nuclear installation lets a submarine move uninterrupted throughout the world's oceans, it also puts the vessel's operational capabilities at risk.
Steven Pinker is a cognitive psychologist, linguist, and author of two of Bill Gates' favorite books. However, his latest – Enlightenment Now – has some serious shortcomings centering on Pinker's misperceptions about climate change polarization. Pinker falls into the trap of 'both-siderism,' acknowledging the Republican Party's science denial, but also wrongly blaming liberals for the policy stalemate, telling Ezra Klein:
Three Mile Island, the site of the United States' worst nuclear disaster, will close down for good in 2019. Exelon Corp., the company that owns the nuclear power plant, announced the closure Tuesday. The company cited high operating costs and a recent failure to auction off the plant's power into the regional power grid as part of the reasoning for the plant's permanent closure. Almost 700 people are set to be laid off when the plant shuts down. "Today is a very difficult day, not just for the 675 talented men and women who have dedicated themselves to operating Three Mile Island safely and reliably every day, but also for their families, the communities and customers who depend on this plant to produce clean energy and support local jobs," Exelon CEO Chris Crane said in a statement Tuesday.
This year, several leading researchers have sounded warnings about the risks of using the CRISPR gene-editing technique to modify human [1] and other species' genomes in ways that could have "unpredictable effects on future generations" [2] and "profound implications for our relationship to nature" (see go.nature.com/jq5sik). Concerns are coming from the silicon sector as well. Last year, the physicist Stephen Hawking proclaimed that rapidly advancing artificial intelligence (AI) could destroy the human race. And in 2013, former Royal Society president Martin Rees co-founded the Centre for the Study of Existential Risk at the University of Cambridge, UK, in part to study threats from advanced AI. Leaders of the scientific community are ready to share the responsibility for these powerful technologies with the public. George Church, a geneticist at Harvard University in Cambridge, Massachusetts, and others wrote last year of CRISPR that "the decision of when and where to apply this technology, and for what purposes, will be in our collective hands".
A computer's victory over a human Go master this past March reminds us of the pending "singularity" -- the rapidly approaching moment when artificial intelligence overtakes human intelligence. Machines will learn, and we won't be their teachers. Are we prepared for it? Can we prepare for it? Many futurists declare it inevitable, probably within a generation, maybe less.
The infrastructure and tools necessary for large-scale data analytics, formerly the exclusive purview of experts, are increasingly available. Whereas a knowledgeable data-miner or domain expert can rightly be expected to exercise caution when required (for example, around fallacious conclusions supposedly supported by the data), the nonexpert may benefit from some judicious assistance. This article describes an end-to-end learning framework that allows a novice to create models from data easily by helping structure the model-building process and capturing extended aspects of domain knowledge. By treating the whole modeling process interactively and exploiting high-level knowledge in the form of an ontology, the framework is able to aid the user in a number of ways, including in helping to avoid pitfalls such as data dredging. Prudence must be exercised to avoid these hazards, as certain conclusions may only be supported if, for example, there is extra knowledge that gives reason to trust a narrower set of hypotheses. This article adopts the solution of using higher-level knowledge to allow this sort of domain knowledge to be used automatically, selecting relevant input attributes and thence constraining the hypothesis space. We describe how the framework automatically exploits structured knowledge in an ontology to identify relevant concepts, and how a data extraction component can make use of online data sources to find measurements of those concepts so that their relevance can be evaluated. To validate our approach, models of four different problem domains were built using our implementation of the framework. Prediction errors on unseen examples from these models show that our framework, making use of the ontology, helps to improve model generalization.
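The core idea of using an ontology to select relevant input attributes and thereby constrain the hypothesis space can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the ontology, concept names, and traversal depth are all hypothetical, standing in for whatever structured knowledge source the framework actually consumes.

```python
# Toy ontology: each concept maps to concepts it is related to.
# All names here are illustrative, not from the article.
ontology = {
    "crop_yield": ["rainfall", "soil_nitrogen", "temperature"],
    "rainfall": ["humidity"],
}

def relevant_concepts(target, ontology, depth=2):
    """Collect concepts reachable from `target` within `depth` hops."""
    found, frontier = set(), {target}
    for _ in range(depth):
        frontier = {n for c in frontier for n in ontology.get(c, [])} - found
        found |= frontier
    return found

# Columns available in the raw data; only ontology-relevant ones survive,
# shrinking the hypothesis space before any model is fit.
columns = ["rainfall", "soil_nitrogen", "shoe_size", "humidity", "temperature"]
selected = [c for c in columns if c in relevant_concepts("crop_yield", ontology)]
print(selected)  # "shoe_size" is filtered out as irrelevant to the target
```

Filtering spurious attributes such as `shoe_size` before learning is precisely what guards against data dredging: the learner can no longer fit a hypothesis to a coincidental pattern in an attribute the domain knowledge says is irrelevant.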
Psychologist Paul Slovic and his colleagues are responsible for much of the overwhelming evidence that people evaluate risk based on situational features that evoke emotion rather than on expected utility. In one example of this pioneering work, they showed that people rank the riskiness of various activities (nuclear power, motor vehicles, handguns) based on factors such as whether the activity evokes dread or whether people undertake it voluntarily -- neither of which figures in a rational cost-benefit analysis. For example, in one study, people judged nuclear power to be the riskiest of 30 activities presented, even though the risk of death from nuclear power is far less than the risk from swimming, which participants judged to be of little concern. Two factors in particular appear to drive risk judgments: the involvement of technology (particularly novel and unknown technologies) and the perceived certainty that adverse consequences would result in death. These factors help explain fears about the driverless car.