Last month, researchers at OpenAI in San Francisco revealed an algorithm capable of learning, through trial and error, how to manipulate the pieces of a Rubik's Cube using a robotic hand. It was a remarkable research feat, but it required more than 1,000 desktop computers plus a dozen machines running specialized graphics chips crunching intensive calculations for several months. The effort may have consumed about 2.8 gigawatt-hours of electricity, estimates Evan Sparks, CEO of Determined AI, a startup that provides software to help companies manage AI projects. A spokesperson for OpenAI questioned the calculation, noting that it makes several assumptions. But OpenAI declined to disclose further details of the project or offer an estimate of the electricity it consumed.
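Sparks's methodology isn't public, and OpenAI disputes the assumptions behind it. As a rough illustration of how such an estimate gets built, here is a back-of-envelope sketch; every wattage and duration below is an invented assumption, not a figure from either party:

```python
# Back-of-envelope energy estimate for a large compute job.
# All figures are illustrative assumptions, not OpenAI's or Sparks's numbers.

DESKTOP_COUNT = 1000        # "more than 1,000 desktop computers"
DESKTOP_WATTS = 300         # assumed average draw per desktop
GPU_MACHINE_COUNT = 12      # "a dozen machines running specialized graphics chips"
GPU_MACHINE_WATTS = 4000    # assumed draw of a multi-GPU server
MONTHS = 4                  # "several months" -- assumed
HOURS = MONTHS * 30 * 24

total_watts = DESKTOP_COUNT * DESKTOP_WATTS + GPU_MACHINE_COUNT * GPU_MACHINE_WATTS
gigawatt_hours = total_watts * HOURS / 1e9

print(f"{gigawatt_hours:.2f} GWh")  # prints "1.00 GWh" under these assumptions
```

Doubling the assumed per-machine draw or the run length doubles the result, which is exactly why two parties working from different assumptions can land gigawatt-hours apart.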
The version of Project Debater used in the live debates included the seeds of the latest system, such as the capability to search hundreds of millions of news articles. But in the months since, the team has extensively tweaked the neural networks it uses, improving the quality of the evidence the system can unearth. One important addition is BERT, a neural network Google built for natural-language processing, which can answer queries. The work will be presented at the Association for the Advancement of Artificial Intelligence conference in New York next month.
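To give a feel for the evidence-retrieval problem, here is a drastically simplified sketch: score candidate sentences by word overlap with a query and return the best one. Systems like Project Debater use neural rerankers such as BERT rather than this bag-of-words heuristic, and the corpus and query below are invented for illustration:

```python
# Toy evidence retrieval: rank sentences by the fraction of query terms
# they contain. Real systems replace this heuristic with neural models.

def score(query, sentence):
    q = set(query.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / len(q)  # fraction of query terms present

corpus = [
    "Preschool subsidies raised later test scores in several studies.",
    "The weather in Geneva was mild last autumn.",
    "Critics argue subsidies for preschool are costly to administer.",
]

query = "should we subsidize preschool"
best = max(corpus, key=lambda sent: score(query, sent))
print(best)
```

The gap between this sketch and the real thing is the point: a keyword matcher can't tell supporting evidence from opposing evidence, which is the kind of judgment the team's neural networks are tuned for.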
This past fall, diplomats from around the globe gathered in Geneva to do something about killer robots. In a result that surprised nobody, they failed. The formal debate over lethal autonomous weapons systems--machines that can select and fire at targets on their own--began in earnest about half a decade ago under the Convention on Certain Conventional Weapons, the international community's principal mechanism for banning systems and devices deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree on what "lethal autonomous weapons" even are, let alone set a blueprint for how to rein them in. Meanwhile, the technology is advancing ferociously; militaries aren't going to wait for delegates to pin down the exact meaning of slippery terms such as "meaningful human control" before sending advanced warbots to battle.
Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers toward a diluted enabling framework that does not put any hard limits on what can be done with AI technologies. In an op-ed published in today's Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population scale -- with the Google chief claiming: "AI has the potential to improve billions of lives, and the biggest risk may be failing to do so" -- thereby seeking to frame 'no hard limits' as actually the safest option for humanity. Simultaneously, the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock -- presenting "potential negative consequences" as simply the inevitable and necessary price of technological progress. The leading suggestion is that it's all about managing the level of risk, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should even be viable in a democratic society.
More than a decade has passed since the British government issued an apology to the mathematician Alan Turing. The tone of pained contrition was appropriate, given Britain's grotesquely ungracious treatment of Turing, who played a decisive role in cracking the German Enigma cipher, allowing Allied intelligence to predict where U-boats would strike and thus saving tens of thousands of lives. Unapologetic about his homosexuality, Turing had made a careless admission of an affair with a man, in the course of reporting a robbery at his home in 1952, and was arrested for an "act of gross indecency" (the same charge that had led to a jail sentence for Oscar Wilde in 1895). Turing was subsequently given a choice to serve prison time or undergo a hormone treatment meant to reverse the testosterone levels that made him desire men (so the thinking went at the time). Turing opted for the latter and, two years later, ended his life by taking a bite from an apple laced with cyanide.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that independently kill - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
The transformative power of artificial intelligence has come to preoccupy big business and government as well as academics. But as AI's potential sinks in, a growing number of policy experts -- along with some leading figures in technology -- are asking tough questions: Should these cutting-edge algorithms be regulated, taxed or even, in certain cases, blocked? Consider what AI can do in the workplace. For example, managers realize that office politics, stress and other pressures take a toll on employees. They also know that standard-issue job-satisfaction surveys "don't provide a true gauge of what's going on" around the water cooler or in the staff lunchroom, says Jonathan Kreindler, Chief Executive Officer of Receptiviti.ai.
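Receptiviti's models are proprietary, but the simplest form of this kind of language analysis can be sketched as lexicon matching: count how often words from a stress vocabulary appear in a message. The word lists and messages below are invented for illustration and bear no relation to the company's actual approach:

```python
# Minimal sketch of lexicon-based analysis of workplace text.
# Word lists and example messages are invented for illustration only.

STRESS_WORDS = {"deadline", "overwhelmed", "pressure", "exhausted", "worried"}

def stress_score(text):
    """Fraction of stress-lexicon words among all words in the text."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in STRESS_WORDS)
    return hits / len(words)

messages = [
    "Feeling overwhelmed by the deadline pressure this week",
    "Great sprint everyone, excited for the demo",
]
for msg in messages:
    print(f"{stress_score(msg):.2f}  {msg}")
```

Even this toy version makes the policy questions concrete: the input is ordinary employee communication, and the output is a per-person score -- precisely the kind of workplace inference regulators are being asked to weigh in on.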
Researchers at US universities have created an imaging system powered by artificial intelligence that could help self-driving cars "see" around corners in minute detail to identify hazards. The imaging system uses a conventional camera sensor and a laser beam that can be "bounced" off walls and onto objects to create a pattern that, to the naked eye, resembles the static on an old untuned television. An AI algorithm then reconstructs the image, eliminating the 'noise'; it can resolve letters as small as 1cm tall. Deep learning is a form of artificial intelligence that mimics the workings of the human brain to process data and create patterns. It is a particularly powerful subset of machine learning, able to learn unsupervised from unstructured data.
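The researchers use a trained deep network to pull the hidden scene out of the speckle; as a stand-in, this sketch shows the most basic noise-suppression idea -- averaging many noisy captures of the same static scene so zero-mean sensor noise cancels while the signal remains. The toy 1-D "scene" and noise level are invented for illustration:

```python
# Toy noise suppression: average many noisy captures of a static scene.
# The actual system uses a trained deep network, not frame averaging;
# all numbers here are illustrative.
import random

random.seed(0)

TRUE_SIGNAL = [0.0, 1.0, 0.0, 1.0, 1.0]   # toy 1-D "scene"
NUM_FRAMES = 500

def noisy_capture(signal):
    """One simulated capture: signal plus zero-mean sensor noise."""
    return [v + random.gauss(0, 0.5) for v in signal]

frames = [noisy_capture(TRUE_SIGNAL) for _ in range(NUM_FRAMES)]
# Average each position across frames: noise cancels, signal remains.
recovered = [sum(col) / NUM_FRAMES for col in zip(*frames)]

print([round(v, 2) for v in recovered])
```

A deep network earns its keep where averaging cannot: it can exploit learned structure (edges, letter shapes) to recover the scene from far fewer measurements than brute-force averaging would need.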
A team of Stanford University researchers designed the PigeonBot. For decades, scientists have been trying to create machines that mimic the way birds fly. A team from Stanford University has gotten one big step closer. They created the PigeonBot -- a winged robot that they say approximates the graceful complexities of bird flight better than any other robot to date.