If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
International negotiations to regulate artificial intelligence-based weapons are encountering difficulties, with Japan, Germany and others backing international rules on regulation but maintaining a cautious stance on a treaty to prohibit killer robots. Behind their muted approach is a fear that countries that develop autonomous weapons would shun such a treaty anyway, diminishing the significance of international efforts toward any regulation. Countries thus agree on the need to prevent lethal autonomous weapons from running out of control, but differ over how to attain that objective. Germany hosted an online meeting in early April amid the COVID-19 pandemic to facilitate talks on the control of killer robots, as promoted by the U.N. Convention on Certain Conventional Weapons (CCW). Representatives of more than 60 countries and regions, including the United States and Israel, both developers of AI weapons, the European Union and the United Nations, as well as nongovernmental organizations, logged in to participate in the forum.
About 78 years ago, in 1942, sci-fi legend Isaac Asimov laid out his now-famous Laws of Robotics, a set of principles robots should follow: never harm a human, obey human orders, and preserve themselves only so far as that does not conflict with the first two rules. Unfortunately, the first law has been broken many times, raising concerns about the dangers of automation. The gravest threat arises from collaborative robots, or cobots, which work in tandem with human workers. This is no exaggeration: the US Department of Labor, which tracks robotic injuries to the workforce, lists 38 pages of serious injuries caused by robotic malfunction, and that does not include the dangers of hacking. Insecure software systems are no help; a growing number of hackers exploit them to manipulate robot programming and turn robotics to the dark side. In his book When Robots Kill, law professor Gabriel Hallevy discusses the criminal liability that arises from the perils of AI infiltrating the commercial, industrial, military, medical, and personal spheres.
The US Navy is developing a robot submarine controlled by artificial intelligence that could kill without human control or input. The project is being run by the Office of Naval Research and has been described as an 'autonomous undersea weapon system', according to a report by New Scientist. Details of the killer submersible were made available as part of the 2020 budget documents, which also revealed it has been named CLAWS by the US Navy. Very few details about the 'top secret' project have been revealed beyond the fact it will use sensors and algorithms to carry out complex missions on its own. It's expected CLAWS will be installed on the new Orca class robot submarines that have 12 torpedo tubes and are being developed for the Navy by Boeing.
On International Women's Day, weapons development won't be the first thing that springs to mind for achieving global gender equality. But banning autonomous weapons systems, AKA "killer robots", is needed to strengthen global peace, advance human security and ensure a feminist future. Technology could be a benevolent force in our increasingly integrated society. The potential benefits of innovative advancements in the fields of artificial intelligence, robotics, and machine learning could secure our future. As United Nations Secretary-General António Guterres said: "…these new capacities can help us to lift millions of people out of poverty, achieve the Sustainable Development Goals and enable developing countries to leap-frog into a better future."
"Although I was not directly involved in speeding up the video footage recognition, I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan." The former Google engineer predicts autonomous weapons currently in development pose a far greater risk to humanity than remote-controlled drones. She outlined how external forces, ranging from changing weather systems to machines being unable to work out complex human behaviour, might throw killer robots off course, with potentially fatal consequences. She told The Guardian: "You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn't have the discernment or common sense that the human touch has."
A Navy X-47B drone is launched off the nuclear powered aircraft carrier USS George H. W. Bush off the coast of Virginia, Tuesday, May 14, 2013. It was the Navy's first test flight of the unmanned aircraft off a carrier. Dr. Robert J. Marks, Director of the Walter Bradley Center for Natural and Artificial Intelligence, joins the Nick Digilio Show to make the case for killer robots.
Killer robots may remain a dystopian vision of the future for now, but another military deployment of AI could arrive on the battlefield sooner. Known as the Aided Threat Recognition from Mobile Cooperative and Autonomous Sensors (ATR-MCAS), the system is being developed by the US Army to transform how the military plans and conducts operations. It comprises a network of air and ground vehicles equipped with sensors that identify potential threats and autonomously notify soldiers. The information collected would then be analysed by an AI-enabled decision support agent that can recommend responses -- such as which threats to prioritize. The system was developed by the Army's Artificial Intelligence Task Force (AITF), which was activated last year to improve the Army's connections with the broader AI community.
This past fall, diplomats from around the globe gathered in Geneva to do something about killer robots. In a result that surprised nobody, they failed. The formal debate over lethal autonomous weapons systems--machines that can select and fire at targets on their own--began in earnest about half a decade ago under the Convention on Certain Conventional Weapons, the international community's principal mechanism for banning systems and devices deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree what "lethal autonomous weapons" even are, let alone set a blueprint for how to rein them in. Meanwhile, the technology is advancing ferociously; militaries aren't going to wait for delegates to pin down the exact meaning of slippery terms such as "meaningful human control" before sending advanced warbots to battle.
The chief executive of Google has called for international cooperation on regulating artificial intelligence technology to ensure it is 'harnessed for good'. Sundar Pichai said that while regulation by individual governments and existing rules such as GDPR can provide a 'strong foundation' for the regulation of AI, a more coordinated international effort is 'critical' to making global standards work. The CEO said that history is full of examples of how 'technology's virtues aren't guaranteed' and that with technological innovations come side effects. These range from internal combustion engines, which allowed people to travel beyond their own areas but also caused more accidents, to the internet, which helped people connect but also made it easier for misinformation to spread. These lessons teach us 'we need to be clear-eyed about what could go wrong' in the development of AI-based technologies, he said.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that independently kill - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).