That said, the government has recently begun to act on the issue, making a start with security guidelines for smart homes. While AI does make life easier, the fact remains that it is built on algorithms, and if a base algorithm is tampered with, the AI can be reprogrammed. Until these risks are properly assessed and preventive measures to plug vulnerabilities are put in place, AI adoption needs to be closely monitored. Governments need to put strict security guidelines in place, while tech companies need to address the issue more seriously and start issuing regular updates to plug vulnerabilities, the way they currently do for smartphones.
The advantages of such weapons were discussed in a New York Times article published last year, which stated that the speed and precision of these novel weapons could not be matched by humans. The official stance of the United States on such weapons was discussed at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems, held in Geneva in 2016, where the U.S. said that "appropriate levels" of human approval were necessary for any engagement of autonomous weapons involving lethal force. In 2015, numerous scientists and experts signed an open letter warning that developing such intelligent weapons could set off a global arms race. A similar letter, urging the United Nations to ban killer robots, or lethal autonomous weapons, was signed by the world's top artificial intelligence (AI) and robotics companies at the International Joint Conference on Artificial Intelligence (IJCAI) held in Melbourne in August.
The Campaign to Stop Killer Robots, a coordinated international coalition of non-governmental organizations dedicated to bringing about a preemptive ban on fully autonomous weapons, was launched in April 2013. A breakthrough came in 2016, when the fifth review conference of the United Nations Convention on Certain Conventional Weapons (CCW) saw countries hold formal talks to expand their deliberations on fully autonomous weapons. The conference also established a Group of Governmental Experts (GGE), chaired by India's ambassador to the U.N., Amandeep Gill. According to Human Rights Watch, over a dozen countries are developing autonomous weapon systems.
According to the report, most computer scientists believe the possible threats posed by AI to be "at best uninformed," and that those fears "do not align with the most rapidly advancing current research directions of AI as a field." It instead says these existential fears stem from a very particular, and small, part of the field of research called Artificial General Intelligence (AGI), which is defined as an AI that can successfully perform any intellectual task that a human can. The report argues we are unlikely to see an AGI emerge from current artificial intelligence research, and that the concept "has high visibility, disproportionate to its size or present level of success." Musk launched a nonprofit AI research company called OpenAI in 2015 and pledged $1 billion to it, with the intention of developing best practices and helping prevent potentially damaging applications of the technology.
A suspected U.S. drone strike killed four members of al Qaeda in the Arabian Peninsula (AQAP), the group's Yemen branch, including a local commander, two unnamed officials in Yemen said Saturday. On Thursday, a drone strike on a vehicle in al-Bayda province in central Yemen killed a senior AQAP leader known as Abdallah al-Sanaani. The U.S. has carried out drone strikes targeting the Islamist militant group, which has been exploiting Yemen's civil war, a conflict that has left at least 10,000 dead since fighting escalated in March 2015. The U.S. has targeted AQAP many times in recent years; in 2011, an airstrike killed Anwar al-Awlaki, an American-born cleric who had reportedly become an al Qaeda leader in Yemen.