Results


Will Artificial Intelligence Solve Cybersecurity, Put Experts Out Of Work?

International Business Times

Artificial intelligence and machine learning are making their way into more security products, helping organizations and individuals automate certain tasks required to keep their services and information safe. Kashyap, the senior vice president and chief product officer at Cylance--a cybersecurity firm known for its use of AI--doesn't view AI and machine learning as a replacement for human workers but rather as a supplemental service that enables those workers to do their jobs more efficiently. He said there were now "billions of pieces of malware" in the wild and "well thought-out cyber campaigns" being carried out regularly, with targeted threats directed at individuals and organizations that require a more efficient way to check the validity of code and defend against attacks. With a widening gap between the number of security professionals needed and the number available--a shortage of more than 1.5 million is expected by 2020--Kashyap determined the issue no longer required just a human-scale solution; it needed a computing solution.


Increasing Adoption Of AI, Autonomous Tech Shows Up Gaps In Cybersecurity Protocols

International Business Times

That said, the government has recently begun to act on the issue, making a start with security guidelines for smart homes. While AI does make life easier, the fact remains that it is based on algorithms, and if a base algorithm is tampered with, the AI itself can be reprogrammed. Unless and until these risks are properly assessed and preventive measures to plug vulnerabilities are put in place, AI adoption needs to be closely monitored. Governments need to put strict security guidelines in place, while tech companies need to address the issue more seriously and start issuing regular updates to plug vulnerabilities, the way they currently do for smartphones.


What Is Terminator Conundrum? 'Killer Robots' In Military Raise Ethical Concerns

International Business Times

The advantages of such weapons were discussed in a New York Times article published last year, which stated that the speed and precision of these novel weapons could not be matched by humans. The official stance of the United States on such weapons was discussed at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems held in 2016 in Geneva, where the U.S. said "appropriate levels" of human approval were necessary for any engagement of autonomous weapons involving lethal force. In 2015, numerous scientists and experts signed an open letter warning that developing such intelligent weapons could set off a global arms race. A similar letter, urging the United Nations to ban killer robots, or lethal autonomous weapons, was signed by the world's top artificial intelligence (AI) and robotics companies at the International Joint Conference on Artificial Intelligence (IJCAI) held in Melbourne in August.


Killer Robots Could Change Warfare More Than Gunpowder, Nuclear Arms, Experts Warn

International Business Times

The Campaign to Stop Killer Robots, a coordinated international coalition of non-governmental organizations dedicated to bringing about a preemptive ban on fully autonomous weaponry, was launched in April 2013. A breakthrough came in 2016, when the fifth review conference of the United Nations Convention on Conventional Weapons (CCW) saw countries hold formal talks to expand their deliberations on fully autonomous weapons. The conference also established a Group of Governmental Experts (GGE) chaired by India's ambassador to the U.N., Amandeep Gill. According to Human Rights Watch, more than a dozen countries are developing autonomous weapon systems.


Artificial Intelligence: Military Advisors Say AI Won't Bring About Robot Apocalypse

International Business Times

According to the report, most computer scientists consider the fears of possible threats posed by AI "at best uninformed," as those fears "do not align with the most rapidly advancing current research directions of AI as a field." It instead says these existential fears stem from a very particular--and small--part of the field of research called Artificial General Intelligence (AGI), defined as an AI that can successfully perform any intellectual task a human can. The report argues an AGI is unlikely to emerge from current artificial intelligence research, and that the concept "has high visibility, disproportionate to its size or present level of success." Musk launched a nonprofit AI research company called OpenAI in 2015 and pledged $1 billion to it, with the intention of developing best practices and helping prevent potentially damaging applications of the technology.


Who Is Abu Khaled Al-Sanaani? Al Qaeda's Yemen Branch Commander, Other Members Killed In Suspected US Drone Strike

International Business Times

A suspected U.S. drone strike killed four members of al Qaeda's Yemen branch, including a local commander, two unidentified Yemeni officials said Saturday. On Thursday, a drone strike on a vehicle in al-Bayda province in central Yemen killed a senior AQAP leader known as Abdallah al-Sanaani. The U.S. has carried out drone strikes to target the Islamist militant group, which has been exploiting Yemen's civil war, a conflict that has left at least 10,000 dead since fighting escalated in March 2015. The U.S. has targeted AQAP many times in recent years; in 2011, Anwar al-Awlaki, an American-born cleric who had reportedly become an al Qaeda leader in Yemen, was killed in an airstrike.