AI technology, with its ability to detect, prevent, and patch vulnerabilities, can be used to protect confidential data and critical infrastructure from attackers. Internet of Things (IoT) devices such as tablets, smartphones, and Bluetooth-enabled gadgets are becoming integrated into our everyday lives. As technology advances, the threats organizations face are becoming more sophisticated and harder to detect, as hackers find new and smarter ways to disguise their trails. It is therefore important for organizations to perform risk assessments, determining the real risks they face and the impact of those risks.
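A common way to formalize the risk-assessment step in the last sentence is to score each risk as likelihood times impact and rank the results. A minimal sketch, with purely hypothetical risks and 1-5 scales:

```python
# Minimal qualitative risk-assessment sketch: score = likelihood x impact
# on hypothetical 1-5 scales, then rank risks for prioritization.

def risk_score(likelihood: int, impact: int) -> int:
    """Return a simple ordinal risk score (1-25)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical register of organizational risks: (likelihood, impact).
risks = {
    "phishing":           (5, 3),
    "ransomware":         (4, 5),
    "insider data theft": (2, 4),
}

ranked = sorted(risks, key=lambda r: risk_score(*risks[r]), reverse=True)
for name in ranked:
    print(name, risk_score(*risks[name]))
```

The point of the ranking is triage: the same control budget goes further when it is spent on the highest-scoring risks first.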
One potential approach is known as "Programming by Demonstration," in which a human trainer demonstrates the task to the robot, and the robot learns to perform the task from that demonstration. Lockheed Martin successfully launched Vector Hawk, a small, unmanned aerial vehicle (UAV), on command from the Marlin MK2 autonomous underwater vehicle (AUV) during a cross-domain command and control event hosted by the U.S. Navy. In addition to Marlin and Vector Hawk, the Submaran, an unmanned surface vehicle (USV) developed by Ocean Aero, provided surface reconnaissance and surveillance. All three autonomous vehicles--Marlin, Submaran and Vector Hawk--communicated operational status to the ground control station to maintain situational awareness and provide a means to command and control all assets.
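As a toy illustration of the Programming by Demonstration idea, a robot can record several human demonstrations of the same motion as waypoint lists and take the pointwise average as the learned trajectory. This is only a sketch under that simplification; real learning-from-demonstration systems use much richer models (e.g., dynamic movement primitives), and all names here are hypothetical:

```python
# Toy "Programming by Demonstration": average several equal-length
# demonstrations into one learned trajectory, waypoint by waypoint.

def learn_trajectory(demos):
    """Pointwise average of several equal-length demonstrations."""
    n = len(demos)
    return [
        tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))
        for points in zip(*demos)
    ]

# Two demonstrations of a reach-and-lift motion as (x, y) waypoints.
demo_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
demo_b = [(0.0, 0.2), (1.2, 0.0), (1.0, 1.2)]

learned = learn_trajectory([demo_a, demo_b])
print(learned)
```

Averaging smooths out trainer-to-trainer variation, which is why multiple demonstrations generalize better than replaying a single one.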
Some Islamic State units have used drones to shoot propaganda footage. Although the Islamic State documents show the group has procured components to build drones, Spleeters said the group mostly relies on products from China-based DJI, the so-called Apple of the drone world, which dominates an estimated 70% of the drone market. Reports that Islamic State had used DJI products pushed the company in February to create a geofence, a software restriction that creates a no-fly zone, over large swaths of Iraq and Syria, specifically over Mosul. Iraq's Popular Mobilization Units, paramilitary factions that operate with support from Iran, regularly deploy DJI's Phantom 4 drone to scan an area for Islamic State positions and car bombs.
The firm, owned by a consortium of German automakers, will spend an estimated $8.5 million to cover the annual salaries for the 50 AI jobs it's looking to fill, according to the study. The AI study counted 55 open AI-based job openings at Magic Leap, and estimates the average gig will pay over $135,000 per year. Another notable top 20 company: BAE Systems, which develops combat vehicles, ammunition, artillery systems, naval guns and missile launchers. If it fills all 52 of its open AI positions, Paysa estimates BAE would be investing an extra $8.3 million in machine learning talent each year.
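The per-position salaries implied by those totals can be checked with quick arithmetic:

```python
# Implied average annual salary per open AI position, from the study's totals.
consortium_total, consortium_openings = 8.5e6, 50   # German automaker consortium
bae_total, bae_openings = 8.3e6, 52                 # BAE Systems

print(consortium_total / consortium_openings)       # per-position average
print(round(bae_total / bae_openings))              # per-position average
```

Both work out to roughly $160,000-$170,000 per position, consistent in magnitude with the $135,000-plus average the study reports for Magic Leap.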
"Generating Visual Explanations" can explain the decisions of an image-to-wild-bird-name classifier with sentences like "This is a Laysan Albatross because this bird has a large wingspan, hooked yellow beak, and white belly". In this case, the XAI doesn't extensively cover how the deep neural net (DNN) made the decision, but the ability to generate an explanation from a visual image that was categorized using a neural net is pretty cool, and has a ton of wide-ranging applications (healthcare, military, etc.). University of Washington's LIME paper focuses on producing model-agnostic explanations, explaining the results of any ML system by looking only at its inputs and outputs. The bottom line trade-off here is: trying to articulate the decision boundaries created by the DNN is stupid hard -- a DNN will create very complex decision boundaries to classify stuff, probably accounting for the interaction of 1,000s-100,000s of variables in large models simultaneously, which is difficult to explain to a human.
More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian laws under the Geneva Convention. The campaign defines three types of robotic weapons: human-in-the-loop weapons, robots that can select targets and deliver force only with a human command; human-on-the-loop weapons, robots that can select targets and deliver force under the oversight of a human operator who can override the robots' actions; and human-out-of-the-loop weapons, robots that are capable of selecting targets and delivering force without any human input or interaction. Heather Roff, a research scientist in the Global Security Initiative at Arizona State University whose research interests include the ethics of emerging military technologies, international humanitarian law, humanitarian intervention, and the responsibility to protect, distinguishes automatic weapons from autonomous weapons in her report on a February 2016 round-table discussion among AI and robotics developers on autonomous weapons, civilian safety, and regulation versus prohibition. Roff describes initial autonomous weapons as limited learning weapons that are capable both of learning and of changing their sub-goals while deployed, saying, "Where sophisticated automatic weapons are concerned, governments must think carefully about whether these weapons should be deployed in complex environments."
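The campaign's three-way taxonomy turns on two properties of a weapon system, which can be made explicit as a tiny decision rule (the function and parameter names here are our own, not the campaign's):

```python
# The campaign's taxonomy as a two-property decision rule.

def control_category(requires_human_command: bool,
                     human_can_override: bool) -> str:
    if requires_human_command:
        return "human-in-the-loop"      # force only on a human command
    if human_can_override:
        return "human-on-the-loop"      # autonomous, but supervised
    return "human-out-of-the-loop"      # no human input or interaction

print(control_category(True, True))
print(control_category(False, True))
print(control_category(False, False))
```

The ordering of the checks matters: a system that requires a human command is "in the loop" regardless of whether an override also exists.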
The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. But it was not until the start of this decade, after several clever tweaks and refinements, that very large--or "deep"--neural networks demonstrated dramatic improvements in automated perception. Deep learning has transformed computer vision and dramatically improved machine translation.
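To make the "trained on records, then predicts on new records" pattern concrete, here is a deliberately tiny sketch: a single logistic unit fit by gradient descent on synthetic "patient records". This is only an illustration of supervised learning from records under invented features and labels; Deep Patient itself used a deep unsupervised network over roughly 700,000 real records, which this does not reproduce:

```python
import math
import random

random.seed(0)

def synth_patient():
    """Synthetic record: two hypothetical scaled features (age, lab value)
    labelled by a hidden linear rule the model must recover."""
    age, lab = random.random(), random.random()
    label = 1 if 1.5 * age + 2.0 * lab > 1.8 else 0
    return (age, lab), label

data = [synth_patient() for _ in range(2000)]

# Train one logistic unit with stochastic gradient descent on log loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        g = p - y                      # gradient of log loss wrt the logit
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

# Training accuracy: fraction of records the fitted unit classifies correctly.
correct = sum(
    ((1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))) > 0.5) == (y == 1)
    for (x1, x2), y in data
)
print(correct / len(data))
```

Deep networks replace the single unit with many stacked layers, which is what lets them fit the far more complex patterns in real clinical data.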
Artificial intelligence (AI) is all over the technology headlines lately. What problem could machine learning and artificial intelligence solve for cybersecurity? With machine learning, that mountain of data could be whittled down in a fraction of the time, helping organizations quickly identify and then mitigate a security incident. Between the mountain of security data that IT teams must manage and the lack of visibility I described above, now might be AI's time to shine.
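One way to picture how ML "whittles down" that mountain of data is anomaly-based triage: rank log events by rarity so an analyst reviews the unusual ones first. The sketch below uses a simple frequency heuristic with made-up log lines; real ML-based tools model far richer features, so this only illustrates the triage idea:

```python
from collections import Counter

# Hypothetical authentication log, dominated by routine events.
events = (
    ["login ok from 10.0.0.5"] * 500
    + ["login ok from 10.0.0.9"] * 450
    + ["login failed from 10.0.0.9"] * 40
    + ["login ok from 203.0.113.77"] * 2      # rare external source address
)

counts = Counter(events)
total = len(events)

# Anomaly score = inverse frequency: rarer events sort first for review.
scored = sorted(counts, key=lambda e: counts[e] / total)
print(scored[0])  # rarest event surfaces first
```

The payoff is exactly the one the paragraph describes: instead of reading a thousand lines, the analyst starts with the handful that deviate from the baseline.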
It was a crude machine, dubbed the Robot Gargantua by its creator. But even with a high degree of dexterity, robotic hands can't achieve the same level of performance as biological ones if they don't possess a sense of touch. Dubbed "Revolutionizing Prosthetics," this program effectively sought to build a robotic hand as capable as Luke Skywalker's arm from The Empire Strikes Back. The early touch sensors employed in the DARPA program measured force and performed profilometry, which, as the name implies, physically maps a surface's profile.
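Profilometry produces a height profile of the surface, from which standard texture measures can be derived; a common one is the arithmetic-mean roughness Ra, the mean absolute deviation from the profile's mean line. The sketch below is illustrative only, with invented profile data; the source does not describe the DARPA sensors' actual processing:

```python
# Arithmetic-mean roughness of a scanned height profile (e.g. micrometres):
# Ra = mean(|z_i - mean(z)|), the mean deviation from the profile's mean line.

def roughness_ra(profile):
    mean = sum(profile) / len(profile)
    return sum(abs(z - mean) for z in profile) / len(profile)

# Hypothetical height profiles from two surfaces.
smooth = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
rough = [0.0, 2.0, 0.1, 1.9, 0.2, 1.8]

print(roughness_ra(smooth))
print(roughness_ra(rough))
```

A fingertip sensor that can distinguish the two profiles above is, in miniature, doing what the paragraph describes: recovering surface texture from physical contact.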
My latest book is a thriller and it is science fiction, but it is also what is known as a genre smasher, and so I felt it was time to address the cyberpunk body-hacking grinders in Cyberwar. Fifty percent of the population consider the cyborg way of life a right and have permanently altered some part of their body; many of these body hackers have cybernetic eyes that replace one of their own functioning eyes with an infrared and thermal imaging device. The other half of the world's populace remain steadfast in what they deem their birthright: the right to have no mechanized or electronic devices forced into their bodies. "The Cyberpunk Body Hacking Grinders in Cyberwar" was written by R.J. Huneke.