Professor Hao Li used to think it would take two to three years before deepfake videos became indistinguishable from reality. But the associate professor of computer science at the University of Southern California now says the technology could be perfected in as little as six to 12 months. Deepfakes are realistic manipulated videos that can, for example, make it look as though a person said or did something they didn't. "The best possible algorithm will not be able to distinguish," he says of the difference between a perfect deepfake and a real video. Li says he changed his mind because developments in computer graphics and artificial intelligence are accelerating the development of deepfake applications.
Twenty IT leaders look into their crystal balls to predict the technologies and trends that will drive the sector in 2020. CIO Australia asked Australian technology bosses about their top-line predictions for 2020, the technologies that will have the greatest impact next year, and what top trends will shape the IT and business landscape. Here are the predictions from IT leaders, from vendor land to CIOs and CTOs across a host of industries. Intelligent systems (machine learning, artificial intelligence and automation) are the top trends for 2020. Intelligent systems will have a significant impact on increasing situational awareness (insights) and on using those insights to enhance decision making, delivering optimal outcomes for customers. One large impact on the business landscape will be the expanding role of digital twins, which are extending beyond the optimisation of individual assets and systems to driving improvements at the organisational level. We are introducing the concept of a 'Digital Twin of Operations' (DTO), having recently built some proof-of-concept systems. The DTO brings together inputs from a range of different systems and assets onto a common data and analytics platform, and can process large-scale, real-time data sets to simulate millions of 'what if' scenarios through cloud technologies.
Washington State University researchers are creating the first-ever "IQ test" for artificial intelligence (AI) systems that would score systems on how well they learn and adapt to new, unknown environments. Diane Cook, Regents Professor and Huie-Rogers Chair Professor, and Larry Holder, professor in the School of Electrical Engineering and Computer Science, received a grant of just over $1 million from the Defense Advanced Research Projects Agency (DARPA) to create a framework to test the "intelligence" of AI systems. "Previously, research on measuring intelligence in AI systems has been mostly theoretical," Holder said. Holder and Cook will design a test that will grade AI systems based on the difficulty of problems that they can solve. Creating methods to rank problems on their difficulty will be one of the major parts of the research.
Turkey will be the first customer for a new military drone with a machine gun mount that can fire single shots or 15-round bursts and carry a total of 200 rounds. Developed by Asisguard, an Ankara firm that specializes in military technology, the drone will use a laser sighting system to deliver a high degree of accuracy. The drone will also use a set of robotic braces to offset weapon recoil and ensure its flight path isn't thrown off by firing. According to a report in New Scientist, the drone will be able to hit targets as small as six inches across from a distance of up to 650 feet. The 55-pound drone, called Songar, will be able to travel up to six miles and fly at heights of up to 1.7 miles above ground.
China is planning to upgrade its naval power with unmanned AI submarines intended to provide an edge over rival fleets. A report by the South China Morning Post on Sunday revealed Beijing's plans to deploy the automated subs by the early 2020s in response to unmanned weapons being developed in the US. The subs will be able to patrol areas in the South China Sea and Pacific Ocean that are home to disputed military bases. While the expected cost of the submarines has not been disclosed, they are likely to be cheaper than conventional submarines because they do not require life-support systems for a human crew. Without a crew, however, they will also need to be resilient enough to remain at sea with no possibility of onboard repairs.
As a national health system, the VA has amassed a significant amount of data, possibly giving it a leg up, since a lack of trustworthy and accessible data has traditionally been one of the major roadblocks to AI development. In other health technology news, a website helps patients with rare diseases find more information about them.

Modern Healthcare: VA Dives Into Artificial Intelligence R&D
The Department of Veterans Affairs has opened a new artificial intelligence institute to pursue research and inform national strategy. The National Artificial Intelligence Institute, a joint initiative of the VA's office of research and development and the VA secretary's center for strategic partnerships, will work with public and private partners to carry out AI research and development projects, including efforts to apply AI to identify veterans at high risk for suicide or to help reduce patient wait times.

The Washington Post: Rare Diseases Lack Data, But This Website Aims To Help
Achondrogenesis, Noonan syndrome and sialadenitis aren't household names.
A drone with a machine gun attached can hit targets with high precision, according to its makers. Turkey is set to become the first country to field the drone when it takes delivery this month. The 25-kilogram drone uses eight rotor blades to get airborne. Its machine gun carries 200 rounds of ammunition and can fire single shots or 15-round bursts. Many countries and groups already use small military drones that can drop grenades or fly into a target to detonate an explosive.
Artificial intelligence in autonomous systems (such as drones) can address human error and fatigue and, in the future, may also address concerns over ethical behaviour on the battlefield. Installing an algorithmic "moral compass" in AI, however, will be challenging. A common theme in many discussions of the military uses of artificial intelligence (AI) is the "Skynet" trope: the fear that AI will become self-aware and decide to turn on its masters. Inherent in this argument is the contention that AI does not share the ethical constraints that humans do. While almost certainly an exaggeration, the Skynet scenario does highlight the problem of ensuring that the ethical behaviour we believe is incumbent on humans in combat is not lost as we increasingly devolve battlefield decision-making to autonomous systems.
At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that "we're in the process of potentially losing the AI arms race to China right now." As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in artificial intelligence research and development. Yang and others, most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month, are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an "arms race" is both wrong and counterproductive. Instead, while remaining clear-eyed about China's aggressive pursuit of AI for military use and for surveillance technology that abuses human rights, the United States and China must find their way to dialogue and cooperation on AI.
At the 2007 self-driving competition staged by the Defense Department's Defense Advanced Research Projects Agency (DARPA) in remote Victorville, California, Bryan Salesky's Carnegie Mellon University (CMU) team and a team from rival Stanford University included the future founders of at least four self-driving startups. Those competitors were Chris Urmson and Drew Bagnell of self-driving vehicle startup Aurora, Dave Ferguson of Nuro, Apex.ai's Jan Becker and Anthony Levandowski of Pronto.ai. Sebastian Thrun, who with Levandowski and Urmson helped build Google's self-driving business, also participated in the 2007 DARPA Urban Challenge, as did Dmitri Dolgov, who now heads engineering at Google's self-driving spinout Waymo. CMU edged Stanford to win the competition.