Despite the ubiquity of drones nowadays, it seems to be generally accepted that learning how to control them properly is just too much work. Consumer drones are increasingly being stuffed full of obstacle-avoidance systems, based on the (likely accurate) assumption that most human pilots are to some degree incompetent. Humans aren't entirely to blame: controlling a drone isn't the most intuitive thing in the world, and to make it easier, roboticists have been coming up with all kinds of creative solutions. There's body control, face control, and even brain control, all of which offer various combinations of convenience and capability. Generally, the more capability you want from a drone control system, the less convenient it is, in that it requires more processing power, more infrastructure, or brain probes, or whatever.
Sheila McGee-Smith is a leading communications industry analyst and strategic consultant with a proven track record in new product development, competitive assessment, market research, and sales strategies for customer care solutions and services. Her insight helps enterprises and solution providers develop strategies to meet the escalating demands of today's consumer and business customers.

It seems one cannot pick up a newspaper or magazine in 2018 without seeing a headline related to artificial intelligence (AI). In September 2018, The Wall Street Journal published a thought-provoking article, "The Human Promise of the AI Revolution." Taking an opposite and certainly more sensationalist approach, Newsweek ran this headline in its Tech & Science section: "Forget Terrorism, Climate Change and Pandemics: Artificial Intelligence is the Biggest Threat to Humanity."
A pledge against the use of autonomous weapons was signed in July by more than 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries. The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons". The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person without a human "in-the-loop". Arkin told D61 Live on Wednesday that rather than banning autonomous systems in war zones, we should instead guide them with strong legal and legislative directives. Citing a recent European Commission survey of 27,000 people, Arkin said 60 percent of respondents felt that robots should not be used to care for children, the elderly, and the disabled, even though this is the space that most roboticists are playing in.
They're using machine learning to sort through millions of malware files, searching for common characteristics that will help them identify new attacks. They're analyzing people's voices, fingerprints and typing styles to make sure that only authorized users get into their systems. And they're hunting for clues to figure out who launched cyberattacks--and make sure they can't do it again. "The problem we're running into these days is the amount of data we see is overwhelming," says Mathew Newfield, chief information-security officer at Unisys Corp. "Trying to analyze that information is impossible for a human, and that's where machine learning can come into play." The push for AI comes as companies face a huge increase in threats and more-sophisticated criminals who can often draw on nation-states for resources.
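Searching millions of files for shared characteristics can be sketched, very loosely, as comparing byte-level fingerprints between samples. The toy example below (all byte strings are invented, and this is not any vendor's actual pipeline) shows one common building block: a new file whose byte n-grams heavily overlap a known malware sample is more likely to be a variant of it.

```python
# Toy illustration of similarity-based malware triage: flag new samples
# whose byte n-gram "fingerprint" overlaps heavily with a known-bad file.
from typing import Set


def ngram_fingerprint(data: bytes, n: int = 4) -> Set[bytes]:
    """Set of all n-byte substrings; shared substrings hint at shared code."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}


def jaccard(a: Set[bytes], b: Set[bytes]) -> float:
    """Overlap between two fingerprints, from 0.0 (disjoint) to 1.0 (identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


# One known-bad sample and two new files (all made-up byte strings).
known_bad = ngram_fingerprint(b"\x90\x90payload-decrypt-loop\x90\x90")
variant   = ngram_fingerprint(b"\xcc\x90payload-decrypt-loop\x90\xcc")
benign    = ngram_fingerprint(b"hello world, ordinary document text")

# The variant shares far more n-grams with the known sample than the
# benign file does, so it would be prioritized for analysis.
print(jaccard(known_bad, variant) > jaccard(known_bad, benign))  # True
```

Real systems train classifiers over thousands of such features (n-grams, imported APIs, section entropy) rather than a single pairwise comparison, but the underlying idea of matching on common characteristics is the same.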
Bogota - Carmenza Gomez was planning a surprise Christmas dinner in the winter of 2008 to celebrate having her eight children back together under one roof in their home in an impoverished suburb in Bogota, the capital of Colombia. That summer, the family had finally been reunited after years apart due to the sons' military service. It was months away, but Carmenza wanted to throw an elaborate dinner to share their first Christmas together in years. But just days after the last of her sons arrived home, 23-year-old Victor Fernando, her third youngest, disappeared. "I didn't tell any of them what I was planning [for Christmas]," Carmenza recalled nearly a decade later.
One of the most common refrains about fighting in cyberspace is that the offense has the advantage over the defense: the offense needs to succeed only once, while the defense needs to be perfect all the time. Even though this has always been a bit of an exaggeration, we believe artificial intelligence has the potential to dramatically improve cyber defense and help right the offense-defense balance in cyberspace.
As businesses struggle to combat increasingly sophisticated cybersecurity attacks, made worse both by the vanishing IT perimeters of today's mobile and IoT era and by an acute shortage of skilled security professionals, IT security teams need both a new approach and powerful new tools to protect data and other high-value assets. Increasingly, they are looking to artificial intelligence (AI) as a key weapon to win the battle against stealthy threats inside their IT infrastructures, according to a new global research study conducted by the Ponemon Institute on behalf of Aruba, a Hewlett Packard Enterprise company. The Ponemon Institute study, entitled "Closing the IT Security Gap with Automation & AI in the Era of IoT," surveyed 4,000 security and IT professionals across the Americas, Europe and Asia to understand what makes security deficiencies so hard to fix, and what types of technologies and processes are needed to stay a step ahead of bad actors within the new threat landscape. The research revealed that in the quest to protect data and other high-value assets, security systems incorporating machine learning and other AI-based technologies are essential for detecting and stopping attacks that target users and IoT devices.
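Detecting attacks on users and IoT devices often starts with something as simple as baselining normal behavior and flagging large deviations. The sketch below is a hypothetical illustration of that idea (the traffic numbers and threshold are invented, and this is not the study's methodology):

```python
# Hypothetical sketch of behavioral anomaly detection for an IoT device:
# model its normal traffic rate, then flag readings that deviate sharply.
import statistics


def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it lies more than `threshold` standard deviations
    from the historical mean (a classic z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Requests per minute from a (made-up) IoT camera: steady, then a spike.
baseline = [12.0, 11.5, 12.3, 11.8, 12.1, 12.4, 11.9, 12.2]

print(is_anomalous(baseline, 12.0))  # False: within the device's normal range
print(is_anomalous(baseline, 95.0))  # True: sudden spike, possible compromise
```

Production systems replace the single z-score with learned models over many signals at once (traffic volume, destinations, timing, protocol mix), but the baseline-and-deviate pattern is the common starting point.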
Data61, the innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), has announced a partnership with Germany's Hensoldt Cyber that will focus on defending against cyber attacks. Under the arrangement announced at D61 LIVE on Wednesday, the pair will be developing a hardware-software stack to protect against cyber attacks on defence systems, smart factories, autonomous vehicles, and critical infrastructure. Data61 said the partnership will secure cyber-physical systems through seL4, which was developed by Data61's Trustworthy Systems group. The group will adapt the operating system to run on Hensoldt Cyber processors, and will extend seL4's existing correctness proofs to apply to that hardware, Data61 explained. "seL4 is provably secure, but its security guarantee relies on the assumption that the underlying hardware is trustworthy," chief research scientist for Data61's Trustworthy Systems group Professor Gernot Heiser said.
If North Korea's dear leader wakes up tomorrow, takes a crazy pill and decides to lob a nuclear missile at the US mainland, there's a good chance the military will be relying on artificial intelligence to protect us. Reuters reported earlier this summer on the existence of a secretive military effort -- actually, of multiple classified programs in various stages that are all focused on the development of AI-reliant systems to help us anticipate the launch of a missile, as well as to track launchers. Fears about runaway AI notwithstanding, the Pentagon is now apparently taking that kind of effort and planning to crank it up to 11. The Pentagon's research agency DARPA announced Monday it will be spending $2 billion on AI, the focus of which, according to CNN, will include "creating systems with common sense, contextual awareness and better energy efficiency. Advances could help the government automate security clearances, accredit software systems and make AI systems that explain themselves."
The University at Buffalo announced today that it is launching a multidisciplinary artificial intelligence institute -- the University at Buffalo Artificial Intelligence Institute (UBuffalo.AI). UBuffalo.AI will explore how to combine machines' superior ability to ingest, connect and recall information with capabilities that humans excel at, such as reasoning, judgment and strategizing, to develop dynamic human-machine partnerships. To lead UBuffalo.AI, the university recruited David Doermann, PhD, from the University of Maryland (UMD) and the Defense Advanced Research Projects Agency (DARPA). Doermann built his career at UMD developing technologies for document understanding and computer vision for the defense and intelligence communities. Human language is considered one of the grand challenges of AI, and the fundamental and applied research performed in his UMD laboratory has provided a critical foundation for addressing the next wave of AI challenges.