Artificial intelligence (AI) has become omnipresent, and the military is its latest frontier. In recent times, AI has become a critical part of modern warfare. Compared with conventional systems, military establishments that generate enormous volumes of data are able to integrate AI into a more unified process. AI improves the self-regulation, self-control and self-actuation of combat systems, thanks to its inherent computing power coupled with accurate decision-making capabilities, thereby ensuring operational efficiency. Given the enormous capability AI holds in modern-day warfare, many of the world's most powerful countries have increased their investments in military AI and security.
You might think it would be impossible for people to value a piece of hardware over a human life, yet new research from Radboud University suggests that such circumstances may exist. Bizarrely, one of these circumstances might involve a perception that robots feel pain. "It is known that military personnel may mourn a robot that is used to clear mines in the army. Funerals are organized for them. We wanted to investigate how far this empathy for robots extends, and what moral principles influence this behavior towards robots. Little research has been done in this area as of yet," the authors explain.
A bill to prohibit the flying of drones over Self-Defense Forces and U.S. military facilities in Japan, as well as venues for the 2020 Tokyo Olympics and Paralympics, cleared the Lower House on Tuesday. The bill, aimed at guarding against terrorism, has sparked protests from the media over its potential to disrupt newsgathering activities. In response, a House of Representatives panel added a supplementary provision to the legislation, requesting that the government ensure press freedom and the people's right to know. The ruling parties aim to enact the bill, an amendment to the existing law on drones, during the current Diet session, which runs through June. The legislation also bans drones from flying over venues for this year's Rugby World Cup.
The Royal Melbourne Institute of Technology (RMIT) has announced a new online course on cybersecurity in a bid to address Australia's cybersecurity skills shortage. As part of the course, RMIT Online has partnered with the National Australia Bank (NAB) and Palo Alto Networks, with both organisations providing mentors for the course. The course, called Cyber Security Risk and Strategy, will cover topics such as the fundamentals of cybersecurity and how to apply cybersecurity risk mitigation strategies to an organisation.
In the United States and Europe, the debate in the artificial intelligence community has focused on the unconscious biases of those designing the technology. Recent tests showed facial recognition systems made by companies like I.B.M. and Amazon were less accurate at identifying the features of darker-skinned people. China's efforts raise starker issues. While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the start-ups, have done that.
Eric Horvitz is a technical fellow and director at Microsoft Research Labs. A recipient of the Feigenbaum and Allen Newell Prizes for contributions to artificial intelligence (AI), he also serves on the US President's Council of Advisors on Science and Technology, the Defense Advanced Research Projects Agency, and the Allen Institute for Artificial Intelligence. He is also part of the standing committee of Stanford University's One Hundred Year Study on Artificial Intelligence. Horvitz, who visits India at least once a year to interact with the India labs team, spoke about his work at Microsoft Research. He also shared his thoughts on the benefits and fears of AI, and on attempts to address bias in algorithms.
Expanding through acquisition and investment, thermal imaging company FLIR Systems has been making an aggressive push into the military drone sector. In February, I wrote about FLIR's acquisition of Endeavor Robotic Holdings, a military defense company specializing in ground robots, for a whopping $385 million. That acquisition came shortly after FLIR acquired aerial drone company Aeryon for $200 million in January, and overnight it made FLIR a powerful player in defense robotics. Now the company has announced a strategic investment in DroneBase, a global drone operations company that gives businesses access to one of the largest unmanned aircraft systems (UAS) pilot networks. FLIR will be the exclusive provider of thermal product solutions for DroneBase.
While there are innumerable cybersecurity threats, the end goal of many attacks is data exfiltration. Much has been said about using machine learning to detect malicious programs, but it is less common to discuss how machine learning can aid in identifying other types of notable threats. Critically, machine learning can be key to detecting one of the most insidious types of malicious actor: one with legitimate access to your network and systems. When properly trained, machine-learning algorithms can identify insider threats and fraud before they become dangerous. When people hear the term "insider threat," many of them imagine an employee gone rogue, a disgruntled member of your team committing corporate espionage and leaking sensitive data or documents to competitors or criminals.
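The kind of behavioral anomaly detection described here can be sketched with a simple statistical baseline: score each user's daily activity against the population norm and flag large deviations. This is a hypothetical illustration, not any particular vendor's approach; the feature names, sample data, and z-score threshold are invented for the example.

```python
# Minimal insider-threat sketch: flag activity far outside the baseline.
# All features, values, and the threshold below are illustrative only.
import statistics

# Baseline: (logins, MB downloaded, after-hours accesses) per user-day
baseline = [
    (8, 52, 1), (7, 48, 0), (9, 55, 2), (8, 47, 1),
    (10, 60, 1), (6, 45, 0), (8, 50, 2), (9, 53, 1),
]

means = [statistics.mean(col) for col in zip(*baseline)]
stdevs = [statistics.stdev(col) for col in zip(*baseline)]

def anomaly_score(sample):
    """Largest absolute z-score across the sample's features."""
    return max(abs(x - m) / s for x, m, s in zip(sample, means, stdevs))

THRESHOLD = 4.0  # flag anything more than 4 standard deviations out

normal_day = (9, 51, 1)
insider_day = (3, 2000, 25)  # huge downloads, many after-hours accesses

print(anomaly_score(normal_day) > THRESHOLD)   # False
print(anomaly_score(insider_day) > THRESHOLD)  # True
```

Real systems replace the z-score with richer unsupervised models, but the principle is the same: learn what "normal" looks like for legitimate users, then surface the outliers.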
Reprinted with permission from Quanta Magazine's Abstractions blog. In the 1950s, four mathematically minded U.S. Army soldiers used primitive electronic calculators to work out the optimal strategy for playing blackjack. Their results, later published in the Journal of the American Statistical Association, detailed the best decision a player could make for every situation encountered in the game. Yet that strategy, which would evolve into what gamblers call "the book," did not guarantee a player would win. Blackjack, along with solitaire, checkers, and any number of other games, has a ceiling on the percentage of games in which players can expect to triumph, even if they play the absolute best that the game can be played.
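The kind of calculation the soldiers ground out by hand can be approximated today with a few lines of Monte Carlo simulation. The sketch below, under a deliberately simplified infinite-deck model (no splits, doubles, or surrender, and the dealer draws directly to 17), estimates the expected value of standing versus hitting on hard 16 against a dealer 10. Both come out negative, which is the article's point: even "the book" only minimizes losses in bad situations, it cannot guarantee a win.

```python
# Hedged illustration of computing one entry of blackjack basic strategy
# by simulation, under a simplified infinite-deck model.
import random

random.seed(1)
CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # 11 = ace

def dealer_total(upcard):
    """Dealer draws until reaching 17+, counting aces as 11 when safe."""
    total, aces = upcard, int(upcard == 11)
    while total < 17:
        c = random.choice(CARDS)
        total += c
        aces += int(c == 11)
        while total > 21 and aces:
            total -= 10
            aces -= 1
    return total

def payoff(player_total, upcard):
    """+1 win, -1 loss, 0 push for a single resolved hand."""
    if player_total > 21:
        return -1
    d = dealer_total(upcard)
    if d > 21 or player_total > d:
        return 1
    return -1 if player_total < d else 0

def ev(action, trials=100_000):
    """Monte Carlo expected value of 'stand' or 'hit' on 16 vs dealer 10."""
    total = 0
    for _ in range(trials):
        if action == "stand":
            total += payoff(16, 10)
        else:  # take exactly one card (ace counts as 1), then stand
            c = random.choice(CARDS)
            total += payoff(16 + (1 if c == 11 else c), 10)
    return total / trials

print(f"stand: {ev('stand'):+.3f}  hit: {ev('hit'):+.3f}")
```

Repeating this comparison for every player total against every dealer upcard, and picking the better action in each cell, reproduces the decision table the soldiers published.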
AI can reveal how many cigarettes a person has smoked based on the DNA contained in a single drop of their blood, or scrutinize Islamic State propaganda to discover whether violent videos are radicalizing potential recruits. Because AI is such a powerful tool, Microsoft president Brad Smith told the crowd at Columbia University's recent Data Science Day that tech companies and universities performing AI research must also help ensure the ethical use of such technologies. AI is now an invisible but inextricable part of life for hundreds of millions of people. The rise of machine learning algorithms combined with cloud computing services has put massive computer power at the fingertips of companies and customers worldwide. These trends have also enabled the rise of data science that applies AI methods to constantly analyze information from online services and Internet-connected devices.