Introducing self-governing killer robots to the battlefield could have horrific consequences for mankind, a leading academic has warned. The lethal technology is being developed around the world and is gradually finding its way into warfare as countries try to stay ahead of rival nations. A global initiative to prohibit fully autonomous killing machines that require no human oversight to select and kill targets was blocked earlier this year. A handful of countries including Australia, Israel, the US, Russia and South Korea prevented the worldwide ban, citing the need for further talks on the 'benefits and advantages of autonomous weapons'. Richard Moyes, an honorary fellow at the University of Exeter and founding member of the Campaign to Stop Killer Robots (CSKR), has warned that the long-term use of killer robots without human controllers may result in unnecessary loss of life among both civilians and soldiers.
To illustrate how artificial intelligence (AI) could affect the future battlefield, consider the following scenario, based on a forthcoming book I am writing entitled The Senkaku Paradox: Risking Great Power War over Limited Stakes. The scenario, imagined to occur sometime between now and 2040, begins with a hypothesized Russian "green men" attack against a small farming village in eastern Estonia or Latvia. Russia's presumed motive would be to sow discord and dissent within NATO, weakening the alliance. Estonia and Latvia are NATO member states, and thus the United States is sworn to defend them. But in the event of such Russian aggression, a large, direct NATO response may or may not be wise.
In it, CW3 Crifasi describes the inherent tension between human critical thinking and the benefits of augmented intelligence facilitating warfare at machine speed. "CAITT, let's re-run the targeting solution for tomorrow's engagement… again," asked Chief Warrant Officer Five Robert Menendez, in a not altogether annoyed tone of voice. Considering this was the fifth time he had asked, the tone of control Bob was exercising was nothing short of heroic to those who knew him well. Fortunately, CAITT, short for Commander's Artificially Intelligent Targeting Tool, did not seem to notice. Bob quietly thanked the nameless software engineer who had not programmed it to recognize the sarcasm and vitriol that he felt when he made the request.
Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students' struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that "a young adult whose brain has been 'wired' to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today's increasingly complex and competitive world." Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, if creative thinking skills are not fused with STEM, there will continue to be a decline in the number of people equipped with well-rounded critical thinking abilities. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the skill most in demand in the future will be "creative thinking."
Elon Musk is worried about the perils of artificial intelligence. Imagine that the year is 2030 and AI has changed practically everything. Is it a change for the better, or has AI threatened what it means to be human, to be productive, and to exercise free will? You've heard the dire predictions from some of the brightest minds about AI's impact. The Tesla and SpaceX chief worries that AI is far more dangerous than nuclear weapons.
I've always been a loner, avoiding crowds as much as possible, but last Friday I found myself in the company of 500 million people. The breach of the personal accounts of Marriott and Starwood customers forced us to join the 34% of U.S. consumers who experienced a compromise of their personal information over the last year. Viewed another way, there were 2,216 data breaches and more than 53,000 cybersecurity incidents reported in 65 countries in the 12 months ending in March 2018. How many data breaches will we see in 2019, and how big will they be? No one has a crystal ball that accurate, and it's difficult to make predictions, especially about the future. Still, I made a brilliant, contrarian, and very accurate prediction last year, stating unequivocally that "there will be more spectacular data breaches" in 2018. Just like last year, this year's 60 predictions reveal the state of mind of key participants in the cybersecurity industry (on the defense team, of course) and cover all that's hot today. Topics include the use and misuse of data; artificial intelligence (AI) and machine learning as a double-edged sword helping both attackers and defenders; whether we are going to finally "get over privacy" or see our data finally being treated as a private and protected asset; how the cloud changes everything and how connected and moving devices add numerous security risks; the emerging global cyber war conducted by terrorists, criminals, and countries; and the changing skills and landscape of cybersecurity.
The detention in Canada of Meng Wanzhou, Huawei's CFO and the daughter of its founder, is further inflaming tensions between the US and China. Her arrest is linked to a US extradition request on undisclosed charges, but China says it's a human rights violation and is demanding her swift release. Behind this very public drama is a long-running, behind-the-scenes one centered on western intelligence agencies' fears that Huawei poses a significant threat to global security. Among the spooks' biggest concerns: the Chinese firm is the world's largest manufacturer of equipment such as the base stations and antennae that mobile operators use to run wireless networks. And those networks carry data that's used to help control power grids, financial markets, transport systems, and other parts of countries' vital infrastructure.
You don't usually find companies asking for regulation of the technology they're developing, but Microsoft is doing just that. The company wants Congress to write laws for facial recognition technology in 2019. Microsoft is positioning itself as an outspoken elder statesman while still trying to beat its competitors.
Symantec is rolling out a new product that it says will help enterprises protect public infrastructure from cyber attacks and cyberattack-induced blackouts. The Industrial Control System Protection (ICSP) Neural is a station that scans USB devices for malware in order to block attacks on IoT and operational technology environments. Today's security threats have expanded in scope and seriousness: there can now be millions, or even billions, of dollars at risk when information security isn't handled properly. The cybersecurity firm said the ICSP station uses a neural network to detect USB-borne malware and sanitize the devices.
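Symantec has not published the internals of ICSP Neural, but the general idea of scoring files on removable media for malware risk can be illustrated with a toy sketch. Everything below is hypothetical: the features (byte entropy and an executable-header check), the weights, and the threshold are invented for illustration, whereas real products use trained models over far richer feature sets.

```python
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = [0] * 256
    total = len(data)
    for b in data:
        counts[b] += 1
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def suspicion_score(data: bytes) -> float:
    """Toy linear scorer combining two hand-picked features.

    A real detector would learn its weights from labeled samples;
    these values are invented for illustration only.
    """
    entropy = byte_entropy(data)           # ranges 0..8
    looks_executable = data[:2] == b"MZ"   # Windows PE magic bytes
    return (entropy / 8.0) * 0.7 + (0.3 if looks_executable else 0.0)

def flag_file(data: bytes, threshold: float = 0.6) -> bool:
    """Flag a file scanned from a USB device as suspicious."""
    return suspicion_score(data) > threshold
```

In this sketch, a plain-text file scores low while a high-entropy blob carrying an executable header scores high; an actual neural classifier replaces the hand-set weights with learned ones.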
Every year, Accenture's Technology Vision pinpoints the key technology trends that will reshape and reinvent organisations of all kinds over the next few years. This year's top-line theme is "The Intelligent Enterprise Unleashed". It's a concept that's hugely relevant not only to all businesses, but also – in my view – to armed forces the world over. Why? Because, in the defence context, becoming a more intelligent military organisation is clearly a critical goal. And – as with any other type of organisation – it's a goal that armed forces can only achieve by unlocking the data that's currently held within silos and freeing it up to flow to the right place at the right time for the right use.