
The Real Threat From A.I. Isn't Superintelligence. It's Gullibility.

Slate

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity. Given this meteoric ascent, it's not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves.


Imaginary numbers protect AI from very real threats

#artificialintelligence

Computer engineers at Duke University have demonstrated that using complex numbers--numbers with both real and imaginary components--can play an integral part in securing artificial intelligence algorithms against malicious attacks that try to fool object-identifying software by subtly altering the images. By including just two complex-valued layers in a network that may contain hundreds if not thousands of layers, the technique can improve performance against such attacks without sacrificing any efficiency. The research was presented at the 38th International Conference on Machine Learning. "We're already seeing machine learning algorithms being put to use in the real world that are making real decisions in areas like vehicle autonomy and facial recognition," said Eric Yeats, a doctoral student working in the laboratory of Helen Li, the Clare Boothe Luce Professor of Electrical and Computer Engineering at Duke. "We need to think of ways to ensure that these algorithms are reliable to make sure they can't cause any problems or hurt anyone." One way that machine learning algorithms built to identify objects and images can be fooled is through adversarial attacks.
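To make the idea of a complex-valued layer concrete, here is a minimal sketch of a single complex-valued dense layer using Python's built-in complex type. The layer shape, the magnitude read-out, and all variable names are illustrative assumptions for this article, not the Duke group's actual architecture or code.

```python
# Hypothetical sketch: one complex-valued dense layer.
# Weights, biases, and the |z| read-out are illustrative assumptions.

def complex_dense(inputs, weights, bias):
    """Forward pass of one complex-valued layer.

    inputs  : list of complex activations
    weights : list of rows, each a list of complex weights
    bias    : list of complex biases, one per output unit
    """
    outputs = []
    for row, b in zip(weights, bias):
        # Complex multiply-accumulate: phase and magnitude both carry signal.
        z = sum(w * x for w, x in zip(row, inputs)) + b
        outputs.append(z)
    return outputs

def to_real(outputs):
    # Collapse complex activations to real magnitudes so downstream
    # real-valued layers can consume them.
    return [abs(z) for z in outputs]

acts = complex_dense([1 + 2j, 0.5 - 1j],
                     [[0.3 + 0.1j, -0.2j], [1j, 0.5]],
                     [0j, 0.1])
print(to_real(acts))
```

The intuition reported in the research is that interleaving a couple of such layers with ordinary real-valued ones gives the network extra representational slack that makes small adversarial image perturbations less effective.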


Forget Covid – is artificial intelligence the real threat to humanity?

#artificialintelligence

The former Google X executive Mo Gawdat is doing the rounds with his new book Scary Smart, which casts artificial intelligence as much a force of nature as Covid. Indeed, he sees AI as nothing less than the next evolutionary step on this planet. For Gawdat, it's clear: the capacity of these machines to learn from data and experience is on an exponential curve (one that doesn't just gently ascend but eventually shoots into the sky). At some singular point – probably aided by the unimaginable calculating power of quantum computing, and apparently by the end of the decade – we will be in the presence of massively superior beings. Gawdat wants us – indeed, warns us – to think of them as "our children", with a voracious appetite for learning from their environment.



7 Real Threats of Artificial Intelligence You Should Know

#artificialintelligence

You regularly see science-fiction-like Hollywood doom movies, all premised on the destruction of humanity by AI. These scenarios are often exaggerated, yet a growing number of alarming reports on machine learning is being fuelled by new, advanced AI systems. Science fiction is becoming reality, because smart computer systems are becoming progressively proficient at recollecting and understanding what we as individuals do: skills like looking, listening, or talking.


The real threat from artificial intelligence – basic science - KSU

#artificialintelligence

What do AI and chloroquine have in common? Readers will already have grasped the astronomical impact artificial intelligence (AI) has on businesses and governments, forcing large economies to make strategic plans for the technology. What not everyone understands yet are the real risks posed by the technology. A historical overview of artificial intelligence takes us on a roller coaster ride of exaggerated promises and gigantic disappointments. One of its milestones is the emergence of artificial neural networks (ANNs) in 1958, when Frank Rosenblatt invented the perceptron.


Fear itself is the real threat to democracy, not tall tales of Chinese AI | John Naughton

The Guardian

This week the US National Security Commission on Artificial Intelligence released its final report. Cursory inspection of its 756 pages suggests that it's just another standard product of the military-industrial complex that so worried President Eisenhower at the end of his term of office. On closer examination, however, it turns out to be a set of case notes on a tragic case of what we psychologists call "hegemonic anxiety" – the fear of losing global dominance. The report is the work of 15 bigwigs, led by Dr Eric Schmidt, the former CEO of Alphabet (and before that the adult supervisor imposed by venture capitalists on the young co-founders of Google). Of the 15 members of the commission, only four are female.


The Real Threat to Business Schools from Artificial Intelligence - Knowledge@Wharton

#artificialintelligence

Artificial intelligence (AI) will change the way we learn and work in the near future. Nearly 400 million workers globally will change their occupations in the next 10 years, and business schools are uniquely situated to respond to the shifts coming to the future of work. However, a recent study, "Implications of Artificial Intelligence on Business Schools and Lifelong Learning," shows that business schools remain cautious in adapting management education to address the changing needs of students, workers and organizations, writes Anne Trumbore in this opinion piece. Trumbore, one of the study's coauthors, is senior director of Wharton Online, a strategic digital learning initiative at the Wharton School of the University of Pennsylvania. In the past few weeks, COVID-19 has moved hundreds of millions of students around the globe from physical to online classes.


AI, automation emerge as critical tools for cybersecurity

#artificialintelligence

Artificial intelligence and automation adoption rates are rising, and investment plans are high on enterprise radars. AI is in pilots or use at 41% of companies, with another 42% actively researching it, according to the 2019 IDG Digital Business Study. Cybersecurity has emerged as an ideal use case for these technologies. Digital business has opened a score of new risks and vulnerabilities that, combined with a security skills gap, are weighing down security teams. As a result, more organizations are looking at AI and machine learning as a way to relieve some of the burden on security teams by sifting through high volumes of security data and automating routine tasks.


Automation And AI: The New Frontier In Cybersecurity

#artificialintelligence

Digital technology is changing the way we work. Employees are accessing their productivity applications from outside the physical workplace on an increasing number of mobile devices. Thus, the number of assets that internal IT organizations are expected to manage is rising, as are the amounts of data that need to be examined. Sensors for HVAC systems and intelligent CCTVs for physical building security are examples of IoT devices that are new sources of additional network traffic. The burden falls to IT organizations, which are being asked to accommodate these advances in technology while confronting heightened security risks to their businesses. The scale and complexity of a company's digital assets needing protection from malicious attacks and data breaches have grown significantly.