

Russia-Ukraine war: List of key events, day 1,309

Al Jazeera

Can Ukraine restore its pre-war borders? How is Russia replenishing its military? At least two people were killed by a daytime Ukrainian drone attack on the Russian city of Novorossiysk on Wednesday, according to The Moscow Times. Among those injured were employees of a Russian-Kazakh oil project. Russia's Ministry of Defence on Wednesday said 1,495 Ukrainian troops were killed in the past 24 hours of fighting, according to Russia's state news agency TASS.


AI hysteria is a distraction: algorithms already sow disinformation in Africa

The Guardian

More than 70 countries are due to hold regional or national elections by the end of 2024. It will be a period of huge political significance across the globe, with more than 2 billion people (mostly from the global south) directly affected by the outcome of these elections. The stakes for the integrity of democracy have never been higher. As concerns mount about the influential role of information pollution, disseminated through the vast platforms of US and Chinese corporations, in shaping these elections, a new shadow looms: how artificial intelligence – more specifically, generative AI such as OpenAI's ChatGPT – has increasingly moved into the mainstream of technology. The recent wave of hype around AI has seen a fair share of doom-mongering.


Misplaced fears of an 'evil' ChatGPT obscure the real harm being done

#artificialintelligence

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft's ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and was in love with him. The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control, or as precursors of so-called artificial general intelligence (AGI) – ie human-level intelligence – and therefore as posing an existential threat to humanity. Accompanying this hysteria is a new gold rush, as venture capitalists and other investors strive to get in on the action.


Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.


AI's current hype and hysteria could set the technology back by decades

#artificialintelligence

Most discussions about artificial intelligence (AI) are characterised by hyperbole and hysteria. Though some of the world's most prominent and successful thinkers regularly forecast that AI will either solve all our problems or destroy us or our society, and the press frequently report on how AI will threaten jobs and raise inequality, there's actually very little evidence to support these ideas. What's more, this could actually end up turning people against AI research, bringing significant progress in the technology to a halt. The hyperbole around AI largely stems from its promotion by tech-evangelists and self-interested investors. Google CEO Sundar Pichai declared AI to be "probably the most important thing humanity has ever worked on".



[FoR&AI] The Seven Deadly Sins of Predicting the Future of AI – Rodney Brooks

#artificialintelligence

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs. As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.

Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence where he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the Chicken Little "the sky is falling" calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better.

Today, there is a story in Market Watch that robots will take half of today's jobs in 10 to 20 years. It even has a graphic to prove the numbers. How many robots are currently operational in those jobs? How many realistic demonstrations have there been of robots working in this arena? Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.

Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others, need to spend time pushing back on them? Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many of the predictions about our AI future.

I am going to first list the four general topic areas of such predictions that I notice, along with a brief assessment of where I think they currently stand. Research on AGI is an attempt to distinguish a thinking entity from current day AI technology such as Machine Learning. Here the idea is that we will build autonomous agents that operate much like beings in the world. This has always been my own motivation for working in robotics and AI, but the recent successes of AI are not at all like this.


The Future of Work and the 'Hyperbole Curve'

#artificialintelligence

The perils and promise we imagine the future to hold are like a mirage on the horizon, reflecting a time that never really arrives. It is the perfect canvas for us to project our hopes and fears onto, always ahead, ominous or inviting. The result is that we fail to attend to the present and our recent past, and the clues they might offer to validate or diminish our fears and hopes. Our organizations and institutions are no different, made up as they are of imperfect and irrational individuals. Even in the face of clear examples to the contrary we often persist in assuming organizations to be rational actors, taking action based on the best information, dispassionately assessing the costs and benefits of various options before making a decision.


Time to stop panicking about artificial intelligence

#artificialintelligence

Ryan Hagemann for the Niskanen Center: The fears over artificial intelligence, while at a very early stage, are an expected feature of the "techno-panics" associated with emerging technologies. As described in a paper from the Information Technology and Innovation Foundation's Daniel Castro and Alan McQuinn, these panics are part of a broader privacy panic cycle ... The cycle is composed of four stages: trusted beginnings, rising panic, deflating fears and moving on. So where are we with fears over AI? Based on the cross-ideological concerns, Mercatus senior research fellow Adam Thierer noted ... that we're still in the "rising panic" stage of the current AI hysteria. Unfortunately, we haven't yet reached peak hysteria. That boiling-over point, however, is probably coming sooner than a lot of people expect -- legislators, regulators, researchers and those techno-optimists who tout this technology's benefits should be prepared.