Humans Aren't Mentally Ready for an AI-Saturated 'Post-Truth World'

WIRED

Artificial intelligence is arguably the most rapidly advancing technology humans have ever developed. A year ago you wouldn't often hear AI come up in a regular conversation, but today it seems there's constant talk about how generative AI tools like ChatGPT and DALL-E will affect the future of work, the spread of information, and more. A major question that has thus far gone almost entirely unexamined is how this AI-dominated future will affect people's minds. There has been some research into how using AI on the job will affect people mentally, but there isn't yet an understanding of how simply living among so much AI-generated content and so many AI-driven systems will affect people's sense of the world. How is AI going to change individuals and society in the not-too-distant future?


The father of the internet warns against rushing into artificial intelligence because ChatGPT is 'really cool'

#artificialintelligence

As the popularity of ChatGPT and other conversational AI technologies continues to grow, Google's Vint Cerf, known as the "father of the internet," warns executives to think twice before investing in them. During a recent conference in Mountain View, California, Cerf cautioned attendees against hastily investing in conversational AI technology simply because it's a trending topic. Cerf explained that there's an ethical issue at hand, one that requires thoughtful consideration. While ChatGPT and other AI technologies have gained popularity, they don't always work as intended. The pressure to stay competitive in the conversational AI space has become a growing concern for tech giants such as Google, Meta, and Microsoft.


If we blindly follow AI, where does that leave us?

#artificialintelligence

We rely on artificial intelligence to choose our movies, our music, even our dates. At this point, are we blindly following where it leads us? Three experts join us to talk about algorithms in our lives and the consequences of turning over so much power to them. Journalist Chris Jones talks about re-learning how to think for ourselves; Greg Beringer of The New York Times discusses the geopolitical influences of our digital maps; and Karen Hao of MIT Technology Review talks about Facebook's "machine learning" algorithms. This episode originally aired on April 15, 2022.


How does a spider weave web using artificial intelligence?

#artificialintelligence

Trap-making spiders build their webs blindly, using only the sense of touch, and the mechanism by which they do so has fascinated humans for centuries. In a recent study, researchers observed a hackled orb weaver during a nighttime web-building task, monitoring how it built the web by tracking millions of individual leg actions with AI machine-vision software specifically designed to detect limb movement. Using night vision and artificial intelligence to record every movement of all eight legs as the spiders operated in the dark, the researchers were able to precisely trace how the webs are built.


Trust In Artificial Intelligence, But Not Blindly

#artificialintelligence

Imagine the following situation: a company wants to teach an artificial intelligence (AI) to recognise horses in photos. To this end, it uses several thousand images of horses to train the AI until it is able to reliably identify the animal even in unseen images. The AI learns quickly. It is not clear to the company how the AI is making its decisions, but this is not really an issue for the company: it is simply impressed by how reliably the process works. The danger is that the AI may have learned to rely on cues in the images that have nothing to do with horses at all. Researchers refer in these cases to confounders: confounding factors that should actually have nothing to do with the identification process.



AI Isn't a Solution to All Our Problems

#artificialintelligence

Artificial intelligence is here to stay, but as with any helpful new tool, there are notable flaws and consequences to adopting it blindly. From the esoteric worlds of predictive health care and cybersecurity to Google's e-mail completion and translation apps, the impacts of AI are increasingly being felt in our everyday lived experience. The way it has crept into our lives in such diverse ways, and its proficiency at low-level knowledge tasks, shows that AI is here to stay. But like any helpful new tool, it carries notable flaws and consequences when adopted blindly. AI is a tool, not a cure-all for modern problems. AI tools aim to increase efficiency and effectiveness for the organizations that implement them.


Teaching artificial intelligence to connect senses like vision and touch

#artificialintelligence

In Canadian author Margaret Atwood's book "The Blind Assassin," she writes that "touch comes before sight, before speech. It's the first language and the last, and it always tells the truth." While our sense of touch gives us a channel to feel the physical world, our eyes help us immediately understand the full picture of these tactile signals. Robots that have been programmed to see or feel can't use these signals quite as interchangeably. To better bridge this sensory gap, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a predictive artificial intelligence (AI) that can learn to see by touching, and learn to feel by seeing.


AI Strategy Foundation: Breaking it Down to the Basics

#artificialintelligence

One question I used to hear a lot as a kid was this: "If your friend jumped off a bridge, would you do it, too?" Yes, there were lots of bad trends in the 80s, and I may have fallen prey to more than a few of them. But that idea of blindly following a trend doesn't just apply to questionable hair and clothing choices. It also applies to every technology that pops up in digital transformation when building an AI strategy foundation. And one of the technologies getting adopted most blindly in today's marketplace: artificial intelligence (AI). I can hear it now: "But AI is the future! If we don't adopt it now, our company will fall behind!"


Twitter has a #AI and #bots problem : it's rejecting them blindly!

#artificialintelligence

Strangely, while everyone can see that smart bots, chatbots, and AI are the future, and most platforms are opening their doors to new user interactions and experiences with chatbots (Slack! Facebook!), Twitter seems to be blindly looking backwards by aggressively refusing to let bots onto the platform, even though a well-made, well-filtered bot can be a very useful source of information. Am I saying that all bots are good for Twitter? Of course not, especially the ones that are basically spamming engines and are built to follow thousands of people automatically, etc. But what about smart, AI-driven bots that try to act as a newswire, or a good newsletter sending updates to their followers?