Digital noise


How to Get Heard Above the Digital Noise Using Artificial Intelligence – Eularis

#artificialintelligence

One thing I have heard a lot recently is how much harder it has become to get heard above all the digital noise. This is partly because Covid-19 accelerated the digital journey of many companies. Now there is so much digital marketing from pharmaceutical companies that the competition to reach the physician and the patient has intensified dramatically. I don't know about you, but there are so many webinars I would love to attend, and there just isn't enough time in the day. Pre-Covid there were nowhere near as many, and I could make time to attend the ones of interest because they were few and far between.


Sneaky attacks trick AIs into seeing or hearing what's not there

New Scientist

When it comes to AI, seeing isn't always believing. It's possible to trick machine learning systems into hearing and seeing things that aren't really there. We already know that wearing a pair of snazzy glasses can fool face recognition software into thinking you're someone else, but research from Facebook now shows that the same approach can fool other algorithms too. The technique – known as an adversarial example – could be used by hackers to trick driverless cars into ignoring stop signs or prevent a CCTV camera from spotting a suspect in a crowd. Show an algorithm a photo of a cat that's been manipulated in a subtle way and it will think it's looking at a dog.
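
The recipe behind such attacks is simpler than it sounds: compute the gradient of the model's loss with respect to the input pixels, then nudge every pixel slightly in the direction that increases the loss. Below is a minimal sketch of the fast gradient sign method (FGSM), a classic way to craft adversarial images; it is not necessarily the Facebook team's exact technique, and the model and class label are stand-ins for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained ImageNet classifier stands in for any vision model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Nudge every pixel of `image` by at most `epsilon` in whichever
    direction increases the model's loss on the true `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# A random tensor stands in for a real cat photo here.
x = torch.rand(1, 3, 224, 224)   # one 224x224 RGB image
y = torch.tensor([281])          # ImageNet class 281 ("tabby cat")
x_adv = fgsm_attack(x, y)

# The two inputs look identical to a human, but the predicted
# labels frequently differ.
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

Because the change to each pixel is capped at epsilon, the perturbed image remains visually indistinguishable from the original even when the prediction changes.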


AI Can Be Fooled With One Misspelled Word

#artificialintelligence

Computers are getting really good at learning things about the world and applying that knowledge to new situations, like identifying a cat in a photo. But researchers have recently discovered a hitch: by adding digital noise to an image, imperceptible to the human eye, it's surprisingly easy to trick a machine. Now researchers have engineered a similar way to fool AI trained to understand human language. One can imagine how this might pose a risk if, for example, machines one day autonomously vet legal documents. In a paper posted to the arXiv preprint server this week, a team of computer scientists from Renmin University of China in Beijing describes its system for fooling a computer trained to understand language.
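
The text version of the attack exploits a related blind spot: many language models only recognise words they saw during training, so a single misspelling can silently erase the word carrying the decisive signal. The toy sketch below illustrates that idea with a bag-of-words sentiment classifier; it is not the Renmin group's actual system, and the training sentences are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train a tiny sentiment classifier on a handful of made-up reviews.
texts = ["great movie, loved it", "wonderful and fun", "loved the acting",
         "terrible movie, hated it", "awful and boring", "hated the plot"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

sentence = "the movie was not terrible, I loved it"
print(clf.predict([sentence]))   # expected [1]: "loved" outweighs "terrible"

# Misspell the one decisive word. "lovd" is out-of-vocabulary, so the
# model never sees it, and only "terrible" remains as a strong signal.
attacked = sentence.replace("loved", "lovd")
print(clf.predict([attacked]))   # expected [0]: the prediction flips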