fooled
- Government (1.00)
- Information Technology > Security & Privacy (0.73)
How Not to be Fooled by Time Series Models
It is easy to be tricked by time-series models. I have seen models that are able to (seemingly) predict the most random trends accurately, such as stock and crypto prices, using advanced techniques that most don't fully understand. Is time series really like magic in this regard? Perform the right data manipulations, apply a complex-enough model, and presto, amazingly accurate predictions are produced for any date-indexed line into the future? If you have seen the same things I'm describing and are skeptical, you are right to feel that way.
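As a rough illustration of the trap described above (this sketch is not from the article), consider how a naive "persistence" forecast, which simply predicts that tomorrow equals today, can look impressively accurate on a random walk such as a synthetic price series, despite having no forecasting skill at all:

```python
# Minimal sketch: on a random walk, "predict yesterday's value" tracks the series
# closely and scores a near-perfect R^2 against the mean, yet has no real skill.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))    # synthetic random-walk "price"

persistence_forecast = prices[:-1]                  # predict each day with the previous day
actual = prices[1:]

mae = np.mean(np.abs(actual - persistence_forecast))
r2 = 1 - np.sum((actual - persistence_forecast) ** 2) / np.sum((actual - actual.mean()) ** 2)

print(f"MAE of 'predict yesterday': {mae:.3f}")
print(f"R^2 against the mean:       {r2:.3f}")      # typically > 0.98
```

The forecast hugs the actual series and produces exactly the kind of chart and headline metric that makes a model look magical, which is why out-of-sample evaluation against naive baselines matters.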
Seven Times I Was Fooled by a Julia Child Deepfake
Deepfakes are "synthetic media in which a person in an existing image or video is replaced with someone else's likeness," according to Wikipedia. I unfortunately only learned this after I was fooled by a Julia Child deepfake on several different occasions. During a Zoom Seminar This first time I was fooled by a Julia Child deepfake was at the start of a financial-literacy class. The instructor introduced a "special guest" and, to my surprise, Julia Child popped up on the screen, addressing us all by name. As my heart raced, the instructor told us that it was not actually Julia Child--it was a deepfake. At a Dentist Appointment My dentist offers a cool option where you can watch a movie while they work on you.
Researchers Demonstrate AI Can Be Fooled
The artificial intelligence systems used by image recognition tools, such as those that certain connected cars use to identify street signs, can be tricked to make an incorrect identification by a low-cost but effective attack using a camera, a projector and a PC, according to Purdue University researchers. A research paper describes an Optical Adversarial Attack, or OPAD, which uses a projector to project calculated patterns that alter the appearance of the 3D objects to AI-based image recognition systems. The paper will be presented in October at an ICCV 2021 Workshop. In an experiment, a pattern was projected onto a stop sign, causing the image recognition to read the sign as a speed limit sign instead. The researchers say this attack method could also work with image recognition tools in applications ranging from military drones to facial recognition systems, potentially undermining their reliability.
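OPAD itself is a physical, projector-based attack whose details are in the paper, but the underlying idea, a small calculated change to the input that flips a classifier's output, can be shown with a purely digital sketch. The following is a hedged example of the fast gradient sign method (FGSM) in PyTorch, not the researchers' OPAD method; the model, input image, and class index are placeholders and assume a recent torchvision:

```python
# Illustrative digital adversarial perturbation (FGSM), not the physical OPAD attack:
# nudge the input in the direction that increases the loss, within a small budget.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo
label = torch.tensor([919])                               # assumed ImageNet "street sign" index

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                                            # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```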
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
How To Ensure Your Machine Learning Models Aren't Fooled - InformationWeek
All neural networks are susceptible to "adversarial attacks," where an attacker provides an example intended to fool the neural network. Any system that uses a neural network can be exploited. Luckily, there are known techniques that can mitigate or even prevent adversarial attacks completely. The field of adversarial machine learning is growing rapidly as companies realize the dangers of adversarial attacks. We will look at a brief case study of face recognition systems and their potential vulnerabilities.
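One of the best-known mitigation techniques the article alludes to is adversarial training, in which the model is fit on adversarially perturbed inputs alongside clean ones. A minimal sketch of a single training step, with placeholder model, data, and hyperparameters (assumptions for illustration, not details from the article), might look like this:

```python
# Sketch of one adversarial-training step: craft FGSM examples against the current
# model, then update on a mix of clean and adversarial losses.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # craft FGSM adversarial examples against the current model parameters
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv_images = (images + epsilon * grad.sign()).clamp(0, 1).detach()

    # train on both clean and adversarial examples
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images.detach()), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    total = 0.5 * (clean_loss + adv_loss)
    total.backward()
    optimizer.step()
    return total.item()
```

In practice the perturbation method, budget, and clean-versus-adversarial mix are tuned per application, and adversarial training typically raises robustness at some cost to clean accuracy.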
FaceID Not Fooled By Masks, Unlike Other Facial Recognition Systems - The Mac Observer
Artificial intelligence company Kneron tested out a number of facial recognition systems used in payments and banking. It found many could be fooled by photographs or masks, but not Apple's FaceID, Fortune reported. Kneron conducted the experiments to learn about the technology's limitations while developing its own facial recognition technology. The company, which is backed by high-profile investors including Qualcomm and Sequoia Capital, is creating what it calls "Edge AI," an artificial intelligence tool that does the job of recognizing individuals entirely on devices rather than through cloud-based services. Kneron also noted that its experiments could not fool some facial recognition applications, notably Apple's iPhone X.
Artificial Intelligence Systems can be Fooled, Research Suggests
Despite all its benefits and the ease that technology has brought in, the fear that new-age technologies like artificial intelligence (AI), machine learning and robotics will displace human jobs still looms. However, some researchers don't agree with the idea that technology will take away jobs from humans anytime soon. Researchers at the University of California, Los Angeles (UCLA) in the US conducted various experiments that show the severe limitations of 'deep learning' machines. "How smart is the form of AI known as deep learning computer networks, and how closely do these machines mimic the human brain? They have improved greatly in recent years, but still have a long way to go," reports a team of UCLA cognitive psychologists in the journal PLOS Computational Biology.
How Artificial Intelligence Can Be Fooled with 3D Printing…and Stickers
This was, in fact, the reaction the scientists were hoping for. Using subtle alterations imperceptible to the human eye, they changed the objects in a way that would make them unrecognizable to artificial intelligence. The technique is referred to as an adversarial attack, a way to fool AI without being evident to humans. Song also mentioned a trick in which a Hello Kitty was placed in an image recognition AI's view of a street scene. The cars in the scene simply disappeared.
- Machinery > Industrial Machinery (0.42)
- Information Technology > Security & Privacy (0.37)