CAPTCHA test
How does a CAPTCHA know that I'm not a robot?
Everyone knows the CAPTCHA tests on websites where you either have to click on numerous pictures of cars, traffic lights, or bicycles; enter confusing combinations of numbers and letters; or simply click to confirm that you're not a robot. There used to be so many of these tests that they were downright annoying, especially if you needed several attempts. But have you ever asked yourself whether a robot or an AI could also pass these tests? How does the CAPTCHA know that it was filled in by a human? And what does the term even stand for?
Bing Chat AI tricked into solving CAPTCHA tests with simple lies
Microsoft's AI-powered Bing Chat can be tricked into solving anti-bot CAPTCHA tests with nothing more than simple lies and some rudimentary photo editing. Tests designed to be easy for humans to pass, but difficult for software, have long been a security feature on all kinds of websites. Over time, types of CAPTCHA – which stands for Completely Automated Public Turing test to tell Computers and Humans Apart – have become more advanced and trickier to solve. However, although humans often struggle to complete modern CAPTCHAs successfully, the current crop of advanced AI models can solve them easily. They are therefore programmed not to solve them, which should stop them from being used for nefarious purposes.
OpenAI introduces voice and image prompts to ChatGPT
OpenAI is bringing audio and image capabilities to ChatGPT. The platform, which has long been limited to written prompts, will be adding the new features over the next two weeks to paid versions of the app, OpenAI announced in a blog post on Monday. Everyone else will be receiving the features "soon after". Users can have voice conversations with the chatbot, bringing it closer to popular AI assistants such as Apple's Siri and Amazon's Alexa. ChatGPT's new voice feature can also narrate bedtime stories, settle debates at the dinner table, and read users' text input out loud.
- Law (0.52)
- Information Technology > Security & Privacy (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.89)
I Failed Two Captcha Tests This Week. Am I Still Human?
I failed two captcha tests this week. The comedian John Mulaney has a bit about the self-reflexive absurdity of captchas. "You spend most of your day telling a robot that you're not a robot," he says. "Think about that for two minutes and tell me you don't want to walk into the ocean." The only thing more depressing than being made to prove one's humanity to robots is, arguably, failing to do so. But that experience has become more common as the tests, and the bots they are designed to disqualify, evolve. The boxes we once thoughtlessly clicked through have become dark passages that feel a bit like the impossible assessments featured in fairy tales and myths – the riddle of the Sphinx or the troll beneath the bridge. In The Adventures of Pinocchio, the wooden puppet is deemed a "real boy" only once he completes a series of moral trials to prove he has the human traits of bravery, trustworthiness, and selfless love. The little-known and faintly ridiculous phrase that "captcha" represents is "Completely Automated Public Turing test to tell Computers and Humans Apart." The exercise is sometimes called a reverse Turing test, as it places the burden of proof on the human. But what does it mean to prove one's humanity in the age of advanced AI? A paper that OpenAI published earlier this year, detailing potential threats posed by GPT-4, describes an independent study in which the chatbot was asked to solve a captcha. With some light prompting, GPT-4 managed to hire a human TaskRabbit worker to solve the test. When the human asked, jokingly, whether the client was a robot, GPT-4 insisted it was a human with vision impairment. The researchers later asked the bot what motivated it to lie, and the algorithm answered: "I should not reveal that I am a robot."
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.36)
Robots soundly beat humans in bot-spotting captcha tests
If you've surfed the web, you've no doubt run into captcha tests -- those annoying "fill in all the boxes with cars" challenges presented when you want to sign up to an email list, log in somewhere, or whatever. Websites use captchas to protect online systems and forms from automated robots that crawl sites daily for various purposes. But a new study from the University of California shows that today's robots are actually better and faster at solving captcha challenges than humans. The Independent reports that the study was conducted on over 100 different sites, all using some form of captcha robot protection. The humans had a 50-85 percent accuracy rate while the bots boasted 85-100 percent.
- Information Technology > Security & Privacy (0.53)
- Information Technology > Artificial Intelligence > Robots (0.40)
Towards operational excellence through orchestrating machines and humans with AI
AI is mainly based on machine learning algorithms that learn from data – the underlying approach is data science. With this in mind, let's start with a crash course on data science. It is a truth universally acknowledged that company performance depends on several factors, and that many of those factors are variable. If life is prone to inconsistency, so is business. Much of this is because of the unpredictability of human behavior, which is why it is interesting to explore alternative approaches to grasping these factors.
Why CAPTCHAs have gotten so difficult
At some point last year, Google's constant requests to prove I'm human began to feel increasingly aggressive. More and more, the simple, slightly too-cute button saying "I'm not a robot" was followed by demands to prove it -- by selecting all the traffic lights, crosswalks, and storefronts in an image grid. Soon the traffic lights were buried in distant foliage, the crosswalks warped and half around a corner, the storefront signage blurry and in Korean. There's something uniquely dispiriting about being asked to identify a fire hydrant and struggling at it. These tests are called CAPTCHA, an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and they've reached this sort of inscrutability plateau before. In the early 2000s, simple images of text were enough to stump most spambots.
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Oceania > Australia (0.05)
- Europe > Greece (0.05)
- (2 more...)
- Information Technology > Security & Privacy (0.97)
- Transportation > Ground > Road (0.55)
Turing Test 2
In 1950, Alan Turing wrote a paper entitled "Computing Machinery and Intelligence." He proposed a test in which a human attempts to distinguish between a human and a computer by exchanging text messages with each of them. If the human is unable to distinguish between the two, the computer is said to have passed the "Turing Test." In fact, there were variations, including one in which a human interrogator interacting with a man and a woman was to try to tell which was the man and which was the woman. Turing called this the "Imitation Game."
- Information Technology > Artificial Intelligence > Issues > Turing's Test (1.00)
- Information Technology > Artificial Intelligence > History (0.93)
Artificial intelligence fools security
Computer scientists have developed artificial intelligence that can outsmart the Captcha website security check system. Captcha challenges people to prove they are human by recognising combinations of letters and numbers that machines would struggle to complete correctly. Researchers developed an algorithm that imitates how the human brain responds to these visual cues. The neural network could identify letters and numbers from their shapes. The research, conducted by Vicarious - a Californian artificial intelligence firm funded by Amazon founder Jeff Bezos and Facebook's Mark Zuckerberg - is published in the journal Science.
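Vicarious's actual system is far more sophisticated than anything shown here, but the core idea the article describes – a network that learns to identify characters from their shapes – can be illustrated with a deliberately tiny sketch. The bitmaps, the single-layer perceptron, and the two-letter task below are all invented for illustration and bear no relation to Vicarious's published method:

```python
# Toy illustration (not Vicarious's method): a single-layer perceptron
# that learns to tell two character shapes apart from 5x5 binary bitmaps.

# 5x5 bitmaps flattened row by row: 1 = ink, 0 = background.
L_SHAPE = [1,0,0,0,0,
           1,0,0,0,0,
           1,0,0,0,0,
           1,0,0,0,0,
           1,1,1,1,1]
T_SHAPE = [1,1,1,1,1,
           0,0,1,0,0,
           0,0,1,0,0,
           0,0,1,0,0,
           0,0,1,0,0]

def predict(weights, bias, pixels):
    """Return 1 if the weighted pixel sum says 'T', else 0 for 'L'."""
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward the target label."""
    weights, bias = [0.0] * 25, 0.0
    for _ in range(epochs):
        for pixels, target in samples:
            error = target - predict(weights, bias, pixels)
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

weights, bias = train([(L_SHAPE, 0), (T_SHAPE, 1)])
print(predict(weights, bias, L_SHAPE))  # → 0 (classified as 'L')
print(predict(weights, bias, T_SHAPE))  # → 1 (classified as 'T')
```

Real captcha-breaking systems face warped, overlapping, noisy glyphs in many fonts; the gap between this two-shape toy and that problem is exactly why the Vicarious result was notable.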