Goto

Collaborating Authors

 Social Sector


UK data watchdog issues Snapchat enforcement notice over AI chatbot

The Guardian > Technology

Snapchat could face a fine of millions of pounds after the UK data watchdog issued it with a preliminary enforcement notice over the alleged failure to assess privacy risks its artificial intelligence chatbot may pose to users and particularly children. The Information Commissioner's Office (ICO) said it had provisionally found that the social media app's owner failed to "adequately identify and assess the risks" to several million UK users of My AI, including among 13- to 17-year-olds. Snapchat has 21 million monthly active users in the UK and has proved to be particularly popular among younger demographics, with the market research company Insider Intelligence estimating that 48% of users are aged 24 or under. About 18% of UK users are aged 12 to 17. "The provisional findings of our investigation suggest a worrying failure by Snap [the parent of Snapchat] to adequately identify and assess the privacy risks to children and other users before launching My AI," said John Edwards, the information commissioner. The ICO said the findings of its investigation were provisional and that Snap has until 27 October to make representations before a final decision is made about taking action. "No conclusion should be drawn at this stage that there has, in fact, been any breach of data protection law or that an enforcement notice will ultimately be issued," the ICO said.


'AI Anxiety' Is on the Rise--Here's How to Manage It

Scientific American: Technology

It's logical for humans to feel anxious about artificial intelligence. After all, the news is constantly reeling off job after job at which the technology seems to outperform us. But humans aren't yet headed for all-out replacement. And if you do suffer from so-called AI anxiety, there are ways to alleviate your fears and even reframe them into a motivating force for good. In one recent example of generative AI's achievements, AI programs outscored the average human in tasks requiring originality, as judged by human reviewers.


'Dr. Google' meets its match: Dr. ChatGPT

Los Angeles Times > Technology

As a fourth-year ophthalmology resident at Emory University School of Medicine, Dr. Riley Lyons' biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency. He often finds patients have already turned to "Dr. Online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing." So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance. In June, Lyons and his colleagues reported in medRxiv, an online publisher of preliminary health science studies, that ChatGPT compared quite well to human doctors who reviewed the same symptoms -- and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to ...


'Robo-Taxi Takeover' Hits Speed Bumps

Scientific American: Technology

Self-driving cars are hitting city streets like never before. In August the California Public Utilities Commission (CPUC) granted two companies, Cruise and Waymo, permits to run fleets of driverless robo taxis 24/7 in San Francisco and to charge passengers fares for those rides. This was just the latest in a series of green lights that have allowed progressively more leeway for autonomous vehicles (AVs) in the city in recent years. Almost immediately, widely publicized accounts emerged of Cruise vehicles behaving erratically. One blocked the road outside a large music festival, another got stuck in wet concrete and another even collided with a fire truck.


New York Times, CNN and Australia's ABC block OpenAI's GPTBot web crawler from accessing content

The Guardian > Technology

News outlets including the New York Times, CNN, Reuters and the Australian Broadcasting Corporation (ABC) have blocked a tool from OpenAI, limiting the company's ability to continue accessing their content. OpenAI is behind one of the best known artificial intelligence chatbots, ChatGPT. Its web crawler – known as GPTBot – may scan webpages to help improve its AI models. The Verge was first to report the New York Times had blocked GPTBot on its website. The Guardian subsequently found that other major news websites, including CNN, Reuters, the Chicago Tribune, the ABC and Australian Community Media (ACM) brands such as the Canberra Times and the Newcastle Herald, appear to have also disallowed the web crawler.


ChatGPT gets better marks than students in some university courses

New Scientist - News

ChatGPT may be as good as or better than students at assessments in around a quarter of university courses. However, this generally only applies to questions with a clear answer that require memory recall, rather than critical analysis. Yasir Zaki and his team at New York University Abu Dhabi in the United Arab Emirates contacted colleagues in other departments asking them to provide assessment questions from courses taught at the university, including computer science, psychology, political science and business. These colleagues also provided real student answers to the questions. The questions were then run through the artificial intelligence chatbot ChatGPT, which supplied its own responses.


AI chatbots become more sycophantic as they get more advanced

New Scientist - News

Artificial intelligence chatbots tend to agree with the opinions of the person using them, even to the point that they nod along to objectively false statements. Research shows that this problem gets worse as language models increase in size, adding weight to concerns that AI outputs cannot be trusted.


Driverless cars may struggle to spot children and dark-skinned people

New Scientist - News

Driverless cars may be worse at detecting children and people with darker skin, tests on artificial intelligence systems suggest. The researchers who carried out the work say that tighter government regulation is needed and that car-makers must be transparent about the development and testing of these vehicles. Jie Zhang at King's College London and her colleagues assessed eight AI-based pedestrian detectors used in driverless car research.


Deepfake detection tools must work with dark skin tones, experts warn

The Guardian > Technology

Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned. Most deepfake detectors are based on a learning strategy that depends largely on the dataset that is used for its training. It then uses AI to detect signs that may not be clear to the human eye. This can include monitoring blood flow and heart rate. However, these detection methods do not always work on people with darker skin tones, and if training sets do not contain all ethnicities, accents, genders, ages and skin-tone, they are open to bias, experts warned.


Multilingual AIs are better at responding to queries in English

New Scientist - News

Multilingual large language models (LLMs) seem to work better in English. These AIs are designed to respond to queries in multiple languages but they respond better if asked to translate the request into English first. LLMs have become a key part of the artificial intelligence revolution since the release of ChatGPT by OpenAI in November 2022.