Issues
Google Is Using On-Device AI to Spot Scam Texts and Investment Fraud
Digital scammers have never been so successful. Last year Americans lost $16.6 billion to online crimes, with almost 200,000 people reporting scams like phishing and spoofing to the FBI. More than $470 million was stolen last year in scams that started with a text message, according to the Federal Trade Commission. As the maker of the world's biggest mobile operating system, Google has been scrambling to respond, building out tools to warn consumers about potential scams. Ahead of next week's Android 16 launch, the company said on Tuesday that it is expanding Scam Detection, the AI flagging feature it recently launched in the Google Messages app, to flag more kinds of potentially nefarious messages, including crypto scams, financial impersonation, gift card and prize scams, technical support scams, and more.
Pope Leo XIV calls AI a challenge to 'human dignity' in first address to cardinals
Newly elected Pope Leo XIV addressed the College of Cardinals in the New Synod Hall at the Vatican on Saturday, May 10. In his first official remarks as pope, he warned that artificial intelligence (AI) presents serious new risks to human dignity, and he credited the challenges the digital age poses for the Catholic Church as the inspiration for his papal name. He called on the church to respond to these challenges with moral clarity and bold action, saying it has faced similar moments before.
Amazon says new Vulcan warehouse robot has human touch but won't replace humans
This week Amazon debuted a new warehouse robot that has a sense of "touch," but the company also promised its new bot will not replace human warehouse workers. On Monday, at Amazon's Delivering the Future event in Dortmund, Germany, the retail giant introduced the world to Vulcan, a robot designed to sort, pick up, and place objects in storage compartments with the finesse and dexterity of human hands. Vulcan has no humanlike hands, though; instead, its "end of arm tooling" looks like a "ruler stuck onto a hair straightener," as Amazon describes it. The robot is also loaded with cameras and feedback sensors that register when it makes contact with an item and gauge how much force to apply to prevent damage. In its warehouses, Amazon stores inventory in soft fabric compartments about one foot square.
I saw how an "evil" AI chatbot finds vulnerabilities. It's as scary as you think
When the presenters take the stage, their attitude is briskly professional but energetic. I'm expecting a technical dive into standard AI tools, an up-close look at how ChatGPT and its rivals are manipulated for dirty deeds. Sherri Davidoff, founder and CEO of LMG Security, reinforces this expectation with her opener about software vulnerabilities and exploits. But then Matt Durrin, director of training and research at LMG Security, drops an unexpected phrase: "Evil AI." "What if hackers can use their evil AI tools that don't have guardrails to find vulnerabilities before we have a chance to fix them?" he asks. "[We're] going to show you examples." And not just screenshots, though plenty of those illustrate the LMG Security team's points as the presentation continues.
Bitter argument breaks out over controversial theory of consciousness
Where does consciousness come from? Supporters and detractors of a leading theory of how consciousness arises are stuck in an increasingly bitter debate. Opponents suggest that integrated information theory (IIT), which claims that consciousness can be defined on a mathematical spectrum, is pseudoscience that could be misused to influence sensitive debates around abortion and the sentience of artificial intelligences – while supporters say the detractors are just jealous. Scientists have long sought to explain how the brain gives rise to conscious experience, but two prominent ideas have recently come to the fore: IIT and global neuronal workspace theory (GNWT).…
Microsoft says everyone will be a boss in the future – of AI employees
Microsoft has good news for anyone with corner office ambitions. In the future we're all going to be bosses – of AI employees. The tech company is predicting the rise of a new kind of business, called a "frontier firm", where ultimately a human worker directs autonomous artificial intelligence agents to carry out tasks. Everyone, according to Microsoft, will become an agent boss. "As agents increasingly join the workforce, we'll see the rise of the agent boss: someone who builds, delegates to and manages agents to amplify their impact and take control of their career in the age of AI," wrote Jared Spataro, a Microsoft executive, in a blogpost this week.
Dataset reveals how Reddit communities are adapting to AI
Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, along with a report detailing how those communities are changing their policies to address a surge in AI-generated content. The team collected metadata and community rules from the online communities, known as subreddits, during two periods in July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery's CHI conference on Human Factors in Computing Systems, held April 26 to May 1 in Yokohama, Japan. One of the researchers' most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024. "This is important because it demonstrates that AI concern is spreading in these communities."
AI Is Spreading Old Stereotypes to New Languages and Cultures
Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google alongside another well-known researcher, Timnit Gebru, before both were fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools. We spoke about a new dataset she helped create to test how AI models continue to perpetuate stereotypes. Unlike most bias-mitigation efforts, which prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures.
The Machine Ethics podcast: Co-design with Pinar Guvenc
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This episode we're chatting with Pinar Guvenc about her "What's Wrong With" podcast, co-design, whether AI is ready for society and whether society is ready for AI, what design is, co-creation with AI as a stakeholder, bias in design, small language models, whether AI is making us lazy, human experience, digital life and our attention, and talking to diverse people… Pinar Guvenc is Partner at SOUR – an award-winning global design studio with the mission to address social and urban problems – where she leads business and design strategy. She is an educator teaching ethical leadership and co-design in Parsons School of Design's MS Strategic Design and Management program and the School of Visual Arts' MFA Interaction Design program. Pinar serves on the Board of Directors of Open Style Lab and advises local businesses in NYC through the Pratt Center for Community Development. She is a frequent public speaker and lecturer, and is the host of SOUR's "What's Wrong With: The Podcast", a discussion series with progress makers in diverse fields across the world.