Introduction to AI Safety, Ethics, and Society

Hendrycks, Dan

arXiv.org Artificial Intelligence

Artificial Intelligence is rapidly embedding itself within militaries, economies, and societies, reshaping their very foundations. Given the depth and breadth of its consequences, it has never been more pressing to understand how to ensure that AI systems are safe, ethical, and have a positive societal impact. This book aims to provide a comprehensive approach to understanding AI risk. Our primary goals include consolidating fragmented knowledge on AI risk, increasing the precision of core ideas, and reducing barriers to entry by making content simpler and more comprehensible. The book has been designed to be accessible to readers from diverse backgrounds. You do not need to have studied AI, philosophy, or other such topics. The content is skimmable and somewhat modular, so that you can choose which chapters to read. We introduce mathematical formulas in a few places to specify claims more precisely, but readers should be able to understand the main points without these.


AI Chatbots Are Causing Bank Customers Headaches - CNET

CNET - News

The Consumer Financial Protection Bureau issued a warning on Tuesday about generative AI chatbots being used by banks. The agency says it has received "numerous" complaints from customers who have interacted with the chatbots and failed to receive "timely, straightforward" answers to their questions. "Working with customers to resolve a problem or answer a question is an essential function for financial institutions – and is the basis of relationship banking," the agency said in its press release. AI chatbots run the risk of providing inaccurate financial information to customers or infringing on their privacy and data, the CFPB said.


Robot farmers? Machines are crawling through America's fields. And some have lasers.

USATODAY - News Top Stories

It uses three high-resolution cameras to peer down at the ground below. Lit by synchronized strobe lights, an onboard computer creates a digital image of each seedling as it glides by, comparing them with all the greenery it might reasonably find in a field of rich Salinas Valley farmland two hours south of San Francisco. "It puts a dot on the stem and maps around it," says Todd Rinkenberger of FarmWise, the robot's maker. "Now it knows what's plant. Everything else is a weed."
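The rule Rinkenberger describes can be sketched in a few lines: any greenery mapped near a detected stem "dot" is crop, and all remaining greenery is treated as a weed. This is a hypothetical illustration only; the function name, pixel representation, and radius parameter are assumptions, not FarmWise's actual system.

```python
# Minimal sketch of the crop/weed rule described in the article:
# each detected seedling stem gets a "dot"; greenery within a mapped
# radius of a dot counts as crop, any other greenery is a weed.
from math import hypot

def classify_greenery(green_pixels, stem_dots, crop_radius):
    """Label each green pixel 'crop' if it lies within crop_radius
    of any detected stem dot, otherwise 'weed'."""
    labels = {}
    for (x, y) in green_pixels:
        near_stem = any(hypot(x - sx, y - sy) <= crop_radius
                        for (sx, sy) in stem_dots)
        labels[(x, y)] = "crop" if near_stem else "weed"
    return labels

# Toy example: one lettuce stem at (10, 10), two patches of greenery.
stems = [(10, 10)]
greenery = [(11, 12), (40, 40)]
print(classify_greenery(greenery, stems, crop_radius=5))
```

In the toy run, the patch at (11, 12) falls inside the crop radius and is labeled crop, while the patch at (40, 40) is labeled weed and would be targeted.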


Computer Vision - Richard Szeliski

#artificialintelligence

As humans, we perceive the three-dimensional structure of the world around us with apparent ease. Think of how vivid the three-dimensional percept is when you look at a vase of flowers sitting on the table next to you. You can tell the shape and translucency of each petal through the subtle patterns of light and shading that play across its surface and effortlessly segment each flower from the background of the scene (Figure 1.1). Looking at a framed group portrait, you can easily count (and name) all of the people in the picture and even guess at their emotions from their facial appearance. Perceptual psychologists have spent decades trying to understand how the visual system works and, even though they can devise optical illusions to tease apart some of its principles (Figure 1.3), a complete solution to this puzzle remains elusive (Marr 1982; Palmer 1999; Livingstone 2008).


Voice assistants could 'hinder children's social and cognitive development'

The Guardian > Technology

From reminding potty-training toddlers to go to the loo to telling bedtime stories and being used as a "conversation partner", voice-activated smart devices are being used to help rear children almost from the day they are born. But the rapid rise in voice assistants, including Google Home, Amazon Alexa and Apple's Siri, could, researchers suggest, have a long-term impact on children's social and cognitive development, specifically their empathy, compassion and critical thinking skills. "The multiple impacts on children include inappropriate responses, impeding social development and hindering learning opportunities," said Anmol Arora, co-author of an article published in the journal Archives of Disease in Childhood. A key concern is that children attribute human characteristics and behaviour to devices that are, said Arora, "essentially a list of trained words and sounds mashed together to make a sentence." The children anthropomorphise and then emulate the devices, copying their failure to alter their tone, volume, emphasis or intonation.



Remote Cloud Administrator openings near you -Updated September 24, 2022 - Remote Tech Jobs

#artificialintelligence

An employer in Northwest Arkansas is seeking an experienced Cloud Database Administrator who has experience with migrating SQL Server databases to the cloud and maintaining them there. The ideal candidate will also be able to work through a daily queue, work with other teams to assist with the migration, and prepare diagrams for the migration.


The robots are here. And they are making you fries.

General News Tweet Watch

You could see it coming. Flippy started acting weird, jerking and hitching. The worker on the fry station had witnessed this behavior before. Even Joe Garcia, the Miso Robotics "robot support specialist" assigned to troubleshoot at Jack in the Box, had seen it. Garcia, a mechanical engineering graduate from Loyola Marymount University who one day wants to work for NASA, is spending his days swooping in when Flippy occasionally loses his mind as he encounters tacos.


Not-so dumb waiter: UK restaurant chain Bella Italia trials robot service

The Guardian > Technology

As worker shortages are felt across the hospitality sector, the owners of the Bella Italia chain are turning to robots to provide table service to customers. Big Table Group, which also owns Café Rouge and Las Iguanas, is testing out the robot at its Bella Italia restaurant in Center Parcs Whinfell Forest in Cumbria, in the first such trial by a big restaurant chain. The BellaBot, made by Chinese company Pudu, can carry up to 40kg on four trays and deliver and retrieve plates from tables with help from humans who load and unload its "body". Eric Guo, the chief executive of Spark which distributes Pudu robots in the UK, said there were 60 working across 20 British businesses and he expected more orders in the year ahead. Most are operating in restaurants, but hotels, supermarkets, care homes, snooker clubs and bowling alleys are also experimenting with the technology.


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Weinberg, Lindsay

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.