'Dangerous nonsense': AI-authored books about ADHD for sale on Amazon

The Guardian

Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT. Amazon's marketplace has been deluged with works produced by artificial intelligence that are easy and cheap to publish but include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting. A number of books have appeared on the online retailer's site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis, Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety and Men with Adult ADHD Diet & Fitness. Samples from eight books were examined for the Guardian by Originality.ai.


The Good Robot Podcast: Featuring Shannon Vallor

AIHub

Hosted by Eleanor Drage and Kerry Mackereth, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode we chat to Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and AI at the University of Edinburgh and the Director of the Centre for Technomoral Futures. We talk about feminist care ethics; technologies, vices and virtues; why Aristotle believed that the people who make technology should be excluded from citizenship; and why we still don't have the kinds of robots we imagined we'd have in the early 2000s. We also discuss Shannon's new book, The AI Mirror, which is now available for pre-order. Professor Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.


'Care bots': a dream for carers or a dangerous fantasy?

The Guardian

He cannot leave the house by himself because he does not know that cars may kill him and, in winter, he forgets to wear enough clothes to stay warm. He was born with Down's syndrome and Ingrid says that "he's calm and shy and really polite, but he needs help with everything". Ingrid is one of millions of people caring for a loved one at home today. In the UK, "family caregivers" constitute about 9% of the population and they outstrip paid care workers by more than three to one. This is because most care continues to be carried out in people's homes, rather than in residential facilities or by paid workers in the community. According to an annual survey of family caregivers in the UK, 45% had been providing support for 90 hours or more each week, and a similar proportion had not taken a break from caring in the past year.


Data and Artificial Intelligence: The Only Way is Ethics

#artificialintelligence

Professor Shannon Vallor, an expert in the challenging relationship between ethics and technology, reminds us that artificial intelligence is "human all the way down" - and therefore reflects the positives and negatives of human nature. Prof Vallor, Baillie Gifford Chair in the Ethics of Data and AI at the Edinburgh Futures Institute, insists self-aware machines are not about to take over the world. She says: "We have gone through a period where people like Stephen Hawking and Elon Musk have perhaps unwittingly misled the public about machines becoming self-aware or hyper-intelligent and enslaving humanity - and from a scientific perspective, that's just a complete fantasy at this point. There is nothing mysterious or magical about AI - it's something that is transforming our world but completely reflective of our own human strengths and weaknesses." Professor Vallor is joined on the podcast by Nick Thomas and Kyle McEnery of Baillie Gifford. Nick Thomas highlights how "access to data is going to be a key competitive advantage for business in the future", while Kyle McEnery describes his work on harnessing data and AI to make better decisions about where Baillie Gifford invests its clients' money - and the potential for greater targeting of ethical investment.


You can get a robot to keep your lonely grandparents company. Should you?

#artificialintelligence

"He's my baby," she tells me over Zoom, holding up a puppy to the camera. I laugh and say, "Who's a good robot?" Lucky barks again, and the sound is convincing, as if it's coming from a real dog. He's got a tail that wags, eyes that open and close, and a head that turns to face you when you talk. Under his synthetic golden fur, he has sensors that respond to your touch and a heartbeat you can feel. LeRuzic, who lives in a rural area outside Albany, is fully aware that her pet is a robot. But ever since she got him in March, he's made her feel less lonely, she says.


Can Artificial Intelligence Increase Our Morality?

#artificialintelligence

In discussions of AI ethics, there's a lot of talk of designing "ethical" algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans' morality, our own capacity to behave virtuously? That's the subject of a talk on "AI and Moral Self-Cultivation" given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on "Character, Social Connections and Flourishing in the 21st Century," hosted by Templeton World Charity Foundation, in Nassau, The Bahamas.


From information to I, Robot: the reality of AI ethics

#artificialintelligence

But Vallor - a leading American scholar of the ethics of data and artificial intelligence shortly to flit to Edinburgh University - reckons we should be concerned about military robots because of the people who may control them. Science fiction writers have fretted for decades about the moral philosophy of smart robots. What we have not done so much, at least in popular culture, is think about the ethics of the dumb humans who will suddenly have control of vast amounts of artificial intelligence. That is where thinkers like Vallor come in.


Artificial general intelligence is a Rorschach Test: Perhaps we need orangutans? - ZDNet

#artificialintelligence

Artificial general intelligence, or "AGI," the idea of a machine that can approach human levels of cognition, is a great topic to get people all worked up. Because no one can really define it, it serves as a Rorschach Test, onto which one can imprint whatever thoughts and feelings they care to. The result was a spirited discussion this past Friday night at John Jay College in Manhattan, site of the World Science Festival, now in its twelfth year.


INFLUENCE - Why better tech requires better humans

#artificialintelligence

Here at Thwaites we are lucky enough to have not one, but two offices - our Shoreditch HQ, and our Northern home at The Federation in Manchester, where we share co-working space with lots of brilliant digital and tech firms who have signed up to a pledge outlining a broad set of values that chime with us - to be open, honest and ethical. As well as providing a great space to work, The Federation also gives us access to excellent talks by leading speakers from around the world - most recently The Federation Presents series, which explored ethics in the tech industry and wider society. Naturally, one of the topics that has arisen (more than once) is AI, and the seemingly boundless scope of machine learning. Yet despite the many ways in which intelligent systems can transform our lives for the better, there is still an underlying mistrust.