AI-Based Innovations at Mayo Clinic

#artificialintelligence

In a previous AI in Action column, we argued that in the world of health care, administrative applications of artificial intelligence were the low-hanging fruit. Sometimes, however, it is reasonable to reach for higher branches of the tree, and clinical applications of AI fall into that category. We expect that someday many important diagnosis and treatment decisions will be made or augmented by AI applications. Today we are in the early stages of achieving that objective. Most of the advances currently made by clinical AI are coming from innovative health care institutions.


CBS News deletes tweet claiming only 'like 30%' of US military aid for Ukraine ever reaches the front lines

FOX News

Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. CBS News announced changes to its upcoming Ukrainian documentary after deleting a tweet suggesting that only 30% of U.S. aid has been reaching the front lines of the war against Russia. On Friday, the news organization originally tweeted a promotion for its documentary "Arming Ukraine" which reportedly tracked the billions of dollars in U.S. aid and weaponry being sent to the country to fight Russia's invasion. The tweet revealed a claim by a nonprofit founder who reported a majority of the funding does not reach Ukrainian front lines.


Last Week in AI #177: OpenAI commercializes DALL-E 2, Sony AI beats human competitors in racing game, Gmail getting smarter searches, and more!

#artificialintelligence

Last week OpenAI moved DALL-E 2, the image generation tool, into beta (the company hopes to expand its current user base to 1 million) while granting users "the right to reprint, sell, and merchandise" images they generate with DALL-E. This is useful for users who wish to use the generated images for commercial purposes, such as making illustrations for children's books. Other openly available AI image generation models face similar questions. It is also not clear whether OpenAI violated any IP laws by training on these Internet images and then commercializing the resulting model. While the UK is exploring allowing commercial use of models trained on public but trademarked data, the U.S. may not follow suit.


Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief initial introductory tweet on August 6, Clark went on to post an additional 79 tweets in this thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So, I take seriously what he has to say on AI governance matters and really learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they are broadly applicable to many other emerging technology sectors, and even some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.


Ambitious Researchers Want to Use AI to Talk to All Animals

#artificialintelligence

A group of researchers are looking to use machine learning to translate animal "languages" into something humans can understand -- and they want to apply it to the whole animal kingdom, a highly ambitious plan to say the least. As The Guardian reports, California-based nonprofit Earth Species Project (ESP) -- which was founded in 2017 with the help of Silicon Valley investors like LinkedIn cofounder Reid Hoffman -- plans to first decode animal communication via machine learning, and then make its findings available to all. ESP co-founder and president Aza Raskin says that the group, which published its first paper in December 2021, doesn't discriminate and is looking to help humans communicate with, or at least understand, as many species as possible. "We're species agnostic," Raskin told The Guardian, adding that the translation algorithms the ESP is developing are designed to "work across all of biology, from worms to whales." In the interview, Raskin likened the group's ambitions to "going to the Moon," especially given that, like humans, animals also have various forms of non-verbal communication, like bees doing a special "wiggle dance" to indicate to each other that they should land on a specific flower.


The logic of feeling: Teaching computers to identify emotions

#artificialintelligence

This is an interview with Professor Emily Mower Provost that was first published by The Michigan Engineer News Center. Using machine learning to decode the unpredictable world of human emotion might seem like an unusual choice. But in the ambiguity of human expression, U-M computer science and engineering associate professor Emily Mower Provost has discovered a rich trove of data waiting to be analyzed. Mower Provost uses machine learning to help measure emotion, mood, and other aspects of human behavior; for example, she has developed a smartphone app that analyzes the speech of patients with bipolar disorder to track their mood, with the ultimate goal of helping them more effectively manage their health. How do you quantify something as ambiguous as emotion in a field where, traditionally, ambiguity is the enemy?


New Podcast and Video Series Seeks to Highlight AI Researchers' Stories Over Stats

#artificialintelligence

With accolades showered upon them, seemingly perfect educational pedigrees, and conversations focused mostly on their groundbreaking work, it's hard to remember that artificial intelligence (AI) researchers are real people. They're people who take ballet classes as adults, who suffer from anxiety, and who love art. They're also people who feel giddy over fresh flowers, have a difficult time staying organized, and roll out of bed just in time for their first meeting. Devi Parikh, an associate professor in the Machine Learning Center at Georgia Tech (ML@GT) and the School of Interactive Computing (IC), is working to change that with her new podcast and video series, Humans of AI: Stories, Not Stats. The series, which launches on Oct. 20, features 18 conversations with leading AI researchers, including Jeff Dean (head of AI at Google), Animashree Anandkumar (Bren Professor at the California Institute of Technology and Director of Machine Learning Research at NVIDIA), Ayanna Howard (School of IC chair and professor at Georgia Tech), and Timnit Gebru (co-lead of ethical AI at Google).


Exploring the source of social stereotypes

#artificialintelligence

Incoming CSE PhD student Wilka Carvalho has been selected for the GEM Fellowship under the sponsorship of Adobe. Carvalho plans to work with Profs. Satinder Singh Baveja, Honglak Lee, and Richard Lewis (Psychology) to pursue research at the intersection of reinforcement learning, machine learning, and computational cognitive science. Over the course of his research, Carvalho hopes to combine these different directions to understand the models and algorithms employed by the brain as it tries to generalize previous knowledge to new situations. "I have a particular interest in fundamentally understanding why we socially stereotype," Carvalho says.


US appeals court says artificial intelligence can't be patent inventor

#artificialintelligence

The Patent Act requires an "inventor" to be a natural person, the US Court of Appeals for the Federal Circuit said, rejecting computer scientist Stephen Thaler's bid for patents on two inventions he said his DABUS system created. Thaler said in an email Friday that DABUS, which stands for "Device for the Autonomous Bootstrapping of Unified Sentience," is "natural and sentient." His attorney Ryan Abbott of Brown Neri Smith & Khan said the decision "ignores the purpose of the Patent Act" and has "real negative social consequences." He said they plan to appeal. The US Patent and Trademark Office declined to comment on the decision.

