Medical AI systems are often built with data from just three states

#artificialintelligence

Late last year, Stanford University researcher Amit Kaushal and a collaborator noticed something striking while sifting through the scientific literature on artificial intelligence systems designed to make a diagnosis by analyzing medical images. "It became apparent that all the datasets [being used to train those algorithms] just seemed to be coming from the same sorts of places: the Stanfords and UCSFs and Mass Generals," Kaushal said.


NumPy's contribution to Python is remarkable, but where it goes next could be even more so

ZDNet

Something fascinating happened in the world of scientific publishing last week: the prestigious journal Nature featured an overview of a 15-year-old programming library for the language Python. The widely used library, called NumPy, gives Python the ability to perform scientific computing on large arrays of data. Asked on Twitter why a paper is coming out now, 15 years after NumPy's creation, Stefan van der Walt of the University of California at Berkeley's Institute for Data Science, one of the article's authors, said that the publication would give long-overdue formal recognition to some of NumPy's contributors. "Our last paper was 2010 & not fully representative of the team," he wrote. "While we love that people use our software, many of our team members are in academia where citations count."
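For readers who haven't used the library, here is a minimal sketch of the array-style computing NumPy adds to Python. It is illustrative only and not drawn from the Nature paper:

```python
# Minimal illustration of NumPy's array computing: element-wise math on
# whole arrays runs in compiled code instead of Python-level loops.
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)   # 1,000 evenly spaced sample points
y = np.sin(x) * np.exp(-x / 5.0)          # vectorized, element-wise operations

print(y.mean(), y.max())                  # summary statistics over the array
```

This vectorized style is the foundation that much of the scientific Python stack, from SciPy to pandas, builds on.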


AI can detect how lonely you are by analysing your speech

Daily Mail - Science & tech

Artificial intelligence (AI) can detect loneliness with 94 per cent accuracy from a person's speech, a new scientific paper reports. Researchers in the US used several AI tools, including IBM Watson, to analyse transcripts of older adults interviewed about feelings of loneliness. By analysing words, phrases, and gaps of silence during the interviews, the AI assessed loneliness symptoms nearly as accurately as loneliness questionnaires completed by the participants themselves, which can be biased. It revealed that lonely individuals tend to have longer responses to direct questions about loneliness, and express more sadness in their answers. 'Most studies use either a direct question of "how often do you feel lonely", which can lead to biased responses due to stigma associated with loneliness,' said senior author Ellen Lee at UC San Diego (UCSD) School of Medicine.
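The article does not detail the study's actual NLP pipeline; the sketch below is a deliberately simplified, hypothetical illustration of the general idea of scoring interview responses by length and expressed sadness. The word list and features are invented for illustration and are not the UCSD study's method:

```python
# Hypothetical simplification: score a transcript response by its length and
# by the share of sadness-related words. The word list and features are
# invented for illustration; they are not the UCSD study's actual method.
SADNESS_WORDS = {"sad", "lonely", "alone", "empty", "miss", "lost"}

def response_features(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    sad = sum(1 for w in words if w in SADNESS_WORDS)
    return {
        "length": len(words),                       # longer answers were one signal
        "sadness_ratio": sad / max(len(words), 1),  # expressed sadness was another
    }

print(response_features("I feel alone most evenings and I miss my old friends."))
```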


Embracing the reality of digital transformation - Raconteur

#artificialintelligence

Digital disruption has broken out of Silicon Valley. Any company, no matter how nuts-and-bolts, can be disrupted by a digital competitor; equally, any company could be that digital disruptor. The discussion was kick-started by two leading industry thinkers: Andrew Moore, chief transformation officer of chipmaking giant Intel, and Nigel Moulton, chief technology officer at Dell EMC, part of a corporation that services 99% of the Fortune 500 companies. Their remarks sparked lively discussion. Both Intel's Mr Moore and Dell EMC's Mr Moulton spend a lot of time talking to leading companies about their digital transformation journey, and they kicked off with a tough message: it's hard work.


Thwarting adversarial AI with context awareness - GCN

#artificialintelligence

Researchers at the University of California at Riverside are working to teach computer vision systems what objects typically exist in close proximity to one another, so that if one is altered the system can flag it, potentially thwarting malicious interference with artificial intelligence systems. The yearlong project, supported by a nearly $1 million grant from the Defense Advanced Research Projects Agency, aims to understand how hackers target machine-vision systems with adversarial AI attacks. Led by Amit Roy-Chowdhury, an electrical and computer engineering professor at the school's Marlan and Rosemary Bourns College of Engineering, the project is part of the Machine Vision Disruption program within DARPA's AI Explorations program. Adversarial AI attacks, which attempt to fool machine learning models by supplying deceptive input, are gaining attention. "Adversarial attacks can destabilize AI technologies, rendering them less safe, predictable, or reliable," Carnegie Mellon University professor David Danks wrote in IEEE Spectrum in February.
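The article does not describe the UC Riverside approach in technical detail; the sketch below only illustrates the general notion of context checking, flagging a detected object that rarely co-occurs with the rest of the scene. The co-occurrence counts and threshold are invented for the example:

```python
# Illustrative sketch of context-consistency checking, not the UC Riverside
# system: flag a detection that rarely co-occurs with the rest of the scene.
# The co-occurrence counts and threshold below are invented for the example.
CO_OCCURRENCE = {
    ("stop sign", "road"): 950,
    ("stop sign", "car"): 900,
    ("toaster", "road"): 2,
    ("toaster", "kitchen"): 800,
}

def is_contextually_plausible(obj, scene_objects, min_count=50):
    """Return False if `obj` almost never co-occurs with the other detections."""
    counts = [CO_OCCURRENCE.get((obj, other), 0) for other in scene_objects]
    return any(c >= min_count for c in counts)

scene = ["road", "car"]
print(is_contextually_plausible("stop sign", scene))  # True: fits a street scene
print(is_contextually_plausible("toaster", scene))    # False: flag for review
```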


Can artificial intelligence encourage good behaviour among internet users?

#artificialintelligence

SAN FRANCISCO, Sept 25 ― Hostile and hateful remarks are thick on the ground on social networks in spite of persistent efforts by Facebook, Twitter, Reddit and YouTube to tone them down. Now researchers at the OpenWeb platform have turned to artificial intelligence to moderate internet users' comments before they are even posted. The method appears to be effective: one-third of users modified the text of their comments after receiving a nudge from the new system warning that what they had written might be perceived as offensive. The study conducted by OpenWeb and Perspective API analyzed 400,000 comments that some 50,000 users were preparing to post on sites like AOL, Salon, Newsweek, RT and Sky Sports. Some of these users received a feedback message or nudge from a machine learning algorithm to the effect that the text they were preparing to post might be insulting, or against the rules of the forum they were using.
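The article does not publish OpenWeb's or Perspective API's exact pipeline; the sketch below shows one plausible shape for such a pre-posting nudge, with a placeholder scoring function standing in for the real toxicity model. The word list, threshold, and message wording are assumptions:

```python
# Plausible shape of a pre-posting nudge. The scoring function is a placeholder
# standing in for a real toxicity model (Perspective API in the study); the
# word list, 0.8 threshold, and message wording are assumptions for the example.
from typing import Optional

def toxicity_score(text):
    """Placeholder: return a probability-like toxicity score in [0, 1]."""
    flagged = {"idiot", "stupid", "hate"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return min(1.0, sum(w in flagged for w in words) / 3)

def pre_post_check(comment, threshold=0.8) -> Optional[str]:
    """Return a nudge message if the draft comment looks offensive, else None."""
    if toxicity_score(comment) >= threshold:
        return ("Your comment may be perceived as offensive and could break "
                "this forum's rules. Would you like to revise it before posting?")
    return None

print(pre_post_check("You are a stupid idiot and I hate this."))      # nudge shown
print(pre_post_check("I disagree with this take, and here is why."))  # None
```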


Data Digest: Innovative Applications for Machine Learning

#artificialintelligence

How machine learning and AI are being used to cut emissions, picture the past, and study DNA. One company claims its AI platform can cut carbon dioxide emissions by improving buildings' energy efficiency. Read how an artist used machine learning to extrapolate realistic portraits of ancient Roman emperors. Researchers at the University of California San Diego have used machine learning to solve a long-standing question about gene activation in humans.


Coronavirus may just sink the world's video game museums

Washington Post - Technology News

When Alex Handy founded the Museum of Arts and Digital Entertainment (or the MADE) in Oakland, Calif., in 2011, he imagined the institution as a bucket placed underneath an industry that was constantly leaking and dripping out vital artifacts of its own history. Over the museum's near-decade of existence, it has weathered rising rents, flooding, and even robberies to deliver a playable library of more than 10,000 games to its visitors. However, more than six months after the ongoing coronavirus crisis forced its closure, it's not at all clear whether the MADE, or its fellow video game museums across the globe, will be able to survive the economic fallout wrought by the virus. And given the interactive nature of video games, it's clear that these museums will have an even tougher time mitigating the risk of transmission once they reopen.


Microsoft exclusively licenses OpenAI's groundbreaking GPT-3 text generation model

#artificialintelligence

Microsoft's ongoing partnership with San Francisco-based artificial intelligence research company OpenAI now includes a new exclusive license on the AI firm's groundbreaking GPT-3 language model, a text-generating program that has emerged as the most sophisticated of its kind in the industry. The two companies have been entwined for years through OpenAI's use of the Azure cloud computing platform, which provides the vast computing resources OpenAI needs to train many of its models. Last year, Microsoft made a major $1 billion investment to become OpenAI's exclusive cloud provider, a deal that now extends to an exclusive license for GPT-3. OpenAI released GPT-3, the third iteration of its ever-growing language model, in July, and the program and its prior iterations have helped create some of the most fascinating AI language experiments to date. It has also inspired vigorous debate around the ethics of powerful AI programs that may be used for nefarious purposes, with OpenAI initially refusing to publish research about the model for fear it would be misused.


MIT researcher held up as model of how algorithms can benefit humanity

#artificialintelligence

In June, when MIT artificial intelligence researcher Regina Barzilay went to Massachusetts General Hospital for a mammogram, her data were run through a deep learning model designed to assess her risk of developing breast cancer, which she had been diagnosed with once before. The workings of the algorithm, which predicted that her risk was low, were familiar: Barzilay helped build that very model, after being spurred by her 2014 cancer diagnosis to pivot her research to health care. Barzilay's work in AI, which ranges from tools for early cancer detection to platforms to identify new antibiotics, is increasingly garnering recognition: On Wednesday, the Association for the Advancement of Artificial Intelligence named Barzilay as the inaugural recipient of a new annual award honoring an individual developing or promoting AI for the good of society. The award comes with a $1 million prize sponsored by the Chinese education technology company Squirrel AI Learning. While there are already prizes in the AI field, notably the Turing Award for computer scientists, those existing awards are typically "more focused on scientific, technical contributions and ideas," said Yolanda Gil, a past president of AAAI and an AI researcher at the University of Southern California.