Machine Learning


MIT's oncological risk AI calculates cancer chances regardless of race

Engadget

Artificial intelligence and machine learning systems continue to be adopted into an ever wider array of healthcare applications, such as assisting doctors with medical image diagnostics. Capable of reading X-rays and rapidly generating MRIs -- sometimes even able to spot cases of COVID -- these systems have also proven effective at noticing early signs of breast cancer that might otherwise be missed by radiologists. Google and IBM, as well as medical centers and university research teams around the world, have all sought to develop such cancer-catching algorithms. They can spot worrisome lumps as well as radiologists can and predict future onsets of the disease "significantly" better than the humans that trained them. However, many medical AI imaging systems produce markedly less accurate results for black and brown people -- even though women of color are 43 percent more likely to die from breast cancer than their white counterparts.


These Doctors Are Using AI to Screen for Breast Cancer

WIRED

When Covid came to Massachusetts, it forced Constance Lehman to change how Massachusetts General Hospital screens women for breast cancer. Many people were skipping regular checkups and scans due to worries about the virus. So the center Lehman codirects began using an artificial intelligence algorithm to predict who is most at risk of developing cancer. Since the outbreak began, Lehman says, around 20,000 women have skipped routine screening. Normally, five of every 1,000 women screened show signs of cancer.


Pinecone, a serverless vector database for machine learning, leaves stealth with $10M funding

ZDNet

Pinecone, a machine learning cloud infrastructure company built by the team behind Amazon SageMaker, left stealth today with $10M in seed funding led by Wing Venture Capital. Wing's founding partner and early Snowflake investor Peter Wagner is joining Pinecone's board, and he compares Pinecone's potential impact to that of Snowflake. So what makes Pinecone special, rather than yet another database? ZDNet caught up with Pinecone CEO and founder, scientist and former AWS director Edo Liberty, to find out.


2020 in Review With Brian Tse

#artificialintelligence

In 2020, Synced covered many memorable moments in the AI community: the current situation of women in AI, the birth of GPT-3, AI's fight against COVID-19, heated debates around AI bias, MT-DNN surpassing human baselines on GLUE, AlphaFold cracking a 50-year-old biology challenge, and more. To close the chapter on 2020 and look forward to 2021, we are introducing a year-end special issue, following Synced's tradition, to look back at recent AI achievements and explore possible trends in future AI with leading AI experts. Here, we invite Mr. Brian Tse to share his insights about the current development and future trends of artificial intelligence. Brian Tse focuses on researching and improving cooperation over AI safety, governance, and stability between great powers. He is a Policy Affiliate at the University of Oxford's Centre for the Governance of AI, Coordinator at the Beijing AI Academy's AI4SDGs Cooperation Network, and Senior Advisor at the Partnership on AI.


DP-Cryptography

Communications of the ACM

On Feb 15, 2019, John Abowd, chief scientist at the U.S. Census Bureau, announced the results of a reconstruction attack that the Bureau proactively launched using data released under the 2010 Decennial Census [19]. The decennial census released billions of statistics about individuals, like "how many people of the age 10-20 live in New York City" or "how many people live in four-person households." Using only the data publicly released in 2010, an internal team was able to correctly reconstruct records of address (by census block), age, gender, race, and ethnicity for 142 million people (about 46% of the U.S. population), and correctly match these data to commercial datasets circa 2010 to associate personally identifying information such as names for 52 million people (17% of the population). This is not specific to the U.S. Census Bureau; such attacks can occur in any setting where statistical information in the form of deidentified data, statistics, or even machine learning models is released. That such attacks are possible was predicted over 15 years ago in a seminal paper by Irit Dinur and Kobbi Nissim [12]: releasing a sufficiently large number of aggregate statistics with sufficiently high accuracy provides enough information to reconstruct the underlying database with high accuracy. The practicality of such a large-scale reconstruction by the U.S. Census Bureau underscores the grand challenge that public organizations, industry, and scientific research face: How can we safely disseminate the results of data analysis on sensitive databases? An emerging answer is differential privacy. An algorithm satisfies differential privacy (DP) if its output is insensitive to adding, removing, or changing one record in its input database. DP is considered the "gold standard" for privacy for a number of reasons. It provides a persuasive mathematical proof of privacy to individuals, with several rigorous interpretations [25, 26]. The DP guarantee is composable: repeated invocations of differentially private algorithms lead to a graceful degradation of privacy.
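
For readers who want the formal statement behind "insensitive to adding, removing, or changing one record," the standard textbook formulation (not quoted from this article) is: a randomized algorithm M satisfies ε-differential privacy if, for every pair of databases D and D' differing in a single record, and every set S of possible outputs,

    \Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(D') \in S \,].

The composability mentioned above then takes a simple form: in the basic case, answering one query with an ε₁-DP algorithm and another with an ε₂-DP algorithm yields an (ε₁ + ε₂)-DP release overall, so privacy degrades additively rather than failing outright.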


Differential Privacy

Communications of the ACM

Over the past decade, calls for better measures to protect sensitive, personally identifiable information have blossomed into what politicians like to call a "hot-button issue." Certainly, privacy violations have become rampant and people have grown keenly aware of just how vulnerable they are. When it comes to potential remedies, however, proposals have varied widely, leading to bitter, politically charged arguments. To date, what has chiefly come of that has been bureaucratic policies that satisfy almost no one--and infuriate many. Now, into this muddled picture comes differential privacy. First formalized in 2006, it's an approach based on a mathematically rigorous definition of privacy that allows formalization and proof of the guarantees against re-identification offered by a system. While differential privacy has been accepted by theorists for some time, its implementation has turned out to be subtle and tricky, with practical applications only now starting to become available. To date, differential privacy has been adopted by the U.S. Census Bureau, along with a number of technology companies, but what this means and how these organizations have implemented their systems remains a mystery to many. It's also unlikely that the emergence of differential privacy signals an end to all the difficult decisions and trade-offs, but it does signify that there now are measures of privacy that can be quantified and reasoned about--and then used to apply suitable privacy protections. A milestone in the effort to make this capability generally available came in September 2019 when Google released an open source version of the differential privacy library that the company has used with many of its core products. In the exchange that follows, two of the people at Google who were central to the effort to release the library as open source--Damien Desfontaines, privacy software engineer; and Miguel Guevara, who leads Google's differential privacy product development effort--reflect on the engineering challenges that lie ahead, as well as what remains to be done to achieve their ultimate goal of providing privacy protection by default.
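
To make the idea concrete, here is a minimal sketch of the classic Laplace mechanism, the simplest differentially private primitive. This is an illustrative toy in Python, not the API of Google's open source library; the function and variable names are invented for this example.

    import numpy as np

    def dp_count(records, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing one
        # record changes the true count by at most 1. Adding Laplace
        # noise with scale 1/epsilon therefore gives epsilon-DP.
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: a private count of four-person households (toy data).
    households = [{"size": 4}, {"size": 2}, {"size": 4}, {"size": 1}]
    print(dp_count(households, lambda h: h["size"] == 4, epsilon=0.5))

Smaller values of epsilon mean more noise and stronger privacy. Production libraries layer substantial machinery on top of this one-liner--sensitivity analysis, privacy budget accounting, floating-point hardening--which is precisely the engineering subtlety the interview refers to.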


Polanyi's Revenge and AI's New Romance with Tacit Knowledge

Communications of the ACM

In his 2019 Turing Award Lecture, Geoff Hinton talks about two approaches to make computers intelligent. One he dubs--tongue firmly in cheek--"Intelligent Design" (or giving task-specific knowledge to the computers) and the other, his favored one, "Learning" where we only provide examples to the computers and let them learn. Hinton's not-so-subtle message is that the "deep learning revolution" shows the only true way is the second. Hinton is of course reinforcing the AI Zeitgeist, if only in a doctrinal form. Artificial intelligence technology has captured popular imagination of late, thanks in large part to the impressive feats in perceptual intelligence--including learning to recognize images, voice, and rudimentary language--and bringing fruits of those advances to everyone via their smartphones and personal digital accessories.


Salvaging the school year depends on quickly vaccinating teachers, lower infection rates

Los Angeles Times

Saving the Los Angeles school year has become a race against the clock -- as campuses are unlikely to reopen until teachers are vaccinated against COVID-19 and infection rates decline at least three-fold, officials said Monday. The urgency to salvage the semester in L.A. and throughout the state was underscored by new research showing the depth of student learning loss and by frustrated parents who organized statewide to pressure officials to bring back in-person instruction. A rapid series of developments Monday -- involving the governor, L.A. Unified School District, the teachers union and the county health department -- foreshadowed the uncertainties that will play out in the high-stakes weeks ahead for millions of California students. "We're never going to get back if teachers can't get vaccinated," said Assemblyman Patrick O'Donnell (D-Long Beach), who chairs the state's Assembly Education Committee and has two high schoolers learning from home. He expressed frustration that educators are not being prioritized by the L.A. County Health Department even as teachers in Long Beach are scheduled for vaccines this week. Although Long Beach is part of L.A. County, it operates its own independent health agency.


I Love Reading 1980s Computer Magazines, and So Should You

WIRED

Imagine walking into a greeting card store to get a bespoke bit of poetry, written just for you by a computer. It's not such a wild idea, given the recent development of AI-powered language models such as GPT-3. But such a system already existed decades ago: called Magical Poet, it was installed on early Macintosh computers and deployed in retail settings nationwide all the way back in 1985. I came across this gem of a fact in that year's November issue of MacUser Magazine--a small item right near the announcement that Apple was working on the ability to generate digitized speech. You see, perusing 1980s and 1990s computer magazines from the Internet Archive is a hobby of mine, and it rarely disappoints.


Using AI-enhanced music-supported therapy to assist stroke patients

AIHub

Stroke currently ranks as the second most common cause of death and the second most common cause of disability worldwide. Motor deficits of the upper extremity (hemiparesis) are the most common and debilitating consequences of stroke, affecting around 80% of patients. These deficits limit the accomplishment of daily activities, affect social participation, cause significant emotional distress, and have profoundly detrimental effects on quality of life. Stroke rehabilitation aims to improve and maintain functional ability through restitution, substitution, and compensation of functions. Recovery from motor deficits and improvement in motor function typically occur during the first months following a stroke; therefore, major efforts are devoted to this acute stage.