"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
In 2020, Synced covered many memorable moments in the AI community: the current situation of women in AI, the birth of GPT-3, AI's fight against COVID-19, heated debates around AI bias, MT-DNN surpassing human baselines on GLUE, AlphaFold cracking a 50-year-old biology challenge, and more. To close the chapter on 2020 and look forward to 2021, we are introducing a year-end special issue, following Synced's tradition of looking back at current AI achievements and exploring possible trends in future AI with leading AI experts. Here, we invite Mr. Brian Tse to share his insights on the current development and future trends of artificial intelligence. Brian Tse focuses on researching and improving cooperation over AI safety, governance, and stability between great powers. He is a Policy Affiliate at the University of Oxford's Centre for the Governance of AI, Coordinator at the Beijing AI Academy's AI4SDGs Cooperation Network, and Senior Advisor at the Partnership on AI.
On Feb 15, 2019, John Abowd, chief scientist at the U.S. Census Bureau, announced the results of a reconstruction attack that the Bureau proactively launched using data released under the 2010 Decennial Census.19 The decennial census released billions of statistics about individuals, such as "how many people of the age 10-20 live in New York City" or "how many people live in four-person households." Using only the data publicly released in 2010, an internal team was able to correctly reconstruct records of address (by census block), age, gender, race, and ethnicity for 142 million people (about 46% of the U.S. population), and to correctly match these records to commercial datasets circa 2010, associating personally identifying information such as names with 52 million people (17% of the population). This problem is not specific to the U.S. Census Bureau--such attacks can occur in any setting where statistical information in the form of deidentified data, statistics, or even machine learning models is released. That such attacks are possible was predicted over 15 years ago in a seminal paper by Irit Dinur and Kobbi Nissim12--releasing a sufficiently large number of aggregate statistics with sufficiently high accuracy provides enough information to reconstruct the underlying database with high accuracy. The practicality of such a large-scale reconstruction at the U.S. Census Bureau underscores the grand challenge that public organizations, industry, and scientific research face: How can we safely disseminate the results of data analysis on sensitive databases? An emerging answer is differential privacy. An algorithm satisfies differential privacy (DP) if its output is insensitive to adding, removing, or changing one record in its input database. DP is considered the "gold standard" for privacy for a number of reasons.
It provides a persuasive mathematical guarantee of privacy to individuals, with several rigorous interpretations.25,26 The DP guarantee is also composable: repeated invocations of differentially private algorithms lead to a graceful degradation of privacy.
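As a concrete illustration of the definition (a minimal sketch of the classic Laplace mechanism, not any agency's production system), the example below releases a count after adding noise calibrated to the query's sensitivity. The data and function names are hypothetical:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5  # u in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding, removing, or changing
    # one record shifts the true count by at most 1. Adding Laplace noise
    # with scale 1/epsilon therefore makes the released count epsilon-DP.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical toy data: ages of individuals in a sensitive database.
ages = [12, 15, 34, 19, 11, 42, 18, 16]
noisy = dp_count(ages, lambda a: 10 <= a <= 20, epsilon=1.0)
```

Composability then follows the pattern noted above: answering k such queries, each with budget epsilon, consumes a total budget of k * epsilon, so privacy degrades gracefully rather than failing abruptly.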
Over the past decade, calls for better measures to protect sensitive, personally identifiable information have blossomed into what politicians like to call a "hot-button issue." Certainly, privacy violations have become rampant, and people have grown keenly aware of just how vulnerable they are. When it comes to potential remedies, however, proposals have varied widely, leading to bitter, politically charged arguments. To date, what has chiefly come of that has been bureaucratic policies that satisfy almost no one--and infuriate many. Now, into this muddled picture comes differential privacy. First formalized in 2006, it is an approach based on a mathematically rigorous definition of privacy that allows one to formalize and prove the guarantees against re-identification offered by a system. While differential privacy has been accepted by theorists for some time, its implementation has turned out to be subtle and tricky, with practical applications only now starting to become available. To date, differential privacy has been adopted by the U.S. Census Bureau, along with a number of technology companies, but what this means and how these organizations have implemented their systems remains a mystery to many. It's also unlikely that the emergence of differential privacy signals an end to all the difficult decisions and trade-offs, but it does mean there now are measures of privacy that can be quantified and reasoned about--and then used to apply suitable privacy protections. A milestone in the effort to make this capability generally available came in September 2019, when Google released an open source version of the differential privacy library that the company has used with many of its core products.
In the exchange that follows, two of the people at Google who were central to the effort to release the library as open source--Damien Desfontaines, privacy software engineer; and Miguel Guevara, who leads Google's differential privacy product development effort--reflect on the engineering challenges that lie ahead, as well as what remains to be done to achieve their ultimate goal of providing privacy protection by default.
In his 2019 Turing Award Lecture, Geoff Hinton talks about two approaches to making computers intelligent. One he dubs--tongue firmly in cheek--"Intelligent Design" (giving task-specific knowledge to the computers); the other, his favored one, is "Learning," where we only provide examples to the computers and let them learn. Hinton's not-so-subtle message is that the "deep learning revolution" shows the second is the only true way. Hinton is of course reinforcing the AI Zeitgeist, if only in a doctrinal form. Artificial intelligence technology has captured the popular imagination of late, thanks in large part to impressive feats in perceptual intelligence--including learning to recognize images, voice, and rudimentary language--and to bringing the fruits of those advances to everyone via their smartphones and personal digital accessories.
Saving the Los Angeles school year has become a race against the clock -- as campuses are unlikely to reopen until teachers are vaccinated against COVID-19 and infection rates decline at least three-fold, officials said Monday. The urgency to salvage the semester in L.A. and throughout the state was underscored by new research showing the depth of student learning loss and by frustrated parents who organized statewide to pressure officials to bring back in-person instruction. A rapid series of developments Monday -- involving the governor, L.A. Unified School District, the teachers union and the county health department -- foreshadowed the uncertainties that will play out in the high-stakes weeks ahead for millions of California students. "We're never going to get back if teachers can't get vaccinated," said Assemblyman Patrick O'Donnell (D-Long Beach), who chairs the state's Assembly Education Committee and has two high schoolers learning from home. He expressed frustration that educators are not being prioritized by the L.A. County Health Department even as teachers in Long Beach are scheduled for vaccines this week. Although Long Beach is part of L.A. County, it operates its own independent health agency.
Imagine walking into a greeting card store to get a bespoke bit of poetry, written just for you by a computer. It's not such a wild idea, given the recent development of AI-powered language models such as GPT-3. But such a system existed long before GPT-3: called Magical Poet, it was installed on early Macintosh computers and deployed in retail settings nationwide all the way back in 1985. I came across this gem of a fact in that year's November issue of MacUser Magazine--a small item right near the announcement that Apple was working on the ability to generate digitized speech. You see, perusing 1980s and 1990s computer magazines from the Internet Archive is a hobby of mine, and it rarely disappoints.
Stroke currently ranks as the second most common cause of death and the second most common cause of disability worldwide. Motor deficits of the upper extremity (hemiparesis) are the most common and debilitating consequences of stroke, affecting around 80% of patients. These deficits limit the accomplishment of daily activities, affect social participation, are the origin of significant emotional distress, and cause profound detrimental effects on quality of life. Stroke rehabilitation aims to improve and maintain functional ability through restitution, substitution, and compensation of functions. Recovery from motor deficits and improvements in motor function typically occur during the first months following a stroke, and therefore major efforts are devoted to this acute stage.
Five years ago, the world of artificial intelligence--and the algorithms it runs on--looked very different. Asking your Google Home to play Adele's chart-topping single wasn't possible yet. IBM Watson was still widely considered a beacon for AI advancement, and DeepMind's AI victory over a human at Go was still fresh. Machine learning engineers were facing earlier versions of today's image classification and speech recognition challenges. And though most tech giants hadn't earmarked corporate funding for ethical AI, the conversation was becoming more mainstream as the impact of algorithms on human lives became clearer.
Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, significant differences exist among the approaches under that umbrella term, and they may determine whether the technology provides genuine protection or merely the perception of it.

The Rise of Fearware

When the global pandemic hit and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained in this blog, cybercriminals were quick to capitalize, taking advantage of people's desire for information to send out topical emails related to COVID-19 containing malware or credential-grabbing links. These emails often spoofed the Centers for Disease Control and Prevention (CDC) and, later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA).