It's been a busy year for Encode Justice, an international group of grassroots activists pushing for ethical uses of artificial intelligence. There have been legislators to lobby, online seminars to hold, and meetings to attend, all in hopes of educating others about the harms of facial-recognition technology. It would be a lot for any activist group to fit into the workday; most of the team behind Encode Justice have had to cram it all in around high school. That's because the group was created and is run almost entirely by high schoolers. Its founder and president, Sneha Revanur, is a 16-year-old high-school senior in San Jose, California, and at least one member of the leadership team isn't old enough to get a driver's license.
When freak lightning ignited massive wildfires across Northern California last year, it also sparked efforts from data scientists to improve predictions for blazes. One effort came from SpaceML, an initiative of the Frontier Development Lab, an AI research lab for NASA in partnership with the SETI Institute. Dedicated to open-source research, the SpaceML developer community is creating image recognition models to help advance the study of natural disaster risks, including wildfires. SpaceML uses accelerated computing on petabytes of data for the study of Earth and space sciences, with the goal of advancing projects for NASA researchers. It brings together data scientists and volunteer citizen scientists on projects that tap into data from NASA's Earth Observing System Data and Information System.
Textual Question Answering (QA) aims to provide precise answers to users' questions in natural language using unstructured data. One of the most popular approaches to this goal is machine reading comprehension (MRC). In recent years, many novel datasets and evaluation metrics based on classical MRC tasks have been proposed for broader textual QA tasks. In this paper, we survey 47 recent textual QA benchmark datasets and propose a new taxonomy from an application point of view. In addition, we summarize 8 evaluation metrics of textual QA tasks. Finally, we discuss current trends in constructing textual QA benchmarks and suggest directions for future work.
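The abstract above does not specify which 8 metrics the survey covers, but two of the most widely used for extractive textual QA are exact match (EM) and token-level F1. As a minimal illustration, here is a SQuAD-style sketch of both, including the usual answer normalization (lowercasing, stripping punctuation and articles); the function names are ours, not the survey's.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop the articles a/an/the, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall
    between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1.0 because normalization removes the article and case, while a partially overlapping prediction earns partial F1 credit instead of zero.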
Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of "SKYNET." SKYNET is a fictional artificial general intelligence responsible for creating the killer robots in the Terminator film franchise. It was a scary vision of AI's future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about. At least the people fighting the robots in the Terminator films get to face a villain they can see and shoot at. And that makes it difficult to explain why, based on what's happening now, the real future might be even scarier than the one from those killer robot movies.
Only a few months ago, there was a brief window of time when many New Yorkers, among others, watched as the numbers of the vaccinated climbed and dared to hope that the year-long pandemic was finally coming to an end. Vacations were booked, weddings were scheduled, and parents began looking forward to getting their children out of the living room and back to attending school in person. But, as Barry Blitt captures in his new cover, the pandemic has not gone away, and, for students and their parents, the usual anxieties around returning to the classroom have been compounded by an increasing incidence of coronavirus infections in children, many of whom are too young to be vaccinated, and other related uncertainties. We recently spoke to Blitt about back-to-school blues and presenting his work at elementary schools. Were you a good student?
MIT students Spencer Compton, Karna Morey, Tara Venkatadri, and Lily Zhang have been selected to receive a Barry Goldwater Scholarship for the 2021-22 academic year. Over 5,000 college students from across the United States were nominated for the scholarships, from which only 410 recipients were selected based on academic merit. The Goldwater scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields. All of the 2021-22 Goldwater Scholars intend to obtain a doctorate in their area of research, including the four MIT recipients.
This transcript has been edited for clarity. This is Eric Topol with Medicine and the Machine, with my co-host, Abraham Verghese. This is a special edition for us, to speak with one of the leading lights of artificial intelligence (AI) in the world, Jeff Dean, who heads up Google AI. Jeff Dean, PhD: Thank you for having me. Topol: You have now been at Google for 22 years. In a recent book by Cade Metz (a New York Times tech journalist) called Genius Makers, you are one of the protagonists. I didn't know this about you, but you grew up across the globe. Your parents took you from Hawaii, where you were born, to Somalia, where you helped run a refugee camp during your middle school years. As a high school senior in Georgia, where your father worked at the CDC, you built a software tool that helped researchers collect disease data, and nearly four decades later it remains a staple of epidemiology across the developing world.
Deep neural networks (DNNs) are known to be vulnerable to adversarial images, while their robustness in text classification is rarely studied. Several lines of text attack methods have been proposed in the literature, including character-level, word-level, and sentence-level attacks. However, it remains a challenge to minimize the number of word changes necessary to induce misclassification while simultaneously ensuring lexical correctness, syntactic soundness, and semantic similarity. In this paper, we propose a Bigram and Unigram based adaptive Semantic Preservation Optimization (BU-SPO) method to examine the vulnerability of deep models. Our method has four major merits. First, we propose to attack text documents not only at the unigram word level but also at the bigram level, which better preserves semantics and avoids producing meaningless outputs. Second, we propose a hybrid method that replaces input words with options drawn from both their synonym candidates and sememe candidates, which greatly enriches the potential substitutions compared to using synonyms alone. Third, we design an optimization algorithm, Semantic Preservation Optimization (SPO), to determine the priority of word replacements, aiming to reduce the modification cost. Finally, we further improve SPO with a semantic filter (SPOF) to find the adversarial example with the highest semantic similarity. We evaluate the effectiveness of BU-SPO and BU-SPOF on the IMDB, AG's News, and Yahoo! Answers text datasets by attacking four popular DNN models. Results show that our methods achieve the highest attack success rates and semantic preservation rates while changing the smallest number of words compared with existing methods.
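The core idea behind word-level attacks of this kind can be sketched in a few lines: repeatedly pick the single substitution that most reduces the victim classifier's confidence until the predicted label flips. The sketch below is a generic greedy illustration, not the paper's actual BU-SPO algorithm; the lexicon-based "classifier" and the tiny substitution table are stand-ins we invented for self-containment (the paper attacks real DNNs and draws candidates from synonyms and sememes).

```python
# Toy victim model: a lexicon-based sentiment scorer (stand-in for a DNN).
POSITIVE = {"great", "wonderful", "excellent", "superb"}

def confidence(words):
    """Score in [0, 1]: fraction of words hitting the positive lexicon."""
    return sum(w in POSITIVE for w in words) / len(words)

def predict(words):
    return "positive" if confidence(words) > 0.2 else "negative"

# Toy substitution table (the paper draws candidates from synonyms + sememes).
CANDIDATES = {
    "great": ["fine", "decent"],
    "wonderful": ["okay", "passable"],
}

def greedy_attack(words):
    """Greedily apply the confidence-reducing substitution with the largest
    drop until the label flips; return the adversarial word list, or None."""
    words = list(words)
    original = predict(words)
    while predict(words) == original:
        best = None
        for i, w in enumerate(words):
            for sub in CANDIDATES.get(w, []):
                trial = words[:i] + [sub] + words[i + 1:]
                drop = confidence(words) - confidence(trial)
                if drop > 0 and (best is None or drop > best[0]):
                    best = (drop, i, sub)
        if best is None:
            return None  # no substitution reduces confidence; attack fails
        _, i, sub = best
        words[i] = sub
    return words
```

In this toy setting, a single substitution is enough to flip "a great and wonderful movie" from positive to negative, which mirrors the paper's stated goal of misclassification with the fewest word changes.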
Some kids spend their summers swimming and building campfires. This year thousands will head to university and high school campuses to program robots, write software, and design video games. Artificial intelligence--the increasingly ubiquitous technology routing delivery trucks, sifting through job applicants' resumes, and making small business lending decisions--is now a summer camp experience for ages five to 18. At least 447 AI summer camps opened across 48 states in the US this year, according to an August report from Georgetown University's Center for Security and Emerging Technology. Because US schools typically don't cover the subject, these camps may offer one of the only chances for middle and high-school students to study AI, the report notes.
Search engines scan the internet to find what you're looking for, or what you don't know you're looking for. Social media platforms surface content you might want to read. And the latest iPhones recognize your face in a split second to unlock your phone. As the artificial intelligence industry grows, the consequences of lacking a diverse workforce can pose major challenges that threaten to ripple into everyday life. Women and minorities are underrepresented in the AI workforce, according to a Stanford report on diversity in AI. More than 83% of AI tenure-track faculty at top universities are male, while over 46% of Ph.D. students in the United States studying AI are white.