Computational thinking (CT), a K–12 education movement begun in 2006, has defined a curriculum to teach basic computing in pre-college schools. It has been dramatically more successful than prior computer literacy or fluency movements at convincing K–12 teachers and school boards to adopt a computing curriculum. Learning problem-solving with algorithms is widely seen as valuable for students. Hundreds of CT initiatives have blossomed around the world. By 2010, the movement settled on a definition of CT that can be paraphrased as "designing computations that get computers to do jobs for us."
Fifty years ago this winter, a young student teacher by the name of Don Rawitsch introduced his eighth grade American history class to a computer game on westward expansion that he had developed along with his colleagues Bill Heinemann and Paul Dillenberger. The game, called The Oregon Trail, would go on to sell over 65 million copies, many of them to educational institutions, making it one of the bestselling games of all time, right up there with Super Mario Bros. and Tetris. But when I talked to Rawitsch recently, he said that when he first came up with the idea, making money was the furthest thing from his mind. "Back in 1971, there was a lot of activity going on in the world of schools to upgrade curriculum and come up with innovative methods of teaching," Rawitsch said. Inspired by his teachers at Carleton College in Northfield, Minnesota, Rawitsch decided to pursue new types of pedagogy for his student teacher classes at Jordan Junior High School in Minneapolis.
It's been a busy year for Encode Justice, an international group of grassroots activists pushing for ethical uses of artificial intelligence. There have been legislators to lobby, online seminars to hold, and meetings to attend, all in hopes of educating others about the harms of facial-recognition technology. It would be a lot for any activist group to fit into the workday; most of the team behind Encode Justice have had to cram it all in around high school. That's because the group was created and is run almost entirely by high schoolers. Its founder and president, Sneha Revanur, is a 16-year-old high-school senior in San Jose, California, and at least one member of the leadership team isn't old enough to get a driver's license.
When freak lightning ignited massive wildfires across Northern California last year, it also sparked efforts from data scientists to improve predictions for blazes. One effort came from SpaceML, an initiative of the Frontier Development Lab, which is an AI research lab for NASA in partnership with the SETI Institute. Dedicated to open-source research, the SpaceML developer community is creating image recognition models to help advance the study of natural disaster risks, including wildfires. SpaceML uses accelerated computing on petabytes of data for the study of Earth and space sciences, with the goal of advancing projects for NASA researchers. It brings together data scientists and volunteer citizen scientists on projects that tap into the NASA Earth Observing System Data and Information System data.
Textual Question Answering (QA) aims to provide precise answers to users' questions in natural language using unstructured data. One of the most popular approaches to this goal is machine reading comprehension (MRC). In recent years, many novel datasets and evaluation metrics based on classical MRC tasks have been proposed for broader textual QA tasks. In this paper, we survey 47 recent textual QA benchmark datasets and propose a new taxonomy from an application point of view. In addition, we summarize 8 evaluation metrics for textual QA tasks. Finally, we discuss current trends in constructing textual QA benchmarks and suggest directions for future work.
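Two of the most common MRC-style evaluation metrics, exact match (EM) and token-level F1, can be sketched as follows. This is a minimal illustration, not the survey's definitions; real benchmark scripts typically add fuller answer normalization (stripping articles and punctuation) and take a maximum over multiple gold answers.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and a gold answer."""
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    # Count tokens shared between prediction and reference (with multiplicity)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the city of Paris", "Paris")` gives partial credit (0.4) where exact match would score 0, which is why F1 is the standard companion metric for extractive QA.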
Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of "SKYNET." SKYNET is the fictional artificial general intelligence responsible for creating the killer robots in the Terminator film franchise. It was a scary vision of AI's future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about. At least the people fighting the robots in the Terminator franchise get to face a villain they can see and shoot at. And that makes it difficult to explain why, based on what's happening now, the real future might be even scarier than the one in those killer-robot movies.
Only a few months ago, there was a brief window of time when many New Yorkers, among others, watched as the numbers of the vaccinated climbed and dared to hope that the year-long pandemic was finally coming to an end. Vacations were booked, weddings were scheduled, and parents began looking forward to getting their children out of the living room and back to attending school in person. But, as Barry Blitt captures in his new cover, the pandemic has not gone away, and, for students and their parents, the usual anxieties around returning to the classroom have been compounded by an increasing incidence of coronavirus infections in children, many of whom are too young to be vaccinated, and other related uncertainties. We recently spoke to Blitt about back-to-school blues and presenting his work at elementary schools. Were you a good student?
MIT students Spencer Compton, Karna Morey, Tara Venkatadri, and Lily Zhang have been selected to receive a Barry Goldwater Scholarship for the 2021-22 academic year. Over 5,000 college students from across the United States were nominated for the scholarships, from which only 410 recipients were selected based on academic merit. The Goldwater scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields. All of the 2021-22 Goldwater Scholars intend to obtain a doctorate in their area of research, including the four MIT recipients.
This transcript has been edited for clarity.

Eric Topol, MD: This is Eric Topol with Medicine and the Machine, with my co-host, Abraham Verghese. This is a special edition for us, to speak with one of the leading lights of artificial intelligence (AI) in the world, Jeff Dean, who heads up Google AI.

Jeff Dean, PhD: Thank you for having me.

Topol: You have now been at Google for 22 years. In a recent book by Cade Metz (a New York Times tech journalist) called Genius Makers, you are one of the protagonists. I didn't know this about you, but you grew up across the globe. Your parents took you from Hawaii, where you were born, to Somalia, where you helped run a refugee camp during your middle school years. As a high school senior in Georgia, where your father worked at the CDC, you built a software tool for them that helped researchers collect disease data, and nearly four decades later it remains a staple of epidemiology across the developing world.
Deep neural networks (DNNs) are known to be vulnerable to adversarial images, while their robustness in text classification is rarely studied. Several lines of text attack methods have been proposed in the literature, including character-level, word-level, and sentence-level attacks. However, it remains a challenge to minimize the number of word changes necessary to induce misclassification while simultaneously ensuring lexical correctness, syntactic soundness, and semantic similarity. In this paper, we propose a Bigram and Unigram based adaptive Semantic Preservation Optimization (BU-SPO) method to examine the vulnerability of deep models. Our method has four major merits. First, we propose to attack text documents not only at the unigram word level but also at the bigram level, which better preserves semantics and avoids producing meaningless outputs. Second, we propose a hybrid method that replaces input words with options drawn from both their synonym candidates and sememe candidates, which greatly enriches the potential substitutions compared to using synonyms alone. Third, we design an optimization algorithm, Semantic Preservation Optimization (SPO), to determine the priority of word replacements, aiming to reduce the modification cost. Finally, we further improve SPO with a semantic filter (SPOF) to find the adversarial example with the highest semantic similarity. We evaluate the effectiveness of BU-SPO and BU-SPOF on the IMDB, AG's News, and Yahoo! Answers text datasets by attacking four popular DNN models. Results show that our methods achieve the highest attack success rates and semantic preservation rates while changing the smallest number of words compared with existing methods.
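The greedy, priority-driven substitution loop at the heart of such word-level attacks can be illustrated with a toy sketch. Everything here is hypothetical: the classifier is a tiny keyword lexicon, the candidate table stands in for synonym/sememe lists, and word priority is approximated by the score drop when a word is removed (a simple stand-in for the SPO priority computation). The actual BU-SPO method additionally handles bigrams and applies a semantic-similarity filter.

```python
import math

def toy_classifier(tokens):
    """Toy sentiment model: P(positive) from a tiny keyword lexicon."""
    lexicon = {"great": 1.5, "good": 1.0, "fine": 0.5, "decent": 0.0,
               "bad": -1.0, "awful": -1.5}
    score = sum(lexicon.get(t, 0.0) for t in tokens)
    return 1 / (1 + math.exp(-score))

# Hypothetical substitution candidates (stand-ins for synonym/sememe lists).
CANDIDATES = {"great": ["good", "decent"], "good": ["fine", "decent"]}

def greedy_attack(tokens, threshold=0.5):
    """Replace words highest-impact first until the prediction flips.

    A word's priority is the drop in the positive-class score when it is
    deleted; attackable words are tried in descending priority order.
    """
    base = toy_classifier(tokens)
    impact = []
    for i, t in enumerate(tokens):
        if t in CANDIDATES:
            drop = base - toy_classifier(tokens[:i] + tokens[i + 1:])
            impact.append((drop, i))
    impact.sort(reverse=True)

    adv = list(tokens)
    changes = 0
    for _, i in impact:
        # Choose the candidate that lowers the positive score the most.
        best = min(CANDIDATES[adv[i]],
                   key=lambda s: toy_classifier(adv[:i] + [s] + adv[i + 1:]))
        adv[i] = best
        changes += 1
        if toy_classifier(adv) <= threshold:
            break  # label flipped; stop with minimal changes
    return adv, changes
```

Running `greedy_attack("this movie is great and good".split())` flips the toy prediction with two substitutions, which mirrors the paper's goal of inducing misclassification with the smallest number of word changes.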