Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair. When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.
We are a group of researchers from Sweden, the Netherlands, and Germany, and we kindly invite you to our survey on "Machine Learning Experiment Management Tools." Such tools support practitioners performing machine learning (ML) or deep learning (DL) experiments by systematically managing all involved artifacts (scripts, datasets, hyperparameters, models, …). If you are a machine learning practitioner, we kindly invite you to participate. We also invite you to forward this invitation to other colleagues who might be interested. Our survey elicits information on the management tools practitioners adopt, as well as their perceived benefits, challenges, and limitations.
This month, we get stuck into AI and consciousness. This topic has long been debated, and especially so recently, with one tweet in particular sparking an online discussion. Joining the discussion this time are: Tom Dietterich (Oregon State University), Stephen Hanson (Rutgers University), Sabine Hauert (University of Bristol), Holger Hoos (Leiden University), Sarit Kraus (Bar-Ilan University) and Michael Littman (Brown University). Stephen Hanson: So, the topic of consciousness has come up a lot recently in discussions on the Connectionists. This area of cognitive science was pretty much wiped out in the first five years of NeurIPS [Conference on Neural Information Processing Systems].
Dr. Richard Benjamins is Chief AI & Data Strategist at Telefónica. He is among the 100 most influential people in data-driven business (DataIQ 100, 2018). He is also co-founder and Vice President of the Spanish observatory for ethical and social impacts of AI (OdiseIA). He was Group Chief Data Officer at AXA (Insurance) and before that held for 10 years executive positions at Telefónica on Big Data and Analytics. He is the founder of Telefónica's Big Data for Social Good department, expert to the EP's AI Observatory (EPAIO), member of the B2G data-sharing Expert Group of the EC, and a frequent speaker at Artificial Intelligence events.
Students in the MIT course 6.036 (Introduction to Machine Learning) study the principles behind powerful models that help physicians diagnose disease or aid recruiters in screening job candidates. Now, thanks to the Social and Ethical Responsibilities of Computing (SERC) framework, these students will also stop to ponder the implications of these artificial intelligence tools, which sometimes come with their share of unintended consequences. Last winter, a team of SERC Scholars worked with instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and the 6.036 teaching assistants to infuse weekly labs with material covering ethical computing, data and model bias, and fairness in machine learning. The process was initiated in the fall of 2019 by Jacob Andreas, the X Consortium Assistant Professor in the Department of Electrical Engineering and Computer Science. SERC Scholars collaborate in multidisciplinary teams to help postdocs and faculty develop new course material. Because 6.036 is such a large course, the more than 500 students enrolled in the 2021 spring term grappled with these ethical dimensions alongside their efforts to learn new computing techniques.
Consider the squeak of a mouse and the low rumble of a lion's roar. In the animal kingdom, bigger animals usually produce lower-pitched sounds as a result of their larger larynges and longer vocal tracts. But harbour seals seem to break that rule: they can learn how to change their calls. That means they can deliberately move between lower- and higher-pitched sounds and make themselves sound bigger than they really are. "The information that is in their calls is not necessarily honest," says Koen de Reus at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands.
GTC brought together dozens of healthcare innovators to present their work -- and announce new AI-accelerated applications and medical devices. More than 200,000 people registered for last week's online conference, where they attended hundreds of sessions spanning industries. The healthcare track included speakers from AstraZeneca, Mayo Clinic, Medtronic, Netherlands Cancer Institute and Pfizer, as well as more than 50 NVIDIA Inception startups. Here's some of the healthcare news shared at the show by NVIDIA, leading research institutions, and AI and medical device companies: Janssen, the pharmaceutical arm of Johnson & Johnson, shared its intelligent automation work using natural language processing models to scan medical literature for reports of patients who may have experienced side effects. The latest version of the Janssen AI platform, which uses a customized version of BioMegatron built with the NVIDIA NeMo framework, was set up in late 2021 to accelerate the shortlisting of medical literature for human review in drug safety analysis.
How global thinking on AI is shaping the world, from Berlin, Brussels, London and beyond. NEWS ABOUT POLITICO'S AI COVERAGE: Your AI: Decoded newsletter will be taking a break from this week as we refocus on core policy issues, but we are not giving up our coverage of artificial intelligence. On the contrary, we will intensify reporting on major political and policy debates on AI via news stories and events accessible to POLITICO readers around the world, while covering the deeper policy stories as usual for our POLITICO Pro subscribers. As we prepare future coverage on emerging technologies including AI, I'd like to thank Melissa Heikkilä for her remarkable coverage of everything from algorithmic bias to neural networks and the politics behind large language models. In the meantime, please tune in on April 21 for our AI & Tech Summit, featuring high-level interviews with top AI newsmakers; don't miss our upcoming Tech 28 ranking of the most influential players in technology on May 18; and subscribe to Mark Scott's free Digital Bridge newsletter here.
For about two years, the business world was forced to bring its live events to a halt due to the uncertainty surrounding the COVID-19 pandemic. Because safety comes first, we also postponed our Machine Learning events hosted in person. If you have been following our activities, you may know we have been running online events instead, presenting the impact Machine Learning has had in many industries and contexts: mobility, retail, risk management and compliance, EdgeML, how Machine Learning is being taught in business schools, and more. After many quarantines and lockdowns, we are thrilled to announce that the first event we will host in person again will be the second edition of our Machine Learning School in the Netherlands! The Netherlands appears to be at the forefront of wider Machine Learning adoption, as was apparent from the feedback we received at our last in-person event (the first edition of our school), held right before the pandemic started.
Chermaine Leysner's life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000. "I thought, 'Don't worry, this is a big mistake.' It was the start of something big," she said.