How Black Girls Code is preparing marginalized kids for the AI revolution
Despite its global prominence, and years of investment from the tech industry's loudest voices and biggest pocketbooks, AI still has a diversity problem. Filling an increasingly worrisome gap created by the technology's creators and evangelists, diversity-focused organizations have been trying to tackle the issue on their own. Black Girls Code, for example, which offers tech skill building for Black girls and other historically underrecognized groups, has been leaning more heavily into AI as part of its tech preparedness and training curriculum, including creating the brand-new position of AI Expert-in-Residence to oversee a more thoughtful approach to teaching about AI. "Most AI is built in environments that prioritize profit over people, which means bias gets baked in and the same communities left out of past tech waves are now at risk of being harmed again. It's not enough to teach people to use AI, we have to teach them to be thoughtful about the tools that they use," Black Girls Code CEO Cristina Mancini tells Mashable.
Teacher quits profession after viral rant on how AI is 'ruining' education
Hannah, a former teacher, joins 'Fox & Friends' to explain why she left the classroom, saying AI tools are making it difficult to teach. A former high school English teacher went viral this week after posting a candid video on social media announcing she was quitting the teaching profession because of how technology was "ruining" education. In the video, which reached over 1 million views on TikTok, Hannah explained how AI tools have made teaching more difficult: students rely on technology to do the work for them and are unmotivated to put in the effort themselves. She said that kids do not know how to read because of read-aloud tools, and have short attention spans because of the "high stimulation" of social media. "They want to use [technology] for entertainment. They don't want to use it for education," she said.
Elon Musk's Grok AI Can't Stop Talking About 'White Genocide'
A chatbot developed by Elon Musk's multibillion-dollar artificial intelligence startup xAI appeared to be suffering from a glitch Wednesday when it repeatedly brought up white genocide in South Africa in response to user queries about unrelated topics on X. Grok, which competes with other chatbots like OpenAI's ChatGPT, is directly integrated into the social media platform that Musk also owns. Numerous examples of the phenomenon could be found by searching the official Grok profile for posts containing the term "boer," a word used to refer to people from South Africa of "Dutch, German, or Huguenot descent." It is sometimes used by Black South Africans as a pejorative against white Afrikaners, or people associated with the apartheid regime. In response to topics ranging from streaming platform HBO Max's name change to Medicaid cuts proposed by US lawmakers, the chatbot often seemed to initially stay on topic before veering back to white genocide in South Africa, completely unprompted. When asked to confirm the salary of Toronto Blue Jays player Max Scherzer, for example, the generative artificial intelligence chatbot launched into an explanation of white genocide and a controversial South African anti-apartheid song.
Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization
We study differentially private (DP) algorithms for smooth stochastic minimax optimization, with stochastic minimization as a byproduct. The holy grail in these settings is to guarantee the optimal trade-off between privacy and excess population loss, using an algorithm whose time complexity is linear in the number of training samples. We provide a general framework for solving differentially private stochastic minimax optimization (DP-SMO) problems, which enables practitioners to bring their own base optimization algorithm and use it as a black box to obtain the near-optimal privacy-loss trade-off. Our framework is inspired by the recently proposed Phased-ERM method [22] for nonsmooth differentially private stochastic convex optimization (DP-SCO), which exploits the stability of empirical risk minimization (ERM) for the privacy guarantee. The flexibility of our approach lets us sidestep the requirement that the base algorithm have bounded sensitivity, and allows the use of sophisticated variance-reduced accelerated methods to achieve near-linear time complexity.
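For context on the privacy-utility trade-off being optimized, here is a minimal sketch of the classical output-perturbation baseline (the Gaussian mechanism), which is exactly the kind of bounded-sensitivity requirement the paper's framework is designed to sidestep; the function name and parameterization here are illustrative, not the paper's:

```python
import math
import random

def gaussian_mechanism(theta, sensitivity, epsilon, delta, rng=random):
    """Release a trained parameter vector with (epsilon, delta)-DP by output
    perturbation: add Gaussian noise calibrated to the L2 sensitivity of the
    training procedure. Standard calibration:
        sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    """
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [t + rng.gauss(0.0, sigma) for t in theta]
```

Larger sensitivity or smaller epsilon forces more noise, which is why DP-SMO algorithms work hard to keep the stability (sensitivity) of the base optimizer small.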
Waymo recalls more than 1,200 automated vehicles after minor crashes
Waymo, the autonomous ride-hailing company that launched its services in Los Angeles late last year, is recalling more than 1,200 vehicles due to a software defect, the National Highway Traffic Safety Assn. said Wednesday. The recall comes after a series of minor crashes with gates, chains and other obstacles in the road that did not result in any injuries, the Mountain View, Calif.-based company said in a filing with the NHTSA. The recall applies to 1,212 driverless vehicles operating on Waymo's fifth-generation automated driving software. Waymo released a software update to resolve the issue, and that update has already been rolled out in all affected vehicles, the recall notice said. The company operates more than 1,500 vehicles across Los Angeles, San Francisco, Phoenix and Austin.
Variable-rate hierarchical CPC leads to acoustic unit discovery in speech
The success of deep learning comes from its ability to capture the hierarchical structure of data by learning high-level representations defined in terms of low-level ones. In this paper we explore self-supervised learning of hierarchical representations of speech by applying multiple levels of Contrastive Predictive Coding (CPC). We observe that simply stacking two CPC models does not yield significant improvements over single-level architectures. Inspired by the fact that speech is often described as a sequence of discrete units unevenly distributed in time, we propose a model in which the output of a low-level CPC module is non-uniformly downsampled to directly minimize the loss of a high-level CPC module. The high-level module is also designed to impose a prior of separability and discreteness on its representations, by encouraging dissimilarity of successive high-level representations through focused negative sampling and by quantizing the prediction targets.
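CPC modules at every level are trained with the InfoNCE contrastive objective. A minimal, dependency-free sketch of that loss (generic CPC with dot-product scores; the paper's focused negative sampling changes how `negatives` are chosen, not the loss itself):

```python
import math

def info_nce_loss(pred, positive, negatives):
    """InfoNCE contrastive loss: classify the true (positive) future
    representation against distractor (negative) samples.
    pred and positive are vectors; negatives is a list of vectors."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    scores = [dot(pred, positive)] + [dot(pred, n) for n in negatives]
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[0]  # -log softmax probability of the positive
```

The loss is near zero when the prediction scores the positive far above every negative, and grows as negatives become indistinguishable from the positive, which is why choosing informative negatives matters.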
Recruitment Strategies That Take a Chance
In academic recruitment settings, including faculty hiring and PhD admissions, committees aim to maximize the overall quality of recruited candidates, but there is uncertainty about whether a candidate would accept an offer if given one. Previous work has considered algorithms that make offers sequentially and are subject to a hard budget constraint. We argue that these modeling choices may be inconsistent with the practice of academic recruitment. Instead, we restrict ourselves to a single batch of offers, and we treat the target number of positions as a soft constraint, so we risk overshooting or undershooting the target. Specifically, our objective is to select a subset of candidates that maximizes the overall expected value associated with candidates who accept, minus an expected penalty for deviating from the target.
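As an illustration of this objective (helper names are hypothetical, and brute-force search stands in for the paper's actual algorithms), the expected utility of a batch of offers can be computed exactly: the number of acceptances follows a Poisson-binomial distribution, so the expected penalty for missing the target is a finite sum.

```python
from itertools import combinations

def accept_count_dist(probs):
    """Distribution of the number of acceptances (Poisson binomial), via DP."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)      # this candidate declines
            new[k + 1] += q * p        # this candidate accepts
        dist = new
    return dist

def expected_utility(subset, values, probs, target, penalty):
    """E[total value of accepters] - penalty * E[|#accepters - target|]."""
    exp_value = sum(values[i] * probs[i] for i in subset)
    dist = accept_count_dist([probs[i] for i in subset])
    exp_dev = sum(q * abs(k - target) for k, q in enumerate(dist))
    return exp_value - penalty * exp_dev

def best_offer_set(values, probs, target, penalty):
    """Brute-force search over all offer subsets (fine for small pools)."""
    n = len(values)
    best, best_u = (), float("-inf")
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            u = expected_utility(subset, values, probs, target, penalty)
            if u > best_u:
                best, best_u = subset, u
    return best, best_u
```

Note how the soft constraint shows up: with a high penalty the optimizer prefers offer sets whose acceptance count concentrates near the target, while a low penalty lets it gamble on high-value, low-probability candidates.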
Elon Musk and DOGE reportedly tried to take over the U.S. Copyright Office
Did Elon Musk try, and fail, to take over the Library of Congress so he could feed the nation's intellectual property into training fuel for his AI company? That's what some U.S. Congress members -- and even some fierce supporters of President Donald Trump -- are saying. The timing of the firing of the head of the U.S. Copyright Office was notable, and the circumstances unusual: the office had just released a report on AI. Big Tech companies and their executives have gone out of their way to curry favor with Trump since the 2024 election, and none more so than Elon Musk, who donated hundreds of millions of dollars to help elect President Trump and other Republicans. It seems, however, that lawmakers' concerns about Musk's intentions were well-founded.
Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent
As part of the effort to understand implicit bias of gradient descent in overparametrized models, several results have shown how the training trajectory on the overparametrized model can be understood as mirror descent on a different objective. The main result here is a complete characterization of this phenomenon under a notion termed commuting parametrization, which encompasses all the previous results in this setting. It is shown that gradient flow with any commuting parametrization is equivalent to continuous mirror descent with a related mirror map. Conversely, continuous mirror descent with any mirror map can be viewed as gradient flow with a related commuting parametrization. The latter result relies upon Nash's embedding theorem.
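A well-known special case from this literature (not quoted from the paper itself) illustrates the equivalence. Take the commuting parametrization \(w = u \odot u\) (entrywise square). Gradient flow on \(u\) induces a flow on \(w\):

```latex
\[
\dot{u} = -\nabla_u L(u \odot u) = -2\, u \odot \nabla L(w),
\qquad
\dot{w} = 2\, u \odot \dot{u} = -4\, w \odot \nabla L(w).
\]
This is continuous mirror descent $\tfrac{d}{dt}\nabla\Phi(w) = -\nabla L(w)$
with mirror map
\[
\Phi(w) = \tfrac{1}{4}\sum_i \bigl(w_i \log w_i - w_i\bigr),
\qquad
\nabla\Phi(w) = \tfrac{1}{4}\log w,
\]
since $\tfrac{d}{dt}\nabla\Phi(w) = \tfrac{1}{4}\,\dot{w} / w = -\nabla L(w)$
recovers exactly the induced dynamics $\dot{w} = -4\, w \odot \nabla L(w)$.
```

The entropy-like mirror map explains the implicit bias toward sparse solutions often observed for this quadratic parametrization.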
Fox News Politics Newsletter: Bondi Backs the Blue
Welcome to the Fox News Politics newsletter, with the latest updates on the Trump administration, Capitol Hill and more Fox News politics content. The Justice Department (DOJ) is moving funds formerly granted to groups supporting transgender ideology and diversity, equity and inclusion (DEI) initiatives to law enforcement, Fox News Digital has confirmed. A Justice Department official told Fox News Digital that the DOJ, under Attorney General Pam Bondi's watch, will "not waste" funds on DEI. "The Department of Justice under Pam Bondi will not waste discretionary funds on DEI passion projects that do not make Americans safer," the official told Fox News Digital. "We will use our money to get criminals off the streets, seize drugs, and in some cases, fund programs that deliver a tangible impact for victims of crime."