
China sets out to sample an unusual near-Earth asteroid

Science

Following its successes retrieving lunar samples from both the near and far sides of the Moon, China is planning an encore: sending a probe to snatch material from a near-Earth asteroid. The target of the Tianwen-2 mission, which is expected to launch by the end of the month, is a chunk of rock named 469219 Kamo'oalewa. It is one of just seven asteroids that fall into a little-understood class known as quasi-satellites of Earth, and it could also be the first known asteroid composed of lunar material. That hypothesis could be confirmed by laboratory studies of fragments collected by Tianwen-2, which are due to be returned to Earth about 2.5 years after launch. "This is an ambitious mission to explore a fascinating object," says astrophysicist Amy Mainzer of the University of California, Los Angeles.


You can remove GPT2's LayerNorm by fine-tuning

Heimersheim, Stefan

arXiv.org Artificial Intelligence

The LayerNorm (LN) layer in GPT-style transformer models has long been a hindrance to mechanistic interpretability. LN is a crucial component for stabilizing the training of large language models, and LN or the similar RMSNorm has been used in practically all large language models based on the transformer architecture. The non-linear nature of the LN layers obstructs interpretation of the residual stream and makes it difficult to decompose the model into circuits; some researchers have gone so far as to enumerate "reasons interpretability researchers hate layer norm." In this paper we show that it is possible to remove the LN layers from a pre-trained GPT2-small model by fine-tuning on a fraction (500M tokens) of the training data. We demonstrate that this LN-free model achieves performance similar to the original model on the OpenWebText and ThePile datasets (-0.05 cross-entropy loss) and the Hellaswag benchmark (-0.5% accuracy). We provide our implementation at https://github.com/ApolloResearch/gpt2_noLN, and fine-tuned GPT2-small models at https://huggingface.co/apollo-research/gpt2_noLN. Our work not only provides a simplified model for mechanistic interpretability research, but also provides evidence that the LN layers, at inference time, do not play a crucial role in transformer models.
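The core claim is easy to illustrate. The sketch below is a minimal illustration, not the authors' implementation (which is at the GitHub link above): it shows why LN is non-linear, since each residual-stream vector is centered and divided by its own standard deviation, and how the LN modules of a stock Hugging Face GPT2 model could be swapped for identities before fine-tuning. The module paths (transformer.h[i].ln_1 and so on) follow the standard transformers GPT-2 layout.

import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

# LayerNorm is non-linear: each vector is shifted and divided by its own
# standard deviation, so in general LN(a + b) != LN(a) + LN(b).
def layernorm(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    mu = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mu) / torch.sqrt(var + eps)  # learned scale/bias omitted

a, b = torch.randn(8), torch.randn(8)
print(torch.allclose(layernorm(a + b), layernorm(a) + layernorm(b)))  # False

# Swapping every LN module for an identity yields an "LN-free" GPT2.
# Per the abstract, fine-tuning such a model on ~500M tokens recovers
# near-original performance; removal alone does not.
model = GPT2LMHeadModel.from_pretrained("gpt2")
for block in model.transformer.h:
    block.ln_1 = nn.Identity()
    block.ln_2 = nn.Identity()
model.transformer.ln_f = nn.Identity()

Note that simply deleting LN this way degrades the raw model; the substance of the result is that the subsequent 500M-token fine-tune recovers the original performance.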


The Path to Fairer AI Starts With Audits, Standards

#artificialintelligence

Ethical principles aren't enough to defend against the worst potential impacts of artificial intelligence systems, and the time has come for the U.S. to establish official legal policies for this emerging technology, said policy and technology experts at a recent report launch event hosted by New America's Open Technology Institute. That work requires clearly defining terms and enforcement measures, and speakers proposed mechanisms that can help government promote fairness, accountability and transparency (FAT) in algorithmic systems and outlined the challenges that lie ahead. They called for the federal government to regulate how private firms like online content platforms develop and deploy AI, and to establish formal policies for overseeing and vetting the algorithmic systems public agencies adopt and purchase. Such AI audits are currently voluntary, said Spandana Singh, policy analyst at the Open Technology Institute and co-author of the report. AI can deliver newfound efficiencies, extract meaning from troves of data and deliver a variety of other benefits, but the complexity, opacity and lack of foresight in some of these systems means they can be designed, implemented or evolve in ways that produce biased and discriminatory effects.


Twitter offers $3,500 'bounty' to users who find algorithmic bias, like cropping out Black people

Daily Mail - Science & tech

Twitter is offering a cash reward to users who can help it weed out bias in its photo-cropping algorithm. The social-media platform announced 'bounties' as high as $3,500 as part of this week's DEF CON hacker convention in Las Vegas. 'Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they've already reached the public,' Rumman Chowdhury and Jutta Williams of Twitter's Machine-Learning, Ethics, Transparency and Accountability (META) project said in a blog post. 'We want to change that.' The challenge was inspired by how researchers and hackers often point out security vulnerabilities to companies, Chowdhury and Williams explained.


Rights for robots: why we need better AI regulation

#artificialintelligence

We live in a world where humans aren't the only ones that have rights. In the eyes of the law, artificial entities have a legal persona too: corporations, partnerships and nation states have the same rights and responsibilities as human beings. With rapidly evolving technologies, is it time our legal system considered a similar status for artificial intelligence (AI) and robots? "AI is already impacting most aspects of our lives. Given its pervasiveness, how this technology is developed is raising profound legal and ethical questions that need to be addressed," says Julian David, chief executive of industry body techUK.


Grilling the answers: How businesses need to show how AI decides

#artificialintelligence

Show your working: generations of mathematics students have grown up with this mantra. Getting the right answer is not enough. To get top marks, students must demonstrate how they got there. Now, machines need to do the same. As artificial intelligence (AI) is used to make decisions affecting employment, finance or justice, as opposed to which film a consumer might want to watch next, the public will insist it explains its working.


AI expert warns against 'racist and misogynist algorithms'

Daily Mail - Science & tech

A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions. Across the globe, algorithms are beginning to oversee various processes, from job applications and immigration requests to bail terms and welfare applications. Military researchers are even exploring whether facial recognition technology could enable autonomous drones to identify their own targets. However, University of Sheffield computer expert Noel Sharkey told the Guardian that such algorithms are 'infected with biases' and cannot be trusted. Calling for a halt on all AI with the potential to change people's lives, Professor Sharkey instead advocates rigorous testing before such systems are used in public.


AI expert calls for end to UK use of 'racially biased' algorithms

The Guardian

An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately. Prof Noel Sharkey, who is also a leading figure in a global campaign against "killer robots", said algorithms were so "infected with biases" that their decision-making processes could not be fair or trusted. A moratorium must be imposed on all "life-changing decision-making algorithms" in Britain, he said. Sharkey has suggested testing AI decision-making machines in the same way new pharmaceutical drugs are rigorously checked before they are allowed on to the market. In an interview with the Guardian, the Sheffield University robotics/AI pioneer said he was deeply concerned about a series of examples of machine-learning systems being loaded with bias.


Elon Musk says Neuralink could bring A.I. 'superintelligence' to the brain

#artificialintelligence

Beyond the brain's cortical and limbic systems, Neuralink could add a third layer of digital superintelligence to humans and help avoid enslavement by artificial intelligence, its founder Elon Musk claimed Tuesday. The brain-computer interface firm is working to treat medical conditions using its implanted chip as early as next year, but during a podcast appearance, Musk reiterated his belief that the technology could avert some of the worst consequences of advanced machines. "It's important that Neuralink solves this problem sooner rather than later, because the point at which we have digital superintelligence, that's when we pass the singularity and things become just very uncertain," Musk said during an interview with MIT professor Lex Fridman. Musk was keen to note that the singularity, a hypothesized point at which machines grow so advanced that humanity slips into irreversible change, may not necessarily be good or bad. He did state, however, that "things become extremely unstable" after that point, which means Neuralink would need to achieve its human-brain linkup either before or not long after, "to minimize the existential risk for humanity and consciousness as we know it."

