
Unsupervised decoding of encoded reasoning using language model interpretability

Fang, Ching, Marks, Samuel

arXiv.org Artificial Intelligence

As large language models become increasingly capable, there is growing concern that they may develop reasoning processes that are encoded or hidden from human oversight. To investigate whether current interpretability techniques can penetrate such encoded reasoning, we construct a controlled testbed by fine-tuning a reasoning model (DeepSeek-R1-Distill-Llama-70B) to perform chain-of-thought reasoning in ROT-13 encryption while maintaining intelligible English outputs. We evaluate mechanistic interpretability methods--in particular, logit lens analysis--on their ability to decode the model's hidden reasoning process using only internal activations. We show that logit lens can effectively translate encoded reasoning, with accuracy peaking in intermediate-to-late layers. Finally, we develop a fully unsupervised decoding pipeline that combines logit lens with automated paraphrasing, achieving substantial accuracy in reconstructing complete reasoning transcripts from internal model representations. These findings suggest that current mechanistic interpretability techniques may be more robust to simple forms of encoded reasoning than previously understood. Our work provides an initial framework for evaluating interpretability methods against models that reason in non-human-readable formats, contributing to the broader challenge of maintaining oversight over increasingly capable AI systems.
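The two ingredients of the testbed are both simple to sketch: ROT-13 is an involutive letter substitution, and a logit-lens readout is just a matrix product between an intermediate hidden state and the unembedding matrix. The sketch below is illustrative only -- the shapes and random values stand in for the paper's actual model and are not its code:

```python
import codecs
import numpy as np

def rot13(text: str) -> str:
    """ROT-13 each ASCII letter; punctuation and spaces pass through."""
    return codecs.encode(text, "rot_13")

# ROT-13 is an involution: applying it twice recovers the plaintext.
cot = "The answer is forty-two"
encoded = rot13(cot)
print(encoded)                      # Gur nafjre vf sbegl-gjb
assert rot13(encoded) == cot

# Toy logit-lens projection: read a token prediction out of an
# intermediate hidden state by multiplying with the unembedding matrix.
# Dimensions and values here are illustrative stand-ins.
d_model, vocab = 8, 5
rng = np.random.default_rng(0)
W_U = rng.normal(size=(vocab, d_model))   # unembedding matrix
h_layer = rng.normal(size=(d_model,))     # hidden state at some layer
logits = W_U @ h_layer
print(int(np.argmax(logits)))             # index of the most likely token
```

In the paper's setting, the interesting question is whether the tokens read out this way at intermediate layers correspond to the plaintext reasoning rather than its ROT-13 surface form.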


Data Scientist at Novetta - Springfield, Virginia

#artificialintelligence

Accenture Federal Services delivers a range of innovative, tech-enabled services for the U.S. Federal Government to address the complex, sensitive challenges of national security and intelligence missions. Refer a qualified candidate and earn up to $20K. Accenture Federal Services is seeking a Data Scientist to analyze, design, code and test multiple components of application code across one or more clients. Compensation for roles at Accenture Federal Services varies depending on a wide array of factors including but not limited to the specific office location, role, skill set and level of experience. As required by local law, Accenture Federal Services provides a reasonable range of compensation for roles that may be hired in California, Colorado, New York City or Washington as set forth below and information on benefits offered is here.


Analysis of the Spatio-temporal Dynamics of COVID-19 in Massachusetts via Spectral Graph Wavelet Theory

Geng, Ru, Gao, Yixian, Zhang, Hongkun, Zu, Jian

arXiv.org Artificial Intelligence

The rapid spread of COVID-19 disease has had a significant impact on the world. In this paper, we study COVID-19 data interpretation and visualization using open-data sources for 351 cities and towns in Massachusetts from December 6, 2020 to September 25, 2021. Because cities are embedded in rather complex transportation networks, we construct the spatio-temporal dynamic graph model, in which the graph attention neural network is utilized as a deep learning method to learn the pandemic transition probability among major cities in Massachusetts. Using the spectral graph wavelet transform (SGWT), we process the COVID-19 data on the dynamic graph, which enables us to design effective tools to analyze and detect spatio-temporal patterns in the pandemic spreading. We design a new node classification method, which effectively identifies the anomaly cities based on spectral graph wavelet coefficients. It can assist administrations or public health organizations in monitoring the spread of the pandemic and developing preventive measures. Unlike most work focusing on the evolution of confirmed cases over time, we focus on the spatio-temporal patterns of pandemic evolution among cities. Through the data analysis and visualization, a better understanding of the epidemiological development at the city level is obtained and can be helpful with city-specific surveillance.
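The SGWT step the abstract describes reduces to three operations: form the graph Laplacian, diagonalize it, and filter the signal's graph Fourier coefficients with a scaled spectral kernel. A minimal sketch following Hammond et al.'s construction -- the toy graph, spike signal, and kernel g below are illustrative choices, not the paper's data or code:

```python
import numpy as np

def sgwt_coefficients(A: np.ndarray, f: np.ndarray, scales) -> np.ndarray:
    """Spectral graph wavelet coefficients of signal f on a graph with
    adjacency matrix A: W_f(s, n) = sum_l g(s * lam_l) <u_l, f> u_l(n)."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)              # eigenvalues/eigenvectors of L
    g = lambda x: x * np.exp(-x)            # band-pass kernel with g(0) = 0
    f_hat = U.T @ f                         # graph Fourier transform of f
    return np.stack([U @ (g(s * lam) * f_hat) for s in scales])

# Toy example: a 4-node path graph with a spike signal at node 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = np.array([1.0, 0.0, 0.0, 0.0])
coeffs = sgwt_coefficients(A, f, scales=[0.5, 1.0, 2.0])
print(coeffs.shape)   # (3, 4): one row of coefficients per scale
```

In the paper's pipeline, each city is a node, the signal is the COVID-19 case data, and anomaly detection operates on coefficient vectors like these rather than on raw case counts.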


New: Journal of Artificial Intelligence and Consciousness - Daily Nous

#artificialintelligence

The International Journal of Machine Consciousness, which ceased publication in 2014, is being reborn as the Journal of Artificial Intelligence and Consciousness. The aims and scope of the journal are: (i) articles that take inspiration from biological consciousness and/or that explore theoretical issues of consciousness to build robots and AI systems that show forms of functional consciousness; (ii) articles that employ robots and AI systems as tools to model and better understand biological mechanisms of consciousness; (iii) articles that discuss ethical problems emerging or uncovered through the overlap of AI and consciousness, and that investigate the ethical and societal impact of consciousness and its limits; and (iv) articles that pursue the hybridization of the field of AI and the field of consciousness studies. The journal's editor-in-chief is Antonio Chella, a professor of robotics at the University of Palermo, and its executive editors are David Gamez (computer science, Middlesex) and Riccardo Manzotti (philosophy, IULM Milan). There are a number of philosophers on the editorial board, including Peter Boltuc (U. The journal is currently accepting submissions and will begin publishing under its new name in 2020.


GEOINT Community Week - USGIF

#artificialintelligence

USGIF's GEOINT Community Week brings together the defense, intelligence, homeland security, and geospatial communities at-large for a week of briefings, educational sessions, workshops, technology exhibits and networking opportunities. USGIF is looking for volunteers to share our Intro to GEOINT presentation at your local schools during GEOINT Community Week. This is a great way to give back by helping EdGEOcate our future leaders. We have prepared presentation materials for you that are geared toward upper elementary through lower high school grades and provide an overview of GEOINT--geography, maps, satellites, imagery, remote sensing, GIS, and careers. The presentation takes 45 minutes to one hour and is highly interactive with games, Q&A, stories, videos, and much more.


Faking the News with Natural Language Processing and GPT-2

#artificialintelligence

GPT-2 generates text that is far more realistic than any text generation system before it. OpenAI was so shocked by the quality of the output that they decided the full GPT-2 model was too dangerous to release because it could be used to create endless amounts of fake news that could fool the public or clog up search engines like Google. How easy is it for an average person to generate fake news that could trick a real person, and how good are the results? Let's explore how a system like this could work and how much of a threat it is. Let's try to build a newspaper populated with fake, computer-generated news: To populate News You Can't Use, we'll create a Python script that can 'clone' a news site like the New York Times and generate artificial news stories on the same topics.
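GPT-2 itself is far too large to reproduce here, but the mechanism the article relies on -- autoregressive next-token generation -- can be sketched with a toy bigram model standing in for the network. The seed corpus and greedy decoding rule below are illustrative assumptions, not how the article's script actually works:

```python
from collections import Counter, defaultdict

# "Train" a toy bigram language model by counting word transitions.
corpus = "the markets rose today as the markets rose sharply".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, n_tokens: int) -> str:
    """Greedy autoregressive decoding: repeatedly emit the most likely
    next token given the previous one -- the same loop GPT-2 runs,
    with a transformer in place of the bigram table."""
    out = [start]
    for _ in range(n_tokens):
        successors = bigrams[out[-1]]
        if not successors:          # dead end: no observed continuation
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

GPT-2's advantage over this toy is that its "table" is a 1.5-billion-parameter network conditioned on the entire preceding context, which is what makes its output coherent over whole paragraphs rather than a few words.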


Analyzing Twitter Location Data with Heron, Machine Learning, Google's NLP, and BigQuery

#artificialintelligence

In this article, we will use Heron, the distributed stream processing and analytics engine from Twitter, together with Google's NLP toolkit, Nominatim, and some machine learning, as well as Google's BigTable, BigQuery, and Data Studio, to plot Twitter users' assumed locations across the US. We will show how much your Twitter profile actually tells someone about you, how it is possible to map your opinions and sentiments to parts of the country without having location enabled in the Twitter app, and how Google's Cloud can help us achieve this. While it is safe to assume that most Twitter users do not enable Location Services while using the social network, we can also assume that a lot of people still willingly disclose their location – or at least something resembling a location – on their public Twitter profile. Furthermore, Twitter (for the most part) is a public network – and a user's opinion (subtle or bold) can be used for various data mining techniques, most of which disclose more than meets the eye. Putting this together with the vast advances in publicly available, easy-to-use, cloud-driven solutions for Natural Language Processing (NLP) and Machine Learning (ML) from the likes of Google, Amazon, or Microsoft, any company or engineer with the wish to tap this data has more powerful tool sets at their disposal than ever before.


Machine Learning Lends a Hand for Automated Software Testing - The New Stack

#artificialintelligence

Automated testing is increasingly important in development, especially for finding security issues, but fuzz testing requires a high level of expertise -- and the sheer volume of code developers are working with, from third-party components to open source frameworks and projects, makes it hard to test every line of code. Now, a set of artificial intelligence-powered options like Microsoft's Security Risk Detection service and Diffblue's security scanner and test generation tools aim to make these techniques easier, faster, and accessible to more developers. "If you ask developers what the most hated aspect of their job is, it's testing and debugging," Diffblue CEO and University of Oxford Professor of Computer Science Daniel Kroening told The New Stack. The Diffblue tools use genetic algorithms to generate possible tests, and reinforcement learning combined with a solver search to make sure that the code it's giving you is the shortest possible program, which forces the machine learning system to generalize rather than stick to just the examples in its training set. "What we have built is something that does that for you. If you give us a Java program, which could be class files or compiled Java bytecode, we give you back a bunch of Java code in the form of tests for your favorite testing framework," Kroening said.
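The article gives no implementation details, but search-based test generation of the kind described can be sketched with a toy genetic algorithm that evolves an input until it covers a hard-to-reach branch. The function under test, the branch-distance fitness, and all GA parameters below are illustrative assumptions, not Diffblue's method:

```python
import random

def under_test(x: int) -> str:
    """Toy function with a branch that random inputs rarely hit."""
    return "rare" if x == 611 else "common"

def branch_distance(x: int) -> int:
    """Fitness: how far an input is from triggering the rare branch
    (0 means the branch is covered)."""
    return abs(x - 611)

def evolve(pop_size=30, generations=500, seed=0) -> int:
    """Genetic search over candidate inputs: keep the fittest half,
    refill the population with mutated copies, and stop once an
    input covers the target branch."""
    rng = random.Random(seed)
    pop = [rng.randrange(0, 1024) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            break
        survivors = pop[: pop_size // 2]          # elitist selection
        children = [x + rng.randrange(-8, 9)      # small random mutations
                    for x in rng.choices(survivors,
                                         k=pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=branch_distance)

best_input = evolve()
print(under_test(best_input))
```

The branch-distance fitness is what makes this search-based rather than blind fuzzing: candidates that get *closer* to the guard condition are rewarded even before any of them satisfies it.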


A day in the life of a journalist in 2027: Reporting meets AI

#artificialintelligence

What would have taken weeks or months of reporting by an investigative team today could take a lone journalist aided by artificial intelligence only one day. The fictional scenario below was inspired by the very real technological progress detailed in a recent study by The Associated Press. In fact, AP spent the past few months meeting with leaders in the artificial-intelligence field for an extensive report detailing the impact of AI in journalism. You can read the report here. By 2027, newsrooms will have an arsenal of AI-powered tools at their disposal, and journalists will seamlessly integrate smart machines into their everyday work, the study predicts.


A Nation Engaged: Is This Still A Land Of Economic Opportunity?

#artificialintelligence

Darren Holly steers coils of steel through Pentaflex, a manufacturer of parts for heavy trucks, in Springfield, Ohio. Americans who endured the brutal 2007-2009 recession and slow recovery now are seeing an economic sunrise: Wages are up, jobs are growing and more families are lifting themselves up out of poverty. And yet, dark clouds are still hanging over millions of Americans. No set of sunny statistics can help an unemployed coal miner in Kentucky pay the mortgage.