
Collaborating Authors

webster


EvolveSignal: A Large Language Model Powered Coding Agent for Discovering Traffic Signal Control Algorithms

Wang, Leizhen, Duan, Peibo, Wang, Hao, Wang, Yue, Xu, Jian, Zheng, Nan, Ma, Zhenliang

arXiv.org Artificial Intelligence

In traffic engineering, fixed-time traffic signal control remains widely used for its low cost, stability, and interpretability. However, its design depends on hand-crafted formulas (e.g., Webster's) and manual re-timing by engineers to adapt to demand changes, which is labor-intensive and often yields suboptimal results under heterogeneous or congested conditions. This paper introduces EvolveSignal, a large language model (LLM)-powered coding agent that automatically discovers new traffic signal control algorithms. We formulate the problem as program synthesis: candidate algorithms are represented as Python functions with fixed input-output structures and iteratively optimized through external evaluations (e.g., a traffic simulator) and evolutionary search. Experiments on a signalized intersection demonstrate that the discovered algorithms outperform Webster's baseline, reducing average delay by 20.1% and average stops by 47.1%. Beyond performance, ablation and incremental analyses reveal that the modifications EvolveSignal makes, such as adjusting cycle length bounds, incorporating right-turn demand, and rescaling green allocations, can offer practically meaningful insights for traffic engineers. This work opens a new research direction by leveraging AI for algorithm design in traffic signal control, bridging program synthesis with transportation engineering.
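The Webster baseline the abstract refers to can be sketched as one of the fixed input-output Python functions the paper describes. This is a minimal illustration of Webster's classic cycle-length formula, not code from the paper; the cycle bounds and the demo flow ratios are illustrative assumptions.

```python
# Hedged sketch of Webster's fixed-time signal timing method, the baseline
# that EvolveSignal's discovered algorithms are compared against.
# All parameter values below are illustrative, not taken from the paper.

def webster_timings(flow_ratios, lost_time_s, c_min=40.0, c_max=120.0):
    """Compute a cycle length and per-phase green times with Webster's formula.

    flow_ratios: critical flow ratio y_i = q_i / s_i for each phase
                 (demand flow over saturation flow)
    lost_time_s: total lost time L per cycle, in seconds
    c_min/c_max: practical bounds on the cycle length (assumed values)
    """
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("Intersection is oversaturated (Y >= 1)")
    # Webster's optimal cycle length: C0 = (1.5 * L + 5) / (1 - Y)
    cycle = (1.5 * lost_time_s + 5.0) / (1.0 - Y)
    cycle = min(max(cycle, c_min), c_max)  # clamp to practical bounds
    # Distribute the effective green time in proportion to each flow ratio
    effective_green = cycle - lost_time_s
    greens = [effective_green * y / Y for y in flow_ratios]
    return cycle, greens

# Example: two phases with flow ratios 0.3 and 0.2, 10 s lost time
cycle, greens = webster_timings([0.3, 0.2], lost_time_s=10.0)
```

Adjusting `c_min`/`c_max` and the proportional green allocation are exactly the kinds of knobs the abstract says the discovered algorithms modify, which is why the function exposes them explicitly.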


This daringly experimental thriller is a puzzle-lover's delight

New Scientist

Simply by reading this, you have allowed me to hijack your thoughts, each word leaping from my mind to yours. I can even conjure mental images against your will – quick, don't think about a pink elephant! Whatever you do, don't imagine it! Thankfully, there are limits to what I can do to you with words and ideas alone. What if there were a phrase so powerful that I could use it to turn your own mind against you, to the point of death?


3D-EX : A Unified Dataset of Definitions and Dictionary Examples

Almeman, Fatemah, Sheikhi, Hadi, Espinosa-Anke, Luis

arXiv.org Artificial Intelligence

Definitions are a fundamental building block in lexicography, linguistics and computational semantics. In NLP, they have been used for retrofitting word embeddings or augmenting contextual representations in language models. However, lexical resources containing definitions exhibit a wide range of properties, which has implications for the behaviour of models trained and evaluated on them. In this paper, we introduce 3D-EX, a dataset that aims to fill this gap by combining well-known English resources into one centralized knowledge repository in the form of triples. 3D-EX is a unified evaluation framework with carefully pre-computed train/validation/test splits to prevent memorization. We report experimental results that suggest that this dataset could be effectively leveraged in downstream NLP tasks. Code and data are available at https://github.com/F-Almeman/3D-EX.
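The abstract's two key ideas, triples aggregated from multiple resources and splits designed to prevent memorization, can be sketched as follows. The field names, the toy entries, and the term-level split strategy are assumptions for illustration, not taken from the 3D-EX release itself.

```python
# Illustrative sketch of a triple-based definition dataset with a
# term-level train/test split, so that no term seen in training also
# appears in the test set (one way to prevent memorization).
# Field names, entries, and split strategy are assumptions.
from collections import namedtuple
import random

Triple = namedtuple("Triple", ["term", "definition", "example", "source"])

triples = [
    Triple("bank", "land alongside a river", "we sat on the bank", "WordNet"),
    Triple("bank", "a financial institution", "the bank was closed", "WordNet"),
    Triple("glacier", "a slowly moving mass of ice", "the glacier receded",
           "Wiktionary"),
]

def term_level_split(data, test_frac=0.2, seed=0):
    """Assign every occurrence of a term to exactly one partition, so a
    model cannot simply recall a definition it saw during training."""
    terms = sorted({t.term for t in data})
    rng = random.Random(seed)          # seeded for reproducible splits
    rng.shuffle(terms)
    n_test = max(1, int(len(terms) * test_frac))
    test_terms = set(terms[:n_test])
    train = [t for t in data if t.term not in test_terms]
    test = [t for t in data if t.term in test_terms]
    return train, test

train, test = term_level_split(triples)
```

Splitting by term rather than by row matters precisely because aggregated resources repeat terms: a random row-level split would leak "bank" into both partitions.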


Computational Language Assessment: Open Brain AI

Themistocleous, Charalambos

arXiv.org Artificial Intelligence

Language assessment plays a crucial role in diagnosing and treating individuals with speech, language, and communication disorders caused by neurogenic conditions, whether developmental or acquired. However, traditional manual assessment methods have several drawbacks. They are often laborious and time-consuming to administer and score, causing additional patient stress. Moreover, they divert valuable resources from treatment. To address these challenges, we introduce Open Brain AI (openbrainai.com), a computational platform that harnesses AI techniques, including machine learning and natural language processing, to automatically analyze spoken and written speech productions. The platform represents a promising advancement in language assessment: its ability to provide reliable and efficient measurements can enhance the accuracy of diagnoses and optimize treatment strategies for individuals with speech, language, and communication disorders. Furthermore, the automation and objectivity offered by the platform alleviate the burden on clinicians, enabling them to streamline their workflow and allocate more time and resources to direct patient care. Notably, the platform is freely accessible, empowering clinicians to conduct critical analyses of their data while devoting more attention to other critical aspects of therapy and treatment.


1923 cartoon eerily predicted 2023's AI art generators

#artificialintelligence

In 1923, an editorial cartoonist named H.T. Webster drew a humorous cartoon for the New York World newspaper depicting a fictional 2023 machine that would generate ideas and draw them as cartoons automatically. It presaged recent advancements in AI image synthesis, one century later, that actually can create artwork automatically. The vintage cartoon carries the caption "In the year 2023 when all our work is done by electricity." It depicts a cartoonist standing by his drawing table and making plans for social events while an "idea dynamo" generates ideas and a "cartoon dynamo" renders the artwork. Interestingly, this separation of labor resembles today's AI art pipelines, in which one model supplies the idea or prompt and another synthesizes the image.


Why Banks Embrace AI Platforms-as-a-Service

#artificialintelligence

Sudhir Jha, senior vice president and head of Mastercard's Brighterion unit, told Karen Webster in the most recent On the Agenda discussion that artificial intelligence (AI) can strengthen credit and risk management and broaden its value well beyond simply improving day-to-day operations. But to get there, enterprises need a bit of guidance. "What used to be cutting-edge technology five years ago is no longer cutting edge," he said, and enterprises that try to keep up with the rapid changes in data science and analysis on their own can be quickly overwhelmed. An enterprise that starts with regression and pattern-analysis solutions might scale rapidly and come to benefit from neural networks. For banks, acquirers and healthcare payments executives, he said, using vendors' AI-based solutions helps avoid undue losses from fraud, the abuse and misallocation of funds, and poor underwriting decisions.


Adobe's Project Shasta is an AI-powered, web-based audio editor

Engadget

Adobe is testing out a new web-based tool that uses AI to simplify audio recording. The software is called Project Shasta, and it could make recording and editing podcasts and other projects a lot easier and more approachable. The project started off in Adobe Labs as an experiment to find "new ways to help people edit audio on the web," Mark Webster, Adobe's head of audio products, wrote in a post on Product Hunt. "But then it became clear that the pandemic made recording difficult too, even for audio professionals. Our vision became empowering everyone with the tools they needed to create professional sounding audio."


Patents Show Finding Transaction Anomalies

#artificialintelligence

The window financial institutions (FIs) have to distinguish "good" customers from "bad" lasts milliseconds. As fraudsters steal their unwitting victims' online identities, intercept SMS messages, and mask device locations to commit payments fraud, banks and other firms need to be able to spot "signs" hidden in the eCommerce deluge that can separate genuine transactions from fraudulent ones. It's a $40 billion problem that, as Dave Excell, founder of Featurespace, told Karen Webster, calls for deep learning networks and a range of automated advanced technologies and models to construct the best lines of defense against the fraudsters. Two new patents, leveraging those advanced technologies, can help FIs pinpoint behavioral changes and identify high-risk behavior, stopping fraud and financial crime before it happens. Featurespace said Monday (July 12) it had filed those two global patents, aimed at transforming network architecture and risk scoring to protect customers and accounts.


Smart Speakers Go Beyond Waiting to Be Asked

WSJ.com: WSJD - Technology

The Amazon Echo Show 10 automatically moves its display to face the user, even if it is performing a task that doesn't need user input, like showing a recipe on the screen. Proactive or not, features in smart-home devices need to address a real user need, not stack the product with unnecessary and potentially confusing tools, said Ashton Udall, senior product manager at Google. The company developed sensor technology to monitor sleep, for example, because its research showed that consumers frequently forget to use or charge the wearables often employed for sleep tracking, or find the devices uncomfortable, he said. Amazon and Google hope the experiences will help them compete for users and more fully integrate their devices into people's lives.


Quantifying Quantum computing's value in financial services - Fintech News

#artificialintelligence

The next great leap for computing may be a bit closer with the help of joint efforts between the U.S. government and the private sector, backed by hundreds of millions of dollars. And along the way, we might see a benefit for the financial services sector in the form of reduced false positives in fraud detection. The U.S. Department of Energy said this week that it will spend $625 million over the next five years to develop a dozen research centers devoted to artificial intelligence (AI) and quantum computing. Another $340 million will come from the private sector and academia, bringing Uncle Sam together with the likes of IBM, Amazon and Google to apply the highest of high tech to a variety of verticals and applications. In an interview with Karen Webster, Dr. Stefan Wörner, global leader for quantum finance and optimization at IBM, said we're getting closer to crossing the quantum-computing Rubicon from concept to real-world applications. The basic premise behind quantum computing is that it can tackle, with blinding speed and pinpoint accuracy, tasks that aren't possible on "regular" computers.