LLMs with Chain-of-Thought Are Non-Causal Reasoners
Bao, Guangsheng, Zhang, Hongbo, Yang, Linyi, Wang, Cunxiang, Zhang, Yue
This paper explores the role of the Chain of Thought (CoT) in Large Language Model (LLM) reasoning. Despite its potential to improve task performance, our analysis reveals a surprising frequency of correct answers following incorrect CoTs and vice versa. We employ causal analysis to assess the cause-effect relationship between CoTs/instructions and answers in LLMs, uncovering the Structural Causal Model (SCM) that LLMs approximate. By comparing the implied SCM with that of human reasoning, we highlight discrepancies between LLM and human reasoning processes. We further examine the factors influencing the causal structure of the implied SCM, revealing that in-context learning, supervised fine-tuning, and reinforcement learning from human feedback significantly impact the causal relations. We release the code and results at https://github.com/StevenZHB/CoT_Causal_Analysis.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
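The mismatch the abstract describes, correct answers following incorrect CoTs and vice versa, can be made concrete with a simple contingency count over evaluation records. This is a minimal illustrative sketch with hypothetical field names, not the paper's released analysis code: a large off-diagonal mass suggests the CoT is not acting as the cause of the answer.

```python
from collections import Counter

def cot_answer_contingency(records):
    """Tabulate how often CoT correctness and answer correctness co-occur.

    Each record is a dict with boolean fields 'cot_correct' and
    'answer_correct' (hypothetical names for illustration). Returns the
    relative frequency of each (cot_correct, answer_correct) pair.
    """
    table = Counter((r["cot_correct"], r["answer_correct"]) for r in records)
    total = sum(table.values()) or 1  # guard against an empty record list
    return {pair: count / total for pair, count in table.items()}

# Toy example: one of the correct answers follows an incorrect CoT,
# and one incorrect answer follows a correct CoT.
records = [
    {"cot_correct": True, "answer_correct": True},
    {"cot_correct": False, "answer_correct": True},
    {"cot_correct": False, "answer_correct": False},
    {"cot_correct": True, "answer_correct": False},
]
freqs = cot_answer_contingency(records)
```

On this toy input each of the four outcome pairs has frequency 0.25; in a model whose answers were truly caused by its CoT, the off-diagonal pairs would be rare.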
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
Karamcheti, Siddharth, Nair, Suraj, Balakrishna, Ashwin, Liang, Percy, Kollar, Thomas, Sadigh, Dorsa
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning; adoption that has fueled a wealth of new models such as LLaVa, InstructBLIP, and PaLI-3. Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored, making it challenging to understand what factors account for model performance, a challenge further complicated by the lack of objective, consistent evaluations. To address these gaps, we first compile a suite of standardized evaluations spanning visual question answering, object localization from language, and targeted challenge sets that probe properties such as hallucination; evaluations that provide calibrated, fine-grained insight into a VLM's capabilities. Second, we rigorously investigate VLMs along key design axes, including pretrained visual representations and quantifying the tradeoffs of using base vs. instruct-tuned language models, amongst others. We couple our analysis with three resource contributions: (1) a unified framework for evaluating VLMs, (2) optimized, flexible code for VLM training, and (3) checkpoints for all models, including a family of VLMs at the 7-13B scale that strictly outperform InstructBLIP and LLaVa v1.5, the state-of-the-art in open-source VLMs.
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Monaco (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Asia > China (0.04)
Research Shows that Superintelligent AI is Impossible to be Controlled
A group of researchers have come to the terrifying conclusion that containing superintelligent AI may not be possible. They claim that controlling such an AI would fall beyond human comprehension. In a paper titled 'Superintelligence Cannot be Contained: Lessons from Computability Theory', published in the Journal of Artificial Intelligence Research, the researchers argue that total containment would, in principle, be impossible due to fundamental limits inherent to computing. They further claim that it is mathematically impossible for humans to calculate an AI's plans, thereby making it uncontainable. The authors argue that implementing a rule for an artificial intelligence to "cause no harm to humans" would not be an option if humans cannot predict the scenarios that an AI may come up with.
Disruptive Showdown: AI-powered Cyberattacks will be Controlled by AI
AI is revolutionizing many industries across the globe, such as manufacturing, retail, pharmaceuticals, and IT, but it is also reinventing cyberattacks. Since the onset of the coronavirus pandemic, remote work culture and the rapid shift to cloud computing have encouraged hackers to come up with innovative ways to break into online networks. These cyberattacks pose a severe risk to worldwide security. According to a report by MIT Technology Review Insights, in association with Darktrace, an AI cybersecurity company, "Offensive AI risks and developments in the cyberthreat landscape are redefining enterprise security as humans already struggle to keep pace with advanced attacks." Because cyberattacks have become more sophisticated with time, professionals are researching ways to use AI to combat these threats.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.75)
Driver Identification via the Steering Wheel
Gahr, Bernhard, Liu, Shu, Koch, Kevin, Barata, Filipe, Dahlinger, André, Ryder, Benjamin, Fleisch, Elgar, Wortmann, Felix
Driver identification has emerged as a vital research field, where both practitioners and researchers investigate the potential of driver identification to enable a personalized driving experience. Within recent years, a selection of studies have reported that individuals could be perfectly identified based on their driving behavior under controlled conditions. However, research investigating the potential of driver identification under naturalistic conditions claims accuracies only marginally higher than random guessing. The paper at hand provides a comprehensive summary of the recent work, highlighting the main discrepancies in the design of the machine learning approaches, primarily in the window length parameter that was considered. Key findings further indicate that longitudinal vehicle control information is particularly useful for driver identification, leaving a research gap on the extent to which lateral vehicle control can be used for reliable identification. Building upon existing work, we provide a novel approach for the design of the window length parameter and provide evidence that reliable driver identification can be achieved with data limited to the steering wheel only. The results and insights in this paper are based on data collected from the largest naturalistic driving study conducted in this field. Overall, a neural network based on GRUs was found to provide better identification performance than traditional methods, increasing the prediction accuracy from under 15% to over 65% for 15 drivers. When leveraging the full field study dataset, comprising 72 drivers, the approach improved on a random-guess baseline by a factor of 25.
- North America > United States > Texas > Coleman County (0.24)
- Europe > Switzerland > Zürich > Zürich (0.15)
- Europe > Austria > Vienna (0.14)
- (7 more...)
- Research Report > Experimental Study (0.70)
- Research Report > New Finding (0.48)
- Research Report > Promising Solution (0.48)
- Transportation > Ground > Road (1.00)
- Health & Medicine (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Security & Privacy (0.68)
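The window length parameter the summary highlights determines how the steering-wheel signal is segmented before being fed to a sequence classifier such as a GRU. The following is a minimal sketch of that sliding-window segmentation step, with assumed window and stride values for illustration, not the study's actual pipeline:

```python
import numpy as np

def segment_steering_signal(signal, window_len, stride):
    """Cut a 1-D steering-wheel angle signal into fixed-length windows.

    window_len: samples per window (the key design parameter discussed
    in the paper); stride: hop between consecutive windows, so overlap
    is window_len - stride. Each row of the result would then be fed to
    a per-driver sequence classifier.
    """
    n_windows = 1 + max(0, len(signal) - window_len) // stride
    return np.stack([
        signal[i * stride : i * stride + window_len]
        for i in range(n_windows)
    ])

# 100 samples, 20-sample windows, 50% overlap -> 9 windows of length 20.
windows = segment_steering_signal(np.arange(100.0), window_len=20, stride=10)
```

Longer windows give the classifier more behavioral context per decision but yield fewer training examples per driver, which is exactly the trade-off the window length parameter controls.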
Can Robots Be Controlled By Brainwaves? These Researchers Are Up For The Game
Traditionally, robots have been configured to perform tasks by programming them explicitly. They have been taught the intricacies of how humans communicate so that they can respond accordingly. This process is not only tedious but also prone to significant error rates. Especially in areas involving safety-critical tasks, the accuracy of robots is of paramount importance. This brings the need to control robots in a quick and effective manner, and brainwaves are a way out.
Watch: Robot Being Controlled, Corrected With Brainwaves And Hand Gestures
If we want to see robots thrive in different fields, it is important to establish effective techniques for controlling or correcting them whenever required. Normally, engineers prefer advanced programming or language processing techniques for a task like that, but those methods do not provide enough flexibility, especially when there are multiple tasks at hand. This is why a group of researchers at MIT's Computer Science and Artificial Intelligence Laboratory developed a new system, one that allows robots to be controlled by our brains and gestures. Though the idea of controlling a machine with our mind may sound too far-fetched, the researchers demonstrated the system successfully and are actually bringing it closer to real-world applications. The system allows a human supervisor to correct a robot's mistakes using gestures and brainwaves.
This Audio Speaker is Controlled By Hand Gestures
A new gesture-controlled hi-fi speaker has been launched that uses artificial intelligence and 3D infrared sensing to bring a new type of music listening experience to the home. Firemoe's AIUR 360 Air Gesture Control Hi-Fi Speaker contains complex pattern-recognition algorithms embedded in software, combined with a 15 mm sensor module, to detect more than 10 types of hand movements and gestures as far away as 11 inches. There are gesture controls for changing songs, adjusting the volume, playing music, pausing music, track control, and more. The AIUR includes Bluetooth 4.2 and has a rechargeable 6,000 mAh on-board battery that allows for up to 12 hours of streaming music. It is also water-resistant, so it can be used outside or in a bathroom.
This 'Smart City' in China Is Controlled By An Artificial Intelligence
The idea of smart cities – infrastructure interlinked by software – isn't new, but it's undeniably cool. Who wouldn't want to live somewhere where programs use data and evidence, not intuition, to actively improve their day-to-day lives? Now imagine that an entire smart city actually exists, but it's even more advanced than you could possibly imagine, where infrastructural systems are altered on the fly by an artificial intelligence (AI). This may sound futuristic, but one such place can already be found in China. As reported back in October 2016, the government of the city of Hangzhou – home to over 9 million people – collaborated with Alibaba and Foxconn to build the "City Brain" project.
DJI's New Spark Camera Drone Weighs Less Than A Can Of Soda, Controlled By Hand Gestures
DJI Wednesday launched its new minicamera drone Spark, which weighs less than a can of soda -- and can be controlled by hand gestures. The small camera drone can take off from the palm of your hand and automatically goes on Gesture Mode, which features PalmControl, allowing you to control the gadget with hand movements. "Even if you've never flown a drone before, flying Spark is easy because the only remote controller you'll need is your hand," DJI said in a press release. With Gesture Mode, you can send the drone away from you, take a selfie or call it back by waving your hands. Spark can also be controlled by a remote controller and your smartphone.