deep thinking
Reverse Physician-AI Relationship: Full-process Clinical Diagnosis Driven by a Large Language Model
Xu, Shicheng, Huang, Xin, Wei, Zihao, Pang, Liang, Shen, Huawei, Cheng, Xueqi
Full-process clinical diagnosis in the real world encompasses the entire diagnostic workflow, which begins with only an ambiguous chief complaint. While artificial intelligence (AI), particularly large language models (LLMs), is transforming clinical diagnosis, its role remains largely that of an assistant to physicians. Under this AI-assisted working pattern, AI can only answer specific medical questions at certain points within the diagnostic process; it lacks the ability to drive the entire diagnosis starting from an ambiguous complaint, which still relies heavily on human physicians. This gap limits AI's ability to fully reduce physicians' workload and enhance diagnostic efficiency. To address this, we propose a paradigm shift that reverses the relationship between physicians and AI, repositioning AI as the primary director with physicians serving as its assistants. We present DxDirector-7B, an LLM endowed with advanced deep thinking capabilities that enable it to drive the full-process diagnosis with minimal physician involvement. Furthermore, DxDirector-7B establishes a robust accountability framework for misdiagnoses, delineating responsibility between the AI and human physicians. In evaluations on rare, complex, and real-world cases under the full-process diagnosis setting, DxDirector-7B not only achieves significantly superior diagnostic accuracy but also reduces physician workload substantially more than state-of-the-art medical LLMs and general-purpose LLMs. Fine-grained analyses across multiple clinical departments and tasks validate its efficacy, and expert evaluations indicate its potential to serve as a viable substitute for medical specialists. These findings mark a new era in which AI, traditionally the physician's assistant, drives the entire diagnostic process, pointing toward an efficient and accurate diagnostic solution that drastically reduces physicians' workload.
SlangDIT: Benchmarking LLMs in Interpretative Slang Translation
Liang, Yunlong, Meng, Fandong, Wang, Jiaan, Zhou, Jie
The challenge of slang translation lies in capturing context-dependent semantic extensions, as slang terms often convey meanings beyond their literal interpretation. While slang detection, explanation, and translation have been studied as isolated tasks in the era of large language models (LLMs), their intrinsic interdependence remains underexplored. The main reason is the lack of a benchmark in which two of the tasks can serve as prerequisites for the third, thereby facilitating idiomatic translation. In this paper, we introduce the interpretative slang translation task (named SlangDIT), consisting of three sub-tasks: slang detection, cross-lingual slang explanation, and slang translation within the current context, with the aim of generating more accurate translations with the help of slang detection and slang explanation. To this end, we construct a SlangDIT dataset containing over 25k English-Chinese sentence pairs. Each source sentence mentions at least one slang term and is labeled with a corresponding cross-lingual slang explanation. Based on this benchmark, we propose a deep thinking model, named SlangOWL. It first identifies whether the sentence contains a slang term, then judges whether the slang is polysemous and analyzes its possible meanings. Next, SlangOWL provides the best explanation of the slang term for the current context. Finally, based on this whole chain of thought, SlangOWL offers a suitable translation. Our experiments on LLMs (e.g., Qwen2.5 and Llama-3.1) show that our deep thinking approach indeed enhances the performance of LLMs: the proposed SlangOWL significantly surpasses both vanilla models and supervised fine-tuned models without thinking.
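The staged pipeline the abstract describes (detect a slang term, explain it cross-lingually, then translate conditioned on that explanation) can be sketched in plain Python. This is a hypothetical toy illustration, not the SlangOWL model: the lexicon, the `detect_slang`/`explain_slang`/`translate` helpers, and the example explanation are all invented stand-ins for what would be LLM reasoning stages.

```python
# Toy sketch of a detect -> explain -> translate pipeline, with lookup
# tables standing in for the model's reasoning at each stage.

def detect_slang(sentence, slang_lexicon):
    """Stage 1: return slang terms from the lexicon found in the sentence."""
    words = sentence.lower().split()
    return [term for term in slang_lexicon if term in words]

def explain_slang(term, explanations):
    """Stage 2: look up a cross-lingual explanation for a detected term."""
    return explanations.get(term, "unknown term")

def translate(sentence, detected, explanations):
    """Stage 3: a real system would condition the translation on the
    detection and explanation; here we just surface that reasoning."""
    notes = {t: explain_slang(t, explanations) for t in detected}
    return {"sentence": sentence, "slang": notes}

lexicon = {"ghost"}  # toy single-entry lexicon
explanations = {"ghost": "突然断联 (to cut off contact abruptly)"}

sentence = "Did he just ghost you"
result = translate(sentence, detect_slang(sentence, lexicon), explanations)
print(result["slang"])
```

The point of the structure is the dependency the benchmark encodes: the translation stage receives the outputs of the two earlier stages rather than working from the raw sentence alone.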
Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints
Bear, Jay, Prügel-Bennett, Adam, Hare, Jonathon
Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different-sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training and offer no guarantees of convergence or termination at the solution. This paper addresses the instability by analyzing the growth of intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints, or DT-L) with many fewer parameters that provide more reliable solutions. Additionally, our DT-L formulation guarantees convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate that DT-L robustly learns algorithms which extrapolate to harder problems than those in the training set, and we benchmark on the traveling salesperson problem to evaluate the modified system on an NP-hard problem where DT fails to learn.
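The convergence guarantee the abstract mentions follows the standard contraction-mapping argument: if the recurrent update has Lipschitz constant below 1, the Banach fixed-point theorem gives a unique fixed point regardless of the starting state. A minimal sketch (assuming a simple spectrally rescaled linear map plus ReLU, not the paper's actual DT-L architecture):

```python
import numpy as np

# Illustrative contraction, not the DT-L network: rescale a random
# linear map so its spectral norm is at most 0.8, making the update
# h <- relu(W h + b) a contraction (ReLU is 1-Lipschitz).
rng = np.random.default_rng(0)

W = rng.standard_normal((8, 8))
W *= 0.8 / np.linalg.norm(W, 2)   # enforce Lipschitz constant <= 0.8
b = rng.standard_normal(8)

def step(h):
    return np.maximum(W @ h + b, 0.0)

# Two very different initial states...
h1 = rng.standard_normal(8)
h2 = rng.standard_normal(8) * 10.0
for _ in range(200):
    h1, h2 = step(h1), step(h2)

# ...collapse to the same unique fixed point: the iteration "terminates"
# at one answer no matter how "thinking" is initialized.
print(np.allclose(h1, h2))
```

The distance between the two trajectories shrinks by at least the factor 0.8 per step, so after 200 iterations they are numerically identical, which is the kind of inference-time uniqueness property the abstract claims for DT-L.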
Deep thinking by Garry Kasparov 2017 - The Sentient Robot
Partly about putting the record straight, partly about the workings of Deep Blue and partly musings on the nexus of man and machine, Kasparov's book is readable and worth reading. In a period when machine learning has become all the rage, it is also interesting to be reminded of the utility of something as basic (these days) as tree search, especially as enhanced and nuanced as it was in Deep Blue.
Garry Kasparov thinks deeply about losing to a machine
Former world chess champion Garry Kasparov is long overdue for telling his side of the story regarding his famous match with the IBM computer Deep Blue in May 1997. The six-game exhibition has been described as a milestone in artificial intelligence, but also as a sad day for the (human) world of chess. But then, important matters are seldom black and white. In the new book Deep Thinking, Kasparov and longtime writing partner Mig Greengard intertwine his experiences--before, during, and after the match--with a historical overview of chess-playing AI to produce a well-written, accessible book that provides food for thought about our future alongside increasingly intelligent machines. Many in the chess community, who may buy the book for insight into the match's outcome, will be surprised to see a side of Kasparov that the general public has not seen before--a man who has mellowed over time.
Garry Kasparov: "Deep Thinking" Talks at Google
Garry Kasparov and DeepMind's CEO Demis Hassabis discuss Garry's new book "Deep Thinking", his match with Deep Blue, and his thoughts on the future of AI in the world of chess. Get the book here: https://goo.gl/OwuOcW It was a watershed moment in the history of technology: machine intelligence had arrived at the point where it could best human intellect. It was no coincidence that Kasparov became the symbol of man's fight against the machines. Chess has long been the fulcrum in the development of machine intelligence; the hoax automaton 'The Turk' in the 18th century and Alan Turing's first chess program in 1952 were two early examples of the quest for machines to think like humans -- a talent we measured by their ability to beat their creators at chess.