Types of Relations: Defining Analogies with Category Theory
In order to behave intelligently, both humans and machines have to represent their knowledge adequately for how it is used. Humans often use analogies to transfer their knowledge to new domains, or to help others with this transfer via explanations. Hence, an important question is: What representation can be used to construct, find, and evaluate analogies? In this paper, we study features of a domain that are important for constructing analogies. We do so by formalizing knowledge domains as categories. We use the well-known example of the analogy between the solar system and the hydrogen atom to demonstrate how to construct domain categories. We also show how functors, pullbacks, and pushouts can be used to define an analogy, describe its core, and give a corresponding blend of the underlying domains.
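Schematically, and hedging on the paper's exact definitions, the construction the abstract describes can be read as follows: the core of an analogy is a category $G$ equipped with functors into both domains, and a blend is the pushout of that span.

```latex
% Sketch (our reading of the abstract, not the paper's exact definitions):
% the core G of an analogy maps functorially into both domains,
%
%     D_1  <--F_1--  G  --F_2-->  D_2
%
% and a blend B is the pushout of this span, the universal category
% receiving both domains compatibly:
\[
\begin{array}{ccc}
G & \xrightarrow{\;F_2\;} & D_2 \\
{\scriptstyle F_1}\big\downarrow & & \big\downarrow \\
D_1 & \longrightarrow & B = D_1 +_G D_2
\end{array}
\]
```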
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization
Zhang, Peiyan, Jin, Haibo, Hu, Leyang, Li, Xinnuo, Kang, Liying, Luo, Man, Song, Yangqiu, Wang, Haohan
Recent advancements in large language models (LLMs) have significantly enhanced the ability of LLM-based systems to perform complex tasks through natural language processing and tool interaction. However, optimizing these LLM-based systems for specific tasks remains challenging, often requiring manual interventions like prompt engineering and hyperparameter tuning. Existing automatic optimization methods, such as textual feedback-based techniques (e.g., TextGrad), tend to focus on immediate feedback, analogous to using immediate derivatives in traditional numerical gradient descent. However, relying solely on such feedback can be limited when the adjustments made in response to this feedback are either too small or fluctuate irregularly, potentially slowing down or even stalling the optimization process. To overcome these challenges, more adaptive methods are needed, especially in situations where the system's response is evolving slowly or unpredictably. In this paper, we introduce REVOLVE, an optimization method that tracks how "R"esponses "EVOLVE" across iterations in LLM systems. By focusing on the evolution of responses over time, REVOLVE enables more stable and effective optimization by making thoughtful, progressive adjustments at each step. Experimental results demonstrate that REVOLVE outperforms competitive baselines, achieving a 7.8% improvement in prompt optimization, a 20.72% gain in solution refinement, and a 29.17% increase in code optimization. Additionally, REVOLVE converges in fewer iterations, resulting in significant computational savings. These advantages highlight its adaptability and efficiency, positioning REVOLVE as a valuable tool for optimizing LLM-based systems and accelerating the development of next-generation AI technologies. Code is available at: https://github.com/Peiyance/REVOLVE.
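The contrast the abstract draws — reacting only to the immediate derivative versus tracking how responses evolve across iterations — can be illustrated numerically. The sketch below is a generic momentum-style analogue of that difference on a toy quadratic objective, not REVOLVE's textual method; the function and hyperparameters are invented for illustration.

```python
# Toy numerical illustration (not REVOLVE itself, which operates on textual
# feedback): plain gradient descent reacts only to the immediate derivative,
# while a momentum term also tracks how the signal has been evolving across
# iterations, making faster progress on the same objective.

def grad(x):
    # derivative of f(x) = x**2 / 2
    return x

def optimize(x0, steps, lr=0.01, momentum=0.0):
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v + grad(x)   # momentum accumulates past responses
        x -= lr * v
    return x

plain = optimize(2.0, 200)                   # immediate feedback only
evolved = optimize(2.0, 200, momentum=0.9)   # tracks evolution over time
print(abs(plain), abs(evolved))              # momentum lands closer to 0
```

With these (made-up) settings, the momentum run ends far closer to the minimum after the same number of iterations, which is the intuition behind "tracking response evolution" rather than reacting to each step in isolation.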
- North America > United States > Illinois (0.04)
- Asia > China > Hong Kong (0.04)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.04)
REvolve: Reward Evolution with Large Language Models for Autonomous Driving
Hazra, Rishi, Sygkounas, Alkis, Persson, Andreas, Loutfi, Amy, Martires, Pedro Zuidberg Dos
Designing effective reward functions is crucial to training reinforcement learning (RL) algorithms. However, this design is non-trivial, even for domain experts, due to the subjective nature of certain tasks that are hard to quantify explicitly. In recent works, large language models (LLMs) have been used for reward generation from natural language task descriptions, leveraging their extensive instruction tuning and commonsense understanding of human behavior. In this work, we hypothesize that LLMs, guided by human feedback, can be used to formulate human-aligned reward functions. Specifically, we study this in the challenging setting of autonomous driving (AD), wherein notions of "good" driving are tacit and hard to quantify. To this end, we introduce REvolve, an evolutionary framework that uses LLMs for reward design in AD. REvolve creates and refines reward functions by utilizing human feedback to guide the evolution process, effectively translating implicit human knowledge into explicit reward functions for training (deep) RL agents. We demonstrate that agents trained on REvolve-designed rewards align closely with human driving standards, thereby outperforming other state-of-the-art baselines.
- Asia > Indonesia > Bali (0.04)
- North America > United States > New York (0.04)
- North America > Montserrat (0.04)
- (3 more...)
- Overview (0.67)
- Research Report (0.64)
- Transportation > Ground > Road (0.71)
- Information Technology > Robotics & Automation (0.71)
- Leisure & Entertainment > Games > Chess (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
The feud between Elon Musk and Sam Altman – explained
The day after OpenAI launched in December 2015, its co-founder Sam Altman sat down with Vanity Fair to discuss what the magazine described as "a non-profit company to save the world from a dystopian future". Altman talked up his vision for keeping artificial intelligence safe and distributing it widely, as well as his good working relationship with his co-chair, Tesla CEO Elon Musk. "I really trust him, which is obviously important to everyone involved," Altman said. Almost a decade later, Musk and Altman are locked in a public spat and looming legal battle that revolves around the end of their previous partnership and OpenAI's creation of a for-profit subsidiary now valued at $80bn. Musk filed a suit against OpenAI in a California court last week, alleging that Altman and other executives had "breached the founding agreement" of the company by pursuing private commercial success instead of working to benefit humanity.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
FAME: Flexible, Scalable Analogy Mappings Engine
Jacob, Shahar, Shani, Chen, Shahaf, Dafna
Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only the names of the entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method's output is easily interpretable, allowing users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2% of classical 2x2 analogy problems (guess level=50%). On larger problems, it achieves 77.8% accuracy (mean guess level=13.1%). In another experiment, we show our algorithm exceeds human performance, and its automatic suggestions of new entities resemble those suggested by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability.
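The mapping idea the abstract describes — pairing entities across domains by the overlap of their automatically extracted commonsense representations — can be sketched in miniature. This is a toy illustration, not FAME's actual algorithm: each entity is represented by a hand-written bag of relation labels, and pairs are committed greedily by Jaccard similarity.

```python
# Toy sketch of similarity-based entity mapping (not FAME's algorithm):
# entities are bags of commonsense relation labels; cross-domain pairs are
# committed greedily, best Jaccard overlap first. The relation labels below
# are illustrative assumptions.

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def map_entities(src, tgt):
    # score every cross-domain pair, then commit the best-scoring pairs first
    pairs = sorted(
        ((jaccard(fa, fb), a, b)
         for a, fa in src.items() for b, fb in tgt.items()),
        key=lambda t: t[0],
        reverse=True,
    )
    mapping, used = {}, set()
    for score, a, b in pairs:
        if a not in mapping and b not in used:
            mapping[a] = b
            used.add(b)
    return mapping

solar = {
    "sun": {"central", "massive", "attracts"},
    "planet": {"orbits", "attracted", "smaller"},
}
atom = {
    "nucleus": {"central", "massive", "attracts"},
    "electron": {"orbits", "attracted", "smaller"},
}
print(map_entities(solar, atom))  # sun -> nucleus, planet -> electron
```

Greedy matching also degrades gracefully when the domains have different sizes, which mirrors the paper's point about handling partial analogies, though FAME's real representations come from commonsense knowledge sources rather than hand-written sets.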
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Asia > China (0.04)
- Transportation > Ground > Road (0.67)
- Automobiles & Trucks (0.67)
- Health & Medicine > Therapeutic Area (0.46)
- Transportation > Passenger (0.46)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.68)
Full Stack Data Scientists Are Trending Right Now: Here's How You Can Become One
Never before have we seen so many job ads for a full-stack data scientist. But what exactly is one? A full-stack data scientist is a unicorn who is capable of fulfilling the role of a software engineer, data engineer, business analyst, machine learning engineer, and data scientist, all wrapped up in one package. These individuals have diverse skill sets beyond even that of a regular data scientist and could be a company's one-stop shop for managing the entire lifecycle of a data science project. This full lifecycle approach means that full-stack data scientists are capable of identifying the business need (or working with C-level executives to determine which problem needs to be solved), setting up the data architecture required for the project, analyzing data and building models, and finally deploying the model into the production environment.
U.S. pursues a unique solution to fight hackers. It revolves around esports.
The U.S. cyber team's head coach, retired U.S. Army Special Forces Lt. Col. TJ O'Connor, noted the unique platform presented by cybersecurity competitions. Unlike other forms of computer science education, O'Connor said, staying up to date on the latest developments in cybersecurity is difficult, with hackers constantly iterating on and developing new tactics to break through cyberdefenses. Even so, it would still be hard to simulate being in the thick of such an operation.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Artificial Intelligence > Games (0.40)
Artificial Intelligence, Warfare, and Bias – PRIO Blogs
When you think about Artificial Intelligence (AI) and war, you might find yourself thinking about killer robots, like those we have seen in movies such as The Terminator. In reality, AI and warfare looks quite different from these popularized images, and today we see many countries around the world exploring the use of AI and implementing AI systems into their militaries and defense programs. With this increased interest in AI, there has also been a growing debate about the ethics and legality of using AI in warfare. While there are many concerning aspects about AI being utilized in warfare, one that is particularly troubling, but has also received less attention, is that of biased AI systems. Certain lessons can be learnt by looking at examples of biased AI in non-military settings. It has become increasingly clear from a number of investigations and studies that the biases that exist within our society will also become embedded into AI.
- North America > United States (0.16)
- Europe > United Kingdom (0.05)
Ptolemy and the Limits of Deep Learning
That's because we are encumbered in this world by limited computational capabilities. We need abstractions and generalizations to navigate its complexities. But along the way, in developing ways to simplify the complex world, we discovered recurring patterns that have infinite reach. The models we discovered also allowed us to reason about many more different systems and to create universal computational machines. Obscured from our intuitive understanding of the world is the fundamental reality that everything is of computational origin.
The Computer Scientist Training AI to Think with Analogies
The Pulitzer Prize-winning book Gödel, Escher, Bach inspired legions of computer scientists in 1979, but few were as inspired as Melanie Mitchell. After reading the 777-page tome, Mitchell, a high school math teacher in New York, decided she "needed to be" in artificial intelligence. She soon tracked down the book's author, AI researcher Douglas Hofstadter, and talked him into giving her an internship. She had only taken a handful of computer science courses at the time, but he seemed impressed with her chutzpah and unconcerned about her academic credentials. Mitchell prepared a "last-minute" graduate school application and joined Hofstadter's new lab at the University of Michigan in Ann Arbor.
- North America > United States > New York (0.24)
- North America > United States > Michigan (0.24)
- Education > Curriculum > Subject-Specific Education (0.89)
- Education > Educational Setting (0.54)