Hell or High Water: Evaluating Agentic Recovery from External Failures
Wang, Andrew, Hager, Sophia, Asija, Adi, Khashabi, Daniel, Andrews, Nicholas
As language model agents are applied to real-world problems of increasing complexity, they will be expected to formulate plans across large search spaces. If those plans fail for reasons beyond their control, how well do language agents search for alternative ways to achieve their goals? We devise a specialized agentic planning benchmark to study this question. Each planning problem is solved via combinations of function calls. The agent searches for relevant functions from a set of over four thousand possibilities, and observes environmental feedback in the form of function outputs or error messages. Our benchmark confronts the agent with external failures in its workflow, such as functions that suddenly become unavailable. At the same time, even with the introduction of these failures, we guarantee that the task remains solvable. Ideally, an agent's performance on the planning task should not be affected by the presence of external failures. Overall, we find that language agents struggle to formulate and execute backup plans in response to environment feedback. While state-of-the-art models are often able to identify the correct function to use in the right context, they struggle to adapt to feedback from the environment and often fail to pursue alternate courses of action, even when the search space is artificially restricted. We provide a systematic analysis of the failures of both open-source and commercial models, examining the effects of search space size, as well as the benefits of scaling model size in our setting. Our analysis identifies key challenges for current generative models as well as promising directions for future work.
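The recovery behavior the benchmark probes can be illustrated with a minimal sketch. Note this is purely illustrative: the function names and the failure mechanism below are invented for this example, not taken from the benchmark. The idea is that an agent tries candidate functions in turn and treats error feedback as a cue to pursue an alternate route to the same goal.

```python
# Hypothetical functions standing in for tools the agent can call.
def convert_currency_v1(amount):
    raise RuntimeError("function unavailable")  # simulated external failure

def convert_currency_v2(amount):
    return amount * 0.9  # a working alternative route to the same goal

def solve_with_fallbacks(goal_input, candidates):
    """Try each candidate function; on an external failure, fall back
    to the next one rather than giving up on the goal."""
    for fn in candidates:
        try:
            return fn(goal_input)  # success: this plan achieved the goal
        except RuntimeError:
            continue  # error feedback: search for a backup plan
    return None  # no candidate worked

result = solve_with_fallbacks(100, [convert_currency_v1, convert_currency_v2])
# The task remains solvable despite the failure of the first function.
```

An ideal agent behaves like this loop: its final success rate depends only on whether some solution path exists, not on which individual functions happen to fail.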
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Computer made out of human BRAINS could solve the world's energy crisis - here's the scientist making science fiction reality
There is a lot of fear about robots replacing humans. But maybe it should be the machines worrying about us. Swedish scientists have created the world's first 'living computer', made out of human brain tissue. It consists of 16 organoids, or clumps of lab-grown brain cells, which send information between each other. They work much like a traditional computer chip - sending and receiving signals through their neurons, which act like circuits.
- Energy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.56)
7 Essential Tips to Effectively Manage Your AI Projects!
AI projects seem to get an overwhelming amount of attention and investment from global companies. The pandemic-induced digital transformation has acted as fertile ground for AI (artificial intelligence) to flourish. Deployment and scaling of AI projects across more refined and critical scenarios will soon become the norm. But the success rate of AI projects is not as appealing as the hype they receive. As per two Gartner reports, 85% of AI and machine learning projects fail to deliver, and only 53% make it from prototype to production.
- Information Technology > Security & Privacy (0.48)
- Banking & Finance > Trading (0.31)
GitHub - booknlp/booknlp: BookNLP, a natural language processing pipeline for books
The larger and more accurate big model is suited to GPUs and multi-core computers; the faster small model is more appropriate for personal computers. See the table below for a comparison of the two, both in terms of overall speed and in accuracy on the tasks that BookNLP performs. To explore running BookNLP on a GPU in Google Colab, see this notebook. If using a GPU, install pytorch for your system and CUDA version by following the installation instructions at https://pytorch.org. This runs the full BookNLP pipeline; you can run only some elements of the pipeline (to cut down on computation time) by specifying them in that parameter (e.g., to run only entity tagging and event tagging, change model_params above to include "pipeline":"entity,event").
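As a sketch, restricting the pipeline as described might look like the following. The input and output paths are placeholders, and actually running the pipeline requires the booknlp package and its trained models to be installed; only the parameter dictionary is shown live here.

```python
# Select which pipeline components run and which model size to load.
model_params = {
    "pipeline": "entity,event",  # run only entity and event tagging
    "model": "small",            # faster model, suited to personal computers
}

# Invocation per the BookNLP README (requires the booknlp package):
# from booknlp.booknlp import BookNLP
# booknlp = BookNLP("en", model_params)
# booknlp.process("input/novel.txt", "output_dir/", "novel")
```

Dropping components from the "pipeline" string is the knob for trading coverage against computation time.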
UCL Data Science Society: Python Fundamentals
This year, as Head of Science for the UCL Data Science Society, I am helping the society present a series of 20 workshops throughout the academic year, covering topics such as an introduction to Python, a Data Scientist's toolkit, and Machine Learning methods. For each workshop that I present and deliver, I aim to write a short blogpost outlining the main points, with links to the full workshop for anyone who wishes to follow along. All of these can be found on our GitHub, which will be updated throughout the year with new workshops and challenges. The first workshop is the introduction to Python fundamentals, which introduces the programming environment that members can use and covers the basics of Python such as variables, data types, and operators. While some of the highlights are shared here, the full workshop, including the problem sheet, can be found here.
- Information Technology > Data Science (0.92)
- Information Technology > Artificial Intelligence > Machine Learning (0.56)
- Information Technology > Communications > Social Media (0.38)
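A few lines in the spirit of that first workshop, showing variables, data types, and operators (the values are arbitrary examples, not material from the workshop itself):

```python
# Variables and basic data types
count = 3             # int
price = 2.5           # float
name = "UCL"          # str
active = True         # bool

# Operators
total = count * price       # arithmetic: 7.5
label = name + " DSS"       # string concatenation: "UCL DSS"
is_cheap = total < 10       # comparison: True
```

Python infers each variable's type from the value assigned to it, which is why no type declarations appear above.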
Fundamentals of AI : AI for the Layman
Enough with the layman's terms; let's look at AI in a little more depth. This form of AI (supervised learning) works by exposing the machine to data along with its labels. The labels associated with the data tell the machine what the data represents. When we humans have to learn something, our best approach is to revise that thing again and again until our brain masters it. Similarly, in machine learning, data is shown to the machine over several iterations until the machine understands the data and is able to associate labels with similar data without any external help.
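A minimal illustration of this label-driven, iterative learning is a toy perceptron (a generic example, not any specific system from the article): labelled points are shown repeatedly, and each wrong prediction nudges the model until it labels similar data on its own.

```python
# Points with labels: class 1 in the upper-right, class 0 in the lower-left.
data = [((2.0, 1.0), 1), ((1.0, 3.0), 1), ((-1.0, -2.0), 0), ((-2.0, -1.0), 0)]
w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # repeated exposure, like revising until mastered
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred       # nonzero only when the guess was wrong
        w[0] += lr * err * x1    # nudge the model toward the correct label
        w[1] += lr * err * x2
        b += lr * err

def predict(x1, x2):
    # After training, the model labels similar data without external help.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The loop is the "several iterations" from the paragraph above: learning happens only through the mismatch between the machine's guess and the provided label.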
Quantum brain: The hidden answers to the open questions in AI
In a recent report published in Nanotechnology, physicists at Radboud University stated that they have moved a crucial step closer to developing a "quantum brain." This means an entirely new generation of computers may become possible, built from an intelligent material that learns through physical changes in itself, much like a human brain. This opens up a new area of challenges for AI professionals. An intelligent human brain learns by changing itself at the physical level, and this behavior can be explained by applying quantum mechanical theories such as superposition and entanglement.
IT fundamentals
As the name suggests, this is the part that introduces you to the entire course (which is itself an introduction to computer science). You can treat this section as a "mini-version" of the rest of the course: it shows you the kinds of topics you may expect in the later parts. I cover the various topics very briefly to give you an overall picture. For example, I mention the topic of programming here only in passing, giving a very general view of it; the course has a whole part named "Software", in which I show you exactly what it is about: how programs are created today, what tools are used, and so on. Completing this part of the course, therefore, probably will not take you much time.
Manan Suri's computer chips mimic the workings of the human brain.
Manan Suri has built key elements of computer chips that mimic the learning ability and energy efficiency of the brain. And he did it by harnessing a quirk of next-generation memory technology. That technology is known as emerging non-volatile memory (eNVM). Because of peculiarities in their nanoscale physics, eNVM devices often behave in random ways, which in computers is usually a flaw. But Suri realized that this irregularity could help researchers build so-called neuromorphic chips, which emulate the neurons and synapses in our brains.