Hell or High Water: Evaluating Agentic Recovery from External Failures

Wang, Andrew, Hager, Sophia, Asija, Adi, Khashabi, Daniel, Andrews, Nicholas

arXiv.org Artificial Intelligence

As language model agents are applied to real-world problems of increasing complexity, they will be expected to formulate plans across large search spaces. If those plans fail for reasons beyond their control, how well do language agents search for alternative ways to achieve their goals? We devise a specialized agentic planning benchmark to study this question. Each planning problem is solved via combinations of function calls. The agent searches for relevant functions from a set of over four thousand possibilities, and observes environmental feedback in the form of function outputs or error messages. Our benchmark confronts the agent with external failures in its workflow, such as functions that suddenly become unavailable. At the same time, even with the introduction of these failures, we guarantee that the task remains solvable. Ideally, an agent's performance on the planning task should not be affected by the presence of external failures. Overall, we find that language agents struggle to formulate and execute backup plans in response to environment feedback. While state-of-the-art models are often able to identify the correct function to use in the right context, they struggle to adapt to feedback from the environment and often fail to pursue alternate courses of action, even when the search space is artificially restricted. We provide a systematic analysis of the failures of both open-source and commercial models, examining the effects of search space size, as well as the benefits of scaling model size in our setting. Our analysis identifies key challenges for current generative models as well as promising directions for future work.


Computer made out of human BRAINS could solve the world's energy crisis - here's the scientist making science fiction reality

Daily Mail - Science & tech

There is a lot of fear about robots replacing humans. But maybe it should be the machines worrying about us. Swedish scientists have created the world's first 'living computer' made out of human brain tissue. It is composed of 16 organoids, clumps of brain cells grown in a lab, which send information between each other. They work much like a traditional computer chip, sending and receiving signals through their neurons, which act like circuits.


7 Essential Tips to Effectively Manage Your AI Projects!

#artificialintelligence

AI projects seem to get an overwhelming amount of attention and investment from global companies. The pandemic-induced digital transformation has acted as fertile ground for AI (artificial intelligence) to flourish. Deployment and scaling of AI projects across more refined and critical scenarios will soon become the norm. But the success rate of AI projects is not as appealing as the hype they receive. As per two Gartner reports, 85% of AI and machine learning projects fail to deliver, and only 53% of projects make it from prototype to production.


GitHub - booknlp/booknlp: BookNLP, a natural language processing pipeline for books

#artificialintelligence

The larger and more accurate big model is suited to GPUs and multi-core computers; the faster small model is more appropriate for personal computers. See the table below for a comparison of the differences, both in overall speed and in accuracy on the tasks that BookNLP performs. To explore running BookNLP in Google Colab on a GPU, see this notebook. If using a GPU, install pytorch for your system and CUDA version by following the installation instructions on https://pytorch.org. This runs the full BookNLP pipeline; you are able to run only some elements of the pipeline (to cut down on computational time) by specifying them in that parameter (e.g., to only run entity tagging and event tagging, change model_params above to include "pipeline":"entity,event").
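As a sketch of the pipeline restriction the excerpt describes, the snippet below builds a model_params dictionary limited to entity and event tagging; the commented-out invocation follows the pattern in the BookNLP README (the input file and output directory names are placeholders):

```python
# Restricting the BookNLP pipeline to cut down on computational time.
# The "pipeline" key lists which components to run; here only
# entity tagging and event tagging, as described in the README.
model_params = {
    "pipeline": "entity,event",  # instead of the full component list
    "model": "big",              # or "small" for personal computers
}

# Actual invocation (requires `pip install booknlp` plus model downloads;
# file names below are placeholders):
# from booknlp.booknlp import BookNLP
# booknlp = BookNLP("en", model_params)
# booknlp.process("input_file.txt", "output_dir/", "book_id")

print(model_params["pipeline"])
```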


UCL Data Science Society: Python Fundamentals

#artificialintelligence

This year, as Head of Science for the UCL Data Science Society, I am helping the society present a series of 20 workshops throughout the academic year, covering topics such as an introduction to Python, a Data Scientist's toolkit, and Machine Learning methods. For each workshop that I present and deliver, I aim to create a small blogpost outlining the main points, with links to the full workshop for anyone who wishes to follow along. All of these can be found at our GitHub, which will be updated throughout the year with new workshops and challenges. The first workshop up is the introduction to Python fundamentals, which introduces the programming environment that members can use and covers the basics of Python such as variables, data types, and operators. While some of the highlights will be shared here, the full workshop, including the problem sheet, can be found here.
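To give a flavour of what the fundamentals workshop covers, here is a minimal sketch of variables, data types, and operators in Python (the values are illustrative, not taken from the workshop materials):

```python
# Variables and basic data types
name = "UCL Data Science Society"  # str
count = 20                         # int: number of workshops
pi_approx = 3.14                   # float
is_workshop = True                 # bool

# Arithmetic operators
total = 7 + 3        # addition: 10
remainder = 7 % 3    # modulo: 1

# String and comparison operators
greeting = "Hello, " + name   # concatenation
enough = count > 10           # comparison: True

print(greeting)
print(total, remainder, enough)
```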


Fundamentals of AI : AI for the Layman

#artificialintelligence

Enough with the layman's terms; let's look at AI in a little more depth. This form of AI works by exposing the machine to data along with its labels. The labels associated with the data tell the machine what the data represents. When we -- humans -- have to learn something, our best approach is to revise that thing again and again until our brain masters it. Similarly, in machine learning, data is shown to the machine over several iterations until the machine understands the data and is able to associate labels with similar data without any external help.
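The idea of associating labels with similar data can be sketched in a few lines of pure Python: a toy 1-nearest-neighbour "classifier" over made-up fruit measurements (all data below is invented for illustration):

```python
# Toy labelled data: each example is (weight_in_grams, colour_score)
# paired with a label. The numbers are invented for illustration.
training_data = [
    ((150, 0.9), "apple"),
    ((170, 0.8), "apple"),
    ((120, 0.2), "lemon"),
    ((110, 0.3), "lemon"),
]

def predict(features):
    """Label a new example with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda pair: distance(pair[0], features))
    return closest[1]

# A new, unlabelled example close to the apple measurements:
print(predict((160, 0.85)))
```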


Quantum brain: The hidden answers to the open questions in AI

#artificialintelligence

In a recent report published in Nanotechnology, physicists at Radboud University stated that they have taken a crucial step towards developing a "quantum brain." This means an entirely new generation of computers could become possible, built from an intelligent material that learns through physical changes in itself, similar to a human brain. This opens up a new set of challenges for AI professionals. An intelligent human brain learns by changing itself at the physical level, behaviour that can be described using quantum mechanical concepts such as superposition and entanglement.


IT fundamentals

#artificialintelligence

As the name suggests, this is the part that introduces you to the entire course (which is itself an introduction to computer science). You can treat this section as a "mini-version" of the rest of the course: it shows you what kinds of topics to expect in the later parts. I cover the various topics very briefly to give you an overall picture. For example, I mention the topic of programming only briefly here, giving a very general view of it; the course has a whole part named "Software" in which I show you exactly what it is about, explaining how programs are created today, what tools are used, and so on. Completing this part of the course therefore probably will not take you much time.


Artificial Intelligence PPT

#artificialintelligence

Files are self-contained objects on a computer that store information. There are a number of different file types that serve a variety of purposes. Some store information pertaining to the operating system and user settings, while others contain programs, written documents, graphics, or sound.


Synaptic Architecture for Brain Inspired Computing: IBM Research

#artificialintelligence

Our brain, with all its magnificent capabilities, is powered by less than 20 watts. Stop to think about that for a second. As I write this blog my laptop is using about 80 watts; at only a fourth of that power, our brain outperforms state-of-the-art supercomputers by several orders of magnitude when it comes to energy efficiency and volume. For this reason, it shouldn't be surprising that scientists around the world are seeking inspiration from the human brain as a promising avenue towards the development of next-generation AI computing systems. While the IT industry has made significant progress in the past several years, particularly in using machine learning for computer vision and speech recognition, current technology is hitting a wall when it comes to deep neural networks matching the power efficiency of their biological counterpart. But this could be about to change. As reported last week in Nature Communications, my colleagues and I at IBM Research, with collaborators at EPFL and the New Jersey Institute of Technology, have developed and experimentally tested an artificial synapse architecture using 1 million devices -- a significant step towards realizing large-scale and energy-efficient neuromorphic computing technology.