

Hope or horror? The great AI debate dividing its pioneers

The Guardian

Demis Hassabis says he is not in the "pessimistic" camp about artificial intelligence. But that did not stop the CEO of Google DeepMind signing a statement in May warning that the threat of extinction from AI should be treated as a societal risk comparable to pandemics or nuclear weapons. That uneasy gap between hope and horror, and the desire to bridge it, is a key reason why Rishi Sunak convened next week's global AI safety summit in Bletchley Park, a symbolic choice as the base of the visionary codebreakers – including computing pioneer Alan Turing – who deciphered German communications during the second world war. "I am not in the pessimistic camp about AI obviously, otherwise I wouldn't be working on it," Hassabis tells the Guardian in an interview at Google DeepMind's base in King's Cross, London. "But I'm not in the 'there's nothing to see here and nothing to worry about' [camp]. This can go well but we've got to be active about shaping that."


Google's Code-as-Policies Lets Robots Write Their Own Code

#artificialintelligence

Researchers from Google's Robotics team have open-sourced Code-as-Policies (CaP), a robot control method that uses a large language model (LLM) to generate robot-control code that achieves a user-specified goal. CaP uses a hierarchical prompting technique for code generation that outperforms previous methods on the HumanEval code-generation benchmark. The technique and experiments were described in a paper published on arXiv. CaP differs from previous attempts to use LLMs to control robots; instead of generating a sequence of high-level steps or policies to be invoked by the robot, CaP directly generates Python code for those policies. The Google team developed a set of prompting techniques that improved code generation, including a new hierarchical prompting method.
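To make the idea concrete, below is a hedged sketch of the kind of Python policy code a CaP-style system might emit for an instruction such as "put the blocks in the bowl". The perception and action helpers (`get_obj_names`, `put_first_on_second`) are hypothetical stand-ins for a real robot API, stubbed out here so the example runs end to end; they are not the actual CaP interface.

```python
# Hypothetical stubs standing in for a robot's perception and action API.
world = {"red block": (0.1, 0.2), "blue block": (0.3, 0.4), "green bowl": (0.5, 0.5)}
actions = []  # log of pick-and-place calls


def get_obj_names():
    # Stub perception: the objects the camera currently "sees".
    return list(world)


def put_first_on_second(obj, target):
    # Stub action primitive: record the move and update the object's position.
    actions.append((obj, target))
    world[obj] = world[target]


# --- code a CaP-style LLM might generate for "put the blocks in the bowl" ---
block_names = [name for name in get_obj_names() if "block" in name]
bowl_name = [name for name in get_obj_names() if "bowl" in name][0]
for block in block_names:
    put_first_on_second(block, bowl_name)

print(actions)  # [('red block', 'green bowl'), ('blue block', 'green bowl')]
```

The point of the approach is that the generated block at the bottom is ordinary, inspectable Python: the LLM composes perception calls and action primitives directly, rather than emitting an opaque sequence of high-level steps for a separate planner to execute.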



Are We Ready for AI-Generated Code?

#artificialintelligence

In recent months, we've marveled at the quality of computer-generated faces, cat pictures, videos, essays, and even art. Artificial intelligence (AI) and machine learning (ML) have also quietly slipped into software development, with tools like GitHub Copilot, Tabnine, Polycode, and others taking the logical next step of putting existing code autocomplete functionality on AI steroids. Unlike cat pics, though, the origin, quality, and security of application code can have wide-reaching implications -- and at least for security, research shows that the risk is real. Prior academic research has already shown that GitHub Copilot often generates code with security vulnerabilities. More recently, hands-on analysis from Invicti security engineer Kadir Arslan showed that insecure code suggestions are still the rule rather than the exception with Copilot.


Generative AI: The Future Is AI Writing Its Own Code

#artificialintelligence

Generative AI is in a Cambrian explosion of capability. This is just the beginning, Glimpse AI CEO Alex Cardinell told me in a recent TechFirst podcast. The ultimate thing for AI to create is more of itself. [Image: God creating Adam, generated by Dall-E from the writer's prompt; Dall-E still has major trouble with fingers.] "The most exciting part of it all ... is when maybe AI is also at the point where it can start writing the code that will make its own AI even better," Cardinell says.



Thanks to Google AI, Robots Can Now Generate Their Own Code

#artificialintelligence

One of the more common approaches to controlling robots is to programme them with code that detects objects and uses feedback loops to decide whether certain tasks should be performed. These programmes come at a cost: reprogramming policies for each new task can become time consuming. When provided with natural language instructions, today's language models are highly proficient not only at writing generic code but also at generating instructions that let the user control a robot's actions. What if robots could autonomously write their own code to interact with the world? Recent language models such as PaLM are capable of complex reasoning and have been trained on millions of lines of code.


Google's AI is capable to write its own code

#artificialintelligence

Google has disclosed a new approach that uses large language models (LLMs) to show how robots can write their own code after receiving human instructions. Google's latest efforts show that advanced AI can understand open-ended prompts from humans and respond reasonably and safely in a physical space. Google published a blog post presenting the "Code as Policies" (CaP) approach developed by its researchers. The post includes experiments, interactive simulated robot demo videos, and generated code. The experiments involve code-writing language model programs (LMPs) that, given prompts written in plain English, generate new Python code.
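One part of this pipeline that a short sketch can illustrate is hierarchical code generation: when the generated code calls a function that does not exist yet, the system prompts the model again to write that function's body. The sketch below mimics this loop with a canned `fake_llm` function standing in for a real model call; the function names and responses are hypothetical, not Google's actual prompts or API.

```python
# Hedged sketch of hierarchical code generation: recursively ask the
# (faked) language model to define any function the generated code calls
# but has not yet defined.
import ast
import builtins


def fake_llm(prompt):
    # Canned response standing in for real LLM output.
    responses = {
        "sort_by_length": "def sort_by_length(words):\n    return sorted(words, key=len)",
    }
    for name, code in responses.items():
        if name in prompt:
            return code
    return ""


def undefined_calls(code, namespace):
    # Find names that are called in `code` but defined neither in the
    # namespace nor among Python built-ins.
    tree = ast.parse(code)
    return {
        node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id not in namespace
        and node.func.id not in dir(builtins)
    }


def exec_hierarchically(code, namespace):
    # First fill in missing helper functions, then run the code itself.
    for name in undefined_calls(code, namespace):
        helper_code = fake_llm(f"define {name}")
        exec_hierarchically(helper_code, namespace)
    exec(code, namespace)


ns = {}
exec_hierarchically("result = sort_by_length(['robot', 'ai', 'code'])", ns)
print(ns["result"])  # ['ai', 'code', 'robot']
```

The recursion bottoms out once a generated body uses only built-ins or already-defined helpers, which is the essence of the hierarchical prompting described in Google's blog post.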


Google wants robots to generate their own code

#artificialintelligence

There are countless big problems left to solve in the world of automation, and robotic learning sits somewhere near the top. While it's true that humans have gotten pretty good at programming systems for specific tasks, there's a big, open-ended question: and then what? New research demonstrated at Google's AI event in New York City this morning proposes letting robotic systems effectively write their own code. The concept is designed to save human developers the hassle of having to go in and reprogram things as new information arises. The company notes that existing research and trained models can be effective in implementing the concept.


Time for artificial intelligence to get its own code of conduct

#artificialintelligence

For example, what if a self-driving car is programmed to get us as quickly as possible to the airport, but is not concerned with how many pedestrians are injured along the way? Or, say an AI program identifies how to cut costs in a large healthcare system, neglecting to account for how the most vulnerable groups would be affected. As AI becomes responsible for ever more decisions, it will achieve goals more quickly without taking into account the other things that are important to our species.