Google’s Code-as-Policies Lets Robots Write Their Own Code

Researchers from Google’s Robotics team have open-sourced Code-as-Policies (CaP), a robot control method that uses a large language model (LLM) to generate robot-control code that achieves a user-specified goal.
CaP uses a hierarchical prompting technique for code generation that outperforms previous methods on the HumanEval code-generation benchmark.
The technique and experiments were described in a paper published on arXiv.
CaP differs from previous attempts to use LLMs to control robots; instead of generating a sequence of high-level steps or policies to be invoked by the robot, CaP directly generates Python code for those policies.
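For illustration, a policy of the kind CaP might generate for a tabletop command such as "put the red block on the blue block" could look like the sketch below. The helper names (`get_obj_pos`, `pick_place`) are hypothetical stand-ins for the perception and control APIs the system would expose, mocked here so the snippet runs:

```python
# Hypothetical perception/control API, mocked for illustration only.
# In a real system these would be bound to the robot's primitives.
def get_obj_pos(name):
    positions = {"red block": (0.1, 0.2), "blue block": (0.4, 0.5)}
    return positions[name]

def pick_place(pick_pos, place_pos):
    print(f"pick at {pick_pos} -> place at {place_pos}")
    return place_pos

# Code of the kind an LLM might emit for:
# "put the red block on the blue block"
def put_first_on_second(first, second):
    pick_pos = get_obj_pos(first)
    place_pos = get_obj_pos(second)
    return pick_place(pick_pos, place_pos)

put_first_on_second("red block", "blue block")
```

The point of the approach is that the generated artifact is executable code calling the robot's API, rather than a list of steps for a separate planner to interpret.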
The Google team developed a set of prompting techniques that improved code-generation, including a new hierarchical prompting method.
This technique achieved a new state-of-the-art score of 39.8% pass@1 on the HumanEval benchmark.
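pass@1 is the fraction of benchmark problems solved by a single generated sample. For reference (this estimator comes from the HumanEval benchmark's authors, not from the CaP paper), the standard unbiased pass@k estimate from n samples can be computed as:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: of n generated samples, c were correct.

    Returns the probability that at least one of k randomly drawn
    samples passes, i.e. 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the raw success rate c/n:
print(pass_at_k(10, 4, 1))  # 0.4
```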
According to the Google team:


Code as Policies is a step towards robots that can modify their behaviors and expand their capabilities accordingly. This can be enabling, but the flexibility also raises potential risks, since synthesized programs (unless manually checked per runtime) may result in unintended behaviors with physical hardware. We can mitigate these risks with built-in safety checks that bound the control primitives that the system can access, but more work is needed to ensure new combinations of known primitives are equally safe. We welcome broad discussion on how to minimize these risks while maximizing the potential positive impacts towards more general-purpose robots.

LLMs have been shown to exhibit general knowledge about many subjects and can solve a wide range of natural-language processing (NLP) tasks. However, they can also generate responses that, while logically sound, would not be helpful for controlling a robot; for example, in response to "I spilled my drink, can you help?" an LLM might reply "You could try using a vacuum cleaner." Earlier this year, InfoQ covered Google's SayCan method, which uses a large language model to plan a sequence of robotic actions. To improve the output of the LLM, SayCan introduced a value function that indicates how likely the plan is to succeed given the current state of the world.

The key component of CaP is the generation of language model programs (LMPs) that map natural-language instructions from a user to programs that execute on a robot, taking perceptual inputs from the robot's sensors and invoking controller APIs. These are generated by an LLM in few-shot mode, prompted with hints and example LMPs. The generated LMPs can contain high-level control structures such as loops and conditionals, as well as hierarchically generated functions. In the latter case, a high-level LMP is generated that contains calls to undefined functions; this LMP is parsed to find those undefined references, and a second LLM that is fine-tuned to generate functions is invoked to create the function definitions.

Google evaluated CaP on multiple benchmarks and tasks. Besides HumanEval, the team developed a new code-generation benchmark, RoboCodeGen, specifically for robotics problems. The team also used CaP to control physical robots performing several real-world tasks: mobile-robot navigation and manipulation in a kitchen environment, and drawing shapes, pick-and-place, and tabletop manipulation with a robotic arm.

Google researcher Jacky Liang discussed the work on Twitter. In response to a question about CaP's difficulties with building complex structures from blocks, Liang replied:

CaP operates best when the new commands and the prompt are in similar abstraction levels. Building complex structures is akin to going couple levels up the abstraction level, which greedy LLM decoding struggles with. Should be possible but probably need better ways to prompt.

Code for reproducing the paper's experiments is available on GitHub. An interactive demo of the code-generation technique is available on Hugging Face.
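The hierarchical step described in the paper — parsing a high-level LMP for calls to undefined functions, then asking a second model to define them — can be sketched in Python using the standard `ast` module; `generate_function_body` is a hypothetical stand-in for the fine-tuned code-writing LLM:

```python
import ast
import builtins

def find_undefined_calls(code):
    """Find functions called in `code` but neither defined there nor built in."""
    tree = ast.parse(code)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)}
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return called - defined - set(dir(builtins))

def generate_function_body(name):
    # Stand-in for the second, fine-tuned LLM that writes the definition.
    return f"def {name}(*args):\n    pass\n"

# A high-level LMP that calls a not-yet-defined helper:
high_level_lmp = """
def stack_blocks(names):
    for name in names:
        move_to(name)
"""

for missing in sorted(find_undefined_calls(high_level_lmp)):
    high_level_lmp += generate_function_body(missing)
    print("generated definition for:", missing)  # → generated definition for: move_to
```

This is only the structural skeleton: the real system would prompt the LLM with hints and example LMPs rather than emit a `pass` stub, but the parse-then-generate loop is the hierarchical idea.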
