
Google wants robots to generate their own code

There are countless big problems left to solve in the world of automation, and robotic learning sits somewhere near the top. While it’s true that humans have gotten pretty good at programming systems for specific tasks, a big, open-ended question remains: and then what?

New research demonstrated at Google’s AI event in New York City this morning proposes the notion of letting robotic systems effectively write their own code. The concept is designed to save human developers the hassle of having to go in and reprogram things as new information arises.


The company notes that existing research and trained language models can prove foundational in developing systems that continue to generate their own code based on the objects and scenarios they encounter in the real world. The new work on display today is called Code as Policies (CaP).


Google Research Intern Jacky Liang and Robotics Research Scientist Andy Zeng note in a blog post:

With CaP, we propose using language models to directly write robot code through few-shot prompting. Our experiments demonstrate that outputting code led to improved generalization and task performance over directly learning robot tasks and outputting natural language actions. CaP allows a single system to perform a variety of complex and varied robotic tasks without task-specific training.
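The core idea can be sketched in a few lines. In this illustrative example (the function names, prompt format, and control primitives are hypothetical stand-ins, not Google's actual API), a few-shot prompt pairs natural-language instructions with the robot code that fulfills them; a language model would then complete code for a new instruction, which the system executes against its control primitives:

```python
# Few-shot prompt pairing instructions with robot code, in the spirit of CaP.
# All names here (get_pos, move_to, put_first_on_second) are hypothetical.
FEW_SHOT_PROMPT = '''# Instruction: move the red block onto the blue block.
put_first_on_second("red block", "blue block")

# Instruction: move the green bowl next to the yellow block.
target = get_pos("yellow block")
move_to("green bowl", (target[0] + 0.1, target[1]))

# Instruction: {instruction}
'''

def build_prompt(instruction: str) -> str:
    """Append a new instruction; a code-generating LLM would complete it."""
    return FEW_SHOT_PROMPT.format(instruction=instruction)

# Toy "world state" and control primitives the generated code would call.
positions = {"yellow block": (0.4, 0.2)}

def get_pos(name):
    return positions[name]

def move_to(name, xy):
    positions[name] = xy

# Stand-in for a model completion; a real system would send
# build_prompt("put the green bowl beside the yellow block") to an LLM
# and execute the returned snippet.
generated = (
    'target = get_pos("yellow block")\n'
    'move_to("green bowl", (target[0] + 0.1, target[1]))'
)
exec(generated, {"get_pos": get_pos, "move_to": move_to})
print(positions["green bowl"])  # bowl now sits offset from the yellow block
```

The point of the few-shot examples is that the model never needs task-specific training: the prompt alone shows it how instructions map onto the available primitives, so a new instruction generalizes by analogy.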

The system, as described, also relies on third-party libraries and APIs to best generate the code suited to a specific scenario — as well as support for multiple languages and (why not?) emojis. The information accessible through those APIs is one of the system’s existing limitations. The researchers note, “These limitations point to avenues for future work, including extending visual language models to describe low-level robot behaviors (e.g., trajectories) or combining CaPs with exploration algorithms that can autonomously add to the set of control primitives.”

As part of today’s announcement, Google is releasing open source versions of the code on its GitHub site, so others can build on the research it has presented thus far. So, you know, all the usual caveats about early-stage research apply here.

