Steering Your Generalists:
Improving Robotic Foundation Models via Value Guidance

CoRL 2024

UC Berkeley, CMU, Google DeepMind

Abstract

Large, general-purpose robotic policies trained on diverse demonstration datasets have been shown to be remarkably effective both for controlling a variety of robots in a range of different scenes and for acquiring broad repertoires of manipulation skills. However, the data that such policies are trained on is generally of mixed quality -- not only are human-collected demonstrations unlikely to perform the task perfectly, but the larger the dataset is, the harder it is to curate only the highest-quality examples. It also remains unclear how well-suited data from one embodiment is for training on another embodiment.

In this paper, we present a general and broadly applicable approach that enhances the performance of such generalist robot policies at deployment time by re-ranking their actions according to a value function learned via offline RL. This approach, which we call Value-Guided Policy Steering (V-GPS), is compatible with a wide range of different generalist policies, without needing to fine-tune or even access the weights of the policy. We show that the same value function can improve the performance of five different state-of-the-art policies with different architectures, even though they were trained on distinct datasets, attaining consistent performance improvements on multiple robotic platforms across a total of 12 tasks.

Video

V-GPS: Value-Guided Policy Steering

V-GPS is a novel approach that improves the performance of pre-trained generalist robotic policies by re-ranking their actions at deployment time based on a value function learned via offline RL (e.g. Cal-QL). The same single value function can be combined with any off-the-shelf generalist policy in a plug-and-play manner, without the need to fine-tune or access the policy's weights, improving downstream performance across multiple robotic platforms.

① Training: Pre-train a language-conditioned value function via offline RL

[Figure: Training pipeline]
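As a concrete illustration, below is a minimal PyTorch sketch of what a Cal-QL-style conservative critic update for a language-conditioned Q-function could look like. All interfaces here are illustrative assumptions rather than the released implementation: q_net(obs, lang, action) is assumed to return one value per example, sample_actions(obs, lang, n) is a hypothetical helper that proposes candidate actions, observations and language instructions are assumed to already be flat feature tensors, and the hyperparameter values are placeholders.

import torch
import torch.nn.functional as F


def tile(x, n):
    """Repeat a (B, D) tensor n times along a new axis and flatten to (B * n, D)."""
    return x.unsqueeze(1).expand(-1, n, -1).reshape(x.shape[0] * n, x.shape[-1])


def cal_ql_critic_loss(q_net, target_q_net, sample_actions, batch,
                       num_samples=4, cql_alpha=5.0, gamma=0.98):
    """One simplified Cal-QL-style critic update on a batch of offline data."""
    obs, lang, act = batch["obs"], batch["lang"], batch["action"]
    B = obs.shape[0]

    # Bellman target: bootstrap from the best of a few sampled next actions.
    with torch.no_grad():
        next_act = sample_actions(batch["next_obs"], lang, num_samples)   # (B, n, A)
        next_q = target_q_net(tile(batch["next_obs"], num_samples),
                              tile(lang, num_samples),
                              next_act.reshape(B * num_samples, -1)).reshape(B, num_samples)
        target = batch["reward"] + gamma * (1.0 - batch["done"]) * next_q.max(dim=1).values

    q_data = q_net(obs, lang, act)                                        # (B,)
    td_loss = F.mse_loss(q_data, target)

    # Conservative regularizer: push down Q on sampled (possibly out-of-distribution)
    # actions, but never below the Monte-Carlo return of the data -- the Cal-QL
    # "calibration" that keeps the learned values from collapsing.
    ood_act = sample_actions(obs, lang, num_samples)                      # (B, n, A)
    q_ood = q_net(tile(obs, num_samples), tile(lang, num_samples),
                  ood_act.reshape(B * num_samples, -1)).reshape(B, num_samples)
    q_ood = torch.maximum(q_ood, batch["mc_return"].unsqueeze(1))
    conservative_term = torch.logsumexp(q_ood, dim=1).mean() - q_data.mean()

    return td_loss + cql_alpha * conservative_term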

② Deployment: Re-rank the actions from generalist policies at test time
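At deployment, the re-ranking step itself is small. The sketch below assumes hypothetical policy_sample and q_value callables standing in for any frozen generalist policy's sampling interface and the value function trained above; the number of candidates and the argmax selection rule are illustrative choices, not the exact released procedure.

import numpy as np


def vgps_act(policy_sample, q_value, obs, instruction, num_candidates=10):
    """Sample several candidate actions from the generalist policy and execute the
    one the language-conditioned value function scores highest."""
    # Draw candidate actions from the pre-trained policy; its weights are never touched.
    candidates = [policy_sample(obs, instruction) for _ in range(num_candidates)]
    # Score each candidate with the language-conditioned Q-function.
    scores = np.array([q_value(obs, instruction, a) for a in candidates])
    # Execute the highest-value candidate.
    return candidates[int(np.argmax(scores))]

The argmax can also be softened into sampling candidates in proportion to their exponentiated values; in either case the generalist policy is used purely as a black-box action proposer, without fine-tuning or access to its weights.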



Case Studies

V-GPS Addresses the Failure Modes and Improves the Generalist Policy

"put the sushi in the pot"

In this scene with an unseen table texture, the Octo model tends to hold onto the sushi for too long, resulting in it being dropped outside the container. With V-GPS, the gripper-open action is up-weighted precisely when the sushi is over the target, allowing the policy to deviate from its default failure mode.

[Rollout videos: Octo (Baseline) vs. Octo + V-GPS (Ours)]

"put the mushroom on the cloth"

In this scene, where the table is slightly lower than usual, the Octo model also tends to release the mushroom too early, so it does not land on the cloth. V-GPS largely mitigates this failure mode by up-weighting the gripper-open action when the mushroom is over the target, making the policy more robust to environmental changes.

[Rollout videos: Octo (Baseline) vs. Octo + V-GPS (Ours)]

"put the green pepper in the pot"

The plastic green pepper has a slippery surface and uneven curvature, so choosing the grasp point and the magnitude of the gripper action appropriately is often critical for a reliable grasp. V-GPS helps the policy grasp the object more reliably, leading to improved performance.

[Rollout videos: Octo (Baseline) vs. Octo + V-GPS (Ours)]


Results

Experimental Setup

We evaluate our method on 12 tasks in total. On the real-world WidowX robot platform, we study 6 tasks across 3 different scenes. In the SIMPLER simulated evaluation suite, we study 4 tasks on the WidowX platform and 2 tasks on the Google Robot.

Real-World Evaluation Results

In our real-world experiments, V-GPS consistently improves Octo-small-1.5 on all 6 tasks, with notable improvements of +55% in Scene A, +92% in Scene B, and +100% in Scene C. Qualitatively, V-GPS resolves the failure modes discussed in the case studies: it doubles the success rate on the "put pepper in pot" task in Scene A, doubles the success rate on the "put mushroom on cloth" task in Scene B, and triples the success rate on the "put sushi in pot" task in Scene C.
[Figure: Real-world evaluation results]

SIMPLER Evaluation Results

To verify whether V-GPS can improve various generalist policies across different embodiments, we evaluate V-GPS on top of five generalist policies -- Octo-small, Octo-base, Octo-small-1.5, RT1-X, and OpenVLA -- in SIMPLER simulation environments across two robot platforms -- WidowX and the Google Robot. On average, V-GPS improves the success rates of all five generalist policies across both embodiments. Note that we use the same single value function, trained on cross-embodiment data, for all policies across both real-world and simulated tasks.
[Figure: SIMPLER evaluation results]

BibTeX

@inproceedings{nakamoto2024steering,
  author    = {Mitsuhiko Nakamoto and Oier Mees and Aviral Kumar and Sergey Levine},
  title     = {Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2024},
}