PlanLLM: Video Procedure Planning with Refinable Large Language Models

Dejie Yang1, Zijing Zhao1, Yang Liu1,2*

1Wangxuan Institute of Computer Technology, Peking University
2State Key Laboratory of General Artificial Intelligence, Peking University
AAAI 2025

*Corresponding Author

Abstract

Video procedure planning, i.e., planning a sequence of action steps given the video frames of the start and goal states, is an essential ability for embodied AI. Recent works utilize Large Language Models (LLMs) to generate enriched action step description texts to guide action step decoding. Although LLMs are introduced, these methods decode the action steps into a closed set of one-hot vectors, limiting the model's capability to generalize to new steps or tasks. Additionally, fixed action step descriptions based on world-level commonsense may contain noise for specific instances of visual states. In this paper, we propose PlanLLM, a cross-modal joint learning framework with LLMs for video procedure planning. We propose an LLM-Enhanced Planning module that fully exploits the generalization ability of LLMs to produce free-form planning output and to enhance action step decoding. We also propose a Mutual Information Maximization module to connect the world-level commonsense of step descriptions with the sample-specific information of visual states, enabling the LLM to employ its reasoning ability to generate step sequences. With the assistance of LLMs, our method can handle both closed-set and open-vocabulary procedure planning tasks. PlanLLM achieves superior performance on three benchmarks, demonstrating the effectiveness of our designs.
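The abstract does not spell out the objective, so as a rough illustration only: mutual information between paired visual-state features and step-description embeddings is commonly maximized through a symmetric InfoNCE bound. Below is a minimal PyTorch sketch under that assumption; the function name, feature shapes, and temperature are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def info_nce_loss(visual_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE: a standard lower bound on the mutual information
    between visual-state features and step-description embeddings.
    visual_feats, text_feats: (B, D) tensors, paired row-by-row."""
    v = F.normalize(visual_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.T / temperature  # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Matched (visual, text) pairs lie on the diagonal; all other
    # entries in each row/column act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))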

Framework


The framework of our PlanLLM. PlanLLM mainly consists of three parts: Feature Extraction, Mutual Information Maximization, and LLM-Enhanced Planning.
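As a reading aid, the schematic below shows how the three parts could compose in PyTorch. This is not the authors' released code: every module passed to the constructor is a placeholder, and the fusion and decoding signatures are assumptions.

import torch.nn as nn

class PlanLLMSketch(nn.Module):
    """Schematic of the three-part pipeline; module internals
    (encoders, the LLM, the step decoder) are placeholders."""
    def __init__(self, visual_encoder, text_encoder, llm, step_decoder):
        super().__init__()
        self.visual_encoder = visual_encoder  # part 1: feature extraction
        self.text_encoder = text_encoder
        self.llm = llm                        # part 3: LLM-enhanced planning
        self.step_decoder = step_decoder

    def forward(self, start_frame, goal_frame, step_descriptions):
        # Part 1: extract features from the observed start/goal states
        # and from the step-description texts.
        v_start = self.visual_encoder(start_frame)
        v_goal = self.visual_encoder(goal_frame)
        t_steps = self.text_encoder(step_descriptions)
        # Part 2: mutual information maximization (see the loss sketch
        # above) aligns sample-specific visual states with the
        # commonsense step descriptions during training.
        # Part 3: the LLM consumes the aligned features and the decoder
        # emits either free-form step text or closed-set step labels.
        fused = self.llm(v_start, v_goal, t_steps)
        return self.step_decoder(fused)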

Results

Table 1: Comparisons on CrossTask for procedure planning with prediction horizon t ∈ {3, 4}. Supervision denotes the supervision type, where V denotes methods that use intermediate visual states (frames between the start and goal states) as supervision, and A uses only the action or task category without visual states.


Table 2: Evaluation results on NIV and COIN with prediction horizon t ∈ {3, 4}.


Table 3: Performance comparisons on cross-dataset transfer with prediction horizon t ∈ {3, 4}.


Table 4: Effectiveness of the proposed components.


Table 5: Effectiveness of progressive multi-modal training.


Table 6: Different planning generation strategies.


BibTeX

@inproceedings{planllm,
  title     = {PlanLLM: Video Procedure Planning with Refinable Large Language Models},
  author    = {Dejie Yang and Zijing Zhao and Yang Liu},
  booktitle = {The 39th Annual AAAI Conference on Artificial Intelligence, {AAAI-25}},
  year      = {2025},
}