Commit

deep-diver committed Dec 13, 2024
1 parent df607e3 commit 290c40a
Showing 726 changed files with 12,254 additions and 0 deletions.
1 change: 1 addition & 0 deletions content/posters/3qUks3wrnH/content_list.json

Large diffs are not rendered by default.

Binary file added content/posters/3qUks3wrnH/cover.png
1 change: 1 addition & 0 deletions content/posters/3qUks3wrnH/essential.json
@@ -0,0 +1 @@
{"importance": "This paper is important because it introduces a novel framework, **Cross-Task Policy Guidance (CTPG)**, that significantly improves the efficiency of multi-task reinforcement learning (MTRL). **CTPG leverages cross-task similarities by guiding the learning of unmastered tasks using the policies of already proficient tasks.** This addresses a key challenge in MTRL, leading to enhanced performance in both manipulation and locomotion benchmarks and opening new avenues for research in efficient MTRL.", "summary": "Boost multi-task reinforcement learning with Cross-Task Policy Guidance (CTPG)! CTPG cleverly uses policies from already mastered tasks to guide the learning of new tasks, significantly improving efficiency and performance.", "takeaways": ["Cross-Task Policy Guidance (CTPG) significantly improves multi-task reinforcement learning efficiency.", "CTPG uses policies from mastered tasks to guide the learning process of new, similar tasks.", "The proposed framework shows enhanced performance on manipulation and locomotion benchmarks."], "tldr": "Multi-task reinforcement learning (MTRL) aims to train agents to perform multiple tasks simultaneously, ideally leveraging shared knowledge between them. However, current MTRL methods often struggle with efficiently transferring knowledge between tasks, leading to slow learning and suboptimal performance. Many approaches focus solely on parameter sharing, neglecting the potential of using successful policies from one task to directly improve the learning of another.\n\nThe paper introduces Cross-Task Policy Guidance (CTPG), a novel framework that directly addresses this limitation. CTPG trains a separate 'guide policy' for each task, which selects the most beneficial policy from a pool of all learned task policies to generate training data for the target task. This approach, combined with gating mechanisms that filter out unhelpful policies and prioritize those for which guidance is needed, leads to substantial performance improvements compared to existing methods in multiple robotics benchmarks.", "affiliation": "Tencent AI Lab", "categories": {"main_category": "Machine Learning", "sub_category": "Reinforcement Learning"}, "podcast_path": "3qUks3wrnH/podcast.wav"}
1 change: 1 addition & 0 deletions content/posters/3qUks3wrnH/figures.json
@@ -0,0 +1 @@
[{"figure_path": "3qUks3wrnH/figures/figures_1_1.jpg", "caption": "Figure 1: Full or partial policy sharing in the manipulation environment. (a): Task Button-Press and Drawer-Open share almost the same policy, where the robotic arm needs to reach a specified position (button or handle) and then push the target object. (b): Task Door-Open and Drawer-Open share the policy of grabbing the handle in the first phase, but they are required to open the target object by different movements (rotation or translation).", "description": "This figure shows examples of full or partial policy sharing in robotic arm manipulation tasks. In (a), the tasks \"Button-Press\" and \"Drawer-Close\" share a similar policy where the robot arm must reach a target location and then push it. In (b), \"Door-Open\" and \"Drawer-Open\" tasks share an initial policy of grasping a handle but differ in how they subsequently open the object (rotation vs. translation). These examples illustrate the potential for sharing policies between tasks with similar sub-tasks.", "section": "1 Introduction"}, {"figure_path": "3qUks3wrnH/figures/figures_3_1.jpg", "caption": "Figure 2: Overview of the CTPG framework.", "description": "This figure illustrates the CTPG (Cross-Task Policy Guidance) framework. The guide policy for a given task selects a behavior policy from a candidate set of policies for all tasks. This selected policy then interacts with the environment for a fixed number of timesteps (K). The transitions (states, actions, rewards, next states) collected during this interaction are stored in a replay buffer. This data is then used to train the guide policy and control policy.", "section": "4 Cross-Task Policy Guidance"}, {"figure_path": "3qUks3wrnH/figures/figures_5_1.jpg", "caption": "Figure 2: Overview of the CTPG framework.", "description": "This figure illustrates the CTPG (Cross-Task Policy Guidance) framework. The guide policy selects a behavior policy from a candidate set of all tasks' control policies. This chosen policy interacts with the environment for K timesteps, collecting data that is then stored in a replay buffer. This data is used to train both the guide policy and the control policy, enhancing exploration and improving the training trajectories.", "section": "4 Cross-Task Policy Guidance"}, {"figure_path": "3qUks3wrnH/figures/figures_7_1.jpg", "caption": "Figure 4: We display the state of task Pick-Place at every 10 timesteps, along with the corresponding output probability of the guide policy and the actual sampled behavior policy. Except for employing the Pick-Place task\u2019s control policy during timesteps 20 to 30, the guide policy selects control policies of other tasks for the remaining timesteps, successfully accomplishing the task.", "description": "This figure visualizes a trajectory of the Pick-Place task in the MetaWorld-MT10 environment to demonstrate how the guide policy works. The guide policy selects different behavior policies from all tasks every 10 timesteps. The figure shows the probability of each task being selected by the guide policy and which policy was actually used at each timestep. 
It highlights how the guide policy leverages policies from other tasks to effectively complete the Pick-Place task by dynamically choosing the most beneficial policy at each step.", "section": "5.3 Guidance Learned by Guide Policy"}, {"figure_path": "3qUks3wrnH/figures/figures_7_2.jpg", "caption": "Figure 5: Three distinct ablation studies of MHSAC w/ CTPG on MetaWorld-MT10.", "description": "This figure presents the results of three ablation studies conducted to evaluate the impact of each component of the Cross-Task Policy Guidance (CTPG) framework on the performance of the Multi-Head Soft Actor-Critic (MHSAC) algorithm. The studies were performed on the MetaWorld-MT10 benchmark. The three subfigures show the impact of removing: (a) the policy-filter gate, (b) the guide-block gate, and (c) the hindsight off-policy correction. Each subfigure shows the success rate over training samples/task for the MHSAC w/ CTPG model along with the ablation variants. This allows for an assessment of the contribution of each component to the overall performance.", "section": "5.4 Ablation Studies"}, {"figure_path": "3qUks3wrnH/figures/figures_8_1.jpg", "caption": "Figure 6: CTPG also improves performance in the absence of implicit knowledge sharing approaches.", "description": "This figure shows the performance improvement achieved by using CTPG (Cross-Task Policy Guidance) on two different benchmark tasks (HalfCheetah-MT8 and MetaWorld-MT10) even without using implicit knowledge sharing. The results demonstrate that CTPG improves the performance of single-task SAC by providing explicit policy guidance, particularly more significant improvement in MetaWorld-MT10 where task difficulty varies more.", "section": "5.5 CTPG without Implicit Knowledge Sharing"}, {"figure_path": "3qUks3wrnH/figures/figures_8_2.jpg", "caption": "Figure 10: Training curves of experiment with implicit knowledge sharing approaches. Beyond the ultimate performance improvement, CTPG also enhances the sample efficiency.", "description": "This figure shows the training curves for different combinations of explicit policy sharing methods and implicit knowledge sharing approaches across four environments. Each row represents a distinct implicit knowledge sharing approach, while each column represents a different environment. Within each subfigure, the three curves represent the base one without any explicit policy sharing method and two variations using different explicit policy sharing methods. The results show that beyond the ultimate performance improvement, CTPG also enhances the sample efficiency.", "section": "5.2 Performance Improvement on Implicit Knowledge Sharing Approaches"}, {"figure_path": "3qUks3wrnH/figures/figures_15_1.jpg", "caption": "Figure 8: Visualizations of robotic manipulation tasks on MetaWorld-MT10.", "description": "This figure shows ten different robotic manipulation tasks from the MetaWorld-MT10 benchmark. Each image displays a robotic arm in a different configuration interacting with an object in a scene. 
The tasks shown include Reach, Push, Pick-Place, Door-Open, Drawer-Open, Drawer-Close, Button-Press-Topdown, Peg-Insert-Side, Window-Open, and Window-Close, illustrating the diversity of manipulation skills tested in this benchmark.", "section": "D.1 MetaWorld Manipulation Benchmark"}, {"figure_path": "3qUks3wrnH/figures/figures_15_2.jpg", "caption": "Figure 9: Visualizations of robotic locomotion tasks on HalfCheetah-MT8.", "description": "The figure shows eight different variations of the HalfCheetah robot used in the HalfCheetah-MT8 locomotion benchmark. Each variation modifies the size of a specific body part (torso, thigh, leg, or foot), resulting in either a \"Big\" or a \"Small\" version of that body part. These variations create diverse locomotion challenges for the reinforcement learning agent.", "section": "D.2 HalfCheetah Locomotion Benchmark"}, {"figure_path": "3qUks3wrnH/figures/figures_16_1.jpg", "caption": "Figure 1: Full or partial policy sharing in the manipulation environment. (a): Task Button-Press and Drawer-Open share almost the same policy, where the robotic arm needs to reach a specified position (button or handle) and then push the target object. (b): Task Door-Open and Drawer-Open share the policy of grabbing the handle in the first phase, but they are required to open the target object by different movements (rotation or translation).", "description": "This figure illustrates the concept of policy sharing in multi-task reinforcement learning using robotic arm manipulation tasks. Panel (a) shows two tasks, Button-Press and Drawer-Close, that share a very similar policy because the robot arm must reach and push a button or handle. Panel (b) shows the tasks Door-Open and Drawer-Open. These share only a part of their policies (grabbing the handle), but differ in the subsequent steps required to open the door (rotation) versus the drawer (translation). This visual example is used to support the claim that sharing policies between tasks can improve learning efficiency.", "section": "1 Introduction"}, {"figure_path": "3qUks3wrnH/figures/figures_17_1.jpg", "caption": "Figure 5: Three distinct ablation studies of MHSAC w/ CTPG on MetaWorld-MT10.", "description": "This figure shows three ablation studies performed on the MHSAC (Multi-Head Soft Actor-Critic) algorithm with CTPG (Cross-Task Policy Guidance) on the MetaWorld-MT10 benchmark. Each subfigure demonstrates the impact of removing one component of CTPG: (a) the policy-filter gate, (b) the guide-block gate, and (c) the hindsight off-policy correction. The plots show success rate over training samples, comparing the full CTPG method against versions with each component removed. The results illustrate the contribution of each component to the overall performance improvement of the algorithm.", "section": "5.4 Ablation Studies"}, {"figure_path": "3qUks3wrnH/figures/figures_17_2.jpg", "caption": "Figure 12: MHSAC w/ CTPG with different guide steps K.", "description": "The ablation study on guide policy selection step K is shown in this figure. The guide step K is a hyperparameter in CTPG that determines how often the guide policy samples from other tasks' policies. The plots show the training curves on HalfCheetah-MT8 (episode return) and MetaWorld-MT10 (success rate) for different values of K. The results indicate that both very short (K=1, 3) and long (K=50) guide steps lead to decreased performance and increased variance. 
The optimal K appears to be around 10 for these environments.", "section": "Ablation Studies"}, {"figure_path": "3qUks3wrnH/figures/figures_18_1.jpg", "caption": "Figure 5: Three distinct ablation studies of MHSAC w/ CTPG on MetaWorld-MT10.", "description": "This figure shows three ablation studies performed on the MHSAC (Multi-Head Soft Actor-Critic) algorithm with CTPG (Cross-Task Policy Guidance) on the MetaWorld-MT10 benchmark. Each subplot shows the impact of removing one component of CTPG. (a) shows ablation of the policy-filter gate, (b) shows ablation of the guide-block gate, and (c) shows ablation of the hindsight off-policy correction. The x-axis represents the number of samples per task (in millions) and the y-axis represents the success rate (%). The results demonstrate the importance of each component of CTPG for achieving high performance.", "section": "5.4 Ablation Studies"}, {"figure_path": "3qUks3wrnH/figures/figures_18_2.jpg", "caption": "Figure 10: Training curves of experiment with implicit knowledge sharing approaches. Beyond the ultimate performance improvement, CTPG also enhances the sample efficiency.", "description": "This figure presents the training curves for five different implicit knowledge sharing approaches (MTSAC, MHSAC, PCGrad, SM, PaCo) with and without CTPG across four different environments (HalfCheetah-MT5, HalfCheetah-MT8, MetaWorld-MT10, MetaWorld-MT50). Each row represents a different implicit knowledge sharing approach, while each column represents a different environment. For each combination, three curves are shown: one for the base approach, one with QMP (a single-step policy sharing method), and one with CTPG. The x-axis represents the number of samples per task, and the y-axis represents either the episode return (for HalfCheetah) or the success rate (for MetaWorld). The shaded areas indicate the standard deviation across five different random seeds. The results demonstrate that CTPG consistently improves performance and sample efficiency across all implicit methods and environments, showing that it helps learn better policies more quickly.", "section": "5 Experiments"}, {"figure_path": "3qUks3wrnH/figures/figures_19_1.jpg", "caption": "Figure 10: Training curves of experiment with implicit knowledge sharing approaches. Beyond the ultimate performance improvement, CTPG also enhances the sample efficiency.", "description": "This figure presents the training curves for different combinations of explicit policy sharing methods and implicit knowledge sharing approaches across four environments. Each row represents a distinct implicit knowledge sharing approach, while each column represents a different environment. The three curves in each subfigure show the base performance without explicit policy sharing, the performance with QMP, and the performance with CTPG. The figure demonstrates that CTPG not only improves the final performance but also enhances the sample efficiency across all environments.", "section": "5.2 Performance Improvement on Implicit Knowledge Sharing Approaches"}]
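The figure descriptions above also reference the two gates ablated in Figure 5: a policy-filter gate that screens out unhelpful candidate policies and a guide-block gate that skips guidance for tasks that no longer need it. The snippet below sketches one way such filtering could constrain the guide policy's choice; the value-comparison criterion and all names are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def filter_candidates(q_values, own_task_id):
    """Illustrative policy-filter gate (an assumption, not the paper's exact rule):
    drop candidate behavior policies whose estimated value at the current state
    does not exceed that of the task's own control policy."""
    own_value = q_values[own_task_id]
    mask = q_values >= own_value
    mask[own_task_id] = True  # the task's own policy always stays available
    return mask

def masked_selection_probs(logits, mask):
    """Renormalize the guide policy's logits over the candidates that survive
    the gate, so filtered-out policies are never sampled."""
    masked_logits = np.where(mask, logits, -np.inf)
    exp = np.exp(masked_logits - masked_logits.max())
    return exp / exp.sum()

# Example: four candidate tasks, task 2 is the one currently being trained.
q = np.array([0.8, 0.2, 0.5, 0.9])
logits = np.array([0.1, 0.3, 0.2, 0.4])
probs = masked_selection_probs(logits, filter_candidates(q, own_task_id=2))
```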

0 comments on commit 290c40a
