{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Gymnasium - 强化学习环境标准接口教程\n", "\n", "欢迎来到 Gymnasium 教程!Gymnasium 是一个用于开发和比较强化学习 (RL) 算法的开源 Python 库。它提供了一个标准化的 API 来与各种模拟环境交互,从简单的经典控制问题到更复杂的模拟器。\n", "\n", "**背景**: Gymnasium 是由 OpenAI Gym 项目分叉而来,并由 Farama Foundation 维护。对于大多数用户来说,它现在是推荐使用的库,因为它得到了更积极的维护和更新。\n", "\n", "**为什么 Gymnasium 对 RL 很重要?**\n", "\n", "1. **标准化接口**: 提供了一个统一的方式来与不同的 RL 环境进行交互 (`reset`, `step`),使得算法的实现和测试更加通用。\n", "2. **丰富的测试环境**: 内置了大量经典的 RL 基准测试环境(如 CartPole, MountainCar, Atari 游戏, MuJoCo 模拟等),方便算法的开发和比较。\n", "3. **可扩展性**: 允许用户创建自己的自定义环境,并遵循相同的 API。\n", "4. **社区标准**: 是 RL 研究和开发领域广泛接受的标准工具包。\n", "\n", "**本教程将涵盖 Gymnasium 的核心概念和基本用法:**\n", "\n", "1. 环境创建 (`gymnasium.make`)\n", "2. 核心 API: `reset`, `step`\n", "3. 观测空间 (`observation_space`) 与动作空间 (`action_space`)\n", "4. 理解 `step` 返回值: `observation`, `reward`, `terminated`, `truncated`, `info`\n", "5. 一个基本的 RL 交互循环 (随机 Agent)\n", "6. 环境渲染 (`render`)\n", "7. 常见环境示例\n", "8. 环境包装器 (Wrappers) 简介" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 准备工作:导入 Gymnasium\n", "\n", "确保你已经安装了 Gymnasium (`pip install gymnasium`). 你可能还需要安装一些包含特定环境的额外包 (例如 `pip install gymnasium[classic_control] gymnasium[toy_text]`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gymnasium as gym # 常用别名仍是 gym\n", "import time\n", "import numpy as np\n", "\n", "print(f\"Gymnasium version: {gym.__version__}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. 环境创建 (`gymnasium.make`)\n", "\n", "使用 `gymnasium.make()` 函数,通过环境 ID 字符串来创建环境实例。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"--- Creating Environments ---\")\n", "\n", "# 创建 CartPole 环境 (经典控制)\n", "try:\n", " env_cartpole = gym.make('CartPole-v1')\n", " print(\"Successfully created 'CartPole-v1' environment.\")\n", "except gym.error.NameNotFound as e:\n", " print(f\"Error creating CartPole-v1: {e}\")\n", " print(\"You might need to install classic control environments: pip install gymnasium[classic_control]\")\n", " env_cartpole = None\n", "\n", "# 创建 FrozenLake 环境 (文本/离散)\n", "try:\n", " env_frozenlake = gym.make('FrozenLake-v1', map_name=\"4x4\", is_slippery=True)\n", " # 可以传递参数来配置环境,如此处的 map_name 和 is_slippery\n", " print(\"\\nSuccessfully created 'FrozenLake-v1' environment.\")\n", "except gym.error.NameNotFound as e:\n", " print(f\"\\nError creating FrozenLake-v1: {e}\")\n", " print(\"You might need to install toy text environments: pip install gymnasium[toy_text]\")\n", " env_frozenlake = None\n", "\n", "# 查看可用环境 (部分,可能很长)\n", "# from gymnasium.envs.registration import registry\n", "# print(\"\\nSome available environments:\")\n", "# print(list(registry.keys())[:20])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 核心 API: `reset` 和 `step`\n", "\n", "这是与环境交互的两个最基本的方法。\n", "\n", "* **`env.reset(seed=None, options=None)`**: \n", " * 重置环境到初始状态。\n", " * 返回一个包含两个元素的元组:`(initial_observation, info)`。\n", " * `initial_observation`: 环境的初始观测值。\n", " * `info`: 包含环境信息的字典 (通常在 `reset` 时为空或包含调试信息)。\n", " * 在每个新的回合 (episode) 开始时必须调用。\n", " * 可以设置 `seed` 以获得可复现的起始状态。\n", "\n", "* **`env.step(action)`**: \n", " * 在环境中执行给定的 `action`。\n", " * 返回一个包含五个元素的元组:`(observation, reward, terminated, truncated, info)`。\n", " * `observation`: 执行动作后的新观测值。\n", " * `reward`: 执行动作后获得的奖励 (通常是浮点数)。\n", " * `terminated`: 一个布尔值,表示回合是否因为达到某个终止状态而结束 (例如,CartPole 倒下,或者到达 FrozenLake 目标)。\n", " * `truncated`: 一个布尔值,表示回合是否因为达到时间限制或其他外部条件而被截断 (例如,CartPole 运行了 500 步)。\n", " * `info`: 包含环境诊断信息的字典 (例如,底层模拟器的状态)。\n", " * **重要**: 一个回合结束的条件是 `terminated` 或 `truncated` 为 `True`。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 观测空间 (`observation_space`) 与动作空间 (`action_space`)\n", "\n", "每个环境都有明确定义的观测空间和动作空间,它们描述了环境的有效观测和智能体可以采取的有效动作。\n", "\n", "* `env.observation_space`: 定义了观测值的结构、类型和范围 (例如,`Box` 表示连续值,`Discrete` 表示离散值)。\n", "* `env.action_space`: 定义了智能体可以采取的动作的结构、类型和范围。\n", "\n", "这两个属性通常是 `gymnasium.spaces` 模块中定义的空间对象。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"--- Observation and Action Spaces ---\")\n", "\n", "if env_cartpole:\n", " print(\"\\nCartPole-v1:\")\n", " print(f\" Observation Space: {env_cartpole.observation_space}\")\n", " # Box(4,) 表示一个包含4个连续值的向量\n", " # print(f\" Shape: {env_cartpole.observation_space.shape}\")\n", " # print(f\" Low bounds: {env_cartpole.observation_space.low}\")\n", " # print(f\" High bounds: {env_cartpole.observation_space.high}\")\n", " \n", " print(f\" Action Space: {env_cartpole.action_space}\")\n", " # Discrete(2) 表示两个离散动作 (0 或 1)\n", " # print(f\" Number of actions: {env_cartpole.action_space.n}\")\n", " # 我们可以从中采样一个随机动作\n", " random_action_cartpole = env_cartpole.action_space.sample()\n", " print(f\" Sample random action: {random_action_cartpole}\")\n", "else:\n", " print(\"\\nCartPole environment not created, skipping space info.\")\n", "\n", "if env_frozenlake:\n", " print(\"\\nFrozenLake-v1:\")\n", " print(f\" Observation Space: {env_frozenlake.observation_space}\")\n", " # Discrete(16) 表示 16 个离散状态 (0 到 15)\n", " # print(f\" Number of states: {env_frozenlake.observation_space.n}\")\n", " \n", " print(f\" Action Space: {env_frozenlake.action_space}\")\n", " # Discrete(4) 表示 4 个离散动作 (通常是 0:左, 1:下, 2:右, 3:上)\n", " # print(f\" Number of actions: {env_frozenlake.action_space.n}\")\n", " random_action_frozenlake = env_frozenlake.action_space.sample()\n", " print(f\" Sample random action: {random_action_frozenlake}\")\n", "else:\n", " print(\"\\nFrozenLake environment not created, skipping space info.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 理解 `step` 返回值\n", "\n", "`step()` 返回的五个值 `(observation, reward, terminated, truncated, info)` 是 RL 循环的核心。\n", "\n", "* `observation`: Agent 根据这个观测来决定下一个动作。\n", "* `reward`: Agent 的目标是最大化累积奖励。\n", "* `terminated`: 表示回合因任务本身的原因(如成功、失败)而结束。\n", "* `truncated`: 表示回合因外部原因(如时间限制)而结束,但任务本身可能还未完成。\n", "* **重要区分**: 在许多算法中(特别是进行价值估计时),`terminated` 和 `truncated` 的处理方式可能不同。例如,如果回合被 `truncated`,我们通常仍然会用学习到的价值函数来估计下一个状态的价值 (bootstrap);但如果回合 `terminated`,下一个状态的价值通常被认为是 0。\n", "* `info`: 通常包含对调试有用的额外信息,但不应用于训练 Agent。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 一个基本的 RL 交互循环 (随机 Agent)\n", "\n", "让我们用一个简单的循环来模拟 Agent 与 CartPole 环境的交互,其中 Agent 每次都随机选择动作。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"--- Basic Interaction Loop (Random Agent on CartPole) ---\")\n", "\n", "if env_cartpole:\n", " num_episodes = 3\n", " max_steps_per_episode = 100\n", "\n", " for episode in range(num_episodes):\n", " print(f\"\\nStarting Episode {episode + 1}\")\n", " # 重置环境获取初始观测\n", " observation, info = env_cartpole.reset(seed=episode) # Use different seed for variation\n", " # print(f\" Initial Observation: {observation}\")\n", " # print(f\" Initial Info: {info}\")\n", " \n", " total_reward = 0\n", " terminated = False\n", " truncated = False\n", " step_count = 0\n", " \n", " while not terminated and not truncated and step_count < max_steps_per_episode:\n", " # 1. 选择一个动作 (这里是随机的)\n", " action = env_cartpole.action_space.sample()\n", " \n", " # 2. 执行动作并获取结果\n", " observation, reward, terminated, truncated, info = env_cartpole.step(action)\n", " \n", " total_reward += reward\n", " step_count += 1\n", " \n", " # 打印一些信息 (可选)\n", " # if step_count % 20 == 0:\n", " # print(f\" Step {step_count}: Action={action}, Reward={reward:.2f}, Term={terminated}, Trunc={truncated}\")\n", " # print(f\" Obs: {observation.round(2)}\")\n", "\n", " print(f\"Episode {episode + 1} finished after {step_count} steps.\")\n", " print(f\" Total Reward: {total_reward}\")\n", " if terminated:\n", " print(\" Reason: Episode Terminated (e.g., pole fell)\")\n", " elif truncated:\n", " print(\" Reason: Episode Truncated (e.g., time limit reached)\")\n", " else:\n", " print(f\" Reason: Reached max steps ({max_steps_per_episode})\")\n", " \n", " # 关闭环境 (释放资源)\n", " env_cartpole.close()\n", " print(\"\\nCartPole environment closed.\")\n", "else:\n", " print(\"CartPole environment not available, skipping interaction loop.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. 环境渲染 (`render`)\n", "\n", "许多 Gymnasium 环境支持将环境状态可视化渲染出来。\n", "\n", "* **`env = gym.make(env_id, render_mode=\"human\")`**: 在创建环境时指定 `render_mode=\"human\"`,通常会弹出一个窗口显示环境。\n", "* **`env.render()`**: 在 `step` 之后调用 `env.render()` 来更新显示。 (在 v0.26 之前的 Gym 版本中,`render()` 用于返回图像数组或渲染到屏幕;在 Gymnasium 中,渲染模式在 `make` 时指定,`render()` 的行为取决于该模式)。\n", "* `render_mode=\"rgb_array\"`: `env.render()` 会返回一个 NumPy 数组表示的图像帧。\n", "\n", "**注意**: 在 Jupyter Notebook 中直接使用 `render_mode=\"human\"` 可能效果不佳或无法工作,因为它通常需要一个活动的显示窗口。在本地 Python 脚本中运行通常效果更好。获取 `rgb_array` 并在 Notebook 中用 `matplotlib` 显示是另一种方法。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"--- Environment Rendering Example (CartPole, getting RGB array) ---\")\n", "\n", "try:\n", " # 创建环境并指定渲染模式为 rgb_array\n", " env_render = gym.make('CartPole-v1', render_mode='rgb_array')\n", " \n", " observation, info = env_render.reset()\n", " \n", " # 获取初始帧\n", " frame = env_render.render()\n", " \n", " print(f\"Rendered frame type: {type(frame)}\")\n", " print(f\"Rendered frame shape: {frame.shape}\") # (Height, Width, Channels)\n", "\n", " # 使用 matplotlib 显示第一帧\n", " plt.figure(figsize=(5, 4))\n", " plt.imshow(frame)\n", " plt.title(\"Initial Frame from CartPole (render_mode='rgb_array')\")\n", " plt.axis('off')\n", " plt.show()\n", " \n", " # 模拟几步并获取帧 (通常用于制作动画或记录)\n", " frames = [frame] # Store the first frame\n", " for _ in range(5):\n", " action = env_render.action_space.sample()\n", " env_render.step(action)\n", " frames.append(env_render.render()) \n", " \n", " print(f\"Collected {len(frames)} frames.\")\n", " \n", " env_render.close()\n", " print(\"Render environment closed.\")\n", "\n", "except Exception as e:\n", " print(f\"Error during rendering example (maybe CartPole not installed or display issue): {e}\")\n", "\n", "# 如果你想尝试 'human' 模式 (可能无法在所有 notebook 环境下工作):\n", "# try:\n", "# env_human = gym.make('CartPole-v1', render_mode='human')\n", "# env_human.reset()\n", "# for _ in range(50):\n", "# action = env_human.action_space.sample()\n", "# env_human.step(action)\n", "# # Rendering is handled implicitly by the environment window in 'human' mode\n", "# # time.sleep(0.05) # Add a small delay to see it\n", "# env_human.close()\n", "# except Exception as e:\n", "# print(f\"Error running 'human' mode rendering: {e}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. 常见环境示例\n", "\n", "* **`CartPole-v1`**: 经典控制问题。目标是通过左右移动推车来平衡杆子。观测是推车位置、速度、杆子角度、角速度。动作是向左或向右推。回合在杆子倾斜过大、推车移出边界或达到步数限制时结束。\n", "* **`MountainCar-v0`**: 经典控制问题。小车动力不足,无法直接开上右侧山顶,需要通过来回晃动积累动能。观测是小车位置和速度。动作是向左、不动、向右施力。目标是到达山顶旗帜处。\n", "* **`FrozenLake-v1`**: 网格世界问题。在一个冰冻湖面上从起点 (S) 移动到目标 (G),避开冰洞 (H)。观测是当前所在的格子编号 (0-15)。动作是上/下/左/右。环境可以是光滑的 (`is_slippery=False`) 或打滑的 (`is_slippery=True`)。\n", "* **Atari 环境 (需要 `gymnasium[atari]` 和 ALE)**: 基于 Atari 2600 游戏的环境,通常使用屏幕像素作为观测。\n", "* **MuJoCo 环境 (需要 `gymnasium[mujoco]` 和 MuJoCo 引擎)**: 基于物理引擎的连续控制环境,如机器人运动。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. 环境包装器 (Wrappers) 简介\n", "\n", "Gymnasium 提供了包装器 (Wrapper) 机制,允许你修改现有环境的行为,而无需更改环境本身的源代码。\n", "\n", "**常见用途:**\n", "* **观测修改**: 归一化观测值、裁剪观测范围、堆叠帧 (Frame Stacking,用于捕捉动态信息)。\n", "* **动作修改**: 改变动作空间或动作的解释方式。\n", "* **奖励修改**: 改变奖励函数 (Reward Shaping)。\n", "* **时间限制**: 添加最大步数限制。\n", "\n", "**示例 (概念):**\n", "```python\n", "# import gymnasium as gym\n", "# from gymnasium.wrappers import NormalizeObservation, FrameStack\n", "\n", "# env = gym.make('SomeEnv-v0')\n", "# # 应用包装器\n", "# env = NormalizeObservation(env)\n", "# env = FrameStack(env, num_stack=4)\n", "\n", "# # 现在使用包装后的 env 进行交互\n", "# observation, info = env.reset()\n", "# # observation 现在是归一化的,并且可能包含了堆叠的帧\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 总结\n", "\n", "Gymnasium 是强化学习领域的基础库,它通过标准化的 API 和丰富的测试环境,极大地促进了 RL 算法的开发、测试和比较。\n", "\n", "**关键要点:**\n", "* 使用 `gymnasium.make(env_id)` 创建环境。\n", "* 理解 `observation_space` 和 `action_space`。\n", "* 核心交互通过 `env.reset()` 和 `env.step(action)` 完成。\n", "* 区分 `step()` 返回的 `terminated` 和 `truncated` 对于回合结束处理很重要。\n", "* 环境可以通过 `render()` 进行可视化 (方式取决于 `render_mode`)。\n", "* 包装器 (Wrappers) 可用于修改环境行为。\n", "\n", "掌握 Gymnasium 的基本用法是进行强化学习实践的第一步。" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 5 }