diff --git a/Robodriver.txt b/Robodriver.txt
new file mode 100644
index 0000000..89c2440
--- /dev/null
+++ b/Robodriver.txt
@@ -0,0 +1,1897 @@
+# DeepWiki Q&A with Code Context for Repository: FlagOpen/RoboDriver
+## Q1
+Help me get a quick overview of this project
+### Answer
+RoboDriver is a standardized robot driver and data-collection framework for embodied AI. It serves as the device-side driver adaptation layer of the CoRobot data stack, is compatible with the LeRobot interface, supports ROS1/ROS2/Dora integration, and provides an improved single-row dataset storage format plus simulation integration (Genesis/MuJoCo) RoboDriver:11-20 .
+
+---
+
+## Three-Layer Architecture
+- Cloud: RoboXStudio (central console and data management) RoboDriver:18-19
+- Bridge layer: RoboDriver-Server (data/control routing, HTTP/SocketIO) RoboDriver:31-44
+- Device layer: RoboDriver (robot control, data collection, simulation) RoboDriver:18-19
+
+## Core Components
+- Coordinator: async SocketIO client that communicates with the server and handles commands RoboDriver:31-44 .
+- Monitor: background thread that reports device information every 60 s RoboDriver:23-36 .
+- Simulator: MuJoCo/Genesis simulation running two threads (physics + rendering) RoboDriver:163-185 .
+- Recorder: data recording with single-row storage and multi-threaded image writing RoboDriver:14-15 .
+- Entry scripts: robodriver-run / robodriver-sim / robodriver-visual RoboDriver:31-35 .
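
The Monitor pattern above (a background thread posting status on a fixed interval) can be sketched with only the standard library. This stand-in collects the payloads in a list instead of POSTing them over HTTP, and all names other than the 60 s default interval are illustrative, not RoboDriver's actual API:

```python
import threading
import time

class Monitor:
    """Toy periodic reporter: a daemon thread that wakes on an interval."""

    def __init__(self, get_status, interval_seconds=60):
        self._get_status = get_status
        self._interval = interval_seconds
        self._stop = threading.Event()
        self.reports = []  # the real class POSTs each payload over HTTP
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        # Event.wait doubles as an interruptible sleep between reports
        while not self._stop.wait(self._interval):
            self.reports.append(self._get_status())

    def stop(self):
        self._stop.set()
        self._thread.join()

monitor = Monitor(lambda: {"status": "ok"}, interval_seconds=0.01)
monitor.start()
time.sleep(0.08)
monitor.stop()
```

Using `Event.wait` instead of `time.sleep` lets `stop()` interrupt the loop immediately rather than waiting out a full interval.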
+
+## Robot Integration Methods
+| Type | Example robots | Notes |
+|------|----------------|-------|
+| ROS1 | Realman | Based on ROS topics RoboDriver:98-102 |
+| ROS2 | GALAXEALITE, SO101 | Based on rclpy RoboDriver:103-108 |
+| Dora | Realman, SO101, Franka | Based on Dora dataflows; supports isolation across multiple virtual environments RoboDriver:109-115 |
+
+## Data Format
+- Collection side: single-row storage (one row per frame), easier to edit and transfer RoboDriver:42-43 .
+- Training side: rows can be aggregated into a multi-episode format compatible with LeRobot RoboDriver:41-43 .
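
The split between the two sides can be illustrated with a toy sketch, assuming plain dict rows instead of Parquet (the helper names here are hypothetical, not RoboDriver APIs): each frame is stored as an independent, self-contained row at collection time, and a training-side step groups the rows back into episodes.

```python
from collections import defaultdict

def add_frame(rows, episode_index, frame_index, observation, action):
    # collection side: append one self-contained record per frame
    rows.append({
        "episode_index": episode_index,
        "frame_index": frame_index,
        "observation": observation,
        "action": action,
    })

def aggregate_episodes(rows):
    # training side: group the frame rows back into per-episode lists
    episodes = defaultdict(list)
    for row in sorted(rows, key=lambda r: (r["episode_index"], r["frame_index"])):
        episodes[row["episode_index"]].append(row)
    return dict(episodes)

rows = []
add_frame(rows, 0, 0, {"joint1.pos": 0.1}, {"joint1.pos": 0.2})
add_frame(rows, 0, 1, {"joint1.pos": 0.2}, {"joint1.pos": 0.3})
episodes = aggregate_episodes(rows)
```

Because each row carries its own indices, single rows can be edited, dropped, or transferred without rewriting an episode-level file.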
+
+## Quick Start
+```bash
+git clone https://github.com/FlagOpen/RoboDriver.git && cd RoboDriver
+pip install uv
+uv venv -p 3.10
+uv pip install -e .
+source .venv/bin/activate
+robodriver-run -h
+```
+RoboDriver:51-79
+
+## Notes
+- Full documentation: [RoboDriver-Doc](https://flagopen.github.io/RoboDriver-Doc) RoboDriver:20-21 .
+- Simulation example: robodriver/simulations/robodriver-sim-genesis-franka-aio-dora RoboDriver:89-94 .
+- Detailed integration guides for each robot live in robodriver/robots/robodriver-robot-*/README.md RoboDriver:81-82 .
+
+Wiki pages you might want to explore:
+- [Overview (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#1)
+- [Dora Dataflow Integration (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#3.2)
+### Citations
+**File:** README.md (L11-21)
+```markdown
+# RoboDriver
+RoboDriver是DataCollect的核心驱动层组件,也是[CoRobot](https://github.com/FlagOpen/CoRobot)数据体系中的标准化机器人接入模块。
+
+
+
+
+
+如图所示,RoboDriver为“设备端驱动适配层”,[RoboDriver-Server](https://github.com/FlagOpen/RoboDriver-Server)是“数据/控制的桥接层与通道路由器”,[RoboXStudio](https://ei2data.baai.ac.cn/home)是“云端或平台侧总控制台与数据管理中心”。
+
+RoboDriver使用文档: [RoboDriver-Doc](https://flagopen.github.io/RoboDriver-Doc)
+## 最新消息
+```
+**File:** README.md (L41-43)
+```markdown
+- **LeRobot兼容**: RoboDriver 的机器人接口直接使用了LeRobot的`Robot`类,这意味着RoboDriver与LeRobot可以互相兼容。
+- **改进的LeRobot数据集格式**:在不同数据阶段采取不同数据结构。在采集端单条储存,更易编辑和传输。还扩展了LeRobot格式的内容。
+
+```
+**File:** README.md (L51-79)
+```markdown
+首先将 RoboDriver 项目代码clone下来,然后进入项目目录:
+
+```
+git clone https://github.com/FlagOpen/RoboDriver.git && cd RoboDriver
+```
+
+安装 `uv`,不要激活任何环境
+
+```
+pip install uv
+```
+
+创建uv环境
+
+```
+uv venv -p 3.10
+```
+
+安装项目:
+
+```
+uv pip install -e .
+```
+
+使用:
+```
+source .venv/bin/activate
+robodriver-run -h
+```
+```
+**File:** README.md (L81-82)
+```markdown
+要使用对应的机器人,请安装对应的机器人,并参考其文档完成部署和启动。路径:robodriver/robots/robodriver-robot-xxx-xxx-xxx/README.md
+
+```
+**File:** README.md (L89-94)
+```markdown
+### 🪞 Genesis
+
+| 机器人型号 | 简介 | 仓库链接 | 贡献人 |
+|------------|------|--------------|------------------------|
+| Franka 机械臂 | 一台Franka机械臂抓取方块 | [robodriver/simulations/robodriver-sim-genesis-franka-aio-dora](./robodriver/simulations/robodriver-sim-genesis-franka-aio-dora) | [](https://github.com/Ryu-Yang) |
+
+```
+**File:** README.md (L98-102)
+```markdown
+### 🔌 ROS1 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|------------|------|--------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman-aio-ros1](./robodriver/robots/robodriver-robot-realman-aio-ros1) | [
](https://github.com/zhanglei-web) |
+
+```
+**File:** README.md (L103-108)
+```markdown
+### 🔌 ROS2 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| GALAXEALITE | 基于Galaxealite,双臂6DOF+末端夹爪,4*RealSense相机 | [robodriver/robots/robodriver-robot-galaxealite-aio-ros2](./robodriver/robots/robodriver-robot-galaxealite-aio-ros2) | [](https://github.com/liuyou1103) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-ros2](./robodriver/robots/robodriver-robot-so101-aio-ros2) | [](https://github.com/Ryu-Yang) |
+
+```
+**File:** README.md (L109-115)
+```markdown
+### 🔌 Dora(SDK)接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman1-aio-dora](./robodriver/robots/robodriver-robot-realman1-aio-dora) | [](https://github.com/XuRuntian) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-dora](./robodriver/robots/robodriver-robot-so101-aio-dora) | [](https://github.com/Ryu-Yang) |
+| Franka | 工业级机械臂,6DOF+末端夹爪,1*RealSense相机 | [robodriver/robots/robodriver-robot-franka-aio-dora](./robodriver/robots/robodriver-robot-franka-aio-dora) | [](https://github.com/XuRuntian) |
+
+```
+**File:** README_en.md (L11-20)
+```markdown
+## Overview
+
+RoboDriver is the core driver-layer component of DataCollect and serves as the standardized robot access module within the [CoRobot](https://github.com/FlagOpen/CoRobot) data stack.
+
+
+
+
+
+As shown above, RoboDriver acts as the device-side driver adaptation layer. [RoboDriver-Server](https://github.com/FlagOpen/RoboDriver-Server) is the data/control bridge layer and channel router, and [RoboXStudio](https://ei2data.baai.ac.cn/home) is the cloud- or platform-side console and data management center.
+
+```
+**File:** robodriver/core/coordinator.py (L14-15)
+```python
+from robodriver.core.recorder import Record, RecordConfig
+from robodriver.core.replayer import DatasetReplayConfig, ReplayConfig, replay
+```
+**File:** robodriver/core/coordinator.py (L31-44)
+```python
+class Coordinator:
+ def __init__(
+ self,
+ daemon: Daemon,
+ teleop: Optional[Teleoperator],
+ server_url="http://localhost:8088",
+ ):
+ self.server_url = server_url
+ # 异步客户端
+ self.sio = socketio.AsyncClient()
+ self.session = aiohttp.ClientSession(
+ connector=aiohttp.TCPConnector(limit=10, limit_per_host=10)
+ )
+
+```
+**File:** robodriver/core/monitor.py (L23-36)
+```python
+ def send_machine_info_periodically(self, interval_seconds=60):
+ """定时发送设备信息(线程安全)"""
+ while self._running:
+ time.sleep(interval_seconds)
+ try:
+ with self._lock:
+ # if self.daemon.robot.status is None:
+ # continue
+ json_data = self.daemon.get_status()
+ # json_data = json.dumps(self._machine_info_dict, ensure_ascii=False)
+
+ response = requests.post(
+ self.url, data=json_data, headers=self.headers, timeout=10
+ )
+```
+**File:** robodriver/core/simulator.py (L163-185)
+```python
+class Simulator:
+ def __init__(self, config: SimulatorConfig):
+ self.config = config
+
+ self.model = mujoco.MjModel.from_xml_path(self.config.xml_path)
+ self.data = mujoco.MjData(self.model)
+
+ self.model.opt.timestep = config.timestep
+
+ self.running_event = threading.Event()
+ self.lock = threading.Lock()
+
+ self.sim_thread = SimulationThread(self.model, self.data, self.config, self.running_event, self.lock)
+ self.view_thread = ViewerRendererThread(self.model, self.data, self.config, self.running_event, self.lock)
+ # self._last_update_time = None
+ # self._max_steps_per_update = 10 # Limit steps to prevent freezing
+
+ def start(self):
+ """启动模拟器线程"""
+ self.running_event.set()
+ self.sim_thread.start()
+ self.view_thread.start()
+
+```
+**File:** pyproject.toml (L31-35)
+```text
+[project.scripts]
+robodriver-run="robodriver.scripts.run:main"
+robodriver-sim="robodriver.scripts.sim:main"
+robodriver-visual="robodriver.scripts.visual:main"
+
+```
+## Q2
+How do I integrate a new robot arm?
+### Answer
+To integrate a new robot arm, follow the steps below: pick one of the three integration methods (ROS1/ROS2/Dora) and mirror the directory structure and implementation of an existing example.
+
+---
+
+## 1 Choose an integration method and create the directory
+- Naming convention: `robodriver-robot-[robot model]-[teleop mode]-[integration type]`, e.g. `robodriver-robot-myarm-aio-dora` RoboDriver:116-119 .
+- Create a directory with that name under `robodriver/robots/`, containing `pyproject.toml`, `README.md`, and a `robodriver_robot_xxx/` package.
+
+## 2 Implement the core files
+### 2.1 Config class (config.py)
+- Subclass `RobotConfig` and register it with `@RobotConfig.register_subclass("myarm_aio_dora")` RoboDriver:11-16 .
+- Define `follower_motors`/`leader_motors` (motors and normalization modes) and `cameras` (camera configs) RoboDriver:18-36 .
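
The registration step can be sketched in isolation. `RobotConfig` below is a minimal stand-in for the real draccus `ChoiceRegistry` base class, and the motor map is hypothetical; only the `register_subclass` usage pattern mirrors the cited code:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

class RobotConfig:
    # stand-in for the draccus.ChoiceRegistry machinery
    _choice_registry: Dict[str, type] = {}

    @classmethod
    def register_subclass(cls, name):
        def decorator(subclass):
            cls._choice_registry[name] = subclass
            return subclass
        return decorator

@RobotConfig.register_subclass("myarm_aio_dora")
@dataclass
class MyArmAioDoraRobotConfig(RobotConfig):
    # hypothetical motor map: joint name -> (index, motor model)
    follower_motors: Dict[str, Tuple[int, str]] = field(
        default_factory=lambda: {"joint1": (1, "myarm-motor")}
    )

# the registry is what later lets --robot.type=myarm_aio_dora find this class
config_cls = RobotConfig._choice_registry["myarm_aio_dora"]
config = config_cls()
```

The decorator simply records the class under its choice name, which is how a string on the command line later resolves to a concrete config class.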
+
+### 2.2 Robot class (robot.py)
+- Subclass `lerobot.robots.robot.Robot` and implement `connect`, `get_observation`, and `send_action` RoboDriver:30-96 .
+- In `__init__`, initialize and start the communication node (e.g. a ROS2 node or a Dora dynamic node) RoboDriver:24-42 .
+- Implement the `observation_features` and `action_features` properties to describe the observation and action structure RoboDriver:75-83 .
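
A skeleton of such a robot class might look like the following. The abstract base here is a reduced stand-in for `lerobot.robots.robot.Robot`, and `MyArmRobot`'s internals are purely illustrative:

```python
import abc

class Robot(abc.ABC):
    # reduced stand-in for lerobot.robots.robot.Robot
    @property
    @abc.abstractmethod
    def observation_features(self) -> dict: ...

    @property
    @abc.abstractmethod
    def action_features(self) -> dict: ...

    @abc.abstractmethod
    def connect(self) -> None: ...

    @abc.abstractmethod
    def get_observation(self) -> dict: ...

    @abc.abstractmethod
    def send_action(self, action: dict) -> dict: ...

class MyArmRobot(Robot):
    def __init__(self):
        self._connected = False
        self._motors = ["joint1", "joint2"]

    @property
    def observation_features(self):
        # joint positions plus one camera image shape (H, W, C)
        features = {f"{m}.pos": float for m in self._motors}
        features["image_top"] = (480, 640, 3)
        return features

    @property
    def action_features(self):
        return {f"{m}.pos": float for m in self._motors}

    def connect(self):
        self._connected = True  # a real driver starts its ROS2/Dora node here

    def get_observation(self):
        return {f"{m}.pos": 0.0 for m in self._motors}

    def send_action(self, action):
        return action  # a real driver forwards this to the hardware node

robot = MyArmRobot()
robot.connect()
obs = robot.get_observation()
```

Note that both feature properties must be callable before `connect()`, since the recorder uses them to build the dataset schema.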
+
+### 2.3 Communication node (per integration method)
+- **ROS2**: implement an rclpy node that subscribes to/publishes joint and image topics (see `A2DAioRos2Node`) RoboDriver:40-42 .
+- **Dora**: implement a dynamic node derived from `DoraRobotNode`, managing a `send_queue` and a `dora_run` thread and handling PyArrow messages RoboDriver:38-40 .
+- **ROS1**: implement a ZMQ bridge script that forwards ROS topics over ZMQ to the main process (see the realman-aio-ros1 example) RoboDriver:41-49 .
+
+### 2.4 Dataflow definition (Dora only)
+- Define the camera nodes, arm node, and dynamic node in `dora/dataflow.yml`, and configure environment variables such as `VIRTUAL_ENV`, `DEVICE_SERIAL`, and `ARM_IP` RoboDriver:1-15 .
+
+## 3 Write the documentation and packaging config
+- In `README.md`, document environment setup, dependency installation, configuration changes, and startup steps; see the realman1-aio-dora README for reference RoboDriver:17-39 .
+- Define the package name and entry points in `pyproject.toml` so the plugin can be discovered dynamically by `robodriver-run`.
+
+## 4 Launch and verify
+```bash
+# Install the plugin
+uv pip install -e ./robodriver/robots/robodriver-robot-myarm-aio-dora
+
+# Start the Dora dataflow (for Dora integration)
+dora start dora/dataflow.yml --uv
+
+# Start RoboDriver
+robodriver-run --robot.type=myarm_aio_dora
+```
+
+## Notes
+- Complete example directories for each integration method: `robodriver-robot-realman-aio-ros1`, `robodriver-robot-a2d-aio-ros2`, `robodriver-robot-agilex-aloha-aio-dora`, `robodriver-robot-realman1-aio-dora`, `robodriver-robot-so101-aio-dora` RoboDriver:98-115 .
+- Plugins are auto-discovered by `import_utils`; make sure the package name starts with `robodriver_robot_` RoboDriver:64-66 .
+- If OpenCV display errors occur, comment out the `cv2.imshow` and `cv2.waitKey` calls in `run.py` RoboDriver:68-71 .
+
+Wiki pages you might want to explore:
+- [Overview (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#1)
+- [Dora Dataflow Integration (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#3.2)
+### Citations
+**File:** README.md (L98-115)
+```markdown
+### 🔌 ROS1 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|------------|------|--------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman-aio-ros1](./robodriver/robots/robodriver-robot-realman-aio-ros1) | [
](https://github.com/zhanglei-web) |
+
+### 🔌 ROS2 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| GALAXEALITE | 基于Galaxealite,双臂6DOF+末端夹爪,4*RealSense相机 | [robodriver/robots/robodriver-robot-galaxealite-aio-ros2](./robodriver/robots/robodriver-robot-galaxealite-aio-ros2) | [](https://github.com/liuyou1103) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-ros2](./robodriver/robots/robodriver-robot-so101-aio-ros2) | [](https://github.com/Ryu-Yang) |
+
+### 🔌 Dora(SDK)接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman1-aio-dora](./robodriver/robots/robodriver-robot-realman1-aio-dora) | [](https://github.com/XuRuntian) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-dora](./robodriver/robots/robodriver-robot-so101-aio-dora) | [](https://github.com/Ryu-Yang) |
+| Franka | 工业级机械臂,6DOF+末端夹爪,1*RealSense相机 | [robodriver/robots/robodriver-robot-franka-aio-dora](./robodriver/robots/robodriver-robot-franka-aio-dora) | [](https://github.com/XuRuntian) |
+
+```
+**File:** README.md (L116-119)
+```markdown
+> ✨ 说明:
+> 1. 接入方式命名规范:`robodriver-robot-[机器人型号]-[遥操方式]-[接入类型]`(如 `aio`/`follwer`/`teleoperate`, `ros2`/`dora`);
+> 2. 每个适配仓库内包含**环境搭建、配置修改、采集/控制验证**等完整接入指南;
+> 3. 持续新增适配机器人,可关注本列表或项目更新。
+```
+**File:** robodriver/robots/configs.py (L11-16)
+```python
+@dataclass
+class RobotConfig(draccus.ChoiceRegistry, abc.ABC):
+ @property
+ def type(self) -> str:
+ return self.get_choice_name(self.__class__)
+
+```
+**File:** robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/config.py (L18-36)
+```python
+ leader_motors: Dict[str, Motor] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "joint1_right": Motor(1, "piper-motor", norm_mode_body),
+ "joint2_right": Motor(2, "piper-motor", norm_mode_body),
+ "joint3_right": Motor(3, "piper-motor", norm_mode_body),
+ "joint4_right": Motor(4, "piper-motor", norm_mode_body),
+ "joint5_right": Motor(5, "piper-motor", norm_mode_body),
+ "joint6_right": Motor(6, "piper-motor", norm_mode_body),
+ "gripper_right": Motor(7, "piper-gripper", MotorNormMode.RANGE_0_100),
+
+ "joint1_left": Motor(8, "piper-motor", norm_mode_body),
+ "joint2_left": Motor(9, "piper-motor", norm_mode_body),
+ "joint3_left": Motor(10, "piper-motor", norm_mode_body),
+ "joint4_left": Motor(11, "piper-motor", norm_mode_body),
+ "joint5_left": Motor(12, "piper-motor", norm_mode_body),
+ "joint6_left": Motor(13, "piper-motor", norm_mode_body),
+ "gripper_left": Motor(14, "piper-gripper", MotorNormMode.RANGE_0_100),
+ }
+ )
+```
+**File:** robodriver/_simulators/robot.py (L30-96)
+```python
+class Robot(abc.ABC):
+ """
+ The base abstract class for all LeRobot-compatible robots.
+
+ This class provides a standardized interface for interacting with physical robots.
+ Subclasses must implement all abstract methods and properties to be usable.
+
+ Attributes:
+ config_class (RobotConfig): The expected configuration class for this robot.
+ name (str): The unique robot name used to identify this robot type.
+ """
+
+ # Set these in ALL subclasses
+ config_class: builtins.type[RobotConfig]
+ name: str
+
+ def __init__(self, config: RobotConfig):
+ self.robot_type = self.name
+ self.id = config.id
+ self.calibration_dir = (
+ config.calibration_dir if config.calibration_dir else HF_LEROBOT_CALIBRATION / ROBOTS / self.name
+ )
+ self.calibration_dir.mkdir(parents=True, exist_ok=True)
+ self.calibration_fpath = self.calibration_dir / f"{self.id}.json"
+ self.calibration: dict[str, MotorCalibration] = {}
+ if self.calibration_fpath.is_file():
+ self._load_calibration()
+
+ def __str__(self) -> str:
+ return f"{self.id} {self.__class__.__name__}"
+
+ # TODO(aliberts): create a proper Feature class for this that links with datasets
+ @property
+ @abc.abstractmethod
+ def observation_features(self) -> dict:
+ """
+ A dictionary describing the structure and types of the observations produced by the robot.
+ Its structure (keys) should match the structure of what is returned by :pymeth:`get_observation`.
+ Values for the dict should either be:
+ - The type of the value if it's a simple value, e.g. `float` for single proprioceptive value (a joint's position/velocity)
+ - A tuple representing the shape if it's an array-type value, e.g. `(height, width, channel)` for images
+
+ Note: this property should be able to be called regardless of whether the robot is connected or not.
+ """
+ pass
+
+ @property
+ @abc.abstractmethod
+ def action_features(self) -> dict:
+ """
+ A dictionary describing the structure and types of the actions expected by the robot. Its structure
+ (keys) should match the structure of what is passed to :pymeth:`send_action`. Values for the dict
+ should be the type of the value if it's a simple value, e.g. `float` for single proprioceptive value
+ (a joint's goal position/velocity)
+
+ Note: this property should be able to be called regardless of whether the robot is connected or not.
+ """
+ pass
+
+ @property
+ @abc.abstractmethod
+ def is_connected(self) -> bool:
+ """
+ Whether the robot is currently connected or not. If `False`, calling :pymeth:`get_observation` or
+ :pymeth:`send_action` should raise an error.
+ """
+ pass
+```
+**File:** robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py (L24-42)
+```python
+ def __init__(self, config: A2DAioRos2RobotConfig):
+ rclpy.init()
+ super().__init__(config)
+ self.config = config
+ self.robot_type = self.config.type
+ self.use_videos = self.config.use_videos
+ self.microphones = self.config.microphones
+
+ # A2D 通常没有 leader(遥操作主手),所以 leader_motors 为空
+ self.leader_motors = config.leader_motors # 应为 {}
+ self.follower_motors = config.follower_motors # 包含 14 arm + 2 gripper
+ self.cameras = make_cameras_from_configs(self.config.cameras)
+
+ # 排除某些虚拟相机(如 pose 图)
+ self.connect_excluded_cameras = ["image_pika_pose"]
+
+ # 启动 ROS2 节点
+ self.robot_ros2_node = A2DAioRos2Node()
+ self.robot_ros2_node.start()
+```
+**File:** robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py (L75-83)
+```python
+ @cached_property
+ def observation_features(self) -> dict[str, Any]:
+ return {**self._follower_motors_ft, **self._cameras_ft}
+
+ @cached_property
+ def action_features(self) -> dict[str, Any]:
+ # A2D 是 follower-only 控制,action 即目标 follower joint
+ # 所以 action_features 应与 follower 一致(用于模仿学习)
+ return self._follower_motors_ft
+```
+**File:** robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/robot.py (L38-40)
+```python
+ self.status = AgilexAlohaAIODoraRobotStatus()
+ self.robot_dora_node = AgilexAlohaAIODoraRobotNode()
+ self.robot_dora_node.start()
+```
+**File:** robodriver/robots/robodriver-robot-realman-aio-ros1/README.md (L41-49)
+```markdown
+### 2.3 Start dataflow (ROS1 → ZMQ bridge) / 启动 ROS1 → ZMQ 数据流
+
+In this repository directory:
+在本仓库目录中运行:
+
+```bash
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-realman-aio-ros1
+python -m robodriver_robot_realman_aio_ros1.ros1_zmq_bridge
+```
+```
+**File:** robodriver/robots/robodriver-robot-realman-aio-ros1/README.md (L68-71)
+```markdown
+- If you encounter OpenCV `cvShowImage` related errors when launching RoboXStudio,
+ you can comment out `cv2.imshow` / `cv2.waitKey` in `robodriver/scripts/run.py`.
+ 如果在启动 RoboXStudio 时遇到 OpenCV `cvShowImage` 的报错,可以在
+ `robodriver/scripts/run.py` 中注释掉 `cv2.imshow` 和 `cv2.waitKey` 调用。
+```
+**File:** robodriver/robots/robodriver-robot-realman1-aio-dora/dora/dataflow.yml (L1-15)
+```yaml
+nodes:
+ - id: camera_top
+ path: ../../../../components/cameras/dora-camera-realsense/dora_camera_realsense/main.py
+ inputs:
+ tick: dora/timer/millis/33
+ outputs:
+ - image
+ - image_depth
+ env:
+ VIRTUAL_ENV: ../.venv
+ DEVICE_SERIAL: 150622072280
+ IMAGE_WIDTH: 640
+ IMAGE_HEIGHT: 480
+
+ - id: camera_left_wrist
+```
+**File:** robodriver/robots/robodriver-robot-realman1-aio-dora/README.md (L17-39)
+```markdown
+安装 robodriver-robot-realman1-aio-dora 模块 环境
+```
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-realman1-aio-dora
+uv venv .venv -p 3.9
+source .venv/bin/activate
+uv pip install -e .
+```
+
+配置 dataflow.yml 文件
+```
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-realman1-aio-dora
+cd dora
+```
+
+打开 dataflow.yml 文件,修改其中的 VIRTUAL_ENV 路径,以及 DEVICE_SERIAL、ARM_IP 和 ARM_PORT 参数。
+### 参数说明:
+
+- VIRTUAL_ENV:realman dora 节点使用的虚拟环境路径
+- DEVICE_SERIAL:RealSense 相机的序列号
+- ARM_IP:机械臂的 IP 地址
+- ARM_PORT:机械臂的端口号
+
+## 开始数据采集
+```
+**File:** robodriver/_simulators/utils.py (L64-66)
+```python
+ try:
+ return cast(Robot, make_device_from_device_class(config))
+ except Exception as e:
+```
+## Q3
+Can I add a robot arm directly via pip install?
+### Answer
+Yes. As long as your package name starts with `robodriver_robot_` and the package is installed into the current Python environment via pip/uv, RoboDriver automatically discovers and registers the arm plugin at startup; there is no need to add the code to the main repository RoboDriver:100-136 .
+
+---
+
+## How it works
+- `register_third_party_devices` runs at `robodriver-run` startup, scans `sys.path` for top-level modules whose names start with prefixes such as `robodriver_robot_`, and imports them so their config classes register with `ChoiceRegistry` RoboDriver:127-136 .
+- `make_device_from_device_class` dynamically instantiates the matching robot class from the config's `type`, including classes loaded from external packages RoboDriver:27-97 .
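
The scan itself is easy to reproduce with the standard library. This sketch uses a subset of the real prefix list and simply returns the matching module names instead of importing them:

```python
import pkgutil

# subset of the prefixes register_third_party_devices scans for
PREFIXES = ("robodriver_robot_", "robodriver_camera_", "robodriver_policy_")

def discover_plugins():
    # list importable top-level modules whose names match a plugin prefix;
    # the real function goes one step further and imports each match
    return sorted(m.name for m in pkgutil.iter_modules() if m.name.startswith(PREFIXES))

plugins = discover_plugins()  # empty unless matching packages are installed
```

Importing a match is what triggers its `@RobotConfig.register_subclass(...)` decorator, which is why a plain `pip install` is enough for registration.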
+
+## Steps
+1. Create a standalone package following the naming convention, e.g. `robodriver_robot_myarm_aio_dora`, that implements and registers a `RobotConfig` subclass, e.g. `@RobotConfig.register_subclass("myarm_aio_dora")` RoboDriver:11-16 .
+2. Install the package into the main RoboDriver environment:
+   ```bash
+   pip install ./robodriver_robot_myarm_aio_dora  # or uv pip install
+   ```
+3. Specify the type at startup:
+   ```bash
+   robodriver-run --robot.type=myarm_aio_dora
+   ```
+
+## Example references
+- Plugins such as `robodriver-robot-realman1-aio-dora` use exactly this mechanism: after `uv pip install -e .` in their directory, the main environment can use `--robot.type=realman1_aio_dora` directly RoboDriver:17-39 .
+
+## Notes
+- Make sure the package name starts with the `robodriver_robot_` prefix; otherwise the scan will not pick it up RoboDriver:107-113 .
+- No changes to the main repository are needed after installing a plugin; restarting `robodriver-run` is enough for it to take effect.
+
+Wiki pages you might want to explore:
+- [Overview (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#1)
+### Citations
+**File:** robodriver/utils/import_utils.py (L27-97)
+```python
+def make_device_from_device_class(config: ChoiceRegistry) -> Any:
+ """
+ Dynamically instantiates an object from its `ChoiceRegistry` configuration.
+
+ This factory uses the module path and class name from the `config` object's
+ type to locate and instantiate the corresponding device class (not the config).
+ It derives the device class name by removing a trailing 'Config' from the config
+ class name and tries a few candidate modules where the device implementation is
+ commonly located.
+ """
+ if not isinstance(config, ChoiceRegistry):
+ raise ValueError(
+ f"Config should be an instance of `ChoiceRegistry`, got {type(config)}"
+ )
+
+ config_cls = config.__class__
+ module_path = (
+ config_cls.__module__
+ ) # typical: lerobot_teleop_mydevice.config_mydevice
+ config_name = config_cls.__name__ # typical: MyDeviceConfig
+
+ # Derive device class name (strip "Config")
+ if not config_name.endswith("Config"):
+ raise ValueError(
+ f"Config class name '{config_name}' does not end with 'Config'"
+ )
+
+ device_class_name = config_name[:-6] # typical: MyDeviceConfig -> MyDevice
+
+ # Build candidate modules to search for the device class
+ parts = module_path.split(".")
+ parent_module = ".".join(parts[:-1]) if len(parts) > 1 else module_path
+ candidates = [
+ parent_module, # typical: lerobot_teleop_mydevice
+ parent_module
+ + "."
+ + device_class_name.lower(), # typical: lerobot_teleop_mydevice.mydevice
+ ]
+
+ # handle modules named like "config_xxx" -> try replacing that piece with "xxx"
+ last = parts[-1] if parts else ""
+ if last.startswith("config_"):
+ candidates.append(".".join(parts[:-1] + [last.replace("config_", "")]))
+
+ # de-duplicate while preserving order
+ seen: set[str] = set()
+ candidates = [c for c in candidates if not (c in seen or seen.add(c))]
+
+ tried: list[str] = []
+ for candidate in candidates:
+ tried.append(candidate)
+ try:
+ module = importlib.import_module(candidate)
+ except ImportError:
+ continue
+
+ if hasattr(module, device_class_name):
+ cls = getattr(module, device_class_name)
+ if callable(cls):
+ try:
+ return cls(config)
+ except TypeError as e:
+ raise TypeError(
+ f"Failed to instantiate '{device_class_name}' from module '{candidate}': {e}"
+ ) from e
+
+ raise ImportError(
+ f"Could not locate device class '{device_class_name}' for config '{config_name}'. "
+ f"Tried modules: {tried}. Ensure your device class name is the config class name without "
+ f"'Config' and that it's importable from one of those modules."
+ )
+```
+**File:** robodriver/utils/import_utils.py (L100-136)
+```python
+def register_third_party_devices() -> None:
+ """
+ Discover and import third-party robodriver_* plugins so they can register themselves.
+
+ Scans top-level modules on sys.path for packages starting with
+ 'robodriver_robot_', 'robodriver_camera_', "robodriver_policy_" or 'robodriver_teleoperator_' and imports them.
+ """
+ prefixes = (
+ "robodriver_robot_",
+ "robodriver_camera_",
+ "robodriver_sim",
+ "robodriver_policy_",
+ "robodriver_teleoperator",
+ )
+ imported: list[str] = []
+ failed: list[str] = []
+
+ for module_info in pkgutil.iter_modules():
+ name = module_info.name
+ if name.startswith(prefixes):
+ _attempt_import(name, imported, failed)
+
+ editable_pattern = re.compile(r"__editable___(.+?)_\d+_\d+_\d+_finder")
+ for module_info in pkgutil.iter_modules():
+ match = editable_pattern.match(module_info.name)
+ if match:
+ original_name = match.group(1).replace(
+ "_", "-"
+ ) # Convert to canonical package name
+ if any(
+ original_name.startswith(prefix.replace("_", "-"))
+ for prefix in prefixes
+ ):
+ # Convert back to module name format (replace - with _)
+ module_name = original_name.replace("-", "_")
+ _attempt_import(module_name, imported, failed)
+
+```
+**File:** robodriver/scripts/run.py (L127-136)
+```python
+def main():
+ git_branch_log()
+
+ register_third_party_devices()
+ logger.info(f"Registered robot types: {list(RobotConfig._choice_registry.keys())}")
+ logger.info(
+ f"Registered teleoperator types: {list(TeleoperatorConfig._choice_registry.keys())}"
+ )
+
+ asyncio.run(async_main())
+```
+**File:** robodriver/robots/robodriver-robot-realman1-aio-dora/README.md (L17-39)
+```markdown
+安装 robodriver-robot-realman1-aio-dora 模块 环境
+```
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-realman1-aio-dora
+uv venv .venv -p 3.9
+source .venv/bin/activate
+uv pip install -e .
+```
+
+配置 dataflow.yml 文件
+```
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-realman1-aio-dora
+cd dora
+```
+
+打开 dataflow.yml 文件,修改其中的 VIRTUAL_ENV 路径,以及 DEVICE_SERIAL、ARM_IP 和 ARM_PORT 参数。
+### 参数说明:
+
+- VIRTUAL_ENV:realman dora 节点使用的虚拟环境路径
+- DEVICE_SERIAL:RealSense 相机的序列号
+- ARM_IP:机械臂的 IP 地址
+- ARM_PORT:机械臂的端口号
+
+## 开始数据采集
+```
+## Q4
+When I run `robodriver-run -h` I see `INFO:root:Environment variable 'ROBODRIVER_HOME' not set. Using default path: /home/dora/DoRobot`. What does this path mean?
+### Answer
+`ROBODRIVER_HOME` is the root directory where RoboDriver stores collected data, datasets, and runtime files. When the variable is unset, it defaults to `~/DoRobot` (a DoRobot directory under the current user's home), which the program creates automatically RoboDriver:26-38 .
+
+---
+
+## What it is used for
+- Default storage location for data collection: task data is stored in subdirectories named by task, with data, videos, depth, meta, audio, and other subfolders RoboDriver:200-237 .
+- Default dataset root: `DOROBOT_DATASET = ROBODRIVER_HOME / "dataset"`, used by DoRobotDataset for loading and replay RoboDriver:49-49 .
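
The fallback logic can be reproduced in a few lines; this is a simplified version of the cited `constants.py` code (the real code additionally calls `.resolve()` and creates the directory):

```python
from pathlib import Path

def resolve_home(env):
    # mirrors the constants.py logic: fall back to ~/DoRobot when unset
    default = Path.home() / "DoRobot"
    return Path(env.get("ROBODRIVER_HOME", str(default))).expanduser()

default_home = resolve_home({})                                  # variable unset
custom_home = resolve_home({"ROBODRIVER_HOME": "/data/my_storage"})  # variable set
```

Passing the environment as a dict makes both branches easy to exercise; the real code reads `os.getenv` directly.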
+
+## Customizing it
+```bash
+export ROBODRIVER_HOME=/path/to/my_storage
+robodriver-run -h
+```
+Once set, the log reports that the custom path is in use RoboDriver:30-36 .
+
+## Example directory layout (after collection)
+```
+/home/dora/DoRobot/
+└─ TaskName_TaskId/
+   ├─ data/    # Parquet motion data
+   ├─ videos/  # MP4 images
+   ├─ depth/   # AVI depth maps
+   ├─ audio/   # WAV audio
+   ├─ meta/    # metadata JSON
+   └─ device/  # device information
+```
+RoboDriver:214-250
+
+## Notes
+- If the directory is not writable, the program raises an error and asks you to point ROBODRIVER_HOME at a writable location RoboDriver:40-44 .
+- Replay and visualization tools also look up datasets under this path by default, unless overridden with the --root argument RoboDriver:21-22 RoboDriver:420-424 .
+
+Wiki pages you might want to explore:
+- [ROS2 Integration (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#3.3)
+### Citations
+**File:** robodriver/utils/constants.py (L26-38)
+```python
+user_home = Path.home()
+ROBODRIVER_HOME = Path(os.getenv("ROBODRIVER_HOME", str(user_home / "DoRobot"))).expanduser().resolve()
+
+
+if "ROBODRIVER_HOME" not in os.environ:
+ logging.info(
+ f"Environment variable 'ROBODRIVER_HOME' not set. Using default path: {ROBODRIVER_HOME}"
+ )
+else:
+ logging.info(f"Environment variable 'ROBODRIVER_HOME' is set to: {ROBODRIVER_HOME}")
+
+try:
+ ROBODRIVER_HOME.mkdir(parents=True, exist_ok=True)
+```
+**File:** robodriver/utils/constants.py (L40-44)
+```python
+except PermissionError as e:
+ logging.error(f"Permission denied! Cannot create {ROBODRIVER_HOME}")
+ logging.error(f"Please check permissions for: {user_home}")
+ logging.error(f"Or set ROBODRIVER_HOME environment variable to a writable location")
+ raise
+```
+**File:** robodriver/utils/constants.py (L49-49)
+```python
+DOROBOT_DATASET = ROBODRIVER_HOME / "dataset"
+```
+**File:** robodriver/robots/robodriver-robot-galaxealite-aio-ros1/README-EN.md (L200-237)
+```markdown
+## Data Description
+### Storage Path
+Collected data is stored in the `/home/yourname/DoRobot` directory by default, organized into folders named by task.
+
+### Directory Structure
+```
+TaskName_TaskId/
+├── audio/ # Audio data
+│ └── chunk-000/
+│ ├── observation.audio.audio_left/ # Left channel audio (WAV format)
+│ └── observation.audio.audio_right/ # Right channel audio (WAV format)
+├── data/ # Motion command/feedback data (Parquet format)
+│ └── chunk-000/
+├── depth/ # Depth images (AVI format)
+│ └── chunk-000/
+│ ├── observation.images.image_depth_right/
+│ └── observation.images.image_depth_top/
+├── device/ # Device information
+│ └── device_info.json
+├── label/ # Annotation data (generated after annotation)
+│ └── data_annotation.json
+├── meta/ # Metadata
+│ ├── common_record.json # Collection task information
+│ ├── episodes.jsonl # Task description and frame length
+│ ├── episodes_stats.jsonl # Normalized statistical information
+│ ├── info.json # Feature schema, frame rate, version
+│ ├── op_dataid.jsonl # Device number
+│ └── tasks.jsonl
+└── videos/ # Visible light images (MP4 format)
+ └── chunk-000/
+ ├── observation.images.image_left/ # Left camera
+ ├── observation.images.image_left_tac_l/ # Left tactile left camera
+ ├── observation.images.image_left_tac_r/ # Left tactile right camera
+ ├── observation.images.image_right/ # Right camera
+ ├── observation.images.image_right_tac_l/ # Right tactile left camera
+ ├── observation.images.image_right_tac_r/ # Right tactile right camera
+ └── observation.images.image_top/ # Top camera
+```
+```
+**File:** robodriver/robots/robodriver-robot-galaxealite-aio-ros2/README.md (L214-250)
+```markdown
+### 存储路径
+采集数据默认存储于 `/home/yourname/DoRobot` 目录,按任务命名划分文件夹。
+
+### 目录结构
+```
+TaskName_TaskId/
+├── audio/ # 音频数据
+│ └── chunk-000/
+│ ├── observation.audio.audio_left/ # 左声道音频(WAV格式)
+│ └── observation.audio.audio_right/ # 右声道音频(WAV格式)
+├── data/ # 运动指令/反馈数据(Parquet格式)
+│ └── chunk-000/
+├── depth/ # 深度图像(AVI格式)
+│ └── chunk-000/
+│ ├── observation.images.image_depth_right/
+│ └── observation.images.image_depth_top/
+├── device/ # 设备信息
+│ └── device_info.json
+├── label/ # 标注数据(标注后生成)
+│ └── data_annotation.json
+├── meta/ # 元数据
+│ ├── common_record.json # 采集任务信息
+│ ├── episodes.jsonl # 任务描述与帧长
+│ ├── episodes_stats.jsonl # 归一化统计信息
+│ ├── info.json # 特征schema、帧率、版本
+│ ├── op_dataid.jsonl # 设备编号
+│ └── tasks.jsonl
+└── videos/ # 可见光图像(MP4格式)
+ └── chunk-000/
+ ├── observation.images.image_left/ # 左侧相机
+ ├── observation.images.image_left_tac_l/ # 左侧触觉左相机
+ ├── observation.images.image_left_tac_r/ # 左侧触觉右相机
+ ├── observation.images.image_right/ # 右侧相机
+ ├── observation.images.image_right_tac_l/ # 右侧触觉左相机
+ ├── observation.images.image_right_tac_r/ # 右侧触觉右相机
+ └── observation.images.image_top/ # 顶部相机
+```
+```
+**File:** robodriver/core/replayer.py (L21-22)
+```python
+ # Root directory where the dataset will be stored (e.g. 'dataset/path').
+ root: str | Path | None = None
+```
+**File:** robodriver/dataset/visual/visualize_dataset_html.py (L420-424)
+```python
+ "--root",
+ type=Path,
+ default=None,
+ help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from hugging face cache folder, or downloaded from the hub if available.",
+ )
+```
+## Q5
+How is data saved?
+### Answer
+RoboDriver's Coordinator receives collection commands and uses the Record component to write observations and actions to disk in the improved single-row LeRobot format, covering images (PNG/MP4), audio (WAV), and metadata (JSON), with an integrity check after saving RoboDriver:113-205 RoboDriver:26-49 .
+
+---
+
+## Save triggers and path construction
+- Trigger: when the Coordinator receives `start_collection`, it checks disk space, builds a `RecordConfig` and a `Record` instance, and calls `record.start()`; on `finish_collection` it calls `record.stop()` and `record.save()` RoboDriver:113-219 .
+- Paths: the default root is `ROBODRIVER_HOME/dataset`, with collection directories organized as `date/user/TaskName_TaskId/TaskName_TaskId_DataId`; the root can be customized via the `ROBODRIVER_HOME` environment variable RoboDriver:26-49 .
+
+## Data format and file layout
+- Parquet: one row per frame (single-episode storage), using the path template `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet` RoboDriver:42-44 .
+- Images: optionally PNG (per frame) or MP4 (video); see the `DEFAULT_IMAGE_PATH` and `DEFAULT_VIDEO_PATH` templates.
+- Audio: WAV files, written asynchronously per device by `AsyncAudioWriter` RoboDriver:12-30 .
+- Metadata: the `meta/` directory holds info.json, episodes.jsonl, episodes_stats.jsonl, tasks.jsonl, common_record.json, and more RoboDriver:214-250 .
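+
The Parquet path template is an ordinary Python format string; filling in the chunk and episode indices yields the per-episode file path:

```python
# Template from robodriver/utils/dataset.py; the chunk and episode indices
# are zero-padded to 3 and 6 digits respectively.
DEFAULT_PARQUET_PATH = (
    "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
)

path = DEFAULT_PARQUET_PATH.format(episode_chunk=0, episode_index=0)
print(path)  # data/chunk-000/episode_000000.parquet
```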
+
+## Concurrent writes and threading model
+- Images: an ImageWriter thread pool (one thread per camera) asynchronously writes PNG frames or encodes MP4 RoboDriver:1-258 .
+- Audio: AsyncAudioWriter starts an independent recording task per microphone and writes WAV via soundfile RoboDriver:31-73 .
+- Main thread: the control loop calls `add_frame` to append observations/actions to the episode buffer, which the Record thread finally flushes to Parquet.
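+
The per-camera writer pattern boils down to one queue plus one worker thread per camera, stopped with a sentinel; this is an illustrative sketch, not the actual ImageWriter API:

```python
import queue
import threading

class CameraWriter:
    """One background writer per camera; frames are enqueued off the hot path."""

    def __init__(self, name: str):
        self.name = name
        self.q: queue.Queue = queue.Queue()
        self.written = []  # stands in for files on disk
        self.t = threading.Thread(target=self._run, daemon=True)
        self.t.start()

    def _run(self):
        while True:
            item = self.q.get()
            if item is None:  # sentinel: stop the worker
                break
            frame_idx, _frame = item
            self.written.append(frame_idx)  # a real writer would encode PNG/MP4

    def stop(self):
        self.q.put(None)
        self.t.join()

w = CameraWriter("image_top")
for i in range(3):
    w.q.put((i, b"fake-frame"))
w.stop()
print(w.written)  # [0, 1, 2]
```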
+
+## Post-save verification
+- File integrity: checks that the Parquet file is readable and that the videos/images directories exist with matching frame counts RoboDriver:403-434 .
+- Frame rate: compares the video frame rate against the configured FPS and produces a verification report RoboDriver:780-821 .
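+
The frame-rate check amounts to comparing the measured video FPS against the configured FPS within a tolerance; a minimal sketch (the tolerance value here is an assumption, not the repository's):

```python
def fps_check(video_fps: float, config_fps: float, tol: float = 0.5) -> str:
    """Return 'pass' when the measured fps is within tol of the configured fps."""
    return "pass" if abs(video_fps - config_fps) <= tol else "no pass"

print(fps_check(29.97, 30))  # pass
print(fps_check(25.0, 30))   # no pass
```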
+
+## Replay and visualization
+- DoRobotDataset reads local datasets, loading from `DOROBOT_DATASET` by default, with `--root` as an override RoboDriver:21-22 RoboDriver:420-424 .
+- The visualization tools support local and remote modes, replaying video and time-series data through Rerun or a web UI RoboDriver:242-338 .
+
+## Example directory layout
+```
+/home/dora/DoRobot/dataset/20250827/user/TaskA_001/TaskA_001_abc123/
+├─ data/chunk-000/episode_000000.parquet
+├─ videos/chunk-000/image_top/episode_000000.mp4
+├─ audio/chunk-000/audio_left/episode_000000.wav
+├─ meta/info.json
+├─ meta/episodes.jsonl
+└─ device/device_info.json
+```
+
+## Notes
+- Single-episode storage makes per-frame editing and transfer easy; for training, episodes can be aggregated into the multi-episode format RoboDriver:42-43 .
+- If free disk space drops below 2GB, collection is refused RoboDriver:117-120 .
+- For the full save flow, see the Record class implementation and the Coordinator's `__on_robot_command_handle` RoboDriver:1-258 .
+
+Wiki pages you might want to explore:
+- [Overview (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#1)
+### Citations
+**File:** robodriver/core/coordinator.py (L113-219)
+```python
+ elif data.get("cmd") == "start_collection":
+ logger.info("处理开始采集命令...")
+ msg = data.get("msg")
+
+ if not check_disk_space(min_gb=2):
+ logger.warning("存储空间不足,小于2GB,取消采集!")
+ await self.send_response("start_collection", "存储空间不足,小于2GB")
+ return
+
+ if self.replaying == True:
+ logger.warning("Replay is running, cannot start collection.")
+ await self.send_response("start_collection", "fail")
+ return
+
+ if self.recording == True:
+ self.record.stop()
+ self.record.discard()
+ self.recording = False
+
+ self.recording = True
+
+ task_id = msg.get("task_id")
+ task_name = msg.get("task_name")
+ task_data_id = msg.get("task_data_id")
+ countdown_seconds = msg.get("countdown_seconds", 3)
+ task_dir = f"{task_name}_{task_id}"
+ repo_id = f"{task_name}_{task_id}_{task_data_id}"
+
+ date_str = datetime.now().strftime("%Y%m%d")
+
+ # 构建目标目录路径
+ dataset_path = DOROBOT_DATASET
+
+ git_branch_name = get_current_git_branch()
+ target_dir = dataset_path / date_str / "user" / task_dir / repo_id
+ # if "release" in git_branch_name or "main" in git_branch_name:
+ # target_dir = dataset_path / date_str / "user" / task_dir / repo_id
+ # elif "dev" in git_branch_name:
+ # target_dir = dataset_path / date_str / "dev" / task_dir / repo_id
+ # else:
+ # target_dir = dataset_path / date_str / "dev" / task_dir / repo_id
+
+ # 判断是否存在对应文件夹以决定是否启用恢复模式
+ resume = False
+
+ # 检查数据集目录是否存在
+ if not dataset_path.exists():
+ logger.info(
+ f"Dataset directory '{dataset_path}' does not exist. Cannot resume."
+ )
+ else:
+ # 检查目标文件夹是否存在且为目录
+ if target_dir.exists() and target_dir.is_dir():
+ # resume = True
+ # logging.info(f"Found existing directory for repo_id '{repo_id}'. Resuming operation.")
+
+ logger.info(
+ f"Found existing directory for repo_id '{repo_id}'. Delete directory."
+ )
+ shutil.rmtree(target_dir)
+ time.sleep(0.5) # make sure delete success.
+ else:
+ logger.info(
+ f"No directory found for repo_id '{repo_id}'. Starting fresh."
+ )
+
+ # resume 变量现在可用于后续逻辑
+ logger.info(f"Resume mode: {'Enabled' if resume else 'Disabled'}")
+
+ record_cfg = RecordConfig(
+ fps=DEFAULT_FPS,
+ single_task=task_name,
+ repo_id=repo_id,
+ video=self.daemon.robot.use_videos,
+ resume=resume,
+ root=target_dir,
+ )
+ self.record = Record(
+ fps=DEFAULT_FPS,
+ robot=self.daemon.robot,
+ daemon=self.daemon,
+ teleop=self.teleop,
+ record_cfg=record_cfg,
+ record_cmd=msg,
+ )
+ # 发送响应
+ await self.send_response("start_collection", "success")
+ # 开始采集倒计时
+ logger.info(f"开始采集倒计时{countdown_seconds}s...")
+ time.sleep(countdown_seconds)
+
+ # 开始采集
+ self.record.start()
+
+ elif data.get("cmd") == "finish_collection":
+ logger.info("处理完成采集命令...")
+ if self.replaying == True:
+ logger.warning("Replay is running, cannot finish collection.")
+ await self.send_response("finish_collection", "fail")
+ return
+
+ if not self.saveing and self.record.save_data is None:
+ # 如果不在保存状态,立即停止记录并保存
+ self.saveing = True
+ self.record.stop()
+ self.record.save()
+ self.recording = False
+```
+**File:** robodriver/utils/constants.py (L26-49)
+```python
+user_home = Path.home()
+ROBODRIVER_HOME = Path(os.getenv("ROBODRIVER_HOME", str(user_home / "DoRobot"))).expanduser().resolve()
+
+
+if "ROBODRIVER_HOME" not in os.environ:
+ logging.info(
+ f"Environment variable 'ROBODRIVER_HOME' not set. Using default path: {ROBODRIVER_HOME}"
+ )
+else:
+ logging.info(f"Environment variable 'ROBODRIVER_HOME' is set to: {ROBODRIVER_HOME}")
+
+try:
+ ROBODRIVER_HOME.mkdir(parents=True, exist_ok=True)
+ logging.info(f"Directory ready: {ROBODRIVER_HOME}")
+except PermissionError as e:
+ logging.error(f"Permission denied! Cannot create {ROBODRIVER_HOME}")
+ logging.error(f"Please check permissions for: {user_home}")
+ logging.error(f"Or set ROBODRIVER_HOME environment variable to a writable location")
+ raise
+except Exception as e:
+ logging.error(f"Failed to create directory: {e}")
+ raise
+
+DOROBOT_DATASET = ROBODRIVER_HOME / "dataset"
+```
+**File:** robodriver/utils/dataset.py (L42-44)
+```python
+DEFAULT_PARQUET_PATH = (
+ "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
+)
+```
+**File:** robodriver/dataset/audio_writer.py (L12-30)
+```python
+class AsyncAudioWriter:
+ """异步多设备音频录制器
+
+ 示例:
+ writer = AsyncAudioWriter(
+ microphones={
+ "left": 0, # 设备ID 0
+ "right": 1 # 设备ID 1
+ },
+ savepath={
+ "left": Path("./cache/audio/left.wav"), # 使用Path对象
+ "right": Path("./cache/audio/right.wav") # 使用Path对象
+ }
+ )
+ writer.start()
+ # ... 录制中 ...
+ writer.stop()
+ writer.wait_until_done() # 确保所有数据写入完成
+ """
+```
+**File:** robodriver/dataset/audio_writer.py (L31-73)
+```python
+
+ def __init__(
+ self,
+ microphones: Dict[str, int],
+ savepath: Dict[str, Path], # 修改为Path对象字典
+ samplerate: Optional[int] = None,
+ channels: int = 1,
+ subtype: Optional[str] = "PCM_16",
+ ):
+ """
+ 初始化音频录制器
+
+ Args:
+ microphones: 设备ID映射字典 (设备名称 -> 设备ID)
+ savepath: 保存路径映射字典 (设备名称 -> Path对象)
+ samplerate: 采样率 (None表示使用设备默认值)
+ channels: 声道数 (默认1)
+ subtype: 音频格式 (如 "PCM_24", None表示使用soundfile默认)
+ """
+ # 验证每个microphones键都能在savepath中找到对应的键(通过后缀匹配)
+ for mic_key in microphones.keys():
+ # 查找包含mic_key的savepath键
+ matching_keys = [k for k in savepath.keys() if k.endswith(mic_key)]
+
+ if not matching_keys:
+ raise ValueError(
+ f"savepath中找不到与microphones键 '{mic_key}' 对应的键\n"
+ f"microphones keys: {list(microphones.keys())}\n"
+ f"savepath keys: {list(savepath.keys())}"
+ )
+
+ self.microphones = microphones
+ self.savepath = savepath # 现在是一个字典 {设备名: Path对象}
+ self.samplerate = samplerate
+ self.channels = channels
+ self.subtype = subtype
+
+ # 内部状态
+ self._tasks: List[Dict] = [] # 存储每个任务的资源
+ self._is_started = False
+ self._stop_event = threading.Event()
+ self._stop_timeout = 5.0 # 停止超时时间(秒)
+
+```
+**File:** robodriver/utils/data_file.py (L403-434)
+```python
+ img_session_dir = images_dir
+
+ if not parquet_path.exists():
+ verification_result["file_integrity"] = "no pass"
+ verification_result["file_integrity_comment"] = (
+ f"检测不到Parquet文件: {parquet_path}"
+ )
+ return {"verification": verification_result}
+
+ try:
+ df = pd.read_parquet(parquet_path)
+ except Exception as e:
+ verification_result["file_integrity"] = "no pass"
+ verification_result["file_integrity_comment"] = f"读取Parquet文件失败: {str(e)}"
+ return {"verification": verification_result}
+
+ # 2. 确定媒体目录和类型(核心:优先videos,后images)
+ media_dir = None
+ media_type = None
+ if videos_dir.exists():
+ media_dir = videos_dir
+ media_type = "video"
+ print(f"[INFO] 检测到videos目录,优先使用该目录校验: {videos_dir}")
+ elif images_dir.exists():
+ media_dir = images_dir
+ media_type = "image"
+ print(f"[INFO] 未检测到videos目录,使用images目录校验: {images_dir}")
+ else:
+ verification_result["camera_frame_rate"] = "no pass"
+ verification_result["file_integrity"] = "no pass"
+ verification_result["file_integrity_comment"] = "未检测到videos/images目录"
+ return {"verification": verification_result}
+```
+**File:** robodriver/utils/data_file.py (L780-821)
+```python
+def update_common_record_json(path, data):
+ opdata_path = os.path.join(path, "meta", "common_record.json")
+ os.makedirs(os.path.dirname(opdata_path), exist_ok=True)
+ if os.path.isfile(opdata_path):
+ with open(opdata_path, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ if "task_id" in data:
+ return
+ overwrite_data = {
+ "task_id": str(data["task_id"]),
+ "task_name": str(data["task_name"]),
+ "machine_id": str(data["machine_id"]),
+ }
+ # 假设data变量已提前定义(包含所需的所有键值对)
+ # overwrite_data = {
+ # "task_id": str(data["task_id"]),
+ # "task_name": str(data["task_name"]),
+ # "machine_id": str(data["machine_id"]),
+ # "scene_type": str(data["scene_type"]), # 场景标签
+ # "task_description": str(data["task_description"]), # 任务描述
+ # "tele_type": str(data["tele_type"]), # 遥操作类型
+ # "task_instruction": str(data["task_instruction"]) # 任务步骤
+ # }
+
+ # 以追加模式打开文件(如果不存在则创建)
+ with open(opdata_path, "w", encoding="utf-8") as f:
+ # 写入一行 JSON 数据(每行一个 JSON 对象)
+ f.write(json.dumps(overwrite_data, ensure_ascii=False) + "\n")
+
+
+def check_disk_space(min_gb=1):
+ # 获取根目录(/)的磁盘使用情况(Docker 默认挂载点)
+ total, used, free = shutil.disk_usage("/")
+ # 转换为 GB
+ free_gb = free // (2**30) # 1 GB = 2^30 bytes
+ if free_gb < min_gb:
+ print(f"警告:剩余存储空间不足 {min_gb}GB(当前剩余 {free_gb}GB)")
+ return False
+ else:
+ print(f"存储空间充足(剩余 {free_gb}GB)")
+ return True
+
+```
+**File:** robodriver/dataset/visual/visualize_dataset_html.py (L420-424)
+```python
+ "--root",
+ type=Path,
+ default=None,
+ help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from hugging face cache folder, or downloaded from the hub if available.",
+ )
+```
+**File:** robodriver/dataset/visual/visual_dataset.py (L242-338)
+```python
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--repo-id",
+ type=str,
+ required=True,
+ help="Name of hugging face repository containing a DoRobotDataset dataset (e.g. `lerobot/pusht`).",
+ )
+ parser.add_argument(
+ "--episode-index",
+ type=int,
+ required=True,
+ help="Episode to visualize.",
+ )
+ parser.add_argument(
+ "--root",
+ type=Path,
+ default=None,
+ help="Root directory for the dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from hugging face cache folder, or downloaded from the hub if available.",
+ )
+ parser.add_argument(
+ "--output-dir",
+ type=Path,
+ default=None,
+ help="Directory path to write a .rrd file when `--save 1` is set.",
+ )
+ parser.add_argument(
+ "--batch-size",
+ type=int,
+ default=32,
+ help="Batch size loaded by DataLoader.",
+ )
+ parser.add_argument(
+ "--num-workers",
+ type=int,
+ default=4,
+ help="Number of processes of Dataloader for loading the data.",
+ )
+ parser.add_argument(
+ "--mode",
+ type=str,
+ default="local",
+ help=(
+ "Mode of viewing between 'local' or 'distant'. "
+ "'local' requires data to be on a local machine. It spawns a viewer to visualize the data locally. "
+ "'distant' creates a server on the distant machine where the data is stored. "
+ "Visualize the data by connecting to the server with `rerun ws://localhost:PORT` on the local machine."
+ ),
+ )
+ parser.add_argument(
+ "--web-port",
+ type=int,
+ default=9195,
+ help="Web port for rerun.io when `--mode distant` is set.",
+ )
+ parser.add_argument(
+ "--ws-port",
+ type=int,
+ default=9285,
+ help="Web socket port for rerun.io when `--mode distant` is set.",
+ )
+ parser.add_argument(
+ "--save",
+ type=int,
+ default=0,
+ help=(
+ "Save a .rrd file in the directory provided by `--output-dir`. "
+ "It also deactivates the spawning of a viewer. "
+ "Visualize the data by running `rerun path/to/file.rrd` on your local machine."
+ ),
+ )
+
+ parser.add_argument(
+ "--tolerance-s",
+ type=float,
+ default=1e-4,
+ help=(
+ "Tolerance in seconds used to ensure data timestamps respect the dataset fps value"
+ "This is argument passed to the constructor of DoRobotDataset and maps to its tolerance_s constructor argument"
+ "If not given, defaults to 1e-4."
+ ),
+ )
+
+ args = parser.parse_args()
+ kwargs = vars(args)
+ repo_id = kwargs.pop("repo_id")
+ root = kwargs.pop("root")
+ tolerance_s = kwargs.pop("tolerance_s")
+
+ logging.info("Loading dataset")
+
+ stop_event = Event()
+ dataset = DoRobotDataset(repo_id, root=root, tolerance_s=tolerance_s)
+
+ # try:
+ visualize_dataset(dataset, **vars(args), stop_event=stop_event)
+```
+**File:** README.md (L42-43)
+```markdown
+- **改进的LeRobot数据集格式**:在不同数据阶段采取不同数据结构。在采集端单条储存,更易编辑和传输。还扩展了LeRobot格式的内容。
+
+```
+## Q6
+I mean: after I finish collecting one episode, what do I do to save it?
+### Answer
+After collection, click "Finish collection" in RoboXStudio or send the `finish_collection` command to the Coordinator; the system then stops recording and saves the data to disk automatically RoboDriver:207-233 .
+
+---
+
+## Save trigger flow
+- **Trigger**: click "Finish collection" in the RoboXStudio UI, or send the SocketIO command `{"cmd": "finish_collection"}` directly to the Coordinator RoboDriver:207-213 .
+- **Internal steps**:
+  1. The Coordinator sets `self.saveing` to true and calls `self.record.stop()` to stop the recording thread RoboDriver:216-218 .
+  2. It calls `self.record.save()` to flush the in-memory episode buffer to Parquet and waits for the image/audio threads to finish writing RoboDriver:218-220 .
+  3. Once saving completes, it resets `self.recording` and `self.saveing` to false and returns `self.record.save_data` (containing the metadata) to the server RoboDriver:220-233 .
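+
The command payloads involved can be sketched as plain dicts (`cmd` names come from the cited `coordinator.py`; the field values, SocketIO event name, and server address are deployment-specific and illustrative):

```python
import json

# What a client would send to start, finish, or discard a collection.
start_cmd = {
    "cmd": "start_collection",
    "msg": {
        "task_id": "001",         # illustrative values
        "task_name": "TaskA",
        "task_data_id": "abc123",
        "countdown_seconds": 3,   # default used by the Coordinator
    },
}
finish_cmd = {"cmd": "finish_collection"}
discard_cmd = {"cmd": "discard_collection"}

print(json.dumps(finish_cmd))  # {"cmd": "finish_collection"}
```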
+
+## Behavior details
+- If a directory with the same name already exists, the current implementation deletes it when collection starts and records afresh, so "Finish collection" always saves a complete new episode; resumable saving is not supported (resume=False) RoboDriver:169-177 .
+- The save path is `ROBODRIVER_HOME/dataset/date/user/taskname_taskid/taskname_taskid_dataid`; the root can be customized via the `ROBODRIVER_HOME` environment variable RoboDriver:26-49 .
+
+## Notes
+- Saving can take a while (encoding images to MP4, writing WAV); the UI shows a saving state during this period, so do not close the program RoboDriver:222-225 .
+- To discard the current episode, send the `discard_collection` command; the system calls `record.discard()` and deletes the temporary data RoboDriver:235-249 .
+
+Wiki pages you might want to explore:
+- [Overview (FlagOpen/RoboDriver)](/wiki/FlagOpen/RoboDriver#1)
+### Citations
+**File:** robodriver/core/coordinator.py (L169-177)
+```python
+ logger.info(
+ f"Found existing directory for repo_id '{repo_id}'. Delete directory."
+ )
+ shutil.rmtree(target_dir)
+ time.sleep(0.5) # make sure delete success.
+ else:
+ logger.info(
+ f"No directory found for repo_id '{repo_id}'. Starting fresh."
+ )
+```
+**File:** robodriver/core/coordinator.py (L207-233)
+```python
+ elif data.get("cmd") == "finish_collection":
+ logger.info("处理完成采集命令...")
+ if self.replaying == True:
+ logger.warning("Replay is running, cannot finish collection.")
+ await self.send_response("finish_collection", "fail")
+ return
+
+ if not self.saveing and self.record.save_data is None:
+ # 如果不在保存状态,立即停止记录并保存
+ self.saveing = True
+ self.record.stop()
+ self.record.save()
+ self.recording = False
+ self.saveing = False
+
+ # 如果正在保存,循环等待直到 self.record.save_data 有数据
+ while self.saveing:
+ time.sleep(0.1) # 避免CPU过载,适当延迟
+ # 此时无论 saveing 状态如何,self.record.save_data 已有有效数据
+ response_data = {
+ "msg": "success",
+ "data": self.record.save_data,
+ }
+ # 发送响应
+ await self.send_response(
+ "finish_collection", response_data["msg"], response_data
+ )
+```
+**File:** robodriver/core/coordinator.py (L235-249)
+```python
+ elif data.get("cmd") == "discard_collection":
+ # 模拟处理丢弃采集
+ logger.info("处理丢弃采集命令...")
+
+ if self.replaying == True:
+ logger.warning("Replay is running, cannot discard collection.")
+ await self.send_response("discard_collection", "fail")
+ return
+
+ self.record.stop()
+ self.record.discard()
+ self.recording = False
+
+ # 发送响应
+ await self.send_response("discard_collection", "success")
+```
+**File:** robodriver/utils/constants.py (L26-49)
+```python
+user_home = Path.home()
+ROBODRIVER_HOME = Path(os.getenv("ROBODRIVER_HOME", str(user_home / "DoRobot"))).expanduser().resolve()
+
+
+if "ROBODRIVER_HOME" not in os.environ:
+ logging.info(
+ f"Environment variable 'ROBODRIVER_HOME' not set. Using default path: {ROBODRIVER_HOME}"
+ )
+else:
+ logging.info(f"Environment variable 'ROBODRIVER_HOME' is set to: {ROBODRIVER_HOME}")
+
+try:
+ ROBODRIVER_HOME.mkdir(parents=True, exist_ok=True)
+ logging.info(f"Directory ready: {ROBODRIVER_HOME}")
+except PermissionError as e:
+ logging.error(f"Permission denied! Cannot create {ROBODRIVER_HOME}")
+ logging.error(f"Please check permissions for: {user_home}")
+ logging.error(f"Or set ROBODRIVER_HOME environment variable to a writable location")
+ raise
+except Exception as e:
+ logging.error(f"Failed to create directory: {e}")
+ raise
+
+DOROBOT_DATASET = ROBODRIVER_HOME / "dataset"
+```
+
+# DeepWiki Q&A with Code Context for Repository: FlagOpen/RoboDriver
+## Q1
+Which robotic arms are supported, and how do I add a new one?
+### Answer
+# Supported Robotic Arms in RoboDriver and How to Add a New One
+
+## 1. Currently supported arms
+
+RoboDriver groups supported arms by integration type: RoboDriver:96-119
+
+### Organized by directory structure:
+
+| Robot model | Integration | Directory name | Robot type identifier |
+|---|---|---|---|
+| **Realman arm** | ROS1 | `robodriver-robot-realman-aio-ros1` | `realman_aio_ros1` |
+| **Realman arm** | Dora (SDK) | `robodriver-robot-realman1-aio-dora` | `realman1_aio_dora` |
+| **SO101 arm** | ROS2 | `robodriver-robot-so101-aio-ros2` | `so101_aio_ros2` |
+| **SO101 arm** | Dora (SDK) | `robodriver-robot-so101-aio-dora` | — |
+| **SO101 arm** | Dora (follower mode) | `robodriver-robot-so101-follower-dora` | — |
+| **Franka arm** | Dora (SDK) | `robodriver-robot-franka-aio-dora` | `franka_aio_dora` |
+| **GALAXEALITE dual-arm** | ROS2 | `robodriver-robot-galaxealite-aio-ros2` | `galaxealite-aio-ros2` |
+| **GALAXEALITE dual-arm** | ROS1 | `robodriver-robot-galaxealite-aio-ros1` | — |
+| **GALAXEALITE dual-arm** | ROS1 (follower mode) | `robodriver-robot-galaxealite-follower-ros1` | — |
+| **Agilex Aloha dual-arm** | Dora (SDK) | `robodriver-robot-agilex-aloha-aio-dora` | `agilex_aloha_aio_dora` |
+| **Unitree A2D (G1)** | ROS2 | `robodriver-robot-a2d-aio-ros2` | `a2d_aio_ros2` |
+
+The simulation environment additionally supports the **Franka arm** (Genesis simulation): RoboDriver:91-93
+
+---
+
+## 2. Structure of an arm adapter package
+
+Taking Franka (Dora integration) as an example, each robot adapter package contains these core files:
+
+```
+robodriver_robot_xxx_xxx_xxx/
+├── __init__.py # exports the Robot class and Config class
+├── config.py # robot configuration (motors, cameras, etc.)
+├── node.py # communication node (Dora/ROS)
+├── robot.py # Robot class implementation
+└── status.py # status definitions (optional)
+``` RoboDriver:1-3
+
+---
+
+## 3. How to add a new arm
+
+### Step overview
+
+```mermaid
+graph TD
+    A["1. Create an adapter package under robots/"] --> B["2. Implement config.py (register a RobotConfig subclass)"]
+    B --> C["3. Implement node.py (communication node)"]
+    C --> D["4. Implement robot.py (subclass the Robot base class)"]
+    D --> E["5. Install the adapter with pip install -e ."]
+    E --> F["6. Launch with --robot.type=xxx"]
+```
+
+---
+
+### Step 1: Naming and directory layout
+
+Follow the naming convention `robodriver-robot-[model]-[teleoperation-mode]-[integration-type]` and create a new directory under `robodriver/robots/`: RoboDriver:116-127
+
+---
+
+### Step 2: Implement `config.py`
+
+Register your robot type with the `@RobotConfig.register_subclass("your_robot_type")` decorator, and declare motors (`motors`/`leader_motors`/`follower_motors`) and cameras (`cameras`) in the dataclass:
+
+See the SO101 ROS2 config for reference: RoboDriver:10-50
+
+See the Franka Dora config for reference: RoboDriver:27-78
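+
What `@register_subclass` accomplishes can be shown with a minimal stand-in registry (the real `RobotConfig` comes from LeRobot; only the mapping from a `--robot.type` string to a config dataclass is sketched here):

```python
from dataclasses import dataclass, field
from typing import Dict

class RobotConfig:
    """Stand-in for LeRobot's RobotConfig choice registry."""
    _choice_registry: Dict[str, type] = {}

    @classmethod
    def register_subclass(cls, name: str):
        def wrap(subclass):
            cls._choice_registry[name] = subclass  # --robot.type name -> config
            return subclass
        return wrap

@RobotConfig.register_subclass("my_arm_aio_dora")  # hypothetical robot type
@dataclass
class MyArmAioDoraRobotConfig(RobotConfig):
    cameras: Dict[str, int] = field(default_factory=lambda: {"image_top": 0})
    use_videos: bool = False

print(list(RobotConfig._choice_registry))  # ['my_arm_aio_dora']
```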
+
+---
+
+### Step 3: Implement `node.py` (communication node)
+
+There are two main approaches, depending on the integration type:
+
+**ROS2**: subclass `rclpy.node.Node`, subscribe to the joint-state and image topics, and configure the topic names in `NODE_CONFIG`: RoboDriver:24-58
+
+**Dora**: use `dora.Node` to receive data, handling send/receive in a dedicated thread: RoboDriver:23-60
+
+---
+
+### Step 4: Implement `robot.py`
+
+Subclass LeRobot's `Robot` base class and implement the required methods:
+
+- `connect()` / `disconnect()`
+- `get_observation()`
+- `get_action()`
+- `send_action()`
+- the `is_connected` / `is_calibrated` properties
+
+See the SO101 ROS2 robot class for reference: RoboDriver:20-44
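+
The required surface can be outlined with a skeleton; a plain class stands in for LeRobot's `Robot` base so the sketch is self-contained, and all method bodies are placeholders:

```python
class MyArmRobot:
    """Skeleton of the methods a robot.py must provide (names follow LeRobot's Robot)."""
    name = "my_arm_aio_dora"  # hypothetical type name

    def __init__(self):
        self.connected = False

    @property
    def is_connected(self) -> bool:
        return self.connected

    @property
    def is_calibrated(self) -> bool:
        return True  # real code would check calibration state

    def connect(self):
        self.connected = True  # real code: start the node, open cameras

    def disconnect(self):
        self.connected = False

    def get_observation(self) -> dict:
        return {"joint_1.pos": 0.0}  # real code: node state + camera frames

    def get_action(self) -> dict:
        return {"joint_1.pos": 0.0}  # leader-side action

    def send_action(self, action: dict) -> dict:
        return action  # real code: publish the command to the follower

r = MyArmRobot()
r.connect()
print(r.is_connected, r.get_observation())  # True {'joint_1.pos': 0.0}
```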
+
+---
+
+### Step 5: Automatic registration
+
+At startup, RoboDriver scans all installed packages whose names start with the `robodriver_robot_` prefix and imports them automatically, letting them register themselves: RoboDriver:100-139
+
+`main()` in `run.py` invokes this auto-registration logic: RoboDriver:127-136
+
+So once the adapter package is installed with `pip install -e .`, RoboDriver discovers and registers the new robot type automatically.
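+
The discovery loop reduces to scanning top-level modules for the plugin prefixes; this sketch only collects matching names instead of importing them:

```python
import pkgutil

# Prefixes from robodriver/utils/import_utils.py.
prefixes = (
    "robodriver_robot_",
    "robodriver_camera_",
    "robodriver_sim",
    "robodriver_policy_",
    "robodriver_teleoperator",
)

# On a machine with adapter packages installed, this lists them;
# otherwise the list is simply empty.
found = [m.name for m in pkgutil.iter_modules() if m.name.startswith(prefixes)]
print(found)
```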
+
+---
+
+### Step 6: Launch and verify
+
+After installation, launch with `--robot.type=<your registered type name>`:
+
+See how Franka is launched: RoboDriver:36-40
+
+See how Agilex Aloha is launched: RoboDriver:233-240
+
+---
+
+## Notes
+
+1. **Naming convention**: the Python package name must start with `robodriver_robot_` (underscore-separated); this is the prefix the auto-registration scan looks for. Directory names may use hyphens (e.g. `robodriver-robot-xxx`), while package names use underscores (e.g. `robodriver_robot_xxx`).
+2. **LeRobot compatibility**: every Robot class subclasses LeRobot's `Robot` base class directly, so RoboDriver and LeRobot are mutually compatible. RoboDriver:40-42
+3. **Contribution flow**: when development is done, submit a PR to the main repository's `dev` branch, with the code placed under `robodriver/robots/`. RoboDriver:121-127
+4. **Dora** adapter packages additionally need a `dora/dataflow.yml` file defining the dataflow graph, while ROS adapters configure the corresponding ROS topic names (in `NODE_CONFIG` in `node.py`).
+### Citations
+**File:** README.md (L40-42)
+```markdown
+- **多种机器人接入方式**: RoboDriver 考虑了除SDK外,使用ROS、Dora的接入方式。
+- **LeRobot兼容**: RoboDriver 的机器人接口直接使用了LeRobot的`Robot`类,这意味着RoboDriver与LeRobot可以互相兼容。
+- **改进的LeRobot数据集格式**:在不同数据阶段采取不同数据结构。在采集端单条储存,更易编辑和传输。还扩展了LeRobot格式的内容。
+```
+**File:** README.md (L91-93)
+```markdown
+| 机器人型号 | 简介 | 仓库链接 | 贡献人 |
+|------------|------|--------------|------------------------|
+| Franka 机械臂 | 一台Franka机械臂抓取方块 | [robodriver/simulations/robodriver-sim-genesis-franka-aio-dora](./robodriver/simulations/robodriver-sim-genesis-franka-aio-dora) | [](https://github.com/Ryu-Yang) |
+```
+**File:** README.md (L96-127)
+```markdown
+RoboDriver 已完成多款主流机器人的适配,按接入方式示例如下(各仓库包含对应机器人的接入步骤、环境配置、指令适配等完整指南):
+
+### 🔌 ROS1 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|------------|------|--------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman-aio-ros1](./robodriver/robots/robodriver-robot-realman-aio-ros1) | [
](https://github.com/zhanglei-web) |
+
+### 🔌 ROS2 接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| GALAXEALITE | 基于Galaxealite,双臂6DOF+末端夹爪,4*RealSense相机 | [robodriver/robots/robodriver-robot-galaxealite-aio-ros2](./robodriver/robots/robodriver-robot-galaxealite-aio-ros2) | [](https://github.com/liuyou1103) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-ros2](./robodriver/robots/robodriver-robot-so101-aio-ros2) | [](https://github.com/Ryu-Yang) |
+
+### 🔌 Dora(SDK)接入
+| 机器人型号 | 简介 | 代码链接 | 贡献人 |
+|--------------|--------------------------------------------------------------|------------------------------------------------------------------------------------------|------------------------|
+| Realman 机械臂 | 基于Realman,6DOF+力控模块,3*RealSense相机 | [robodriver/robots/robodriver-robot-realman1-aio-dora](./robodriver/robots/robodriver-robot-realman1-aio-dora) | [](https://github.com/XuRuntian) |
+| SO101 机械臂 | 开源轻量级机械臂,6DOF+末端夹爪,1*RealSense相机,1*RGB相机模块 | [robodriver/robots/robodriver-robot-so101-aio-dora](./robodriver/robots/robodriver-robot-so101-aio-dora) | [](https://github.com/Ryu-Yang) |
+| Franka | 工业级机械臂,6DOF+末端夹爪,1*RealSense相机 | [robodriver/robots/robodriver-robot-franka-aio-dora](./robodriver/robots/robodriver-robot-franka-aio-dora) | [](https://github.com/XuRuntian) |
+
+> ✨ 说明:
+> 1. 接入方式命名规范:`robodriver-robot-[机器人型号]-[遥操方式]-[接入类型]`(如 `aio`/`follwer`/`teleoperate`, `ros2`/`dora`);
+> 2. 每个适配仓库内包含**环境搭建、配置修改、采集/控制验证**等完整接入指南;
+> 3. 持续新增适配机器人,可关注本列表或项目更新。
+
+我们非常欢迎社区开发者贡献更多机器人的实现!可按以下方式参与:
+1. 参考已适配机器人的代码结构和 README 模板,按接入类型(ROS1/ROS2/Dora)完成适配开发;
+2. 将适配代码新增至主仓库的 `robodriver/robots/` 目录下(命名规范与已适配机器人保持一致);
+3. 确保代码规范、文档完整(包含环境准备、配置步骤、功能验证);
+4. 提交代码 PR 至主仓库的 `dev` 分支,我们将及时审核并合并。
+
+期待与您一起丰富 RoboDriver 的机器人生态! 🤝
+```
+**File:** robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/__init__.py (L1-3)
+```python
+from .robot import FrankaAioDoraRobot
+from .config import FrankaAioDoraRobotConfig
+from .status import FrankaAioDoraRobotStatus
+```
+**File:** robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/config.py (L10-50)
+```python
+@RobotConfig.register_subclass("so101_aio_ros2")
+@dataclass
+class So101AioRos2RobotConfig(RobotConfig):
+ use_degrees = True
+ norm_mode_body = (
+ MotorNormMode.DEGREES if use_degrees else MotorNormMode.RANGE_M100_100
+ )
+
+ # 按组件分组:{ comp_id: { joint_name: Motor, ... }, ... }
+ leader_motors: Dict[str, Dict[str, Motor]] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "shoulder_pan": Motor(1, "sts3215", norm_mode_body),
+ "shoulder_lift": Motor(2, "sts3215", norm_mode_body),
+ "elbow_flex": Motor(3, "sts3215", norm_mode_body),
+ "wrist_flex": Motor(4, "sts3215", norm_mode_body),
+ "wrist_roll": Motor(5, "sts3215", norm_mode_body),
+ "gripper": Motor(6, "sts3215", MotorNormMode.RANGE_0_100),
+ }
+ )
+
+ follower_motors: Dict[str, Dict[str, Motor]] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "shoulder_pan": Motor(1, "sts3215", norm_mode_body),
+ "shoulder_lift": Motor(2, "sts3215", norm_mode_body),
+ "elbow_flex": Motor(3, "sts3215", norm_mode_body),
+ "wrist_flex": Motor(4, "sts3215", norm_mode_body),
+ "wrist_roll": Motor(5, "sts3215", norm_mode_body),
+ "gripper": Motor(6, "sts3215", MotorNormMode.RANGE_0_100),
+ }
+ )
+
+ cameras: Dict[str, CameraConfig] = field(
+ default_factory=lambda: {
+ "image_top": OpenCVCameraConfig(index_or_path=0, fps=30, width=640, height=480),
+ }
+ )
+
+ use_videos: bool = False
+
+ microphones: Dict[str, int] = field(
+ default_factory=lambda: {}
+```
+**File:** robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/config.py (L27-78)
+```python
+@RobotConfig.register_subclass("franka_aio_dora")
+@dataclass
+class FrankaAioDoraRobotConfig(RobotConfig):
+ use_degrees = True
+ norm_mode_body = MotorNormMode.DEGREES if use_degrees else MotorNormMode.RANGE_M100_100
+
+ motors: Dict[str, Motor] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "follower_arm": {
+ "joint_1": Motor(1, "franka", norm_mode_body),
+ "joint_2": Motor(2, "franka", norm_mode_body),
+ "joint_3": Motor(3, "franka", norm_mode_body),
+ "joint_4": Motor(4, "franka", norm_mode_body),
+ "joint_5": Motor(5, "franka", norm_mode_body),
+ "joint_6": Motor(6, "franka", norm_mode_body),
+ "joint_7": Motor(7, "franka", norm_mode_body),
+ "gripper": Motor(8, "franka", norm_mode_body),
+ "pose_x": Motor(9, "franka", norm_mode_body),
+ "pose_y": Motor(10, "franka", norm_mode_body),
+ "pose_z": Motor(11, "franka", norm_mode_body),
+ "pose_rx": Motor(12, "franka", norm_mode_body),
+ "pose_ry": Motor(13, "franka", norm_mode_body),
+ "pose_rz": Motor(14, "franka", norm_mode_body),
+ },
+ }
+ )
+
+ cameras: Dict[str, CameraConfig] = field(
+ default_factory=lambda: {
+ "image_wrist": OpenCVCameraConfig(
+ index_or_path=1,
+ fps=30,
+ width=640,
+ height=480,
+ ),
+ "image_wrist_depth": OpenCVCameraConfig(
+ index_or_path=2,
+ fps=30,
+ width=640,
+ height=480,
+ ),
+
+ }
+ )
+
+ use_videos: bool = False
+
+ microphones: Dict[str, int] = field(
+ default_factory=lambda: {
+ }
+ )
+
+```
+**File:** robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/node.py (L24-58)
+```python
+NODE_CONFIG = {
+ "leader_joint_topics": {
+ "leader_arm": {
+ "topic": "/joint_states",
+ "msg": "JointState"
+ }
+ },
+ "follower_joint_topics": {
+ "follower_arm": {
+ "topic": "/f_joint_states",
+ "msg": "JointState"
+ }
+ },
+ "camera_topics": {
+ "image_top": {
+ "topic": "/camera/camera/color/image_raw",
+ "msg": "Image"
+ }
+ }
+}
+
+
+class So101AioRos2Node(Node):
+ """
+ ROS2 → 本地数据存储(无 ZMQ,无 Dora)
+ leader / follower / camera 多 topic,按 JSON 配置自动订阅。
+ """
+
+ def __init__(
+ self,
+ leader_joint_topics: Dict[str, Dict[str, str]] = NODE_CONFIG["leader_joint_topics"],
+ follower_joint_topics: Dict[str, Dict[str, str]] = NODE_CONFIG["follower_joint_topics"],
+ camera_topics: Dict[str, Dict[str, str]] = NODE_CONFIG["camera_topics"],
+ ):
+ super().__init__("so101_aio_ros2_direct")
+```
+**File:** robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/node.py (L23-60)
+```python
+class FrankaAioDoraRobotNode(DoraRobotNode):
+ def __init__(self):
+ self.node = Node("franka_aio_dora")
+
+ self.send_queue = queue.Queue(maxsize=100)
+ self.recv_images: Dict[str, float] = {}
+ self.recv_joint: Dict[str, float] = {}
+ self.recv_images_status: Dict[str, int] = {}
+ self.recv_joint_status: Dict[str, int] = {}
+ self.lock = threading.Lock()
+
+ self.thread = threading.Thread(target=self.dora_recv, daemon=True)
+ self.running = False
+
+ # 使用字典记录不同事件类型的最后发送时间
+ self.last_send_times: Dict[str, float] = {}
+
+ def dora_send(self, event_id, data):
+ """线程安全的发送方法 - 只负责入队"""
+ # logger.debug(f"{self} queue send event_id:{event_id}, value:{data}")
+ try:
+ # 限制发送频率 (可选) - 针对不同事件类型分别限制
+ current_time = time.time()
+ last_time = self.last_send_times.get(event_id, 0)
+ if current_time - last_time < 0.005: # 200Hz上限
+ return
+
+ # 转换为列表确保类型安全
+ data_list = [float(x) for x in data]
+ self.send_queue.put_nowait((event_id, data_list))
+
+ # 更新该事件类型的最后发送时间
+ self.last_send_times[event_id] = current_time
+ except queue.Full:
+ logger.warning(f"Send queue full! Dropping {event_id} event")
+ except Exception as e:
+ logger.error(f"Queue error: {str(e)}")
+
+```
+**File:** robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/robot.py (L20-44)
+```python
+class So101AioRos2Robot(Robot):
+ config_class = So101AioRos2RobotConfig
+ name = "so101_aio_ros2"
+
+ def __init__(self, config: So101AioRos2RobotConfig):
+ rclpy.init()
+ super().__init__(config)
+ self.config = config
+ self.robot_type = self.config.type
+ self.use_videos = self.config.use_videos
+ self.microphones = self.config.microphones
+
+ # 这里的 leader_motors / follower_motors 可以是按组件分组的 dict
+ # (比如 {"leader_arm": {...}, "left_arm": {...}})
+ self.leader_motors = config.leader_motors
+ self.follower_motors = config.follower_motors
+ self.cameras = make_cameras_from_configs(self.config.cameras)
+
+ self.connect_excluded_cameras = ["image_pika_pose"]
+
+ self.robot_ros2_node = So101AioRos2Node()
+ self.robot_ros2_node.start()
+
+ self.connected = False
+ self.logs = {}
+```
+**File:** robodriver/utils/import_utils.py (L100-139)
+```python
+def register_third_party_devices() -> None:
+ """
+ Discover and import third-party robodriver_* plugins so they can register themselves.
+
+ Scans top-level modules on sys.path for packages starting with
+ 'robodriver_robot_', 'robodriver_camera_', "robodriver_policy_" or 'robodriver_teleoperator_' and imports them.
+ """
+ prefixes = (
+ "robodriver_robot_",
+ "robodriver_camera_",
+ "robodriver_sim",
+ "robodriver_policy_",
+ "robodriver_teleoperator",
+ )
+ imported: list[str] = []
+ failed: list[str] = []
+
+ for module_info in pkgutil.iter_modules():
+ name = module_info.name
+ if name.startswith(prefixes):
+ _attempt_import(name, imported, failed)
+
+ editable_pattern = re.compile(r"__editable___(.+?)_\d+_\d+_\d+_finder")
+ for module_info in pkgutil.iter_modules():
+ match = editable_pattern.match(module_info.name)
+ if match:
+ original_name = match.group(1).replace(
+ "_", "-"
+ ) # Convert to canonical package name
+ if any(
+ original_name.startswith(prefix.replace("_", "-"))
+ for prefix in prefixes
+ ):
+ # Convert back to module name format (replace - with _)
+ module_name = original_name.replace("-", "_")
+ _attempt_import(module_name, imported, failed)
+
+ logger.debug(
+ "Third-party plugin import summary: imported=%s failed=%s", imported, failed
+ )
+```
+**File:** robodriver/scripts/run.py (L127-136)
+```python
+def main():
+ git_branch_log()
+
+ register_third_party_devices()
+ logger.info(f"Registered robot types: {list(RobotConfig._choice_registry.keys())}")
+ logger.info(
+ f"Registered teleoperator types: {list(TeleoperatorConfig._choice_registry.keys())}"
+ )
+
+ asyncio.run(async_main())
+```
+**File:** robodriver/robots/robodriver-robot-franka-aio-dora/README.md (L36-40)
+```markdown
+```
+cd /path/to/your/RoboDriver
+python robodriver/scripts/run.py \
+ --robot.type=franka_aio_dora
+```
+```
+**File:** robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/README.md (L233-240)
+```markdown
+Run:
+
+```bash
+robodriver-run \
+ --robot.type=agilex_aloha_aio_dora \
+ --sim.xml_path=descriptions/agilex_aloha/scene.xml \
+ --sim.from_unit=rad
+```
+```
diff --git a/components/arms/dora-arm-piper/pyproject.toml b/components/arms/dora-arm-piper/pyproject.toml
index a465831..1d5cfbc 100644
--- a/components/arms/dora-arm-piper/pyproject.toml
+++ b/components/arms/dora-arm-piper/pyproject.toml
@@ -12,7 +12,7 @@ authors = [
readme = "README.md"
requires-python = ">=3.8,<3.14"
dependencies = [
- "dora-rs (>=0.3.11,<0.4.0)",
+ "dora-rs (>=0.4.0,<0.5.0)",
"piper_sdk >= 0.0.8",
"numpy>=1.24.4",
]
diff --git a/components/cameras/dora-camera-orbbec-v1/dora_camera_orbbec_v1/main.py b/components/cameras/dora-camera-orbbec-v1/dora_camera_orbbec_v1/main.py
index 2704455..1287ff3 100644
--- a/components/cameras/dora-camera-orbbec-v1/dora_camera_orbbec_v1/main.py
+++ b/components/cameras/dora-camera-orbbec-v1/dora_camera_orbbec_v1/main.py
@@ -1,261 +1,65 @@
-"""TODO: Add docstring."""
-import os
+"""Orbbec Gemini 335 color camera node — V4L2 backend (OpenCV).
+Uses the kernel uvcvideo driver via /dev/videoN instead of pyorbbecsdk/libusb,
+because libob_usb.so fails to parse UVC payload headers on this device,
+producing all-constant (green) frames regardless of scene content.
+"""
+import os
import cv2
import numpy as np
import pyarrow as pa
from dora import Node
-import time
-
-
-try:
- from pyorbbecsdk import (
- Config,
- Context,
- FrameSet,
- OBError,
- OBFormat,
- OBSensorType,
- Pipeline,
- VideoFrame,
- VideoStreamProfile,
- )
-except ImportError as err:
- print(
- "Please install pyorbbecsdk first by following the instruction at: https://github.com/orbbec/pyorbbecsdk",
- )
- raise err
-
-
-class TemporalFilter:
- """TODO: Add docstring."""
-
- def __init__(self, alpha):
- """TODO: Add docstring."""
- self.alpha = alpha
- self.previous_frame = None
-
- def process(self, frame):
- """TODO: Add docstring."""
- if self.previous_frame is None:
- result = frame
- else:
- result = cv2.addWeighted(
- frame,
- self.alpha,
- self.previous_frame,
- 1 - self.alpha,
- 0,
- )
- self.previous_frame = result
- return result
-
-
-def yuyv_to_bgr(frame: np.ndarray, width: int, height: int) -> np.ndarray:
- """TODO: Add docstring."""
- yuyv = frame.reshape((height, width, 2))
- return cv2.cvtColor(yuyv, cv2.COLOR_YUV2BGR_YUY2)
-
-
-def uyvy_to_bgr(frame: np.ndarray, width: int, height: int) -> np.ndarray:
- """TODO: Add docstring."""
- uyvy = frame.reshape((height, width, 2))
- return cv2.cvtColor(uyvy, cv2.COLOR_YUV2BGR_UYVY)
-
-
-def i420_to_bgr(frame: np.ndarray, width: int, height: int) -> np.ndarray:
- """TODO: Add docstring."""
- y = frame[0:height, :]
- u = frame[height : height + height // 4].reshape(height // 2, width // 2)
- v = frame[height + height // 4 :].reshape(height // 2, width // 2)
- yuv_image = cv2.merge([y, u, v])
- return cv2.cvtColor(yuv_image, cv2.COLOR_YUV2BGR_I420)
-
-
-def nv21_to_bgr(frame: np.ndarray, width: int, height: int) -> np.ndarray:
- """TODO: Add docstring."""
- y = frame[0:height, :]
- uv = frame[height : height + height // 2].reshape(height // 2, width)
- yuv_image = cv2.merge([y, uv])
- return cv2.cvtColor(yuv_image, cv2.COLOR_YUV2BGR_NV21)
-
-
-def nv12_to_bgr(frame: np.ndarray, width: int, height: int) -> np.ndarray:
- """TODO: Add docstring."""
- y = frame[0:height, :]
- uv = frame[height : height + height // 2].reshape(height // 2, width)
- yuv_image = cv2.merge([y, uv])
- return cv2.cvtColor(yuv_image, cv2.COLOR_YUV2BGR_NV12)
-
-
-def frame_to_bgr_image(frame: VideoFrame):
- """TODO: Add docstring."""
- width = frame.get_width()
- height = frame.get_height()
- color_format = frame.get_format()
- data = np.asanyarray(frame.get_data())
- image = np.zeros((height, width, 3), dtype=np.uint8)
- if color_format == OBFormat.RGB:
- image = np.resize(data, (height, width, 3))
- image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
- elif color_format == OBFormat.BGR:
- image = np.resize(data, (height, width, 3))
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- elif color_format == OBFormat.YUYV:
- image = np.resize(data, (height, width, 2))
- image = cv2.cvtColor(image, cv2.COLOR_YUV2BGR_YUYV)
- elif color_format == OBFormat.MJPG:
- image = cv2.imdecode(data, cv2.IMREAD_COLOR)
- elif color_format == OBFormat.I420:
- return i420_to_bgr(data, width, height)
- elif color_format == OBFormat.NV12:
- return nv12_to_bgr(data, width, height)
- elif color_format == OBFormat.NV21:
- return nv21_to_bgr(data, width, height)
- elif color_format == OBFormat.UYVY:
- image = np.resize(data, (height, width, 2))
- image = cv2.cvtColor(image, cv2.COLOR_YUV2BGR_UYVY)
- else:
- print(f"Unsupported color format: {color_format}")
- return None
- return image
-
-
-MIN_DEPTH_METERS = 0.01
-MAX_DEPTH_METERS = 15.0
-
-GET_DEVICE_FROM = os.getenv("GET_DEVICE_FROM", "SN") # SN or INDEX
-DEVICE_SN = os.getenv("DEVICE_SN")
-DEVICE_INDEX = int(os.getenv("DEVICE_INDEX", "0"))
+V4L2_DEVICE = os.getenv("V4L2_DEVICE", "/dev/video16")
+IMAGE_WIDTH = int(os.getenv("IMAGE_WIDTH", "640"))
+IMAGE_HEIGHT = int(os.getenv("IMAGE_HEIGHT", "480"))
def main():
- """TODO: Add docstring."""
node = Node()
- ctx = Context()
- device_list = ctx.query_devices()
-
- if GET_DEVICE_FROM == "SN":
- if DEVICE_SN is not None:
- curr_device_cnt = device_list.get_count()
- if curr_device_cnt == 0:
- print("No device connected")
- return
- for i in range(curr_device_cnt):
- device = device_list.get_device_by_index(i)
- device_info = device.get_device_info()
- # get device serial number
- serial_number = device_info.get_serial_number()
- if serial_number == DEVICE_SN:
- device = device_list.get_device_by_index(i)
- break
- else:
- raise ValueError
- elif GET_DEVICE_FROM == "INDEX":
- device = device_list.get_device_by_index(int(DEVICE_INDEX))
-
- # temporal_filter = TemporalFilter(alpha=0.5)
- pipeline = Pipeline(device)
-
- profile_list = pipeline.get_stream_profile_list(OBSensorType.COLOR_SENSOR)
- try:
- color_profile: VideoStreamProfile = profile_list.get_video_stream_profile(
- 640,
- 480,
- OBFormat.MJPG,
- 30,
- )
- except OBError as e:
- print(e)
- color_profile = profile_list.get_default_video_stream_profile()
- print("color profile: ", color_profile)
-
- profile_list = pipeline.get_stream_profile_list(OBSensorType.DEPTH_SENSOR)
- try:
- depth_profile: VideoStreamProfile = profile_list.get_video_stream_profile(
- 640,
- 400,
- OBFormat.Y11,
- 30,
- )
- except OBError as e:
- print(e)
- depth_profile = profile_list.get_default_video_stream_profile()
- print("depth profile: ", depth_profile)
-
- config = Config()
- config.enable_stream(color_profile)
- config.enable_stream(depth_profile)
- pipeline.start(config)
-
- for _event in node:
- # while True:
-
- try:
- frames: FrameSet = pipeline.wait_for_frames(100)
- if frames is None:
+ cap = cv2.VideoCapture(V4L2_DEVICE, cv2.CAP_V4L2)
+ cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMAGE_WIDTH)
+ cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMAGE_HEIGHT)
+ cap.set(cv2.CAP_PROP_FPS, 30)
+    # Request MJPG: lower bandwidth and good decode quality
+ cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
+
+ if not cap.isOpened():
+ print(f"[CAM_TOP] ERROR: cannot open {V4L2_DEVICE}", flush=True)
+ return
+
+ print(f"[CAM_TOP] opened {V4L2_DEVICE} "
+ f"{int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))}x"
+ f"{int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))} "
+ f"@ {cap.get(cv2.CAP_PROP_FPS):.0f}fps", flush=True)
+
+ for event in node:
+ if event["type"] == "INPUT" and event["id"] == "tick":
+ ret, bgr_image = cap.read()
+ if not ret or bgr_image is None:
+ print("[CAM_TOP] frame read failed", flush=True)
continue
-            # Get Color image
-            color_frame = frames.get_color_frame()
-            if color_frame is None:
-                continue
-            # convert to RGB format
-            color_image = frame_to_bgr_image(color_frame)
-            if color_image is None:
-                print("failed to convert frame to image")
+            mean_val = bgr_image.mean()
+            print(f"[CAM_TOP] mean={mean_val:.1f} std={bgr_image.std():.1f} size={bgr_image.shape}", flush=True)
+            if mean_val < 5:
continue
-
- # # Send Color Image
- # ret, frame = cv2.imencode("." + "jpeg", color_image)
- # if ret:
- node.send_output("image", pa.array(color_image.ravel()), {"encoding": "bgr8", "width": int(640), "height": int(480)})
- # Get Depth data
- depth_frame = frames.get_depth_frame()
- if depth_frame is None:
+ ret2, jpeg_buf = cv2.imencode(".jpg", bgr_image)
+ if not ret2:
continue
- width = depth_frame.get_width()
- height = depth_frame.get_height()
- scale = depth_frame.get_depth_scale()
- depth_data = np.frombuffer(depth_frame.get_data(), dtype=np.uint16)
- depth_data = depth_data.reshape((height, width))
- depth_data = depth_data.astype(np.float32) * scale * 0.001
- depth_data = np.where(
- (depth_data > MIN_DEPTH_METERS) & (depth_data < MAX_DEPTH_METERS),
- depth_data,
- 0,
- )
-
- # Send Depth data
- storage = pa.array(depth_data.ravel())
- node.send_output("depth", storage, {"width": int(width), "height": int(height)})
-
- # Convert to Image
- depth_image = cv2.normalize(
- depth_data,
- None,
- 0,
- 255,
- cv2.NORM_MINMAX,
- dtype=cv2.CV_8U,
- )
- # Send Depth Image
- depth_image = cv2.applyColorMap(depth_image, cv2.COLORMAP_JET)
- ret, frame = cv2.imencode("." + "jpeg", depth_image)
- if ret:
- node.send_output("image_depth", pa.array(frame), {"encoding": "jpeg", "width": int(640), "height": int(400)})
- # cv2.imshow("0", color_image)
- # cv2.waitKey(40)
-        except KeyboardInterrupt:
-            break
-    pipeline.stop()
+            jpeg_array = np.ascontiguousarray(jpeg_buf).ravel()
+            node.send_output("image", pa.array(jpeg_array), {
+                "encoding": "jpeg",
+                "width": IMAGE_WIDTH,
+                "height": IMAGE_HEIGHT,
+            })
+    cap.release()
if __name__ == "__main__":
diff --git a/components/cameras/dora-camera-orbbec-v1/pyproject.toml b/components/cameras/dora-camera-orbbec-v1/pyproject.toml
index 60249f7..bd053f8 100644
--- a/components/cameras/dora-camera-orbbec-v1/pyproject.toml
+++ b/components/cameras/dora-camera-orbbec-v1/pyproject.toml
@@ -12,7 +12,7 @@ authors = [
readme = "README.md"
requires-python = ">=3.10,<3.11"
dependencies = [
- "dora-rs (>=0.3.11,<0.4.0)",
+ "dora-rs (>=0.4.0,<0.5.0)",
"sb-pyorbbecsdk",
"opencv-python",
"numpy (<2.0)"
diff --git a/components/cameras/dora-camera-realsense/dora_camera_realsense/main.py b/components/cameras/dora-camera-realsense/dora_camera_realsense/main.py
index c41d38b..6246059 100644
--- a/components/cameras/dora-camera-realsense/dora_camera_realsense/main.py
+++ b/components/cameras/dora-camera-realsense/dora_camera_realsense/main.py
@@ -17,35 +17,49 @@ def main():
device_serial = os.getenv("DEVICE_SERIAL", "")
image_height = int(os.getenv("IMAGE_HEIGHT", "480"))
image_width = int(os.getenv("IMAGE_WIDTH", "640"))
- encoding = os.getenv("ENCODING", "rgb8")
- ctx = rs.context()
- devices = ctx.query_devices()
- if devices.size() == 0:
- raise ConnectionError("No realsense camera connected.")
-
- # Serial list
- serials = [device.get_info(rs.camera_info.serial_number) for device in devices]
- if device_serial and (device_serial not in serials):
- raise ConnectionError(
- f"Device with serial {device_serial} not found within: {serials}.",
- )
+    encoding = os.getenv("ENCODING", "jpeg")  # switched to jpeg to reduce bandwidth
pipeline = rs.pipeline()
config = rs.config()
- config.enable_device(device_serial)
+    # If device_serial is specified, try to use it; otherwise use any available device
+ if device_serial:
+ try:
+ config.enable_device(device_serial)
+ print(f"Attempting to connect to device {device_serial}")
+ except Exception as e:
+ print(f"Warning: Could not enable specific device {device_serial}: {e}")
+ print("Connecting to any available RealSense device...")
+
config.enable_stream(rs.stream.color, image_width, image_height, rs.format.rgb8, 30)
- config.enable_stream(rs.stream.depth, image_width, image_height, rs.format.z16, 30)
+    # Depth stream disabled - output color only
+ # config.enable_stream(rs.stream.depth, image_width, image_height, rs.format.z16, 30)
- align_to = rs.stream.color
- align = rs.align(align_to)
+    # No alignment needed since only the color stream is enabled
+ # align_to = rs.stream.color
+ # align = rs.align(align_to)
- profile = pipeline.start(config)
+ try:
+ profile = pipeline.start(config)
+ except RuntimeError as e:
+ print(f"Error starting pipeline: {e}")
+ raise
rgb_profile = profile.get_stream(rs.stream.color)
- depth_profile = profile.get_stream(rs.stream.depth)
- _depth_intr = depth_profile.as_video_stream_profile().get_intrinsics()
+    # No depth profile needed
+ # depth_profile = profile.get_stream(rs.stream.depth)
+ # _depth_intr = depth_profile.as_video_stream_profile().get_intrinsics()
rgb_intr = rgb_profile.as_video_stream_profile().get_intrinsics()
+
+ # Warm up the camera - wait for a few frames to stabilize
+ print("Warming up RealSense camera...")
+ for _ in range(30):
+ try:
+ pipeline.wait_for_frames(timeout_ms=1000)
+ except RuntimeError:
+ pass
+ print("RealSense camera ready")
+
node = Node()
start_time = time.time()
@@ -63,15 +77,21 @@ def main():
event_id = event["id"]
if event_id == "tick":
- frames = pipeline.wait_for_frames()
- aligned_frames = align.process(frames)
- aligned_depth_frame = aligned_frames.get_depth_frame()
- color_frame = aligned_frames.get_color_frame()
- if not aligned_depth_frame or not color_frame:
+ try:
+ frames = pipeline.wait_for_frames(timeout_ms=10000)
+ except RuntimeError as e:
+ print(f"Warning: Failed to get frames: {e}")
+ continue
+                # No alignment needed; grab the color frame directly
+ # aligned_frames = align.process(frames)
+ # aligned_depth_frame = aligned_frames.get_depth_frame()
+ color_frame = frames.get_color_frame()
+ if not color_frame:
continue
- depth_image = np.asanyarray(aligned_depth_frame.get_data())
- scaled_depth_image = depth_image
+            # Process the color image only
+ # depth_image = np.asanyarray(aligned_depth_frame.get_data())
+ # scaled_depth_image = depth_image
frame = np.asanyarray(color_frame.get_data())
## Change rgb to bgr
@@ -92,6 +112,7 @@ def main():
if encoding == "bgr8":
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
elif encoding in ["jpeg", "jpg", "jpe", "bmp", "webp", "png"]:
+                frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # imencode expects BGR input
ret, frame = cv2.imencode("." + encoding, frame)
if not ret:
print("Error encoding image...")
@@ -104,13 +125,13 @@ def main():
# metadata["principal_point"] = [int(rgb_intr.ppx), int(rgb_intr.ppy)]
metadata["timestamp"] = time.time_ns()
node.send_output("image", storage, metadata)
- metadata["encoding"] = "mono16"
- scaled_depth_image[scaled_depth_image > 5000] = 0
- node.send_output(
- "image_depth",
- pa.array(scaled_depth_image.ravel()),
- metadata,
- )
+# metadata["encoding"] = "mono16"
+# scaled_depth_image[scaled_depth_image > 5000] = 0
+# node.send_output(
+# "image_depth",
+# pa.array(scaled_depth_image.ravel()),
+# metadata,
+# )
elif event_type == "ERROR":
raise RuntimeError(event["error"])
diff --git a/components/cameras/dora-camera-realsense/pyproject.toml b/components/cameras/dora-camera-realsense/pyproject.toml
index 7df945d..11d5acd 100644
--- a/components/cameras/dora-camera-realsense/pyproject.toml
+++ b/components/cameras/dora-camera-realsense/pyproject.toml
@@ -13,7 +13,7 @@ authors = [
readme = "README.md"
requires-python = ">=3.9,<3.11"
dependencies = [
- "dora-rs (>=0.3.11,<0.4.0)",
+ "dora-rs (>=0.4.0,<0.5.0)",
"opencv-python",
"pyrealsense2",
"numpy"
diff --git a/robodriver/core/coordinator.py b/robodriver/core/coordinator.py
index 9fa814e..444b1b4 100644
--- a/robodriver/core/coordinator.py
+++ b/robodriver/core/coordinator.py
@@ -25,7 +25,7 @@
from robodriver.utils.data_file import check_disk_space, find_epindex_from_dataid_json
from robodriver.utils.utils import cameras_to_stream_json, get_current_git_branch
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class Coordinator:
diff --git a/robodriver/core/coordinator_need_test.py b/robodriver/core/coordinator_need_test.py
index c72dc60..1040685 100644
--- a/robodriver/core/coordinator_need_test.py
+++ b/robodriver/core/coordinator_need_test.py
@@ -27,7 +27,7 @@
from robodriver.utils.data_file import check_disk_space, find_epindex_from_dataid_json
from robodriver.utils.utils import cameras_to_stream_json, get_current_git_branch
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class Coordinator:
diff --git a/robodriver/core/recorder.py b/robodriver/core/recorder.py
index 4f82d9d..958d8a3 100644
--- a/robodriver/core/recorder.py
+++ b/robodriver/core/recorder.py
@@ -30,7 +30,7 @@
validate_session,
)
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
def sanity_check_dataset_robot_compatibility(
diff --git a/robodriver/core/replayer.py b/robodriver/core/replayer.py
index a4153a0..0afb221 100644
--- a/robodriver/core/replayer.py
+++ b/robodriver/core/replayer.py
@@ -9,7 +9,7 @@
from robodriver.robots.utils import Robot, busy_wait
from robodriver.utils.utils import log_say
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
@dataclass
@@ -34,7 +34,7 @@ class ReplayConfig:
@draccus.wrap()
def replay(cfg: ReplayConfig):
- logging_mp.basic_config(level=logging_mp.INFO)
+ logging_mp.basicConfig(level=logging_mp.INFO)
# logging.info(pformat(asdict(cfg)))
# robot = make_robot_from_config(cfg.robot)
diff --git a/robodriver/core/ros2thread.py b/robodriver/core/ros2thread.py
index 7b95c56..c4ebbed 100644
--- a/robodriver/core/ros2thread.py
+++ b/robodriver/core/ros2thread.py
@@ -8,7 +8,7 @@
import logging_mp
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class ROS2_NodeManager():
diff --git a/robodriver/core/simulator.py b/robodriver/core/simulator.py
index e90053f..04d2a31 100644
--- a/robodriver/core/simulator.py
+++ b/robodriver/core/simulator.py
@@ -10,7 +10,7 @@
from dataclasses import dataclass
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
@dataclass
diff --git a/robodriver/dataset/dorobot_dataset.py b/robodriver/dataset/dorobot_dataset.py
index 52fdfad..45e8d5c 100644
--- a/robodriver/dataset/dorobot_dataset.py
+++ b/robodriver/dataset/dorobot_dataset.py
@@ -68,7 +68,7 @@
DOROBOT_DATASET_VERSION = "v1.0"
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class DoRobotDatasetMetadata:
diff --git a/robodriver/robots/daemon.py b/robodriver/robots/daemon.py
index 4f25084..795cbab 100644
--- a/robodriver/robots/daemon.py
+++ b/robodriver/robots/daemon.py
@@ -13,7 +13,7 @@
safe_update_status,
)
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
def log_control_info(
diff --git a/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/node.py b/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/node.py
index cde67f9..4ad8fdb 100644
--- a/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/node.py
+++ b/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/node.py
@@ -14,7 +14,7 @@
from nav_msgs.msg import Odometry
from typing import Dict, Any
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
# 尝试导入厂家自定义消息
try:
diff --git a/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py b/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py
index 0e48e52..ed7dbd5 100644
--- a/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py
+++ b/robodriver/robots/robodriver-robot-a2d-aio-ros2/robodriver_robot_a2d_aio_ros2/robot.py
@@ -14,7 +14,7 @@
from .node import A2DAioRos2Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class A2DAioRos2Robot(Robot):
diff --git a/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/node.py b/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/node.py
index cc3641d..049cf63 100644
--- a/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/node.py
+++ b/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/robot.py b/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/robot.py
index 322ba45..bf14fcc 100644
--- a/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/robot.py
+++ b/robodriver/robots/robodriver-robot-agilex-aloha-aio-dora/robodriver_robot_agilex_aloha_aio_dora/robot.py
@@ -15,7 +15,7 @@
from .node import AgilexAlohaAIODoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class AgilexAlohaAIODoraRobot(Robot):
diff --git a/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/node.py b/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/node.py
index db5246c..c5c38a6 100644
--- a/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/node.py
+++ b/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/robot.py b/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/robot.py
index 07f8d05..32e8058 100644
--- a/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/robot.py
+++ b/robodriver/robots/robodriver-robot-franka-aio-dora/robodriver_robot_franka_aio_dora/robot.py
@@ -16,7 +16,7 @@
from .node import FrankaAioDoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class FrankaAioDoraRobot(Robot):
diff --git a/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/node.py b/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/node.py
index f9fac96..bf1b198 100755
--- a/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/node.py
+++ b/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/node.py
@@ -16,7 +16,7 @@
CONNECT_TIMEOUT_FRAME = 10
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GalaxeaLiteEEposeROS2RobotNode(Node):
diff --git a/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/robot.py b/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/robot.py
index dce3e6b..c37defb 100755
--- a/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/robot.py
+++ b/robodriver/robots/robodriver-robot-galaxea-lite-eepose-ros2/robodriver_robot_galaxea_lite_eepose_ros2/robot.py
@@ -15,7 +15,7 @@
from .node import GalaxeaLiteEEposeROS2RobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
diff --git a/robodriver/robots/robodriver-robot-galaxealite-aio-ros1/robodriver_robot_galaxealite_aio_ros1/robot.py b/robodriver/robots/robodriver-robot-galaxealite-aio-ros1/robodriver_robot_galaxealite_aio_ros1/robot.py
index 2045524..d0e5d15 100755
--- a/robodriver/robots/robodriver-robot-galaxealite-aio-ros1/robodriver_robot_galaxealite_aio_ros1/robot.py
+++ b/robodriver/robots/robodriver-robot-galaxealite-aio-ros1/robodriver_robot_galaxealite_aio_ros1/robot.py
@@ -17,7 +17,7 @@
from .node import GALAXEALITEAIORos1RobotNode, ros_spin_thread # 导入ROS1节点和spin函数
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GALAXEALITEAIORos1Robot(Robot): # 类名改为ROS1标识
diff --git a/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/node.py b/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/node.py
index ac8f0be..1ba659d 100755
--- a/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/node.py
+++ b/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/node.py
@@ -14,7 +14,7 @@
import logging_mp
CONNECT_TIMEOUT_FRAME = 10
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GALAXEALITEAIORos2RobotNode(ROS2Node):
def __init__(self):
diff --git a/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/robot.py b/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/robot.py
index 6199e58..624aca9 100755
--- a/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/robot.py
+++ b/robodriver/robots/robodriver-robot-galaxealite-aio-ros2/robodriver_robot_galaxealite_aio_ros2/robot.py
@@ -16,7 +16,7 @@
from .node import GALAXEALITEAIORos2RobotNode, ros_spin_thread
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros1/robodriver_robot_galaxealite_follower_ros1/robot.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros1/robodriver_robot_galaxealite_follower_ros1/robot.py
index 5bd0cef..639e292 100755
--- a/robodriver/robots/robodriver-robot-galaxealite-follower-ros1/robodriver_robot_galaxealite_follower_ros1/robot.py
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros1/robodriver_robot_galaxealite_follower_ros1/robot.py
@@ -17,7 +17,7 @@
from .node import GALAXEALITEFollowerRos1RobotNode, ros_spin_thread # 导入ROS1节点和spin函数
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GALAXEALITEFollowerRos1Robot(Robot): # 类名改为ROS1标识
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/.gitignore b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/.gitignore
new file mode 100644
index 0000000..9244879
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/.gitignore
@@ -0,0 +1,209 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[codz]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py.cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# UV
+# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+uv.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+#poetry.toml
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
+# https://pdm-project.org/en/latest/usage/project/#working-with-version-control
+#pdm.lock
+#pdm.toml
+.pdm-python
+.pdm-build/
+
+# pixi
+# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
+#pixi.lock
+# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
+# in the .venv directory. It is recommended not to include this directory in version control.
+.pixi
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.envrc
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# Abstra
+# Abstra is an AI-powered process automation framework.
+# Ignore directories containing user credentials, local state, and settings.
+# Learn more at https://abstra.io/docs
+.abstra/
+
+# Visual Studio Code
+# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
+# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
+# and can be added to the global gitignore or merged into this file. However, if you prefer,
+# you could uncomment the following to ignore the entire vscode folder
+# .vscode/
+
+# Ruff stuff:
+.ruff_cache/
+
+# PyPI configuration file
+.pypirc
+
+# Cursor
+# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
+# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
+# refer to https://docs.cursor.com/context/ignore-files
+.cursorignore
+.cursorindexingignore
+
+# Marimo
+marimo/_static/
+marimo/_lsp/
+__marimo__/
+
+**out/
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/LICENSE b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README-EN.md b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README-EN.md
new file mode 100644
index 0000000..61511cf
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README-EN.md
@@ -0,0 +1,230 @@
+# robodriver-robot-galaxealite-follower-ros2
+## Quick Start
+### Access Requirements
+#### 1. Hardware Requirements
+Refer to the document: [https://jwolpxeehx.feishu.cn/wiki/LYcNwC2rBirg4Dk9CoScemx3n8f?from=from_copylink](https://jwolpxeehx.feishu.cn/wiki/LYcNwC2rBirg4Dk9CoScemx3n8f?from=from_copylink)
+
+#### 2. Environment & Network Requirements
+- Galaxea teleoperation works normally;
+- ROS2 (Humble/Iron) is installed on the host so it can receive Galaxea topic data;
+- The host and the Galaxea main controller are on the same local area network (a wired Ethernet connection is recommended);
+- The host has internet access;
+- Set ROS Domain ID (example):
+ ```bash
+ export ROS_DOMAIN_ID=1
+ ```
+
+### Prerequisites (Execute first if not completed)
+1. Embodied Platform Application: [https://ei2data.baai.ac.cn/home](https://ei2data.baai.ac.cn/home)
+2. Edge Service Installation: [https://github.com/FlagOpen/RoboDriver-Server.git](https://github.com/FlagOpen/RoboDriver-Server.git)
+
+### Clone Code Repositories
+#### 1. Clone RoboDriver Core Repository (Skip if already cloned)
+```bash
+git clone https://github.com/FlagOpen/RoboDriver.git
+```
+
+#### 2. Enter Galaxea ROS2 folder
+```bash
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-galaxealite-follower-ros2
+```
+
+### Create Miniconda Virtual Environment
+#### 1. Install Miniconda (Execute if not installed, compatible with Linux/macOS)
+```bash
+# Download installation package
+wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
+# Execute installation (follow prompts, recommended to agree to conda init)
+bash Miniconda3-latest-Linux-x86_64.sh
+# Refresh terminal to activate conda
+source ~/.bashrc
+```
+
+#### 2. Install Dependencies
+```bash
+# Create and activate Python 3.10 environment (run if environment not created)
+conda create -n robodriver python=3.10 -y
+conda activate robodriver
+
+# Install RoboDriver core dependencies (run if dependencies not installed)
+cd /path/to/your/RoboDriver
+pip install -e .
+
+# Install galaxea robot hardware dependencies
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-galaxealite-follower-ros2
+pip install -e .
+```
+
+### Configure node.py (Adapt to Robot's Actual Topics)
+This script implements core functions such as multi-topic synchronous subscription, data conversion, and command publishing for the GALAXEALITE robot. You need to modify the following configurations according to the actual topic names of the robot (core location: node initialization function `__init__`).
+
+#### 1. Publisher Topics (Motion Command Output)
+| Publisher Variable | Default Topic | Function |
+|--------------------------|--------------------------------------------|---------------------------|
+| `publisher_left_arm` | `/motion_target/target_joint_state_arm_left` | Publish left arm joint target values |
+| `publisher_right_arm` | `/motion_target/target_joint_state_arm_right` | Publish right arm joint target values |
+| `publisher_left_gripper` | `/motion_target/target_position_gripper_left` | Publish left gripper position target values |
+| `publisher_right_gripper`| `/motion_target/target_position_gripper_right` | Publish right gripper position target values |
+| `publisher_state_torso` | `/motion_target/target_joint_state_torso` | Publish torso joint target values |
+
+#### 2. Follow Feedback Subscription Topics (Robot State Input)
+| Subscriber Variable | Default Topic | Function |
+|-------------------------|------------------------------|---------------------------|
+| `sub_arm_left` | `/hdas/feedback_arm_left` | Subscribe to left arm joint feedback |
+| `sub_arm_right` | `/hdas/feedback_arm_right` | Subscribe to right arm joint feedback |
+| `sub_gripper_left` | `/hdas/feedback_gripper_left` | Subscribe to left gripper feedback |
+| `sub_gripper_right` | `/hdas/feedback_gripper_right` | Subscribe to right gripper feedback |
+| `sub_torso` | `/hdas/feedback_torso` | Subscribe to torso joint feedback |
+
+#### 3. Image Topic Subscription (Camera Data Input)
+| Subscriber Variable | Default Topic | Function |
+|---------------------------|----------------------------------------------------|---------------------------|
+| `sub_camera_top_left` | `/hdas/camera_head/left_raw/image_raw_color/compressed` | Subscribe to top-left camera image |
+| `sub_camera_top_right` | `/hdas/camera_head/right_raw/image_raw_color/compressed` | Subscribe to top-right camera image |
+| `sub_camera_wrist_left` | `/hdas/camera_wrist_left/color/image_raw/compressed` | Subscribe to left wrist camera image |
+| `sub_camera_wrist_right` | `/hdas/camera_wrist_right/color/image_raw/compressed` | Subscribe to right wrist camera image |
+
+**Modification Example**: Change the top-left camera subscription topic to a custom path
+```python
+sub_camera_top_left = Subscriber(self, CompressedImage, '/my_robot/camera/top_left/compressed')
+```
+
+#### 4. Key Parameter Adjustments
+##### (1) QoS Configuration (Network Transmission Strategy)
+```python
+# Reliable transmission (default for publishers/critical feedback)
+self.qos = QoSProfile(
+ durability=DurabilityPolicy.VOLATILE, # Non-persistent
+ reliability=ReliabilityPolicy.RELIABLE, # Ensure message delivery
+ history=HistoryPolicy.KEEP_LAST, # Keep last N messages
+ depth=10 # Queue depth
+)
+
+# Best-effort transmission (non-critical commands, prioritize speed)
+self.qos_best_effort = QoSProfile(
+ reliability=ReliabilityPolicy.BEST_EFFORT,
+ depth=10
+)
+```
+
+##### (2) Multi-Topic Synchronization Parameters
+```python
+self.sync = ApproximateTimeSynchronizer(
+ [sub_arm_left, sub_arm_right, sub_gripper_left, sub_gripper_right, sub_torso],
+ queue_size=10, # Larger queue = higher fault tolerance, more memory usage
+ slop=0.1 # Time tolerance (seconds): maximum allowed timestamp difference between topics
+)
+```
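The `slop` rule above can be illustrated with a small framework-free sketch. The helper below is hypothetical and greedier than the adaptive algorithm `message_filters` actually uses, but the matching criterion is the same: one message is taken from each queue only when all head timestamps fall within `slop` seconds of each other, and lagging heads are dropped.

```python
from collections import deque

def approx_sync(queues, slop):
    """Emit one (timestamp, payload) per queue once all queue heads
    lie within `slop` seconds of each other; drop stragglers otherwise."""
    while all(queues):
        stamps = [q[0][0] for q in queues]
        if max(stamps) - min(stamps) <= slop:
            return [q.popleft() for q in queues]
        # The oldest head can never match anything newer, so drop it.
        oldest = min(range(len(queues)), key=lambda i: queues[i][0][0])
        queues[oldest].popleft()
    return None

arm_left  = deque([(0.00, "L0"), (0.50, "L1")])
arm_right = deque([(0.04, "R0"), (0.52, "R1")])
torso     = deque([(0.30, "T0"), (0.55, "T1")])  # first torso frame lags too far

matched = approx_sync([arm_left, arm_right, torso], slop=0.1)
print(matched)  # [(0.5, 'L1'), (0.52, 'R1'), (0.55, 'T1')]
```

Note how the first arm frames are discarded because the first torso frame arrived more than 0.1 s later: a larger `slop` keeps more data but tolerates bigger timestamp skew between topics.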
+
+##### (3) Publish Frequency Limit
+```python
+# Default 30Hz, example to modify to 10Hz
+self.min_interval_ns = 1e9 / 10
+```
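`min_interval_ns` acts as a simple time gate: a message is published only if at least that many nanoseconds have elapsed since the last accepted one. A framework-free sketch of the same logic (class and method names are hypothetical):

```python
import time

class RateGate:
    """Accept a call only if min_interval_ns has elapsed since the last accepted call."""
    def __init__(self, hz):
        self.min_interval_ns = 1e9 / hz
        self.last_ns = -float("inf")  # so the very first call always passes

    def allow(self, now_ns=None):
        now_ns = time.monotonic_ns() if now_ns is None else now_ns
        if now_ns - self.last_ns < self.min_interval_ns:
            return False  # too soon: skip this publish
        self.last_ns = now_ns
        return True

# At 10 Hz (100 ms interval), callbacks firing every 50 ms pass every other time.
gate = RateGate(hz=10)
accepted = [gate.allow(now_ns=i * 50_000_000) for i in range(5)]
print(accepted)  # [True, False, True, False, True]
```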
+
+### Configure config.py (Hardware Collection Template)
+This script defines the robot hardware configuration based on the `lerobot` framework. Adjust according to actual collection requirements (e.g., joints, cameras, recording mode).
+
+#### 1. Motor Configuration Modification Example (Add 4th Torso Joint)
+```python
+follower_motors: Dict[str, Motor] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "follower_arms":{
+ # Original configuration
+ "torso_joint_3":Motor(17, "sts3215", norm_mode_body),
+ # Add new joint
+ "torso_joint_4":Motor(18, "sts3215", norm_mode_body),
+ }
+ }
+)
+```
+
+#### 2. Video Recording Switch (use_videos)
+| Configuration Value | Behavior Description | Additional Operations |
+|---------------------|--------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
+| `False` | Images are encoded into videos after collection (delayed encoding) | Need to modify `lerobot` source code:<br>Edit `/path/to/miniconda3/envs/robodriver/lib/python3.10/site-packages/lerobot/datasets/pipeline_features.py`<br>Comment out:<br>`# if is_image and not use_videos:`<br>`# continue` |
+| `True` | Images are encoded into videos in real-time during collection (encoding time depends on host performance) | No need to modify source code, use directly |
+
+## Start Data Collection
+### 1. Activate Environment and Start Basic Services
+```bash
+# Start nginx service
+sudo systemctl start nginx
+# Enter RoboDriver directory and activate environment
+cd /path/to/your/RoboDriver
+conda activate robodriver
+```
+
+### 2. Start Galaxealite Topics
+Execute the robot's built-in startup script to ensure ROS2 topics are published normally.
+
+### 3. Start RoboDriver
+```bash
+python -m robodriver.scripts.evaluate --robot.type=galaxealite-follower-ros2
+```
+
+## Common Issue Fixes
+1. **Arm and gripper don't move during evaluation**
+   Make sure the ROS2 nodes created by the teleoperation device are fully offline; the most reliable fix is to unplug the teleoperation device's network cable.
+
+2. **OpenCV cvShowImage error on startup or need to remove image popups on startup**
+ Comment out the following two lines in `robodriver/scripts/run.py`:
+ ```python
+ # cv2.imshow(key, img)
+ # cv2.waitKey(1)
+ ```
+
+3. **Cannot access http://localhost:5805/hmi/**
+ Restart the nginx service:
+ ```bash
+ sudo systemctl restart nginx
+ ```
+
+4. **Robot response timeout**
+ Check the network connectivity between the host and galaxea main controller, and verify that the RoboDriver startup script parameters are correct.
+
+## Data Description
+### Storage Path
+Collected data is stored in the `/home/yourname/DoRobot` directory by default, organized into folders named by task.
+
+### Directory Structure
+```
+TaskName_TaskId/
+├── audio/ # Audio data
+│ └── chunk-000/
+│ ├── observation.audio.audio_left/ # Left channel audio (WAV format)
+│ └── observation.audio.audio_right/ # Right channel audio (WAV format)
+├── data/ # Motion command/feedback data (Parquet format)
+│ └── chunk-000/
+├── depth/ # Depth images (AVI format)
+│ └── chunk-000/
+│ ├── observation.images.image_depth_right/
+│ └── observation.images.image_depth_top/
+├── device/ # Device information
+│ └── device_info.json
+├── label/ # Annotation data (generated after annotation)
+│ └── data_annotation.json
+├── meta/ # Metadata
+│ ├── common_record.json # Collection task information
+│ ├── episodes.jsonl # Task description and frame length
+│ ├── episodes_stats.jsonl # Normalized statistical information
+│ ├── info.json # Feature schema, frame rate, version
+│ ├── op_dataid.jsonl # Device number
+│ └── tasks.jsonl
+└── videos/ # Visible light images (MP4 format)
+ └── chunk-000/
+ ├── observation.images.image_left/ # Left camera
+ ├── observation.images.image_left_tac_l/ # Left tactile left camera
+ ├── observation.images.image_left_tac_r/ # Left tactile right camera
+ ├── observation.images.image_right/ # Right camera
+ ├── observation.images.image_right_tac_l/ # Right tactile left camera
+ ├── observation.images.image_right_tac_r/ # Right tactile right camera
+ └── observation.images.image_top/ # Top camera
+```
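The files under `meta/` are newline-delimited JSON (one record per line), so they can be inspected with the standard library alone. A minimal sketch, using a synthetic stand-in directory and illustrative field values rather than real collected data:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def read_jsonl(path):
    """Parse a newline-delimited JSON file into a list of dicts."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Build a tiny stand-in for TaskName_TaskId/meta/episodes.jsonl.
with TemporaryDirectory() as root:
    meta = Path(root) / "TaskName_TaskId" / "meta"
    meta.mkdir(parents=True)
    (meta / "episodes.jsonl").write_text(
        '{"episode_index": 0, "tasks": ["pick cube"], "length": 412}\n'
        '{"episode_index": 1, "tasks": ["pick cube"], "length": 388}\n'
    )
    episodes = read_jsonl(meta / "episodes.jsonl")
    total_frames = sum(e["length"] for e in episodes)
    print(len(episodes), total_frames)  # 2 800
```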
+
+## Acknowledgment
+
+- Thanks to LeRobot team 🤗, [LeRobot](https://github.com/huggingface/lerobot).
+- Thanks to dora-rs 🤗, [dora](https://github.com/dora-rs/dora).
+
+## Cite
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README.md b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README.md
new file mode 100644
index 0000000..b82a47b
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/README.md
@@ -0,0 +1,208 @@
+# robodriver-robot-galaxealite-follower-ros2
+## Quick Start
+### Access Requirements
+#### 1. Hardware Requirements
+Refer to the document: [https://jwolpxeehx.feishu.cn/wiki/LYcNwC2rBirg4Dk9CoScemx3n8f?from=from_copylink](https://jwolpxeehx.feishu.cn/wiki/LYcNwC2rBirg4Dk9CoScemx3n8f?from=from_copylink)
+
+#### 2. Environment & Network Requirements
+- Galaxea teleoperation works normally;
+- ROS2 (Humble/Iron) is installed on the host so it can receive Galaxea topic data;
+- The host and the Galaxea main controller are on the same local area network (a wired Ethernet connection is recommended);
+- The host has internet access;
+- Set the ROS Domain ID (example):
+  ```bash
+  export ROS_DOMAIN_ID=1
+  ```
+
+### Prerequisites (Execute first if not completed)
+1. Embodied Platform Application: [https://ei2data.baai.ac.cn/home](https://ei2data.baai.ac.cn/home)
+2. Edge Service Installation: [https://github.com/FlagOpen/RoboDriver-Server.git](https://github.com/FlagOpen/RoboDriver-Server.git)
+
+### Clone Code Repositories
+#### 1. Clone the RoboDriver Core Repository (Skip if already cloned)
+```bash
+git clone https://github.com/FlagOpen/RoboDriver.git
+```
+
+#### 2. Enter the Galaxea ROS2 folder
+```bash
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-galaxealite-follower-ros2
+```
+
+### Create a Miniconda Virtual Environment
+#### 1. Install Miniconda (run if not installed; works on Linux/macOS)
+```bash
+# Download the installer
+wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
+# Run the installer (follow the prompts; agreeing to conda init is recommended)
+bash Miniconda3-latest-Linux-x86_64.sh
+# Reload the shell so conda takes effect
+source ~/.bashrc
+```
+
+#### 2. Install Dependencies
+```bash
+# Create and activate a Python 3.10 environment (run if not created yet)
+conda create -n robodriver python=3.10 -y
+conda activate robodriver
+
+# Install RoboDriver core dependencies (run if not installed yet)
+cd /path/to/your/RoboDriver
+pip install -e .
+
+# Install Galaxea robot hardware dependencies
+cd /path/to/your/RoboDriver/robodriver/robots/robodriver-robot-galaxealite-follower-ros2
+pip install -e .
+```
+
+### Configure node.py (Adapt to the Robot's Actual Topics)
+This script implements the GALAXEALITE robot's core functions such as multi-topic synchronized subscription, data conversion, and command publishing. Modify the following configurations according to the robot's actual topic names (core location: the node initialization function `__init__`).
+
+#### 1. Publisher Topics (Motion Command Output)
+| Publisher Variable | Default Topic | Function |
+|--------------------------|-------------------------------------------|--------------------------|
+| `publisher_left_arm` | `/motion_target/target_joint_state_arm_left` | Publish left arm joint target values |
+| `publisher_right_arm` | `/motion_target/target_joint_state_arm_right` | Publish right arm joint target values |
+| `publisher_left_gripper` | `/motion_target/target_position_gripper_left` | Publish left gripper position target values |
+| `publisher_right_gripper`| `/motion_target/target_position_gripper_right` | Publish right gripper position target values |
+| `publisher_state_torso` | `/motion_target/target_joint_state_torso` | Publish torso joint target values |
+
+#### 2. Follow Feedback Subscription Topics (Robot State Input)
+| Subscriber Variable | Default Topic | Function |
+|------------------|------------------------|----------------------|
+| `sub_arm_left` | `/hdas/feedback_arm_left` | Subscribe to left arm joint feedback |
+| `sub_arm_right` | `/hdas/feedback_arm_right` | Subscribe to right arm joint feedback |
+| `sub_gripper_left` | `/hdas/feedback_gripper_left` | Subscribe to left gripper feedback |
+| `sub_gripper_right` | `/hdas/feedback_gripper_right` | Subscribe to right gripper feedback |
+| `sub_torso` | `/hdas/feedback_torso` | Subscribe to torso joint feedback |
+
+#### 3. Image Topic Subscription (Camera Data Input)
+| Subscriber Variable | Default Topic | Function |
+|----------------------|-----------------------------------------------|----------------------|
+| `sub_camera_top_left` | `/hdas/camera_head/left_raw/image_raw_color/compressed` | Subscribe to top-left camera image |
+| `sub_camera_top_right` | `/hdas/camera_head/right_raw/image_raw_color/compressed` | Subscribe to top-right camera image |
+| `sub_camera_wrist_left` | `/hdas/camera_wrist_left/color/image_raw/compressed` | Subscribe to left wrist camera image |
+| `sub_camera_wrist_right` | `/hdas/camera_wrist_right/color/image_raw/compressed` | Subscribe to right wrist camera image |
+
+**Modification Example**: Change the top-left camera subscription topic to a custom path
+```python
+sub_camera_top_left = Subscriber(self, CompressedImage, '/my_robot/camera/top_left/compressed')
+```
+
+#### 4. Key Parameter Adjustments
+##### (1) QoS Configuration (Network Transmission Strategy)
+```python
+# Reliable transmission (default for publishers/critical feedback)
+self.qos = QoSProfile(
+    durability=DurabilityPolicy.VOLATILE,    # Non-persistent
+    reliability=ReliabilityPolicy.RELIABLE,  # Ensure message delivery
+    history=HistoryPolicy.KEEP_LAST,         # Keep the last N messages
+    depth=10                                 # Queue depth
+)
+
+# Best-effort transmission (non-critical commands, prioritize speed)
+self.qos_best_effort = QoSProfile(
+    reliability=ReliabilityPolicy.BEST_EFFORT,
+    depth=10
+)
+```
+
+##### (2) Multi-Topic Synchronization Parameters
+```python
+self.sync = ApproximateTimeSynchronizer(
+    [sub_arm_left, sub_arm_right, sub_gripper_left, sub_gripper_right, sub_torso],
+    queue_size=10,  # Larger queue = higher fault tolerance, more memory usage
+    slop=0.1        # Time tolerance (seconds): maximum allowed timestamp difference between topics
+)
+```
+
+##### (3) Publish Frequency Limit
+```python
+# Default 30 Hz; example of changing it to 10 Hz
+self.min_interval_ns = 1e9 / 10
+```
+
+### Configure config.py (Hardware Collection Template)
+This script defines the robot hardware configuration based on the `lerobot` framework. Adjust it according to actual collection requirements (e.g., joints, cameras, recording mode).
+
+#### 1. Motor Configuration Modification Example (Add a 4th Torso Joint)
+```python
+follower_motors: Dict[str, Motor] = field(
+    default_factory=lambda norm_mode_body=norm_mode_body: {
+        "follower_arms":{
+            # Original configuration
+            "torso_joint_3":Motor(17, "sts3215", norm_mode_body),
+            # New joint
+            "torso_joint_4":Motor(18, "sts3215", norm_mode_body),
+        }
+    }
+)
+```
+
+#### 2. Video Recording Switch (use_videos)
+| Configuration Value | Behavior Description | Additional Operations |
+|--------|--------------------------------------------------------------------------|--------------------------------------------------------------------------|
+| `False`| Images are encoded into videos after collection (delayed encoding) | Need to modify `lerobot` source code:<br>Edit `/path/to/miniconda3/envs/robodriver/lib/python3.10/site-packages/lerobot/datasets/pipeline_features.py`<br>Comment out:<br>`# if is_image and not use_videos:`<br>`# continue` |
+| `True` | Images are encoded into videos in real time during collection (encoding time depends on host performance) | No source changes needed; use directly |
+
+## Start Inference
+### 1. Activate the Environment and Start Basic Services
+```bash
+# Start the nginx service
+sudo systemctl start nginx
+# Enter the RoboDriver directory and activate the environment
+cd /path/to/your/RoboDriver
+conda activate robodriver
+```
+
+### 2. Start the Galaxealite Topics
+Run the robot's built-in startup script and make sure the ROS2 topics are being published normally.
+
+### 3. Start RoboDriver
+```bash
+python -m robodriver.scripts.evaluate --robot.type=galaxealite-follower-ros2
+```
+
+## Common Issue Fixes
+1. **The robot arm does not execute actions**
+   Make sure the ROS2 nodes created by the teleoperation module are fully offline; the most reliable fix is to unplug the teleoperation module's network cable.
+
+## Data Description
+### Storage Path
+Collected data is stored in the `/home/yourname/DoRobot` directory by default, organized into folders named by task.
+
+### Directory Structure
+```
+TaskName_TaskId/
+├── audio/                                   # Audio data
+│   └── chunk-000/
+│       ├── observation.audio.audio_left/    # Left channel audio (WAV format)
+│       └── observation.audio.audio_right/   # Right channel audio (WAV format)
+├── data/                                    # Motion command/feedback data (Parquet format)
+│   └── chunk-000/
+├── depth/                                   # Depth images (AVI format)
+│   └── chunk-000/
+│       ├── observation.images.image_depth_right/
+│       └── observation.images.image_depth_top/
+├── device/                                  # Device information
+│   └── device_info.json
+├── label/                                   # Annotation data (generated after annotation)
+│   └── data_annotation.json
+├── meta/                                    # Metadata
+│   ├── common_record.json                   # Collection task information
+│   ├── episodes.jsonl                       # Task description and frame length
+│   ├── episodes_stats.jsonl                 # Normalized statistical information
+│   ├── info.json                            # Feature schema, frame rate, version
+│   ├── op_dataid.jsonl                      # Device number
+│   └── tasks.jsonl
+└── videos/                                  # Visible-light images (MP4 format)
+    └── chunk-000/
+        ├── observation.images.image_left/         # Left camera
+        ├── observation.images.image_left_tac_l/   # Left tactile left camera
+        ├── observation.images.image_left_tac_r/   # Left tactile right camera
+        ├── observation.images.image_right/        # Right camera
+        ├── observation.images.image_right_tac_l/  # Right tactile left camera
+        ├── observation.images.image_right_tac_r/  # Right tactile right camera
+        └── observation.images.image_top/          # Top camera
+```
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/lerobot_robot_galaxealite_follower_ros2/__init__.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/lerobot_robot_galaxealite_follower_ros2/__init__.py
new file mode 100644
index 0000000..49c9d7a
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/lerobot_robot_galaxealite_follower_ros2/__init__.py
@@ -0,0 +1 @@
+from robodriver_robot_galaxealite_follower_ros2 import *
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/pyproject.toml b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/pyproject.toml
new file mode 100644
index 0000000..92cdafb
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/pyproject.toml
@@ -0,0 +1,25 @@
+[build-system]
+requires = ["setuptools>=61.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "robodriver_robot_galaxealite_follower_ros2"
+version = "0.1.0"
+readme = "README.md"
+requires-python = ">=3.8"
+license = "Apache-2.0"
+authors = [
+ {name = "Xi Wang", email = "wangxi@baai.ac.cn"},
+]
+keywords = ["robotics", "lerobot", "galaxea"]
+dependencies = [
+ "dora-rs-cli",
+ "logging_mp",
+ "opencv-python",
+]
+
+[tool.setuptools.packages.find]
+include = [
+ "robodriver_robot_galaxealite_follower_ros2",
+ "lerobot_robot_galaxealite_follower_ros2"
+]
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/__init__.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/__init__.py
new file mode 100644
index 0000000..5a63e39
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/__init__.py
@@ -0,0 +1,3 @@
+from .config import GALAXEALITEFollowerRos2RobotConfig
+from .robot import GALAXEALITEFollowerRos2Robot
+from .status import GALAXEALITEFollowerRos2RobotStatus
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/config.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/config.py
new file mode 100644
index 0000000..49e613b
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/config.py
@@ -0,0 +1,73 @@
+from typing import Dict
+from dataclasses import dataclass, field
+
+from lerobot.robots.config import RobotConfig
+from lerobot.cameras import CameraConfig
+from lerobot.cameras.opencv import OpenCVCameraConfig
+from lerobot.motors import Motor, MotorNormMode
+
+
+@RobotConfig.register_subclass("galaxealite-follower-ros2")
+@dataclass
+class GALAXEALITEFollowerRos2RobotConfig(RobotConfig):
+ use_degrees = True
+ norm_mode_body = (
+ MotorNormMode.DEGREES if use_degrees else MotorNormMode.RANGE_M100_100
+ )
+ follower_motors: Dict[str, Motor] = field(
+ default_factory=lambda norm_mode_body=norm_mode_body: {
+ "follower_arms":{
+ "left_arm_joint_1_rad": Motor(1, "sts3215", norm_mode_body),
+ "left_arm_joint_2_rad": Motor(2, "sts3215", norm_mode_body),
+ "left_arm_joint_3_rad": Motor(3, "sts3215", norm_mode_body),
+ "left_arm_joint_4_rad": Motor(4, "sts3215", norm_mode_body),
+ "left_arm_joint_5_rad": Motor(5, "sts3215", norm_mode_body),
+ "left_arm_joint_6_rad": Motor(6, "sts3215", norm_mode_body),
+ "left_gripper_degree_mm": Motor(7, "sts3215", MotorNormMode.RANGE_0_100),
+ "right_arm_joint_1_rad": Motor(8, "sts3215", norm_mode_body),
+ "right_arm_joint_2_rad": Motor(9, "sts3215", norm_mode_body),
+ "right_arm_joint_3_rad": Motor(10, "sts3215", norm_mode_body),
+ "right_arm_joint_4_rad": Motor(11, "sts3215", norm_mode_body),
+ "right_arm_joint_5_rad": Motor(12, "sts3215", norm_mode_body),
+ "right_arm_joint_6_rad": Motor(13, "sts3215", norm_mode_body),
+ "right_gripper_degree_mm": Motor(14, "sts3215", MotorNormMode.RANGE_0_100),
+ "torso_joint_1":Motor(15, "sts3215", norm_mode_body),
+ "torso_joint_2":Motor(16, "sts3215", norm_mode_body),
+ "torso_joint_3":Motor(17, "sts3215", norm_mode_body),
+ }
+ }
+ )
+
+ cameras: Dict[str, CameraConfig] = field(
+ default_factory=lambda: {
+ "image_top_left": OpenCVCameraConfig(
+ index_or_path=1,
+ fps=30,
+ width=1280,
+ height=720,
+ ),
+ "image_top_right": OpenCVCameraConfig(
+ index_or_path=2,
+ fps=30,
+ width=1280,
+ height=720,
+ ),
+ "image_wrist_left": OpenCVCameraConfig(
+ index_or_path=3,
+ fps=30,
+ width=640,
+ height=360,
+ ),
+ "image_wrist_right": OpenCVCameraConfig(
+ index_or_path=4,
+ fps=30,
+ width=640,
+ height=360,
+ ),
+ }
+ )
+
+ use_videos: bool = False
+
+ microphones: Dict[str, int] = field(default_factory=lambda: {})
+
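The `@RobotConfig.register_subclass("galaxealite-follower-ros2")` decorator in config.py above follows a choice-registry pattern: each robot config registers itself under a string name. A minimal self-contained sketch of that pattern (the `Registry` class and its methods are illustrative assumptions, not the lerobot API):

```python
from dataclasses import dataclass


class Registry:
    _choices = {}  # name -> registered subclass

    @classmethod
    def register_subclass(cls, name):
        # Returns a decorator; the class is only registered
        # when the decorator is actually applied with "@".
        def decorator(subclass):
            cls._choices[name] = subclass
            return subclass
        return decorator

    @classmethod
    def get_choice_name(cls, subclass):
        for name, registered in cls._choices.items():
            if registered is subclass:
                return name
        raise KeyError(subclass)


@Registry.register_subclass("galaxealite-follower-ros2")
@dataclass
class DemoConfig(Registry):
    fps: int = 30
```

Note the decorator must be applied with `@`: a bare `Registry.register_subclass("name")` statement builds the decorator but never registers anything.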
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/node.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/node.py
new file mode 100644
index 0000000..d78218e
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/node.py
@@ -0,0 +1,233 @@
+import threading
+import time
+from typing import Dict
+
+import numpy as np
+import cv2
+import rclpy
+from rclpy.node import Node as ROS2Node
+from sensor_msgs.msg import JointState, CompressedImage
+from message_filters import Subscriber, ApproximateTimeSynchronizer
+from geometry_msgs.msg import PoseStamped
+from rclpy.qos import QoSProfile, DurabilityPolicy, ReliabilityPolicy, HistoryPolicy
+
+import logging_mp
+
+CONNECT_TIMEOUT_FRAME = 10
+logger = logging_mp.getLogger(__name__)
+
+class GALAXEALITEFollowerRos2RobotNode(ROS2Node):
+ def __init__(self):
+ super().__init__('ros2_recv_pub_driver')
+ self.stop_spin = False # flag used to stop the spin loop
+ self.qos = QoSProfile(
+ durability=DurabilityPolicy.VOLATILE,
+ reliability=ReliabilityPolicy.RELIABLE,
+ history=HistoryPolicy.KEEP_LAST,
+ depth=10
+ )
+
+ self.qos_best_effort = QoSProfile(
+ durability=DurabilityPolicy.VOLATILE,
+ reliability=ReliabilityPolicy.BEST_EFFORT,
+ history=HistoryPolicy.KEEP_LAST,
+ depth=10
+ )
+
+ # Use one consistent QoS profile for all publishers
+ self.publisher_left_arm = self.create_publisher(
+ JointState, "/motion_target/target_joint_state_arm_left", self.qos
+ )
+ self.publisher_right_arm = self.create_publisher(
+ JointState, "/motion_target/target_joint_state_arm_right", self.qos
+ )
+ self.publisher_left_gripper = self.create_publisher(
+ JointState, "/motion_target/target_position_gripper_left", self.qos
+ )
+ self.publisher_right_gripper = self.create_publisher(
+ JointState, "/motion_target/target_position_gripper_right", self.qos
+ )
+ # self.publisher_state_torso = self.create_publisher(
+ # JointState, "/motion_target/target_joint_state_torso", self.qos
+ # )
+
+ self.last_main_send_time_ns = 0
+ self.last_follow_send_time_ns = 0
+ self.min_interval_ns = 1e9 / 30 # 30Hz
+
+ self._init_message_follow_filters()
+ self._init_image_message_filters()
+
+ self.recv_images: Dict[str, np.ndarray] = {}
+ self.recv_follower: Dict[str, np.ndarray] = {}
+ self.recv_images_status: Dict[str, int] = {}
+ self.recv_follower_status: Dict[str, int] = {}
+
+ self.lock = threading.Lock()
+
+ def _init_message_follow_filters(self):
+ sub_arm_left = Subscriber(self, JointState, '/hdas/feedback_arm_left')
+ sub_arm_right = Subscriber(self, JointState, '/hdas/feedback_arm_right')
+ sub_gripper_left = Subscriber(self, JointState, '/hdas/feedback_gripper_left')
+ sub_gripper_right = Subscriber(self, JointState, '/hdas/feedback_gripper_right')
+ sub_torso = Subscriber(self, JointState, '/hdas/feedback_torso')
+
+ self.sync = ApproximateTimeSynchronizer(
+ [sub_arm_left, sub_arm_right, sub_gripper_left, sub_gripper_right, sub_torso],
+ queue_size=10,
+ slop=0.1
+ )
+ self.sync.registerCallback(self.synchronized_follow_callback)
+
+ def synchronized_follow_callback(self, arm_left, arm_right, gripper_left, gripper_right, torso):
+ try:
+ current_time_ns = time.time_ns()
+ if (current_time_ns - self.last_follow_send_time_ns) < self.min_interval_ns:
+ return
+ self.last_follow_send_time_ns = current_time_ns
+
+ left_pos = np.array(arm_left.position, dtype=np.float32)
+ right_pos = np.array(arm_right.position, dtype=np.float32)
+ left_arm_data = left_pos
+ right_arm_data = right_pos
+
+ gripper_left_pos = np.array(gripper_left.position, dtype=np.float32)
+ gripper_right_pos = np.array(gripper_right.position, dtype=np.float32)
+ torso_pos = np.array(torso.position, dtype=np.float32)
+ torso_pos = torso_pos[:-1]
+
+ merged_data = np.concatenate([left_arm_data, gripper_left_pos, right_arm_data, gripper_right_pos, torso_pos])
+ with self.lock:
+ self.recv_follower['follower_arms'] = merged_data
+ self.recv_follower_status['follower_arms'] = CONNECT_TIMEOUT_FRAME
+ except Exception as e:
+ self.get_logger().error(f"Synchronized follow callback error: {e}")
+
+ def _init_image_message_filters(self):
+ sub_camera_top_left = Subscriber(self, CompressedImage, '/hdas/camera_head/left_raw/image_raw_color/compressed')
+ sub_camera_top_right = Subscriber(self, CompressedImage, '/hdas/camera_head/right_raw/image_raw_color/compressed')
+ sub_camera_wrist_left = Subscriber(self, CompressedImage, '/hdas/camera_wrist_left/color/image_raw/compressed')
+ sub_camera_wrist_right = Subscriber(self, CompressedImage, '/hdas/camera_wrist_right/color/image_raw/compressed')
+
+ self.image_sync = ApproximateTimeSynchronizer(
+ [sub_camera_top_left, sub_camera_top_right, sub_camera_wrist_left, sub_camera_wrist_right],
+ queue_size=5,
+ slop=0.1
+ )
+ self.image_sync.registerCallback(self.image_synchronized_callback)
+
+ def image_synchronized_callback(self, top_left, top_right, wrist_left, wrist_right):
+ try:
+ self.images_recv(top_left, "image_top_left", 1280, 720)
+ self.images_recv(top_right, "image_top_right", 1280, 720)
+ self.images_recv(wrist_left, "image_wrist_left", 640, 360)
+ self.images_recv(wrist_right, "image_wrist_right", 640, 360)
+ except Exception as e:
+ self.get_logger().error(f"Image synchronized callback error: {e}")
+
+ def images_recv(self, msg, event_id, width, height, encoding="jpeg"):
+ try:
+ if 'image' in event_id:
+ frame = None # guard against unknown encodings below
+ img_array = np.frombuffer(msg.data, dtype=np.uint8)
+ if encoding == "bgr8":
+ frame = img_array.reshape((height, width, 3)).copy()
+ frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+ elif encoding == "rgb8":
+ frame = img_array.reshape((height, width, 3)) # already RGB
+ elif encoding in ["jpeg", "jpg", "jpe", "bmp", "webp", "png"]:
+ frame = cv2.imdecode(img_array, cv2.IMREAD_COLOR)
+ frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+ elif encoding == "depth16":
+ frame = np.frombuffer(msg.data, dtype=np.uint16).reshape(height, width, 1)
+
+ if frame is not None:
+ with self.lock:
+ self.recv_images[event_id] = frame
+ self.recv_images_status[event_id] = CONNECT_TIMEOUT_FRAME
+ except Exception as e:
+ logger.error(f"recv image error: {e}")
+
+ def ros_replay(self, array):
+ try:
+ def normalize_precision(val, decimals=3):
+ val = float(val)
+ if np.isnan(val) or np.isinf(val):
+ self.get_logger().warning(f"invalid value {val} detected, replacing with 0.0")
+ return 0.0
+ return round(val, decimals)
+
+ left_arm = [normalize_precision(v) for v in array[:6]]
+ left_gripper = [normalize_precision(v) for v in array[6:7]]
+ right_arm = [normalize_precision(v) for v in array[7:13]]
+ right_gripper = [normalize_precision(v) for v in array[13:14]]
+ torso = [normalize_precision(v) for v in array[14:17]]
+
+ msg = JointState()
+ msg.position = left_arm
+ self.publisher_left_arm.publish(msg)
+
+ msg = JointState()
+ msg.position = right_arm
+ self.publisher_right_arm.publish(msg)
+
+ msg = JointState()
+ msg.position = left_gripper
+ self.publisher_left_gripper.publish(msg)
+
+ msg = JointState()
+ msg.position = right_gripper
+ self.publisher_right_gripper.publish(msg)
+
+ # msg = JointState()
+ # msg.position = torso
+ # self.publisher_state_torso.publish(msg)
+
+ except Exception as e:
+ self.get_logger().error(f"Error during replay at frame: {e}")
+ raise
+
+ def destroy(self):
+ self.stop_spin = True
+ super().destroy_node()
+
+ def _add_debug_subscribers(self):
+ self.create_subscription(
+ JointState,
+ '/motion_target/target_joint_state_arm_right',
+ lambda msg: self.get_logger().info(f"debug sub - right arm joints: position={msg.position}"),
+ self.qos_best_effort
+ )
+ self.create_subscription(
+ PoseStamped,
+ '/motion_target/target_pose_arm_right',
+ lambda msg: self.get_logger().info(f"debug sub - right arm pose: x={msg.pose.position.x}, y={msg.pose.position.y}"),
+ self.qos_best_effort
+ )
+ self.create_subscription(
+ JointState,
+ '/motion_target/target_position_gripper_right',
+ lambda msg: self.get_logger().info(f"debug sub - right gripper: position={msg.position}"),
+ self.qos_best_effort
+ )
+ self.create_subscription(
+ PoseStamped,
+ '/motion_target/target_pose_torso',
+ lambda msg: self.get_logger().info(f"debug sub - torso pose: x={msg.pose.position.x}"),
+ self.qos_best_effort
+ )
+ self.create_subscription(
+ JointState,
+ '/motion_target/target_joint_state_torso',
+ lambda msg: self.get_logger().info(f"debug sub - torso joints: position={msg.position}"),
+ self.qos_best_effort
+ )
+
+# ros_spin_thread is kept at module level for external callers
+def ros_spin_thread(node):
+ while rclpy.ok() and not getattr(node, "stop_spin", False):
+ rclpy.spin_once(node, timeout_sec=0.01)
\ No newline at end of file
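`synchronized_follow_callback` above throttles incoming messages to roughly 30 Hz by comparing nanosecond timestamps against a minimum interval. A standalone sketch of that pattern (`Throttle` is a hypothetical helper name, not part of the repo):

```python
import time


class Throttle:
    """Drop events that arrive faster than the configured rate."""

    def __init__(self, hz):
        self.min_interval_ns = 1e9 / hz  # e.g. ~33.3 ms at 30 Hz
        self.last_ns = 0

    def allow(self, now_ns=None):
        # Returns True if enough time has passed since the last
        # accepted event; otherwise the caller should skip this one.
        now_ns = time.time_ns() if now_ns is None else now_ns
        if (now_ns - self.last_ns) < self.min_interval_ns:
            return False
        self.last_ns = now_ns
        return True
```

In the node the same check guards both the follower-state callback and the replay path, keeping downstream consumers at the target control rate.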
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/robot.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/robot.py
new file mode 100644
index 0000000..0793504
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/robot.py
@@ -0,0 +1,354 @@
+import threading
+import time
+from typing import Any
+
+import logging_mp
+import numpy as np
+from lerobot.cameras import make_cameras_from_configs
+from lerobot.robots.robot import Robot
+from lerobot.utils.errors import DeviceNotConnectedError, DeviceAlreadyConnectedError
+from functools import cached_property
+
+import rclpy
+
+from .config import GALAXEALITEFollowerRos2RobotConfig
+from .status import GALAXEALITEFollowerRos2RobotStatus
+from .node import GALAXEALITEFollowerRos2RobotNode, ros_spin_thread
+
+
+logger = logging_mp.getLogger(__name__)
+
+
+
+class GALAXEALITEFollowerRos2Robot(Robot):
+ config_class = GALAXEALITEFollowerRos2RobotConfig
+ name = "galaxealite-follower-ros2"
+
+ def __init__(self, config: GALAXEALITEFollowerRos2RobotConfig):
+ super().__init__(config)
+ self.config = config
+ self.robot_type = self.config.type
+ self.use_videos = self.config.use_videos
+ self.microphones = self.config.microphones
+
+ self.follower_motors = config.follower_motors
+ self.cameras = make_cameras_from_configs(self.config.cameras)
+
+ self.connect_excluded_cameras = ["image_pika_pose"]
+
+ self.status = GALAXEALITEFollowerRos2RobotStatus()
+ if not rclpy.ok():
+ rclpy.init()
+ self.robot_ros2_node = GALAXEALITEFollowerRos2RobotNode() # create the ROS2 node instance
+ self.ros_spin_thread = threading.Thread(
+ target=ros_spin_thread,
+ args=(self.robot_ros2_node,),
+ daemon=True
+ )
+ self.ros_spin_thread.start()
+
+ self.connected = False
+ self.logs = {}
+
+ @property
+ def _follower_motors_ft(self) -> dict[str, type]:
+ return {
+ f"follower_{joint_name}.pos": float
+ for comp_name, joints in self.follower_motors.items()
+ for joint_name in joints.keys()
+ }
+
+ @property
+ def _cameras_ft(self) -> dict[str, tuple]:
+ return {
+ cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
+ }
+
+ @cached_property
+ def observation_features(self) -> dict[str, type | tuple]:
+ return {**self._follower_motors_ft, **self._cameras_ft}
+
+ @cached_property
+ def action_features(self) -> dict[str, type]:
+ return self._follower_motors_ft
+
+ @property
+ def is_connected(self) -> bool:
+ return self.connected
+
+ def connect(self):
+ timeout = 20 # shared timeout for all conditions (seconds)
+ start_time = time.perf_counter()
+
+ if self.connected:
+ raise DeviceAlreadyConnectedError(f"{self} already connected")
+
+ # Each condition: (is-ready check, getter for missing items, error message)
+ conditions = [
+ (
+ lambda: all(
+ name in self.robot_ros2_node.recv_images
+ for name in self.cameras
+ if name not in self.connect_excluded_cameras
+ ),
+ lambda: [
+ name
+ for name in self.cameras
+ if name not in self.connect_excluded_cameras
+ and name not in self.robot_ros2_node.recv_images
+ ],
+ "timed out waiting for camera images",
+ ),
+ (
+ lambda: all(
+ any(name in key for key in self.robot_ros2_node.recv_follower)
+ for name in self.follower_motors
+ ),
+ lambda: [
+ name
+ for name in self.follower_motors
+ if not any(name in key for key in self.robot_ros2_node.recv_follower)
+ ],
+ "timed out waiting for follower arm joint states",
+ ),
+ ]
+
+ # track which conditions have been met
+ completed = [False] * len(conditions)
+
+ while True:
+ # re-check every pending condition
+ for i in range(len(conditions)):
+ if not completed[i]:
+ condition_func = conditions[i][0]
+ if condition_func():
+ completed[i] = True
+
+ # exit once every condition is met
+ if all(completed):
+ break
+
+ # give up after the timeout
+ if time.perf_counter() - start_time > timeout:
+ failed_messages = []
+ for i, (cond, get_missing, base_msg) in enumerate(conditions):
+ if completed[i]:
+ continue
+
+ missing = get_missing()
+ if cond() or not missing:
+ completed[i] = True
+ continue
+
+ if i == 0:
+ received = [
+ name
+ for name in self.cameras
+ if name not in missing
+ ]
+ else:
+ received = [
+ name
+ for name in self.follower_motors
+ if name not in missing
+ ]
+
+ msg = (
+ f"{base_msg}: missing [{', '.join(missing)}]; "
+ f"received [{', '.join(received)}]"
+ )
+ failed_messages.append(msg)
+
+ if not failed_messages:
+ break
+
+ raise TimeoutError(
+ f"connection timed out; unmet conditions: {'; '.join(failed_messages)}"
+ )
+
+ # avoid busy-waiting
+ time.sleep(0.01)
+
+ # ===== log a summary of what was received =====
+ success_messages = []
+
+ if conditions[0][0]():
+ cam_received = [
+ name
+ for name in self.cameras
+ if name in self.robot_ros2_node.recv_images
+ and name not in self.connect_excluded_cameras
+ ]
+ success_messages.append(f"cameras: {', '.join(cam_received)}")
+
+ if conditions[1][0]():
+ follower_received = [
+ name
+ for name in self.follower_motors
+ if any(name in key for key in self.robot_ros2_node.recv_follower)
+ ]
+ success_messages.append(f"follower arm data: {', '.join(follower_received)}")
+
+
+ log_message = "\n[connected] all devices ready:\n"
+ log_message += "\n".join(f" - {msg}" for msg in success_messages)
+ log_message += f"\n total time: {time.perf_counter() - start_time:.2f} s\n"
+ logger.info(log_message)
+ # ===========================
+
+ for i in range(self.status.specifications.camera.number):
+ self.status.specifications.camera.information[i].is_connect = True
+ for i in range(self.status.specifications.arm.number):
+ self.status.specifications.arm.information[i].is_connect = True
+
+ self.connected = True
+
+ @property
+ def is_calibrated(self) -> bool:
+ """Whether the robot is currently calibrated or not. Should be always `True` if not applicable"""
+ return True
+
+ def calibrate(self) -> None:
+ """
+ Calibrate the robot if applicable. If not, this should be a no-op.
+
+ This method should collect any necessary data (e.g., motor offsets) and update the
+ :pyattr:`calibration` dictionary accordingly.
+ """
+ pass
+
+ def configure(self) -> None:
+ """
+ Apply any one-time or runtime configuration to the robot.
+ This may include setting motor parameters, control modes, or initial state.
+ """
+ pass
+
+ def get_observation(self) -> dict[str, Any]:
+ if not self.connected:
+ raise DeviceNotConnectedError(f"{self} is not connected.")
+
+ start = time.perf_counter()
+ obs_dict: dict[str, Any] = {}
+ for comp_name, joints in self.follower_motors.items():
+ for follower_name, follower in self.robot_ros2_node.recv_follower.items():
+ if follower_name == comp_name:
+ for i, joint_name in enumerate(joints.keys()):
+ obs_dict[f"follower_{joint_name}.pos"] = float(follower[i])
+
+ dt_ms = (time.perf_counter() - start) * 1e3
+ logger.debug(f"{self} read follower state: {dt_ms:.1f} ms")
+
+ # ---- camera images ----
+ for cam_key, _cam in self.cameras.items():
+ start = time.perf_counter()
+ for name, val in self.robot_ros2_node.recv_images.items():
+ if cam_key == name or cam_key in name:
+ obs_dict[cam_key] = val
+ break
+ dt_ms = (time.perf_counter() - start) * 1e3
+ logger.debug(f"{self} read {cam_key}: {dt_ms:.1f} ms")
+
+ return obs_dict
+
+
+ def send_action(self, action: dict[str, Any]):
+ """The provided action is expected to be a vector."""
+ if not self.is_connected:
+ raise DeviceNotConnectedError(
+ f"{self} is not connected. You need to run `robot.connect()`."
+ )
+ goal_joint = [val for val in action.values()]
+ goal_joint_numpy = np.array([
+ t.item() if hasattr(t, 'item') else t
+ for t in goal_joint
+ ], dtype=np.float32)
+ try:
+ if goal_joint_numpy.shape != (14,):
+ raise ValueError(f"Action vector must be 14-dimensional, got {goal_joint_numpy.shape[0]}")
+
+ # publish the action through the ROS2 node's ros_replay
+ self.robot_ros2_node.ros_replay(goal_joint_numpy)
+
+ except Exception as e:
+ logger.error(f"Failed to send action: {e}")
+ raise
+
+ def update_status(self) -> str:
+ for i in range(self.status.specifications.camera.number):
+ match_name = self.status.specifications.camera.information[i].name
+ for name in self.robot_ros2_node.recv_images_status:
+ if match_name in name:
+ self.status.specifications.camera.information[i].is_connect = (
+ self.robot_ros2_node.recv_images_status[name] > 0
+ )
+
+ for i in range(self.status.specifications.arm.number):
+ match_name = self.status.specifications.arm.information[i].name
+ for name in self.robot_ros2_node.recv_follower_status:
+ if match_name in name:
+ self.status.specifications.arm.information[i].is_connect = (
+ self.robot_ros2_node.recv_follower_status[name] > 0
+ )
+
+ return self.status.to_json()
+
+ def disconnect(self):
+ if not self.is_connected:
+ raise DeviceNotConnectedError(
+ "robot is not connected. You need to run `robot.connect()` before disconnecting."
+ )
+ if hasattr(self, "robot_ros2_node"):
+ self.robot_ros2_node.destroy()
+ if rclpy.ok():
+ rclpy.shutdown()
+
+ self.connected = False
+
+ def __del__(self):
+ try:
+ if getattr(self, "is_connected", False):
+ self.disconnect()
+ except Exception:
+ pass
+
+ def reset(self,
+ state = [
+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100,
+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100,
+ -0.8600000143051147, 1.6863000392913818, 0.7631000280380249
+ ],
+ duration: float = 3.0):
+ if not self.is_connected:
+ raise DeviceNotConnectedError(
+ "Robot is not connected. You need to run `robot.connect()`."
+ )
+
+ # Get robot current states
+ curr: list[float] = []
+ for comp_name, joints in self.follower_motors.items():
+ follower = self.robot_ros2_node.recv_follower.get(comp_name)
+ if follower is not None:
+ for i, _ in enumerate(joints):
+ curr.append(float(follower[i]))
+ curr_arr = np.array(curr, dtype=np.float32)
+ goal_arr = np.array(state, dtype=np.float32)
+
+ # Interpolate actions within the given duration
+ try:
+ FPS = 30.0
+ steps = max(1, int(round(duration * FPS))) # at least execute once
+ for i in range(1, steps + 1):
+ start_episode_t = time.perf_counter()
+ ratio = i / steps
+ action = curr_arr + ratio * (goal_arr - curr_arr)
+ self.robot_ros2_node.ros_replay(action)
+ dt_s = time.perf_counter() - start_episode_t
+ time.sleep(max(0.0, 1 / FPS - dt_s)) # never sleep a negative interval
+
+ except Exception as e:
+ logger.error(f"Failed to send action: {e}")
+ raise
\ No newline at end of file
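The `reset()` method above ramps the robot from its current joint vector to a goal pose by linear interpolation over a fixed duration at 30 FPS. A pure-Python sketch of that interpolation step (function name and list-based joints are illustrative; the driver uses numpy arrays):

```python
def interpolate_ramp(curr, goal, duration=3.0, fps=30.0):
    """Yield one interpolated joint vector per control tick."""
    steps = max(1, int(round(duration * fps)))  # execute at least once
    for i in range(1, steps + 1):
        ratio = i / steps
        # ratio goes 1/steps .. 1.0, so the last action equals the goal
        yield [c + ratio * (g - c) for c, g in zip(curr, goal)]
```

Each yielded vector would be handed to `ros_replay`, with a sleep to hold the loop at the control rate.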
diff --git a/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/status.py b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/status.py
new file mode 100644
index 0000000..0d23e2e
--- /dev/null
+++ b/robodriver/robots/robodriver-robot-galaxealite-follower-ros2/robodriver_robot_galaxealite_follower_ros2/status.py
@@ -0,0 +1,136 @@
+from dataclasses import dataclass, field, asdict
+from typing import List, Optional
+import json
+import abc
+import draccus
+
+
+@dataclass
+class CameraInfo:
+ name: str = ""
+ chinese_name: str = ""
+ type: str = ""
+ width: int = 0
+ height: int = 0
+ is_connect: bool = False
+
+@dataclass
+class CameraStatus:
+ number: int = 0
+ information: List[CameraInfo] = field(default_factory=list)
+
+ def __post_init__(self):
+ # number is derived from the information list (0 when empty)
+ self.number = len(self.information)
+
+@dataclass
+class ArmInfo:
+ name: str = ""
+ type: str = ""
+ start_pose: List[float] = field(default_factory=list)
+ joint_p_limit: List[float] = field(default_factory=list)
+ joint_n_limit: List[float] = field(default_factory=list)
+ is_connect: bool = False
+
+@dataclass
+class ArmStatus:
+ number: int = 0
+ information: List[ArmInfo] = field(default_factory=list)
+
+ def __post_init__(self):
+ # number is derived from the information list (0 when empty)
+ self.number = len(self.information)
+
+@dataclass
+class Specifications:
+ end_type: str = "Default"
+ fps: int = 30
+ camera: Optional[CameraStatus] = None
+ arm: Optional[ArmStatus] = None
+
+@dataclass
+class RobotStatus(draccus.ChoiceRegistry, abc.ABC):
+ device_name: str = "Default"
+ device_body: str = "Default"
+ specifications: Specifications = field(default_factory=Specifications)
+
+ @property
+ def type(self) -> str:
+ return self.get_choice_name(self.__class__)
+
+ def to_dict(self) -> dict:
+ return asdict(self)
+
+ def to_json(self) -> str:
+ return json.dumps(self.to_dict(), ensure_ascii=False)
+
+@RobotStatus.register_subclass("galaxealite-follower-ros2")
+@dataclass
+class GALAXEALITEFollowerRos2RobotStatus(RobotStatus):
+ device_name: str = "星海图R1-Lite"
+ device_body: str = "星海图"
+
+ def __post_init__(self):
+ self.specifications.end_type = "二指夹爪"
+ self.specifications.fps = 30
+ self.specifications.camera = CameraStatus(
+ information=[
+ CameraInfo(
+ name="image_top_left",
+ chinese_name="头部左摄像头",
+ type="纯双目视觉相机",
+ width=1280,
+ height=720,
+ is_connect=False
+ ),
+ CameraInfo(
+ name="image_top_right",
+ chinese_name="头部右摄像头",
+ type="纯双目视觉相机",
+ width=1280,
+ height=720,
+ is_connect=False
+ ),
+ CameraInfo(
+ name="image_wrist_left",
+ chinese_name="腕部左摄像头",
+ type="单目深度相机",
+ width=640,
+ height=360,
+ is_connect=False
+ ),
+ CameraInfo(
+ name="image_wrist_right",
+ chinese_name="腕部右摄像头",
+ type="单目深度相机",
+ width=640,
+ height=360,
+ is_connect=False
+ )
+ ]
+ )
+
+ self.specifications.arm = ArmStatus(
+ information=[
+ ArmInfo(
+ name="piper_left",
+ type="Galaxea A1X + Galaxea G1 - 7DOF",
+ start_pose=[],
+ joint_p_limit=[165.0,180.0,0.0,90.0,90.0,165.0],
+ joint_n_limit=[-165.0,0.0,-190.0,-90.0,-90.0,-165.0],
+ is_connect=False
+ ),
+ ArmInfo(
+ name="piper_right",
+ type="Galaxea A1X + Galaxea G1 - 7DOF",
+ start_pose=[],
+ joint_p_limit=[165.0,180.0,0.0,90.0,90.0,165.0],
+ joint_n_limit=[-165.0,0.0,-190.0,-90.0,-90.0,-165.0],
+ is_connect=False
+ ),
+ ]
+ )
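The status classes above serialize nested dataclasses to JSON via `asdict()` with `ensure_ascii=False`, so the Chinese display names survive unescaped. A minimal sketch of that serialization path (the `DemoStatus` names are illustrative, not the repo's classes):

```python
import json
from dataclasses import dataclass, field, asdict


@dataclass
class CameraInfo:
    name: str = ""
    is_connect: bool = False


@dataclass
class DemoStatus:
    device_name: str = "demo"
    cameras: list = field(default_factory=lambda: [CameraInfo(name="image_top_left")])

    def to_json(self) -> str:
        # asdict() recurses into nested dataclasses and lists;
        # ensure_ascii=False keeps non-ASCII display names readable.
        return json.dumps(asdict(self), ensure_ascii=False)
```

`update_status()` then only flips `is_connect` flags in place and re-serializes, so the JSON shape stays stable for the server side.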
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/node.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/node.py
index ab9fdf3..6a6393a 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/node.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/node.py
@@ -15,7 +15,7 @@
from typing import Dict, Any
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/robot.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/robot.py
index deacde0..a13dcea 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/robot.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-ros2-dds/robodriver_robot_galbot_g1_aio_sdk/robot.py
@@ -14,7 +14,7 @@
from .node import GALBOTAioRos2Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GALBOTAioRos2Robot(Robot):
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/node.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/node.py
index ab9fdf3..6a6393a 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/node.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/node.py
@@ -15,7 +15,7 @@
from typing import Dict, Any
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/robot.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/robot.py
index deacde0..a13dcea 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/robot.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-py/robodriver_robot_galbot_g1_aio_sdk/robot.py
@@ -14,7 +14,7 @@
from .node import GALBOTAioRos2Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GALBOTAioRos2Robot(Robot):
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/node.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/node.py
index f1c968b..075584e 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/node.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/node.py
@@ -26,7 +26,7 @@
from galbot.tf2_proto import tf2_message_pb2
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/robot.py b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/robot.py
index 72e8e34..c2ad45b 100644
--- a/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/robot.py
+++ b/robodriver/robots/robodriver-robot-galbot-g1-aio-sdk-rc/robodriver_robot_galbot_g1_aio_sdk_rc/robot.py
@@ -13,7 +13,7 @@
from .node import GalbotG1AIOSDKRCRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class GalbotG1AIOSDKRCRobot(Robot):
diff --git a/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/node.py b/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/node.py
index 76d5065..5f991ec 100644
--- a/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/node.py
+++ b/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/node.py
@@ -10,7 +10,7 @@
import logging_mp
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/robot.py b/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/robot.py
index 96acfb4..a0901fa 100644
--- a/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/robot.py
+++ b/robodriver/robots/robodriver-robot-realman-aio-ros1/robodriver_robot_realman_aio_ros1/robot.py
@@ -15,7 +15,7 @@
from .node import RealmanAioRos1Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class RealmanAioRos1Robot(Robot):
diff --git a/robodriver/robots/robodriver-robot-realman-aio-ros1/ros1_to_zmq.py b/robodriver/robots/robodriver-robot-realman-aio-ros1/ros1_to_zmq.py
index 02c2833..a168ee7 100644
--- a/robodriver/robots/robodriver-robot-realman-aio-ros1/ros1_to_zmq.py
+++ b/robodriver/robots/robodriver-robot-realman-aio-ros1/ros1_to_zmq.py
@@ -24,7 +24,7 @@
import logging_mp
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/node.py b/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/node.py
index 29c71c5..77c3bcb 100644
--- a/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/node.py
+++ b/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/robot.py b/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/robot.py
index 5c094be..78f86d7 100644
--- a/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/robot.py
+++ b/robodriver/robots/robodriver-robot-realman1-aio-dora/robodriver_robot_realman1_aio_dora/robot.py
@@ -15,7 +15,7 @@
from .node import RealManFollowerDoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class RealManFollowerDoraRobot(Robot):
diff --git a/robodriver/robots/robodriver-robot-so101-aio-dora/dora/dataflow.yml b/robodriver/robots/robodriver-robot-so101-aio-dora/dora/dataflow.yml
index c2a700d..6d95afc 100644
--- a/robodriver/robots/robodriver-robot-so101-aio-dora/dora/dataflow.yml
+++ b/robodriver/robots/robodriver-robot-so101-aio-dora/dora/dataflow.yml
@@ -7,7 +7,7 @@ nodes:
outputs:
- image
env:
- VIRTUAL_ENV: camera.venv
+ UV_PROJECT_ENVIRONMENT: /home/dora/RoboDriver/robodriver/robots/robodriver-robot-so101-aio-dora/dora/camera.venv
CAPTURE_PATH: 4
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480
@@ -20,8 +20,8 @@ nodes:
outputs:
- image
env:
- VIRTUAL_ENV: camera.venv
- CAPTURE_PATH: 2
+ UV_PROJECT_ENVIRONMENT: /home/dora/RoboDriver/robodriver/robots/robodriver-robot-so101-aio-dora/dora/camera.venv
+ CAPTURE_PATH: 0
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480
@@ -34,7 +34,7 @@ nodes:
outputs:
- image
env:
- VIRTUAL_ENV: camera.venv
+ UV_PROJECT_ENVIRONMENT: /home/dora/RoboDriver/robodriver/robots/robodriver-robot-so101-aio-dora/dora/camera.venv
CAPTURE_PATH: 6
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480
@@ -53,7 +53,7 @@ nodes:
- joint_gripper
- all_joint
env:
- VIRTUAL_ENV: arm.venv
+ UV_PROJECT_ENVIRONMENT: /home/dora/RoboDriver/robodriver/robots/robodriver-robot-so101-aio-dora/dora/arm.venv
GET_DEVICE_FROM: PORT
PORT: /dev/ttyACM0
ARM_NAME: SO101-leader
@@ -75,7 +75,7 @@ nodes:
- joint_wrist_roll
- joint_gripper
env:
- VIRTUAL_ENV: arm.venv
+ UV_PROJECT_ENVIRONMENT: /home/dora/RoboDriver/robodriver/robots/robodriver-robot-so101-aio-dora/dora/arm.venv
GET_DEVICE_FROM: PORT
PORT: /dev/ttyACM1
ARM_NAME: SO101-follower
diff --git a/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/node.py b/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/node.py
index 0d420c9..ae8f0ca 100644
--- a/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/node.py
+++ b/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/robot.py b/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/robot.py
index aa22049..edaae70 100644
--- a/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/robot.py
+++ b/robodriver/robots/robodriver-robot-so101-aio-dora/robodriver_robot_so101_aio_dora/robot.py
@@ -15,7 +15,7 @@
from .node import SO101AIODoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class SO101AIODoraRobot(Robot):
diff --git a/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/node.py b/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/node.py
index de4aeda..ebb16ba 100644
--- a/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/node.py
+++ b/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/node.py
@@ -15,7 +15,7 @@
from typing import Dict, Any
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/robot.py b/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/robot.py
index b5bb95f..f0a938a 100644
--- a/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/robot.py
+++ b/robodriver/robots/robodriver-robot-so101-aio-ros2/robodriver_robot_so101_aio_ros2/robot.py
@@ -14,7 +14,7 @@
from .node import So101AioRos2Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class So101AioRos2Robot(Robot):
diff --git a/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/node.py b/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/node.py
index 0d28f6d..20373da 100644
--- a/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/node.py
+++ b/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/robot.py b/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/robot.py
index 85af665..d876037 100644
--- a/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/robot.py
+++ b/robodriver/robots/robodriver-robot-so101-follower-dora/robodriver_robot_so101_follower_dora/robot.py
@@ -15,7 +15,7 @@
from .node import SO101FollowerDoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class SO101FollowerDoraRobot(Robot):
diff --git a/robodriver/robots/utils.py b/robodriver/robots/utils.py
index 4694d0f..46b9581 100644
--- a/robodriver/robots/utils.py
+++ b/robodriver/robots/utils.py
@@ -8,7 +8,7 @@
from robodriver.robots.statuses import RobotStatus
from robodriver.utils.import_utils import make_device_from_device_class
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
def busy_wait(seconds):
diff --git a/robodriver/scripts/record.py b/robodriver/scripts/record.py
index 4c51243..e76dcc7 100644
--- a/robodriver/scripts/record.py
+++ b/robodriver/scripts/record.py
@@ -5,7 +5,7 @@
import logging_mp
from deepdiff import DeepDiff
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
from robodriver.core.recorder import RecordConfig
@@ -52,7 +52,7 @@ def __get_path_fields__(cls) -> list[str]:
@parser.wrap()
def record(cfg: ControlPipelineConfig):
- logging_mp.basic_config(level=logging_mp.INFO)
+ logging_mp.basicConfig(level=logging_mp.INFO)
git_branch_log()
# daemon = Daemon(fps=DEFAULT_FPS)
diff --git a/robodriver/scripts/replay.py b/robodriver/scripts/replay.py
index eebe140..cd0be62 100644
--- a/robodriver/scripts/replay.py
+++ b/robodriver/scripts/replay.py
@@ -11,8 +11,8 @@
from robodriver.robot.robots.utils import Robot, busy_wait
from robodriver.utils.utils import log_say
-logging_mp.basic_config(level=logging_mp.INFO)
-logger = logging_mp.get_logger(__name__)
+logging_mp.basicConfig(level=logging_mp.INFO)
+logger = logging_mp.getLogger(__name__)
@dataclass
diff --git a/robodriver/scripts/run.py b/robodriver/scripts/run.py
index 90ba74a..7c9a9d3 100644
--- a/robodriver/scripts/run.py
+++ b/robodriver/scripts/run.py
@@ -5,6 +5,7 @@
import cv2
import logging_mp
+logging_mp.basicConfig(level=logging_mp.INFO)
from lerobot.robots import RobotConfig
from lerobot.teleoperators import TeleoperatorConfig
@@ -24,8 +25,7 @@
# from lerobot.teleoperators import make_teleoperator_from_config
-logging_mp.basic_config(level=logging_mp.INFO)
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
@dataclass
@@ -109,12 +109,12 @@ async def async_main(cfg: ControlPipelineConfig):
img = cv2.cvtColor(observation[key], cv2.COLOR_RGB2BGR)
# name = key[len("observation.images."):]
tasks.append(coordinator.update_stream_async(key, img))
- # cv2.imshow(key, img)
+ cv2.imshow(key, img)
if observation_sim is not None:
observation_sim = cv2.cvtColor(observation_sim, cv2.COLOR_RGB2BGR)
tasks.append(coordinator.update_stream_async("image_sim", observation_sim))
- # cv2.imshow("image_sim", observation_sim)
+ cv2.imshow("image_sim", observation_sim)
if tasks:
try:
@@ -127,8 +127,8 @@ async def async_main(cfg: ControlPipelineConfig):
else:
logger.warning("observation is none")
-
- # cv2.waitKey(1)
+
+ cv2.waitKey(1)
await asyncio.sleep(0)
except KeyboardInterrupt:
logger.info("coordinator and daemon stop")
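The run.py hunks re-enable `cv2.imshow`/`cv2.waitKey(1)` inside the async main loop, alongside `await asyncio.sleep(0)` so the stream-upload tasks still get scheduled. The cooperative pattern in isolation — cv2 is stubbed with a plain callback so this sketch runs headless, and the names are illustrative:

```python
import asyncio

async def display_loop(frames, show):
    """Render each frame, then yield so other coroutines can run."""
    for frame in frames:
        show(frame)             # stands in for cv2.imshow(...) + cv2.waitKey(1)
        await asyncio.sleep(0)  # cooperative yield, as in run.py's loop

shown = []
asyncio.run(display_loop(["img_a", "img_b"], shown.append))
print(shown)  # → ['img_a', 'img_b']
```

Without the `sleep(0)` yield, a tight synchronous display loop would starve the SocketIO upload coroutines on the same event loop.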
diff --git a/robodriver/scripts/sim.py b/robodriver/scripts/sim.py
index cc21387..84f6f1d 100644
--- a/robodriver/scripts/sim.py
+++ b/robodriver/scripts/sim.py
@@ -13,8 +13,8 @@
from robodriver.utils.import_utils import register_third_party_devices
from robodriver.utils.utils import git_branch_log
-logging_mp.basic_config(level=logging_mp.INFO)
-logger = logging_mp.get_logger(__name__)
+logging_mp.basicConfig(level=logging_mp.INFO)
+logger = logging_mp.getLogger(__name__)
@dataclass
diff --git a/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/node.py b/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/node.py
index d082d18..8c50ad9 100644
--- a/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/node.py
+++ b/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/node.py
@@ -10,7 +10,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/sim.py b/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/sim.py
index aaca5d8..391cba9 100644
--- a/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/sim.py
+++ b/robodriver/simulations/robodriver-sim-genesis-franka-aio-dora/robodriver_sim_genesis_franka_aio_dora/sim.py
@@ -15,7 +15,7 @@
from .node import SimFrankaAIODoraRobotNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class SimFrankaAIODoraRobot(Robot):
diff --git a/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/node.py b/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/node.py
index 3e1fa1e..4156e8e 100644
--- a/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/node.py
+++ b/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/node.py
@@ -14,7 +14,7 @@
from geometry_msgs.msg import PoseStamped
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class AutoTaskTeleoperatorNode(ROS2Node):
diff --git a/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/teleoperator.py b/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/teleoperator.py
index 9ccd842..cd22a92 100644
--- a/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/teleoperator.py
+++ b/robodriver/teleoperators/robodriver-teleoperator-autotask/robodriver_teleoperator_autotask/teleoperator.py
@@ -14,7 +14,7 @@
from .node import AutoTaskTeleoperatorNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class AutoTaskTeleoperator(Teleoperator):
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/.gitignore b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/.gitignore
new file mode 100644
index 0000000..0bc98f9
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/.gitignore
@@ -0,0 +1,182 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# .python-version
+
+# pipenv
+#Pipfile.lock
+
+# UV
+uv.lock
+
+# poetry
+#poetry.lock
+
+# pdm
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+#.idea/
+
+# Abstra
+.abstra/
+
+# Visual Studio Code
+# .vscode/
+
+# Ruff stuff:
+.ruff_cache/
+
+# PyPI configuration file
+.pypirc
+
+# Cursor
+.cursorignore
+.cursorindexingignore
+
+**/out/
+**/Log/
+**/log/
+data/
+/dataset
+docker/*.tar*
+**/cache/
+
+**/poetry.lock
+
+**/result
+*.parquet
+
+*.STL
+.pids
+.vscode
+*.build
+
+# Local notes
+ISSUES_.md
+*.txt
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/README.md b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/README.md
new file mode 100644
index 0000000..ebe6c87
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/README.md
@@ -0,0 +1,150 @@
+# robodriver-teleoperator-pico-ultra4-dora
+
+## Project Overview
+
+This project integrates teleoperation with the Pico Ultra4 VR controller into the dora dataflow framework, supporting:
+- **Pico Ultra4** VR controller (via the XRoboToolkit SDK)
+- **Piper** robot arm (CAN-bus control with IK solving)
+- **RealSense D405** wrist camera
+- **Orbbec Gemini 335** top camera
+
+The core node `dora_node_piper.py` adapts `PiperTeleopController` to dora's event-driven model, running one IK + control iteration per 50 Hz tick.
+
+---
+
+## Quick Start
+
+### Prerequisites
+
+```bash
+# 1. Bring up the CAN bus
+sudo ip link set can0 up type can bitrate 1000000
+
+# 2. Confirm the XRoboToolkit PC Service is running (listening on 127.0.0.1:60061)
+ss -tlnp | grep 60061
+# You should see the RoboticsService process
+
+# 3. Connect the Pico headset to the same LAN and open the XRoboToolkit App
+```
+
+### Launch Teleoperation
+
+```bash
+cd RoboDriver/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora
+
+# Clean up stale processes (important: keeps camera devices from staying busy)
+pkill -f "dora run|dora_node_piper|dora_camera|dora_arm_piper" 2>/dev/null; sleep 1
+
+# Reset the Orbbec camera's USB (important: the Orbbec SDK's UVC lock is not released automatically after dora is force-killed)
+# Skipping this step causes uvc_open res-6 / Device or resource busy
+sudo usbreset 2bc5:0800; sleep 1
+
+# Activate the virtual environment and launch
+source .venv/bin/activate
+dora run dora/dataflow.yml
+```
+
+### Controls
+
+- Press and hold the **right grip button** to activate control (takes effect once `right_arm is activated` appears)
+- Move the controller to drive the Piper end-effector pose (IK solved in real time)
+- Release the grip button to pause control
+- The **right trigger** opens and closes the gripper
+- `Ctrl+C` to stop
+
+---
+
+## Dependency Installation (first-time setup)
+
+Download XRoboToolkit-PC-Service and XRoboToolkit-Teleop-Sample-Python from XR Robotics, and install the XRoboToolkit app on the Pico headset.
+```bash
+# Pico teleoperation dependencies (installed from local checkouts)
+pip install XRoboToolkit-Teleop-Sample-Python/dependencies/XRoboToolkit-PC-Service-Pybind/
+pip install -e XRoboToolkit-Teleop-Sample-Python/
+pip install placo meshcat
+
+# Verify
+python -c "import xrobotoolkit_sdk, xrobotoolkit_teleop, placo, meshcat; print('all ok')"
+```
+
+---
+
+### Step 4: Create a new unified virtual environment
+
+```bash
+# Create a Python 3.10 virtual environment
+uv venv .venv -p 3.10
+
+# Activate it
+source .venv/bin/activate
+```
+
+### Step 5: Install dependencies
+
+```bash
+# Install base dependencies
+uv pip install -e .
+
+# Install hardware dependencies (RealSense, Orbbec, Piper)
+uv pip install -e .[hardware]
+```
+
+This installs the following hardware drivers:
+- `pyrealsense2` (2.56.5.9235) - Intel RealSense D405 camera driver
+- `sb-pyorbbecsdk` (1.3.1) - Orbbec Gemini 335 camera driver
+- `piper-sdk` (0.6.1) - Piper robot arm SDK
+
+### Step 6: Verify the installation
+
+```bash
+# Verify the Python executable location
+python -c "import sys; print('Python:', sys.executable)"
+
+# Verify the hardware SDKs installed correctly
+python -c "import pyrealsense2; print('✅ RealSense SDK installed')"
+python -c "import piper_sdk; print('✅ Piper SDK installed')"
+python -c "import pyorbbecsdk; print('✅ Orbbec SDK installed')"
+```
+
+Expected output:
+```
+Python: /home/dora/RoboDriver/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/.venv/bin/python
+✅ RealSense SDK installed
+✅ Piper SDK installed
+✅ Orbbec SDK installed
+```
+
+
+## Launching Data Collection (single-arm teleoperation only)
+### 1. Activate the Piper arm
+```bash
+cd SDK/piper_sdk/piper_sdk
+bash can_config.sh
+```
+- On the PC, open the XRoboToolkit PC Service; on the Pico, launch the XRoboToolkit app (found in the library after installation) and connect it to the same network segment as the PC. Enable Head, Controller, Hand, Send, Switch W/A Button, Tracking, and Vision.
+### 2. Activate the virtual environment
+
+```bash
+cd /home/dora/RoboDriver/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora
+source .venv/bin/activate
+```
+
+### 3. Start the Dora service
+
+```bash
+dora up
+```
+
+### 4. Start the dataflow
+
+```bash
+dora start dora/dataflow.yml
+```
+
+## Related Documentation
+
+- [RoboDriver main README](../../README.md)
+- [Dora framework docs](https://dora-rs.ai/)
+- [RealSense SDK docs](https://github.com/IntelRealSense/librealsense)
+- [Orbbec SDK docs](https://github.com/orbbec/pyorbbecsdk)
+- [Piper SDK docs](https://github.com/agilexrobotics/piper_sdk)
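The README's control scheme (grip held = control active, trigger = gripper) can be summarized as a small mapping function. This is an illustrative sketch, not the SDK API; the open/close positions mirror the `gripper_open_pos`/`gripper_close_pos` defaults from the config module in this diff:

```python
def map_vr_inputs(grip_pressed: bool, trigger: float,
                  open_pos: float = 0.85, close_pos: float = 0.0):
    """Translate VR controller state into a gripper command.

    Returns None while grip is released (control paused); otherwise
    interpolates the gripper between open (trigger=0) and closed (trigger=1).
    """
    if not grip_pressed:
        return None
    gripper = open_pos + (close_pos - open_pos) * min(max(trigger, 0.0), 1.0)
    return {"gripper.pos": gripper}

print(map_vr_inputs(True, 0.0))   # → {'gripper.pos': 0.85}
print(map_vr_inputs(False, 1.0))  # → None
```

Returning `None` when grip is released matches the pause behavior described under "Controls": the arm simply holds its last commanded pose.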
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/dora/dataflow.yml b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/dora/dataflow.yml
new file mode 100644
index 0000000..8ded3f8
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/dora/dataflow.yml
@@ -0,0 +1,67 @@
+nodes:
+ # ========== Depth camera nodes ==========
+ # Top camera: Orbbec Gemini 335 (color only)
+ - id: camera_top
+ path: ../../../../components/cameras/dora-camera-orbbec-v1/dora_camera_orbbec_v1/main.py
+ inputs:
+ tick: dora/timer/millis/33
+ outputs:
+ - image
+ env:
+ VIRTUAL_ENV: ../.venv
+ V4L2_DEVICE: /dev/video16
+ IMAGE_WIDTH: 640
+ IMAGE_HEIGHT: 480
+
+ # Wrist camera: Intel RealSense D405 (color only)
+ - id: camera_wrist
+ path: ../../../../components/cameras/dora-camera-realsense/dora_camera_realsense/main.py
+ inputs:
+ tick: dora/timer/millis/33
+ outputs:
+ - image
+ env:
+ VIRTUAL_ENV: ../.venv
+ DEVICE_SERIAL: "130322271127" # RealSense D405 serial number
+ IMAGE_WIDTH: 640
+ IMAGE_HEIGHT: 480
+
+ # ========== Piper arm node ==========
+ # Note: the piper_arm node is commented out; pico_ultra4_dora controls the hardware directly
+ # - id: piper_arm
+ # path: ../../../../components/arms/dora-arm-piper/dora_arm_piper/main.py
+ # build: pip install -e ../../../../components/arms/dora-arm-piper
+ # inputs:
+ # tick: dora/timer/millis/20 # 50 Hz control rate
+ # outputs:
+ # - leader_jointstate
+ # - follower_jointstate
+ # - follower_endpose
+ # env:
+ # VIRTUAL_ENV: ../.venv
+ # CAN_BUS: can0
+ # TEACH_MODE: "True"
+
+ # ========== Pico Ultra4 teleoperator node (integrated version) ==========
+ # The CAN bus must be up first: sudo ip link set can0 up type can bitrate 1000000
+ - id: pico_ultra4_dora
+ path: ../robodriver_teleoperator_pico_ultra4_dora/dora_node_piper.py
+ inputs:
+ tick: dora/timer/millis/20 # 50 Hz control
+ camera_top_image: camera_top/image
+ camera_wrist_image: camera_wrist/image
+ outputs:
+ - follower_jointstate
+ env:
+ VIRTUAL_ENV: ../.venv
+ PYTHONPATH: /home/dora/RoboDriver
+ DISPLAY: ":0"
+ QT_QPA_PLATFORM: "xcb"
+ CAN_BUS: can0
+ SCALE_FACTOR: "1.5"
+ CONTROL_RATE_HZ: "50"
+ URDF_PATH: /home/dora/teleop_pico/XRoboToolkit-Teleop-Sample-Python/assets/piper/piper.urdf
+ RECORD_DIR: /home/dora/recordings/pico_piper
+ REPO_ID: pico_piper
+ TASK: "teleoperation"
+ USE_VIDEOS: "true"
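In the dataflow above, the cameras tick every 33 ms (roughly 30 Hz) while the control node ticks every 20 ms (50 Hz, matching `CONTROL_RATE_HZ`). A trivial helper to sanity-check timer periods against target rates — an illustrative function, not part of dora:

```python
def period_ms_to_hz(period_ms: float) -> float:
    """Convert a dora timer period in milliseconds to a rate in Hz."""
    return 1000.0 / period_ms

print(period_ms_to_hz(20))            # → 50.0  (control tick)
print(round(period_ms_to_hz(33), 1))  # → 30.3  (camera tick)
```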
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/lerobot_teleoperator_pico_ultra4_dora/__init__.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/lerobot_teleoperator_pico_ultra4_dora/__init__.py
new file mode 100644
index 0000000..5904dae
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/lerobot_teleoperator_pico_ultra4_dora/__init__.py
@@ -0,0 +1,2 @@
+# LeRobot compatibility layer
+from robodriver_teleoperator_pico_ultra4_dora import *
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/pyproject.toml b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/pyproject.toml
new file mode 100644
index 0000000..4a69dae
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/pyproject.toml
@@ -0,0 +1,44 @@
+[build-system]
+requires = ["setuptools>=61.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "robodriver_teleoperator_pico_ultra4_dora"
+version = "0.1.0"
+readme = "README.md"
+requires-python = ">=3.10,<3.11"
+license = {text = "Apache License 2.0"}
+authors = [
+ {name = "RoboDriver Team"},
+]
+keywords = ["robotics", "lerobot", "pico", "vr", "teleoperation"]
+dependencies = [
+ "dora-rs-cli (>=0.4.1,<0.5.0)",
+ "dora-rs (>=0.4.1,<0.5.0)",
+ "logging_mp",
+ "numpy",
+ "pyarrow",
+ "scipy",
+ "opencv-python",
+]
+
+[project.optional-dependencies]
+hardware = [
+ "pyrealsense2",
+ "sb-pyorbbecsdk",
+ "piper_sdk>=0.0.8",
+]
+pico = [
+ "placo",
+ "meshcat",
+ # xrobotoolkit_sdk: install from the locally built .so
+ # pip install /home/dora/teleop_pico/XRoboToolkit-Teleop-Sample-Python/dependencies/XRoboToolkit-PC-Service-Pybind/
+ # xrobotoolkit_teleop: install in editable mode
+ # pip install -e /home/dora/teleop_pico/XRoboToolkit-Teleop-Sample-Python/
+]
+
+[tool.setuptools.packages.find]
+include = [
+ "robodriver_teleoperator_pico_ultra4_dora*",
+ "lerobot_teleoperator_pico_ultra4_dora*",
+]
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/__init__.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/__init__.py
new file mode 100644
index 0000000..999213d
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/__init__.py
@@ -0,0 +1,9 @@
+from .teleoperator import PicoUltra4DoraTeleoperator
+from .config import PicoUltra4DoraTeleoperatorConfig
+from .status import PicoUltra4DoraTeleoperatorStatus
+
+__all__ = [
+ "PicoUltra4DoraTeleoperator",
+ "PicoUltra4DoraTeleoperatorConfig",
+ "PicoUltra4DoraTeleoperatorStatus",
+]
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/config.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/config.py
new file mode 100644
index 0000000..6e9eb23
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/config.py
@@ -0,0 +1,42 @@
+from dataclasses import dataclass, field
+from typing import Dict, Any
+import numpy as np
+
+from lerobot.teleoperators.config import TeleoperatorConfig
+
+
+@TeleoperatorConfig.register_subclass("pico_ultra4_dora")
+@dataclass
+class PicoUltra4DoraTeleoperatorConfig(TeleoperatorConfig):
+ """
+ Pico Ultra4 VR teleoperator configuration
+
+ End-effector pose control from the VR controller, converted to joint angles via IK
+ """
+
+ # VR controller configuration
+ vr_controller: str = "right_controller" # use the right-hand controller
+ control_trigger: str = "right_grip" # right grip button activates control
+ gripper_trigger: str = "right_trigger" # right trigger drives the gripper
+
+ # Robot arm configuration
+ robot_urdf_path: str = "" # path to the Piper URDF file
+ end_effector_link: str = "link6" # end-effector link name
+
+ # IK solver configuration
+ scale_factor: float = 1.5 # scaling factor for VR controller motion
+ control_rate_hz: int = 50 # control rate
+
+ # Coordinate-frame conversion
+ # Rotation matrix from the VR headset frame to the world frame
+ R_headset_world: np.ndarray = field(default_factory=lambda: np.eye(3))
+
+ # Gripper configuration
+ gripper_open_pos: float = 0.85 # fully open position
+ gripper_close_pos: float = 0.0 # fully closed position
+
+ # Piper arm joint configuration
+ num_joints: int = 6
+ joint_names: list = field(default_factory=lambda: [
+ "joint1", "joint2", "joint3", "joint4", "joint5", "joint6"
+ ])
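One detail worth noting in the config above: the mutable defaults (the joint-name list and the numpy rotation matrix) go through `field(default_factory=...)`, since dataclasses reject shared mutable class-level defaults. A stripped-down sketch of the same pattern — the class name is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TeleopConfigSketch:
    scale_factor: float = 1.5
    # A bare `= ["joint1", ...]` default would raise ValueError here;
    # default_factory builds a fresh list per instance instead.
    joint_names: list = field(default_factory=lambda: [
        "joint1", "joint2", "joint3", "joint4", "joint5", "joint6"
    ])

a, b = TeleopConfigSketch(), TeleopConfigSketch()
a.joint_names.append("gripper")
print(len(b.joint_names))  # → 6 (instances do not share the list)
```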
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/controller.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/controller.py
new file mode 100644
index 0000000..e050613
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/controller.py
@@ -0,0 +1,198 @@
+"""
+Full controller for teleoperating the Piper arm with the Pico Ultra4
+
+Integrates VR data acquisition, IK solving, and Piper hardware control
+"""
+
+import time
+import logging_mp
+import numpy as np
+from typing import Optional
+
+from .interface import PiperInterface
+from .node import PicoUltra4DoraTeleoperatorNode
+from .config import PicoUltra4DoraTeleoperatorConfig
+
+
+logger = logging_mp.getLogger(__name__)
+
+
+class PicoPiperController:
+ """
+ Full controller for teleoperating the Piper arm with the Pico Ultra4
+
+ Combines:
+ 1. VR data acquisition (XRoboToolkit)
+ 2. IK solving (Placo)
+ 3. Piper hardware control (piper_sdk)
+ """
+
+ def __init__(
+ self,
+ config: PicoUltra4DoraTeleoperatorConfig,
+ can_port: str = "can0",
+ control_rate_hz: int = 50,
+ ):
+ """
+ Initialize the controller
+
+ Args:
+ config: teleoperator configuration
+ can_port: Piper CAN port
+ control_rate_hz: control rate in Hz
+ """
+ self.config = config
+ self.can_port = can_port
+ self.control_rate_hz = control_rate_hz
+ self.dt = 1.0 / control_rate_hz
+
+ # Build the node configuration
+ node_config = {
+ "vr_controller": config.vr_controller,
+ "control_trigger": config.control_trigger,
+ "gripper_trigger": config.gripper_trigger,
+ "end_effector_link": config.end_effector_link,
+ "scale_factor": config.scale_factor,
+ "joint_names": config.joint_names,
+ }
+
+ # Initialize the VR + IK node
+ self.teleop_node = PicoUltra4DoraTeleoperatorNode(
+ robot_urdf_path=config.robot_urdf_path,
+ config=node_config
+ )
+
+ # Initialize the Piper interface
+ self.piper = PiperInterface(
+ can_port=can_port,
+ dt=self.dt
+ )
+
+ self.running = False
+
+ logger.info("PicoPiperController initialized")
+
+ def start(self):
+ """Start the controller"""
+ logger.info("Starting PicoPiperController...")
+
+ # Start the VR node
+ self.teleop_node.start()
+
+ # Move the Piper to its home position
+ logger.info("Moving Piper to home position...")
+ self.piper.go_home()
+ time.sleep(2)
+
+ self.running = True
+ logger.info("PicoPiperController started successfully")
+
+ def run(self):
+ """Run the control loop"""
+ if not self.running:
+ logger.error("Controller not started. Call start() first.")
+ return
+
+ logger.info("Starting control loop...")
+ logger.info("Press and hold the grip button to activate control")
+ logger.info("Press Ctrl+C to stop")
+
+ try:
+ while self.running:
+ loop_start = time.perf_counter()
+
+ # Get the current joint positions
+ current_joints = self.piper.get_joint_positions()
+
+ # Update the IK solver with the current state
+ self.teleop_node.update_current_joints(current_joints)
+
+ # Get the VR control action
+ action = self.teleop_node.get_action()
+
+ if action is not None:
+ # Extract joint angles
+ joint_positions = np.array([
+ action.get(f"{joint}.pos", 0.0)
+ for joint in self.config.joint_names
+ ])
+
+ # Extract the gripper position
+ gripper_pos = action.get("gripper.pos", 0.0)
+
+ # Send to the Piper
+ self.piper.set_joint_positions(joint_positions)
+ self.piper.set_gripper_position(gripper_pos)
+
+ logger.debug(f"Sent command - Joints: {joint_positions}, Gripper: {gripper_pos:.2f}")
+ else:
+ logger.debug("Control not active (grip button not pressed)")
+
+ # Maintain the control rate
+ elapsed = time.perf_counter() - loop_start
+ sleep_time = max(0, self.dt - elapsed)
+ if sleep_time > 0:
+ time.sleep(sleep_time)
+
+ except KeyboardInterrupt:
+ logger.info("User interrupted, stopping...")
+ except Exception as e:
+ logger.error(f"Error in control loop: {e}")
+ raise
+ finally:
+ self.stop()
+
+ def stop(self):
+ """Stop the controller"""
+ logger.info("Stopping PicoPiperController...")
+
+ self.running = False
+
+ # Stop the VR node
+ self.teleop_node.stop()
+
+ # Return the Piper to its home position
+ logger.info("Moving Piper to home position...")
+ self.piper.go_home()
+ time.sleep(1)
+
+ # Disable the Piper
+ self.piper.disable_robot()
+
+ logger.info("PicoPiperController stopped")
+
+
+def main():
+ """Example main function"""
+ import os
+ from pathlib import Path
+
+ # Configuration
+ urdf_path = os.path.join(
+ Path(__file__).parent.parent.parent.parent.parent,
+ "assets", "piper", "piper.urdf"
+ )
+
+ config = PicoUltra4DoraTeleoperatorConfig(
+ robot_urdf_path=urdf_path,
+ vr_controller="right_controller",
+ control_trigger="right_grip",
+ gripper_trigger="right_trigger",
+ scale_factor=1.5,
+ control_rate_hz=50,
+ )
+
+ # Create the controller
+ controller = PicoPiperController(
+ config=config,
+ can_port="can0",
+ control_rate_hz=50,
+ )
+
+ # Start and run
+ controller.start()
+ controller.run()
+
+
+if __name__ == "__main__":
+ main()
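The `run()` loop above keeps its 50 Hz rate by measuring the work time each iteration and sleeping only for the remainder of the period. The timing skeleton in isolation — a sketch with illustrative names:

```python
import time

def run_at_rate(step, rate_hz: int, n_iters: int) -> None:
    """Call `step` n_iters times, pacing iterations to rate_hz."""
    dt = 1.0 / rate_hz
    for _ in range(n_iters):
        loop_start = time.perf_counter()
        step()
        elapsed = time.perf_counter() - loop_start
        time.sleep(max(0.0, dt - elapsed))  # sleep only the leftover slice

ticks = []
run_at_rate(lambda: ticks.append(1), rate_hz=50, n_iters=5)
print(len(ticks))  # → 5
```

If `step()` overruns the period, `max(0.0, ...)` skips the sleep entirely so the loop degrades gracefully instead of drifting negative.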
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/dora_node_piper.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/dora_node_piper.py
new file mode 100644
index 0000000..ced75c7
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/dora_node_piper.py
@@ -0,0 +1,276 @@
+"""
+Dora node for teleoperating the Piper with the Pico Ultra4
+
+Adapts PiperTeleopController into a Dora node that runs one IK + control
+iteration per tick event, and saves data in the DoRobotDataset
+(LeRobot-compatible) format while grip is active.
+"""
+import os
+os.environ.setdefault("QT_QPA_PLATFORM", "xcb")
+import time
+from pathlib import Path
+import numpy as np
+import pyarrow as pa
+import cv2
+from dora import Node
+
+from xrobotoolkit_teleop.hardware.piper_teleop_controller import (
+ PiperTeleopController,
+ DEFAULT_PIPER_MANIPULATOR_CONFIG,
+)
+from xrobotoolkit_teleop.utils.path_utils import ASSET_PATH
+
+from robodriver.dataset.dorobot_dataset import DoRobotDataset, DoRobotDatasetMetadata
+
+URDF_PATH = os.getenv("URDF_PATH", os.path.join(ASSET_PATH, "piper/piper.urdf"))
+CAN_PORT = os.getenv("CAN_BUS", "can0")
+SCALE_FACTOR = float(os.getenv("SCALE_FACTOR", "1.5"))
+CONTROL_RATE_HZ = int(os.getenv("CONTROL_RATE_HZ", "50"))
+RECORD_DIR = os.getenv("RECORD_DIR", os.path.expanduser("~/recordings/pico_piper"))
+RECORD_FPS = int(os.getenv("RECORD_FPS", "30"))
+REPO_ID = os.getenv("REPO_ID", "pico_piper")
+TASK = os.getenv("TASK", "teleoperation")
+USE_VIDEOS = os.getenv("USE_VIDEOS", "false").lower() == "true"
+
+# Piper 7-dimensional state: joint1-6 + gripper
+STATE_DIM = 7
+IMAGE_HEIGHT = 480
+IMAGE_WIDTH = 640
+
+FEATURES = {
+ "observation.state": {
+ "dtype": "float32",
+ "shape": (STATE_DIM,),
+ "names": ["joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "gripper"],
+ },
+ "observation.images.camera_top": {
+ "dtype": "video" if USE_VIDEOS else "image",
+ "shape": (IMAGE_HEIGHT, IMAGE_WIDTH, 3),
+ "names": ["height", "width", "channel"],
+ },
+ "observation.images.camera_wrist": {
+ "dtype": "video" if USE_VIDEOS else "image",
+ "shape": (IMAGE_HEIGHT, IMAGE_WIDTH, 3),
+ "names": ["height", "width", "channel"],
+ },
+ "action": {
+ "dtype": "float32",
+ "shape": (STATE_DIM,),
+ "names": ["joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "gripper"],
+ },
+}
+
+
+def make_dataset() -> DoRobotDataset:
+ """Create or resume a DoRobotDataset, bypassing the robot.microphones dependency."""
+ obj = DoRobotDataset.__new__(DoRobotDataset)
+ meta = DoRobotDatasetMetadata.__new__(DoRobotDatasetMetadata)
+ meta.repo_id = REPO_ID
+ meta.root = Path(RECORD_DIR)
+
+ info_path = meta.root / "meta" / "info.json"
+ if info_path.exists():
+ # Resume an existing dataset
+ meta.load_metadata()
+ else:
+ # Brand-new dataset
+ meta = DoRobotDatasetMetadata.create(
+ repo_id=REPO_ID,
+ fps=RECORD_FPS,
+ root=RECORD_DIR,
+ robot_type="piper",
+ features=FEATURES,
+ use_videos=USE_VIDEOS,
+ use_audios=False,
+ )
+
+ obj.meta = meta
+ obj.repo_id = obj.meta.repo_id
+ obj.root = obj.meta.root
+ obj.revision = None
+ obj.tolerance_s = 1e-4
+ obj.image_writer = None
+ obj.audio_writer = None
+ obj.episode_buffer = obj.create_episode_buffer()
+ obj.episodes = None
+ obj.hf_dataset = obj.create_hf_dataset()
+ obj.image_transforms = None
+ obj.delta_timestamps = None
+ obj.delta_indices = None
+ obj.episode_data_index = None
+ obj.video_backend = "pyav"
+ return obj
+
+
+def decode_image(data: pa.Array, metadata: dict) -> np.ndarray | None:
+ encoding = metadata.get("encoding", "bgr8")
+ width = metadata.get("width", IMAGE_WIDTH)
+ height = metadata.get("height", IMAGE_HEIGHT)
+ buf = data.to_numpy(zero_copy_only=False).astype(np.uint8)
+ if encoding == "bgr8":
+ img = buf.reshape((height, width, 3))
+ elif encoding in ("jpeg", "jpg", "png", "bmp", "webp"):
+ img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
+ if img is None:
+ return None
+ else:
+ return None
+ # DoRobotDataset expects RGB
+ return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+
+
+def main():
+ import threading
+
+ node = Node()
+
+ dataset_ref = [None]
+ controller_ref = [None]
+ robot_ready = threading.Event()
+
+ def setup():
+ dataset_ref[0] = make_dataset()
+ ctrl = PiperTeleopController(
+ robot_urdf_path=URDF_PATH,
+ manipulator_config=DEFAULT_PIPER_MANIPULATOR_CONFIG,
+ can_port=CAN_PORT,
+ scale_factor=SCALE_FACTOR,
+ control_rate_hz=CONTROL_RATE_HZ,
+ enable_log_data=False,
+ enable_camera=False,
+ visualize_placo=False,
+ )
+ ctrl._robot_setup()
+ controller_ref[0] = ctrl
+ robot_ready.set()
+
+ threading.Thread(target=setup, daemon=True).start()
+
+ latest_images: dict[str, np.ndarray] = {}
+ was_active = False
+
+ for event in node:
+ if event["type"] != "INPUT":
+ continue
+
+ eid = event["id"]
+
+ # Cache camera images
+ if eid in ("camera_top_image", "camera_wrist_image"):
+ meta = event["metadata"]
+ val = event["value"]
+ cam_key = "camera_top" if eid == "camera_top_image" else "camera_wrist"
+ if cam_key == "camera_top":
+ try:
+ buf_dbg = val.to_numpy(zero_copy_only=False).astype(np.uint8)
+ print(f"[DBG] {cam_key}: len={len(buf_dbg)}, first4={buf_dbg[:4].tolist()}")
+ except Exception as e:
+ print(f"[DBG] {cam_key} to_numpy error: {e}")
+ img = decode_image(val, meta)
+ if img is not None:
+ latest_images[cam_key] = img
+ else:
+ print(f"[DBG] {cam_key} decode failed: encoding={meta.get('encoding')}, len={len(val)}")
+ continue
+
+ if eid != "tick":
+ continue
+
+ # While the robot is not ready, only show images and skip control
+ if not robot_ready.is_set():
+ if latest_images:
+ frames_to_show = []
+ for cam_key in ("camera_top", "camera_wrist"):
+ img = latest_images.get(cam_key)
+ if img is not None:
+ bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
+ cv2.putText(bgr, f"{cam_key.upper()} | INIT", (10, 30),
+ cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 165, 255), 2)
+ frames_to_show.append(bgr)
+ if frames_to_show:
+ try:
+ combined = np.hstack(frames_to_show) if len(frames_to_show) > 1 else frames_to_show[0]
+ cv2.imshow("Pico Teleop", combined)
+ cv2.waitKey(1)
+ except Exception as e:
+ print(f"[imshow error] {e}")
+ continue
+
+ # IK + control
+ controller = controller_ref[0]
+ dataset = dataset_ref[0]
+ controller._update_robot_state()
+ controller._update_gripper_target()
+ controller._update_ik()
+ controller._send_command()
+
+ q = controller.piper.get_joint_positions()
+ gripper = controller.piper.get_gripper_position()
+ state = np.append(q, gripper).astype(np.float32)
+
+ is_active = controller.active.get("right_arm", False)
+
+    # Record while the grip is active (use whichever camera streams exist; missing ones fall back to the last frame or zeros)
+ if is_active and latest_images:
+ top = latest_images.get(
+ "camera_top", np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 3), dtype=np.uint8)
+ )
+ wrist = latest_images.get(
+ "camera_wrist", np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 3), dtype=np.uint8)
+ )
+ frame = {
+ "observation.state": state,
+ "observation.images.camera_top": top,
+ "observation.images.camera_wrist": wrist,
+ "action": state,
+ "task": TASK,
+ }
+ dataset.add_frame(frame)
+
+    # Save the episode when the grip is released
+ if was_active and not is_active:
+ n_frames = dataset.episode_buffer["size"]
+ if n_frames > 0:
+ ep_idx = dataset.save_episode()
+ print(f"[Recorder] Episode {ep_idx} saved ({n_frames} frames)")
+ dataset.episode_buffer = dataset.create_episode_buffer()
+ else:
+ print("[Recorder] No frames recorded, discarding.")
+
+ was_active = is_active
+
+    # Live camera display
+ if latest_images:
+ frames_to_show = []
+ for cam_key in ("camera_top", "camera_wrist"):
+ img = latest_images.get(cam_key)
+ if img is not None:
+ bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
+ label = f"{'TOP' if cam_key == 'camera_top' else 'WRIST'} | {'REC' if is_active else 'IDLE'}"
+ cv2.putText(bgr, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
+ (0, 0, 255) if is_active else (0, 255, 0), 2)
+ frames_to_show.append(bgr)
+ if frames_to_show:
+ try:
+ combined = np.hstack(frames_to_show) if len(frames_to_show) > 1 else frames_to_show[0]
+ cv2.imshow("Pico Teleop", combined)
+ key = cv2.waitKey(1)
+ if key == ord("q"):
+ break
+ except Exception as e:
+ print(f"[imshow error] {e}")
+
+    # Publish the joint state
+ metadata = event["metadata"]
+ metadata["timestamp"] = time.time_ns()
+ node.send_output(
+ "follower_jointstate",
+ pa.array(state),
+ metadata,
+ )
+
+ cv2.destroyAllWindows()
+
+
+if __name__ == "__main__":
+ main()
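The grip-gated recording above buffers frames while the grip is held and saves the episode on the falling edge of `is_active`. That edge logic can be sketched in isolation; `EpisodeGate` is a hypothetical helper for illustration, not a class in this repository:

```python
# Minimal sketch of the grip-edge recording logic in the main loop above:
# frames accumulate while the grip is active, and the buffered episode is
# flushed exactly once, on the transition from active to inactive.
class EpisodeGate:
    def __init__(self):
        self.was_active = False
        self.buffer = []

    def step(self, is_active, frame):
        saved = None
        if is_active:
            self.buffer.append(frame)    # grip held: record this frame
        if self.was_active and not is_active:
            saved = list(self.buffer)    # grip released: flush the episode
            self.buffer.clear()
        self.was_active = is_active
        return saved
```

In the real loop, the flush step corresponds to `dataset.save_episode()` followed by resetting the episode buffer.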
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/__init__.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/__init__.py
new file mode 100644
index 0000000..93e6fbd
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/__init__.py
@@ -0,0 +1,3 @@
+from .piper import PiperInterface
+
+__all__ = ["PiperInterface"]
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/piper.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/piper.py
new file mode 100644
index 0000000..6c29998
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/interface/piper.py
@@ -0,0 +1,306 @@
+"""
+AgileX Piper robot arm hardware interface
+
+Provides the low-level communication interface to the Piper arm, including:
+- joint position control
+- joint state reading
+- gripper control
+- safety protection
+
+Based on the Piper SDK V2 API
+"""
+
+import time
+import numpy as np
+from typing import List, Optional, Union
+from piper_sdk import C_PiperInterface_V2
+
+
+class PiperInterface:
+    """
+    AgileX Piper robot arm hardware interface class
+
+    Communicates with the Piper arm over the CAN bus
+    """
+
+ def __init__(
+ self,
+ can_port: str = "can0",
+ dt: float = 0.02,
+ filter_alpha: float = 0.6,
+ ):
+        """
+        Initialize the Piper interface
+
+        Args:
+            can_port: CAN port name (e.g. "can0", "can1")
+            dt: control period in seconds (default 0.02, i.e. 50 Hz)
+            filter_alpha: first-order low-pass filter coefficient (0-1);
+                smaller is smoother but adds more latency. Default 0.6
+        """
+        self.can_port = can_port
+        self.dt = dt
+        self.num_joints = 6  # the Piper arm has 6 joints
+
+        # Angle conversion factor for the SDK's 0.001-degree units:
+        # factor = 1000 * 180 / π ≈ 57295.7795
+        self.angle_factor = 57295.7795
+
+        # Gripper conversion factor: the SDK works in μm
+        self.gripper_factor = 1000 * 1000  # m to μm
+
+        # First-order low-pass filter
+        self.filter_alpha = filter_alpha
+        self.filtered_positions = None  # filtered joint positions
+        self.filtered_gripper = None  # filtered gripper position
+
+        # Initialize the Piper SDK
+ print(f"Initializing Piper on CAN port: {can_port}")
+ self.piper = C_PiperInterface_V2(can_port)
+
+        # Connect the port
+ self.piper.ConnectPort()
+ print("Piper CAN port connected")
+
+        # Enable the arm
+ self._enable_and_wait()
+
+ def _enable_and_wait(self, timeout: float = 5.0):
+        """
+        Enable the arm and wait until it is ready
+
+        Args:
+            timeout: timeout in seconds
+        """
+ print("Enabling Piper robot...")
+ start_time = time.time()
+
+ while not self.piper.EnablePiper():
+ if time.time() - start_time > timeout:
+ raise TimeoutError("Failed to enable Piper robot within timeout")
+ time.sleep(0.01)
+
+ print("Piper robot enabled successfully")
+
+ def go_home(self) -> bool:
+        """
+        Move to the predefined home position (all joints at zero)
+
+        Returns:
+            bool: True on success
+        """
+ print("Moving Piper to home position...")
+
+        # Home position: all joints at zero
+        home_position = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
+
+        # Set the motion mode: position control
+        # ModeCtrl(enable, mode, speed_factor, unused)
+        # mode: 0x01 = position control mode
+        self.piper.ModeCtrl(0x01, 0x01, 30, 0x00)
+
+        # Send the joint position command
+        self.set_joint_positions(home_position)
+
+        # Zero the gripper
+        self.set_gripper_position(0.0)
+
+        # Wait for the arm to arrive
+ time.sleep(2.0)
+
+ return True
+
+ def get_joint_positions(self) -> np.ndarray:
+        """
+        Get the current joint positions
+
+        Returns:
+            np.ndarray: joint positions in radians, shape: (6,)
+        """
+        # Fetch the joint message
+        joint_msgs = self.piper.GetArmJointMsgs()
+
+        # Extract the joint positions and convert to radians
+        # (divide the SDK units by angle_factor)
+ positions = np.array([
+ joint_msgs.joint_state.joint_1 / self.angle_factor,
+ joint_msgs.joint_state.joint_2 / self.angle_factor,
+ joint_msgs.joint_state.joint_3 / self.angle_factor,
+ joint_msgs.joint_state.joint_4 / self.angle_factor,
+ joint_msgs.joint_state.joint_5 / self.angle_factor,
+ joint_msgs.joint_state.joint_6 / self.angle_factor,
+ ])
+
+ return positions
+
+ def get_joint_velocities(self) -> np.ndarray:
+        """
+        Get the current joint velocities
+
+        Returns:
+            np.ndarray: joint velocities in rad/s, shape: (6,)
+
+        Note:
+            The Piper SDK does not expose velocity data directly, so this
+            returns zeros. If velocities are needed, differentiate the
+            positions numerically.
+        """
+        # GetArmJointMsgs() carries no velocity information;
+        # return a zero array
+ return np.zeros(self.num_joints)
+
+ def set_joint_positions(
+ self,
+ positions: Union[List[float], np.ndarray],
+ speed_factor: int = 100,
+ **kwargs
+ ) -> bool:
+ """
+ 设置目标关节位置
+
+ Args:
+ positions: 目标关节位置(弧度),shape: (6,)
+ speed_factor: 速度因子 (0-100),默认 100
+ **kwargs: 额外参数
+
+ Returns:
+ bool: 成功返回 True
+ """
+ if len(positions) != self.num_joints:
+ print(f"Error: Expected {self.num_joints} joints, got {len(positions)}")
+ return False
+
+        # Joint limit check (radians)
+ joint_limits = [
+ (-3.14, 3.14), # joint1
+ (-2.0, 2.0), # joint2
+ (-2.5, 2.5), # joint3
+ (-3.14, 3.14), # joint4
+ (-2.0, 2.0), # joint5
+ (-3.14, 3.14), # joint6
+ ]
+
+        # Clip the joint positions into the safe range
+ clipped_positions = []
+ for i, (pos, (min_val, max_val)) in enumerate(zip(positions, joint_limits)):
+ if pos < min_val or pos > max_val:
+ clipped_pos = np.clip(pos, min_val, max_val)
+ print(f"Warning: Joint {i+1} position {pos:.3f} clipped to {clipped_pos:.3f}")
+ clipped_positions.append(clipped_pos)
+ else:
+ clipped_positions.append(pos)
+
+        # First-order low-pass filter: y = alpha * x + (1 - alpha) * y_prev
+ clipped_positions = np.array(clipped_positions)
+ if self.filtered_positions is None:
+ self.filtered_positions = clipped_positions.copy()
+ else:
+ self.filtered_positions = (
+ self.filter_alpha * clipped_positions
+ + (1.0 - self.filter_alpha) * self.filtered_positions
+ )
+ clipped_positions = self.filtered_positions
+
+        # Convert to SDK units
+ joint_0 = round(clipped_positions[0] * self.angle_factor)
+ joint_1 = round(clipped_positions[1] * self.angle_factor)
+ joint_2 = round(clipped_positions[2] * self.angle_factor)
+ joint_3 = round(clipped_positions[3] * self.angle_factor)
+ joint_4 = round(clipped_positions[4] * self.angle_factor)
+ joint_5 = round(clipped_positions[5] * self.angle_factor)
+
+        # Set the motion control mode
+        # MotionCtrl_2(enable, mode, speed_factor, unused)
+ self.piper.MotionCtrl_2(0x01, 0x01, speed_factor, 0x00)
+
+        # Send the joint control command
+ self.piper.JointCtrl(joint_0, joint_1, joint_2, joint_3, joint_4, joint_5)
+
+ return True
+
+ def set_gripper_position(self, position: float, speed: int = 1000) -> bool:
+        """
+        Set the gripper position
+
+        Args:
+            position: gripper opening (0.0 = fully closed, 1.0 = fully open)
+            speed: gripper speed (default 1000)
+
+        Returns:
+            bool: True on success
+        """
+        # Clamp to the 0-1 range
+        position = np.clip(position, 0.0, 1.0)
+
+        # First-order low-pass filter
+ if self.filtered_gripper is None:
+ self.filtered_gripper = position
+ else:
+ self.filtered_gripper = (
+ self.filter_alpha * position
+ + (1.0 - self.filter_alpha) * self.filtered_gripper
+ )
+ position = self.filtered_gripper
+
+        # Map 0-1 to the gripper's physical range (in μm):
+        # the Piper gripper opens about 0.08 m = 80000 μm
+ gripper_pos_um = round(position * 80000)
+
+        # Send the gripper control command
+        # GripperCtrl(position_um, speed, enable, force)
+ self.piper.GripperCtrl(abs(gripper_pos_um), speed, 0x01, 0)
+
+ return True
+
+ def get_gripper_position(self) -> float:
+        """
+        Get the current gripper position
+
+        Returns:
+            float: gripper opening (0.0-1.0)
+        """
+        gripper_msgs = self.piper.GetArmGripperMsgs()
+
+        # grippers_angle is in units of 0.001 degree;
+        # normalize to 0-1 (assuming a maximum opening of roughly 80 degrees)
+        angle_degrees = gripper_msgs.gripper_state.grippers_angle / 1000.0
+        gripper_pos = abs(angle_degrees) / 80.0  # normalize to 0-1
+
+ return np.clip(gripper_pos, 0.0, 1.0)
+
+ def enable_robot(self) -> bool:
+        """Enable the arm"""
+ print("Enabling Piper robot...")
+ success = self.piper.EnablePiper()
+ if success:
+ print("Piper robot enabled")
+ return success
+
+ def disable_robot(self) -> bool:
+        """Disable the arm"""
+        print("Disabling Piper robot...")
+        # The Piper SDK has no direct disable call;
+        # setting the mode to 0 stops control instead
+ self.piper.ModeCtrl(0x00, 0x00, 0, 0x00)
+ print("Piper robot disabled")
+ return True
+
+ def emergency_stop(self) -> bool:
+        """Emergency stop"""
+        print("Piper emergency stop triggered!")
+        # Stop all motion immediately
+ self.piper.ModeCtrl(0x00, 0x00, 0, 0x00)
+ return True
+
+ def get_arm_status(self):
+        """
+        Get the arm status information
+
+        Returns:
+            the arm status object
+        """
+ return self.piper.GetArmStatus()
+
+ def __del__(self):
+        """Destructor: disconnect from the arm"""
+ if hasattr(self, 'piper'):
+ print("Disconnecting Piper...")
+ self.disable_robot()
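`PiperInterface` above converts between radians and the SDK's 0.001-degree joint units via `angle_factor = 1000 * 180 / π`, and smooths commands with a first-order low-pass filter. The arithmetic can be checked in isolation; `rad_to_sdk`, `sdk_to_rad`, and `lowpass` are hypothetical helper names for this sketch:

```python
import math

# The SDK expects joint angles in units of 0.001 degree, so
# sdk_units = radians * (1000 * 180 / pi) ~= radians * 57295.7795
ANGLE_FACTOR = 1000 * 180 / math.pi

def rad_to_sdk(rad: float) -> int:
    # radians -> integer SDK units (0.001 degree), as sent to JointCtrl
    return round(rad * ANGLE_FACTOR)

def sdk_to_rad(units: int) -> float:
    # integer SDK units -> radians, as read from GetArmJointMsgs
    return units / ANGLE_FACTOR

def lowpass(x, y_prev, alpha=0.6):
    # first-order filter: y = alpha * x + (1 - alpha) * y_prev;
    # the first sample passes through unfiltered
    return x if y_prev is None else alpha * x + (1 - alpha) * y_prev
```

For example, `rad_to_sdk(math.pi / 2)` yields 90000, i.e. 90.000 degrees in 0.001-degree units.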
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/node.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/node.py
new file mode 100644
index 0000000..73ed6c9
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/node.py
@@ -0,0 +1,238 @@
+import logging_mp
+import threading
+import time
+import numpy as np
+import pyarrow as pa
+from dora import Node
+from typing import Any, Dict, Optional
+
+# Import the XRoboToolkit SDK
+try:
+ import xrobotoolkit_sdk as xrt
+ XRT_AVAILABLE = True
+except ImportError:
+ XRT_AVAILABLE = False
+ logging_mp.warning("xrobotoolkit_sdk not available")
+
+# Import the Placo IK solver
+try:
+ import placo
+ PLACO_AVAILABLE = True
+except ImportError:
+ PLACO_AVAILABLE = False
+ logging_mp.warning("placo not available")
+
+
+logger = logging_mp.getLogger(__name__)
+CONNECT_TIMEOUT_FRAME = 10
+
+
+class TeleoperatorNode:
+ pass
+
+
+class DoraTeleoperatorNode(TeleoperatorNode):
+ pass
+
+
+class PicoUltra4DoraTeleoperatorNode(DoraTeleoperatorNode):
+    """
+    Pico Ultra4 VR teleoperator Dora node
+
+    Responsibilities:
+    1. Read the VR controller pose from the XRoboToolkit SDK
+    2. Solve IK with Placo
+    3. Send joint-angle commands over Dora
+    """
+
+ def __init__(self, robot_urdf_path: str, config: Dict[str, Any]):
+ self.node = Node("pico_ultra4_dora")
+ self.config = config
+
+        # VR data
+ self.vr_pose: Optional[np.ndarray] = None # [x, y, z, qx, qy, qz, qw]
+ self.gripper_value: float = 0.0
+ self.control_active: bool = False
+
+        # IK solver
+ self.robot_urdf_path = robot_urdf_path
+ self.placo_robot = None
+ self.placo_solver = None
+
+        # Joint data
+ self.current_joint_positions: Optional[np.ndarray] = None
+ self.target_joint_positions: Optional[np.ndarray] = None
+
+ self.lock = threading.Lock()
+ self.running = False
+
+        # Initialize XRoboToolkit
+ if XRT_AVAILABLE:
+ xrt.init()
+ logger.info("XRoboToolkit SDK initialized")
+ else:
+ logger.error("XRoboToolkit SDK not available")
+
+        # Initialize the Placo IK solver
+ if PLACO_AVAILABLE and robot_urdf_path:
+ self._setup_placo()
+ else:
+ logger.error("Placo not available or URDF path not provided")
+
+        # VR data reading thread (started in start())
+ self.vr_thread = threading.Thread(target=self._vr_update_loop, daemon=True)
+
+ def _setup_placo(self):
+        """Set up the Placo IK solver"""
+ try:
+            # Load the robot model
+ self.placo_robot = placo.RobotWrapper(self.robot_urdf_path)
+
+            # Create the solver
+ self.placo_solver = placo.KinematicsSolver(self.placo_robot)
+
+            # Resolve the joint indices
+ joint_names = self.config.get("joint_names", [f"joint{i}" for i in range(1, 7)])
+ self.placo_arm_joint_slice = slice(
+ self.placo_robot.get_joint_offset(joint_names[0]),
+ self.placo_robot.get_joint_offset(joint_names[-1]) + 1,
+ )
+
+            # Initialize the joint positions to zero
+ self.current_joint_positions = np.zeros(len(joint_names))
+ self.placo_robot.state.q[self.placo_arm_joint_slice] = self.current_joint_positions
+
+ logger.info("Placo IK solver initialized successfully")
+ except Exception as e:
+ logger.error(f"Failed to initialize Placo: {e}")
+ self.placo_robot = None
+ self.placo_solver = None
+
+ def _vr_update_loop(self):
+        """VR data update loop"""
+ if not XRT_AVAILABLE:
+ return
+
+ while self.running:
+ try:
+                # Read the VR controller pose
+                controller_name = self.config.get("vr_controller", "right_controller")
+                pose = xrt.get_right_controller_pose() if "right" in controller_name else xrt.get_left_controller_pose()
+
+                # Read the grip button state (activates control)
+                grip_trigger = self.config.get("control_trigger", "right_grip")
+                grip_value = xrt.get_right_grip() if "right" in grip_trigger else xrt.get_left_grip()
+
+                # Read the trigger state (drives the gripper)
+                gripper_trigger = self.config.get("gripper_trigger", "right_trigger")
+                trigger_value = xrt.get_right_trigger() if "right" in gripper_trigger else xrt.get_left_trigger()
+
+                with self.lock:
+                    self.vr_pose = pose
+                    self.control_active = grip_value > 0.5  # control is active while the grip is held
+ self.gripper_value = trigger_value
+
+ except Exception as e:
+ logger.error(f"Error reading VR data: {e}")
+
+            time.sleep(0.01)  # 100 Hz update rate
+
+ def _solve_ik(self, target_pose: np.ndarray) -> Optional[np.ndarray]:
+        """
+        Solve IK with Placo
+
+        Args:
+            target_pose: target end-effector pose [x, y, z, qx, qy, qz, qw]
+
+        Returns:
+            array of joint angles, or None if the solve fails
+        """
+ if not PLACO_AVAILABLE or self.placo_solver is None:
+ return None
+
+ try:
+            # Update the current joint state
+ if self.current_joint_positions is not None:
+ self.placo_robot.state.q[self.placo_arm_joint_slice] = self.current_joint_positions
+
+            # Set the end-effector target
+            end_effector_link = self.config.get("end_effector_link", "link6")
+
+            # Extract the position and quaternion
+            position = target_pose[:3]
+            quaternion = target_pose[3:]  # [qx, qy, qz, qw]
+
+            # Convert to a rotation matrix
+            from scipy.spatial.transform import Rotation as R
+            rotation_matrix = R.from_quat(quaternion).as_matrix()
+
+            # Add the frame task
+ task = self.placo_solver.add_frame_task(end_effector_link, position, rotation_matrix)
+ task.configure(end_effector_link, "soft", 1.0, 1.0)
+
+            # Solve
+ self.placo_solver.solve(True)
+
+            # Read back the solution
+ q_des = self.placo_robot.state.q[self.placo_arm_joint_slice].copy()
+
+ return q_des
+
+ except Exception as e:
+ logger.error(f"IK solve failed: {e}")
+ return None
+
+ def start(self):
+        """Start the node"""
+ if self.running:
+ logger.warning(f"{self} is already running.")
+ return
+
+ self.running = True
+ self.vr_thread.start()
+
+ logger.info(f"{self} started. Waiting for VR data...")
+
+ def get_action(self) -> Optional[Dict[str, float]]:
+        """
+        Get the current action (joint angles)
+
+        Returns:
+            a joint-angle dict, or None if control is inactive or the solve fails
+        """
+ with self.lock:
+ if not self.control_active or self.vr_pose is None:
+ return None
+
+            # Apply the scale factor
+            scale_factor = self.config.get("scale_factor", 1.5)
+            scaled_pose = self.vr_pose.copy()
+            scaled_pose[:3] *= scale_factor  # scale the position
+
+            # Solve IK
+ joint_positions = self._solve_ik(scaled_pose)
+
+ if joint_positions is None:
+ return None
+
+        # Build the action dict
+ joint_names = self.config.get("joint_names", [f"joint{i}" for i in range(1, 7)])
+ action = {f"{name}.pos": pos for name, pos in zip(joint_names, joint_positions)}
+
+        # Append the gripper
+ action["gripper.pos"] = self.gripper_value
+
+ return action
+
+ def update_current_joints(self, joint_positions: np.ndarray):
+        """Update the current joint positions (seed state for the IK solve)"""
+ with self.lock:
+ self.current_joint_positions = joint_positions.copy()
+
+ def stop(self):
+        """Stop the node"""
+ self.running = False
+ if XRT_AVAILABLE:
+ xrt.close()
+ logger.info(f"{self} stopped.")
+
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/status.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/status.py
new file mode 100644
index 0000000..2c7e4f3
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/status.py
@@ -0,0 +1,75 @@
+from dataclasses import dataclass, field, asdict
+from typing import List, Optional
+import json
+import abc
+import draccus
+
+
+@dataclass
+class ArmInfo:
+ name: str = ""
+ type: str = ""
+ start_pose: List[float] = field(default_factory=list)
+ joint_p_limit: List[float] = field(default_factory=list)
+ joint_n_limit: List[float] = field(default_factory=list)
+ is_connect: bool = False
+
+
+@dataclass
+class ArmStatus:
+ number: int = 0
+ information: List[ArmInfo] = field(default_factory=list)
+
+ def __post_init__(self):
+ if not self.information:
+ self.number = 0
+ else:
+ self.number = len(self.information)
+
+
+@dataclass
+class Specifications:
+ end_type: str = "Default"
+ fps: int = 30
+ arm: Optional[ArmStatus] = None
+
+
+@dataclass
+class TeleoperatorStatus(draccus.ChoiceRegistry, abc.ABC):
+ device_name: str = "Default"
+ device_body: str = "Default"
+ specifications: Specifications = field(default_factory=Specifications)
+
+ @property
+ def type(self) -> str:
+ return self.get_choice_name(self.__class__)
+
+ def to_dict(self) -> dict:
+ return asdict(self)
+
+ def to_json(self) -> str:
+ return json.dumps(self.to_dict(), ensure_ascii=False)
+
+
+@TeleoperatorStatus.register_subclass("pico_ultra4_dora")
+@dataclass
+class PicoUltra4DoraTeleoperatorStatus(TeleoperatorStatus):
+ device_name: str = "Pico_Ultra4"
+ device_body: str = "Pico_VR"
+
+ def __post_init__(self):
+        self.specifications.end_type = "VR Controller"
+ self.specifications.fps = 50
+
+ self.specifications.arm = ArmStatus(
+ information=[
+ ArmInfo(
+ name="pico_vr_controller",
+                    type="Pico Ultra4 VR Controller",
+ start_pose=[],
+ joint_p_limit=[],
+ joint_n_limit=[],
+ is_connect=False
+ ),
+ ]
+ )
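The status classes above serialize through `dataclasses.asdict` plus `json.dumps`; nested dataclasses flatten to plain dicts automatically. A minimal standalone sketch of the same pattern, with `Arm`/`Status` as illustrative names rather than the repository's classes:

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List

@dataclass
class Arm:
    name: str = ""
    is_connect: bool = False

@dataclass
class Status:
    device_name: str = "Default"
    arms: List[Arm] = field(default_factory=list)

    def to_json(self) -> str:
        # asdict() recurses into nested dataclasses and lists of dataclasses
        return json.dumps(asdict(self), ensure_ascii=False)
```

`ensure_ascii=False` keeps any non-ASCII field values readable in the emitted JSON instead of escaping them.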
diff --git a/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/teleoperator.py b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/teleoperator.py
new file mode 100644
index 0000000..c283617
--- /dev/null
+++ b/robodriver/teleoperators/robodriver-teleoperator-pico-ultra4-dora/robodriver_teleoperator_pico_ultra4_dora/teleoperator.py
@@ -0,0 +1,171 @@
+import time
+import logging_mp
+import numpy as np
+from typing import Any, Dict
+
+from lerobot.utils.errors import DeviceNotConnectedError, DeviceAlreadyConnectedError
+from lerobot.teleoperators.teleoperator import Teleoperator
+
+from .config import PicoUltra4DoraTeleoperatorConfig
+from .status import PicoUltra4DoraTeleoperatorStatus
+from .node import PicoUltra4DoraTeleoperatorNode
+
+
+logger = logging_mp.getLogger(__name__)
+
+
+class PicoUltra4DoraTeleoperator(Teleoperator):
+ """
+ Pico Ultra4 VR 遥操器
+
+ 使用 Pico Ultra4 VR 控制器进行末端位姿控制,
+ 通过 IK 求解转换为关节角度,控制 Piper 机械臂
+ """
+
+ config_class = PicoUltra4DoraTeleoperatorConfig
+ name = "pico_ultra4_dora"
+
+ def __init__(self, config: PicoUltra4DoraTeleoperatorConfig):
+ super().__init__(config)
+ self.config = config
+ self.teleoperator_type = self.config.type
+
+        # Status
+ self.status = PicoUltra4DoraTeleoperatorStatus()
+
+        # Build the node configuration
+ node_config = {
+ "vr_controller": self.config.vr_controller,
+ "control_trigger": self.config.control_trigger,
+ "gripper_trigger": self.config.gripper_trigger,
+ "end_effector_link": self.config.end_effector_link,
+ "scale_factor": self.config.scale_factor,
+ "joint_names": self.config.joint_names,
+ }
+
+        # Create the Dora node
+ self.teleoperator_node = PicoUltra4DoraTeleoperatorNode(
+ robot_urdf_path=self.config.robot_urdf_path,
+ config=node_config
+ )
+
+ self.connected = False
+ self.logs = {}
+
+ @property
+ def action_features(self) -> dict[str, type]:
+        """Define the action features"""
+ features = {f"{joint}.pos": float for joint in self.config.joint_names}
+ features["gripper.pos"] = float
+ return features
+
+ @property
+ def feedback_features(self) -> dict[str, type]:
+        """Define the feedback features (not supported yet)"""
+ return {}
+
+ @property
+ def is_connected(self) -> bool:
+ return self.connected
+
+ def connect(self):
+        """Connect the VR controller"""
+ if self.connected:
+ raise DeviceAlreadyConnectedError(f"{self} already connected")
+
+ logger.info("Connecting Pico Ultra4 VR controller...")
+
+        # Start the node
+ self.teleoperator_node.start()
+
+        # Wait for VR data to become ready
+ timeout = 10
+ start_time = time.perf_counter()
+
+ while True:
+            # Check whether VR data has arrived
+ action = self.teleoperator_node.get_action()
+ if action is not None:
+ logger.info("VR controller data received")
+ break
+
+ if time.perf_counter() - start_time > timeout:
+ raise TimeoutError("Failed to connect to VR controller within timeout")
+
+ time.sleep(0.1)
+
+        # Update the status
+ self.status.specifications.arm.information[0].is_connect = True
+ self.connected = True
+
+ logger.info(f"{self} connected successfully")
+
+ @property
+ def is_calibrated(self) -> bool:
+        """The VR controller needs no calibration"""
+ return True
+
+ def calibrate(self) -> None:
+        """The VR controller needs no calibration"""
+ pass
+
+ def configure(self) -> None:
+        """Configure the teleoperator"""
+ pass
+
+ def get_action(self) -> dict[str, float]:
+        """
+        Get the current action
+
+        Returns:
+            a joint-angle dict
+        """
+ if not self.connected:
+ raise DeviceNotConnectedError(f"{self} is not connected.")
+
+ start = time.perf_counter()
+
+        # Get the action from the node
+ action = self.teleoperator_node.get_action()
+
+ if action is None:
+            # Control inactive: return a zero action
+ action = {f"{joint}.pos": 0.0 for joint in self.config.joint_names}
+ action["gripper.pos"] = 0.0
+
+ dt_ms = (time.perf_counter() - start) * 1e3
+ logger.debug(f"{self} get action: {dt_ms:.1f} ms")
+
+ return action
+
+ def send_feedback(self, feedback: dict[str, Any]) -> None:
+        """Send feedback (not supported yet)"""
+ if not self.connected:
+ raise DeviceNotConnectedError(
+ f"{self} is not connected. You need to run `teleoperator.connect()`."
+ )
+
+ logger.warning(f"{self}: send_feedback() not implemented.")
+
+ def update_status(self) -> str:
+        """Update the status"""
+        # Check the VR controller connection state
+ action = self.teleoperator_node.get_action()
+ self.status.specifications.arm.information[0].is_connect = action is not None
+
+ return self.status.to_json()
+
+ def disconnect(self):
+        """Disconnect"""
+ if not self.connected:
+ raise DeviceNotConnectedError(
+ "Teleoperator is not connected. You need to run `teleoperator.connect()` before disconnecting."
+ )
+
+ self.teleoperator_node.stop()
+ self.connected = False
+ logger.info(f"{self} disconnected.")
+
+ def __del__(self):
+ if getattr(self, "connected", False):
+ self.disconnect()
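`connect()` above polls `get_action()` until the first non-None result and raises `TimeoutError` after 10 s. The same pattern in isolation (a sketch; `wait_until_ready` is a hypothetical name):

```python
import time

def wait_until_ready(poll, timeout=10.0, interval=0.1):
    # Call poll() until it returns a non-None value or `timeout` seconds elapse.
    start = time.perf_counter()
    while True:
        result = poll()
        if result is not None:
            return result
        if time.perf_counter() - start > timeout:
            raise TimeoutError("no data within timeout")
        time.sleep(interval)
```

Note that in the node above, `get_action()` also returns None while the grip is not held, so `connect()` only completes once the operator actually squeezes the grip within the timeout window.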
diff --git a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/pyproject.toml b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/pyproject.toml
index ff19111..3967a0f 100644
--- a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/pyproject.toml
+++ b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/pyproject.toml
@@ -1,5 +1,5 @@
[build-system]
-requires = ["setuptools>=61.0"]
+requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
diff --git a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/node.py b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/node.py
index c586ffd..5096c05 100644
--- a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/node.py
+++ b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/node.py
@@ -6,7 +6,7 @@
from typing import Any, Dict
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/teleoperator.py b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/teleoperator.py
index 07432cb..21068b8 100644
--- a/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/teleoperator.py
+++ b/robodriver/teleoperators/robodriver-teleoperator-so101-leader-dora/robodriver_teleoperator_so101_leader_dora/teleoperator.py
@@ -13,7 +13,7 @@
from .node import SO101LeaderDoraTeleoperatorNode
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class SO101LeaderDoraTeleoperator(Teleoperator):
diff --git a/robodriver/teleoperators/utils.py b/robodriver/teleoperators/utils.py
index 2bf4d86..901d12e 100644
--- a/robodriver/teleoperators/utils.py
+++ b/robodriver/teleoperators/utils.py
@@ -5,7 +5,7 @@
from robodriver.utils.import_utils import make_device_from_device_class
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
def make_teleoperator_from_config(config: TeleoperatorConfig) -> Teleoperator:
diff --git a/robodriver/tools/AutoDriver/GUI/generate_ros2_aio.py b/robodriver/tools/AutoDriver/GUI/generate_ros2_aio.py
index 71c776f..e7a8843 100644
--- a/robodriver/tools/AutoDriver/GUI/generate_ros2_aio.py
+++ b/robodriver/tools/AutoDriver/GUI/generate_ros2_aio.py
@@ -274,7 +274,7 @@ def generate_robot_py(base_dir: Path, robot_name: str):
from .node import __CAMEL_NAME__Node
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
class __CAMEL_NAME__Robot(Robot):
@@ -646,7 +646,7 @@ def _to_camel(name: str) -> str:
import logging_mp
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
CONNECT_TIMEOUT_FRAME = 10
diff --git a/robodriver/utils/import_utils.py b/robodriver/utils/import_utils.py
index 4998e15..2a00a6f 100644
--- a/robodriver/utils/import_utils.py
+++ b/robodriver/utils/import_utils.py
@@ -21,7 +21,7 @@
import logging_mp
from draccus.choice_types import ChoiceRegistry
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
def make_device_from_device_class(config: ChoiceRegistry) -> Any:
diff --git a/robodriver/utils/parser.py b/robodriver/utils/parser.py
index 5e35844..a1b46aa 100644
--- a/robodriver/utils/parser.py
+++ b/robodriver/utils/parser.py
@@ -24,7 +24,7 @@
import draccus
import logging_mp
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
from robodriver.utils.utils import has_method
diff --git a/robodriver/utils/utils.py b/robodriver/utils/utils.py
index 10065a1..c13c3c6 100644
--- a/robodriver/utils/utils.py
+++ b/robodriver/utils/utils.py
@@ -10,7 +10,7 @@
import numpy as np
import torch
-logger = logging_mp.get_logger(__name__)
+logger = logging_mp.getLogger(__name__)
# import colored_logging as clog