This guide outlines the specific steps, configurations, and commands required to deploy Lightning-LM on the Deep Robotics M20 platform equipped with a RoboSense LiDAR. Watch the tutorial on YouTube or Bilibili!
Before deploying on the physical robot, it is highly recommended to obtain the code and test the algorithm using the provided dataset.
Start by cloning the Deep Robotics specific version of the repository into your workspace source folder:

```bash
git clone https://github.com/DeepRoboticsLab/lightning-lm-deep-robotics.git
```

Download the test dataset collected on the M20 robot here:
- Google Drive: M20 Robot Dataset
- Note: This dataset contains `lidar_data_bag`, which is used in the examples below.
We also have a larger and harder dataset recording the M20 robot moving around the main library at Zhejiang University. This dataset contains the aggressive, shaking movements of a legged robot. The Lite3 LiDAR dataset can be downloaded from here. You can use src/lightning-lm-deep-robotics/config/default_livox.yaml to test the algorithm on it. There are also videos showing what the robot looks like while collecting the LiDAR data.
For details on how to configure and use the RoboSense LiDAR specific to the M20 robot, refer to the official documentation:
- LiDAR Usage Guide: Deep Robotics M20 LiDAR Docs
Do not install libgoogle-glog-dev from apt — it conflicts with the thirdparty glog v0.6.0 (both register the same gflags flags at startup, causing a crash on launch). Install everything else:
```bash
sudo apt install -y libopencv-dev libpcl-dev pcl-tools libyaml-cpp-dev libepoxy-dev libgflags-dev python3-wheel ros-humble-pcl-conversions
```

If libgoogle-glog-dev is already installed, remove it:

```bash
sudo apt remove -y libgoogle-glog-dev libgoogle-glog0v5
```

Build and install the bundled glog:

```bash
cd src/lightning-lm-deep-robotics/thirdparty/glog
mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON -DBUILD_TESTING=OFF -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
sudo make install
sudo ldconfig
cd ../../../../..
```

Build and install the bundled Pangolin:

```bash
cd src/lightning-lm-deep-robotics/thirdparty/Pangolin
mkdir build && cd build
cmake -DBUILD_EXAMPLES=OFF -DBUILD_TOOLS=OFF -DCMAKE_CXX_FLAGS="-Wno-error" -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
sudo make install
sudo ldconfig
cd ../../../../..
```

Standard build (PC/server with sufficient RAM):

```bash
source /opt/ros/humble/setup.bash
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
```

Low-memory build (recommended for on-board computers such as the M20/RK3588, to avoid OOM crashes):

```bash
export MAKEFLAGS="-j3"
source /opt/ros/humble/setup.bash
colcon build --parallel-workers 3 --executor sequential --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
```

Compilation on the RK3588 takes approximately 10 minutes; using 4 cores may hang the system due to out-of-memory (OOM) issues.
The primary configuration file for the M20 robot is located at:
src/lightning-lm-deep-robotics/config/default_deep_robotics.yaml
Key Configuration Parameters:
- LiDAR Type: Ensure `fasterlio.lidar_type` is set to `4` (RoboSense).
- Topics: Check that `common.lidar_topic` and `common.imu_topic` match the sensor output in your bag or live stream.
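As a sketch, those keys sit in the yaml roughly like this (the topic names below are taken from the LiDAR check commands later in this guide; confirm them against your own config file):

```yaml
fasterlio:
  lidar_type: 4                 # 4 = RoboSense
common:
  lidar_topic: /LIDAR/POINTS    # must match the topic in your bag or live stream
  imu_topic: /IMU
```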
Suitable for testing on the robot or playing back bags in real-time simulation.
- Play the ROS2 bag:

  ```bash
  ros2 bag play ~/Downloads/m20/lidar_data_bag --clock
  ```

  Note: Real-time processing is resource-intensive. On WSL/virtual machines, playback speed might need to be reduced.

- Launch the online SLAM node:

  ```bash
  ros2 run lightning run_slam_online --config src/lightning-lm-deep-robotics/config/default_deep_robotics.yaml
  ```

- Save the map: once mapping is complete, save the result to disk:

  ```bash
  ros2 service call lightning/save_map lightning/srv/SaveMap "{map_id: new_map}"
  ```

- Log the localization state: check the real-time odometry of SLAM with

  ```bash
  ros2 topic echo /lightning/nav_state
  ```

  which lists position, attitude quaternion, and velocity.
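The attitude quaternion is given in [x, y, z, w] order (the same convention as the `init_quat` yaml field). For a quick heading sanity check you can convert it to yaw; this is a generic conversion, not a helper shipped with the package:

```python
import math

def quat_to_yaw(x, y, z, w):
    """Yaw angle (rotation about z, in radians) from a unit quaternion [x, y, z, w]."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# Identity quaternion -> heading 0; a 90-degree left turn about z:
print(quat_to_yaw(0.0, 0.0, 0.0, 1.0))  # → 0.0
q_z90 = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
print(round(math.degrees(quat_to_yaw(*q_z90)), 1))  # → 90.0
```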
Recommended for quickly generating maps from recorded data without dropping frames.
- Run offline SLAM:

  ```bash
  ros2 run lightning run_slam_offline --input_bag /home/msy/Downloads/m20/lidar_data_bag/lidar_data_bag_0.db3 --config ./src/lightning-lm-deep-robotics/config/default_deep_robotics.yaml
  ```

  Note: The system automatically saves results to the `data/new_map` directory upon completion.

- View the 3D point cloud:

  ```bash
  pcl_viewer ./data/new_map/global.pcd
  ```

- View the 2D grid map:

  ```bash
  sudo apt install feh
  feh data/new_map/map.pgm
  ```
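If you would rather inspect `map.pgm` programmatically than with feh, a small reader works too. This is a generic sketch assuming the map is saved as a binary (P5) PGM; it is not part of the repo:

```python
import numpy as np

def load_pgm(path):
    """Read a binary (P5) PGM occupancy grid into a (height, width) uint8 array."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5", "expected a binary PGM"
        line = f.readline()
        while line.startswith(b"#"):          # skip comment lines in the header
            line = f.readline()
        width, height = map(int, line.split())
        int(f.readline())                     # max gray value, unused here
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(height, width)

# Usage (assuming the offline run saved data/new_map/map.pgm):
#   grid = load_pgm("data/new_map/map.pgm")
#   import matplotlib.pyplot as plt; plt.imshow(grid, cmap="gray"); plt.show()
```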
No UI is shown by default for this mode.
- Play the ROS2 bag (or run on the live robot):

  ```bash
  ros2 bag play ~/Downloads/m20/lidar_data_bag --clock
  ```

- Launch the localization node:

  - Ensure `system.map_path` in your yaml config points to the folder containing the map (default: `new_map`).
  - Run:

    ```bash
    ros2 run lightning run_loc_online --config ./src/lightning-lm-deep-robotics/config/default_deep_roboticsloc.yaml
    ```
Run localization on a bag file without real-time constraints to verify algorithm performance.
```bash
ros2 run lightning run_loc_offline --config ./src/lightning-lm-deep-robotics/config/default_deep_roboticsloc.yaml --input_bag [path_to_bag]
```

We test on the AOS (103) platform, which already has ROS 2 Foxy.
Connect the AOS(103) host to the network.
Edit /etc/NetworkManager/NetworkManager.conf (e.g. with vim), delete the unmanaged-devices entry and the entire [keyfile] section, then reboot.
Running nmcli d wifi list should then display all available WiFi networks.
Set the network name and connect using:

```bash
sudo nmcli d wifi connect "<wifiname>" password "<password>" ifname wlan0
```
To maintain a continuous rviz display while the robot is moving, ensure a persistent WiFi connection between your computer and the M20 robot.
Enable the service on the NOS(106) host and select the user account.
```bash
ssh user@10.21.31.106
sudo systemctl start multicast-relay.service
```

To check its status:

```bash
sudo systemctl status multicast-relay.service
```

Or enable the service so that it starts automatically on every reboot:

```bash
sudo systemctl enable multicast-relay.service
```

Once complete, switch to the AOS (103), enter su mode (the password is a single comma: ','), and check the point cloud:

```bash
source /opt/robot/scripts/setup_ros2.sh
ros2 topic hz /LIDAR/POINTS
```

These steps are required before every SLAM run; in other words, always check LiDAR topic accessibility first.
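For reference, `ros2 topic hz` essentially reports the number of received messages divided by the time span they cover. A minimal Python sketch of that computation (illustrative only, not part of the tooling):

```python
def mean_rate(stamps):
    """Average message rate in Hz from a list of arrival timestamps (seconds)."""
    if len(stamps) < 2:
        return 0.0
    return (len(stamps) - 1) / (stamps[-1] - stamps[0])

# A healthy 10 Hz LiDAR stream:
print(mean_rate([0.0, 0.1, 0.2, 0.3, 0.4]))  # → 10.0
```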
Follow Steps 1–4 from the Build Instructions section above. On M20, transfer the code to the robot using scp and use the Low Memory Build in Step 4.
Note: On the M20 the ROS 2 distro is Foxy: replace `ros-humble-pcl-conversions` with `ros-foxy-pcl-conversions` in the apt install command, and source `/opt/ros/foxy/setup.bash` instead of the Humble one.
The 3D UI window in run_slam_online crashes when Pangolin starts, primarily due to an OpenGL/EGL context initialization failure. The error is `eglGetBindAPI(0x30a2) failed: EGL_BAD_PARAMETER (300c)`.
Due to compatibility issues with EGL + OpenGL support on the RK3588, Pangolin visualization (and other OpenGL applications) may fail to open.
Therefore, this modified version primarily utilizes rviz2 for visualization.
Recording the real-time topics related to LIO into a bag can be done by running:

```bash
taskset -c 4,5,6,7 chrt 90 ros2 bag record -o lio260310 /tf /IMU /LIDAR/POINTS
```

Make sure:

- The robot is standing when the program starts, since the system may subtract points below the ground.
- Verify that the input point cloud topic is visible.
- Dynamic obstacles like cars and people may degrade system performance. In narrow tunnels or when the LiDAR is obstructed, localization may be lost.
For mapping, we use the run_slam_online node.
To monitor the SLAM program in a single remote terminal, use the following command to launch 4 tmux windows at once (stay in your workspace folder, which is the parent of the src folder):

```bash
./src/lightning-lm-deep-robotics/start_lg_rviz_session.sh
```

This opens a session named lg. Switch between windows using Ctrl+b n. If a command fails, you can manually re-enter it as described in the next section.
For further instructions, see the tmux session usage section.
Mapping mode requires at least 4 windows, one per component:

```bash
ros2 run lightning run_slam_online --config src/lightning-lm-deep-robotics/config/default_deep_roboticsslam.yaml
rviz2 -d src/lightning-lm-deep-robotics/config/showbodypc.rviz
ros2 topic echo /lightning/nav_state
ros2 service call /lightning/save_map lightning/srv/SaveMap "{map_id: 'office4'}"
```

The /lightning/odom odometry topic and the /lightning/path topic (in the 'map' frame) will be visible here.
Turn on system.pub_odom and run ros2 topic echo /lightning/nav_state to list position, attitude quaternion, and velocity. This output is independent of TF.
You can also run `ros2 service call /lightning/save_path lightning/srv/SavePath "{file_path: 'data/traj.txt'}"` at any time to save a TUM-style trajectory.
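The saved file uses the TUM trajectory format: one pose per line, `timestamp tx ty tz qx qy qz qw`, separated by spaces. If you want to post-process it outside the bundled script, a minimal parser sketch (not part of the repo):

```python
def read_tum(path):
    """Parse a TUM trajectory file into (timestamp, position, quaternion) tuples."""
    poses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            t, tx, ty, tz, qx, qy, qz, qw = (float(v) for v in line.split())
            poses.append((t, (tx, ty, tz), (qx, qy, qz, qw)))
    return poses
```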
Then you can use Python's matplotlib to visualize the path:
```bash
python3 src/lightning-lm-deep-robotics/scripts/visualize_trajectory.py data/traj.txt
```

We test the run_loc_online node.
The procedure is basically the same as for SLAM, but you need to ensure the map configuration is correct. Check that the point cloud under the map path in the yaml file you loaded (e.g. default_deep_robotics.yaml) exists:

```yaml
system:
  map_path: ./data/office4/
```

And verify the point cloud with `pcl_viewer ./data/office4/global.pcd`.
In localization mode, you need at least 3 windows, each running one of these commands:

```bash
ros2 run lightning run_loc_online --config src/lightning-lm-deep-robotics/config/default_deep_roboticsloc.yaml
rviz2 -d src/lightning-lm-deep-robotics/config/showglobalmap.rviz
ros2 topic echo /lightning/nav_state
```

Or launch them all at once by running:

```bash
./src/lightning-lm-deep-robotics/sloc_rviz_session.sh
```

This uses tmux to open 4 windows in a session named loc; the 4th window is left free, so you can try the SavePath service there.
In this project by default we enable pub_tf for rviz2 visualization.
To:

- print or publish localization state and odometry messages, and
- optionally publish mapping results such as point clouds and the path,

the following options are added to the yaml:

```yaml
system:
  log_pose_opt: false               # Print pos/vel directly to terminal
  pub_odom: true                    # Publish odometry topic
  enable_lidar_loc_rviz: false      # Enable RViz point cloud publishing
  rviz_current_scan_topic: "/current_scan_cloud"
  rviz_global_map_topic: "/global_map_cloud"
  enable_path_rviz: true
  pub_tf: true
```

By default the point cloud is not published. The config covers the basic needs of path saving and nav_state logging.
First, check that map_path is correct when running loc_online mode.
If you need to manually set another initial pose, use the following in the config:
```yaml
system:
  map_path: ./data/office4/
  use_init_pose: true
  init_pos: [0.0, 0.0, 0.0]         # Initial position in the point cloud [x, y, z]
  init_quat: [0.0, 0.0, 0.0, 1.0]   # Initial quaternion relative to the global map [x, y, z, w]
```

We publish the map->lidar_link transform in slamOnline for visualization. Note that pub_tf only exists in the LocSystem of the original version.
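Geometrically, `init_pos` and `init_quat` place the sensor in the map frame: a body-frame point p is mapped to `R(init_quat) @ p + init_pos`. A generic numpy sketch of that convention (an illustration, not code from this repo):

```python
import numpy as np

def quat_to_rot(q):
    """3x3 rotation matrix from a quaternion in [x, y, z, w] order (the yaml convention)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

init_pos = np.array([0.0, 0.0, 0.0])
init_quat = [0.0, 0.0, 0.0, 1.0]          # identity: axes aligned with the map
p_body = np.array([1.0, 0.0, 0.0])        # a point 1 m ahead of the sensor
p_map = quat_to_rot(init_quat) @ p_body + init_pos
print(p_map)  # → [1. 0. 0.]
```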
Useful TF debugging commands:

```bash
ros2 run tf2_tools view_frames
ros2 topic echo /tf
ros2 run tf2_ros tf2_echo base_link lidar_link
ros2 run tf2_ros tf2_echo map base_link
```

Use Ctrl+b followed by 0/1/2/4 to switch between the 4 sub-windows.
```bash
Ctrl+b, d                 # Detach from session
Ctrl+b, c                 # Create a new window
su
tmux attach -t lg         # Re-attach to the session
tmux kill-session -t lg   # Kill the session
```

If the session is disconnected due to a network failure, you can attach to it again: after MobaXterm reconnects, run `tmux attach -t lg` in su mode to return to the running program, then press Ctrl+b 0 to see the SLAM/Loc node running.
Although the original Pangolin interface is more efficient, rviz2 visualization is also provided.
The rviz display shows:

- TF: map->lidar_link
- Odometry: /lightning/odom
- PointCloud2 (currentScan): /current_scan_cloud
- PointCloud2 (globalMap): /global_map_cloud
- Path: /lightning/path
The currentScan is transformed into the global 'map' frame and processed by undistortion and downsampling. In SLAM mode, the globalMap is updated through keyframe updates. In localization mode, the global map is loaded from the input file and remains constant during online operation. The latter topics are not published every second.
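The downsampling step is the usual voxel-grid idea: quantize points into cubes of a given size and keep one centroid per occupied cube. A minimal numpy sketch of the concept (the actual implementation lives in the C++ pipeline):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel_size cube with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                 # robust across numpy versions
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):                      # per-axis centroid for each voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# Two nearby points collapse into one voxel; the far point keeps its own:
pts = np.array([[0.01, 0.0, 0.0], [0.03, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(voxel_downsample(pts, 0.5).shape)  # → (2, 3)
```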
Note: rviz2 works on MobaXterm, not on VSCode.
If the network connection is interrupted and later reconnected, you can restart RViz2 as follows:

```bash
source /opt/robot/scripts/setup_ros2.sh
pkill -f rviz2
rviz2 -d src/lightning-lm-deep-robotics/config/showbodypc.rviz
```