Visualize point clouds, cameras, and 3D annotations in your browser
— straight from the most widely used autonomous driving datasets.
No conversion, no preprocessing.
Live Demo · URL Loading · Share View · Dev Setup
Waymo Open Dataset — 3D segmentation, keypoints, and camera views. Timeline markers indicate frames with annotations (cyan = LiDAR seg, lime = 3D keypoints, light-cyan = 2D keypoints, magenta = camera seg).
One tool for the three most widely used AV datasets. Drop a folder or paste a URL — auto-detected, zero setup.
- LiDAR point clouds with multiple colormap modes (intensity, height, range, segmentation, camera projection)
- 3D bounding boxes as wireframes or 3D models with color-coded tracking
- Synchronized camera views with POV switching — click a camera to jump into its viewpoint
- Trajectory trails showing object movement over past frames
- Semantic segmentation overlays (LiDAR and camera)
- Timeline with play/pause, frame scrubber, and buffer progress bars
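The camera-projection colormap above relies on mapping each LiDAR point into an image through a pinhole camera model. A minimal sketch of that projection step, assuming a standard intrinsics layout (the function and interface names here are illustrative, not egolens's actual API):

```typescript
// Pinhole intrinsics: focal lengths (fx, fy) and principal point (cx, cy).
interface Intrinsics { fx: number; fy: number; cx: number; cy: number; }

// Project a point already expressed in the camera frame (x right, y down,
// z forward) onto the image plane. Returns null for points behind the camera.
function projectPoint(
  p: [number, number, number],
  K: Intrinsics,
): [number, number] | null {
  const [x, y, z] = p;
  if (z <= 0) return null;           // behind the image plane
  const u = K.fx * (x / z) + K.cx;   // pixel column
  const v = K.fy * (y / z) + K.cy;   // pixel row
  return [u, v];
}
```

In practice each dataset also supplies an extrinsic (LiDAR-to-camera) transform that must be applied before this step.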
nuScenes — 32-class LiDAR segmentation projected onto camera views via LiDAR overlay · Argoverse 2 — camera colormap mode with 7-camera POV switching
| Feature | Waymo v2 | nuScenes | Argoverse 2 |
|---|---|---|---|
| LiDAR point cloud | ✓ (5 sensors) | ✓ (1 sensor + 5 radar) | ✓ (2 sensors) |
| Camera images | ✓ (5 cams) | ✓ (6 cams) | ✓ (7 cams) |
| 3D bounding boxes | ✓ | ✓ | ✓ |
| 2D camera boxes | ✓ | — | — |
| Cross-modal hover linking | ✓ | — | — |
| Trajectory trails | ✓ | ✓ | ✓ |
| 3D human keypoints | ✓ | — | — |
| 2D camera keypoints | ✓ | — | — |
| LiDAR segmentation | ✓ (23-class) | ✓ (32-class) | — |
| Camera panoptic seg | ✓ (29-class) | — | — |
| POV camera switching | ✓ | ✓ | ✓ |
| Local (drag & drop) | ✓ | ✓ | ✓ |
| URL loading | ✓ | ✓ | ✓ |
Dataset format is auto-detected from folder structure.
- Open the live demo
- Drag & drop your dataset folder into the browser
- Done — browse frames, toggle sensors, play the timeline
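Folder-based auto-detection can be sketched as matching well-known top-level entries. The marker names below are rough approximations of each dataset's published layout, not egolens's actual detection code:

```typescript
type DatasetFormat = "waymo" | "nuscenes" | "argoverse2" | "unknown";

// Guess the dataset format from top-level folder names.
// Markers are illustrative approximations of each standard layout.
function detectFormat(entries: string[]): DatasetFormat {
  const names = new Set(entries.map((e) => e.toLowerCase()));
  if (names.has("samples") && names.has("sweeps")) return "nuscenes";
  if (names.has("sensors") && names.has("calibration")) return "argoverse2";
  if (names.has("camera_image") && names.has("lidar")) return "waymo";
  return "unknown";
}
```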
Load data directly from S3 or any static file server by providing a URL.
Two modes:
- URL only — auto-discovers all segments/scenes in the directory
- URL + Segment ID — loads a specific segment directly
```
https://egolens.github.io/egolens/?dataset=argoverse2&data=https://your-server.com/av2/sensor/val/
https://egolens.github.io/egolens/?dataset=nuscenes&data=https://your-server.com/nuscenes/
https://egolens.github.io/egolens/?dataset=waymo&data=https://your-server.com/waymo_data/&scene=SEGMENT_ID
```
The URL should point to a directory containing the dataset's standard folder structure. Works with S3 buckets, any HTTP server, or localhost.
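Programmatically, such a link is just query parameters on the demo URL. A sketch using the standard `URL` API, with parameter names taken from the examples above (note that `URLSearchParams` percent-encodes the data URL, which browsers accept either way):

```typescript
// Build an egolens deep link from a dataset type, a data URL, and an
// optional segment/scene ID (parameter names as in the examples above).
function buildLoaderUrl(
  dataset: "waymo" | "nuscenes" | "argoverse2",
  dataUrl: string,
  scene?: string,
): string {
  const url = new URL("https://egolens.github.io/egolens/");
  url.searchParams.set("dataset", dataset);
  url.searchParams.set("data", dataUrl);
  if (scene) url.searchParams.set("scene", scene);
  return url.toString();
}
```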
Note: Waymo's license prohibits data redistribution, so no hosted demo data is available. You'll need to host your own copy after accepting the Waymo Open Dataset License.
Point your team to exactly what you see. When data is loaded via URL, click Share View to get a link that encodes your exact view — frame, colormap, camera angle, overlays, everything. Paste it in Slack or a PR comment and your teammate lands on the same frame, same angle, same overlays. No screenshots, no "go to frame 142 and turn on segmentation."
Share what you see — one link captures frame, camera angle, overlays, and all settings.
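One way to encode an entire view in a single link is to serialize the view state into one URL-safe query parameter. A hypothetical sketch — the field names and shape of `ViewState` are illustrative, not egolens's actual scheme:

```typescript
// Illustrative view state: the real tool encodes frame, colormap,
// camera angle, overlays, and other settings.
interface ViewState {
  frame: number;
  colormap: string;
  camera: string;      // active POV camera
  overlays: string[];  // enabled overlay names
}

// Serialize the view state into a URL-safe string and back.
function encodeView(state: ViewState): string {
  return encodeURIComponent(JSON.stringify(state));
}

function decodeView(param: string): ViewState {
  return JSON.parse(decodeURIComponent(param)) as ViewState;
}
```

A recipient's browser can then restore the exact frame, camera, and overlays on load by decoding the parameter.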
```
git clone https://github.com/egolens/egolens.git
cd egolens
npm install
npm run dev
```

```
npm run build   # Type-check + production build
npm run lint    # ESLint
npm test        # Vitest
```

React 19 · TypeScript · Three.js · React Three Fiber · Vite · Zustand · hyparquet · Web Workers
Chrome / Edge recommended. Safari may crash on large datasets due to WebKit memory limits. Firefox works but lacks the folder picker API.
Found a bug? Have a feature idea? Want support for another dataset? Open an issue — all feedback is welcome.
See CONTRIBUTING.md for development guidelines · Changelog
MIT · Built by Heejae Kim



