Abstract
Robotic automation requires precise object pose estimation for effective grasping and manipulation. With their high dynamic range and temporal resolution, event-based cameras offer a promising alternative to conventional cameras. Despite their success in tracking, segmentation, classification, obstacle avoidance, and navigation, their use for 6D object pose estimation is relatively unexplored due to the lack of datasets. This paper introduces an extensive dataset based on Yale-CMU-Berkeley (YCB) objects, including event packets with associated poses, spike images, masks, 3D bounding box coordinates, segmented events, and a 3-channel event image for validation. Featuring 13 YCB objects, the dataset covers both cluttered and uncluttered scenes across 18 scenarios with varying speeds and illumination. It contains 306 sequences, totaling over an hour and around 1.5 billion events, making it the largest and most diverse event-based dataset for object pose estimation. This resource aims to support researchers in developing and testing object pose estimation algorithms and solutions.
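The abstract lists event packets and a 3-channel event image among the dataset's contents. As a hedged illustration only (the actual E-POSE file format, field names, and channel layout are not specified here), event-camera data is commonly handled as tuples of pixel coordinates, timestamp, and polarity, and a simple 3-channel event image can be rendered by accumulating positive and negative polarities into separate channels:

```python
import numpy as np

def events_to_image(events, height, width):
    """Render a list of events (x, y, t, p) into a 3-channel image.

    Channel 0 counts positive-polarity events, channel 2 counts
    negative-polarity events, and channel 1 stores the normalized
    timestamp of the latest event at each pixel. This layout is an
    illustrative assumption, not the E-POSE specification.
    """
    img = np.zeros((height, width, 3), dtype=np.float32)
    if not events:
        return img
    t0, t1 = events[0][2], events[-1][2]
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp
    for x, y, t, p in events:
        if p > 0:
            img[y, x, 0] += 1.0
        else:
            img[y, x, 2] += 1.0
        img[y, x, 1] = (t - t0) / span  # most recent normalized timestamp
    return img

# Example: a few synthetic events on a small 8x8 sensor.
events = [(2, 3, 0.00, 1), (2, 3, 0.01, -1), (5, 1, 0.02, 1)]
frame = events_to_image(events, height=8, width=8)
```

Accumulating counts rather than clamping to 0/1 preserves event density, which matters in the high-speed sequences the dataset covers.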
| Original language | British English |
|---|---|
| Article number | 245 |
| Journal | Scientific Data |
| Volume | 12 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 2025 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9: Industry, Innovation, and Infrastructure
E-POSE: A Large Scale Event Camera Dataset for Object Pose Estimation