E-POSE: A Large Scale Event Camera Dataset for Object Pose Estimation

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Robotic automation requires precise object pose estimation for effective grasping and manipulation. With their high dynamic range and temporal resolution, event-based cameras offer a promising alternative to conventional cameras. Despite their success in tracking, segmentation, classification, obstacle avoidance, and navigation, their use for 6D object pose estimation is relatively unexplored due to the lack of datasets. This paper introduces an extensive dataset based on Yale-CMU-Berkeley (YCB) objects, including event packets with associated poses, spike images, masks, 3D bounding box coordinates, segmented events, and a 3-channel event image for validation. Featuring 13 YCB objects, the dataset covers both cluttered and uncluttered scenes across 18 scenarios with varying speeds and illumination. It contains 306 sequences, totaling over an hour and around 1.5 billion events, making it the largest and most diverse event-based dataset for object pose estimation. This resource aims to support researchers in developing and testing object pose estimation algorithms and solutions.
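The abstract lists the per-sample annotations (event packets with associated poses, masks, 3D bounding box corners, segmented events, and a 3-channel event image). A minimal sketch of how one such record might be represented in Python is shown below; the class and field names are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical record layout for one E-POSE sample.
# Field names and shapes are assumptions for illustration only.
@dataclass
class EposeSample:
    events: np.ndarray       # (N, 4) array of (t, x, y, polarity) events
    pose: np.ndarray         # (4, 4) homogeneous object pose
    mask: np.ndarray         # (H, W) binary segmentation mask
    bbox_3d: np.ndarray      # (8, 3) corners of the 3D bounding box
    event_image: np.ndarray  # (H, W, 3) 3-channel event image for validation

    def events_in_mask(self) -> np.ndarray:
        """Return only the events whose (x, y) falls inside the object mask,
        i.e. a simple form of the 'segmented events' the dataset provides."""
        xs = self.events[:, 1].astype(int)
        ys = self.events[:, 2].astype(int)
        return self.events[self.mask[ys, xs].astype(bool)]
```

For example, constructing a tiny sample with two events, only one of which lands on the masked object, would leave a single segmented event after filtering.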

Original language: British English
Article number: 245
Journal: Scientific Data
Volume: 12
Issue number: 1
DOIs
State: Published - Dec 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 9 - Industry, Innovation, and Infrastructure
