Introduction

Object pose estimation is crucial for robotic applications and augmented reality. To provide the community with a benchmark with high-quality ground truth annotations, we introduce PhoCaL, a multimodal dataset for category-level object pose estimation with photometrically challenging objects. PhoCaL comprises 60 high-quality 3D models of household objects across 8 categories, including highly reflective, transparent and symmetric objects. We developed a novel robot-supported multimodal (RGB, depth, polarisation) data acquisition and annotation process. It ensures sub-millimetre pose accuracy for opaque textured, shiny and transparent objects, no motion blur, and perfect camera synchronisation.

Download

To perform object pose estimation WITHOUT depth information and download the full dataset at once, click here: Link. To download sequence by sequence with raw depth information, or to access grasping labels, refer to the PhoCaL section at this Link.

Format

Full dataset: The dataset contains 24 sequences and object models. Each sequence contains multimodal inputs from a ToF camera and a polarization camera. The rgb and depth [deprecated] folders are captured with the ToF camera, and the corresponding object masks and NOCS maps are also rendered into their folders. The polarization folder contains images from the polarization camera, resized to 640x480. The ground truth object annotations for both cameras are saved in rgb_scene_gt.json and pol_scene_gt.json in BOP format. The object class IDs and instance IDs are documented in the class_obj_taxonomy.json file.
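
As a minimal sketch, the per-frame annotations can be read with standard BOP conventions (fields obj_id, cam_R_m2c, cam_t_m2c, with translations in millimetres); the sequence directory name below is hypothetical and should be adapted to the actual layout:

    import json
    import numpy as np

    # Read BOP-style ground truth for the ToF (RGB) camera.
    # "sequence_01" is a placeholder path, not a confirmed folder name.
    with open("sequence_01/rgb_scene_gt.json") as f:
        scene_gt = json.load(f)

    for frame_id, annotations in scene_gt.items():
        for ann in annotations:
            R = np.asarray(ann["cam_R_m2c"]).reshape(3, 3)  # model-to-camera rotation
            t = np.asarray(ann["cam_t_m2c"]).reshape(3, 1)  # translation (mm, per BOP)
            print(frame_id, ann["obj_id"], t.ravel())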
Splits: The obj_poses folder contains the initial object poses in the robot base coordinate frame. The ee_frame folder contains the end-effector pose with respect to the robot base for each frame of a sequence. The hand-eye calibration results for the multiple cameras are saved in the extrinsics folder; for example, ee_to_l515.txt and ee_to_pol.txt correspond to the L515 and polarization sensors. The object poses can be recovered by transforming the poses from the robot base frame to the end-effector frame, and then to the respective camera frames, as sketched below.
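
The following sketch chains these transforms. It assumes each file stores a 4x4 homogeneous matrix as plain text, that ee_to_l515.txt gives the camera pose expressed in the end-effector frame, and uses placeholder filenames; none of these conventions are confirmed by the dataset description and should be checked against the actual files:

    import numpy as np

    def load_pose(path):
        # Assumed layout: a 4x4 homogeneous transform in a plain-text file.
        return np.loadtxt(path).reshape(4, 4)

    # Assumed conventions (verify against the dataset):
    #   T_base_obj : object pose in the robot base frame (obj_poses folder)
    #   T_base_ee  : end-effector pose in the robot base frame (ee_frame folder)
    #   T_ee_cam   : hand-eye calibration, here read as the camera pose
    #                expressed in the end-effector frame (extrinsics folder)
    T_base_obj = load_pose("obj_poses/obj_000001.txt")   # hypothetical filename
    T_base_ee  = load_pose("ee_frame/000000.txt")        # hypothetical filename
    T_ee_cam   = load_pose("extrinsics/ee_to_l515.txt")

    # Base frame -> end-effector frame -> camera frame:
    T_ee_obj  = np.linalg.inv(T_base_ee) @ T_base_obj   # object in end-effector frame
    T_cam_obj = np.linalg.inv(T_ee_cam) @ T_ee_obj      # object in camera frame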

Citation

If you use the dataset for research, please consider citing:
@inproceedings{wang2022phocal,
  title={PhoCaL: A Multi-Modal Dataset for Category-Level Object Pose Estimation with Photometrically Challenging Objects},
  author={Wang, Pengyuan and Jung, HyunJun and Li, Yitong and Shen, Siyuan and Srikanth, Rahul Parthasarathy and Garattoni, Lorenzo and Meier, Sven and Navab, Nassir and Busam, Benjamin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21222--21231},
  year={2022}
}