Massively Multiview System

  • 480 VGA camera views
  • 30+ HD views
  • 10 RGB-D sensors
  • Hardware-based sync
  • Calibration

Interesting Scenes with Labels

  • Multiple people
  • Socially interacting groups
  • 3D body pose
  • 3D facial landmarks
  • Transcripts + speaker ID

Dataset Size

Currently, 65 sequences (5.5 hours) and 1.5 million 3D skeletons are available.

What's New

Dec. 2017 A Hand Keypoint Dataset page has been added. More data will be coming soon.
Jun. 2017 We organized a tutorial in conjunction with CVPR 2017: "DIY A Multiview Camera System: Panoptic Studio Teardown"
Jun. 2017 Our hand keypoint detection and reconstruction paper will be presented at CVPR 2017: Project Page.
Dec. 2016 Panoptic Studio is featured on The Verge. You can also see the video version here.
Dec. 2016 The social interaction capture paper (extended version of ICCV15) is available on arXiv.
Sep. 2016 The CMU PanopticStudio Dataset is now publicly released.
Currently, 480 VGA videos, 31 HD videos, 3D body pose, and calibration data are available.
Dense point cloud (from 10 Kinects) and 3D face reconstruction will be available soon.
Please contact Hanbyul Joo and Tomas Simon with any issues regarding the dataset.
Sep. 2016 The PanopticStudio Toolbox is available on GitHub.
Aug. 2016 Our dataset website is open. Dataset and tools will be available soon.

Dataset Examples

Example Results

System Description

We keep upgrading our system. The current hardware setup is as follows:

The following figure shows the dimensions of our system.
Dome Figures
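The released calibration data associates each camera view with intrinsic and extrinsic parameters. As a minimal illustration of how such parameters are typically used (this is a generic pinhole-camera sketch, not the toolbox's actual API; the function and variable names here are hypothetical), a 3D point can be projected into one camera view as follows:

```python
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D world point X (shape (3,)) into pixel coordinates
    using intrinsics K, rotation R, and translation t (pinhole model,
    ignoring lens distortion)."""
    X_cam = R @ X + t    # world -> camera coordinates
    x = K @ X_cam        # camera -> homogeneous image coordinates
    return x[:2] / x[2]  # perspective divide -> (u, v) pixels

# Example with made-up intrinsics and a camera at the world origin:
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
uv = project_point(np.array([0.1, 0.2, 2.0]), K, R, t)
print(uv)  # -> [370. 340.]
```

In practice, each camera's distortion coefficients would also be applied before the final pixel coordinates are read off; the sketch above omits that step for clarity.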

Reference

@InProceedings{Joo_2015_ICCV,
  author    = {Joo, Hanbyul and Liu, Hao and Tan, Lei and Gui, Lin and Nabbe, Bart and Matthews, Iain and Kanade, Takeo and Nobuhara, Shohei and Sheikh, Yaser},
  title     = {Panoptic Studio: A Massively Multiview System for Social Motion Capture},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year      = {2015}
}

Acknowledgement

This research is supported by the National Science Foundation under Grants No. 1353120 and 1029679, and in part by ONR grant 11628301.