The First Workshop on
Reconstruction of Human-Object Interactions (RHOBIN)



June 19 @ CVPR 2023 In Person



News

  • [Jan 16, 2023] Workshop website launched, with Call-for-Papers and speakers announced.
  • [Mar 12, 2023] The call for full papers and single-page abstracts is open; see the Call-for-Papers section below to submit.

Introduction

This half-day workshop will provide a venue to present and discuss state-of-the-art research in the reconstruction of human-object interactions from images. The focus will be on recent developments in human-object interaction learning and its impact on 3D scene parsing, building human-centric robotic assistants, and the general understanding of human behaviors.

Humans are an essential component of these interactions. Hence, achieving realistic interaction reconstruction requires accurately estimating human pose, shape, and motion, as well as the objects being interacted with. 3D human pose and motion estimation from images or videos has attracted a lot of interest. However, in most cases, the task does not explicitly involve objects and the interaction with them. Whether in 2D detection or monocular 3D reconstruction, objects and humans have largely been studied separately. Yet humans are in constant contact with the world as they move through it and interact with it. Considering the interaction between them can marry the best of both worlds.

Participation details for the Rhobin Challenge and paper submission instructions can both be found below.

Invited Speakers (Check out the Full Schedule)

  • Josef Sivic: Czech Institute of Informatics, Robotics and Cybernetics (CIIRC), Czech Technical University (CTU)
  • Siyu Tang: ETH Zürich
  • Torsten Sattler: Czech Institute of Informatics, Robotics and Cybernetics (CIIRC), Czech Technical University (CTU)
  • Kristen Grauman: University of Texas at Austin and Facebook AI Research (FAIR)
  • Edward H. Adelson: Massachusetts Institute of Technology (MIT)

Call for Papers

In this workshop, we invite papers on topics related to human-centered interaction modeling. This could include, but is not limited to:

  • Estimation of 3D human pose and shape from a single image or video
  • 3D human motion prediction
  • Interactive motion sequence generation
  • Shape reconstruction from a single image
  • Object 6-DoF pose estimation and tracking
  • Human-centered object semantics and functionality modeling
  • Joint reconstruction of both bodies and objects/scenes
  • Interaction modeling between humans and objects, e.g., contact, physics properties
  • Detection of human-object interaction semantics
  • New datasets or benchmarks that have 3D annotations of both humans and objects/scenes

We invite submissions of a maximum of 8 pages, excluding references, using the CVPR template. Submissions should follow the CVPR 2023 instructions. All papers will be subject to a double-blind review process, i.e., authors must not identify themselves in the submitted papers. The reviewing process is single-stage, without rebuttals. We also invite 1-page abstract submissions of already published work or relevant work in progress.

Submission Instructions


Submissions are anonymous and should not include any author names, affiliations, or contact information in the PDF.
  • Online Submission System: https://cmt3.research.microsoft.com/
  • Submission Format: official CVPR template (double column; no more than 8 pages, excluding references).
If you have any questions, feel free to reach out to us.

Timeline (all deadlines at 11:59 PM, Pacific Time)

  • Full-paper submission deadline: March 20, 2023 (extended from March 9)
  • Notification to authors: March 30, 2023
  • 1-page Abstract submission deadline: May 15, 2023
  • Camera-ready deadline: April 6, 2023
  • Workshop: June 19, 2023

The First Rhobin Challenge

Given the importance of human-object interaction, as also highlighted by this workshop, we propose a challenge on reconstructing 3D humans and objects from monocular RGB images, with a focus on images showing close human-object interactions. We have seen promising progress in reconstructing human body meshes and estimating 6DoF object poses from single images. However, most of these works focus on occlusion-free images, which is unrealistic during close human-object interaction, where humans and objects occlude each other. This makes inference more difficult and poses challenges to existing state-of-the-art methods. In this workshop, we want to examine how well existing human and object reconstruction methods work under these more realistic settings and, more importantly, understand how they can benefit each other for accurate interaction reconstruction. The recently released BEHAVE dataset (CVPR'22) enables, for the first time, joint reasoning about human-object interactions in real settings. We will use the dataset and this workshop to spark research in human-object interaction modeling. A publicly available baseline approach and evaluation protocol, CHORE (Xie et al., ECCV'22), is already in place for the challenge. The authors of BEHAVE and CHORE will help organize the challenge.
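For orientation, reconstruction quality in this kind of benchmark is commonly reported with surface metrics such as the Chamfer distance between predicted and ground-truth geometry. The snippet below is a minimal, illustrative sketch of such a metric in Python (NumPy/SciPy); it is not the official challenge evaluation code, and the function and variable names are our own.

    # Illustrative only: a symmetric Chamfer distance between two point clouds,
    # a common surface metric for reconstruction benchmarks. Not the official
    # challenge evaluation code; names here are hypothetical.
    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_distance(pred_points: np.ndarray, gt_points: np.ndarray) -> float:
        """Mean bidirectional nearest-neighbour distance between two (N, 3) point sets."""
        pred_to_gt, _ = cKDTree(gt_points).query(pred_points)  # each predicted point -> nearest GT point
        gt_to_pred, _ = cKDTree(pred_points).query(gt_points)  # each GT point -> nearest predicted point
        return 0.5 * (pred_to_gt.mean() + gt_to_pred.mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pred = rng.normal(size=(5000, 3))  # stand-in for points sampled from a predicted mesh
        gt = rng.normal(size=(5000, 3))    # stand-in for points sampled from a ground-truth mesh
        print(f"Chamfer distance: {chamfer_distance(pred, gt):.4f}")

In practice, the points would be sampled from the predicted and ground-truth human/object meshes after the alignment prescribed by the challenge protocol; please refer to the challenge website for the actual evaluation details.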

Challenge website

  • 3D human reconstruction
  • 6DoF pose estimation of rigid objects
  • Joint reconstruction of human and object

Important dates

  • Challenge open: February 1, 2023
  • Submission deadline: May 15, 2023
  • Winner award: June 19, 2023

Workshop Organizers

  • Xi Wang: ETH Zürich
  • Kaichun Mo: NVIDIA Research, Seattle
  • Nikos Athanasiou: Max Planck Institute for Intelligent Systems, Tübingen
  • Chun-Hao (Paul) Huang: Adobe, London
  • Gerard Pons-Moll: University of Tübingen and Max Planck Institute for Informatics, Saarland Informatics Campus
  • Otmar Hilliges: ETH Zürich

Challenge Organizers

  • Xianghui Xie: Max Planck Institute for Informatics, Saarland Informatics Campus
  • Bharat Lal Bhatnagar: Max Planck Institute for Informatics, Saarland Informatics Campus
  • Gerard Pons-Moll: University of Tübingen and Max Planck Institute for Informatics, Saarland Informatics Campus

Contact Info

E-mail: rhobinchallenge@gmail.com

Acknowledgements

Website template borrowed from: https://futurecv.github.io/ (Thanks to Deepak Pathak)