The first Workshop on
Reconstruction of Human-Object Interactions (RHOBIN)




June 19 @ CVPR 2023 In Person
East 18 @ Vancouver Convention Center



We are happy to announce the winners of the RHOBIN Challenge! Thanks to the generous sponsorship of Adobe, each winning team will receive a gift of 500 USD 🎉!

Challenge Winners


3D human reconstruction
Hao Chen, Xia Jia (NIP3D-ARteam)
Yinghui Fan, Qi Fang, Yanjun Li (NetEase Games AI Lab)

6DoF pose estimation of rigid objects
Zerui Zhang, Liangxian Cui, Xiao Lin, Bingqiao Qian, Jie Xiao (University of Science and Technology of China)

Joint reconstruction of human and object
Hyeongjin Nam, Daniel Sungho Jung, Kihoon Kim, Kyoung Mu Lee (Department of ECE & ASRI & IPAI, Seoul National University, Korea)

Paper Presentations

1. DeepRM: Deep Recurrent Matching for 6D Pose Refinement
Alexander J Avery (Rochester Institute of Technology)*; Andreas Savakis (Rochester Institute of Technology)
[PDF] [Video]

Abstract: Precise 6D pose estimation of rigid objects from RGB images is a critical but challenging task in robotics, augmented reality and human-computer interaction. To address this problem, we propose DeepRM, a novel recurrent network architecture for 6D pose refinement. DeepRM leverages initial coarse pose estimates to render synthetic images of target objects. The rendered images are then matched with the observed images to predict a rigid transform for updating the previous pose estimate. This process is repeated to incrementally refine the estimate at each iteration. The DeepRM architecture incorporates LSTM units to propagate information through each refinement step, significantly improving overall performance. In contrast to current 2-stage Perspective-n-Point based solutions, DeepRM is trained end-to-end, and uses a scalable backbone that can be tuned via a single parameter for accuracy and efficiency. During training, a multi-scale optical flow head is added to predict the optical flow between the observed and synthetic images. Optical flow prediction stabilizes the training process, and enforces the learning of features that are relevant to the task of pose estimation. Our results demonstrate that DeepRM achieves state-of-the-art performance on two widely accepted challenging datasets.
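The iterative render-match-update loop described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not DeepRM itself: the `render` and `predict_update` functions below are simple placeholders standing in for the actual renderer and learned LSTM matching network, and the "pose" is reduced to a 2D vector for clarity.

```python
import numpy as np

def render(pose):
    # Placeholder for rendering the object at the current pose estimate;
    # here the "synthetic image" is just the pose itself.
    return pose.copy()

def predict_update(rendered, observed):
    # Placeholder for the learned matcher that predicts a transform update;
    # here it simply steps halfway along the residual.
    return 0.5 * (observed - rendered)

def refine(initial_pose, observed, n_iters=20):
    # Incrementally refine the coarse estimate, one update per iteration.
    pose = initial_pose.copy()
    for _ in range(n_iters):
        synthetic = render(pose)
        pose = pose + predict_update(synthetic, observed)
    return pose

coarse = np.array([0.0, 0.0])   # initial coarse pose estimate
target = np.array([1.0, 2.0])   # pose implied by the observed image
refined = refine(coarse, target)
```

With the halving update, the residual shrinks geometrically, mirroring how each recurrent refinement step in the paper reduces the pose error.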

2. KBody: Towards general, robust, and aligned monocular whole-body estimation
Nikolaos Zioulis (Independent Researcher)*; James F O'Brien (Klothed Technologies)
[PDF] [Video]

Abstract: KBody is a method for fitting a low-dimensional body model to an image. It follows a predict-and-optimize approach, relying on data-driven model estimates for the constraints that will be used to solve for the body's parameters. Acknowledging the importance of high-quality correspondences, it leverages "virtual joints" to improve fitting performance, disentangles the optimization between the pose and shape parameters, and integrates asymmetric distance fields to strike a balance in terms of pose and shape capturing capacity, as well as pixel alignment. We also show that generative model inversion offers a strong appearance prior that can be used to complete partial human images and serves as a building block for generalized and robust monocular body fitting. Project page: https://klothed.github.io/KBody.
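The predict-and-optimize idea can be illustrated with a toy fit: data-driven estimates supply target correspondences, and low-dimensional pose and shape parameters are solved for by iterative optimization. Everything below is an illustrative stand-in, not KBody's actual formulation: a linear model with orthonormal bases replaces the parametric body model, and plain gradient descent replaces the paper's constrained fitting.

```python
import numpy as np

# Toy orthonormal "pose" (3-dim) and "shape" (2-dim) bases over 10 keypoints.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(10, 5)))
A, B = Q[:, :3], Q[:, 3:]

def keypoints(pose, shape):
    # Linear stand-in for the body model's keypoint predictions.
    return A @ pose + B @ shape

# Targets playing the role of data-driven correspondence estimates.
true_pose = np.array([1.0, -0.5, 2.0])
true_shape = np.array([0.3, 0.7])
target = keypoints(true_pose, true_shape)

def fit(target, lr=0.5, steps=100):
    pose, shape = np.zeros(3), np.zeros(2)
    for _ in range(steps):
        r = keypoints(pose, shape) - target  # residual vs. the constraints
        # Separate pose and shape updates, echoing the abstract's
        # disentangled pose/shape optimization.
        pose -= lr * (A.T @ r)
        shape -= lr * (B.T @ r)
    return pose, shape

pose, shape = fit(target)
```

Because the toy bases are orthonormal, the two parameter groups decouple exactly; in the real problem they interact, which is precisely why the paper's disentangling and asymmetric distance fields matter.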

3. Pretrained Pixel-Aligned Reference Network for 3D Human Reconstruction
Gee-Sern Hsu (National Taiwan University of Science and Technology)*; Yu-Hong Lin (National Taiwan University of Science and Technology); Chin-Cheng Chang (National Taiwan University of Science and Technology)
[PDF] [Video]

Abstract: We propose the Pretrained Pixel-aligned Reference (PPR) network for 3D human reconstruction. The PPR network utilizes a pretrained model embedded with a reference mesh surface and full-view normals to better constrain spatial query processing, leading to improved mesh surface reconstruction. Our network consists of a dual-path encoder and a query network. The dual-path encoder extracts front-back view features from the input image through one path, and full-view reference features from a pretrained model through the other path. These features, along with additional spatial traits, are concatenated and processed by the query network to estimate the desired mesh surface. During training, we consider points on the pretrained model as well as around the ground-truth mesh surfaces, enabling the implicit function to better capture the mesh surface and overall posture. We evaluate the performance of our approach through experiments on the THuman 2.0 and RenderPeople datasets, and compare it with state-of-the-art methods.
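The query step described in the abstract, concatenating features from the two encoder paths with spatial traits and scoring a query point, can be sketched as a toy function. The fixed random-weight MLP and feature dimensions below are placeholders; PPR's actual encoders and query network are trained, and the features are extracted from the input image and the pretrained reference model rather than sampled at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_occupancy(img_feat, ref_feat, point, W1, b1, W2, b2):
    # Concatenate image-path features, reference-path features, and the
    # 3D query point (the "spatial traits"), then run a tiny MLP that
    # outputs an occupancy score in (0, 1) for surface extraction.
    x = np.concatenate([img_feat, ref_feat, point])
    h = np.maximum(W1 @ x + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # sigmoid occupancy

# Illustrative dimensions and untrained placeholder weights.
d_img, d_ref, hidden = 8, 8, 16
W1 = rng.normal(size=(hidden, d_img + d_ref + 3))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden,))
b2 = 0.0

img_feat = rng.normal(size=d_img)   # stand-in for front-back view features
ref_feat = rng.normal(size=d_ref)   # stand-in for pretrained reference features
occ = query_occupancy(img_feat, ref_feat, np.array([0.1, -0.2, 0.4]),
                      W1, b1, W2, b2)
```

Evaluating this implicit function over a grid of query points and extracting the level set is what yields the reconstructed mesh surface in pixel-aligned implicit approaches.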

Abstract Presentations

Learning Hand-Held Object Reconstruction from In-The-Wild Videos
Aditya Prakash (University of Illinois Urbana-Champaign)*; Matthew Chang (UIUC); Matthew Jin (University of Illinois at Urbana-Champaign); Saurabh Gupta (UIUC)
[PDF] [Video]

HandyPriors: Physically Consistent Perception of Hand-Object Interactions with Differentiable Priors
Yi-Ling Qiao (University of Maryland, College Park)*; Shutong Zhang (University of Toronto); Guanglei Zhu (University of Toronto); Eric Heiden (NVIDIA); Dylan Turpin (University of Toronto); Ming C Lin (UMD-CP & UNC-CH); Miles Macklin (NVIDIA); Animesh Garg (University of Toronto, Vector Institute, Nvidia)
[PDF] [Video]

Contact Info

E-mail: rhobinchallenge@gmail.com