Shengze (Mike) Wang

I am a third-year PhD student at UNC Chapel Hill, advised by Professor Henry Fuchs. We aim to democratize telepresence by building low-cost desktop systems.

I did my undergrad at the University of Illinois Urbana-Champaign (UIUC), where I first learned 3D vision from Professor Derek Hoiem. I then went to CMU for the Master of Science in Computer Vision (MSCV). After working with Professor Michael Kaess and Dr. Lipu Zhou at CMU, I returned to UIUC to work for Reconstruct, founded by Professor Derek Hoiem and Professor Mani Golparvar-Fard. At Reconstruct, I gained much more experience in traditional 3D reconstruction and SLAM, and I earned my first patent. Much love to everybody at Reconstruct :). I loved every second there.

In 2022, I interned in Ronald Azuma's group at Intel Labs. In 2018, I interned at Uber ATG.

Fun Stuff: I'm mostly a metal/rock-and-roll guitarist, but I (try to) play other instruments when I make music. Before I joined the UIUC boxing team as a sparring partner, I did BJJ for two years.

shengzew [at]  /  Google Scholar  /  Twitter  /  My Music (SoundCloud)  /  NetEase Cloud Music


I mostly work on traditional and learning-based 3D vision (SLAM, 3D reconstruction, neural rendering).

My short-term goal is to democratize telepresence via affordable desktop systems.

My mid-term goal is to achieve holographic telepresence.

My long-term goal is to understand memory and intelligence through neuroscience and computational methods.

Bringing Telepresence to Every Desk
Shengze Wang, Ziheng Wang, Ryan Schmelzle, Liuejie Zheng, YoungJoong Kwon, Soumyadip Sengupta, Henry Fuchs
arXiv, 2023 [Project Page][Code]

We showcase a prototype personal telepresence system. Requiring only four RGBD cameras, it can be easily installed on any desk. Our renderer synthesizes high-quality free-viewpoint videos of the entire scene and outperforms prior neural rendering methods.

INV: Towards Streaming Incremental Neural Videos
Shengze Wang, Alexey Supikov, Joshua Ratcliff, Henry Fuchs, Ronald Azuma
arXiv, 2023 [Project Page][Code]

We discovered a natural information partition in 2D/3D MLPs: early layers store structural information, while later layers store color information. We leverage this property to incrementally stream dynamic free-viewpoint videos without the buffering required by prior dynamic NeRFs.

By significantly reducing training time and bandwidth, we lay the foundation for live-streaming NeRFs and a better understanding of MLPs.

PLC-LiSLAM: LiDAR SLAM With Planes, Lines, and Cylinders
Lipu Zhou, Guoquan Huang, Yinian Mao, Jincheng Yu, Shengze Wang, Michael Kaess
IEEE RA-L, 2022
EDPLVO: Efficient Direct Point-Line Visual Odometry (Outstanding Navigation Paper)
Lipu Zhou, Guoquan Huang, Yinian Mao, Shengze Wang, Michael Kaess
ICRA, 2022
Learning Dynamic View Synthesis With Few RGBD Cameras
Shengze Wang, YoungJoong Kwon, Yuan Shen, Qian Zhang, Andrei State, Jia-Bin Huang, Henry Fuchs
arXiv, 2022

We introduce a system that synthesizes dynamic free-viewpoint videos from two RGBD cameras. This is preliminary work toward our personal telepresence system.

DPLVO: Direct Point-Line Monocular Visual Odometry
Lipu Zhou, Shengze Wang, Michael Kaess
ICRA, 2021
Ο€-LSAM: LiDAR Smoothing and Mapping With Planes
Lipu Zhou, Shengze Wang, Michael Kaess
ICRA, 2021
A Fast and Accurate Solution for Pose Estimation from 3D Correspondences
Lipu Zhou, Shengze Wang, Michael Kaess
ICRA, 2020
Do Not Omit Local Minimizer: A Complete Solution for Pose Estimation from 3D Correspondences
Lipu Zhou, Shengze Wang, Jiamin Ye, Michael Kaess
arXiv, 2019
Unsupervised Learning of Monocular Depth Estimation with Bundle Adjustment, Super-Resolution and Clip Loss
Lipu Zhou, Jiamin Ye, Montiel Abello, Shengze Wang, Michael Kaess
arXiv, 2018

Thanks to Jon Barron for sharing his website template with the community.