3D reconstruction from multi-view images is a fundamental problem in computer vision and
robotics. The recent use of Neural Radiance Field (NeRF)-based implicit representations has significantly improved reconstruction accuracy, enabling photorealistic novel-view rendering.
However, these methods often require accurate camera poses as input, which can be
challenging to obtain in real-world scenarios. Moreover, current datasets may lack perfect
ground-truth 3D models for accurate evaluation. In this talk, we present our new findings on 3D
neural reconstruction. We introduce a dataset comprising real-world video captures paired with perfect ground-truth 3D models for evaluation. We propose a novel joint optimisation method that refines camera
poses during the reconstruction process.
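To illustrate the general idea of joint pose refinement during neural reconstruction, the sketch below shows a minimal, hypothetical PyTorch setup: per-image camera poses (axis-angle rotation plus translation) are treated as learnable parameters and optimised together with a toy radiance field through a photometric rendering loss. All names (TinyField, render_rays, etc.), sizes, and hyperparameters are illustrative assumptions, not the speaker's actual method.

```python
import torch
import torch.nn as nn

def skew(v):
    """Skew-symmetric matrix of a 3-vector (built with stack to stay differentiable)."""
    zero = torch.zeros((), dtype=v.dtype)
    return torch.stack([
        torch.stack([zero, -v[2], v[1]]),
        torch.stack([v[2], zero, -v[0]]),
        torch.stack([-v[1], v[0], zero]),
    ])

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = r.norm() + 1e-8
    K = skew(r / theta)
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

class TinyField(nn.Module):
    """Toy radiance field: 3D point -> (RGB, density)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 4))
    def forward(self, x):
        out = self.net(x)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])  # rgb, sigma

def render_rays(field, origins, dirs, n_samples=32, near=0.5, far=4.0):
    """Volume-render rays by alpha-compositing evenly spaced samples."""
    t = torch.linspace(near, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]   # (R, S, 3)
    rgb, sigma = field(pts)
    alpha = 1.0 - torch.exp(-sigma * (far - near) / n_samples)        # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                      # (R, 3)

# One learnable pose per image (axis-angle + translation), initialised here to
# identity; in practice they would come from rough or noisy pose estimates.
n_images, H, W, focal = 2, 8, 8, 10.0
pose_r = nn.Parameter(torch.zeros(n_images, 3))
pose_t = nn.Parameter(torch.zeros(n_images, 3))
field = TinyField()
opt = torch.optim.Adam([*field.parameters(), pose_r, pose_t], lr=1e-3)

images = torch.rand(n_images, H, W, 3)   # placeholder pixel targets

# Pixel-to-ray directions in the camera frame (simple pinhole model).
i, j = torch.meshgrid(torch.arange(W, dtype=torch.float32),
                      torch.arange(H, dtype=torch.float32), indexing='xy')
cam_dirs = torch.stack([(i - W / 2) / focal, -(j - H / 2) / focal,
                        -torch.ones_like(i)], dim=-1).reshape(-1, 3)

for step in range(200):
    k = step % n_images
    R = axis_angle_to_matrix(pose_r[k])
    dirs = cam_dirs @ R.T                     # rotate rays into the world frame
    origins = pose_t[k].expand_as(dirs)
    pred = render_rays(field, origins, dirs)
    loss = ((pred - images[k].reshape(-1, 3)) ** 2).mean()   # photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                # updates the field AND the poses
```

Joint refinement of this kind is generally sensitive to pose initialisation, which is part of what makes the problem discussed in the talk challenging.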
Post-Talk Link: Click Here
Passcode: g+3WQ2V1
Jiawang Bian is currently a postdoctoral researcher at the University of Oxford, collaborating with Prof. Philip Torr and Prof. Victor Adrian Prisacariu. His research focuses on 3D Computer Vision and Robotics. Jiawang earned his B.Eng degree from Nankai University under the guidance of Prof. Ming-Ming Cheng. He then worked as a research assistant at the Singapore University of Technology and Design. He completed his PhD at the University of Adelaide under the supervision of Prof. Ian Reid and Prof. Chunhua Shen. In addition to his academic pursuits, Jiawang has undertaken research internships at esteemed institutions and companies, including the Advanced Digital Sciences Center, TuSimple, Amazon, and Meta.