pCON: Polarimetric Coordinate Networks for Neural Scene Representations

Henry Peters, Yunhao Ba, Achuta Kadambi

University of California, Los Angeles

CVPR 2023, Vancouver, Canada

pCON learns to fit an image by learning a series of reconstructions with different singular values.
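The caption above describes fitting a series of reconstructions with different singular values. Such low-rank targets can be formed by truncating an image's singular value decomposition; below is a minimal NumPy sketch of that idea (the function name and rank schedule are illustrative assumptions, not the paper's exact training procedure):

```python
import numpy as np

def svd_targets(img, ranks):
    """Build a series of low-rank reconstructions of a grayscale
    image by truncating its singular value decomposition."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    # Each target keeps only the first k singular values,
    # giving the best rank-k approximation of the image.
    return [(u[:, :k] * s[:k]) @ vt[:k] for k in ranks]
```

With `ranks` increasing toward the full rank of the image, the targets progress from a coarse low-rank approximation to an exact reconstruction.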

Abstract
Neural scene representations have achieved great success in parameterizing and reconstructing images, but current state-of-the-art models are not optimized to preserve physical quantities. While these architectures can reconstruct color images accurately, they create artifacts when fitting maps of polarimetric quantities. We propose polarimetric coordinate networks (pCON), a new model architecture for neural scene representations aimed at preserving polarimetric information while accurately parameterizing the scene. Our model removes the artifacts created by current coordinate network architectures when reconstructing three polarimetric quantities of interest.



Results


Our model achieves higher SSIM and fewer artifacts on predicted AoLP, DoLP, and unpolarized intensity maps. Baseline models introduce noise or tiling artifacts, clearly visible on the checkerboard pattern on the floor, where all three quantities take large values. These artifacts appear on objects exhibiting both specular reflections, like the floor, and diffuse reflections, like the wall and doors in the background.
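For context, AoLP, DoLP, and unpolarized intensity are standard quantities derived from the linear Stokes parameters. A minimal sketch, assuming a four-angle polarizer capture (0°, 45°, 90°, 135°); the function name and capture setup are illustrative, not the paper's pipeline:

```python
import numpy as np

def polarimetric_maps(i0, i45, i90, i135):
    """Compute AoLP, DoLP, and unpolarized intensity from four
    polarizer-angle captures (0, 45, 90, 135 degrees)."""
    # Linear Stokes parameters
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    # Degree of linear polarization, in [0, 1]
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    # Angle of linear polarization, in [-pi/2, pi/2]
    aolp = 0.5 * np.arctan2(s2, s1)
    # Unpolarized component of the intensity
    unpol = s0 * (1.0 - dolp)
    return aolp, dolp, unpol
```

Because AoLP is an angle and DoLP is a ratio, small intensity errors can produce disproportionately large errors in these maps, which is why artifact-free reconstruction matters here.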


Citation

@inproceedings{peters2023pcon,
  title={pCON: Polarimetric Coordinate Networks for Neural Scene Representations},
  author={Peters, Henry and Ba, Yunhao and Kadambi, Achuta},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}


Contact

Henry Peters
Computer Science Department
hpeters@ucla.edu
 
Yunhao Ba
Electrical and Computer Engineering Department
yhba@ucla.edu