Revisiting non-parametric matching cost volumes for robust and generalizable stereo matching


Conference


K. Cheng, T. Wu, C. G. Healey
36th Conference on Neural Information Processing Systems (NeurIPS 2022), vol. 35, 2022, pp. 16305–16318

Cite

APA
Cheng, K., Wu, T., & Healey, C. G. (2022). Revisiting non-parametric matching cost volumes for robust and generalizable stereo matching. In 36th Conference on Neural Information Processing Systems (NeurIPS 2022) (Vol. 35, pp. 16305–16318).


Chicago/Turabian
Cheng, K., T. Wu, and C. G. Healey. “Revisiting Non-Parametric Matching Cost Volumes for Robust and Generalizable Stereo Matching.” In 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 35:16305–16318, 2022.


MLA
Cheng, K., et al. “Revisiting Non-Parametric Matching Cost Volumes for Robust and Generalizable Stereo Matching.” 36th Conference on Neural Information Processing Systems (NeurIPS 2022), vol. 35, 2022, pp. 16305–18.


BibTeX

@conference{cheng2022a,
  title = {Revisiting non-parametric matching cost volumes for robust and generalizable stereo matching},
  year = {2022},
  pages = {16305-16318},
  volume = {35},
  author = {Cheng, K. and Wu, T. and Healey, C. G.},
  booktitle = {36th Conference on Neural Information Processing Systems (NeurIPS 2022)}
}

Abstract

Stereo matching is a classic and challenging problem in computer vision that has recently witnessed remarkable progress driven by Deep Neural Networks (DNNs). This paradigm shift leads to two interesting and entangled questions that have not been addressed well. First, it is unclear whether stereo matching DNNs trained from scratch really learn to perform matching well. This paper studies the problem through the lens of white-box adversarial attacks. It presents a method of learning stereo-constrained, photometrically-consistent attacks, which are weaker adversarial attacks by design and yet can cause catastrophic performance drops for those DNNs. This observation suggests that the networks may not actually learn to perform matching well, since they should otherwise potentially achieve even better results after stereo-constrained perturbations are introduced. Second, stereo matching DNNs are typically trained under the simulation-to-real (Sim2Real) pipeline due to the data-hungry nature of DNNs, so alleviating the impact of the Sim2Real photometric gap on stereo matching DNNs becomes a pressing need. Towards jointly adversarially robust and domain-generalizable stereo matching, this paper proposes to learn DNN-contextualized, binary-pattern-driven non-parametric cost volumes. It leverages the perspective of learning the cost aggregation via DNNs, and presents a simple yet expressive design that is fully end-to-end trainable, without resorting to specific aggregation inductive biases. In experiments, the proposed method is tested on the SceneFlow, KITTI 2015, and Middlebury datasets. It significantly improves adversarial robustness while retaining accuracy comparable to state-of-the-art methods, and it also shows better Sim2Real generalizability.
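To give a concrete sense of what a non-parametric, binary-pattern-driven matching cost is, the sketch below builds a classic census-transform cost volume in NumPy: each pixel is encoded as a binary pattern of intensity comparisons against its neighbours, and the matching cost at each candidate disparity is the Hamming distance between the left and right codes. This is only an illustrative baseline, not the DNN-contextualized formulation proposed in the paper; the function names, the 5x5 window, and the 64-disparity range are assumptions chosen for the example.

import numpy as np

def census_transform(img, window=5):
    # Encode each pixel as a binary pattern of "neighbour darker than centre"
    # comparisons over a (window x window) neighbourhood.
    h, w = img.shape
    r = window // 2
    padded = np.pad(img, r, mode="edge")
    codes = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return codes

def census_cost_volume(left, right, max_disp=64, window=5):
    # Cost volume of shape (max_disp, H, W): the cost of matching left pixel
    # (y, x) to right pixel (y, x - d) is the Hamming distance of their codes.
    cl, cr = census_transform(left, window), census_transform(right, window)
    h, w = left.shape
    volume = np.full((max_disp, h, w), window * window - 1, dtype=np.float32)
    for d in range(max_disp):
        diff = cl[:, d:] ^ cr[:, :w - d]  # XOR of the two binary patterns
        hamming = np.unpackbits(
            diff.view(np.uint8).reshape(h, w - d, 8), axis=-1
        ).sum(axis=-1)
        volume[d, :, d:] = hamming
    return volume

# Example: two random grayscale views, 64 candidate disparities.
left = np.random.rand(120, 160).astype(np.float32)
right = np.random.rand(120, 160).astype(np.float32)
cost = census_cost_volume(left, right, max_disp=64)
disparity = cost.argmin(axis=0)  # winner-takes-all, no cost aggregation

In the paper's setting, such a non-parametric volume would be aggregated by an end-to-end trainable DNN rather than by the winner-takes-all argmin used above.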

