What is it about?

Neural Radiance Fields (NeRF) have transformed 3D computer vision, but their reliance on collecting sensitive imagery raises privacy concerns. SplitNeRF, a collaborative training framework, mitigates these risks through split learning, in which a client and a server jointly train the model while the client's local data never leaves its device. Vulnerabilities remain, however: the Surrogate Model Attack and the Scene-aided Surrogate Model Attack show that the gradients exchanged during split learning, optionally combined with a few leaked scene images, can be exploited to compromise the private scene. To defend against these attacks, S²NeRF perturbs the shared gradient information with noise, strengthening privacy while preserving model utility. Extensive evaluations confirm that S²NeRF effectively protects privacy while enabling secure NeRF training for sensitive applications.
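
To make the core idea concrete, the sketch below shows one way gradient perturbation at a split-learning boundary can be implemented in PyTorch. This is a minimal illustration, not the paper's actual mechanism: the function name, clipping bound, and noise multiplier are assumptions made for the example, and S²NeRF's actual noise calibration is described in the full paper.

    import torch

    def perturb_shared_gradient(grad: torch.Tensor,
                                clip_norm: float = 1.0,
                                noise_multiplier: float = 0.5) -> torch.Tensor:
        # Clip the gradient so its norm is bounded, which makes the noise
        # scale meaningful relative to any single gradient's influence.
        scale = torch.clamp(clip_norm / (grad.norm() + 1e-12), max=1.0)
        clipped = grad * scale
        # Add Gaussian noise calibrated to the clipping bound before the
        # gradient leaves the trusted party.
        noise = torch.randn_like(clipped) * noise_multiplier * clip_norm
        return clipped + noise

    # Example: protect a cut-layer gradient before sending it across the split.
    grad_at_cut = torch.randn(4, 256)  # stand-in for a real cut-layer gradient
    protected = perturb_shared_gradient(grad_at_cut)

In a split-learning setup like the one described above, only the perturbed gradient crosses the network boundary, so an attacker training a surrogate model observes a noisy signal rather than the exact gradients that the reconstruction attacks exploit.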

Why is it important?

This work introduces S²NeRF, a privacy-preserving framework for NeRF training that addresses growing concerns over data privacy in 3D model generation. By combining split learning with calibrated gradient noise, it protects the training data while sacrificing little model performance. The approach is timely, meeting the increasing demand for secure AI in sensitive applications.

Perspectives

Working on this paper has been an exciting journey, combining my passion for privacy-preserving AI with cutting-edge 3D vision techniques. I hope this work sparks meaningful discussions about balancing innovation and privacy, inspiring further advancements in secure AI technologies for applications that deeply impact individuals and society.

Bokang Zhang
The Chinese University of Hong Kong, Shenzhen

Read the Original

This page is a summary of: S²NeRF: Privacy-preserving Training Framework for NeRF, December 2024, ACM (Association for Computing Machinery).
DOI: 10.1145/3658644.3690185.
You can read the full text via the DOI above.
