What is it about?
Neural Radiance Fields (NeRF) have transformed 3D computer vision, but their reliance on sensitive image data raises privacy concerns. SplitNeRF, a collaborative training framework built on split learning, mitigates these risks by training without sharing local data. However, it remains vulnerable: the Surrogate Model Attack and the Scene-aided Surrogate Model Attack exploit the shared gradients (and, in the scene-aided case, a few leaked scene images) to recover private scene information. To counter these attacks, S²NeRF injects noise into the shared gradients, enhancing privacy while largely preserving model utility. Extensive evaluations confirm that S²NeRF effectively protects privacy while enabling secure NeRF training for sensitive applications.
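For intuition, here is a minimal PyTorch sketch of the general idea of perturbing gradients at the split boundary. The layer sizes and the `clip_norm` and `noise_std` values are illustrative placeholders, and the clip-then-add-Gaussian-noise recipe shown is a standard gradient-perturbation pattern, not necessarily the exact mechanism or parameters S²NeRF uses.

```python
import torch
import torch.nn as nn

# Client-side half of a split model (layer sizes are illustrative only).
client_net = nn.Sequential(nn.Linear(63, 256), nn.ReLU())

def perturb_gradient(grad: torch.Tensor,
                     clip_norm: float = 1.0,
                     noise_std: float = 0.1) -> torch.Tensor:
    """Clip a gradient's norm and add Gaussian noise so the raw gradient
    signal never crosses the split boundary intact. clip_norm and
    noise_std are hypothetical values, not the paper's."""
    scale = (clip_norm / (grad.norm() + 1e-12)).clamp(max=1.0)
    return grad * scale + noise_std * torch.randn_like(grad)

# One toy split-learning step.
encoded_rays = torch.randn(8, 63)                # stand-in for encoded ray samples
smashed = client_net(encoded_rays)               # activations sent to the server
server_grad = torch.randn_like(smashed)          # stand-in for the gradient the server returns
smashed.backward(perturb_gradient(server_grad))  # resume local backprop with the noisy gradient
```

The noise magnitude governs the trade-off the summary describes: larger noise makes gradient-based reconstruction attacks harder but costs more model utility.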
Why is it important?
This work introduces S²NeRF, a privacy-preserving training framework for NeRF, addressing growing concerns over data privacy in 3D model generation. By combining split learning with calibrated noise on the shared gradients, it protects privacy without sacrificing much model performance. The approach is timely, meeting the increasing demand for secure AI in sensitive applications.
Read the Original
This page is a summary of: S²NeRF: Privacy-preserving Training Framework for NeRF, December 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3658644.3690185.