What is it about?
With the development of applications associated with ego-vision systems, smartphones, and autonomous cars, automated analysis of videos generated by freely moving cameras has become a major challenge for the computer vision community. Current techniques are still not suited to real-life situations, in particular because of wide scene variability and the large range of camera motions. Whereas most approaches attempt to control those parameters, this paper introduces a novel video analysis paradigm, ‘vide-omics’, inspired by the principles of genomics, where variability is the expected norm. This new concept is validated by designing an implementation that addresses foreground extraction from videos captured by freely moving cameras. Evaluation on a set of standard videos demonstrates both robust performance that is largely independent of camera motion and scene, and state-of-the-art results on the most challenging videos. These experiments underline not only the validity of the ‘vide-omics’ paradigm, but also its potential.
Why is it important?
Camera-based visual surveillance systems were supposed to deliver a safer and more secure society. But despite decades of development, they are generally not able to handle real-life situations. By treating differences between the images that make up a video as mutations, we can apply the techniques developed for genomic analysis to video. If the “vide-omics” principle were to be adopted, the coming decade could deliver much smarter cameras.
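To make the mutation analogy concrete, here is a minimal sketch (not the paper's actual pipeline): a frame is compared pixel by pixel against an aligned reference, and pixels whose values have "mutated" beyond a threshold are flagged as foreground. The function name, threshold value, and toy data are all illustrative assumptions.

```python
import numpy as np

def foreground_mask(reference, frame, threshold=30):
    """Hypothetical illustration of the 'mutation' idea: per-pixel
    differences between an aligned reference frame and the current
    frame are treated as mutations; pixels mutating beyond the
    threshold are flagged as foreground."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold

# Toy example: a flat 8x8 background with a 2x2 "object" in one frame.
background = np.full((8, 8), 100, dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200  # these four pixels "mutate" away from the reference

mask = foreground_mask(background, frame)
print(int(mask.sum()))  # 4 foreground pixels detected
```

In a freely moving camera setting, the reference would first have to be aligned to the current frame (the analogue of sequence alignment in genomics) before differences could meaningfully be read as mutations.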
Read the Original
This page is a summary of: Vide-omics: A genomics-inspired paradigm for video analysis, Computer Vision and Image Understanding, January 2018, Elsevier, DOI: 10.1016/j.cviu.2017.10.003.