What is it about?
This work uses synchronized activity data collected from multiple wearable devices on the body to predict the performed activity. For example, imagine someone walking down the road wearing earbuds, a smartwatch, and a smartphone. Each device records the same action, but because of its distinct position and orientation on the body, each captures a naturally augmented view of the same underlying signal. ColloSSL exploits this property to train a feature extractor from unlabeled data, and the result outperforms existing state-of-the-art methods.
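The core idea can be illustrated with a contrastive (InfoNCE-style) objective: embeddings of time-aligned windows from different devices are treated as positive pairs, while windows from other times serve as negatives. This is a minimal generic sketch with toy embeddings, not ColloSSL's exact objective (the paper additionally selects and weights contributing devices); all data and names here are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the anchor toward its time-synchronized
    positive and push it away from negatives (windows from other times)."""
    pos = np.exp(cosine_sim(anchor, positive) / temperature)
    neg = sum(np.exp(cosine_sim(anchor, n) / temperature) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

# Toy embeddings: a smartwatch window (anchor) and the synchronized
# smartphone window (positive) should embed similarly; a window from a
# different moment (negative) should not.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # near-duplicate view
negative = rng.normal(size=8)                   # unrelated window

loss_aligned = contrastive_loss(anchor, positive, [negative])
loss_shuffled = contrastive_loss(anchor, negative, [positive])
print(loss_aligned < loss_shuffled)  # aligned device pairs yield lower loss
```

Minimizing such a loss drives the feature extractor to produce device-invariant representations of the underlying activity, which is what makes the learned features useful downstream.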
Why is it important?
ColloSSL achieves strong performance with only 10% of the labeled data. Missing labels are a very common problem with sensor data; ColloSSL circumvents it by training the feature extractor in a self-supervised fashion on unlabeled data, after which a small amount of labeled data suffices to adapt it to any body position.
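The low-label regime described above can be sketched as follows: a pretrained feature extractor is frozen, and only a small classifier head is trained on 10% of the labeled windows. The synthetic "features" below stand in for embeddings from such an extractor; the data, dimensions, and training loop are all illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic embeddings standing in for a frozen pretrained extractor's
# output, with labels generated from a hidden linear rule.
n, dim = 200, 4
features = rng.normal(size=(n, dim))
true_w = rng.normal(size=dim)
labels = (features @ true_w > 0).astype(int)

# Keep labels for only 10% of the windows, mimicking the low-label regime.
k = n // 10
idx = rng.permutation(n)
train_idx, test_idx = idx[:k], idx[k:]

# Train a logistic-regression head on the 10% labeled split.
w = np.zeros(dim)
for _ in range(1000):
    logits = features[train_idx] @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = features[train_idx].T @ (probs - labels[train_idx]) / k
    w -= 0.5 * grad

preds = (features[test_idx] @ w > 0).astype(int)
accuracy = (preds == labels[test_idx]).mean()
print(f"held-out accuracy with 10% labels: {accuracy:.2f}")
```

The point of the sketch: when the frozen features already separate the activities well, a tiny labeled set is enough to fit the classifier head, which is why good self-supervised pretraining makes the 10%-label result plausible.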
Read the Original
This page is a summary of: ColloSSL, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, March 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3517246.