What is it about?
This work focuses on improving how wearable devices like smartwatches and fitness trackers understand user actions, even when used across different people, devices, or environments. Many current systems struggle when they encounter changes, like switching from one user to another, or when only a small amount of labeled training data is available. To address this, we developed a new approach called ContrastSense. By utilizing contextual information, such as when the data is collected and who collects it, ContrastSense achieves consistent performance across diverse scenarios. ContrastSense was tested on tasks including activity and gesture recognition, showing significant improvements compared to existing methods.
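The core idea above — using contextual information such as when data is collected and by whom to guide contrastive learning toward domain-invariant features — can be sketched in code. The following is a minimal illustrative example, not the paper's actual implementation: the function name, the InfoNCE-style loss, and the rule that positive pairs share an activity label but come from different users/domains are all simplifying assumptions for illustration.

```python
import numpy as np

def domain_aware_contrastive_loss(emb, labels, domains, temperature=0.1):
    """Illustrative InfoNCE-style loss (a sketch, not ContrastSense itself).

    Positives are samples with the same activity label but a *different*
    domain (e.g., a different user or device), which nudges the encoder
    toward representations that generalize across domains.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = emb @ emb.T / temperature                          # pairwise cosine similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # Positives: same label, different domain, excluding the anchor itself.
        pos = [j for j in range(n)
               if j != i and labels[j] == labels[i] and domains[j] != domains[i]]
        if not pos:
            continue
        # Denominator: all other samples compete with each positive.
        denom = np.sum(np.exp(sim[i][np.arange(n) != i]))
        for j in pos:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)

# Toy usage: 8 random embeddings, 2 activity classes, 2 domains (users).
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))
labels = [0, 0, 1, 1, 0, 0, 1, 1]
domains = [0, 0, 0, 0, 1, 1, 1, 1]
loss = domain_aware_contrastive_loss(emb, labels, domains)
```

Minimizing a loss of this shape pulls together same-activity samples recorded in different contexts, which is one way context labels can substitute for large amounts of hand-annotated data.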
Why is it important?
Wearable technology plays an increasingly vital role in health monitoring, fitness tracking, and gaming. However, existing systems often struggle to deliver consistent performance when faced with diverse users, devices, or environments. ContrastSense tackles two critical challenges: adapting to variations across users and devices, and overcoming the scarcity of labeled data, which is costly and time-consuming to collect. By enhancing the generalizability of wearable systems, ContrastSense reduces the reliance on extensive data collection, paving the way for smarter, more adaptable devices that cater to a broader and more diverse range of users.
Read the Original
This page is a summary of: ContrastSense: Domain-invariant Contrastive Learning for In-the-Wild Wearable Sensing, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, November 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3699744.