What is it about?

Learners increasingly watch video lectures in ubiquitous settings. Existing video designs, however, are not optimized for such use, creating the need to adapt the style of these videos to the constraints of the learning platform and the context of use. Our formative study with experienced video editors found that performing these adaptations with traditional video editors is challenging and time-consuming. We therefore developed VidAdapter, a tool that facilitates lecture video adaptation by allowing direct manipulation of the video content. VidAdapter automatically extracts meaningful elements from the video, enables spatial and temporal reorganization of these elements, and streamlines modification of an element's visual appearance. We demonstrate its capabilities and specific use cases in the domain of adapting existing blackboard lecture videos for on-the-go learning on Optical Head-Mounted Displays. Our evaluation with experienced video editors showed that VidAdapter was strongly preferred over traditional approaches and can improve the efficiency of the adaptation process by over 53% on average.

Why is it important?

This work provides a novel interaction method and a tool that enables people to easily modify video content to fit the environment of Head-Mounted Displays. With this tool, people can easily adapt an ordinary video into one they can watch on an HMD while doing other things (e.g., walking) at the same time.

Perspectives

Our work presents a tool that facilitates lecture video adaptation for Head-Mounted Displays by allowing direct manipulation of the video content, and it can be applied to a wide range of scenarios such as medical learning.

Han Xiao
Technische Universiteit Eindhoven

Read the Original

This page is a summary of: VidAdapter, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, September 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3610928.
You can read the full text via the DOI above.
