Abstract
This document focuses on Avatar Face Representation for XR Communication, outlining a framework for standardizing the representation and animation of avatar faces in XR communication scenarios. It aims to standardize the human-to-3D-avatar face modeling process for users of XR glasses. This process maps an individual's facial features onto their avatar so that the 3D avatar closely resembles the real face it represents, even when the XR device obscures certain facial regions.

The document also emphasizes the critical need for real-time facial animation during communication: facial expressions must stay synchronized with the glasses wearer's movements and gestures. Achieving this level of synchronization requires advanced tracking technologies capable of capturing and processing facial data instantaneously and generating accurate digital representations in real time.

The framework covers two steps:

(a) Human Face Capturing: This initial step captures the user's facial features using XR device sensors. The standard specifies the types and configurations of sensors (e.g., external cameras, infrared sensors, depth sensors) integrated into XR glasses to capture facial data effectively (see the capture sketch below).

(b) Facial Data Mapping and Rendering: The captured facial data is processed by algorithms that map the user's facial geometry and expressions onto the avatar. The processed data is then used to render the avatar's face in the virtual environment. Rendering happens in real time, in sync with the user's facial movements and expressions, providing a seamless and realistic avatar representation (see the animation-loop sketch after the capture sketch).
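As a non-normative illustration of step (a), the following Python sketch models one time-stamped sensor sample and a tracker stub that reduces it to expression coefficients. Every name here (FaceCaptureFrame, extract_blendshapes, the 52-coefficient blendshape vector) is an assumption made for illustration; the standard specifies the sensor types and configurations, not this data layout.

    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class FaceCaptureFrame:
        """One time-stamped sample of facial data from the XR glasses'
        sensors. The field layout is illustrative, not normative."""
        timestamp_us: int      # capture time in microseconds
        rgb_image: np.ndarray  # external-camera frame, HxWx3 uint8
        ir_image: np.ndarray   # infrared frame, e.g. from eye-region sensors
        depth_map: np.ndarray  # per-pixel depth in metres, HxW float32

    def extract_blendshapes(frame: FaceCaptureFrame) -> np.ndarray:
        """Hypothetical tracker: reduce raw sensor data to expression
        coefficients (here, 52 blendshape weights in [0, 1]). A real
        implementation would run a face-tracking model and infer the
        regions occluded by the headset from the IR/eye sensors."""
        num_blendshapes = 52
        return np.zeros(num_blendshapes, dtype=np.float32)  # placeholder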
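For step (b), a minimal sketch of the per-frame mapping-and-rendering loop follows, reusing extract_blendshapes from the sketch above. The sensors.read(), avatar.apply_blendshapes(), and avatar.render() calls, and the 60 fps budget, are hypothetical stand-ins for a device SDK and a rendering engine, not interfaces defined by this document.

    import time

    TARGET_FPS = 60  # assumed real-time budget, for illustration only

    def animate_avatar(sensors, avatar):
        """Per-frame loop: capture -> map -> render."""
        frame_budget = 1.0 / TARGET_FPS
        while True:
            start = time.monotonic()
            frame = sensors.read()                # (a) capture facial data
            weights = extract_blendshapes(frame)  # (b) map expressions
            avatar.apply_blendshapes(weights)     # drive the avatar's face rig
            avatar.render()                       # draw in the virtual scene
            # Sleep off any remaining budget so the avatar stays in sync
            # with the wearer without busy-waiting.
            time.sleep(max(0.0, frame_budget - (time.monotonic() - start)))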
General information
- Status: Under development
- Stage: New project registered in TC/SC work programme [20.00]
- Edition: 1
- Technical committee: ISO/IEC JTC 1/SC 24