Intent Prediction in AR Shopping Experiences Using Multimodal Interactions of Voice, Gesture, and Eye Tracking: A Machine Learning Perspective
Abstract
Augmented Reality (AR) is transforming the shopping experience by allowing consumers to interact with virtual products in real time. Intent prediction, the task of inferring a consumer's intentions from their behavioral patterns and actions, is crucial for personalizing AR shopping environments. This paper explores how multimodal interactions, including voice commands, gesture recognition, and eye tracking, can be integrated into AR shopping experiences to predict user intent more effectively. We review current advances in multimodal interaction systems, discuss why intent prediction matters in AR, and assess how combining multiple input modalities affects prediction accuracy. We also identify open challenges and future directions for intent prediction in AR shopping, with the aim of improving user engagement, personalization, and the overall shopping experience.
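To make the idea of combining modalities concrete, the sketch below shows one common approach, late fusion: each modality (voice, gesture, eye tracking) contributes a numeric feature, and a simple linear scorer maps their weighted combination to an intent probability. All feature names, weights, and thresholds here are illustrative assumptions for exposition, not the model or values from the paper; a real system would learn these parameters from labeled interaction sessions.

```python
from dataclasses import dataclass
import math

@dataclass
class MultimodalObservation:
    # Illustrative per-modality features (all hypothetical):
    voice_buy_keyword: float   # keyword-spotting confidence for e.g. "add to cart"
    gesture_grab: float        # confidence the user grabbed/rotated the virtual product
    gaze_dwell_s: float        # seconds of gaze fixation on the product

def intent_score(obs: MultimodalObservation) -> float:
    """Late fusion: weighted sum of per-modality features, squashed to [0, 1]."""
    # Hand-picked weights for illustration; a deployed model would learn them.
    z = (1.8 * obs.voice_buy_keyword
         + 1.2 * obs.gesture_grab
         + 0.4 * min(obs.gaze_dwell_s, 5.0)  # cap dwell so gaze alone can't dominate
         - 2.0)                              # bias toward "no intent" by default
    return 1.0 / (1.0 + math.exp(-z))        # logistic squashing

def predict_intent(obs: MultimodalObservation, threshold: float = 0.5) -> str:
    return "purchase-intent" if intent_score(obs) >= threshold else "browsing"

# Strong voice signal + grab gesture + long dwell vs. a passive session:
engaged = MultimodalObservation(voice_buy_keyword=0.9, gesture_grab=0.8, gaze_dwell_s=4.0)
passive = MultimodalObservation(voice_buy_keyword=0.0, gesture_grab=0.1, gaze_dwell_s=0.5)
print(predict_intent(engaged))  # purchase-intent
print(predict_intent(passive))  # browsing
```

The point of the example is the abstract's central claim: a signal that is ambiguous in isolation (a long gaze dwell, say) becomes far more predictive when corroborated by another modality, which is why fused multimodal input tends to beat any single channel.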
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
©2024 All rights reserved by the respective authors and JAIGC