Generative AI for Intent-Based Multimodal User Interfaces
Deadline: December 18, 2024 - 00:00
Advisor: Prof. Maristella Matera
Keywords: large language models, AI, accessibility, interfaces
LLMs are revolutionizing our interaction with machines thanks to their ability to understand user intents from natural-language prompts. Their new multimodal capabilities enable an intuitive UX, bridging the gap between human intention and machine execution with benefits for both usability and inclusivity. However, a critical question remains: how can we leverage this increased interpretation ability to unlock disruptive interaction paradigms and “fluid” UIs that increase users’ productivity while also advancing accessibility and inclusivity? And how will this transition materialize in practical terms? In this context, the research conducted by the doctoral student will investigate how LLMs can help create adaptive, prompt-driven multimodal UIs that support efficient ways of performing digital tasks. LLMs will help i) interpret user needs and goals, and ii) dynamically generate strategies and plans for interfaces and interactions. The characterizing features of this new interaction paradigm, its interaction patterns, and the building blocks of the new interfaces will be identified through intensive user research and formalized within a development framework, contributing a novel design system and a UI toolkit. Accessibility of the resulting paradigm will be prioritized.
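For illustration only, the sketch below shows one possible reading of the two-step flow mentioned above: (i) an LLM interprets a natural-language prompt into a structured user intent, and (ii) that intent drives the generation of a declarative UI plan. All names, data structures, and the stubbed `call_llm` function are hypothetical assumptions, not part of the research topic or any existing toolkit; a real system would replace the stub with an actual multimodal model invocation.

```python
from dataclasses import dataclass, field
import json


@dataclass
class UserIntent:
    goal: str                          # e.g. "book a train ticket"
    constraints: list[str] = field(default_factory=list)  # e.g. accessibility needs


@dataclass
class UIPlan:
    components: list[dict]             # declarative building blocks (widgets, dialogs, ...)
    interaction_pattern: str           # e.g. "wizard", "single form", "conversational"


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned structured response."""
    return json.dumps({
        "goal": "book a train ticket",
        "constraints": ["screen-reader friendly", "large touch targets"],
        "components": [
            {"type": "search_field", "label": "Destination"},
            {"type": "date_picker", "label": "Travel date"},
            {"type": "confirm_button", "label": "Book"},
        ],
        "interaction_pattern": "wizard",
    })


def interpret_intent(user_prompt: str) -> UserIntent:
    """Step (i): map the natural-language prompt to a structured intent."""
    parsed = json.loads(call_llm(user_prompt))
    return UserIntent(goal=parsed["goal"], constraints=parsed["constraints"])


def plan_interface(intent: UserIntent) -> UIPlan:
    """Step (ii): generate a UI plan tailored to the intent and its constraints."""
    parsed = json.loads(call_llm(intent.goal))
    return UIPlan(components=parsed["components"],
                  interaction_pattern=parsed["interaction_pattern"])


if __name__ == "__main__":
    intent = interpret_intent("I need to get to Milan tomorrow morning")
    plan = plan_interface(intent)
    print(intent)
    print(plan)
```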
- Computer Science and Engineering
- PNRR-sponsored