AI-driven HMI/UX for Autonomous Driving, Mercedes-Benz
Contributed to an autonomous-driving HMI with AI-based monitoring, designing clear, safe interactions for drivers and passengers that were later adopted across multiple vehicle classes.
Client
Mercedes-Benz
Role
Product Designer / PM: a lean, startup-size team within a focused division
Team
1 Product Designer, PM / 1 GUI Designer / 60 Developers + 200+
Period
12 months, 2017–2018
Overview
Mercedes-Benz, in collaboration with LG, set out to develop an AI-based HMI/UX for Level 3 autonomous driving. As Product Designer and Project Manager, I worked with cross-functional teams and learned a great deal through research and development. Together, we helped design a multimodal AI framework that uses machine learning to interpret head movement, gaze, gestures, posture, and driver health in real time. The system enabled safer, more intuitive interactions, improved the user experience, and delivered significant business impact.
What I did
Project Management, UX Strategy & Planning, User Research, UI / Interaction Concept (Wireframing & Prototyping)
What LG did: the story behind the project
After global OEMs failed to deliver Mercedes’ first autonomous HMI, Mercedes came to LG’s small vehicle division. At the time, LG faced 4,000 layoffs: many employees had just been transferred from our collapsed mobile division to automotive.
Several senior engineers and managers had already struggled with the project, and with only a week remaining before the concept proposal presentation, I was asked to take it on as the youngest member of the team.
I rushed to used luxury car markets, manually disassembled 20+ vehicles and game devices, and interviewed every hidden expert across LG. After five sleepless nights, I completed the product plan and design with the help of many of those hidden people. Mercedes was stunned and gradually expanded the solution from the S-Class to the C-Class, leading to $750M+ in annual recurring revenue for LG.
Challenge
After several failed attempts by senior engineers and global OEMs, the autonomous HMI project was handed to me—alone—with only five days left.
With mass layoffs looming and no team or clear specs, I had to rebuild trust from the ground up, starting with used car markets and raw parts.
Objective
We had to combine field research, rapid prototyping, and user-centric intuition to quickly create a realistic and scalable solution that would meet Mercedes’ expectations for the first real-world HMI prototype for autonomous driving.
Result
We completed a full HMI product plan in 5 days. Mercedes adopted and scaled this solution, ultimately generating over $750 million in annual recurring revenue for LG. More importantly, thanks to its many hidden talents, it restored trust in a team that was almost dismantled.
Preview final design
Our UX approach focused on enabling intuitive, full-body interactions that support both autonomy and comfort in high-stakes driving scenarios. By integrating head posture, eye tracking, and hand pointing gestures, we created a proactive HMI that eliminates the need for memorized commands.
The design allows users, driver and passenger alike, to interact naturally under pressure, minimizing distraction and maximizing safety. Every gesture and posture model was validated through rapid prototyping and real-world testing, ensuring clarity, speed, and emotional trust at every touchpoint.
In addition to these multimodal interactions, several advanced techniques and proprietary sensing technologies were applied throughout the system. Due to confidentiality agreements, certain technical details cannot be disclosed here—but they played a critical role in delivering a seamless and intelligent experience.
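While the actual system's sensing and models remain confidential, the core interaction idea above, requiring two modalities (such as gaze and a pointing gesture) to agree before acting, can be illustrated conceptually. This is a purely hypothetical sketch for explanation; all names, thresholds, and structures are my own assumptions and do not reflect the production implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One frame of (hypothetical) multimodal sensing output."""
    gaze_target: str          # object the driver's eyes rest on
    gesture_target: str       # object a pointing gesture indicates
    gaze_confidence: float    # 0.0 - 1.0
    gesture_confidence: float # 0.0 - 1.0

def fuse_intent(obs: Observation, threshold: float = 0.6) -> Optional[str]:
    """Confirm a selection only when gaze and gesture agree on the same
    target with sufficient combined confidence; otherwise do nothing,
    so the HMI never acts on a single ambiguous signal."""
    if obs.gaze_target != obs.gesture_target:
        return None  # modalities disagree: defer rather than guess
    combined = (obs.gaze_confidence + obs.gesture_confidence) / 2
    return obs.gaze_target if combined >= threshold else None

# Example: gaze and pointing agree with high confidence -> act
print(fuse_intent(Observation("left_mirror", "left_mirror", 0.9, 0.8)))
# Example: modalities disagree -> the system stays silent
print(fuse_intent(Observation("left_mirror", "sunroof", 0.9, 0.8)))
```

The design intent this mirrors is the one described above: no memorized commands, and no action unless the user's full-body signals clearly converge on one meaning.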
More detail
I’ll walk you through the details during our meeting.
Hello
Any questions?
Let's connect and build something meaningful together