AI-driven UX for Mercedes’ Autonomous Driving

Contributed to an autonomous-driving HMI with AI-based monitoring, designing clear, safe interactions for drivers and passengers that were later adopted across multiple vehicle classes.

Client

Mercedes-Benz

Role

Product Designer on a lean, startup-sized team within a focused division

Team

2 PMs / 2 Product Designers / 1 GUI Designer / 60 Developers + 200+

Period

24 months, 2017–2018

Overview

First AI-driven Autonomous HMI/UX Development for Mercedes

Mercedes-Benz collaborated with LG to develop an AI-based HMI and UX for Level 3 autonomous driving. Working with cross-functional teams, I gained valuable research and development experience while co-designing a multimodal AI framework that used machine learning to interpret head movement, gaze, gestures, posture, and driver health in real time. This system enabled safer and more intuitive interactions, enhanced the overall user experience, and generated significant business impact.
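
To make the idea of the monitoring framework concrete, here is a minimal illustrative sketch of how independent sensing channels (gaze, posture, gesture) might be fused into a single driver-state estimate. This is not the production system, whose internals are confidential; every name, weight, and threshold below is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical per-frame readings from independent monitoring channels.
# Each confidence is the channel model's certainty in its own estimate (0.0-1.0).
@dataclass
class ChannelReading:
    attentive: bool      # does this channel suggest the driver is attentive?
    confidence: float    # how certain the channel's model is

def fuse_driver_state(readings: dict[str, ChannelReading],
                      threshold: float = 0.6) -> str:
    """Confidence-weighted vote across channels (gaze, posture, gesture, ...).

    Returns "attentive", "distracted", or "uncertain" when no channel
    reports any confidence at all.
    """
    total = sum(r.confidence for r in readings.values())
    if total == 0:
        return "uncertain"
    score = sum(r.confidence for r in readings.values() if r.attentive) / total
    return "attentive" if score >= threshold else "distracted"

if __name__ == "__main__":
    frame = {
        "gaze":    ChannelReading(attentive=True,  confidence=0.9),
        "posture": ChannelReading(attentive=True,  confidence=0.7),
        "gesture": ChannelReading(attentive=False, confidence=0.4),
    }
    print(fuse_driver_state(frame))  # -> "attentive"
```

A weighted vote like this lets a noisy channel (say, gestures occluded by clothing) degrade gracefully instead of vetoing the whole estimate, which matches the goal of keeping the HMI proactive rather than brittle.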

Challenge

After several failed attempts by senior engineers and global OEMs, the autonomous HMI project was handed to me, alone, with only five days left.

With mass layoffs looming and no team or clear specs, I had to rebuild trust from the ground up, starting with parts sourced from used-car markets and raw components.

Objective

We had to combine field research, rapid prototyping, and user-centric intuition to quickly create a realistic and scalable solution that would meet Mercedes’ expectations for the first real-world HMI prototype for autonomous driving.

Result

We completed a full HMI product plan in five days. Mercedes adopted and scaled the solution, ultimately generating over $750 million in annual recurring revenue for LG. More importantly, it restored trust in a team of hidden talents that had nearly been dismantled.

Preview final design

Monitoring from head to toe and pointing gestures

Our UX approach focused on enabling intuitive, full-body interactions that support both autonomy and comfort in high-stakes driving scenarios. By integrating head posture, eye tracking, and hand pointing gestures, we created a proactive HMI that eliminates the need for memorized commands.

The design allows any user, driver or passenger, to interact naturally under pressure, minimizing distraction and maximizing safety. Every gesture and posture model was validated through rapid prototyping and real-world testing, ensuring clarity, speed, and emotional trust at every touchpoint.
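
As an illustration of how a pointing gesture can select an in-cabin target without memorized commands, the sketch below matches a fingertip ray against known target positions within a tolerance cone. The targets, coordinates, and tolerance are invented for the example and do not reflect the actual system.

```python
import math

# Hypothetical in-cabin targets with 3D positions (meters, cabin frame).
TARGETS = {
    "navigation_display": (0.4, 0.0, 0.9),
    "sunroof":            (0.0, 0.5, 1.3),
    "passenger_window":   (0.7, 0.3, 1.0),
}

def resolve_pointing(origin, direction, max_angle_deg=10.0):
    """Return the target whose bearing from the fingertip best matches
    the pointing ray, or None if nothing falls inside the tolerance cone."""
    best, best_angle = None, max_angle_deg
    dnorm = math.sqrt(sum(c * c for c in direction))
    for name, pos in TARGETS.items():
        to_target = tuple(p - o for p, o in zip(pos, origin))
        tnorm = math.sqrt(sum(c * c for c in to_target))
        if tnorm == 0 or dnorm == 0:
            continue
        cos_a = sum(d * t for d, t in zip(direction, to_target)) / (dnorm * tnorm)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

if __name__ == "__main__":
    # Fingertip at (0.3, 0.1, 0.5), pointing roughly toward the nav display.
    print(resolve_pointing((0.3, 0.1, 0.5), (0.1, -0.1, 0.4)))
```

The tolerance cone is the safety-relevant choice here: a tight cone avoids accidental activations under vehicle motion, while returning None forces the HMI to ask for confirmation instead of guessing.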

In addition to these multimodal interactions, several advanced techniques and proprietary sensing technologies were applied throughout the system. Due to confidentiality agreements, certain technical details cannot be disclosed here, but they played a critical role in delivering a seamless and intelligent experience.

More detail

I’ll walk you through the details during our meeting

Hello

Any questions?

Let's connect and build something meaningful together