Side Viewer: an intelligent cockpit design for elderly drivers with vision limitations

I teamed up with Violet Zhang on this project, and we built this intelligent cockpit experience together. My design interests lie in intelligent in-car cockpit experience design, while Violet hopes to discover design solutions for the aging population. We combined our interests in this exciting project.

Video demo of Side Viewer

What is an Intelligent Cockpit?

An automotive intelligent cockpit offers users a personalized interaction experience through a single computing core, multiple screens, multiple systems, voice recognition, gesture control, and more.

What is Side Viewer?

Side Viewer is an automotive intelligent cockpit interaction that helps older adult drivers with vision limitations travel safely. Many older adults have vision limitations that affect their driving safety. In our design, when the driver turns on the turn signal, the screen displays the car's side view so they can tell from the UI whether it is safe to turn. When it is safe to change lanes, a green indicator appears on the view and a voice directs the driver to go; when turning is unsafe, the interface turns red and a voice reminds the driver not to turn yet. This enhances the driver's awareness of the side view.
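The behavior above can be sketched as a small piece of decision logic. This is an illustrative sketch only: the function name, the `"off"`/`"green"`/`"red"` states, and the distance threshold are assumptions for explanation, not taken from our actual prototype.

```python
# Illustrative sketch of the Side Viewer decision logic.
# The threshold and state names are hypothetical.

SAFE_GAP_METERS = 10  # assumed minimum clear distance in the target lane

def side_viewer_state(turn_signal_on: bool, nearest_car_gap: float) -> str:
    """Return which overlay the side-view display should show."""
    if not turn_signal_on:
        return "off"    # no signal: no side view shown
    if nearest_car_gap >= SAFE_GAP_METERS:
        return "green"  # safe: green indicator and a voice directing "go"
    return "red"        # unsafe: red interface and a voice warning not to turn
```

In the classroom prototype, the "gap" judgment was baked into the pre-recorded videos rather than computed live.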
This experience is meant to happen in a car, but we presented it in class with a big screen as the front window view, a computer monitor as the in-car display, and a steering wheel with turn signal switches.

What inspired us?

The reading that inspired this work is What Can a Body Do? by Sara Hendren. She claims that all technologies are assistive technologies, because human bodies are limited. People use tools to help them in their daily lives, and these tools are really body extensions: chopsticks extend our hands, and a ladder extends our legs. In this project, we treat Side Viewer as an extension of the eyes, making driving much more accessible for older adults with vision limitations.

Who is it for?

Here is our persona, Bob.


In this project, we designed both the digital interface of the screen display and the physical prototype. For the digital display, we recorded videos and overlaid the designed interfaces onto them in Premiere Pro.

For the physical prototype, we used the Makey Makey kit as our main circuit and programmed it in Scratch. We connected a wire to each turn signal switch on the steering wheel and wrote code to play the corresponding side-view video.

Testing Makey Makey circuit in our studio
Code in Scratch

Final Delivery

Presentation in class


References:
What Can a Body Do? by Sara Hendren
The 8 Challenges of Aging
Aging Issues: Older Drivers


Project 2 Service: JRN, a 100% Human Custom Illustration Website

In this project, I teamed up with Rhebsa and Nimi. To best showcase our skills and provide a service for our classmates, we created the website JRN, named after our initials.

The JRN website user flow.

What is JRN?

JRN is a website that makes custom illustrations drawn entirely by humans, not AI. On the website, the user uploads a portrait picture and receives an animated, hand-drawn illustration process. While waiting for the artist to finish the illustration, the user can play with an interactive loading board: a real-time, mouse-driven, pixelated version of the uploaded picture. When the wait is over, the user can watch the animated illustration process and download the final avatar.

Why We Made This and Our Intention:

The three of us have very distinctive skills and strengths. Rhebsa is an amazing illustrator, Nimi makes the most beautiful websites, and I am good at making interactions in Processing. Our intention was to build something fun for all of our classmates. We believe that colors and art uplift daily life, and we hope to share this joy through a platform that offers original illustrations and small interactive art.

The Visual and Interactive Forms:

The work has three essential components: the website, the interactive art, and the animated illustration. The landing page of the website features a title and illustrations by Rhebsa emphasizing what the site is for.

Landing page and upload pictures button.

When the user scrolls down, a short About Us section lets people know that the illustrations are made 100% by humans.

About us.

After the user uploads the portrait, interactive art is generated from the original image. Moving the mouse horizontally lets the user explore the circles and the circle-formed portrait.

Processing interactive art.

When the wait time is over, the animated illustration appears, and the user can finally download the final illustration.

Animated illustration process.
The png file that can be downloaded and the original photo.

My Part about Making the Interactive Art in Processing:

When designing the waiting state of the illustration, I asked myself, "What form could sit between a real-life picture and a 2D graphic?" That question led to the idea of interactive pixelated art. The specific Processing variant I used is Processing.js (Pjs).

Every picture has a contrast between brightness and darkness. In the code, dark areas of the picture are rendered as a darker background, while bright areas are represented by circles.
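The core idea can be sketched in plain Python (the real sketch was written in Processing.js). A grayscale image is a grid of 0-255 values; each block of pixels becomes a circle whose radius grows with average brightness, so dark areas stay close to the background. The function name and parameters below are illustrative, not taken from my actual Pjs code.

```python
# Sketch of the pixelation idea: map each cell's average brightness
# to a circle radius. Names and defaults are hypothetical.

def circle_radii(gray, cell=2, max_radius=10.0):
    """Average brightness per cell x cell block, mapped to a radius."""
    h, w = len(gray), len(gray[0])
    radii = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [gray[j][i]
                     for j in range(y, min(y + cell, h))
                     for i in range(x, min(x + cell, w))]
            avg = sum(block) / len(block)
            row.append(max_radius * avg / 255.0)  # brighter -> bigger circle
        radii.append(row)
    return radii
```

In the actual Pjs sketch, the mouse's horizontal position can scale a value like `max_radius` on every frame, which is what makes the loading board feel interactive.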

Original code in Processing.


Overall, we did a good job delivering this service, and we can see its potential to grow to a larger user group. There are still areas to improve, though, including the time-lapse of the artist making the illustration. After receiving feedback from the class, we think it would be better to make the wait time feel more "human"; for example, we could show a cue such as "the artist is working on it" so users understand why the illustration is not done yet.

Try the prototype in Figma here:

My original code in Processing: