Project 4 Inquiry: Visioner-AR Driving Glasses for Seniors with Glaucoma

Continuing the topic from Project 3, Violet and I worked on how we might design a smart driving experience for elderly drivers with glaucoma, so they can drive more safely and comfortably.

What is Visioner?

Visioner is a driving-assist device that provides enhanced vision and voice support through AR glasses. Our target users are elderly drivers with glaucoma.

After researching the AR glasses currently on the market, we found that most of them have very innovative and bold designs. Considering our target group of drivers, we want our product to look ordinary so it can blend easily into their daily lives.

3D Printing of the Physical Prototype

Under each arm of the glasses there is an embedded speaker for delivering voice reminders. On the inner side of the right arm there is a projector, which projects onto the lens to overlay information on the view in front of the driver. On the outer side of the arms there is a touch control for turning the device on and off.

Why We Made This?

Building on our Project 3, we wanted to continue the design to help older adult drivers with glaucoma drive safely. We learned from the critique that showing a side view on a screen causes cognitive overload for drivers. Research suggests that a driver's gaze should not stay on a screen for more than 3 seconds, or the chance of an accident increases. AR glasses provide an enhanced view for drivers with this vision limitation without pulling their eyes off the road.


Here is how we designed our glasses.

UI Development in Figma

Visual Components:

Here is how we made the visual design decisions.

How it looks in the AR glasses.

We chose a more saturated cyan because its wavelength range sits near the optimum range for the human eye.

Arial bold with 5% more spacing
Larger sans-serif text and spacing are more suitable for seniors’ reading habits.

For seniors, the amount of light reaching the eye is reduced by about 28-43%, so we use AI filters to enhance the brightness of the view.
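The compensation idea above can be sketched in a few lines of Python (the actual AI filter is out of scope here; the function name and default loss value are our own illustration, not part of the prototype):

```python
# Hypothetical sketch: if aging reduces incoming light by roughly 28-43%,
# the display can apply a compensating brightness gain.

def compensate_brightness(pixel: int, light_loss: float = 0.35) -> int:
    """Boost a 0-255 pixel value to offset a given fraction of light loss."""
    gain = 1.0 / (1.0 - light_loss)   # e.g. 35% loss -> ~1.54x gain
    return min(255, round(pixel * gain))
```

A mid-gray value of 100 with 50% light loss would be doubled to 200, while already-bright values clip at 255.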


1. Speed Detection and Sign Indication
2. Highway Exit Reminder
3. Navigation
4. Lane Change Assist



Side Viewer: an intelligent cockpit design for elderly drivers with vision limitations

I teamed up with Violet Zhang on this project, and we built this intelligent cockpit experience together. My design interests lie in intelligent cockpit and in-car experience design, while Violet hopes to discover design solutions for the aging population. We combined our interests in this exciting project.

Video demo of Side Viewer

What is an Intelligent Cockpit?

An automotive intelligent cockpit offers users a personalized interaction experience with a single core, multiple screens, multiple systems, voice recognition, gesture control, etc.

What is Side Viewer?

This automotive intelligent cockpit interaction helps older adult drivers with vision limitations travel safely. When the driver turns on the turn signal, the side view of the car appears on the screen so they can tell from the UI whether it is safe to turn. When it is safe to change lanes, a green indicator appears on the view and a voice directs the driver to go; when turning is unsafe, the interface turns red and a voice reminds the driver not to turn yet. This enhances the driver's side view.
This experience is meant to happen in a car, but we presented it in class with a big screen as the front-window view, a computer screen as the screen display, and a steering wheel with turn-signal switches.
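The interaction logic can be summed up in a small state function. This is a plain-Python sketch of the behavior described above; the function and field names are ours, not from the actual prototype:

```python
# Hypothetical sketch of the Side Viewer decision logic: the turn signal
# triggers the side view, and lane safety picks the indicator and voice cue.

def side_viewer_state(turn_signal_on: bool, lane_is_clear: bool) -> dict:
    """Return what the cockpit UI should show for the current state."""
    if not turn_signal_on:
        # No signal: normal view, no side-camera overlay.
        return {"show_side_view": False, "indicator": None, "voice": None}
    if lane_is_clear:
        return {"show_side_view": True, "indicator": "green",
                "voice": "Safe to change lanes."}
    return {"show_side_view": True, "indicator": "red",
            "voice": "Do not turn now."}
```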

What inspired us?

The reading that inspired this work is What Can a Body Do? by Sara Hendren. She claims that all technologies are assistive technologies, because human bodies are limited. People use tools that help them in their daily lives; indeed, these tools are body extensions. For example, chopsticks extend our hands, and a ladder extends our legs. In this project, we take the Side Viewer as an extension of the driver's two eyes, helping older adults with vision limitations drive much more easily.

Who is it for?

Here is our persona for Bob.


In this project, we designed both the digital interface of the screen display and the physical prototype. For the digital display, we recorded videos and masked the designed interfaces over them in Premiere Pro.

For the physical prototype, we used the Makey Makey kit as our main circuit and coded it in Scratch. We connected a wire to each turn switch on the steering wheel and coded each input to show the corresponding video.
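Since the Makey Makey registers as a keyboard, each turn-signal switch arrives as a key press, and our Scratch code simply maps keys to videos. Here is that mapping sketched in plain Python; the key names and video filenames are hypothetical stand-ins for what the Scratch project uses:

```python
# Hypothetical sketch of the switch-to-video mapping implemented in Scratch.
# Each steering-wheel turn switch closes a Makey Makey circuit, which the
# computer receives as a key press.

SWITCH_TO_VIDEO = {
    "left":  "side_view_left.mp4",   # left turn signal closes the left circuit
    "right": "side_view_right.mp4",  # right turn signal closes the right circuit
}

def on_key_press(key: str):
    """Return the video to play for a given switch, or None if unmapped."""
    return SWITCH_TO_VIDEO.get(key)
```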

Testing Makey Makey circuit in our studio
Code in Scratch

Final Delivery

Presentation in class


What Can a Body Do? by Sara Hendren

The 8 Challenges of Aging

Aging Issues: Older Drivers


Project 2 Service: JRN, a 100% Human Custom Illustration Website

In this project, I teamed up with Rhebsa and Nimi. To best showcase our skills and provide a service for our classmates, we came up with the website JRN, named after our initials.

The JRN website user flow.

What is JRN?

JRN is a website that provides custom illustrations made 100% by humans instead of AI. On the website, the user can upload a portrait picture and receive an animated, hand-drawn illustration of it. While waiting for the artist to finish the illustration, the user can play with an interactive loading board: a real-time, mouse-interactive pixelated version of the picture they uploaded. When the wait is over, the user can watch the animated illustration process and download the final avatar.

Why We Made This and Our Intention:

The three of us have very distinctive skills and strengths. Rhebsa is an amazing illustrator, Nimi makes the most beautiful websites, and I am good at making interactions in Processing. Our intention was to build something fun for all of our classmates. We believe that colors and art uplift daily life, and we hope to share this joy through a platform that makes original illustrations and small interactive art.

The Visual and Interactive Forms:

The work has three essential components: the website, the interactive art, and the animated illustration. On the landing page of the website, a title and illustrations by Rhebsa emphasize what the site is for.

Landing page and upload pictures button.

When scrolling down, there is a short About Us section to let people know that the illustrations are made 100% by humans.

About us.

After the user uploads the portrait, interactive art is generated from the original image. The user moves the mouse horizontally to view the circles and the circle-formed portrait.

Processing interactive art.

When the wait time is over, the animated illustration shows up, and the user can finally download the final illustration.

Animated illustration process.
The PNG file that can be downloaded, alongside the original photo.

My Part: Making the Interactive Art in Processing

When designing this waiting state, I asked myself: what is a possible form between a real-life picture and a 2D graphic? This is how I came up with the idea of this interactive, pixelated picture art. The specific Processing flavor I used is Processing.js (Pjs).

Every picture has a contrast between brightness and darkness. In the code, I use a darker background for the dark areas of the picture and represent the bright areas with circles.
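The core of that idea is a brightness-to-radius mapping: sample the image on a grid and draw a circle per cell, sized by average brightness, so bright areas get large circles and dark areas stay close to the background. The real version lives in the Processing.js sketch; this is a plain-Python sketch of the same logic, with hypothetical function names:

```python
# Hypothetical sketch of the pixelated-portrait logic used in the
# Processing.js loading board.

def brightness_to_radius(brightness: float, max_radius: float = 10.0) -> float:
    """Map a 0-255 brightness value to a circle radius."""
    return (brightness / 255.0) * max_radius

def circles_for_image(pixels, cell: int = 2):
    """pixels: 2D list of 0-255 brightness values.
    Returns a list of (x, y, r) circles, one per grid cell."""
    circles = []
    for y in range(0, len(pixels), cell):
        for x in range(0, len(pixels[0]), cell):
            # Average the brightness inside this grid cell.
            block = [pixels[j][i]
                     for j in range(y, min(y + cell, len(pixels)))
                     for i in range(x, min(x + cell, len(pixels[0])))]
            avg = sum(block) / len(block)
            circles.append((x + cell / 2, y + cell / 2,
                            brightness_to_radius(avg)))
    return circles
```

In the Processing sketch, the horizontal mouse position then interpolates each circle between a uniform grid and this portrait-derived layout.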

Original code in Processing.


Overall, we did a pretty good job delivering this service, and we can see its potential to grow to a larger user group. However, there are still areas we can improve, including the time lapse of the artist making the illustration. After receiving feedback from the class, we think it would be better to make the wait time feel more "human": for example, we can cue that "the artist is working on it" so users know why the illustration is not done yet.

Try the prototype in Figma here:

My original code in Processing:


Project 1 Gift: Watering Cactus Game

Watering Cactus Game Interface

In Project 1, I made a small game in Processing as a gift for Faizaan. In this game, a small cactus is waiting for water in the desert. The player moves the mouse freely on the screen to control the cactus and catch the falling water drops. The score accumulates as more water drops are caught, and the player can play as long as they wish. Background music plays as the game continues.
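The heart of the game is the catch check: a drop counts as caught when it falls into the cactus's horizontal range near the bottom of the window. Here is that logic sketched in plain Python (the actual game is a Processing sketch; the names and the width/height constants are illustrative, not the real values):

```python
# Hypothetical sketch of the Watering Cactus catch logic.

def is_caught(drop_x, drop_y, cactus_x, cactus_width=40, catch_y=180):
    """True if the falling drop lands on the cactus."""
    within_x = abs(drop_x - cactus_x) <= cactus_width / 2
    return within_x and drop_y >= catch_y

def update_score(score, drop_x, drop_y, cactus_x):
    """Add one point when the drop is caught."""
    return score + 1 if is_caught(drop_x, drop_y, cactus_x) else score
```

In the Processing sketch, this check runs every frame against each falling drop, with `cactus_x` following the mouse.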

My original intention was for this game to be stress-relieving on a busy day. It is meant to be played while taking a break, whether during or after work. The game window is very small, so it stays private to the player and unnoticeable to people around.

Watering Cactus Gameplay Demo

I made this gift because I learned that Faizaan likes playing PC games; playing games helps him relax from busy work, and he is a fun and positive person. Faizaan received the gift and was glad to try it. He thinks it is nice to play during a small break from work.


Original code, if you are interested:

