CN Vision is a gamified augmented reality (AR) learning tool that helps Communicative Sciences & Disorders (CSD) students better envision how cranial nerve pathways map onto a human head and test their knowledge. The project was done in collaboration with Maura Philippone.
CSD students report difficulty transferring anatomy concepts, such as cranial nerve pathways, to real-world service provision. Many find viewing pathway images in a textbook ineffective for learning.
CN Vision teaches Cranial Nerves V, VII, IX/X, and XII, all of which are key for speech and swallowing. AR was implemented to provide a more interactive way to see how the pathways connect to different facial features and muscles.
Aligning with medical training recommendations (Hazelton, 2011), CN Vision aims to teach speech-language pathology (SLP) students to view and “map” cranial nerve pathways onto their clients.
Per the principles of neuroplasticity (Nahum, Lee, & Merzenich, 2013), specific task practice enhances learned skill transfer to novel contexts. Designing this way may enhance students' understanding of the content and help them better apply it during service provision.
Based on this, a persona ("Katie") was created to further explore the problem space.
A journey map and storyboard were crafted to brainstorm Katie's needs and pain points. These were then translated into a content map, which determined the tool's flow and navigation.
It was determined that CN Vision should allow for both exploration and evaluation, so two modes were planned: Explore Pathways and Quiz.
Key Takeaway: Understand the problem and use different approaches to explore the problem space.
The design process began with wireframes and a rudimentary prototype to plan the layout and navigation.
Moving forward, several challenges arose when designing the augmented reality experience. First, with the user's environment serving as the backdrop, text legibility was a concern. To address this, a white box was initially added behind the text. Later, a solid background was used on all pages not featuring augmented reality, while the screens using AR featured a tan box behind the text.
The initial landing screen wireframe.
The first screen in ZapWorks with poor text legibility.
Improving text legibility with a white box as a backdrop.
The fully designed landing screen.
Animation storyboards were also used to plan the movements of CN Vision's homunculus mascot for when the user completes the quiz.
CN Vision is meant to be fun and a bit goofy to increase engagement. The homunculus, which serves as a reward for a high score, is the primary mechanism for this. The character remains unseen until the quiz is complete, but it is hinted at through the progress bar, which also provides instant feedback on quiz answers.
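The reward mechanic described above can be sketched as a small state model. This is a hypothetical sketch, not CN Vision's actual implementation: the names (`QuizState`, `homunculusUnlocked`) and the 80% unlock threshold are assumptions for illustration.

```typescript
// Hypothetical model of the quiz's progress-bar feedback and
// homunculus reveal. The unlock threshold is an assumed value.
interface QuizState {
  answered: number; // questions answered so far
  correct: number;  // correct answers so far
  total: number;    // total questions in the quiz
}

const UNLOCK_THRESHOLD = 0.8; // assumed passing score

// Record one answer and return the updated state.
function answer(state: QuizState, wasCorrect: boolean): QuizState {
  return {
    ...state,
    answered: state.answered + 1,
    correct: state.correct + (wasCorrect ? 1 : 0),
  };
}

// Progress-bar fill (0..1) gives instant feedback after each answer.
function progress(state: QuizState): number {
  return state.answered / state.total;
}

// The homunculus stays hidden until the quiz is both complete and passed.
function homunculusUnlocked(state: QuizState): boolean {
  return (
    state.answered === state.total &&
    state.correct / state.total >= UNLOCK_THRESHOLD
  );
}
```

Keeping the reward check separate from the per-answer feedback is what lets the progress bar "hint" at the character without revealing it early.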
Augmented reality was planned to be implemented in two areas: Explore Pathways Mode, where the user points their device camera at the target image and toggles on views of the five pathways taught, and after passing the quiz in Quiz Mode, where they meet the homunculus character and can enter Photo Mode to take a shareable photo of it.
Our first working prototype of Explore Pathways Mode displayed the pathways on top of a white box, over the target image, for visibility. However, we realized this approach was not leveraging the power of AR; in fact, it added a step for the user (pointing the camera at the target image) with no added benefit. To remedy this, we first removed the white background box. Then, instead of showing the target image on the user's phone or tablet, we displayed an outline of the head with a semi-opaque fill along with the pathways, producing a configurable layered effect. See the before and after below.
In user testing, participants said the layered effect helped their visualization and learning.
Text descriptions of the pathways are provided in addition to the visual display.
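The layered view can be thought of as a simple scene model: a semi-opaque head outline anchored to the tracked target, plus independently toggleable pathway overlays. The sketch below is illustrative only; the layer names and opacity values are assumptions, not the actual ZapWorks implementation.

```typescript
// Hypothetical sketch of Explore Pathways Mode's layered view.
type Pathway = "V" | "VII" | "IX" | "X" | "XII";

interface Layer {
  visible: boolean;
  opacity: number; // 0 (transparent) .. 1 (opaque)
}

interface Scene {
  headOutline: Layer; // semi-opaque fill anchored to the target image
  pathways: Record<Pathway, Layer>;
}

function initialScene(): Scene {
  const hidden = (): Layer => ({ visible: false, opacity: 1 });
  return {
    headOutline: { visible: true, opacity: 0.35 }, // assumed fill opacity
    pathways: { V: hidden(), VII: hidden(), IX: hidden(), X: hidden(), XII: hidden() },
  };
}

// Toggle one pathway overlay on or off; other layers are untouched,
// which is what produces the configurable layered effect.
function togglePathway(scene: Scene, p: Pathway): Scene {
  return {
    ...scene,
    pathways: {
      ...scene.pathways,
      [p]: { ...scene.pathways[p], visible: !scene.pathways[p].visible },
    },
  };
}
```

Because each toggle touches only its own overlay, users can stack any combination of pathways over the head outline, matching the exploration behavior described above.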
Because CN Vision is a visual learning tool, accessible design was tricky. It was important, however, to provide an accessible experience by addressing these issues in creative ways.
In addition to color contrast on the cranial nerve pathways, there were several accessibility considerations we made when designing CN Vision:
A built-in screen reader function was created to let users tap on text and have it read aloud. This feature is currently disabled because it causes performance issues, but we plan to implement it in the upcoming Unity redesign (more on the redesign later!).
Users who are blind or partially sighted may have difficulty seeing the pathways even when color contrast is effective. Therefore, the pathway information pages include descriptive text that helps users visualize the location of each pathway branch and how it connects to the facial structures.
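Color-contrast checks like the one applied to the pathway colors follow the WCAG 2.x contrast-ratio formula. The formula and the 4.5:1 AA threshold for normal-size text come from the WCAG specification; the helper names below are my own.

```typescript
// WCAG 2.x relative luminance for a hex color like "#aabbcc".
function relativeLuminance(hex: string): number {
  const toLinear = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  const n = parseInt(hex.replace("#", ""), 16);
  const r = toLinear((n >> 16) & 0xff);
  const g = toLinear((n >> 8) & 0xff);
  const b = toLinear(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio (1:1 .. 21:1), lighter color on top of the fraction.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort(
    (a, b) => b - a
  );
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal-size text.
function meetsAA(fg: string, bg: string): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}
```

A check like this can be run over every pathway color against its background to verify the visualization stays readable.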
Key Takeaway: When designing in AR, leverage the power of the medium. Don't just slap it onto something when it isn't needed.
Two user testing sessions have been completed so far with Communicative Sciences & Disorders students. One test was done with a group, the other with an individual. Both sessions included a pre-test interview, observed user testing, and a post-test interview.
All three participants thought the tool was straightforward to use and beneficial as a study tool. They liked the interactivity, the pathway coloring, the "fun" provided by gamification and the homunculus, and the layering effect.
Challenges included the dual-device requirement, maintaining hand position for an extended period while exploring the pathways, and slight confusion about several navigational elements (such as the two Info buttons for pathways IX and X, and locating the homunculus informational page).
CN Vision has been explored by users both formally and more informally (as seen here).
Based on this feedback, the following changes have been made or proposed:
Change the color of the headings for the branches on the pathway information pages to match the color in the pathway visualization (rather than all headings being dark gray).
Create printouts of the target image that students can use to avoid the need for a second device.
Add more learning content, such as symptoms of pathway damage.
Consider alternatives to reduce discomfort from maintaining hand position.
Explore possibilities such as using a real person's head instead of a target image.
Key Takeaway: Get feedback from more users to account for diverse learning needs.
CN Vision has received grant funding for additional research, testing, and design through the Charles J. Strosacker Foundation Research Fund for Health and Risk Communication and Future Academic Scholars in Teaching (FAST) Fellowship funding. This has allowed CN Vision to become a formal research project. Future work will involve:
Rebuilding the platform in Unity to allow for more design features and better processing performance.
A two-phase usability study, focused on the gamified aspect of the platform.
Writing a research paper based on the findings.
Demoing the tool and sharing the research findings at conferences. So far, CN Vision has been demonstrated at Meaningful XR in May 2025.
The current version of the tool was demoed to industry experts at Meaningful XR.
To see a demonstration highlighting the features of CN Vision, watch the short video below!
To try out CN Vision yourself, just follow these instructions (two devices required):
Scan the QR code below to the left.
Give the website permission to access your camera and motion.
While in Explore Pathways Mode, point your device camera at the target image below to the right.
Explore CN Vision!
The key aspects of the research and design I participated in are as follows:
User Storyboard
Content Map (collaborative)
Wireframes
Animation Storyboards (collaborative)
Lead Graphic Design
Lead UX/UI Design
Animation
Lead Augmented Reality Design
User Testing Planning and Moderation (collaborative)
Demo Reel Filming and Editing
Design and Handoff Prep (for redesign phase)