CN Vision is a gamified augmented reality (AR) learning tool to help Communicative Sciences & Disorders (CSD) students envision cranial nerve pathways in the human head. It's currently undergoing funded research evaluation and will be introduced in classrooms in late 2026.
CSD students have expressed difficulty with transferring anatomy concepts, such as cranial nerve pathways, to real-world service provision. Many find viewing static images of pathways in a textbook ineffective for learning.
CN Vision teaches the five pathways that are key for speech and swallowing. The web-based platform uses AR and gamification as a more interactive way to envision and learn the pathways.
Susan Bonner (design consultant)
Celeste Campos-Castillo (research consultant)
Faith Fischer (research assistant)
Mary Gallagher (medical expert)
Thea Knowles (medical expert)
Noah Maue (developer)
Yevgenia Minchuk (developer)
Maura Philippone (lead researcher)
CN Vision is also a featured project of the Transforming Tools Together (TTT) Lab at Michigan State University.
Aligning with medical training recommendations (Hazelton, 2011), CN Vision aims to teach CSD students to view and “map” cranial nerve pathways onto their clients.
Per the principles of neuroplasticity (Nahum, Lee, & Merzenich, 2013), practicing a specific task enhances transfer of the learned skill to novel contexts. Designing this way may deepen students' understanding of the content and help them better apply it during service provision.
Building on this research, a persona ("Katie") was created to further explore the problem space.
A journey map and storyboard were crafted to brainstorm Katie's needs and pain points. These were then translated into a content map that determined the tool's flow and navigation.
It was determined that CN Vision should allow for both exploration and evaluation, so two modes were planned: Explore Pathways and Quiz.
My Role: Building on Maura's background research and her subsequent persona and journey map, I brought the journey map to life via the storyboard. I also planned the initial content and interactions via the content map.
What I'd Do Differently: Dig deeper in the early stages to gather more perspectives from learners in addition to instructors.
The design process began with wireframes and a rudimentary prototype to plan the layout and navigation.
Later, challenges arose when designing the AR experience. With the mix of design elements and the user's live environment, text legibility was a concern. I first added a partial-page backdrop behind text; after further experimentation, a full-page background was used on all pages not featuring augmented reality, and the backdrop was reserved for the pages where AR was required. Ultimately, I decided to use AR only when it was needed.
Animation storyboards were also used to plan the movements of the homunculus mascot users meet when they pass the quiz. A homunculus is a model with body parts scaled in proportion to the amount of nerve fibers they contain. It is a bit of an inside joke in the CSD field that draws a chuckle out of users.
CN Vision is meant to be fun and a bit goofy to increase engagement. The homunculus, which serves as a reward for a high score, is the primary mechanism for this. The character remains unseen until the quiz is complete, but it is hinted at through the progress bar, which also provides instant feedback on quiz answers.
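As a rough illustration, the reveal mechanic described above can be sketched in a few lines of JavaScript. This is a hypothetical sketch, not the production code; the question count and passing threshold are assumed values:

```javascript
// Hypothetical sketch of the quiz progress/reveal mechanic:
// each answer fills the progress bar, and the homunculus is
// revealed only once the whole quiz is answered at a passing score.
const TOTAL_QUESTIONS = 10;   // assumed quiz length
const PASS_THRESHOLD = 0.8;   // assumed passing score

function quizState(correctAnswers, answered) {
  return {
    // drives the progress bar shown during the quiz
    progress: answered / TOTAL_QUESTIONS,
    // the reward stays hidden until the quiz is complete AND passed
    passed: answered === TOTAL_QUESTIONS &&
            correctAnswers / TOTAL_QUESTIONS >= PASS_THRESHOLD,
  };
}
```

Tying the reveal to a completion-plus-threshold check preserves the surprise of the reward while the progress bar still gives per-question feedback.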
Augmented reality was implemented in two areas. In Explore Pathways Mode, the user points their device camera at the target image and toggles on views of the five pathways taught. In Quiz Mode, after passing the quiz, the user meets the homunculus character and can enter Photo Mode to take a shareable photo of it.
The first prototype of Explore Pathways Mode involved the target image and the pathways displayed on top of a white box for visibility. However, I realized that this approach wasn't leveraging the power of AR, and in fact was adding an additional step (pointing your camera at the target image) for the user with no added benefit. To remedy this, I first removed the white background box. Next, instead of displaying the target image on the user's phone or tablet, I displayed an outline of the head with a semi-opaque fill as well as the pathways, providing a configurable layered effect. See the before and after below.
In user testing, participants stated that this layered effect aided their visualization and learning.
Text descriptions of the pathways are provided in addition to the visual display.
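Under the hood, the pathway toggling can be modeled as simple layer state over the semi-opaque head outline. The sketch below is a hypothetical illustration, not the platform's actual code, and it assumes the five cranial nerves most relevant to speech and swallowing as the layer labels:

```javascript
// Hypothetical sketch of layer toggling in Explore Pathways Mode.
// Each pathway is an overlay the user can flip on or off independently,
// producing the configurable layered effect over the head outline.
const PATHWAYS = ["V", "VII", "IX", "X", "XII"]; // assumed nerve labels

function createLayerState() {
  const visible = new Set();
  return {
    // flips a pathway's visibility; returns how many layers are shown
    toggle(id) {
      if (!PATHWAYS.includes(id)) return visible.size; // ignore unknown ids
      visible.has(id) ? visible.delete(id) : visible.add(id);
      return visible.size;
    },
    isVisible: (id) => visible.has(id),
  };
}
```

Independent toggles let a student isolate one pathway or compare several at once, which is what makes the layering configurable.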
Because CN Vision is a visual learning tool, accessible design was tricky. However, it was important to provide an accessible experience by addressing these challenges in creative ways.
In addition to ensuring color contrast on the cranial nerve pathways, we made several accessibility considerations when designing CN Vision:
A built-in screen reader function was created to allow users to tap on text and have it read aloud. The feature is currently disabled because it causes performance issues, but we plan to reintroduce it in the upcoming redesign in Unity (more on the redesign later!).
Users who are blind or partially sighted may have difficulty seeing the pathways even when color contrast is effective. Therefore, on the pathway information pages we included descriptive text that helps users visualize the location of each pathway branch and how it connects to the facial structures.
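For a tap-to-read feature like the one described above, the browser's built-in Web Speech API is one way a web platform could implement it. The sketch below is a hypothetical illustration under that assumption, not CN Vision's actual code:

```javascript
// Hypothetical tap-to-read sketch using the browser's Web Speech API.
// speak() reads a piece of text aloud; it returns false when speech
// synthesis is unavailable (e.g. outside a browser) or the text is empty.
function speak(text, synth = globalThis.speechSynthesis) {
  if (!synth || !text.trim()) return false;
  synth.cancel(); // stop any speech already in progress
  const utterance = new SpeechSynthesisUtterance(text.trim());
  utterance.rate = 1.0; // default speaking rate
  synth.speak(utterance);
  return true;
}

// Wires a tap/click handler so tapping an element reads its text aloud.
function speakOnTap(element) {
  element.addEventListener("click", () => speak(element.textContent || ""));
}
```

Because speech synthesis runs on the browser's own engine, an approach like this avoids bundling audio assets, though (as the team found) per-platform performance still needs testing.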
My Role: As CN Vision's lead product designer, I was involved in all aspects of the design in this section. Maura provided her medical expertise for the learning content, including the initial pathway drawings which I later adjusted for better visibility.
I also created the homunculus animation storyboards, and developed the gamification and accessibility with input from the course professor, Susan Bonner, and Bill Fischer, a professor emeritus at Kendall College of Art and Design.
What I'd Do Differently: Get earlier feedback on quick mockups when designing the pathway visualizations.
Initial user testing was completed with three Communicative Sciences & Disorders students across two sessions: one with a group and one with an individual. Both sessions included a pre-test interview, observed user testing, and a post-test interview.
All three participants thought the tool was straightforward to use and beneficial as a study tool. They liked the interactivity, the pathway coloring, the "fun" provided by gamification and the homunculus, and visualization from the layering effect.
Challenges included the dual-device requirement, maintaining hand position for an extended period while exploring the pathways, slight confusion about several navigational elements (such as the two Info buttons for pathways IX and X), and hesitation in locating the homunculus informational page.
CN Vision has been explored by users both formally and informally.
Based on this feedback, the following changes have been made or proposed:
Change the color of the headings for the branches on the pathway information pages to match the color in the pathway visualization (rather than all headings being dark gray).
Create printouts of the target image that students can use to avoid the need for a second device.
Add more learning content, such as symptoms of pathway damage.
Consider alternatives to reduce discomfort from maintaining hand position.
Consider different types of quiz questions aside from solely multiple choice.
Plus, what if...you could point your camera at a real person's head for a fully 3D visualization? This is something brought up by participants that we'll be exploring soon!
Since receiving funding, CN Vision has undergone formal evaluation by an additional seven people (including undergraduate and PhD students) in October and November 2025. Data analysis is still ongoing. Phase 2 of the study will take place in spring 2026 with master's students.
My Role: During initial testing in late 2024, my role was the lead interviewer.
In the formal evaluation in late 2025, the research protocol was developed collaboratively; my role was to provide feedback on the initial draft.
During user testing, I recorded observations in a coding sheet while Maura led the interview and Faith took notes on verbal responses. We are currently drafting a research paper, for which I am writing the quantitative data analysis section.
What I'd Do Differently: Get feedback from more users to account for diverse learning needs.
CN Vision has received grant funding for additional research, testing, and design through the Charles J. Strosacker Foundation Research Fund for Health and Risk Communication and Future Academic Scholars in Teaching (FAST) Fellowship funding. Future work will involve:
Rebuilding the platform in Unity to allow for more design features and better processing performance.
A two-phase usability study, focused on the gamified aspect of the platform.
Writing a research paper based on the findings.
Demoing the tool and sharing the research findings at conferences. So far, CN Vision has been demonstrated at Meaningful XR in May 2025.
Introducing CN Vision in the classroom!
The current version of the tool was demoed to industry experts at Meaningful XR.
To try out CN Vision yourself, just follow these instructions (two devices required):
Scan the QR code below to the left.
Give the website permission to access your camera and motion.
Point your device camera at the target image below to the right while you’re in Explore Pathways Mode.
Explore CN Vision!
As the lead designer and a research assistant, my role in CN Vision includes:
User Storyboard
Content Map (collaborative)
Wireframes
Animation Storyboards (collaborative)
Lead Graphic Design
Lead UX/UI Design
Animation
Lead Augmented Reality Design
User Testing Planning and Moderation (collaborative)
Demo Reel Filming and Editing
Design and Handoff Prep (for redesign phase)