CN Vision is a gamified augmented reality (AR) learning tool to help Communicative Sciences & Disorders (CSD) students envision cranial nerve pathways in the human head. It's currently undergoing funded research evaluation and will be introduced in classrooms in late 2026.
CSD students have expressed difficulty with transferring anatomy concepts, such as cranial nerve pathways, to real-world service provision. Many find viewing static images of pathways in a textbook ineffective for learning.
CN Vision teaches the five pathways that are key for speech and swallowing. The web-based platform uses AR and gamification as a more interactive way to envision and learn the pathways.
CN Vision is also a featured project of the Transforming Tools Together (TTT) Lab at Michigan State University!
Susan Bonner (design consultant)
Dr. Celeste Campos-Castillo (research consultant)
Faith Fischer (research assistant)
Mary Gallagher (medical expert)
Dr. Thea Knowles (medical expert)
Noah Maue (developer)
Yevgenia Minchuk (developer)
Maura Philippone (lead researcher)
Aligning with medical training recommendations (Hazelton, 2011), CN Vision aims to teach CSD students to view and “map” cranial nerve pathways onto their clients.
Per the principles of neuroplasticity (Nahum, Lee, & Merzenich, 2013), practicing specific tasks enhances the transfer of learned skills to novel contexts. Designing CN Vision around task-specific practice may enhance students' understanding of the content and help them better apply it during service provision.
Based on this, a persona ("Katie") was created to further explore the problem space.
A journey map and storyboard were crafted to brainstorm Katie's needs and pain points. These were then translated to a content map which determined the tool's flow and navigation.
It was determined that CN Vision should allow for both exploration and evaluation, so two modes were planned: Explore Pathways and Quiz.
My Role: Building on Maura's background research and her subsequent persona and journey map, I brought the journey map to life via the storyboard. I also planned the initial content and interactions via the content map.
What I'd Do Differently: Dig deeper in the early stages to gather more learner perspectives in addition to instructors'.
The design process began with wireframes and a rudimentary prototype to plan the layout and navigation.
Later, designing the AR experience brought some challenges. With design elements mixed into the user's environment, text legibility was a concern. I first added a partial-page backdrop behind text; eventually, a full-page background was used on all pages not featuring augmented reality, and the backdrop was reserved for pages where AR was required. After experimenting with text legibility in AR, I decided to use AR only where it was truly needed.
Animation storyboards were also used to plan the movements of the homunculus mascot users meet when they pass the quiz. A homunculus is a model with body parts scaled in proportion to the number of nerve fibers they contain. It is a bit of an inside joke in the CSD field that draws a chuckle out of users.
CN Vision is meant to be fun and a bit goofy to increase engagement. The homunculus, which serves as a reward for a high score, is the primary mechanism for this. The character remains unseen until the quiz is complete, but it is hinted at through the progress bar, which also provides instant feedback on quiz answers.
Augmented reality was implemented in two areas: Explore Pathways Mode, where the user points their device camera at the target image and toggles on views of the five pathways taught, and after passing the quiz in Quiz Mode, where they meet the homunculus character and can enter Photo Mode to take a shareable photo of it.
The first prototype of Explore Pathways Mode displayed the target image and the pathways on top of a white box for visibility. However, I realized this approach wasn't leveraging the power of AR; in fact, it added an extra step for the user (pointing the camera at the target image) with no added benefit. To remedy this, I first removed the white background box. Next, instead of displaying the target image on the user's phone or tablet, I displayed an outline of the head with a semi-opaque fill as well as the pathways, providing a configurable layered effect. See the before and after below.
In user testing, participants said they felt this layered effect aided their visualization and learning.
Text descriptions of the pathways are provided in addition to the visual display.
Because CN Vision is a visual learning tool, accessible design was tricky. It was important, however, to provide an accessible experience by addressing these issues in creative ways.
In addition to color contrast on the cranial nerve pathways, there were several accessibility considerations we made when designing CN Vision:
A built-in screen reader function was created to allow users to tap on text and have it read aloud. This feature is currently disabled because it causes performance issues, but we plan to implement it in the upcoming redesign in Unity (more on the redesign later!).
Users who are blind or partially sighted may experience difficulties seeing the pathways, even if color contrast is effective. Therefore, the pathway information pages include descriptive text to help users visualize the location of each pathway branch and how it connects to the facial structures.
My Role: As CN Vision's lead product designer, I was involved in all aspects of the design in this section. Maura provided her medical expertise for the learning content, including the initial pathway drawings which I later adjusted for better visibility.
I also created the homunculus animation storyboards, and developed the gamification and accessibility with input from the course professor, Susan Bonner, and Bill Fischer, a professor emeritus at Kendall College of Art and Design.
What I'd Do Differently: Get earlier feedback on quick mockups when designing the pathway visualizations.
Initial user testing took place with three students while CN Vision was still a class project. Participants found the AR layering effect beneficial and wanted more learning content beyond visually identifying pathways (i.e., they wanted to be quizzed on pathway functions).
Thanks to funding from the Charles J. Strosacker Foundation Research Fund for Health and Risk Communication and the Future Academic Scholars in Teaching (FAST) Fellowship, we got the chance to evaluate CN Vision more formally and iterate on it prior to introducing it in classrooms.
In doing so, we formed a team of four professors (including instructors and medical experts), two student developers, and a student researcher in addition to Maura and myself. The rest of this section talks about the formal evaluation including usability testing and evaluation of learning outcomes.
CN Vision has been explored by users both formally and more informally (as seen here).
Phase 1 focused on usability testing to gather perspectives from more students. Testing involved five sessions with seven participants (three solo, two pairs). All were Communicative Sciences & Disorders students: six undergraduates and one PhD student.
Sessions began with a pre-interview to gauge participants' knowledge of cranial nerves and study tools used. We then asked them to complete five tasks to fully explore the interface. I used a coding sheet (pictured) to capture data on task completion, non-verbal behaviors, and task completion time. If a participant exhibited one of the observations from the pre-set list, a highlight was added that allowed us to easily gather quantitative data on these metrics.
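To illustrate how highlighted observations can be turned into quantitative metrics, here is a minimal sketch of tallying pre-set observation codes across sessions. The code names and data below are hypothetical, not the study's actual coding scheme:

```python
from collections import Counter

# Hypothetical observation codes logged across sessions
# (illustrative only; not the study's pre-set list or data)
observations = [
    "hesitation", "asked_for_help", "hesitation",
    "misread_label", "hesitation", "asked_for_help",
]

counts = Counter(observations)

# Normalize by the number of sessions (five in Phase 1)
rate_per_session = {code: n / 5 for code, n in counts.items()}

print(counts.most_common())
```

A tally like this makes it easy to see which behaviors recurred most often and to compare their frequency across tasks or sessions.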
We collected qualitative data through a post-test interview where participants shared feedback on the usability of the tool, their sense of engagement, the interactivity, learning efficacy, gamification, and overall suggestions. They also completed a system usability scale (SUS).
I collected task completion observations, non-verbal behaviors, and completion times in coding sheets.
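For context, SUS scoring follows a standard formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch (not our actual analysis code):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and
    contribute (5 - response). The sum is scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# The most favorable responses (5 on odd items, 1 on even items) score 100:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```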
While analysis is currently ongoing, we have some preliminary insights:
The average SUS score was 86.07 out of 100, well above the commonly cited benchmark average of 68.
Participants echoed the feedback from informal testing that they want to be quizzed on pathway functions in addition to visualizations.
Perceptions of displaying pathways via 2D layers versus 3D models were mixed.
Participants felt it could be challenging to hold their hand in the same position for an extended period.
They would like pathway information to be better integrated into the visualization pages instead of on a separate page.
Based on this, we could consider the following:
Create additional quizzes, still in bite-sized portions, covering more educational content.
Design additional visualization methods, both 2D and 3D. Plus, 3D visualizations could involve pointing the camera at a real person's head to visualize the pathways inside.
Create alternative setups to avoid the need to maintain extended hand positioning.
Better integrate educational content through different types of interactions.
To go beyond just system usability and focus on what the tool sets out to do (help students better understand and visualize the pathways), Phase 2 of the study focused on measuring learning outcomes. In doing so, we evaluated CN Vision in a Communicative Sciences & Disorders class with 16 master's students.
To measure learning outcomes, we used CN Vision as the experimental condition and a standard slide deck as the control. Students were divided into two equally sized groups.
They began by taking a pre-test quizzing their knowledge of cranial nerve pathways (both visualization and functions). They then spent 15 minutes reviewing the study materials (either CN Vision or the slide deck). Once the time was up, they completed a post-test with the same questions as the pre-test to measure improvement after reviewing the material. Two weeks later, they completed the test a third time to measure retention.
This study design allows us to set an individual baseline for each student, then compare improvement from reviewing CN Vision against traditional materials (such as textbook images). The follow-up two weeks later lets us compare learning outcomes again, this time measuring longer-term recall rather than only short-term.
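One common way to quantify improvement against each student's baseline is the normalized gain, (post − pre) / (max − pre), which expresses how much of the possible improvement a student achieved. The sketch below is purely illustrative, using hypothetical scores rather than study data or our actual analysis code:

```python
def normalized_gain(pre, post, max_score):
    """Normalized gain: fraction of the possible improvement achieved."""
    if pre >= max_score:
        return 0.0  # no room to improve from a perfect pre-test
    return (post - pre) / (max_score - pre)

def mean_gain(pairs, max_score):
    """Average normalized gain over (pre, post) score pairs for one group."""
    gains = [normalized_gain(pre, post, max_score) for pre, post in pairs]
    return sum(gains) / len(gains)

# Hypothetical scores out of 20 (illustrative only, not study data)
ar_group = [(8, 16), (10, 18), (6, 14)]
control_group = [(9, 13), (11, 15), (7, 11)]

print(mean_gain(ar_group, 20), mean_gain(control_group, 20))
```

Computing gains per student and averaging per group keeps each student's baseline in the comparison, rather than comparing raw post-test means alone.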
Data collection and analysis are currently ongoing, so check back soon for updates!
My Role: During initial testing in late 2024, my role was the lead interviewer.
In the formal evaluation, the research protocol was developed collaboratively; my role was to provide feedback on the initial draft and create the coding sheet.
During user testing, I recorded observations in a coding sheet while Maura led the interview and Faith took notes on verbal responses. We're currently writing a research paper in which I'm leading the quantitative and qualitative data analysis.
What I'd Do Differently: Get feedback from more users earlier on to account for diverse learning needs.
Once data analysis from the two-phase study is complete, we plan to iterate CN Vision based on feedback and insights. Future work will involve:
Completing the data analysis and planning iteration based on both the qualitative and quantitative insights.
Rebuilding the iterated platform in Unity to allow for more design features and better processing performance.
Writing research papers based on the findings.
Demoing the tool and sharing the research findings at conferences! So far, CN Vision has been demonstrated at Meaningful XR in May 2025 and will return in May 2026 as a poster presentation.
Introducing CN Vision in the classroom!
The current version of the tool was demoed to industry experts at Meaningful XR.
To try out CN Vision yourself, just follow these instructions (two devices required):
Scan the QR code below to the left.
Give the website permission to access your camera and motion.
Point your device camera at the target image below to the right while you’re in Explore Pathways Mode.
Explore CN Vision!
As the lead designer and a research assistant, my role in CN Vision includes:
User Storyboard
Content Map (collaborative)
Wireframes
Animation Storyboards (collaborative)
Lead Graphic Design
Lead UX/UI Design
Animation
Lead Augmented Reality Design
User Testing Planning and Moderation (collaborative)
Demo Reel Filming and Editing
Design and Handoff Prep (for redesign phase)