CN Vision is a gamified augmented reality (AR) learning tool that helps Communicative Sciences & Disorders (CSD) students envision how cranial nerve pathways map onto the human head and test their knowledge. The project was created in collaboration with Maura Philippone.
CSD students report difficulty transferring anatomy concepts, such as cranial nerve pathways, to real-world service provision. Many find viewing pathway images in a textbook ineffective for learning.
CN Vision teaches Cranial Nerves V, VII, IX/X, and XII, all of which are key for speech and swallowing. AR was implemented to provide a more interactive way to see how the pathways connect to different facial features and muscles.
Aligning with medical training recommendations (Hazelton, 2011), the aim of CN Vision is to teach students to view and subsequently “map” cranial nerve pathways onto their clients when providing speech-language pathology (SLP) services.
Per the principles of neuroplasticity (Nahum, Lee, & Merzenich, 2013), specific task practice enhances learned skill transfer to novel contexts. Training students to conceptualize cranial nerve pathways in this way may enhance their understanding of relevant CN pathways during SLP service provision with clients.
Based on this research, and considering that Maura Philippone will instruct classes teaching this subject matter, a persona ("Katie") was created.
Persona: Katie is an SLP master’s student who is having trouble envisioning CN pathways on a real person.
Subsequently, a journey map and storyboard were crafted to envision Katie's experience with CN Vision. The experience envisioned in the journey map and storyboard was then used to inform the creation of a content map, which determined the tool's flow and navigation.
It was determined that CN Vision should serve as both an exploration tool and an evaluation tool. Therefore, two primary modes were planned: Explore Pathways Mode and Quiz Mode.
Below are some highlights of the research process. You may click on any of the images to open them in full screen.
The design process began with wireframes and a rudimentary prototype to plan CN Vision's layout and navigation. Several challenges arose when beginning to design the augmented reality experience in ZapWorks. First, with the user's environment as seen through their device camera serving as the background, text legibility was a concern. To account for this, a white box was first added behind the text. Later, a solid background was used on all pages not featuring augmented reality, while the screens making use of AR featured a tan-colored box.
The initial wireframe.
The first screen in ZapWorks with poor text legibility.
Improving text legibility with a white box as a backdrop.
The fully designed landing screen.
Additionally, animation storyboards were created to plan the movements of CN Vision's featured character, the homunculus.
The homunculus character has historically been used to show how different regions of the brain’s motor and sensory cortices are dedicated to processing sensory input and motor control for various body parts, with more cortical space given to areas like the hands, lips, and face than to other body parts.
CN Vision was designed to be fun and a bit goofy in order to increase user engagement. The homunculus, which serves as a reward for getting a perfect or passing quiz score, is the primary gamification element to achieve this effect. The character remains unseen until such a score is achieved, but it is hinted at through both the CN Vision logo and the quiz progress bar. The progress bar also incorporates instant feedback on whether the user answers questions correctly or not.
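The quiz scoring and instant-feedback logic behind the progress bar can be sketched independently of ZapWorks. This is an illustrative sketch only: the function names and the pass threshold are assumptions, not taken from the actual implementation.

```javascript
// Minimal sketch of quiz state with per-answer feedback and a pass check.
// The 80% pass threshold is a hypothetical value for illustration.
function createQuiz(totalQuestions, passThreshold = 0.8) {
  const results = []; // true/false for each answered question
  return {
    // Record an answer and return the feedback the progress bar would show.
    answer(isCorrect) {
      results.push(isCorrect);
      return isCorrect ? "correct" : "incorrect";
    },
    // Fraction of the quiz completed, used to fill the progress bar.
    progress() {
      return results.length / totalQuestions;
    },
    // The homunculus reward unlocks only on a passing (or perfect) score.
    passed() {
      return (
        results.length === totalQuestions &&
        results.filter(Boolean).length / totalQuestions >= passThreshold
      );
    },
  };
}
```

Keeping the feedback per answer (rather than only showing a final score) is what lets the progress bar respond immediately as the user works through the quiz.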
Augmented reality was planned to be implemented in two areas: Explore Pathways Mode, where the user points their device camera at the target image and toggles on views of the five pathways taught, and after passing the quiz in Quiz Mode, where they meet the homunculus character and can enter Photo Mode to take a shareable photo of it.
Our first working prototype of Explore Pathways Mode displayed the target image and the pathways on top of a white box for visibility. However, we realized that this approach was not leveraging the power of AR; in fact, it added an extra step for the user (pointing the camera at the target image) with no benefit. To remedy this, we first removed the white background box. Then, instead of displaying the target image on the user's phone or tablet, we displayed an outline of the head with a semi-opaque fill along with the pathways, providing a configurable layered effect. See the before and after below.
In user testing, participants stated that this layered effect was beneficial to their visualization and learning.
In addition to color contrast on the cranial nerve pathways, there were several other accessibility considerations we made when designing CN Vision:
A built-in screen reader function was created to allow users to tap on text and have it read aloud. This feature is currently disabled because it causes performance issues, but we are working with ZapWorks support to address them.
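The general tap-to-read pattern — mapping each tappable text element to a prerecorded audio clip — can be sketched in plain browser JavaScript. This is a hypothetical illustration only: the element IDs and clip file names are invented for the example, and ZapWorks' own scripting environment differs.

```javascript
// Hypothetical mapping from text-element IDs to prerecorded narration clips.
const AUDIO_CLIPS = {
  "cn-v-overview": "audio/cn_v_overview.mp3",
  "cn-vii-overview": "audio/cn_vii_overview.mp3",
};

// Pure lookup, kept separate so the mapping logic is testable on its own.
function clipForElement(elementId) {
  return AUDIO_CLIPS[elementId] || null;
}

// Wire up tap handlers only when running in a browser.
if (typeof document !== "undefined") {
  for (const id of Object.keys(AUDIO_CLIPS)) {
    const el = document.getElementById(id);
    if (el) {
      el.addEventListener("click", () => {
        const clip = clipForElement(id);
        if (clip) new Audio(clip).play(); // play the prerecorded narration
      });
    }
  }
}
```

Using prerecorded clips (rather than on-device text-to-speech) keeps pronunciation of anatomical terms consistent, at the cost of the audio assets that must be uploaded to the platform.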
Users who are blind or partially sighted may experience difficulties seeing the pathways, even if color contrast is effective. Therefore, on the pathway information pages we made sure that the text is as descriptive as possible in terms of helping visualize the locations of each pathway branch and how they connect to the facial structures.
Two user testing sessions have been completed so far, all with participants from the tool's target audience (Communicative Sciences & Disorders students). The first session included two participants testing simultaneously, while the second featured a single participant. Both sessions included a brief pre-test interview, observed use of the tool, and a post-test interview.
All three participants found the tool straightforward to use and beneficial for studying. They liked CN Vision's interactive nature, the coloring of the pathway branches, the "fun" provided by the gamification and homunculus character, and the layering effect.
Challenges participants mentioned included the dual-device requirement, maintaining hand position for extended periods while exploring the pathways, and slight confusion about several navigational elements (specifically, noticing that there are two Info buttons on the page for pathways IX and X, and hesitation in locating the homunculus informational page).
Based on this feedback, the following changes have been made or proposed:
Change the headings for the branches on the pathway information pages to match the color in the pathway visualization (rather than all headings being dark gray).
Create printouts of the target image that students can use in lieu of a second device.
Add further learning content, for example the option to display symptoms of pathway damage.
Add an alternative, non-AR Explore Pathways Mode to allow users to explore the pathways without the need to maintain hand positioning for a significant amount of time.
Conduct further user testing, focusing on the tool's navigation.
CN Vision is still an ongoing project. Prior to being fully introduced to students as part of classroom instruction, we plan to implement the additional feedback items received in user testing thus far, and conduct further user tests with the target audience.
Additionally, CN Vision is currently pushing the ZapWorks platform to its limits. We would like to reupload the audio files to allow the built-in screen reader to function, and create additional scenes to allow us to include a Submit button in Quiz Mode.
Lastly, we are considering submitting CN Vision to several conferences, where we could share it with an even wider audience.
To see a demonstration highlighting the features of CN Vision, watch the short video below!
To try out CN Vision yourself, just follow these instructions (two devices required):
Scan the QR code below to the left.
Give the website permission to access your camera and motion.
Point your device camera at the target image below to the right while you’re in Explore Pathways Mode.
Explore CN Vision!
To read a detailed summary of the design and development process, please browse through the slideshow below.
The key aspects of the research and design I participated in are as follows:
User Storyboard
Content Map (collaborative)
Wireframes
Animation Storyboards (collaborative)
Lead Graphic Design
Lead UX/UI Design
Animation
Lead Augmented Reality Design
User Testing Planning and Moderation (collaborative)
Demo Reel Filming and Editing