The personal history, team, and evaluation screens

Loopback is a tool that provides feedback through regular Emergency Medical Technician (EMT) evaluations. It helps Emergency Medical Services (EMS) responders see where they need to improve, and helps supervisors choose training tasks based on easy-to-spot problem areas in team evaluations.

Mapping and Flows
The structure of our application is straightforward: we have three top-level categories (Evaluations, History, Team) accessed through a navigation bar at the bottom of the screen. Subpages under each of these top-level categories support activities such as filling out an evaluation or drilling down into a single evaluation report.

Research and Design
Training is critical for EMTs: they undergo rigorous training to obtain their certification and are also evaluated after every run so that feedback can improve their performance in the field.
Loopback allows EMTs to evaluate the work of junior officers after runs, shares that information with the EMT and their superior, and gives both parties an easy way to see what problems occurred in the field and which training might help address them. Evaluations are saved and aggregated for each EMT, allowing them to track their progress over time and encouraging improvement that can lead to promotion.


Preliminary Research
The project began with online research about EMT life, the structure of EMT hierarchies, and their responsibilities. Through EMT-focused subreddits, our team tapped into specific conversations EMTs were having about problems and concerns at work. On top of that research, we interviewed a supervisor at the Carnegie Mellon University EMS, who explained how evaluations work at his station and how training tasks are chosen. From him we learned that constant training takes place during the team's long periods of downtime.


Follow-Up Interview
Based on the preliminary research, we decided an app on a mobile device could help with feedback, evaluation, and training. We re-interviewed the CMU EMS supervisor and asked more detailed questions about these topics and how they could be improved. He highlighted the importance of comments in addition to simple numerical ratings: comments provide context and justify substandard ratings. Since he is not on every run, comments help him understand the issues his team is facing.


Ideation
We brainstormed the problems and solutions that could be addressed in this space and came up with a number of possibilities. After putting those ideas into an effort vs. impact matrix, we chose feedback and training because it was a feasible problem to tackle and could make a significant impact on the evaluation and feedback issues we had studied.

Loopback


With Sharon Lin, Israel Gonzales, Zac Aman for Interaction Design Fundamentals, Fall 2014 at Carnegie Mellon University

Design Insights

Major and Minor Axes

One of the main problems was that rows and columns were initially given equal visual weight, making the distinction between them confusing. To solve this, we made rows the major axis, keeping each run together, and made the criteria columns secondary.


Our initial concept for the history was an abstract grid of colors with minimal labeling. After receiving feedback that it was too abstract, we added labels and iconography to make the meaning of each square clear.

Personal History Screen
One of the most interesting and challenging screens holds an EMT's personal evaluation history. The intention was to show trends at a glance while also providing navigation both to single-criterion drill-downs and to individual reports. The challenge was to provide enough context about each evaluation on the screen without interfering with interpretation of the data or intuitive navigation.

Final high-fidelity mockups organized into a flow diagram

This design process underscored the importance of visually reinforcing data so that it can be interpreted at first glance. We went a step further and added features that suggest training based on team performance and interpret the team data for supervisors, making their job easier. By stepping into the shoes of the EMTs who will use this tool, we developed an interface that is both engaging and informative.


Brainstorming ideas and organizing the results

Early iterations of the personal history screens

Later iterations of the personal history screen, showing detail screens for a specific run, with comments

Early team evaluation screens with minimal labeling, which proved to be too abstract and hard to understand.

Team view screens, showing suggestions for training, highlighted run information, and team status


Early stage flow diagrams with low-fidelity screens