Dell EMC Pulse Dashboard
Visualizing complex data storage environments.
Monitoring and managing large amounts of data has become essential for many businesses, from startups to enterprise operations. While some tools exist to monitor this data, a common pain point for storage administrators is monitoring multiple data storage environments across disparate products. Partnering with Dell EMC, we sought to address this problem by designing Pulse, a heterogeneous data storage monitoring tool that collects performance data from multiple storage environments, aggregates metrics, and gives storage administrators monitoring capabilities ranging from at-a-glance views down to detailed metadata.
Role: UX Designer, Visual Designer
Time: 6 Months
Course: Capstone Project
Result: Interactive Prototype
Designing for Data Storage
I worked on a team of three to rethink how complex data storage environments can be visualized to improve the data monitoring experience for storage administrators. My role focused on the visual design, information architecture, and key interactions within the system. Additionally, I assisted in conducting research to help understand the specialized domain we were working in.
Design Problem
The data monitoring experience becomes disjointed and complex as an organization begins to scale out its data storage products.
How might we improve the data monitoring experience across heterogeneous storage environments?
Research
Literature Review
Our first step into research helped us understand what a heterogeneous data environment is, how it is structured, and why so many companies use one. This gave us insight into our target users, their workflow, and the current industry standards for metrics and terminology.
Competitive Analysis
Our competitive analysis consisted of a preliminary review of software offerings and competing hardware ecosystems. We quickly learned that heterogeneous data monitoring tools face limited competition, and that organizations have not yet adopted a single product standard.
Interviews
We conducted an initial SME interview at Dell, followed by five user interviews with data storage administrators at various companies. Through this, we were able to identify the primary pain points in the heterogeneous storage monitoring workflow and narrow our scope.
Key Findings
The current monitoring workflow has storage admins using disparate tools and other “homegrown” solutions to monitor different systems.
There is a lack of consistency in metric terminology from product to product.
Users want the ability to drill down to metrics and metadata on individual, remote hardware as necessary.
The Problem Visualized
Existing Workflow
A storage admin’s monitoring duties are split across disparate interfaces that accommodate different storage units and their health metrics.
Proposed Workflow
Give storage admins a single source of truth to monitor the health of their storage units across their entire environment from one interface.
Terminology & Hierarchy
Additionally, we discovered that while all data monitoring products display similar metrics, the names under which those metrics live vary from product to product. With this insight, we consulted the terminology and hierarchy used within Dell products to create clear, definitive explanations for terms related to metrics, KPIs, and hierarchy. This gave us the alignment necessary to begin the design phase of our solution.
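To make that shared vocabulary concrete, here is a minimal sketch (in TypeScript, with hypothetical type names and alias strings, not actual Dell terminology) of how vendor-specific metric labels could normalize onto one set of KPIs, and how hardware rolls up through the environment, cluster, node, and drive hierarchy we aligned on:

```ts
// Canonical KPIs shared across products (the four our users cared about most).
type Kpi = "capacity" | "latency" | "iops" | "bandwidth";

// Each product reports the same KPI under a different label; a lookup table
// like this normalizes labels before metrics are aggregated on the dashboard.
// The alias strings here are illustrative, not real product terminology.
const vendorMetricAliases: Record<string, Kpi> = {
  "Used Capacity": "capacity",
  "Response Time": "latency",
  "IO/s": "iops",
  "Throughput": "bandwidth",
};

// The hierarchy Pulse drills through: environment > cluster > node > drive.
interface Drive { id: string; capacityGb: number; }
interface Node { id: string; drives: Drive[]; switches: string[]; ports: string[]; }
interface Cluster { id: string; product: string; nodes: Node[]; }
interface StorageEnvironment { clusters: Cluster[]; }
```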
Ideation
Having collected our research, we took our findings to the whiteboard to identify the scenarios our solution would focus on addressing. To do this, we conducted an AEIOU exercise to distill the research into actionable items. This allowed us to cast a wide net and document the Actions, Environments, Interactions, Objects, and Users we had observed. We then used this exercise to drive a discussion with our stakeholders and scope our design outputs accordingly.
Defining Scope
After brainstorming a series of scenarios in which users would use our solution, we found that all of these actions could be consolidated into five scenarios: System Setup, Performance Monitoring, Error Detection, Troubleshooting, and Forecasting Analysis. After regrouping with our stakeholders, we decided to focus our project scope on the Monitoring, Error Detection, and Troubleshooting actions. These were identified as the most valuable and would deliver the most impact given our time constraints.
The User Journey
Using the three key scenarios from our scoping exercise above, we combined them into one cohesive user journey. This journey map outlines the current process a data storage administrator goes through when identifying an issue in their environment. Outlining the pain points at each phase of the journey gave us a much clearer picture of our opportunities to improve this process with our own solution.
Sketches
With our problem and opportunities more clearly defined, we created a series of sketches of interface ideas that could meet our users’ needs. In our sketches, we focused on outlining features and workflows based on the scenarios created from our research earlier in the process. These early concepts focused on the metrics users wanted to see, where they would see them, and how they could identify and troubleshoot a problem in their environment.
Interactive Wireframe Diagram
After regrouping around everyone’s sketches, we gained alignment by outlining design requirements, which were then translated into a series of wireframes. The wireframes aimed to explore the opportunities in our journey map in more detail by outlining the content strategy of the dashboard screen, the cluster and node detail pages, and the notification activity page. I linked these screens together into a clickable prototype so we could test them with users before heading into the final design.
User Feedback
Due to time constraints, we didn’t have the opportunity to conduct a formal usability test of our wireframes, but we were able to get feedback on our design direction from the users we had interviewed previously. In these feedback sessions, we paid close attention to how users responded to the terminology being used and to the workflows of monitoring their environment, identifying an issue, and troubleshooting an issue.
Key Findings
Capacity, Latency, IOPS, and Bandwidth are the key metrics users are concerned about viewing.
Users want to see a visual of the physical nodes for more context when troubleshooting.
Users would like the ability to pin metrics to their top-level dashboard to see data at a glance.
Users like the aggregate-level views of metrics, but want the ability to customize them further.
Color Palette
To ensure we adhered to a consistent design language, I outlined a color palette that uses a variant of Dell’s brand colors. The palette includes gradients to create a livelier, more modern aesthetic in the interface. This was a goal of ours, since other data monitoring tools felt stuck in the past in their visual direction.
Visual Design
Using the color palette above, we sought to address the findings from our user feedback sessions in the final visual design. The result is the Pulse Dashboard, which can be used to monitor the health and performance of heterogeneous storage environments at both large and small companies. The visual design uses a “dark mode” aesthetic to accommodate the darker environments storage administrators often work in. Additionally, our stakeholders suggested we look at video game dashboards for inspiration, so our final design makes use of color in creative ways to highlight KPIs and actions.
Metrics at a Glance
While the dashboard acts as a one-stop shop, letting users view all of the metrics they care about at the top level, the storage tab gives users even more control over monitoring cluster performance. In a list view, users can view a cluster’s key performance metrics beside those of other clusters to cross-compare the health of their data environment, streamlining the data monitoring process.
Pinnable Architecture
To avoid making storage admins drill down into specific clusters each time they want to monitor performance at the cluster or node level, we let them click any metric tile to see a list of actions, including the ability to pin that metric to the dashboard. This delivers a highly customizable interface, allowing users to always see the information most relevant to their workflow at the top level.
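As a rough sketch of that interaction model (hypothetical types and function names, shown in TypeScript purely for illustration), pinning could amount to appending a scoped metric reference to the user’s dashboard state while skipping duplicates:

```ts
type Kpi = "capacity" | "latency" | "iops" | "bandwidth";

// A pinned tile references a KPI at a given scope: a whole cluster, or a
// specific node within that cluster.
interface PinnedMetric {
  kpi: Kpi;
  clusterId: string;
  nodeId?: string; // omitted for cluster-level pins
}

// Returns a new dashboard list; duplicates are ignored so the same tile
// never appears twice at the top level.
function pinMetric(dashboard: PinnedMetric[], pin: PinnedMetric): PinnedMetric[] {
  const exists = dashboard.some(
    (p) => p.kpi === pin.kpi && p.clusterId === pin.clusterId && p.nodeId === pin.nodeId
  );
  return exists ? dashboard : [...dashboard, pin];
}
```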
Hardware View
Context is key when troubleshooting issues in a data storage environment. The node level is the deepest a user can drill down, since the node is the hardware component containing the individual drives that hold the data itself. As a result, I designed this page to communicate the various switches, ports, and hard drives associated with the node, so users can understand the context and quickly identify the root cause of a performance issue.
Key Workflows
The primary workflows of our tool are a direct reflection of the scenarios and journey map we outlined early on in our design process. The following interactions highlight key screens for each phase of the user journey.
Monitoring: Our main focus here was twofold: easy-to-read, at-a-glance performance metrics and customizability by the user.
Error Detection: Being notified of errors and the context surrounding them was a resounding ask from our users. They wanted the ability to drill down to the problem source quickly and with ease.
Troubleshooting: Identifying errors is one thing, but if the source of the error is located at a data center in another state, there needed to be an easy way for users to quickly contact site engineers.
What I Learned
For a complex and niche domain such as data storage, a lot of legwork was needed to fully understand the problem, and I was concerned we would have difficulty communicating it to a general audience. This project taught me that a clear and concise use case, coupled with simple visuals, goes a long way in demystifying a problem. Focusing our scope on a narrow set of scenarios improved the quality of our work and gave us the opportunity to tell a much more compelling story.
If I were to change anything about our project, I would have liked to shadow some of our users to gain a better understanding of their workflows in the environments where they work. Additionally, we had to make a few trade-offs in the final visual design, and it would have been nice to test that design with users to validate our decisions.