Server Tagging

The server tagging application was created to provide engineers with a way to add attributes to their servers.


At the time, approximately 12,000 production servers existed in the Nordstrom environment. Nothing was known about these servers.

When an issue arose on a server, it was impossible to know which team to contact and which applications were impacted.


We were looped into this project after development but prior to release. The stakeholders were eager to ship and expected a thumbs-up from UX. (Note: this was the team’s first time engaging UX, so the workflow was less than ideal.)

Their app confused us, but we knew that without data our feedback wouldn’t interest the team. My teammate and I flipped into guerrilla usability testing mode: we created a test script, recruited four engineers, ran the study, and produced a results video in under 36 hours.

Since the team was resistant to feedback, we chose to create a short video, less than six minutes long. That was enough time to convey the results, but not enough for the audience to lose focus or object.

When the stakeholders saw four engineers fail the task of tagging a server, they understood this product couldn’t release in its current form.

Together we prioritized three use cases and agreed each must be successful prior to release.

Prioritized Use Cases:

  1. An engineer searches for a single server and tags the server.
  2. An engineer searches for multiple servers and tags multiple servers.
  3. An engineer searches for a server using one or more tags.
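To make the three use cases concrete, here is a minimal, hypothetical sketch of a tag store that supports them. All names here (`TagStore`, `tag_server`, `find_by_tags`) are illustrative, not from the actual application:

```python
# Hypothetical in-memory tag store illustrating the prioritized use cases.
# Names and structure are assumptions, not the production implementation.

class TagStore:
    def __init__(self):
        # server name -> dict of tag key/value pairs
        self.tags = {}

    def tag_server(self, server, **attrs):
        """Use cases 1 and 2: attach attributes (tags) to a server."""
        self.tags.setdefault(server, {}).update(attrs)

    def find_by_tags(self, **attrs):
        """Use case 3: return servers matching all of the given tags."""
        return sorted(
            server for server, tags in self.tags.items()
            if all(tags.get(k) == v for k, v in attrs.items())
        )

store = TagStore()
store.tag_server("web-prod-01", team="checkout", env="prod")
store.tag_server("web-prod-02", team="checkout", env="prod")
store.tag_server("db-prod-01", team="data", env="prod")
print(store.find_by_tags(team="checkout"))
# ['web-prod-01', 'web-prod-02']
```

Searching by multiple tags simply intersects the matches, so an engineer can narrow from “all prod servers” down to one team’s servers.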

We discovered our second use case during the initial usability study. Engineers may support anywhere from one to n servers, and those servers often share a name prefix, so engineers wanted to tag them simultaneously.
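The shared-prefix pattern can be sketched as a simple bulk operation. This is an illustrative sketch under assumed names (`tag_by_prefix`, an `inventory` dict), not the application’s actual code:

```python
# Illustrative sketch of use case 2: bulk-tag every server whose name
# starts with a shared prefix. Not the production implementation.

def tag_by_prefix(inventory, prefix, tags):
    """Apply the given tags to all servers whose name starts with prefix.

    inventory: dict of server name -> dict of existing tags
    Returns the list of server names that were tagged.
    """
    matched = [name for name in inventory if name.startswith(prefix)]
    for name in matched:
        inventory[name].update(tags)
    return matched

inventory = {
    "checkout-web-01": {},
    "checkout-web-02": {},
    "inventory-db-01": {},
}
tagged = tag_by_prefix(inventory, "checkout-web", {"team": "checkout"})
print(tagged)
# ['checkout-web-01', 'checkout-web-02']
```

One search, one update: the engineer types the prefix once instead of tagging each server individually.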

Here’s a clip from the initial test that shows one participant attempting to return multiple results:

In the video, the participant tries using a wildcard (*) to return the results. His reaction after seeing the search results: “No, that’s bad.”

Interested in seeing more from this study?
Test Script


With our prioritized use cases in hand, we returned to sketch solutions. We felt the user’s mental model had two distinct paths:

  1. Tagging server(s)
  2. Finding info about server(s)

We used these paths to redesign the application’s home page. We created tabs to separate the activity. In addition, we reduced the amount of information on the page and added instructions for clarity.

See the clip below from one of the Axure prototypes we created after refining our sketches:

Although the participant in the usability test video used a wildcard (*), our developers informed us that supporting wildcard syntax would take more time. Partial search already worked without the wildcard, which is why we also included the example text below the search box.
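The distinction can be shown with a small sketch of substring matching, where a query matches without any wildcard but a literal `*` matches nothing, as the participant discovered. The function name and data are illustrative assumptions:

```python
# Hedged sketch of partial (substring) search needing no wildcard.
# Illustrative only; not the application's actual search code.

def search_servers(names, query):
    """Return every server whose name contains the query string."""
    q = query.lower()
    return sorted(n for n in names if q in n.lower())

names = ["checkout-web-01", "checkout-web-02", "inventory-db-01"]
print(search_servers(names, "web"))
# ['checkout-web-01', 'checkout-web-02']
print(search_servers(names, "web*"))
# [] - a literal '*' is treated as part of the query and matches nothing
```

With substring matching, the example text under the search box (“type part of the server name”) carries the weight that wildcard syntax otherwise would.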

A few other features we added to improve the efficiency of the server tagging experience:

  • the ability to “Select All”
  • results pagination
  • a second update button at the top of the results


We tested the prototype with engineers and found the task success rate increased to 100%.

Since we had worked with our developers during the refinement process, we were able to reconvene with our stakeholders and provide a timeline for the changes.

They released a week later. Within the first month, 92% of servers were tagged.

Here are some screenshots from the application in production:

Search for servers to tag tab.
Search for tagged servers tab.