OSCAR · INDIVIDUAL CONTRIBUTOR

Building for our Concierge teams

I joined Oscar in April 2019 as one of two product designers on the Concierge team. I was on the team for 1.5 years before moving over to Care Delivery (which you can read more about on the Projects page). On Concierge I worked on a range of projects, from small quality-of-life enhancements, to rapid execution of COVID protocols, to blue-sky thinking on how we could integrate benefits checking, cost estimation, and care routing into one tightly coupled system. Below you can read an in-depth case study, plus short descriptions of a few other notable projects.

Team overview

Concierge is our customer support team. Each time a member calls in, they reach a member services agent, whom we admiringly call a Care Guide. The name reflects our dedication to providing the highest-touch guidance to all members, so they can rely on Oscar to take them through the health insurance journey. The Concierge Tech team is responsible for building the tools that help Concierge answer any member question, including checking a member’s benefits, searching for or recommending providers and facilities, and providing cost estimates for the member’s future care.

During my time on Concierge I had the privilege of working on a vast array of projects. Things move pretty fast on our teams, which means we get to work on a lot of cool stuff. Two quick stats:
40+
I’ve designed over 40 projects, ranging from “XS” half-week quick fixes to “XL” quarter-long greenfield projects.
88%
Of the projects I’ve designed have been built. This is a testament to our team and our focus on shipping often.
What I cherished most about my time on Concierge was the ability to routinely shadow our Care Guides on their member calls. It was always one of the most motivating activities I did, from seeing how we could positively impact our members to watching where Care Guides struggled through the app and feeling the fire to fix it.

A few notable projects

On Concierge, I had the opportunity to collaborate with Product Managers on the team vision, lead discovery research to determine which experiences to prioritize based on company vitals, have spirited conversations with engineering on feasibility, and work on the ground with our Care Teams to understand their day-to-day. Some notable projects I worked on include:

Bringing cost estimation into the care routing experience

One of the last and most exciting projects I worked on as part of the Concierge Tech team was figuring out how to effectively bring cost estimation into the equation when members are searching for care. My biggest personal learning from this project was how much I had underestimated the nuances of the cost conversation with members. Did they want to see actual numbers, even if they were averages? Or ranges? Did they want to see relative rankings instead? Would they want some mix of both? Did they care how the number was calculated? How sensitive were members depending on how critical their need for care was? These questions only brought about more questions. In the end, we decided to run an A/B test on our simplest application to see if we could drive any change. We ended up generating a positive 13% shift in the direction of our KPI!

Behavioral Health Routing

A purely research-and-discovery project I worked on was understanding how our Concierge teams could improve routing for mental and behavioral health. From the start it was clear we could not treat this experience the same as medical routing. I spoke with teammates who had worked in arts therapy and at behavioral-health-focused healthcare companies, conducted an in-depth competitive analysis of direct and secondary competitors, and produced personas for our cross-functional teams to align on.
[Image: behavioral health personas]

Design Sprint: Rethinking cost and coverage member conversations

Over the years, the two Concierge product teams developed several tools to help our Care Guides through cost estimation, coverage and benefits, and routing questions. In early 2020, as yet another tool was being thought through, the other designer and I began feeling the urgency to streamline these workflows. There were three main problems that we saw:
  1. We are not meeting members where they are, and are instead asking that they learn complex healthcare concepts in order to get cost estimates.
  2. Care Guides currently have access to 4 different tools to answer a cost-related call. There is enough overlap that the tools are somewhat redundant, but not enough for any one to replace another. It is not clear which tool a Care Guide should use given a certain set of information.
  3. Members who are further upstream in their care routing journey (don’t know a provider or facility, don’t have a CPT code) are not getting cost estimates.
Our goal for the design sprint was to “Create a Northstar vision of the best possible value-driven interaction we can have with our cost transparency capabilities.”
[Image: design sprint Figma board]

Case study: Untangling provider profile data

Problem statement

One of the most complicated engines I witnessed during my time at Oscar is how provider data and network status interact. A member’s ability to see any given doctor at in-network prices depends on the provider’s contracted network, which TINs (Tax Identification Numbers) they are contracted under, the Oscar plans in that network, and how the provider ends up billing – just to name a few major factors. I am far from an expert in all the nitty-gritty nuances of this field.

As our networks grew and this decision tree sprouted more and more branches, we realized our current profile pages were not set up to handle those changes well. We were not showing the most accurate data at the most relevant level.
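To make the problem concrete, here is a minimal sketch of the kind of decision logic involved. This is an illustrative model only – the types and names are hypothetical, not Oscar’s actual data model.

```typescript
// Hypothetical model of the factors that feed into network status.
interface ProviderContract {
  tin: string;         // Tax Identification Number the provider bills under
  network: string;     // contracted network, e.g. "AZ-Individual"
  effectiveFrom: Date; // contracts change over time
  effectiveTo?: Date;  // open-ended if still active
}

interface Provider {
  npi: string;
  contracts: ProviderContract[];
}

// A member can see a provider at in-network prices only if some contract
// active on the date of service matches a network attached to their plan.
function isInNetwork(provider: Provider, planNetworks: string[], asOf: Date): boolean {
  return provider.contracts.some(
    (c) =>
      planNetworks.includes(c.network) &&
      c.effectiveFrom <= asOf &&
      (c.effectiveTo === undefined || asOf <= c.effectiveTo)
  );
}
```

Even this toy version leaves out billing behavior and vendor relationships – exactly why the real decision tree kept sprouting branches.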

Process overview

  • October 2019 – Exploratory research and interviews in Tempe, AZ.
  • November – Ran a half-day workshop with stakeholders for idea generation. Began what became my sheet to rule all sheets.
  • December – Conducted remote user interviews on early ideas to rank desired features. Measured impact vs. effort on top ideas and created a list of MVP features.
  • January 2020 – Held a Crazy 8s session with those features to kick off design iteration!
  • February – User testing on mid-fidelity mocks. Began documenting all necessary states, edge cases, data scenarios, and profile variations.
  • March – Conversations with the team about how to break down and phase the work.
  • April – Visual refinement, fleshing out states, prototyping animations.
  • May, June – Implementation on hold as we scrambled to respond to COVID!
  • July – Resumed implementation and QA cycles.
  • August – First phase of the project MVP launches 🎉

Stepping into our users' shoes

Once every other quarter, our team has the opportunity to go down to Tempe, AZ.

Tempe is where our HUB is located, and the HUB is where all of our wonderful Concierge team members work. Our trips to Tempe usually require 1-2 weeks of preparation, where we weigh upcoming projects and company priorities. Once we have a set of projects we want to explore, we put together thorough research plans and interview scripts to gear up for the trip. Once in Tempe, our days are typically packed with user interviews, shadowing sessions, and lunch catch-ups. Provider profiles was one of the big exploratory projects for our trip in October 2019.

3 Research methods

To make the most of our trip, I set out to perform 3 different user research methods.
Method 1: Paper surveys
During this trip we tried something slightly different from our normal in-person interviews. We put printed surveys at the entrance of the kitchen to see if we could catch people on their breaks. And it was a success! We received 45 responses in 2 days – far more than expected!
Method 2: Focus groups
We conducted a total of 3 group user interview sessions. Each lasted 1 hour, with 2-3 Care Guides participating.
Method 3: Lightning sketches
As a change of pace, at the end of our user interviews we also had Care Guides put on their creative hats and sketch their “Magic Genie” provider profile pages. It was a lot of fun for us, and hopefully for them too!

Consolidating notes

After our trip, I spent several hours compiling all of our research into one centralized document. After doing so I pulled out the major themes across the three research methods and formatted them in a way we could share out with the wider team.

3 Takeaways / focus areas

  1. Network status is extremely confusing to figure out on the current page. A seemingly simple “Will I be covered if I see this doctor?” causes Care Guides a lot of headaches. Between the colored status pills and the confusing network timelines, this question was not easy to answer.
  2. The provider’s ecosystem was difficult to understand. What were the vendor relationships? How is this provider contracted with Oscar? Which hospital systems are they affiliated with? All of this was either inaccurate or very cryptic.
  3. Important data was often missing or out of date. Simple things like a phone number or email address might be off by one character, with no easy way for the Care Guide to correct it.

Team workshop

Using the main takeaways, I put together a half-day workshop for my product team and our key provider data stakeholders. There were 10 participants in total, jamming together for 3.5 hours with good conversation and yummy snacks.

The goal of the workshop was:
“Generate in and out of the box ideas for these main pain points discovered in Tempe.”

Speed dating the ideas with Care Guides using the Kano Analysis Model

I wanted to take the ideas that had not yet been discussed with Care Guides and find a way to get a quick gut check on them. Now, how to do this was the tricky part. I knew that directly asking anything like “would this be helpful?” would not produce the insights we were looking for. Then I remembered coming across something called Kano analysis in the past.

Kano analysis is precisely for ranking features by whether the product must have them to serve its purpose, whether they would be nice to have, or whether they would harm the product. However, we did not want to ask Care Guides to consider this for every idea. So, I went back through all the ideas and figured out ways to combine them into similar scenarios. Using these, I sent out a remote survey where each representative idea was asked about in this format:

[Feature explanation]. How would this affect you?
  • This is a basic requirement for me
  • This would be very helpful to me
  • This would not affect me
  • This would be a minor inconvenience for me
  • This would be a major problem for me
The results were pretty conclusive, and extremely exciting to see!
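As a rough illustration of how responses in this format can be tallied, here is a minimal sketch – the names are hypothetical, not the actual survey tooling we used:

```typescript
// Simplified single-question Kano-style tally.
type KanoAnswer =
  | "basic requirement"
  | "very helpful"
  | "no effect"
  | "minor inconvenience"
  | "major problem";

function tally(responses: KanoAnswer[]): Map<KanoAnswer, number> {
  const counts = new Map<KanoAnswer, number>();
  for (const r of responses) counts.set(r, (counts.get(r) ?? 0) + 1);
  return counts;
}

// One possible "importance" signal: the share of respondents who called
// the feature a basic requirement or very helpful.
function importance(responses: KanoAnswer[]): number {
  const c = tally(responses);
  const positive = (c.get("basic requirement") ?? 0) + (c.get("very helpful") ?? 0);
  return positive / responses.length;
}
```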

The sheet to rule all sheets

We documented outputs from the research and workshop in one master sheet. As part of the workshop we also did a “push the boundaries” activity, where we took MVP ideas and made them bigger, and vice versa took lofty ideas and brainstormed MVP versions. These all went into the spreadsheet. From the Kano analysis exercise, we added a relative “importance” score to ideas we didn’t already have qualitative feedback on.

At this point, I began running meetings with our team and the provider data platform team, which ingests all provider data. We needed their help to understand the feasibility of each of these projects, so I created another column in the sheet for their estimated feasibility on a scale of 1-10:

1 – This can be done today without our support
10 – This is not anywhere on our near term roadmaps

These estimates, you guessed it, went into the spreadsheet. The final master spreadsheet became the source of truth for all features now and in the future, sorted by user feedback, tagged with engineering effort, and filled with team ideas.
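As an illustration of how such a sheet can be sorted, here is a hypothetical scoring sketch – not the exact formula we used:

```typescript
// Rank ideas by user importance, discounted by engineering effort.
interface FeatureRow {
  name: string;
  importance: number;  // 0-1, from Kano survey and interview feedback
  feasibility: number; // 1 = can be done today, 10 = not on near-term roadmaps
}

function prioritize(rows: FeatureRow[]): FeatureRow[] {
  // Map feasibility onto 0-1 where 1 means "easiest to build".
  const score = (r: FeatureRow) => r.importance * ((10 - r.feasibility) / 9);
  return [...rows].sort((a, b) => score(b) - score(a));
}
```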

Let the (iteration) games begin!

We scoped the MVP set of features from the top of this list and set out to begin the iteration loops!

Competitive analysis

While it’s quite hard to find internal-facing examples through a competitive analysis, I was able to aggregate quite a few direct and analogous examples as inspiration.

Crazy 8s

To kick off design ideation, I held a Crazy 8s brainstorming session with my immediate team. I gave them the list of MVP and V2 features we had discussed, as well as the competitive analysis, as artifacts to work from.

Wireframe, test, rinse, repeat

After digitizing my team’s Crazy 8s ideas, I dove into heads-down ideation time: grouping ideas, branching off of others, and covering Figma with artboards.

Data scenarios and "edge" cases

The core of this project was to streamline mountains of data, so naturally there were a lot of missing or inaccurate data elements to handle in the UI. My PM put together a data map with our service owners, which became extremely helpful in this process. Along with that, we also worked together on a table of edge cases and how the UI would handle each. As I continued to iterate on the profiles, I met regularly with the following audiences for updates, feedback, technical checks, etc.: PM & Tech Lead, our engineering team, the service teams, and my design critique group.
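To give a flavor of what those edge-case rows looked like in practice, here is a hypothetical sketch of a missing-data fallback – the field names are illustrative, not the real schema:

```typescript
// A provider profile where several fields may be missing or stale.
interface ProviderProfile {
  name: string;
  npi?: string;   // occasionally absent from upstream sources
  phone?: string; // may be out of date or mistyped
}

// Surface missing data explicitly rather than hiding the field,
// so Care Guides know to verify instead of assuming.
function displayPhone(profile: ProviderProfile): string {
  return profile.phone ?? "No phone on file – please verify";
}
```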

Design decisions

Data visualization for network status
Understanding linear time progression is critical for deciphering network status. However, the data can be messy and conflicting at times. What is the simplest way to communicate these complexities?
Interaction designs to reveal demographic info
An interaction that took a lot of iterating to settle on was how to display top-level provider data: provider name, NPI, specialties, heads-up notes, profile picture, and actions. Because the underlying page had multiple in-page tabs, I wanted a method where this information could be viewed from anywhere, without back navigation, and I tried a couple of options before settling on the final pattern.
Creating member specific views of the data
One of the ideas we heard often in response to our “Magic Genie” question was: “It would be so cool if you could just enter a member ID, and the page only tells you what’s relevant to that member.” We had to work very closely with engineering to understand all the nuances of passing this additional parameter into the data.
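As a rough sketch of what that parameter looks like on the wire (the endpoint and names here are hypothetical):

```typescript
// Fetch a provider profile, optionally scoped to one member's plan.
async function fetchProviderProfile(npi: string, memberId?: string): Promise<Response> {
  const url = new URL(`https://example.internal/providers/${npi}`);
  if (memberId !== undefined) {
    // With member context, the service can resolve network status and
    // cost estimates against that specific member's plan.
    url.searchParams.set("memberId", memberId);
  }
  return fetch(url.toString());
}
```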

From a design perspective, conveying this “member context” was a bit tricky. I looked at ways to flag data elements, add member-specific indicators to section headers, badge certain UI elements, etc. Then one day I was sharing my screen on Slack, with its colored border around the share, and it dawned on me that that would be a great UI for this! I started noticing the same pattern in Google Meet, Figma’s inspect mode, etc.

In the final design, when the member context is applied, the input box’s borders expand to encompass the rest of the page, signifying the transition into a member-specific view of the page.
Maximizing usable vertical space
One of my personal growth goals that I got a chance to work on in this project was animation. I wanted to challenge myself not to see projects so statically – to envision how motion can be used to convey change, represent transitions, benefit the UI, or afford behavior. In this project, I found two great places for that: the member context shown above, and the page header.

When a user begins scrolling down the page, the large header housing multiple navigational elements and page actions collapses into just a small navigational bar. When the user scrolls up, the full header is pulled down as well.
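The behavior is the familiar collapsing-header pattern; a minimal browser-side sketch (class names are hypothetical) might look like:

```typescript
// Collapse the large header into a slim nav bar on scroll down,
// and restore it on scroll up.
const header = document.querySelector<HTMLElement>(".profile-header");
let lastScrollY = window.scrollY;

window.addEventListener("scroll", () => {
  if (header === null) return;
  const scrollingDown = window.scrollY > lastScrollY;
  header.classList.toggle("collapsed", scrollingDown && window.scrollY > 0);
  lastScrollY = window.scrollY;
});
```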

Visual refinement

I conducted several rounds of user testing as we neared the final stages of the MVP designs. As part of the visual refinement process, I redlined and outlined all the UI states for each component that was not in our design system. Two examples below show new components with their fleshed-out UI states and data handling.

Final product