Darren Lasso
case study #2
How might we improve public access to US Treasury data?
Design execution
Design system
Public
Consulting

From 2017 to 2021, I led product design for the DATA Act program at the Treasury Department, focusing on making government spending data more transparent and accessible to the public.

A key initiative was addressing the accessibility and usability of government-managed datasets. We aimed to help the public easily discover, interpret, and trust this data through a well-documented, authoritative source.

This effort led to the creation of FiscalData.

Launch FiscalData
Framing the problem
Where is the authoritative source for US government spending data?

The Treasury manages a vast catalog of fiscal reports across various business areas, but there’s no centralized repository or consistent schema for how this data is shared with the public. This lack of standardization leads to a fragmented and frustrating user experience. Our goal was to create an MVP that built empathy with users affected by this problem while advancing the broader mission of improving public access to federal financial data.

50+
Datasets managed by different bureaus of the Treasury
14
Different data offices maintaining datasets
230 years
of inconsistently structured and documented data
Our goal is to provide the ideal customer experience for searching and downloading federal financial data.
- Timothy Gribben, Fiscal Service Commissioner
Timeline & process
Nov‘19
May‘21
01.
Project kickoff
Vision workshops
Pitch to leadership
02.
Discovery
User research
Share insights
03.
Vision
Ideation workshops
User stories
04.
Prototype
IA / Wireframes
Usability testing
05.
Deliver
Design MVP
Review code
06.
Continued delivery
New homepage
Continue iteration
I remained embedded in the team for the entire lifecycle of this project, from the initial pitch to the client, through research and iterative design, to the delivery of the MVP, and afterwards through v2.0 refinement.
Results
>50%
of Treasury API calls originate from FiscalData
20,000
Page views in first week (high traffic for Treasury sub-domain)
50
Fully documented, API-driven, machine-readable datasets
project kickoff
My role

As UX Lead for the Data Act program, I influenced the work and operations of five scrum teams and ten designers/researchers. I managed overall design execution while contributing as the lead IC, embedding myself into teams tackling the highest-priority projects.

For Fiscal Data, I led the initial client pitch, identifying and defining the opportunity during a visionary north star workshop. Acting as design lead, I connected research insights to design deliverables, supported requirements definition, ideation, and prioritization, and guided iterative design and evaluation of the product.

  • Partner with UXR on foundational research & synthesis.
  • Facilitate solution ideation & prioritization workshops.
  • Facilitate product team definition of MVP, stretch, and post-MVP requirements.
  • Manage design execution and evaluation across the entire product.
  • Oversee engineering implementation of designs.
Discovery
Following the data
The research team developed personas, one of which, the Data Analyst, we targeted as our core persona. We would tailor our MVP to them and evaluate our competitors through their lens. Because the Data Analyst was the broadest user type and aligned well with core business values, we believed that tailoring the experience to them would result in a positive experience across all personas.
Areas of focus & opportunity
After 20+ interviews, we were able to synthesize a number of key pain points and areas of opportunity to guide our solution.
Pain point 01.
Inconsistent search and discovery experience
Because Treasury datasets were maintained and hosted by so many different offices, it was difficult to know where to find them and which link was the 'authoritative' source.
Pain point 02.
Indulgent data visualizations
Data visualizations can help make a complex data story more approachable; however, if the audience is mainly interested in the raw data, a fancy visualization can become a distracting impediment.
Pain point 03.
Incomplete or noisy data
Different data owners can result in different standards for data consistency, documentation and completeness.
opportunity 01.
Common entry point for fast, reliable, trustworthy data
First and foremost we knew we needed to bring all of these treasury datasets under one roof, preferably served through a subdomain of treasury.gov.
opportunity 02.
Access to raw, filterable, downloadable, historical data
Analysts need more than static reports: they want to filter datasets down to the slices they care about, download them in common formats, and reach back through the full historical record.
opportunity 03.
Machine-readable data
Serving data in standardized, machine-readable formats lets analysts load datasets directly into their own tools and pipelines instead of re-keying numbers from static reports.
opportunity 04.
Well documented data
Ensure that data is consistently documented, that a data dictionary is always present, and that the data is historically complete.
opportunity 05.
Well documented APIs
Analysts are likely to want to tap directly into datasets to serve a variety of purposes, so a well documented API reduces the learning curve of using that data and ensures standardized usage of public APIs.
opportunity 06.
Common schema for data owners
Organized and well defined datasets enable that data to be more easily interpreted and simplifies API integrations.
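To make the "common schema" opportunity concrete, here is a minimal sketch of the kind of shared metadata contract a data owner might be asked to satisfy. The field names and validation rules are illustrative assumptions for this case study, not the actual Treasury schema; the example dataset is modeled loosely on the public "Debt to the Penny" report.

```python
# Hypothetical common metadata schema for datasets onboarding to the portal.
# Field names here are illustrative, not the real Treasury data standard.
REQUIRED_FIELDS = {"name", "summary", "data_dictionary", "update_frequency", "date_range"}

def validate_dataset(metadata: dict) -> list[str]:
    """Return a list of problems; an empty list means the metadata passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - metadata.keys())]
    # Every column in the data dictionary needs a definition so analysts
    # can interpret the data without guessing.
    for column in metadata.get("data_dictionary", []):
        if not column.get("definition"):
            problems.append(f"undefined column: {column.get('name', '?')}")
    return problems

example = {
    "name": "Debt to the Penny",
    "summary": "Daily total public debt outstanding.",
    "update_frequency": "daily",
    "date_range": {"start": "1993-04-01", "end": None},
    "data_dictionary": [
        {"name": "record_date", "definition": "Date the value was reported."},
        {"name": "total_debt_amt", "definition": "Total public debt outstanding."},
    ],
}
```

A shared contract like this is what lets every dataset detail page render the same boilerplate sections, regardless of which office owns the data.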
Vision
Identify and evaluate potential solutions
I facilitated a workshop with key decision-makers and developed a framework for prioritization so the team had a clear roadmap defining requirements for launch. From problem statements we developed opportunities; from opportunities we created epics and discrete user stories.
After working with the client and program leadership to prioritize features for MVP, I worked with stakeholders to break those features down into epics and then more granular user stories. The user stories became the basis for an information architecture map, which in turn informed an assumptive model of task flows documenting how users might achieve the goals outlined in each story.
prototype
Improve the discoverability of datasets
With prioritized features, user stories, and a rough information architecture in hand, I began prototyping. Search was critical for this data-driven site but lacked clear research goals, requiring extensive iteration during the wireframing phase.

While the search page's structure remained consistent with the initial wireframe, its details evolved significantly through user insights and rapid iterations. Without clear research recommendations, the search design emerged during the iterative process.

By exploring low-fidelity concepts with subject matter experts, I developed a design based on reasonable assumptions, later validated through usability testing. Key improvements included topic filters, the data displayed on search result cards, and clearer communication of keyword search results.

Low fidelity exploration

The early value of topical browsing

Internal conversations around tagging content suggested the concept would have limited impact for our MVP launch, so we abandoned it to focus on Topics. Topics were highlighted prominently on search, each with its own custom icon. Because the icons pushed results down the page, I looked for ways to minimize them, but in test after test users raved about them: they loved how quickly they could jump in and start reviewing topical datasets.
testing insights
What users told me:
Search feels a little disconnected from the results on the page.
It is really important to know which of these datasets has a data dictionary so that I can interpret what the metadata means and avoid errors in my analysis.
I like the idea of topics... can I search for datasets relating to financial summaries, for example?

design iteration

Determine what data to put in front of users

We continued to iterate and test with users, validating which dataset information was the most important to highlight on card summaries. The overall layout structure was refined considerably as other sections of the site were developed and layout patterns were defined.
testing insights
What users told me:
Why are the cards different heights? This would look more clean if everything was on a consistent grid.
How do I submit my search? Am I missing a button somewhere?
I would like to know from this results page how far back these datasets have collected data. Do I need to click into a card first to see if the dataset goes far back enough for my analysis?

High fidelity execution

Clearly communicated search results

The final design for search maintains much of its ancestry from the initial wireframes but shows a clear evolution. We introduced a natural-language summary of the user's search at the top of the page and added helpful previews for filters: for each pinned filter in the left-side panel, users see a dynamic preview of how many results that filter would return if activated. Usability scores were high, and users often remarked unprompted, "especially considering this is a government site, this looks great!"

Expand our audience with accessibility & responsive design

Accessibility is incredibly important to me. I strive for Section 508 and WCAG AA compliance in all of my digital products. I ensure that my designs have appropriate levels of contrast, that fonts and content are legible, and that system status is conveyed in ways that users with impairments can discern.

When appropriate, each screen of my prototypes is mocked up at three screen-size milestones (desktop, tablet, and mobile) so that the implementation team has clear guidance on adapting the UI/UX for different screen sizes and methods of interaction. I work closely with the dev team to ensure that compliance milestones such as keyboard accessibility are met.

It was very important to me that FiscalData elegantly adapt its experience for mobile users, and I had to fight for this work to be part of the MVP. Analytics from other tools in the ecosystem showed that mobile users were visiting legacy sites only to find broken experiences. When initial post-launch analytics showed that 40% of visitors used mobile devices, the fight proved worthwhile.
prototype
Summarize, download, and plug into datasets
Dataset detail pages standardize data reports for our users. This page underwent the most iteration of any section of the site. Users wanted easy access to clearly labeled and defined metadata, but they also wanted to preview and download data. This was a tremendous amount of information to fit in a browser window.

We put multiple variations of the ‘about’ section in front of users; each with a totally different interaction pattern for finding data.

low fidelity exploration

Testing early data-dense explorations

We put together an initial rough wireframe to map out the summary data that would be boilerplated for each dataset. Testing allowed the design to evolve iteratively as we learned more about the source data, and we worked closely with data owners to map the architecture of each dataset so that every dataset could be presented consistently. This effort required close coordination between design, research, data, development, and, of course, our users.
testing insights
What users told me:
I don't really want to look at a big data table in this format. I'd rather load this data into my own tools.
Can I download different file types here?
The API endpoint is helpful, but where are instructions on how to use the API?

design iteration

Refining our approach to better align to user mental model

Our initial design conflicted with some of the feedback we had interpreted from our user sessions: it devoted too much screen real estate to data visualizations. That big chart might have been visually striking, but it wasn't very relevant to our target user, who just wanted to download and interpret the data. As we iterated, we prioritized elements like the data dictionary and deprioritized charting.
I faced challenges in displaying extensive metadata content, including the API's data dictionary, notes, and known limitations. To address this, I facilitated A/B testing to evaluate different patterns, which led to categorizing the metadata into paginated tabs for a more organized, user-friendly layout.
testing insights
What users told me:
I didn't actually notice that there is a navigation bar on this page...
Is there any way to see the PDF versions of these reports? The monthly treasury statement for example releases a structured data report every month that I like to review.
I prefer the tabs over the collapsible sections. I like being able to see the contents of the data dictionary immediately because that is one of the first things I would reference on this page.

High fidelity execution

Focusing the content & hierarchy

Our final solution integrates insights from rapid iteration and user feedback. Key data is prominently highlighted at the top of the page for quick access, while granular historical data and API documentation are readily available for deeper exploration. Each dataset detail page serves as a comprehensive hub, empowering users to review and utilize government fiscal transparency data effectively.

Design system

Our design system was codified from the very start of production. I used version-control tools to synchronize components across multiple designers and teams (this was before Figma!). This approach enabled multiple designers to work consistently with defined patterns while coordinating consistent style documentation with the engineering team. Both the product team and end users benefit from a cohesive approach to system design.
deliver
The FiscalData launch
We launched on time, even ahead of the goal the commissioner had set for us, and he was happy to showcase the accomplishment in his annual progress statement that year. FiscalData launched with 18 datasets; that number has grown to 50 today, with more planned on the roadmap.

Over 200 years of well-documented, machine-readable, API-driven historical data is contained on the site.
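As a sketch of what "API-driven" means for an analyst in practice, the snippet below assembles a filtered, paginated request URL in the style of FiscalData's public API. The base URL matches the published docs, but the endpoint path, field names, and filter syntax shown are examples to illustrate the pattern, not a complete reference.

```python
from urllib.parse import urlencode

# Base URL as published in FiscalData's API documentation.
BASE = "https://api.fiscaldata.treasury.gov/services/api/fiscal_service"

def build_query(endpoint: str, fields=None, filters=None, page_size=100) -> str:
    """Assemble a filtered, paginated request URL for a dataset endpoint."""
    params = {"page[size]": page_size}
    if fields:
        params["fields"] = ",".join(fields)
    if filters:
        # Filters follow a field:operator:value convention.
        params["filter"] = ",".join(filters)
    return f"{BASE}/{endpoint}?{urlencode(params)}"

# Example: daily debt figures since 2020 (endpoint/field names illustrative).
url = build_query(
    "v2/accounting/od/debt_to_penny",
    fields=["record_date", "tot_pub_debt_out_amt"],
    filters=["record_date:gte:2020-01-01"],
)
```

An analyst could hand a URL like this to any HTTP client or notebook, which is exactly the "plug directly into the data" workflow the research surfaced.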
You are putting in place a foundation that I get to brag about all the time. I’ve demoed your work in Paris and represented the US to other countries. The work of this team is one of the most impressive efforts any of those governments have ever seen.
- Amy Edwards, Deputy Assistant Secretary, Dept of Treasury
When did fiscaldata.treasury.gov go live? This is the best government data portal right now.
- Chief Data Officer, Federal Highway Admin
continued delivery
Consolidating content
In my waning days supporting this team, I began exploring how to adapt the information architecture, navigation, and content to consolidate related content that had previously been managed on another of the Data Act's websites. This was primarily educational content meant to provide context and explanations for the numbers catalogued in our datasets, so it made sense for FiscalData to serve as the new hub for all of it.
expanding our audience

Introducing relevant data analyses to our users

By integrating new educational content, we expanded our focus beyond the data analyst toward the general public. The homepage needed to balance data-centricity with a citizen-first marketing posture. I designed a new homepage concept that marketed the new content (data analyses) while letting it coexist inline with the relevant datasets. I also added interactive snapshot cards showing prominent spending metrics and their historical changes over time.

Final & current homepage

I was no longer on the team by the time this new homepage launched, but I think you can clearly see the fingerprints of my homepage concept writ large here. Many of the core experiences I designed, including search, dataset details, API docs, and more, continue today in the state that I left them.
more case studies
Designing for impact
previous case study
Make on-call less awful
View case study
next case study
User adoption of K8s
View case study