Aperture.
The AI workspace inside an advisory product — Ask Aperture, threads, structured responses, faceted filters. The slice I owned on a 3-designer team.
Client
Strategy advisory · designed at Coditas
Role
AI Workflow Designer · 3-designer team
Surface
Web · B2B SaaS · AI Research Workspace
Process
- Founder Brief
- AI Interaction Design
- Faceted Filtering
- Eng Handoff

The arc · Point A → Point B
A — Where it started
Three designers, a 2-week sprint, and a founder-defined product. The dashboard had a teammate; the AI surface didn’t have a designer yet.
B — Where it landed
An AI workspace where the unit is the thread, the response is a document not a chat bubble, and a faceted filter (Category × Source) constrains the model before every prompt.
What it moved
The numbers, in plain sight.
2 weeks
Sprint length
3
Designers on team
6+
AI surfaces I owned
3
Patterns defined
The story · 4 chapters
How Aperture went from point A to point B.
Brief
Three designers, two weeks, one product.
❝An early-stage strategy advisory team wanted to ship a workspace that does what a senior advisor does — pricing, growth, due diligence — and stays with the customer as a system, not a deck. The founders had a clear product direction. The design team’s job was to translate that direction into a working, clickable product in 2 weeks.
We were a team of three. The lead designer held client direction and product narrative. A teammate owned the dashboard flows — Company / Segment / Sector / Country filtering, the chart canvas, the news surface, saved dashboards. I owned the AI workflow — Ask Aperture: how it behaves, what an answer looks like, how the user constrains it.
This case study covers my slice.
Pull-out · Brief
1 slice
The AI surface, inside a larger 3-designer build
What I owned
Threads as the unit, not messages.
❝Every AI product I’d touched in 2024 had the same failure mode — the model talks too much, the user can’t tell where the answer ends, and the conversation vanishes the moment the tab closes. The lead’s brief to me was direct: don’t ship a chatbot.
I made the thread the unit, not the message. Threads can be named (‘Beverages Market Size’, ‘EV Sales Outlook’), renamed, deleted, and survive sessions. Each thread inherits whichever lens (Company / Segment / Sector / Country) the user is in when they open it — so prompts run pre-constrained. An expand/collapse pattern lets the AI compress to a side-rail when the chart is the conversation, and fill the canvas when the AI is.
Small system, but the bet is structural: an advisor returning on day 4 should find their thread alive and re-runnable, not gone.
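The thread model above can be sketched in a few lines. This is an illustrative sketch only, not Aperture's actual schema; the field and method names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    """A named, durable conversation that inherits the user's active lens."""
    name: str                       # user-editable, e.g. "EV Sales Outlook"
    lens: str                       # inherited at creation: Company / Segment / Sector / Country
    prompts: list = field(default_factory=list)

    def rename(self, new_name: str) -> None:
        self.name = new_name

    def ask(self, question: str) -> str:
        # Every prompt runs pre-constrained by the thread's lens.
        prompt = f"[lens={self.lens}] {question}"
        self.prompts.append(prompt)
        return prompt

# A thread persists across sessions: a day-4 return finds it alive and re-runnable.
t = Thread(name="EV Sales Outlook", lens="Sector")
t.ask("Size the EV market in Europe")
```

The structural point is in the dataclass itself: the lens is captured once, at thread creation, so every later prompt in that thread is constrained without the user re-stating context.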
Pull-out · What I owned
Thread = unit
Named, durable, lens-aware

Ask Aperture, expanded — thread list on the left, main conversation on the right. The lens the user is in flows into the prompt.
What I owned
Constraining the model is the design.
❝The single hardest interaction wasn’t the AI response — it was the AI filter. Aperture has access to a sea of data: filings, news, analyst reports, internal IP. If the model is free to pull from all of it on every prompt, the answer drifts and the advisor stops trusting it.
I designed a faceted filter that runs inside the conversation, not as a separate settings page. The user picks a Category (Company / Segment / Sector / Country) and a Source (Filings / News / Analyst / Internal IP). The model is constrained to the intersection. The same question with different filters returns different answers, and the user sees exactly which subset the model read.
The response itself isn’t a chat bubble — it’s a document. Summary, comparison chart, detailed table, numbered references. Every claim clickable to a source. An advisor who can’t audit the answer won’t trust the answer.
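The Category × Source constraint amounts to filtering the corpus before the model reads anything. A minimal sketch, assuming a flat document list with illustrative fields (not Aperture's real data model):

```python
def constrain(corpus, category, source):
    """Return only the documents at the Category x Source intersection.

    The model is prompted over this subset, and the numbered reference
    list the user sees is exactly this subset, so the audit trail falls
    out of the constraint itself.
    """
    return [d for d in corpus if d["category"] == category and d["source"] == source]

corpus = [
    {"id": 1, "category": "Company", "source": "Filings"},
    {"id": 2, "category": "Company", "source": "News"},
    {"id": 3, "category": "Sector",  "source": "Filings"},
]

# Same corpus, different filters, different (auditable) subsets.
subset = constrain(corpus, "Company", "Filings")
```

Narrowing the filter narrows the subset, which is why the same question with different filters returns a different answer and a smaller reference list.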
Pull-out · What I owned
Category × Source
Faceted picker inside the conversation, not in settings

The AI filter — Category × Source. The user constrains the model; the model doesn’t constrain the user.

Filter-driven response — narrower subset, narrower answer, smaller reference list. The advisor sees what the model is and isn’t reading.

Full structured response — summary, two donut comparisons, detailed table, numbered references. The AI returns a document.
Reflection
What 2 weeks earned, and what I’d do with more.
❝Two weeks is enough to ship patterns. It’s not enough to test them. The 3 patterns I’m proud of — threads as unit, documents not chat, faceted filter inside the conversation — survived the sprint and made it into investor demos. The product the founders walked into pitch meetings with is the product we built.
What I’d do with more time: usability test the faceted filter with real advisors (we sanity-checked it with the founders, not end users). Tighten the empty-state copy. Add a thread-versioning model so the same conversation can fork into ‘what if the source was filings only?’ without losing the original. The pattern is there; the lifecycle isn’t.
Honest about the constraint, proud of the slice.
03. Threads as the unit. Documents not chat. A faceted filter that constrains the model. The AI slice of an advisory product, shipped in 2 weeks.
Want this kind of work on your team?