A set of defaults for design teams moving from solo to collaborative delivery
Type: Design Operations · Last Updated: March 2026 · Intended for: Design teams scaling from solo to collaborative
When I was the only designer on a product, the process lived entirely in my head. That was fine — until I started preparing to bring another designer onto the team. I realized I was about to hand someone work I couldn't fully explain. Not because it was complicated, but because I'd never had to externalize it.
The decisions, the handoff expectations, the file conventions, the reason we wrote microcopy the way we did: all of it existed as instinct. When the person holding that instinct is unavailable, or leaves, or is just slammed, the team loses access to it. That's what this is about.
What I built is less a process and more a shared reference. Strong defaults the team can reach for when they need them, not steps they're required to follow. The goal was consistency without ceremony.
How the Templates Connect
Template 1: User Research Insights
Documents pain points, user needs, and design implications from discovery sessions, translated into actionable opportunities rather than raw observations.

↓ needs inform scope

Template 2: Design Brief
Scopes the design problem: what to solve, what success looks like, what is fixed versus flexible, and every reference a designer needs before opening Figma.

↓ scope shapes the design

Template 3: Design Review Checklist
Evaluates completed design work across six areas: brief alignment, information architecture, visual quality, Figma hygiene, accessibility, and edge cases.

↓ design is ready to build

Template 4: Design to Developer Handoff
Captures everything engineering needs to build accurately, from user flows and conditional logic to validation rules, edge cases, and accessibility requirements.
Before I could bring anyone else in, I needed to document not just what we did, but why. This piece covers the differences between requirements-driven development and human-centered design, why information architecture must come before visual design, and what goes wrong when the data model drives the UI. It's an onboarding document. It's also something I'd pull out when there was pressure to skip discovery and jump straight to wireframes. A shared frame of reference instead of a recurring debate.
Discovery sessions are only as useful as what you do with them afterward. This template captures research in a format that's actually usable downstream: the topic area, the core problems observed, what users need and why, and the design implications translated into specific opportunities. Short enough to get filled out. Specific enough that it means something six months later. It links directly to the source notes, so the connection between what was heard and what was built is traceable rather than assumed.
A template only solves half the problem. The other half is knowing where your research lives and being able to find it. The repository uses a simple two-folder structure: raw session notes in one place, synthesized insights in another. Each insight links back to its source. It's not a system that requires maintenance to stay useful. It just needs to be used consistently enough that research accumulates over time instead of evaporating between projects.
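A convention like this is also easy to check mechanically. As a sketch (the folder names `insights/` and `raw-notes/`, the markdown format, and the relative-link style are all assumptions for illustration, not details from the original repository), a small script could flag any synthesized insight that no longer traces back to an existing session note:

```python
import re
from pathlib import Path

def unlinked_insights(repo_root: str) -> list[str]:
    """Return insight files that don't link back to an existing raw note.

    Assumes the two-folder layout described above: synthesized insights
    in `insights/`, raw session notes in `raw-notes/`, both markdown,
    with links written as relative paths like `../raw-notes/session.md`.
    All of these names are hypothetical.
    """
    root = Path(repo_root)
    link_pattern = re.compile(r"\.\./raw-notes/[\w\-./]+\.md")
    missing = []
    for insight in sorted((root / "insights").glob("*.md")):
        text = insight.read_text(encoding="utf-8")
        links = link_pattern.findall(text)
        # An insight counts as traceable only if at least one linked
        # source note actually exists on disk.
        if not any((root / "insights" / link).resolve().exists() for link in links):
            missing.append(insight.name)
    return missing
```

Run occasionally, something like this keeps "each insight links back to its source" a property of the repository rather than a hope.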
This exists so a designer never has to reverse-engineer a decision or spend a week going in the wrong direction. It gives them everything upfront: the problem, what success looks like, what's fixed versus open for exploration, and all the references they need before opening Figma. The Boundaries section is the most important part. Constraints that exist for non-obvious reasons, whether technical, organizational, or domain-specific, have to be written down or they get designed around by accident.
Before this existed, reviews were inconsistent. Things surfaced in QA or after a developer had already built them because the review was more of a quick check than a real evaluation. This checklist makes reviews predictable without making them heavy. Six areas: brief alignment, information architecture and logic, visual and UI quality, Figma file hygiene, accessibility, and edge cases. The intent isn't that every item gets checked on every ticket, but rather that anyone on the team can run a review against the same standard.
Before this existed, engineers got a Figma link, a Jira ticket, and usually a conversation. This template replaced the entire story-writing process. It captures everything needed to build accurately: the user flow, conditional logic, validation rules, edge cases, and accessibility requirements. It also includes a lightweight section for logging questions that come up during build, so the answers live in the ticket rather than a Slack thread. The goal isn't a perfectly filled-out template every time. The goal is that the information exists somewhere that is findable.
The templates don't work alone. Six additional systems support how the team operates, communicates, and scales.
When you're building a team, you don't always get to choose who arrives or at what level. This playbook gave me a way to calibrate my involvement regardless. Five autonomy levels are mapped across four roles, from directed execution to full strategic ownership. When someone joined the team, I could look at where they were and know immediately what I needed to provide: how much direction, how much review, how much I could hand off and trust. It made my involvement intentional instead of reactive. And it gave new team members a clear picture of where they were starting and where they could go.
Figma files are a form of communication. A well-organized file tells the next person where things are, what's approved, what's in progress, and how everything connects to the rest of the team. These standards cover naming, structure, and versioning across three file types: the design system library, exploration files, and final designs. The frame naming convention had the biggest downstream impact. When every frame follows a consistent pattern, a developer or QA engineer can find exactly what they need without asking. Across dozens of flows and hundreds of screens, that adds up.
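One way to see why a strict pattern pays off: a consistent convention is checkable, so drift gets caught before a developer goes hunting. The `Flow / Screen / State` pattern below is invented for this example; the article doesn't publish the team's actual convention:

```python
import re

# Hypothetical convention: "Flow / Screen / State", e.g.
# "Onboarding / Email Entry / Error". Illustration only; the real
# naming pattern would be whatever the team's standards document says.
FRAME_NAME = re.compile(
    r"^(?P<flow>[A-Z][\w ]*) / (?P<screen>[A-Z][\w ]*) / (?P<state>[A-Z][\w ]*)$"
)

def check_frame_names(names):
    """Split frame names into (valid, invalid) against the convention."""
    valid, invalid = [], []
    for name in names:
        (valid if FRAME_NAME.match(name) else invalid).append(name)
    return valid, invalid
```

The point isn't the regex; it's that a pattern precise enough to validate is also precise enough for a QA engineer to predict.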
How a product talks to its users is a design decision, not an afterthought. These guidelines cover voice, tone, and writing standards for all component types: form labels, error messages, success states, button labels, and empty states, with examples of what to do and what to avoid. Writing at a 7th-grade reading level and using judgment-free language isn't about simplifying things. It's about removing friction that never needed to be there. The principles are the same regardless of what your product does or who your users are.
Every team accumulates it. A state that never got designed. A pattern that made sense at the time and doesn't anymore. A component that works but contradicts itself across three screens. Design debt isn't a failure; it's evidence that the product is moving. The problem isn't accumulation. It's when nobody has a way to see it, name it, or decide what to do about it.
This process is deliberately light. It's a shared place to log inconsistencies when they're spotted, a rough method to categorize them by impact, and a periodic check-in to decide what's worth fixing now versus what can wait. No audit cycles, no mandatory reviews. Just a running list that makes invisible debt visible so the team can make intentional choices about it rather than stumbling across it during QA.
Most communication overhead comes not from having too much to say, but from not having a clear habit for when and how to say it. A decision gets made and is never written down. A scope change happens, and two people find out late.
This guide covers four situations where design work consistently needs communication: status updates, decision logs, scope changes, and meeting summaries. Each has a format and a home. Short, async where possible, posted where the work lives. The goal is to make communication a lightweight habit rather than a production.
This is the piece that closes the loop. Every other document in this system is about how the team works. This one asks whether it's working.
The metrics are intentionally simple. How often does work make it to handoff without a major revision? How long does onboarding take before someone is contributing independently? How many questions come back from development after a handoff? These aren't numbers to report upward. They're signals for the team, a way to notice when something in the process is creating friction and decide whether to adjust.
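Two of these signals fall out of simple per-ticket counts. A minimal sketch, assuming a hypothetical `Ticket` record; the field names are illustrative, since the article names the signals but not a data model:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    """One piece of design work. Field names are assumptions for
    this example, not part of the framework itself."""
    major_revisions: int         # major revisions before handoff
    post_handoff_questions: int  # questions from engineering after handoff

def process_signals(tickets: list[Ticket]) -> dict[str, float]:
    """Compute two of the team-level signals described above."""
    n = len(tickets)
    clean = sum(1 for t in tickets if t.major_revisions == 0)
    questions = sum(t.post_handoff_questions for t in tickets)
    return {
        # Share of work reaching handoff without a major revision.
        "clean_handoff_rate": clean / n,
        # Average follow-up questions from development per handoff.
        "questions_per_handoff": questions / n,
    }
```

Onboarding time doesn't reduce to a per-ticket count, which is part of why these stay signals for the team rather than numbers to report upward.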
This is also what makes the framework legible as a product. Taken together, the templates and reference documents form a living PRD for the design practice itself: a defined problem, a scoped solution, success criteria, and a feedback loop for iteration. The framework isn't finished when the documents are written. It's finished when the team is using it and improving it. That's the same bar we hold for anything else we ship.
None of this is meant to be followed to the letter. The value is in having defaults the team shares, so when someone new joins, or takes over a feature, or needs to review someone else's work, there's a common language to reach for. The framework grows as the team does. That's the point.