MISSIVIO

01

An AI-powered tool helping non-marketing-savvy business owners run email campaigns without needing a dedicated marketing team. Designed research-first, from zero to MVP.

Role: Solo designer
Year: 2026
Project: MVP, SaaS
Missivio dashboard UI showing AI-led email campaign builder
THE PROBLEM

Small business owners shouldn't need a marketing degree.

Email marketing tools assume expertise most small business owners don't have. Without guidance on what to send, who to send it to, or why it matters, people don't just struggle; they stop.


A competitive analysis of six platforms confirmed the pattern across the market: every tool placed the burden of strategy on the user. Templates help with execution, but only if you already know what you're trying to achieve.

My goal was to design a tool that didn't require users to already be email marketers. Missivio was born to answer the unasked question: what should I actually be doing?

Missivio gallery full width

A muted violet palette, distinct from blue/teal competitors, designed to reduce anxiety around marketing decisions.

Qualitative research

Qualitative research was used to identify the core need.

User flows

User flows for Onboarding and Campaign Creation.

RESEARCH

Five interviews with small business owners surfaced one consistent theme: paralysis. Not feature confusion, but a more fundamental uncertainty about where to start. Two user archetypes emerged:



  • Delegation thinkers: owners who didn't want to learn email marketing. They wanted the decisions made for them so they could focus on running their business.
  • Confidence-seekers: operators who wouldn't follow a tool blindly, but didn't have time to become experts either. They needed to understand the logic behind each action, not just execute it.


Both needed the same thing: confidence. Not just the ability to complete a task, but the ability to explain why they'd done it. That became my definition of success throughout.

CONSTRAINTS

The AI proposes, never acts.

Every output requires explicit user approval before anything goes live. Scope was defined early to keep the experience focused:



  • Desktop-first: research showed that the platform's primary demographic consistently prefers desktop for important business tasks. Mobile awareness was built in, with full optimization deferred to post-launch.
  • Single-user accounts only: no collaborative editing or approval workflows at launch
  • No A/B testing or dynamic email content: kept the first-use experience focused
  • Limited template customization: a guardrail against overwhelming users before they'd built confidence with the basics
  • Token efficiency: AI interactions were designed to be concise and purposeful, not conversationally deep

Every constraint above served one shift: the AI speaks first. A traditional dashboard that asked "What would you like to do?" was built, tested, and failed immediately. For someone who doesn't know what they should be doing, that's paralysis with better UI.

Iteration of right side panel

The evolution of the right-side panel: from feeling dismissive, to overcrowded, to the right balance.

USABILITY TESTING

Round 1, mid-fidelity

Users didn't trust their own actions. The right editor panel read as dismissive, and users repeatedly published a live sequence by mistake. The AI's reasoning was invisible. The chat input (the product's main differentiator) was ignored entirely in favor of familiar card patterns.

Round 2, high-fidelity

The structural problems were solved. A new one emerged: the AI reasoning panels were too text-dense to read. The platform's core value proposition was present but invisible. Users who did engage with it responded strongly: the content was right, the format was wrong.

The fix wasn't more content: it was less, better formatted. Dense reasoning panels were broken into scannable steps: recommendation first, rationale second, optional depth third. The information didn't change. The cognitive load did.

FINAL DESIGN

Solving for visibility

Round 2 taught me that the right content shown in the wrong format is indistinguishable from the wrong content. Every final iteration was about making the AI's reasoning accessible.

The defining iteration

The mid-fidelity dashboard was a traditional overview with AI features inside it. That structure implicitly told users: "You're in charge. What would you like to do?"


For someone who doesn't know what they should be doing, that's paralysis with better UI.


I rebuilt the dashboard around an AI-led conversational handoff. The AI speaks first. The user responds. Every screen that follows has context because the AI already established purpose.


The measure of success wasn't task completion. It was whether a user could, unprompted, explain why they'd just done something. In the post-test debriefs, 5 out of 5 users could. That was the benchmark set in research, and the final design met it.

Initial concept of onboarding flow, mid-fidelity

Initial concept of onboarding flow, mid-fidelity.

Final dashboard design

The final dashboard design centers the user/AI interaction.

LEARNINGS

Knowing the problem intimately is just a starting point

Missivio started as a response to something I watched happen repeatedly in my career: capable people made helpless by tools that assumed too much. But what this project taught me is that knowing the problem isn't enough.


Every assumption I brought from my professional experience still had to be tested, challenged, and sometimes discarded. The tension between professional instinct and what users actually showed me is what I'll carry into every project after this.