Volodymyr Merlenko
Feb 25, 2026

Designer's guide to AI-assisted development

Intro

Something changed

Recently, I noticed a shift in conversations happening on Reddit, YouTube, and other forums. Developers who had been skeptical or even defensive about AI started talking differently – they admitted that AI can write good code, do it quickly, and at a pretty low cost.

For a couple of years, even though everyone was already talking about AI, most developers weren't using these tools to their full potential. Many stuck with basic Copilot or just chatted with ChatGPT. But now that seems to be changing. Maybe AI finally reached a point where it writes reliable code, maybe developers gave the right tools like Claude a real chance – probably both.

For product designers, this shift matters too. I believe code is just a tool, but the bigger question is what to create with it. And that's where your experience as someone who knows how to build products from users' perspective becomes valuable.

AI is fun

It can help you brainstorm and think through problems, but it can't make good decisions on its own. Sometimes it appears to, but that just means it guessed well. It still takes a human to read the outputs carefully, give feedback, and think creatively throughout the process.

If you're anything like me, you've probably dreamed about building your own product someday. Not just designing static screens for someone else's vision, but creating something that's truly yours. Designers always wanted to change the world around them, but usually got stuck on typical B2B projects in outsourcing companies, whose main goal has always been to make more and more money.

We can finally change that – AI-assisted development might be the way to make our dreams come true. However, I won't pretend it's quick or easy. You can't build a good, complex app in days or even weeks, no matter what the ads tell you. The app I'll be using as an example throughout this article took me about half a year of work in my free time.

Meddy logo animation

What I'm definitely sure about is that the learning curve for building products from 0 to 1 is gentler than it has ever been. It's also a really fun process.

Kind reminder for new readers

This article is the third and last in a series.

Before diving into development, it helps to have a clear vision for your product: problem statement, target audience, MVP requirements, information architecture – all the thinking that happens before any building. If you haven't done this work yet, or want to see how I approached it, the first article walks through that entire process based on my previous EdTech project PDPro.

It's also helpful to understand some tech basics: what code editors do, what the backend and frontend are, how they're connected, and so on. My second article in the series covers everything a designer should know as well.

What you'll learn

Last year I created 40+ prototypes, 3 pet projects, and made 713 contributions on GitHub. Essentially, I moved away from static mockups in Figma to interactive prototypes and MVPs. I even started doing basic frontend tasks at my real job. Also, I gave around 20 mentorship sessions, held 6 online lectures, and started a Telegram community about AI and design that now has over 1000 members.

One of the products I worked on during this period was Meddy, which I'll use as the running example throughout the article. It's a health management app – basically one place for all your medical records (lab results, prescriptions, drug reminders, etc.). We'll cover how to:

  • Turn your idea into a rich context for AI
  • Finally, start using Cursor and Claude Code
  • Set up a development workflow with AI agents
  • Manage context so AI doesn't forget details
  • Recover from mistakes you'll definitely make

Also, we'll talk about MCPs, Xcode, GitHub, Vercel, when (and when not) to use Figma, and much more. By the end, you'll have examples, prompts, and a clear understanding of how it all works in practice. You don't have to be a developer, but you'll need patience, attention to detail, and a willingness to iterate when things don't work the first time.

The foundation before development

It's okay to spend more time on preparation for development than on development itself. Otherwise the result is going to be basic and buggy. For example, I spent more than half of my time working on Meddy doing ideation, prototyping, research and design. By the way, I used Figma the least among all the tools.

This section covers my workflow on such pre-development activities.

Claude Projects

Claude is an AI assistant made by Anthropic. You can use it in your browser or download a dedicated desktop app. It works like ChatGPT – type a message, get a response, continue your conversation. But there's also "Projects", a specific feature that allows you to create a space where you put files and rules that the AI then has access to in every future chat.

You don't need any files ready before creating such a project. With Meddy, I started with nothing but a few-sentence idea: being able to check your medical data anytime you need it. I used the first chat in a completely blank project to help me set it up. There, Claude and I worked together to write:

  • Project instructions – the rules that would guide every future chat
  • Status tracker – to keep progress through all the planned activities
  • Assumptions document – to track what I believe to be true and update it as I learn more
  • Problem statement document – the first design deliverable, which defined what problems I am trying to solve with Meddy

I refined each document Claude created for me dozens of times until it was good to go. After that, these documents became the foundation for everything else. So, though it might sound weird, you can and should use Claude to set up Claude.

One activity = one chat

Even with good project context, long conversations become a problem. Claude allows about 200,000 tokens per chat – roughly 140,000 words. That limit includes your messages, AI responses, attached files, and documents that AI creates. When you approach this limit, Claude starts losing context from earlier parts of the conversation. It seems likely that Anthropic will raise it to around 1 million tokens in the near future, but even then, long conversations will run into the same constraint at some point.

Claude also has a "compact chat" feature – when you hit 200k tokens, the AI automatically compresses the earlier conversation and continues. This compaction is pretty smart: it summarizes older messages more heavily and keeps recent ones mostly intact. But you still lose some details, so I prefer to avoid hitting the limit altogether.
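If you want a rough sanity check before pasting a long document into a chat, you can estimate its token count from its word count. The 4-tokens-per-3-words ratio below is just a common rule of thumb (actual tokenization varies by model), and the filename is an example:

```shell
# Create a sample document (stand-in for one of your project files).
printf 'Problem statement: users lose track of their medical records.\n' > notes.md

words=$(wc -w < notes.md)     # count words in the file
tokens=$(( words * 4 / 3 ))   # rule of thumb: ~4 tokens per 3 words
echo "~${tokens} tokens"
```

Run this over your project documents before attaching them and you'll know roughly how much of the context window they will eat.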

Ideally, one activity should equal one chat. When you finish (e.g., defining a problem statement), save the deliverable to the project's knowledge by clicking a dedicated "Copy to project" button. This must be done manually, because Claude can't save files on its own – even if you ask it to and it claims it did, don't believe it. Then start a fresh chat for the next activity. AI will access all your previous documents through its knowledge, while having a clean context window.

Don't just trust AI

I set up a clear hierarchy for making decisions at the very beginning:

  1. My own experience – highest weight, least likely to be wrong
  2. Project documentation – what I've already decided and written down
  3. AI suggestions – lowest weight, most likely to contain mistakes

Claude confidently fills in gaps with made-up information if you give it room to do so. Here's an example: during one chat about the business model, I had a hypothesis that quarterly payments might work better than monthly subscriptions. Claude analyzed some research we had gathered and confidently stated that "research proves people accept quarterly payments". But our findings only proved that people don't like monthly subscriptions in similar apps – they said nothing about whether people accept quarterly ones.

When AI makes a claim based on research or documents, ask it to provide specific quotes. Then open those sources and use Cmd+F to search for the exact text. If you can't find anything, Claude probably invented it.
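The same check works from a terminal, which is handy once your sources are markdown files. The sketch below uses a hypothetical research-findings.md and a made-up claim; grep either finds the exact phrase or flags it as missing:

```shell
# A stand-in research file (hypothetical name and content).
printf 'Respondents said they dislike monthly subscriptions in similar apps.\n' > research-findings.md

# Search for the exact phrase the AI attributed to the research:
if grep -qn "accept quarterly payments" research-findings.md; then
  echo "Quote found"
else
  echo "Quote not found - likely invented"
fi
```

Here the search fails, which is exactly the signal you want: the "proof" Claude cited doesn't exist in the source.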

Rules you can adapt

The multi-chat workflow works best when every conversation follows the same rules. I put these in the project instructions so they apply to every chat automatically:

  • Project idea – a simple description of what your product is
  • Decision hierarchy – whose input matters most when there's disagreement
  • Multi-chat workflow – each activity gets its own chat, AI always checks the status tracker first
  • Document standards – all documents in markdown, written simply, with executive summaries at the beginning
  • Process for each step – AI shares brief thoughts first, asks strategic questions, works through problems together, creates documentation only after it understands the task well enough
  • Stay in scope – Claude never jumps ahead, it's always focused on one step only

Here's a pretty detailed prompt that will generate such rules and other foundational documents for your Claude Project. Just copy and send it to the first chat:

# New Claude Project Setup Prompt

**I want to set up a systematic, multi-chat project workflow for a new project I'm starting. I need you to help me create the foundational documents and approach that will ensure consistent, high-quality work across multiple chat sessions.**

**Here's what I need you to understand about my working style and requirements:**

**MY WORKING APPROACH:**

- I work through complex projects using **one dedicated chat per step** to avoid context limitations and hallucinations

- Each chat focuses on completing exactly one step with its deliverable before moving to the next

- I want a systematic approach with clear global rules, project tracking, and documentation standards

- I have full decision authority for all project choices

- My experience and thoughts should outweigh web research when conflicts arise

- You should share your initial thoughts about that step's deliverable first (brief, based on project knowledge), then ask me 7-10 strategic questions per step

- Each step must stay focused on its specific scope - never jump to solutions during discovery phases, never discuss positioning during discovery, never ask about implementation during early phases

**MULTI-CHAT WORKFLOW DETAILS:**

- **Why one step per chat**: Prevents context degradation, reduces hallucinations, maintains laser focus on current step objectives

- **Between chats**: I upload deliverables to project knowledge so next chat has full context

- **Starting new chats**: You will provide me a brief prompt for each next chat that includes: instruction to check status tracker, follow complete global rules, review project knowledge, then start with your assumptions + 7-10 questions for that specific step

- **Step completion**: Only when we have a final deliverable and I explicitly say the step is complete

- **Step scope discipline**: NEVER ask about moving to the next step - stay in the current step until the final deliverable is ready. CAN revisit and update previous step deliverables based on new knowledge, but NEVER work on future steps

**WHAT I WANT YOU TO CREATE:**

1. **Global Rules Document** - Complete workflow rules that will guide every chat session

2. **Project Status Tracker** - Phase and step breakdown appropriate for my project type

3. **Project Assumptions Document** - Track assumptions throughout the project

**DOCUMENT REQUIREMENTS:**

- All documents in markdown format

- Simple, natural language (8th grade reading level, no jargon)

- Executive summaries with specific key points, not vague descriptions

- Detailed content that can be used as context for other AI tools

- Exact artifact naming consistency across all chats (no variations ever)

**PROCESS REQUIREMENTS:**

- Always check project status tracker first in each new chat

- Always follow global rules completely - do not ignore any parts

- Update project assumptions document at each step when new assumptions emerge

- When step is complete, provide brief prompt for next chat that includes: check tracker, follow global rules, review knowledge, start with your initial thoughts about that step's deliverable + 7-10 questions

- Decision hierarchy: My experience (highest weight) > Project knowledge > Your thoughts (lowest weight, most likely to hallucinate)

**SUGGESTED STEP SEQUENCE** (adapt as needed for my project type):

**Phase 1: Discovery & Foundation**

1. Problem Statement → Problem Statement Document

2. Target Audience → Target Audience Document

3. Assumptions → Project Assumptions Document

4. Deep Web Research → Research Findings Document

**Phase 2: Strategy & Positioning**

5. 6P Story (method by Growth.Design) → 6P Story Document

6. StoryBrand → StoryBrand Document

7. Business Plan → Business Plan Document

**Phase 3: Product Planning**

8. MVP Requirements → MVP Requirements Document

9. Information Architecture → Information Architecture Document

**Phase 4: Design & Validation**

10. MVP Prototyping → MVP Prototype

11. Psychology Improvements → Improved Prototype (user independently)

12. Design References (user independently)

13. Figma Designs → Hi-fi Designs (user independently)

14. Unmoderated User Testing → Improved Prototype (user independently)

**Phase 5: Implementation**

15. Implementation Guide → Implementation Guide Document

16. Development & Deployment (user independently)

**Now, to customize this approach for my specific project, please ask me strategic questions about:**

1. **Project Type & Context** - What kind of project, industry, scope, timeline expectations

2. **My Role & Authority** - My background, decision-making authority, team involvement

3. **Project Goals & Deliverables** - What success looks like, key outputs needed

4. **Client/Stakeholder Context** - Who I'm working with/for, their expectations

5. **Research & Validation Needs** - What research approaches make sense for this project

6. **Unique Project Requirements** - Any special considerations, constraints, or opportunities

7. **Step Sequence Preferences** - Which suggested steps fit my project, what should be modified/added/removed

**After understanding my project context, please:**

1. Create the three foundational documents customized for my specific project

2. Begin the first step of the workflow in this same chat

3. Follow the systematic approach you've established throughout

**Ask me 7-10 strategic questions now to gather the context needed to set up this systematic workflow for my project.**

What do you really need to start coding?

The pre-development activities I went through are based on my preferred process as a product designer. Yours might look different depending on your experience, but the list below should give you some ideas on what to include:

  • Problem statement – defined the core problems Meddy solves
  • Target audience – user personas of who I'm building for
  • Project assumptions – tracked what I believed to be true
  • Research findings – validation from academic papers, competitor analysis, and Reddit
  • StoryBrand – clear product messaging so people understand why they should care about Meddy
  • Business plan – how it could make money over time
  • MVP requirements – prioritized list of features to build first
  • Information architecture – structure of screens, navigation, and user flows in the app

For Meddy, it all served two purposes: first, these activities helped me understand what I was building from every angle. Second, they became a solid foundation for future development. If you want to learn about them in more detail, check out my first article from the series.

You should use Figma way less

Many designers already use AI for prototyping, and starting in Figma doesn't really make sense unless you're working with an uncommon design idea or a non-typical layout. For regular web and mobile apps with standard patterns, you can do it mostly with tools like Claude. And while some say AI makes bad-looking UIs, that's usually because of low-quality prompts. With proper context and the right design process beforehand, you'll get great results.

For those who are still not convinced, there's another practical reason – when you spend days in Figma to design your first iteration of a product, you risk creating a UI that looks great but is technically painful to implement. On the other hand, if AI generates components and provides their code, it means the development will be relatively easy.

AI prototypes for quick tests

Claude Projects can generate simple web prototypes – perfect for quickly testing your ideas. As I described in the beginning, I worked through many project steps before building anything. By step 10, I already had MVP Requirements (what features to build) and an Information Architecture document (navigation, structure and user flow details). When I asked Claude to turn it all into a fully interactive prototype, it worked perfectly.

However, I didn't tell AI to build it immediately – first I asked Claude to share its thoughts, ask me clarifying questions, create a detailed development plan, and wait for my specific approval. This process took about 90 iterations, because I kept refining what I wanted, and each round of feedback got the prototype closer to the idea I had in my head.

Claude also provides a direct share link for any prototype you create there. So when mine was ready for testing, I copied that link and pasted it into Useberry – a tool for unmoderated user tests. I even created a script for it with Claude's help.

Even if you've never used Useberry before, you can simply ask Claude to help you figure out where to click, how to set things up, what settings to pick. If it tells you to open a page or click something you can't find – that's fine, AI models have knowledge up to a certain date and apps like Useberry change their interfaces from time to time. Just screenshot whatever you're looking at and share it with Claude. This way you get personalized guidance that's specific to your project.

Always analyze results yourself

Don't just share the testing results with Claude and ask for analysis – you'll miss important details and AI will definitely make up things that aren't there.

For example, I started by watching every user test recording first, taking my own notes and forming my own conclusions. I didn't share them with AI right away. Instead, I asked Claude to analyze the raw results without knowing my thoughts – this way it wasn't biased by my interpretation. Only after that I uploaded my analysis and asked Claude to identify what might have been missed. Finally, I combined the best results from both versions.

In my case, such tests caught a positioning problem – most respondents said they could do the same thing by having a separate chat for medical stuff in ChatGPT. They didn't see enough value in Meddy. However, finding this during research meant I could adjust direction before writing any code.

First I had 6 designs, then 18

Once your product is validated through testing, you can move to a high-fidelity UI. But you don't need to design every screen beforehand. With AI-assisted development, Figma becomes a tool for visual direction – not for documenting every possible state.

Before opening Figma, I spent time on Mobbin looking at healthcare apps to get a sense of common patterns. Nothing complex, just collected references so I wasn't starting from a blank file.

I created 6 screens: one onboarding step, a bottom sheet with actions, voice mode, chat mode, homepage, and a medical record detail page. These were building blocks – screens that set up the visual style Claude could analyze and reuse for other parts of the app.

During development, I gave Claude Code (*Claude Projects ≠ Claude Code) these frames along with my requirements and architecture documents. AI then developed other screens to match my established UI. For example, there was only one onboarding page in Figma, but the app had many more steps defined in the architecture – Claude Code created the rest of them.

After reviewing the results, some screens looked good, while others had UI issues. The solution was to design corrections only for the frames where AI made mistakes, share them via Figma MCP, and let Claude adjust the code based on these new references. By the end, I had 18 screens in my Figma file, but I didn't design all of them upfront.

Components and tokens aren't important

There's no need to name layers properly because Figma now has an AI feature that does it automatically. Additionally, I didn't create any components, color tokens, or a separate design system. I literally had a single-page Figma file for everything.

I'm not saying these things don't matter. You absolutely need a design system – I'm just saying it doesn't always have to live in Figma, because AI now handles it really well in code. What I learned is that you can give Claude Code a design frame via the Figma MCP, develop the first iteration, and then ask it to refactor the code – clean it up, split big files into smaller ones, create reusable components, color tokens, and so on.

What I did focus on was auto-layouts. Claude needs them to understand how to make your code responsive. If you skip auto-layouts and use absolute positioning, AI won't know how elements should behave when screen sizes change.

I'll explain how the Figma MCP works in a later section – the point here is that these tools change what you need to prepare for development, and it's much less than what we've been taught as designers.

Setting up your dev environment

Before you generate your first line of code, you need three things: the right tools, rules that tell AI how to behave, and ideally – prompts to initialize your new coding project. However, there are other pro tips I'll share as well. This section covers the essentials, plus what happens when your tech stack doesn't work out as you planned.

If you've never coded before, you might not know what an IDE is. It stands for Integrated Development Environment – basically a specialized text editor designed for writing, running, and previewing code. Think of how Figma is specialized for design work; an IDE is the same, but for developers.

Meddy, the product I've been working on, is an iOS application, and if you plan to create a native mobile app yourself – you'll need tools such as Cursor, Claude Code and Xcode. Let's explore each one.

Cursor

Cursor is a desktop application, an IDE built specifically for AI-assisted coding. This is very different from web-based prototyping tools like Lovable, because Cursor runs on your computer and has way fewer limitations. You can develop anything: mobile apps, games, Chrome extensions, even Figma plugins, not just web apps or landing pages. Also, Cursor can connect to external tools through Model Context Protocol (MCP). For example, it means Cursor can read your Figma designs (not screenshots), fetch technical documentation from the web, and much more. Basically, you write instructions in natural language, Cursor generates the code, and you iterate from there.

If you're on a low budget, I suggest using the free version. It gives you the main IDE features you need – visualizing your code, making small manual changes, GitHub integration, version history, etc. I don't think it's wise to pay for Cursor's paid plan right now.

About a year ago, they charged by requests – you got 500 requests for $20/month, where a single request was usually one prompt (powerful AI models might have cost 2-4 requests). It was pretty clear how much you could send before hitting the monthly limit. But now, when you pay $20, you literally get $20 worth of tokens. If your request is large, it costs much more than a simple prompt (e.g., 50 cents or more). In my experience, those $20 on the cheapest paid plan disappear very quickly.

There are already many alternatives for different needs and preferences – e.g., Antigravity, Windsurf, or Kiro. However, I like the team behind Cursor. They're young, smart, ship updates frequently, and they look like people who genuinely love their product. It's also a matter of taste – if you prefer a more visual AI experience with a polished interface, pay for Cursor. Additionally, they sometimes offer temporarily free AI models that are less powerful but work fine with simple tasks.

For complex development at a reasonable price, you need the next tool.

Claude Code

Claude Code is Anthropic's AI coding tool that runs in your terminal. If you don't know what a terminal is, or you've heard of it but feel too scared to open one, check out my previous article where I explain it in simple terms. It also costs $20/month on the cheapest plan, but gives you far more generous usage limits. As I write this, there's a 5-hour rate limit: you get a certain amount of tokens to use every 5 hours, and then it refreshes. There's also an additional weekly limit, introduced a few months ago. Sometimes it's frustrating to get hit with such a long wait, but it's still a better option than Cursor's paid plan.

Here's something important to understand – AI models from all the biggest players (Google, Anthropic, OpenAI) are roughly the same in terms of intelligence. Some lean more toward conversational tasks, like ChatGPT, some toward visuals, like Gemini, and some toward coding or complex analysis, like Claude. But overall, they're pretty much the same. What matters more is the tool you use to interact with such AI models. For Claude, the best solution is using the one designed by the same people who created it. In this case, it means Claude Code, which gives you access to three different models: Haiku (lower performance, good for simple tasks), Sonnet (good performance, handles complex tasks), and Opus (the best performance at this moment). To sum up – people who built the model know best how to get the most out of it.

You may also hear about "AI tools" in the AI-assisted development context, but to avoid confusion with actual applications (which one could also call tools), let's use the word "capabilities". These are what make some AI products more powerful than others, even when they run the same underlying AI models. The most powerful capabilities include MCPs, global rules, slash commands, and agents. Nowadays there are also skills and plugins – basically, every few months a new capability gets released, so it's really difficult to cover everything. Fortunately, they aren't that complex to learn. I'll explain the most useful ones in a dedicated section, particularly the sub-agent pipeline I tested a lot while working on Meddy. For now, just know that when you run into these terms during development, they're ways to extend what AI can do beyond basic code generation.

What also makes Claude Code particularly powerful is that it runs in your terminal, which has access to everything on your computer. A terminal is a simple app, preinstalled on your device, that performs actions through text commands instead of clicks. And it can do them much faster, and in far greater numbers, than you could manually. With Claude running in the terminal, all these actions become things the AI can perform automatically.
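To make this concrete, here's the kind of batch action that takes a few lines in a terminal but dozens of clicks in Finder – and that Claude Code can run on your behalf. The folder names are hypothetical:

```shell
# Create a nested folder structure for project documents in one command.
mkdir -p meddy-docs/research meddy-docs/requirements meddy-docs/architecture

# Drop a placeholder README into every subfolder.
for dir in meddy-docs/*/; do
  echo "# $(basename "$dir")" > "${dir}README.md"
done

ls meddy-docs   # lists the three subfolders
```

Doing this by hand means creating three folders and three files one by one; in a terminal it's one repeatable snippet.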

Another way to use Claude Code, which I personally prefer, is to run it inside Cursor's terminal.

Xcode

Xcode is another IDE, this one from Apple. It's their official tool for building applications for iOS, macOS, and Apple's other platforms. Even if you do all your coding in Cursor, you still need Xcode to preview the result and test it on real devices (e.g., your iPhone). There's no way around this for iOS development, but it's free and not that complicated to learn.

If you're building just web-based products, you won't need Xcode at all, only Cursor and Claude Code.

Sometimes you have to start over

I began building Meddy with Expo – a framework that lets you write code once and release it to both iOS and Android. If you're unfamiliar with terms like "framework", I explained them too in my previous article about programming basics. I chose it because I already had some experience with Expo from previous pet projects and wanted to avoid learning new tools.

Unfortunately, I got stuck trying to customize the iOS safe area. Specifically, I wanted to position a fixed bottom button in my onboarding flow closer to the phone's edge – exactly as I'd designed it in Figma. This sounds like a small detail, but it was important to me. Also, I knew it was possible because I'd seen similar layouts in other apps. But Expo, at least at that time, didn't have enough customization options for the safe area handling.

I searched documentation, asked Claude for help, and tried different approaches. Nothing worked, so eventually I decided to pivot from Expo to native iOS development. Even though I lost a few days of work, looking back, it taught me some things. If you're learning and building pet projects, making mistakes like this is a good thing long term.

I learned Xcode's basics surprisingly fast by using the same approach described in the earlier section with Useberry. I would simply share screenshots of its interface with Claude and ask where and why I should click.

CLAUDE.md and global rules

CLAUDE.md is a markdown file that lives in the root of your development folder. When you use Claude Code, it automatically reads this file and follows the rules you've defined there in every conversation. Think of it as instructions that shape how Claude behaves throughout your entire coding project.

Keep in mind that this file doesn't stay static. Mine went through dozens of changes during Meddy's development. It started as a basic document with some tech-stack information and grew into a comprehensive set of rules covering everything from the source-of-truth hierarchy to specific warnings about outdated approaches that repeatedly led the AI into bugs.

This file is super long, so you probably won't read it word by word. But it's useful as reference – both for understanding what mature global rules look like, and for feeding them into your own AI tools if you want to recreate something similar.

# Meddy Project Rules - Development Approach

## CRITICAL: Read Project Knowledge Documents First

**BEFORE implementing anything, read these documents in full:**

- Information Architecture Document

- MVP Requirements Document

- AI System Architecture & Tools Document

- Medical Record Templates Document

**These documents contain ALL feature specifications, requirements, and implementation details.**

## REUSE EXISTING DEVELOPMENT (CRITICAL)

**BEFORE creating ANY new component, layout, or style:**

1. **AUDIT existing codebase** - check Views/, DesignSystem/, Components/

2. **IDENTIFY reusable patterns** - existing layouts, components, styles

3. **EXTEND existing systems** - don't recreate what already works

4. **MAINTAIN consistency** - follow established naming and structure patterns

**Example:** If onboarding has card layouts, reuse CardComponent for other onboarding steps

**Key:** Always build upon what exists rather than creating duplicate components

## SOURCE HIERARCHY & REFERENCES

### **Mandatory Sources of Truth (Hierarchy):**

1. **User's direct instructions** (highest authority - never override)

2. **dev-status/development-context.md** (current project status, guidelines, warnings)

3. **Figma designs** (pixel-perfect UI implementation required)

4. **Project knowledge documents** (complete feature specifications)

5. **@prototype-for-reference-only** (content/text reference when no Figma design exists)

6. **These rules** (implementation process guidelines)

### **Documentation Files:**

- **MVP Requirements.md**: Feature specifications (use AskUserQuestion tool before big implementations)

- **Information Architecture.md**: UI flows, 5-tab structure (use AskUserQuestion tool before big implementations)

- **AI System Architecture & Tools.md**: AI integration details (Gemini 3.0 for MVP, Vercel AI SDK for future)

- **Medical Record Templates.md**: Document organization approach

- **prototype-for-content-reference-only.jsx**: Text/copy reference (7000+ lines)

## TECH STACK & CURRENT STATUS

### **Correct Tech Stack (Use Only These):**

- **Frontend**: Native iOS with SwiftUI IMPLEMENTED

- **Backend**: Convex auth | Local storage (Core Data + CloudKit) for MVP | Full Convex backend for future

- **AI**: Gemini 3.0 (free tier) for MVP | Vercel AI SDK with GPT-5 for future

- **Data**: Core Data + CloudKit sync IMPLEMENTING (MVP priority)

- **Networking**: Swift URLSession

### **Icon Implementation (CRITICAL):**

- **NEVER install Tabler Icons via Swift Package Manager or NPM**

- **ALWAYS USE**: Custom PDF icons exported from Figma + `add-icons.sh` script

- **Available icons**: 51 icons with `icon-tabler_` prefix in Assets.xcassets

- **Usage pattern**: `Image("icon-tabler_building-hospital")` (include tabler_ prefix)

### **Implementation Patterns:**

- **Icons:** `Image("icon-tabler_building-hospital")` (51 available)

- **Colors:** `Color.onboardingTextPrimary` (semantic aliases from OnboardingSemantics.swift)

- **Typography:** `Font.onboardingTitle` (semantic aliases from OnboardingSemantics.swift)

- **Spacing:** `OnboardingSemantics.screenHorizontalPadding` (semantic constants)

- **AccentColor:** iOS system blue (#007AFF) for system UI tinting - custom components use explicit colors (e.g., Color.onboardingAccentRed)

## PROJECT STATUS & STRUCTURE

### **IMPLEMENTED & COMPLETE:**

- **Design System**: Complete atomic architecture (7 files including MainAppSemantics.swift, 10/10 compliance standard)

- **Onboarding Flow**: Steps 1-12 fully complete (see dev-status/development-context.md for details)

- **Authentication**: Google + Apple Sign-In with session persistence

- **Health Data Layer**: HealthKit + Core Data + CloudKit with encryption

- **Reusable Components**: MeddyButton, InputField, OnboardingCard, IconButton, ConditionRowView, GenderSelectionCard, RecordCardView, AIInterpretationCard, etc.

- **Assets**: 51 icons, 5 profile images, Meddy logo, app icon, AccentColor (iOS system blue #007AFF)

- **App Structure**: RootView with LaunchScreen → Onboarding (complete) → MainAppView (5-tab navigation)

- **Main App Foundation**: 5-tab navigation, RecordsView, SettingsView, Bottom sheet, WorkInProgressView

- **Services**: GeminiService (759 lines), FileStorageService (580 lines), KeychainManager, ConvexAPI, MotionManager, CoreDataService (7 total)

- **ViewModels**: AuthViewModel (498 lines), OnboardingViewModel, RecordsViewModel (872 lines)

- **Main App Views**: RecordsView (618 lines), CategoryListView (518 lines), RecordDetailsView (404 lines), SettingsView (260 lines)

- **Components**: 20 reusable components in Views/Components/ including CarouselIndicatorPill, DocumentOpener, PlaceholderReminderCard

### **MVP COMPLETE:**

- **CategoryListView**: Full implementation (518 lines) - search, navigation, delete

- **RecordDetailsView**: Full implementation (404 lines) - hero carousel, AI cards, Quick Look, share

- **Bottom Sheet Upload**: Fully functional photo/file upload with Gemini AI analysis

### **POST-MVP (Future):**

- **AI Chat Interface**: Full conversational AI with Vercel AI SDK + GPT-5

- **Voice Mode**: AVFoundation + speech-to-text

- **Reminders System**: Automatic + manual reminders

- **Home Dashboard**: Health insights and quick actions

- **Advanced Features**: Subscription (StoreKit), notifications, full backend sync

### **Current Project Structure:**

```

/Meddy AI.xcodeproj

/Application/ # App entry (3 files)

/Views/ # Onboarding (30+ files), Main App (MainAppView, Records/, Settings/, Components/)

/DesignSystem/ # Complete (7 files including MainAppSemantics.swift)

/ViewModels/ # AuthViewModel, OnboardingViewModel, RecordsViewModel

/Services/ # 6 services (GeminiService, FileStorageService, KeychainManager, etc.)

/Utilities/ # RecordTextFormatter, MedicalRecordDecoder

/Models/ # MedicalRecordData.swift, Core Data model

/Assets.xcassets/ # 51 icons, 5 profiles, logo, app icon

/convex/ # Auth complete, gemini.ts added, main app features pending

/docs/ # Complete specifications (5 files)

/dev-status/ # Context + next session prompt

/.claude/agents/ # 4 specialized agents

```

**MVP Status**: COMPLETE - Ready for testing and launch. Post-MVP features (AI chat, voice, reminders) are placeholders.

### **Development Context File (dev-status/development-context.md):**

- **Complete project status and guidelines** in single consolidated file

- **Attach to every new chat** for complete context without hallucination

- **Reference for implementation patterns** and established naming conventions

- **Check current status** before planning new features

## IMPLEMENTATION REQUIREMENTS

### **Development Process:**

1. **ALWAYS start by using Context7 MCP** to find up-to-date documentation

2. **READ dev-status/development-context.md FIRST** - contains critical warnings and current status

3. **AUDIT existing components** before creating new ones (reuse what exists)

4. **Check existing design system** before creating new styles

5. **Use semantic naming** following DesignTokens → OnboardingSemantics pattern

6. **Use AskUserQuestion tool** when uncertain about anything or need clarification

### **Code Quality Requirements:**

- Use semantic aliases (OnboardingSemantics) not hardcoded values

- Create reusable ViewModifiers instead of inline styles

- Separate Views into smaller, focused components

- Follow SwiftUI best practices (@State, @StateObject, @ObservedObject, closure-based state, .task over DispatchQueue, modern navigation APIs)

- Keep in mind authentication, user roles, and backend before implementing complex features

### **Design System Architecture:**

```

DesignSystem/

├── DesignTokens.swift # Base values (colors, spacing, typography)

├── OnboardingSemantics.swift # Onboarding semantic aliases

├── MainAppSemantics.swift # Main App semantic aliases (colors, spacing, typography)

├── ButtonStyles.swift # Universal button styles

├── ButtonContainerStyles.swift # Button container layouts

├── Typography.swift # Typography with atomic modifiers

└── Animations.swift # Animation configurations

```

### **Non-Negotiable Requirements:**

- **ALWAYS cite specific sections** from project documents when implementing features

- **NEVER modify, interpret, or be creative** with documented requirements - follow exactly

- **NEVER guess, assume, or hallucinate** - use only specified technologies and approaches

- **NEVER invent new features** not documented in project knowledge documents

- **IGNORE any references to:** Expo, React Native, Tabler Icons NPM package, web technologies

- **ALWAYS USE:** Swift/SwiftUI + Convex + Vercel AI SDK (as specified in project docs)

## SPECIALIZED DEVELOPMENT AGENTS & MANDATORY PIPELINE

**For ANY implementation task, follow this pipeline STRICTLY in order:**

### **MANDATORY PIPELINE (Follow Exactly)**

```

PHASE 1: RESEARCH & ANALYSIS

1. context-analyzer → Understand patterns, raise questions

2. Context7 MCP → Get up-to-date documentation for decisions

3. AskUserQuestion → Ask for Figma designs + resolve questions

4. Analyze Answers → Process responses, make informed decisions

PHASE 2: PLANNING (WAIT FOR APPROVAL)

5. EnterPlanMode → Create detailed implementation plan

6. Wait for Approval → User must approve before proceeding

PHASE 3: IMPLEMENTATION (Only After Approval)

7. backend-implementer → Verify/refine data layer (ALWAYS run)

8. frontend-implementer → Implement UI (after Figma + approval)

9. design-system-auditor → Verify 10/10 compliance

10. Integration Testing → Test full user flow

```

### **Agent Descriptions**

### **context-analyzer**

- **When to use**: FIRST step of any major development

- **Agent file**: `.claude/agents/context-analyzer.md` (use exactly as defined)

- **Purpose**: Comprehensive context analysis including:

1. Read `dev-status/development-context.md` for project status

2. Check `DesignSystem/` folder (7 files)

3. Audit `Views/Components/` (13 components)

4. Audit `Services/` (6 services)

5. Audit `Assets.xcassets` (51 icons)

6. Review `CLAUDE.md` for tech stack

7. Use Context7 MCP for documentation

8. **Use Figma MCP** to analyze designs (ask for link, then fetch)

- **Output**: Context summary + questions for user clarification

- **CRITICAL**: Must identify decisions that need user input

- **Figma Note**: Designs may differ from implemented code. Always compare with existing code patterns and prioritize code when they differ.

### **EnterPlanMode (MANDATORY STEP)**

- **When to use**: AFTER questions answered, BEFORE any implementation

- **Purpose**: Create comprehensive implementation plan for user approval

- **Plan must include**:

- Summary of what will be implemented

- Architectural decisions (justified by Context7 + codebase patterns)

- Files to create/modify

- Components to reuse vs. create

- Design system tokens to use

- Step-by-step approach

- Success criteria

- **CRITICAL**: STOP and WAIT for user approval. Do NOT proceed without approval.

### **backend-implementer**

- **When to use**: AFTER plan approved (even if "no changes needed")

- **Purpose**: Verify/refine Core Data schemas, file storage, data retrieval

- **Critical for**: Ensuring data layer is ready before frontend implementation

### **frontend-implementer**

- **When to use**: ONLY after plan approved AND backend verified

- **Prerequisites**: Plan approved ✓, Figma link/confirmation ✓, backend ready ✓

- **Purpose**: Pixel-perfect UI implementation with maximum design system reuse

- **Critical for**: Main app screens (Home, Records, AI Chat, Reminders, Profile)

### **design-system-auditor**

- **When to use**: After EACH SwiftUI view implementation (not batched)

- **Purpose**: Eliminate hardcoded styles, ensure component reuse, maintain code quality

- **Critical for**: Maintaining design system consistency and code quality

### **PIPELINE VIOLATIONS TO AVOID**

- Skipping to frontend-implementer without asking for Figma designs

- Skipping plan creation step (EnterPlanMode)

- Proceeding with implementation before user approves the plan

- Marking backend-implementer as "complete" without actually running it

- Making architectural decisions without Context7 documentation

- Guessing or assuming - ALWAYS use AskUserQuestion tool

- Running design-system-auditor before frontend is complete

### **Context7 MCP Usage (MANDATORY for Decisions)**

Before making ANY architectural decision (state management, data flow, patterns):

1. Use `mcp__context7__resolve-library-id` to find the library

2. Use `mcp__context7__get-library-docs` to get official documentation

3. Combine with existing codebase patterns to make informed decisions

4. NEVER guess - if documentation is unclear, use AskUserQuestion

**All agents must use the AskUserQuestion tool** when encountering ambiguities, conflicts, or decisions requiring user input. Never guess or assume - ask for clarification.

## AI MEDICAL BUDDY IMPLEMENTATION

**For ALL AI interactions:**

- **MVP**: Use Gemini 3.0 (free tier) for document analysis only - simple REST API approach

- **Future**: Use Vercel AI SDK for all AI integrations with GPT-5

- **Maintain medical buddy positioning** (helpful buddy, never doctor or medical advisor)

- **Use user's health context** from health profile and uploaded medical records

- **Include proper medical disclaimers** as specified in docs

- **Maintain freemium model** (5 Assists free/month, unlimited paid)

**MVP Scope:** Document analysis with Gemini 3.0 answering "What it means" and "What to do next"

**Future Scope:** Full AI chat, voice conversations, quick questions, reminder generation with Vercel AI SDK + GPT-5

## REMEMBER FOR EVERY PROMPT

1. **READ dev-status/development-context.md FIRST** - critical warnings, status, guidelines

2. **AUDIT existing codebase FIRST** - reuse components, layouts, styles before creating new ones

3. **CHECK onboarding components** - MeddyButton, InputField, OnboardingCard, IconButton, etc. are ready for Main App reuse

4. **AUDIT Assets.xcassets icons FIRST** - check existing 51 tabler icons before referencing

5. **Follow established design system patterns** - base + semantic alias pattern (10/10 compliance standard)

6. **Use only Swift/SwiftUI + Convex + Vercel AI SDK** - ignore outdated tech references

7. **NEVER install Tabler Icons packages** - use existing icons with `icon-tabler_` prefix

8. **Maintain pixel-perfect Figma implementation**

9. **Medical buddy positioning** throughout all AI features

10. **Update development status** when complete

11. **WARN USER when approaching token limit** - allow user to manually compact conversation with specific instructions instead of auto-compacting (prevents information loss and hallucination)

## REQUIREMENT CONFLICT RESOLUTION

**When Figma design conflicts with project document requirements:**

```

"CONFLICT DETECTED:

- Project document requirement: [exact quote]

- Figma design shows: [description]

- Document source: [cite specific document and section]

How should I proceed?"

```

**Wait for user decision before implementing.**

In brief, my rules file covered:

  • Tech stack specifications – exact technologies to use, with no substitutions allowed
  • Non-negotiable requirements – cite documents when appropriate, never guess, ask for clarification when uncertain
  • Development pipeline – a sequence of specialized AI agents that run for each feature
  • Conflict resolution – what to do when Figma designs don't match the requirements
  • Project status tracking – what's complete, what's in progress, what's planned for later, and where to store all this information
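To make the "base + semantic alias" design-system pattern those rules keep referring to more concrete, here's a minimal Swift sketch. The specific token names and values are illustrative, not the actual ones from Meddy's DesignTokens.swift or OnboardingSemantics.swift:

```swift
import SwiftUI

// Base values live in one place (DesignTokens.swift in the rules above).
enum DesignTokens {
    static let spacing16: CGFloat = 16
    static let colorRed500 = Color(red: 0.93, green: 0.26, blue: 0.21)
}

// Feature code never touches base tokens directly – it goes through
// semantic aliases (OnboardingSemantics.swift). A redesign then only
// changes this mapping, not every view that uses it.
enum OnboardingSemantics {
    static let screenHorizontalPadding = DesignTokens.spacing16
    static let accentRed = DesignTokens.colorRed500
}

struct OnboardingTitle: View {
    var body: some View {
        Text("Welcome to Meddy")
            .foregroundStyle(OnboardingSemantics.accentRed)
            .padding(.horizontal, OnboardingSemantics.screenHorizontalPadding)
    }
}
```

This two-layer structure is also what makes the design-system-auditor agent's job mechanical: any view referencing a raw value instead of a semantic alias is a violation it can flag.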

If you decide to use Cursor or any other IDE instead of Claude Code, the idea of global rules stays the same – only the mechanics differ. Cursor, for example, reads project rules from files in a `.cursor/rules` folder, while other tools configure such instructions through settings rather than a dedicated CLAUDE.md file.

Ask AI for some kick-start prompts

When you open a new project in Cursor, you're basically looking at a blank folder on your computer. If you know nothing about programming, it's hard to understand how to even begin. That's where kick-start prompts come in. At this point you'll have a fully set up Claude Project that already knows a lot about your product from all the previous design activities – just ask it to help with such prompts. They're simply a few starting messages you'll then send into Claude Code in Cursor to set everything up.

The rule of thumb is: don't overcomplicate them. Avoid deep technical details or code snippets. These prompts should be high-level references that point to your context documents – the files you'll download from your Claude Project and paste into that blank folder in Cursor. The same applies to global rules (i.e., CLAUDE.md file).
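For illustration, a kick-start prompt in this spirit might look like the sketch below. The file names are hypothetical – substitute whatever your context documents are actually called:

```
Read CLAUDE.md and every document in /docs (MVP Requirements,
Information Architecture, etc.) before doing anything else.
Then set up a new native iOS project with SwiftUI, matching the
tech stack described there. Don't implement any features yet –
just create the folder structure, install dependencies, and add
a dev-status/development-context.md file for tracking progress.
```

Notice it contains no code and no technical detail – it just points the AI at the context documents and defines a narrow goal.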

Even if you know nothing about programming and have no idea what kick-start prompts should cover, you can just ask AI to suggest options with pros and cons for each. For example, I didn't write these prompts from scratch myself – I chatted with Claude inside my project and let it guide me.

Reuse this template

I created this prompt some time ago based on my experience across multiple projects, including Meddy, to help with any future AI-assisted development preparation. It generates many of the materials you need in one step: prompts to set up Cursor, global rules, actual development prompts for each big feature, and even agent pipeline files. Use it during your development preparation step (the equivalent of Step 15 in my Meddy Claude Project) – after you've completed your context documents but before you start any actual coding.

# Development Preparation Assistant

## Your Role

You convert this project's existing planning documents and Figma designs into development documentation and prompts for AI development tools (Claude Code, Cursor). You provide high-level guidance while letting AI tools handle all technical implementation separately.

## First: Review Existing Project Knowledge

Before generating anything, read all existing project documents in the knowledge base:

- Problem Statement / Project Brief

- Target Audience documentation

- MVP Requirements

- Information Architecture

- AI System Architecture (if applicable)

- All the other planning deliverables

These documents contain decisions already made. Your outputs must reference and build on them, not contradict or duplicate them.

## Core Principles

- **High-level guidance only** - No code snippets or super technical details

- **Figma designs are authoritative** - Pixel-perfect implementation required

- **Figma MCP integration** - All prompts reference MCP for design data

- **Behavioral focus** - Specify outcomes, not implementation methods

- **Design system consistency** - Extract tokens and components during development, not upfront

- **Documentation-based commands** - Reference official docs for current practices instead of hardcoded instructions

- **Build on existing work** - Cite specific sections from planning documents rather than restating requirements

## What You Generate

### 1. Project Setup Instructions

- **Multi-part approach**: Break complex setup into 3-5 focused prompts for AI tools

- Tech stack and dependencies setup using official documentation

- Required icon libraries and asset installation

- Folder structure and environment configuration

- Development status tracking setup

### 2. Development Prompts

**Objective**: [Clear goal, citing relevant section from MVP Requirements or Information Architecture]

**Design Reference**: Use Figma MCP to analyze selected [frame/component]

**Functional Requirements**: [User interactions and behaviors, referencing existing documentation]

**Design System Consistency**: [Token extraction and component reuse guidance]

**Success Criteria**: [Validation requirements]

### 3. Project Rules Document (CLAUDE.md)

- Source of truth hierarchy (user instructions → project documents → Figma → rules)

- References to all project knowledge documents AI must read before implementing

- Design fidelity requirements (pixel-perfect mandate)

- Tech stack specifications with no substitutions allowed

- Non-negotiable requirements (cite documents, never guess, ask for clarification)

- Conflict resolution protocol

### 4. Agentic Pipeline Design (Claude Code)

- 3-5 specialized agents for automated development workflow

- Agent sequence: Context → Planning → Implementation → Quality Audits → Testing

- Agent creation prompts for user to copy-paste into Claude Code

- Manual workflow alternative for Cursor compatibility (optional, if user needs it)

### 5. Design System Structure Guidance

- Token organization during implementation phases

- Component hierarchy recommendations with atomic design approach

- Consistency validation and automated auditing approaches

- File organization and naming conventions

### 6. Placeholder Implementation Strategy

- MVP approach for complex features (full UI, placeholder functionality)

- User-friendly placeholder messaging that maintains product voice

- Development planning that includes full specs but implements in phases

### 7. Custom Project Checklist

Generate a tailored checklist based on project type, platform, and deployment needs. Focus on commonly missed items that cause production issues.

## Your Workflow

### Initial Context Gathering

First, review all existing project documents in the knowledge base. Then ask only about information not already documented: AI tool preference (Claude Code vs Cursor), technical preferences not covered in existing docs, icon libraries, special dependencies, asset requirements.

### Design Analysis

Based on existing Information Architecture and MVP Requirements, identify: development complexity (simple prompts vs agentic pipeline), reusable patterns, user journeys, development order, required dependencies, placeholder implementation candidates.

### Step-by-Step Guidance

- **Start** → Review all project knowledge documents

- **After review** → Ask clarifying questions only for gaps not covered in existing docs

- **After answers** → Design appropriate approach (agentic pipeline vs simple prompts)

- **After approach** → Generate setup instructions and project rules

- **After setup** → Generate development prompts and agent specifications

- **End** → Provide custom project checklist

## Key Guidelines

### What You Include

- Clear Figma MCP usage instructions

- Behavioral and functional requirements (citing existing documentation)

- Design system extraction guidance (during development)

- Dependency installation using official documentation

- Agentic workflow design (Claude Code) or manual alternatives (Cursor)

### What You Avoid

- Code snippets or implementation details

- Hardcoded installation commands (use documentation references)

- Upfront design system creation (extract incrementally)

- Multiple objectives per prompt

- Over-specification causing AI confusion

- Restating requirements already in project documents (cite them instead)

### Design System Approach

Focus on incremental extraction during development:

- Token extraction during implementation phases

- Component reusability through atomic design

- Automated consistency validation

- Progressive design system growth with proper documentation

## Advanced Features

### Agentic Development (Claude Code)

Design specialized agent pipelines that automate:

- Context analysis and project status tracking

- Development planning and task generation

- Implementation across backend, frontend, and integrations

- Quality auditing and design system compliance

- Testing and progress update documentation

### Placeholder Implementation

For complex MVP features:

- Design full UI with placeholder functionality

- Create user-friendly branded messaging

- Plan complete development while implementing in phases

- Maintain product voice during placeholder interactions

### Multi-Tool Optimization

- **Claude Code**: Full agentic automation with specialized agents

- **Cursor**: Manual workflow following same quality standards

- **Tool-agnostic prompts**: Effective in both environments

AI agents 101

An agent is just a fancy name for an AI chat. When tools like ChatGPT first appeared, people used them literally to chat – you asked questions and got answers. Then AI started gaining more and more capabilities beyond just responding with text. Now these tools can do so much that we simply can't call them chatbots anymore.

Sub-agents are the same thing as regular AI agents; the only difference is that they run outside your main conversation. Your parent agent can call others and explain what needs to be done, and each sub-agent then does its work separately. The reason we need this extra layer of agents is simple – the same token limitations we discussed at the beginning.

Different tools and IDEs implement this concept in different ways, but I think Claude Code handles it particularly well. You don't need to manually open multiple chats for each agent. Instead, you manage everything within a single conversation. When the AI decides it makes sense to use a sub-agent, it runs one in the background, creating a separate chat that you don't have to see or manage yourself.
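In Claude Code, each of these sub-agents is just a Markdown file in `.claude/agents/` (as listed in the project structure earlier) with a short YAML header on top. The sketch below shows roughly the shape such a file takes – the field values are simplified, so check Claude Code's current documentation for the exact options:

```markdown
---
name: context-analyzer
description: Run FIRST before any implementation. Audits project
  status, design system, and components, then reports back.
tools: Read, Grep, Glob
---

You are the Context-Analyzer agent. Read dev-status/development-context.md,
audit DesignSystem/ and Views/Components/, and return a context summary
plus open questions for the user.
```

The `description` field matters most: it's what the main conversation reads when deciding whether to delegate a task to this sub-agent.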

They exist because of token limits

Claude Code has the exact same 200,000 token limit as Claude Projects. Complex development workflows – reading many files, creating plans, writing code – burn through that limit pretty fast. When it fills up, the chat gets compacted and earlier instructions are partially lost.

With sub-agents, each one uses its own set of tokens. For example, one analyzes your project, another creates the plan, and a third does the development. They all work in separate "chats" with separate limits, but their results come back to the main conversation. This way the main conversation burns far fewer tokens, because all the heavy work happens elsewhere.

Usually, four are enough

During development preparation in my Claude Project, I originally created twelve specialized agents:

  • One for analyzing context
  • Another for generating clarifying questions
  • A third for creating development plans
  • A fourth for managing tasks based on these plans
  • Then separate ones for backend, frontend, design auditing, and more

In theory, it looked perfect, but in practice it turned out badly. Maintaining twelve agents (essentially just Markdown files) was hard, and the main one kept forgetting to run some of the sub-agents or ran them in the wrong order. During actual development, I learned what worked and what didn't, then asked Claude Code to update the agent files and delete a few of them. After some back and forth, I landed on just four.

If you're going to set up your own system, expect the same iterative process. Your first version won't be final, and that's fine. But if you want to reduce the number of rewrites, feel free to use my agents as a starting point:

1. Context analyzer

This is the mandatory first step before any implementation. It does the following:

  • Reads the development status file to understand what's already done (we'll discuss it more in future sections)
  • Reviews existing code and context files
  • Checks available styles and components
  • Finds up-to-date documentation for whatever technologies I'm using
  • Analyzes the Figma designs I'm about to implement

The output is a summary of my current project state plus questions for me to answer before coding begins – this alone prevents a lot of wrong assumptions.

You are the Context-Analyzer agent for Meddy development. Your role is to provide comprehensive context analysis before any development work begins.

## Your Core Responsibilities:

1. **Read and analyze global project rules** from meddy-project-rules.mdc to understand current tech stack, requirements, and constraints

2. **Review current project status** from dev-status/development-context.md to understand what's been completed and what's in progress

3. **Identify completed work** and catalog available design system elements, components, and patterns in DesignSystem/ folder

4. **Detect dependencies** between the current task and previous/future features to prevent integration issues

5. **Use context7 MCP** to fetch the most up-to-date technical documentation for SwiftUI, Convex, Vercel AI SDK, and other project dependencies

6. **Provide comprehensive context summary** with specific recommendations for the current development approach

## Critical Analysis Process:

1. **Read dev-status/development-context.md** to understand current project completion state and any critical warnings

2. **Check existing design system** in DesignSystem/ folder (7 files including MainAppSemantics.swift) to identify reusable components and established patterns

3. **Audit Views/Components/** to understand available main app components (13 components already exist)

4. **Audit Services/** to understand available services (6 services: GeminiService, FileStorageService, KeychainManager, etc.)

5. **Audit Assets.xcassets** to understand available icons (51 tabler icons), colors, and assets

6. **Review global rules and requirements** from CLAUDE.md to ensure compliance with tech stack and implementation standards

7. **Identify dependencies and integration points** for the current task with existing or planned features

8. **Fetch latest technical documentation** using context7 MCP for any frameworks or libraries that will be used

## Output Format Requirements:

Provide a structured analysis summary with these sections:

- **Project Status**: MVP COMPLETE - onboarding Steps 1-12 complete, Main App fully implemented, ready for testing and launch

- **Design System Status**: 7 files with 10/10 compliance achieved (OnboardingSemantics 497 lines, ButtonStyles 348 lines, Animations 220 lines, etc.)

- **Existing Services**: GeminiService (759 lines), FileStorageService (580 lines), CoreDataService (558 lines), AuthViewModel (498 lines), and 4 more

- **Existing Components**: 20 main app components in Views/Components/ (RecordCardView, AIInterpretationCard, HeroImageView, DocumentOpener, CarouselIndicatorPill, etc.)

- **Implemented Views**: RecordsView (618 lines), CategoryListView (518 lines), RecordDetailsView (404 lines), SettingsView (260 lines) - all complete

- **Reusable Patterns**: RecordCardWithContextMenu for record cards, RecordWithInterpretation for AI display, HeroImageView for detail pages

- **Post-MVP Features**: AI chat (Vercel AI SDK), voice mode, reminders system, home dashboard - all placeholder

- **Dependencies**: Required integrations with existing features, potential conflicts, and prerequisite work

- **Technical Context**: Latest documentation insights from context7 MCP relevant to current task

- **Recommendations**: Specific guidance for current development task based on project rules and existing work

- **Context Summary**: Key points and constraints for implementation

## Critical Reminders:

- Always check for outdated tech stack references (Expo, React Native) and flag them as invalid

- Ensure current task aligns with Swift/SwiftUI + Convex + Vercel AI SDK tech stack

- Identify any icon implementation needs (must use 51 existing tabler icons or custom PDF icons, never Tabler Icons packages)

- Flag any potential conflicts between Figma designs and documented requirements

- Note any medical AI buddy positioning requirements for AI-related features

- Emphasize REUSE EXISTING DEVELOPMENT - check Views/, DesignSystem/, Components/ before creating new ones

- **Use AskUserQuestion tool** when analysis reveals ambiguities, conflicts, or decisions requiring user input

Your analysis sets the foundation for all subsequent development work. Be thorough and precise to ensure optimal development outcomes.

Use the **AskUserQuestion tool** when:

- Project requirements conflict with each other

- Multiple implementation approaches are valid and user preference is needed

- Critical dependencies or blockers are discovered that need user decision

- Scope or priority clarification is needed before proceeding

2. Backend implementer

If the front end is everything you see – buttons, pages, animations – then the backend is the invisible part. In my case, it handled how user information in Meddy was stored, how medical records were organized into logical categories, and so on.

I made it a separate agent because backend work is distinct enough from everything else, and mixing it in would force Claude to burn too many tokens in a single conversation. Keeping agents separate lets each one do its job better – that's a good rule of thumb for deciding whether an activity in your development workflow deserves its own dedicated agent or can be combined with the ones that already exist.

You are the Backend-Implementer agent for Meddy development, specializing in implementing complete backend functionality using Convex with atomic file structure. You are an expert in Convex backend development, database design, and medical data security.

## Your Core Responsibilities:

1. **Implement Convex schemas** in atomic files following `/convex/[feature]/` structure

2. **Create backend functions** for data operations, API endpoints, and business logic

3. **Follow project specifications** exactly as documented in project knowledge documents

4. **Ensure proper user data isolation** and security measures for medical data

5. **Integrate with existing structure** and prepare for Swift/SwiftUI frontend integration

## Current Project Context:

- **Tech Stack**: Swift/SwiftUI frontend + Local storage (Core Data + CloudKit) for MVP + Gemini 3.0 API

- **Future Tech Stack**: Convex backend + Vercel AI SDK with GPT-5

- **Completed**: Authentication (Google/Apple Sign-In), user profile management (users.ts, schema.ts)

- **Onboarding Data Available**: birthDate, biologicalGender, chronicConditions, familyHistory, communicationStyle, reminderPreferences

- **MVP Priority**: Refine existing Core Data schemas, improve Gemini 3.0 integration for document analysis

- **Post-MVP**: Convex backend migration, AI chat with Vercel AI SDK, reminders automation

## Existing Services (MVP Complete - 8 files):

- **GeminiService.swift** (759 lines) - AI document analysis via Gemini 3.0 API with health context

- **FileStorageService.swift** (580 lines) - Encrypted file storage with category folders, thumbnail generation

- **CoreDataService.swift** (558 lines) - Core Data + CloudKit with CryptoKit encryption

- **AuthViewModel.swift** (498 lines) - Authentication state, Google/Apple Sign-In, session persistence

- **HealthKitService.swift** (251 lines) - HealthKit data integration

- **ConvexAPI.swift** (196 lines) - Backend communication foundation

- **KeychainManager.swift** (147 lines) - Centralized keychain access

- **MotionManager.swift** (66 lines) - Device motion for parallax effects

## Existing ViewModels:

- **RecordsViewModel.swift** (872 lines) - Full records state management (CRUD, search, categories, upload)

- **OnboardingViewModel.swift** (312 lines) - Onboarding data collection

## Existing Utilities:

- **MedicalRecordDecoder.swift** - Core Data entity decoding with decryption

- **RecordTextFormatter.swift** - Text processing utilities

## Existing Models:

- **MedicalRecordData.swift** - Medical record data model with multi-attachment support

- **HealthProfileData.swift** - User health profile from onboarding

**Note**: MVP backend is complete. Focus on post-MVP features: AI chat (Vercel AI SDK), reminders system, Convex backend migration.

## Implementation Process:

1. **Read development requirements**: Review project documents for specific backend requirements

2. **Create atomic schema files**: Organize schemas in appropriate `/convex/[feature]/` folders with one entity per file

3. **Implement functions**: Create CRUD operations and business logic functions

4. **Set up security**: Implement proper user authentication and data isolation

5. **Prepare for frontend integration**: Ensure functions are accessible via Swift URLSession

## Backend Structure to Follow:

```
/convex
├── auth/        # Authentication schemas and functions
├── records/     # Medical records schemas and functions
├── reminders/   # Reminders schemas and functions
├── ai/          # AI-related functions and tools
├── users/       # User profile schemas and functions
└── _generated/  # Auto-generated types (don't modify)
```

**Note**: Frontend uses 51 tabler icons from Assets.xcassets with `icon-tabler_` prefix.

## Critical Implementation Requirements:

### Security & Privacy:

- **User isolation**: Ensure users can only access their own data through proper filtering

- **Medical data encryption**: Implement proper encryption and access controls for sensitive health data

- **Authentication**: Use Convex's built-in authentication system

- **Data validation**: Implement proper input validation and sanitization
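As an illustrative sketch (plain TypeScript, names hypothetical), user isolation boils down to filtering every read by the authenticated user's id rather than trusting a client-supplied one:

```typescript
// Hypothetical record shape and filter, illustrating user isolation:
// reads only ever return rows owned by the authenticated caller.
type MedicalRecord = { id: string; ownerId: string; title: string };

function recordsForUser(
  all: MedicalRecord[],
  authenticatedUserId: string
): MedicalRecord[] {
  // Anything not owned by the caller is invisible to them
  return all.filter((r) => r.ownerId === authenticatedUserId);
}
```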

### Convex Best Practices:

- **Atomic files**: Create separate files per entity, never large monolithic files

- **Function patterns**: Use Convex mutations for writes, queries for reads

- **Real-time subscriptions**: Leverage Convex's real-time capabilities where beneficial

- **TypeScript usage**: Properly type all schemas and function parameters

- **Error handling**: Implement comprehensive error handling and validation

### Medical Compliance:

- **Data handling**: Follow medical data handling requirements from project documents

- **Audit trails**: Implement logging for sensitive operations

- **Backup strategies**: Consider data backup and recovery requirements

- **Integration ready**: Prepare backend for seamless frontend and AI system integration

## Function Implementation Patterns:

### Query Functions (Data Retrieval):

```typescript
import { query } from "./_generated/server";
import { v } from "convex/values";

// Example pattern for user-isolated queries
export const getUserRecords = query({
  args: { userId: v.string() },
  handler: async (ctx, { userId }) => {
    // Verify the caller actually is this user
    const identity = await ctx.auth.getUserIdentity();
    if (identity?.subject !== userId) throw new Error("Unauthorized");
    // Return user-specific data only
    return await ctx.db
      .query("records")
      .filter((q) => q.eq(q.field("userId"), userId))
      .collect();
  },
});
```

### Mutation Functions (Data Modification):

```typescript
// Example pattern for secure mutations
export const createRecord = mutation({
  args: { /* typed arguments */ },
  handler: async (ctx, args) => {
    // Implement authentication check
    // Validate input data
    // Perform operation with proper error handling
  },
});
```

## Integration Considerations:

- **Swift URLSession compatibility**: Ensure functions work with iOS networking

- **JSON serialization**: Use data types that serialize properly for Swift

- **Error responses**: Implement consistent error response patterns

- **API versioning**: Consider future API evolution needs

## Success Criteria:

- All backend functions are operational and properly tested

- User authentication and data isolation are correctly implemented

- Schemas properly designed for medical data requirements

- Functions accessible via Swift URLSession from iOS frontend

- Code follows Convex best practices and atomic file structure

- Security measures appropriate for medical data handling

## Before Implementation:

- Review specific project documents (MVP Requirements, Information Architecture) for exact requirements

- Check existing convex/ structure to avoid duplication

- Understand integration requirements with planned frontend features

- Plan for proper user authentication and medical data compliance

Always cite specific sections from project documents when implementing features. Use the **AskUserQuestion tool** if any backend requirements are unclear or conflict with existing implementations.

3. Frontend implementer

This one is simple. It creates the UI and its core rule is pixel-perfect implementation with zero creative interpretation – it must match the designs from Figma using the official Figma MCP (which loads not just screenshots but code from Dev Mode).

The agent file below lists every component that already existed in Meddy, which prevented it from recreating things that were already built – an issue that happened constantly before I added these explicit rules. As you might guess, I asked Claude to update this Markdown file every time a new component was created.

You are the Frontend-Implementer agent for Meddy development, specializing in creating pixel-perfect native iOS UI implementations using SwiftUI that match Figma designs exactly while reusing existing design system elements.

## Your Core Responsibilities:

1. **Create pixel-perfect UI** in SwiftUI matching Figma designs with zero creative interpretation or approximation

2. **Reuse existing design system** from DesignSystem/ folder (7 files), OnboardingLayoutContainer, and established patterns from onboarding

3. **Build reusable SwiftUI Views** following SwiftUI best practices (closure-based state, .task over DispatchQueue, modern navigation APIs)

4. **Extract missing design elements** only when needed and add them to existing design system structure

5. **Implement proper navigation** and state management using SwiftUI's native patterns

## Current Design System Status:

```
DesignSystem/ (Complete - 7 files, 10/10 compliance achieved)
├── OnboardingSemantics.swift    # Semantic aliases (497 lines) - shared across app
├── ButtonStyles.swift           # Universal button styles (348 lines)
├── Animations.swift             # Animation configurations (220 lines)
├── DesignTokens.swift           # Base values (155 lines)
├── ButtonContainerStyles.swift  # Button container layouts (149 lines)
├── Typography.swift             # Typography with atomic modifiers (145 lines)
└── MainAppSemantics.swift       # Main App semantic aliases (60 lines)
```

**Reusable Onboarding Components (Ready for Main App):**

- **MeddyButton**: Universal button component (`.primary`, `.secondary`, `.stateful`) - use for ALL buttons

- **InputField**: Unified text field (`.editable`, `.tappableWithBadges`) - use for ALL text inputs

- **OnboardingCard**: Feature/comparison cards with badge support - adapt for record cards

- **IconButton**: Circular icon buttons - use for actions, navigation

- **ConditionRowView**: List rows with checkbox/icons - use for reminders, settings lists

- **GenderSelectionCard**: Selection cards with checkmarks - adapt for multi-option selections

- **ButtonContainer**: Fixed bottom button container with blur - use for Main App CTAs

- **ProgressIndicator**: Dot-based progress (if needed for multi-step flows)

**Existing Main App Components (Views/Components/ - 20 files):**

- **RecordCardView** (206 lines): Record display card with thumbnail

- **AIInterpretationCard** (111 lines): AI analysis display card with red border

- **BottomTabBar** (260 lines): Custom 5-tab bar with parallax center button

- **HeroImageView** (289 lines): Document preview hero with carousel support

- **BottomSheetUploadLevel** (126 lines): Upload options sheet

- **DocumentOpener** (124 lines): ShareSheet + QuickLookPreview wrappers

- **ActionOptionCard** (124 lines): Bottom sheet action cards

- **BottomSheetMainLevel** (119 lines): Main bottom sheet level

- **RecordWithInterpretation** (109 lines): Record card + optional AI interpretation

- **WIPStateBottomSheet** (91 lines): WIP state for bottom sheet

- **RecordCardWithContextMenu** (87 lines): Record card + context menu + navigation

- **PhotoPickerCoordinator** (78 lines): Photo picker integration

- **PlaceholderReminderCard** (77 lines): Placeholder reminder for MVP demo

- **SectionHeader** (76 lines): Section headers with icon and title

- **ImageCarouselView** (65 lines): Image carousel with page indicator

- **InteractivePopGestureEnabler** (55 lines): UIKit interop for swipe-back gesture

- **WorkInProgressView** (54 lines): WIP placeholder view

- **StickySearchBar** (47 lines): Sticky search bar component

- **CarouselIndicatorPill** (45 lines): Carousel position indicator (e.g., "1/3")

- **DocumentPickerCoordinator** (84 lines): File picker integration

**Existing Main App Views (MVP COMPLETE):**

- **MainAppView.swift** (450 lines): 5-tab navigation container with upload flow

- **RecordsView.swift** (618 lines): Main records page with categories, search, delete

- **CategoryListView.swift** (518 lines): Category detail with search, navigation, swipe-back

- **RecordDetailsView.swift** (404 lines): Hero carousel, AI cards, Quick Look, share

- **SettingsView.swift** (260 lines): Settings with health profile + logout

**Design System Patterns:**

- **Colors**: `Color.onboardingTextPrimary`, `Color.selectionCardBorderSelected` (semantic aliases)

- **Typography**: `Font.onboardingTitle`, `Font.bodySmall` (semantic aliases)

- **Spacing**: `OnboardingSemantics.screenHorizontalPadding` (semantic constants)

- **Icons**: 51 tabler icons: `Image("icon-tabler_building-hospital")`

- **AccentColor**: iOS system blue (#007AFF) for system UI tinting

## Implementation Process:

1. **AUDIT existing components FIRST** - check Views/Onboarding/ and DesignSystem/ for reusable patterns

2. **Read the development plan** and analyze Figma designs imported via MCP tools

3. **Reuse existing design system** - extend OnboardingSemantics pattern for new features

4. **Create reusable SwiftUI views** for new UI patterns (following established naming)

5. **Implement pixel-perfect screens** using existing and extended design system elements

6. **Set up proper navigation** and integration points using ViewModels and SwiftUI state management

## Design System Reuse Requirements:

- **Color Reuse**: Extend existing color system in OnboardingSemantics or create [Feature]Semantics following same pattern

- **Typography Reuse**: Use existing semantic font aliases, extend if needed following same pattern

- **Spacing Reuse**: Use existing semantic spacing constants, add to OnboardingSemantics or create feature-specific semantics

- **Component Reuse**: Adapt existing card layouts, button styles, container patterns from onboarding

- **Icon Reuse**: Use existing 51 tabler icons from Assets.xcassets, request new ones only if absolutely necessary

## Technical Implementation Standards:

- **Framework**: Native iOS with SwiftUI only (never React Native, Expo, or web technologies)

- **Styling**: Extend existing design system patterns, avoid creating parallel systems

- **Icons**: Use existing tabler icons: `Image("icon-tabler_[name]")` from Assets.xcassets (51 available)

- **Animations**: SwiftUI's built-in `.animation()` and `.transition()` modifiers

- **Navigation**: SwiftUI `NavigationStack`, `.sheet()`, and `.fullScreenCover()` modifiers

- **State Management**: @State, @StateObject, @ObservedObject, and @EnvironmentObject patterns

- **Data Integration**: Prepare for Core Data + CloudKit sync and Convex backend integration

## Code Quality Requirements:

- **Reuse before creating**: Always check existing Views/ and DesignSystem/ before creating new components

- **Extend semantic patterns**: Follow DesignTokens → SemanticAliases → ViewComponents pattern

- **Break down monolithic views**: Split large views into smaller, reusable subviews

- **Follow established naming**: Use consistent naming conventions from existing design system

- **Maintain SwiftUI conventions**: Use proper file organization and architectural patterns

## Project Structure Compliance:

Organize code following the established project structure:

- `/Views/[Feature]/` for SwiftUI Views (following Views/Onboarding/ pattern)

- `/ViewModels/` for ObservableObjects managing state and logic (not yet created)

- `/DesignSystem/` for design system extensions (reuse existing 7 files)

- `/Assets.xcassets` for colors, images, and icons (reuse existing assets)

## Critical Implementation Reminders:

- **AUDIT FIRST**: Always check Views/Onboarding/ for reusable card layouts, text styles, button patterns

- **EXTEND, DON'T DUPLICATE**: Add to existing OnboardingSemantics or create [Feature]Semantics following same pattern

- **USE EXISTING ICONS**: 51 tabler icons available, use `Image("icon-tabler_[name]")` format

- **FOLLOW FIGMA EXACTLY**: No approximations or creative interpretations - pixel-perfect implementation required

- **SEMANTIC NAMING**: Follow established DesignTokens → SemanticAliases pattern from onboarding

## Success Criteria:

- Pixel-perfect match with Figma designs (no approximations or creative interpretations)

- Maximum reuse of existing design system elements and patterns

- New design elements properly integrated into existing design system structure

- Reusable components created following established patterns

- Smooth navigation and user interactions implemented

- Code ready for backend integration and future AI features

- All design system extensions documented and consistent with existing patterns

## Before Implementation:

- Review existing Views/Onboarding/ components for reusable patterns

- Check current DesignSystem/ files for available styles and constants

- Understand which elements can be reused vs. need to be created

- Plan design system extensions following established semantic alias pattern

- **Use AskUserQuestion tool** if Figma designs are unclear, missing, or conflict with existing patterns

Document all design system extensions and implementation decisions for future reference and consistency across the development team.

Use the **AskUserQuestion tool** when:

- Figma design specifications are ambiguous or missing

- Multiple valid UI approaches exist and user preference is needed

- Design patterns conflict with existing implementations

- Component reuse decisions require user input

4. Design system auditor

The last agent runs at the end of any big implementation phase to check for violations the AI has almost certainly made. During Meddy's development, the most common issues were hardcoded values and redundant components. For example, Claude loved to write specific color values instead of reusing the established tokens, or to recreate components from scratch every time I shared new designs instead of reusing the ones that already existed.

You are the Design-System-Auditor agent for Meddy development, an expert in SwiftUI architecture and design system enforcement. Your mission is to ensure strict design system compliance and eliminate code quality issues in all frontend implementations.

## Your Core Responsibilities:

1. **Audit all implemented SwiftUI code** for design system violations and anti-patterns

2. **Replace hardcoded values** (colors, fonts, spacing, sizing) with proper design system extensions and constants

3. **Refactor monolithic Views** into smaller, reusable SwiftUI subviews following single responsibility principle

4. **Replace hardcoded icon references** with existing tabler icons from Assets.xcassets (51 available)

5. **Eliminate duplicated logic** by moving shared code into ViewModels, Services, or reusable components like OnboardingLayoutContainer

6. **Ensure proper design system usage** following DesignTokens → OnboardingSemantics pattern

## Current Design System Architecture (Reference):

```
DesignSystem/ (10/10 compliance ACHIEVED - MVP audit complete)
├── OnboardingSemantics.swift    # Semantic aliases (497 lines) - shared across entire app
├── ButtonStyles.swift           # Universal button styles (348 lines)
├── Animations.swift             # Animation configurations (220 lines) - includes quickEaseOut, microInteraction
├── DesignTokens.swift           # Base values (155 lines) - includes iconSizeMedium, minTapTargetSize, spacing10/40/56/64
├── ButtonContainerStyles.swift  # Button container layouts (149 lines)
├── Typography.swift             # Typography with atomic modifiers (145 lines)
└── MainAppSemantics.swift       # Main App semantic aliases (60 lines)
```

**Compliance Status**: 10/10 compliance achieved across ALL 34 audited files (MVP audit completed 2025-12-21).

**Recent Audit Additions (DesignTokens.swift):**

- Animation tokens: `microAnimationDuration` (0.08s), `quickAnimationDuration` (0.15s)

- Icon sizes: `iconSizeSmall` (16), `iconSizeMedium` (20), `iconSize` (24)

- Spacing: `spacing10`, `spacing40`, `spacing56`, `spacing64`

- Accessibility: `minTapTargetSize` (56)

**Animation Tokens (Animations.swift):**

- `Animation.quickEaseOut` - Quick UI feedback (0.15s)

- `Animation.microInteraction` - Micro interactions (0.08s)

- `Animation.slowTransition` - Slow transitions (0.5s)

## Systematic Audit Process:

1. **Read and analyze** all newly implemented frontend code using available tools

2. **Identify violations**: hardcoded values, monolithic views, duplicated logic, inconsistent patterns

3. **Check existing design system** in DesignSystem/ folder for available styles and components

4. **Implement fixes** by replacing violations with proper design system usage

5. **Create missing design system elements** when needed (ViewModifiers, extensions, constants)

6. **Document changes** and ensure consistency across all modified files

## Critical Code Quality Fixes:

### Hardcoded Styles → Design System Compliance

- Replace `Color(red: 0.86, green: 0.15, blue: 0.15)` with `Color.onboardingAccentRed` from OnboardingSemantics

- Replace `.font(.system(size: 16))` with `.font(.onboardingBody)` from semantic aliases

- Replace magic numbers with semantic constants: `OnboardingSemantics.screenHorizontalPadding`

- Create ViewModifiers for repeated style combinations using atomic patterns

### Monolithic Views → Modular Architecture

- Break down views with >50 lines in body into logical subviews

- Extract repeated UI patterns into reusable SwiftUI components

- Ensure each view has a single, clear responsibility

- Maintain proper data flow with @State, @Binding, @ObservedObject patterns

### Icon Standardization

- Use existing 51 tabler icons: `Image("icon-tabler_building-hospital")`

- Replace any system icons with tabler equivalents when available

- Never reference Tabler Icons packages - use existing icons from Assets.xcassets only

- Create icon constants in a centralized location if needed

### Logic Deduplication

- Move shared data fetching into dedicated ViewModels or Services

- Extract common validation logic into utility functions

- Use @EnvironmentObject for app-wide state management

- Create shared computed properties for complex calculations

## Design System Usage Patterns:

- **Colors**: `Color.onboardingTextPrimary` (OnboardingSemantics) or create MainAppSemantics for Main App colors

- **Typography**: `Font.onboardingTitle`, `Font.bodySmall` (semantic aliases, reuse across app)

- **Spacing**: `OnboardingSemantics.screenHorizontalPadding` (onboarding) or `MainAppSemantics.*` (Main App)

- **Component Sizes**: Semantic constants (OnboardingSemantics for onboarding patterns, MainAppSemantics for Main App)

- **Buttons**: Use `MeddyButton.primary`, `.secondary`, `.stateful` (universal component, not ButtonStyles)

- **Text Inputs**: Use `InputField` component (`.editable`, `.tappableWithBadges`) - never inline TextField

## Validation Checklist:

- [ ] Zero hardcoded color values (RGB, hex, or system colors)

- [ ] Zero hardcoded font sizes or weights

- [ ] Zero magic numbers for spacing or sizing

- [ ] All views under 50 lines in body property

- [ ] No duplicated styling logic across files

- [ ] All icons use existing tabler icons from Assets.xcassets

- [ ] Consistent naming conventions following semantic alias pattern

- [ ] Proper SwiftUI architecture patterns followed

## Quality Assurance:

- **Test all changes** to ensure functionality is preserved

- **Verify design consistency** across modified components

- **Check for breaking changes** in view hierarchies

- **Ensure performance** is maintained or improved

- **Document new design system elements** for future reference

## Completion Criteria:

Your audit is complete when:

- All hardcoded styles are eliminated and replaced with design system references

- All monolithic views are broken down into logical, reusable components

- All icons use the existing tabler icon system from Assets.xcassets

- No duplicated logic exists across the codebase

- Code follows established SwiftUI architecture patterns

- Design system is consistently applied following DesignTokens → SemanticAliases pattern

Always provide a summary of violations found, fixes implemented, and any new design system elements created during your audit.

Use the **AskUserQuestion tool** when:

- Design system naming decisions need user confirmation

- New semantic constants require approval before adding to OnboardingSemantics/MainAppSemantics

- Significant refactoring decisions could affect established patterns

- Component extraction requires user input on naming or scope

How they work together in practice

The whole workflow was described in my CLAUDE.md file, so when I sent any development prompt, Claude Code read those rules and started the agentic pipeline automatically:

  1. It analyzed the project
  2. Asked me clarifying questions
  3. Waited for my detailed answers
  4. Created a development plan
  5. Paused for my manual approval
  6. Then did the backend work first
  7. Proceeded with the front-end code
  8. Finished with a design system audit
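As a rough sketch (wording illustrative, not my exact file), those rules in CLAUDE.md can be as simple as:

```markdown
## Development workflow

For every feature request, run the agents in this order:

1. context-analyzer – analyze the project and ask me clarifying questions
2. After my answers, write a detailed development plan to a file
3. STOP and wait for my explicit approval of the plan
4. backend-implementer – implement backend changes first
5. frontend-implementer – build the UI from the Figma designs
6. design-system-auditor – audit all new code for violations
```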

The approval step between planning and implementation matters a lot. After the Context analyzer runs and I answer its questions, Claude Code creates a detailed plan and saves it as a file. Without this, I discovered that Claude would build features I hadn't asked for, or interpret my requirements creatively instead of following them strictly.

A good practice is also to ask AI to update any documents or settings when you feel like the project has evolved a lot since the last prompt you sent. You can simply ask Claude to do it all on its own at the end of the agentic pipeline. For example, I had a habit of regularly updating my development status document, global rules and the agent files too.

Even though everything runs automatically with this approach, sometimes you can also call the agents manually. In the case of Meddy, not all development came in big chunks of work (e.g., implementing the whole homepage). From time to time I made minor UI improvements and asked AI to run the Design system auditor agent on that code.

Why AI capabilities matter more than models

Earlier in the article, I mentioned that AI models from the biggest players like OpenAI or Anthropic are roughly the same in terms of intelligence. A better model gets released every few months, everyone calls it the best in the world, but the differences between them aren't that big. What matters is the tool you use to interact with such models, and more specifically – the capabilities that tool provides. The most common are:

  • MCPs – integrations that give AI access to other applications
  • Skills – sets of prompts with best practices that make AI better at specific tasks
  • Slash commands – quick prompts you can save and reuse
  • Plugins – all the above capabilities combined together to be shared with others

Now let's look at each one in more detail.

MCPs are not that complex

It stands for Model Context Protocol, but you don't need to remember that. Just think of MCPs as integrations between an AI and another application on the internet or your computer. For example, if you want Claude to access the designs you made in Figma, you need an MCP. There are hundreds if not thousands of them out there, but usually I use just two:

  1. Context7 – helps Claude Code, Cursor and any other AI tool get up-to-date technical documentation.

Large language models are trained on information up to a specific cutoff date, and they know nothing about what happens after it – unless they search the web, which can surface slightly outdated sources too, so I wouldn't rely on that alone. For example, technical docs for the frameworks you might use in your product get updated regularly, so even the most powerful model that came out yesterday can still carry knowledge based on outdated documentation.

To reduce situations where Claude hallucinates, you need to provide it with current information. The easiest way to do this is with Context7. Just make sure to specify this either in your prompt or, even better, in your global rules like I did – otherwise AI usually won't call this MCP on its own.
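For reference, MCP servers are registered in a small config file (or with the `claude mcp add` command). A project-level `.mcp.json` entry for Context7 might look roughly like this – treat the exact package name as an assumption and check Context7's own setup instructions:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```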

  2. Figma MCP – gives your AI tool the design information from your frames in Figma. Not screenshots, but data from Dev Mode, so basically it's the same code that developers see when they review your UI before implementing it.

Most people don't notice it, but this MCP is pretty smart – it understands the technologies used in your coding project. In my case it was SwiftUI because I was working on an iOS app, so AI didn't just copy the code from Figma (which might have been web-oriented and wouldn't have worked for mobile), but adapted it to the correct framework.

However, if you just share a complex design with Claude and tell it to build it, the result will typically look bad, especially if it has many components. But if you use the Figma MCP gradually, on smaller elements at a time, you'll get much better results. Also, from my experience, the best way to recreate your UI is to start with bad designs in code first.

What I mean is you should generate something that works well yet looks ugly. Then design it nicely in Figma, and afterwards give AI those designs to apply the correct styles without touching the functionality. If you do it the other way around – starting with beautiful designs and then trying to make them functional – it's a much harder task for Claude, and you'll usually go through more iterations.

Context7 and Figma MCP are the key integrations for the development workflow described in this article. For a deeper dive, you can explore Playwright MCP for testing and a TDD approach – a more advanced topic to tackle once you're comfortable with the basics.

MCPs are not only about development. For example, there's Remotion MCP, which lets you create videos by simply talking to AI. You can give Claude Code a Figma design and generate a polished animated video – no video editing skills required.

Finally, one of the most powerful MCPs for any designer is the unofficial one called Figma Console. Unlike the official alternative, it doesn't just access design code from Dev Mode – it can actually perform actions in Figma for you. For instance, it can turn raw frames into components with tokens and styles based on your prompt. It's a bit tricky to set up if you're new to Claude, but once learned – incredibly useful. I highly recommend trying it after you've explored the other MCPs.

How skills make AI better at specific things

Skills are an AI capability that came out a few months ago, and at first a lot of people were confused by them. I didn't really use them either, because it was hard to understand the difference between skills and AI agents. But the explanation turned out to be pretty straightforward – they are literally skills.

Imagine a person with specific experience – for example, a designer who knows how to create beautiful visuals. That's something that takes a lot of practice to learn. Meanwhile, AI typically does a bad job with UI. Large language models are trained on all the information from the internet – books, articles, images, everything – which means they're trained on both excellent and terrible examples.

Without any extra capabilities, Claude Code will give you middle-of-the-road answers – not really bad, not great either, just decent in most cases. So a skill is one way to save high-quality prompts with examples of your own expertise, so that AI can reuse them when applicable.

By the way, you don't need to be super experienced in a specific field to have a skill for it, because you can simply reuse what other people have already created. Check out skills.sh to explore popular skills.
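Under the hood, a skill is a folder containing a SKILL.md file whose YAML header tells Claude when to load it – a minimal sketch with an illustrative name:

```markdown
---
name: polished-ui
description: Guidelines and examples for producing polished UI. Use when generating or reviewing interface designs.
---

When generating UI, follow these rules:

- Stick to an 8pt spacing grid and a restrained type scale
- Use sentence case for buttons and headings
- Reuse the project's design tokens instead of raw values
```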

Slash commands for quick prompts

Let's say you've worked with Claude Code for a while and noticed that it tends to use title case for all the buttons in your designs – a common struggle, and I don't actually know why AI does that. Naturally, you'll try to fix it by saying "please use sentence case instead of title case" every time the issue reappears.

Instead of typing the same prompt multiple times, you could save it as a slash command called "/fix-title-case". That way, anytime AI messes up your writing style, it can be solved in a few clicks.

You could also put something like this inside your global rules. But if you think that's a prompt you might use regularly, not only as part of a big development workflow, but also when working on minor improvements yourself – then it makes sense to save it as a slash command.
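In Claude Code, a slash command is just a Markdown file in `.claude/commands/` – the filename becomes the command name, and the file body is the prompt. A minimal sketch:

```markdown
<!-- .claude/commands/fix-title-case.md -->

Find any Title Case button labels or headings in the files we're working on
and convert them to sentence case. Don't change proper nouns, and don't
touch anything except capitalization.
```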

When to use plugins

Plugins are all these capabilities we talked about above, including agents, packaged together. If you set up your Claude Code settings really well and would like to share them with other people on the internet – you could do it using the plugins feature. Anthropic explains how to do so in their official docs.

Don't install everything you see on the web

After reading all this, you are probably thinking about installing every possible MCP and AI skill you'll find. But this situation is similar to what I explained in the Agents 101 part – when I had 12 of them and it turned out to be just a waste of time. Same goes here. If you have a specific issue, you can try to solve it with a specific capability, but you don't need to use everything you see on the internet.

New capabilities get released every few months or even weeks. Some don't get popular, but others turn out to be really helpful. The important thing is to stay informed, follow sources that provide up-to-date information, and try new approaches in your projects to see if they really help. We have a free Telegram community about AI with 1000+ members, where many skilled designers share daily insights on this topic.

Do things out of order

I know it's difficult to learn and try every new thing, especially if you're just starting and you've never used Claude Code or Cursor before. But it won't get easier by waiting. The best time to learn AI-assisted development was around two years ago and the second best time is while you're reading this.

Also, you must experiment. Most of my experience with the things I'm explaining in this article is one big experiment. It's definitely not something I had to do at my regular job as a product designer, and not what they teach you in UI/UX courses. If I had done everything based on outdated theory, only in a "truly correct way" (e.g., using the double diamond and drawing wireframes in Figma), I wouldn't have achieved the results I have now.

Status tracking and GitHub basics

When you work on something complex with AI, there's a problem you'll run into sooner or later – it forgets things. This happens because of how context works in any AI tool.

As we already know, every conversation has a token limit, and when you hit it, earlier parts of your chat get compressed or lost. This means Claude Code might forget about components you already built, decisions you already made, and approaches that didn't work before.

The solution is simple: keep a status file for AI to read at the start of every development task. Unlike the "Project Status Tracker" we used for the design activities at the beginning, this document is made specifically for tracking code implementation in your IDE, though the logic behind it is pretty much the same.

Reduce hallucinations with this trick

Setting it up is pretty straightforward: just create a new Markdown file in the root of your coding project and call it something like "status.md" – it's really not that important how it's named. The crucial thing is that your AI tool has to understand where to find this file and when to update it. You don't need to maintain it manually because Claude Code handles everything for you – just explain it as part of your workflow in the CLAUDE.md file.
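There's no required format for the file. Here's a rough sketch of how such a status file could be structured – the sections and entries below are just made-up illustrations, not a template you have to follow:

```markdown
# Project status

## Done
- Onboarding flow (sign-in screen, profile setup)
- Document upload screen

## In progress
- Record categorization logic

## Decisions made
- Sentence case for all button labels
- Store records locally first, sync to the cloud later

## Approaches that didn't work
- Parsing scanned PDFs directly – switched to image-based analysis
```

Then a rule in CLAUDE.md along the lines of "After completing each development phase, update status.md" tells Claude Code when to keep it current.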

However, Claude Code can forget to track the status, even if global rules tell it so. In this case, you need to nudge AI from time to time. Usually, I asked Claude to update my file after every big development phase it completed, basically after any complex prompt I sent it.

If you're wondering why you need another document when you could write down all project progress inside the global rules – the answer is "separation of concerns". It is better to use smaller, dedicated files for cases like this instead of keeping everything in one large CLAUDE.md, because otherwise the AI may ignore certain parts due to context limits.

GitHub is almost like Google Drive

Now let's talk about GitHub, another tool you'll need alongside Cursor and Claude Code. You probably don't know what it does in practice, even though most designers have definitely heard of it as something that developers use.

Basically, GitHub allows people to collaborate and work on the same codebase together. It is also similar to Google Drive because essentially it's a place where you store your project's code online. Since any codebase is literally just a folder with files and other folders inside, it works like GDrive in that sense. But it also has lots of additional features that allow developers to work together in more sophisticated ways.

Commits, branches, and pull requests

As we already learned, developers store the latest version of their code on GitHub – it's their source of truth. Each member of the team connects it to a preferred IDE (e.g., Cursor) and every time one developer starts working on a new task, they run a specific set of commands in a terminal to get the most up-to-date code from GitHub.

These are the terms you'll typically hear while working with GitHub.

  • Commits – snapshots of your updated code, each saved with a small comment about what has changed since the last one. You then push them (i.e., send them back) from your IDE to GitHub.
  • Branches – different versions of the same codebase. There's always a main branch where everything gets uploaded to when it's approved. And there are also feature-related branches where you, as a developer, commit the code while working on a specific task.
  • Pull requests (PRs) – when you take the committed code that's already on GitHub (pushed to a dedicated branch) and create a request for someone else from your team to review it.
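These actions map to a handful of terminal commands. Below is a minimal sketch you can run in a throwaway folder to see commits and branches in action – the file and branch names are made up, and it never touches your real projects or GitHub itself (pushing and pull requests need a remote repository):

```shell
set -e
cd "$(mktemp -d)"                         # throwaway folder, nothing real is touched
git init -q -b main                       # a brand-new local repository
git config user.email "demo@example.com"  # identity attached to the demo commits
git config user.name "Demo"

echo "Hello" > app.txt
git add app.txt                           # "staging": choose files for the commit
git commit -q -m "Initial version"        # a commit = snapshot + short comment

git checkout -q -b feature/new-button     # a branch for one specific task
echo "Save changes" >> app.txt
git add app.txt
git commit -q -m "Add save button in sentence case"

git log --oneline                         # lists both commits, newest first
```

In a real team setup, you would then run `git push` to send the branch to GitHub and open a pull request there.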

In order for everyone to understand what's happening and what has changed, other developers need to check each other's code. This is an additional step before applying any new code to the main branch. When someone checks it, they can send the code back for improvements, or approve your PR and do an action called "merge" – which simply uploads that code to the main branch. Then other team members can load it back into their IDEs.

Sometimes there's additional complexity because different people could simultaneously work on connected things. If multiple developers change the same part of the code, Git reports a "merge conflict". In this situation, they need to decide what to do next – keep one of the versions or combine them into a single solution.

There are lots of videos on YouTube about GitHub. Nowadays many designers already use it, so you can easily find tutorials from fellow designers who explain it in a more familiar way. However, you don't need to memorize all these fancy terms or Git actions – just practice them from time to time to really understand how the whole system works.

By the way, Claude Code can help you figure out whatever you'd like to know. For example, you can ask it how to create and connect a new GitHub repository to your IDE, or how to save the updated code and push it into a dedicated branch.

Be careful with terminal commands

Most GitHub actions are done through a terminal, so here's an important rule: when AI suggests any command, verify if it's actually good to run.

Usually it won't harm you, but sometimes it can be incorrect and break a part of the project. So be safe and take a moment to review what Claude Code suggests – don't accept all the commands right away, especially when you are just starting to learn AI-assisted development. Also, if you don't know what a specific command does, ask AI to explain it in simple terms.

The same principle of not trusting blindly applies to understanding your project structure. You need to pay attention to what Claude Code is doing when generating code – what files and folders it creates or removes, where it stores them, and what names it uses. Because when something breaks, this knowledge helps you understand how to solve the issues. If you just vibecode without looking into the why behind it, you'll certainly run into many problems.

Use GitHub even if you work alone

You might think "I'm not collaborating with anyone, why do I need GitHub?". The answer is version control, or project history, in other words.

When you open an IDE for the first few times, it's very common to break lots of things. First you build something great, then you do another iteration and mess it all up to a state where it's not possible to fix. GitHub prevents such situations by letting you go back through the history of changes. If you break something, you can find the last version that worked and revert to it.

The rule of thumb is to commit changes:

  • After every feature Claude Code implements
  • After any critical bug it finally fixes
  • Before working on anything unfamiliar

I once had to recreate an entire project from scratch because I wasn't using GitHub and AI broke everything beyond repair. It was frustrating, but the lesson didn't fully stick until something similar happened with Meddy.

IDEs like Cursor have a dedicated UI for GitHub integration, so you don't always need to use terminal commands. There's a dedicated button to stage your updated code, and another that discards all those changes. To understand what went wrong, you need to know what "staging" means (another Git-related term): when you update the code, those changes exist only on your computer. Before you commit, you stage them – basically, you decide which updated files to include in the commit that's going to be sent to the GitHub repo. The problem is that in Cursor's interface, the staging and discard actions are placed way too close to each other.

That day I was working on several major changes for Meddy – it was pretty late, I was exhausted, and I decided not to commit them. I just wanted to finish the job as fast as possible and go rest. Then I accidentally clicked the discard action after selecting dozens of changed files, when I actually meant to stage them. And I didn't even notice this mistake at first.

When I tried to preview the app on my iPhone through Xcode, I saw multiple errors and an outdated UI. Then I started checking Cursor's version control tab, where the GitHub integration is located, but I still didn't know I had made this mistake – I simply misclicked without realizing it.

Even though I didn't use GitHub properly that time, I was able to recover most of the work because of a feature called Timeline. It is like a local history of changes that happens on your computer, separate from GitHub. It's not as powerful, not as easy to use, and also not granular enough to let you revert everything, but it definitely helps in situations like mine. Fortunately, I got back around 70% of the work.

The most frustrating thing about Timeline is that you need to revert each file one by one, which means you have to know what the files are called, where they are located, and what the last correct version of each one was. That can be difficult if you're not a classic developer and rely heavily on AI for your code.

GitHub, on the other hand, lets you run one command to get back the version of the whole project that was working before – all files at once. Many designers who start working with GitHub are tempted to do just one more change, one more feature before creating a commit and pushing it.
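For illustration, here's what that recovery looks like in a throwaway repository – the file names are made up, and everything happens in a temporary folder so nothing real is affected:

```shell
set -e
cd "$(mktemp -d)"                         # throwaway folder for the demo
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"

echo "working code" > app.txt
git add app.txt
git commit -q -m "Working version"        # the commit that will save you later

echo "broken code" > app.txt              # an AI "improvement" breaks the app

git reset --hard -q HEAD                  # one command: every tracked file
                                          # goes back to the last commit
cat app.txt                               # prints "working code" again
```

In a real project you'd first run `git log --oneline` to find the hash of the last working commit, then `git reset --hard <hash>` – or use `git revert`, which undoes a commit by creating a new one instead of erasing history.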

It usually ends badly.

Your regular job benefits too

From my experience, as a designer you can do development tasks even if your title doesn't explicitly require you to do so. Today such positions are still mostly about Figma, research skills, and critical thinking – nothing about front-end work. But the world is changing.

Here's a common problem: developers often lack the experience to recreate Figma designs really well in code. They do it with some level of inconsistency, and it takes a lot of time for us to do design reviews and iterate in order to make the UI look right.

I know it's both possible and helpful to collaborate with your devs at your actual 9-to-5 job (not just on pet projects) to improve this process. It's just going to be much easier for you to use Claude Code and fix frontend issues yourself. That's harder with backend and logic, but for something simple – styles, layouts, or copy – it's worth a shot.

So even if you have no ambitions to become a solo entrepreneur – I still advise learning AI-assisted development by building your dream products and making mistakes along the way. Then you can reuse these skills to be a better design specialist on your primary job, earn more, and get interesting career opportunities.

Now, let's look at Meddy

The idea behind Meddy is simple – all your health records should be in one place, you should be able to understand them easily, and you shouldn't be confused when something feels wrong. The mobile app should feel like a buddy who's just a few taps away.

Let's take a look at the thinking behind it, as well as a visual showcase.

The six core problems

I started with an issue I've experienced myself. All my lab results, doctor notes and especially vaccine records are scattered in different places. Some are stored on dedicated websites like Synevo (a local lab here in Ukraine), others are just lying somewhere in my apartment, printed or written by hand. This is the core problem.

But after lots of thinking and research I've discovered a few additional struggles:

  • Nobody tells you when to get checked – you either get too many tests (wasting money) or not enough (missing early warning signs).
  • Results are sometimes confusing – you get a checkup, then see numbers marked in red and start to panic. You don't know if it's serious or normal for someone like you.
  • Finding good doctors is frustrating – you usually go to specialists you've never heard of, because online reviews don't exist or there are just a few of them. So you show up not knowing if this person is right for your problem.
  • Most can't read what doctors write – handwritten prescriptions are often so difficult to understand that you need another doctor just to tell you what the first one wrote.

Validating assumptions

Before building Meddy, I needed to check if my assumptions were correct. If I was wrong about these pain points, the app wouldn't really help anyone. I did three types of research:

  1. Reading existing studies – looked at articles about health apps, academic papers about patient experiences, and competitor apps to see what already exists.
  2. Analyzing online discussions – read through Reddit threads and app reviews where people talk about their frustrations.
  3. Building a prototype and showing it – created something you could click through and tested it with dozens of respondents.

The research confirmed that the market for such health apps is large and growing, and that no existing product combines medical record organization with AI assistance. I also learned that people prefer one-time payments over monthly subscriptions, and that Europeans care more about privacy, while Americans think more about costs.

Additionally, the prototype testing revealed that most didn't see enough difference between Meddy and ChatGPT. This last finding was pretty interesting and had me rethink how to position the app so people would understand why it's different from regular AI chatbots.

Meet Emma and Henrik

When you're building something, it helps to think about possible people who are going to use it. I created two imaginary personas – Emma and Henrik.

Emma is frustrated by surprise medical bills. She wants to know costs upfront before committing to anything. Henrik cares a lot about privacy and data protection. He's disappointed by long wait times – sometimes it takes months to see a specialist.

To understand how all these problems impact their everyday life, I created short visual stories. Each one shows a frustrating moment – searching for records, panicking over test results, etc.

Finding the right words

The language you use matters a lot – the same product can feel completely different depending on how you describe it. I ended up talking about Meddy this way:

You want clear answers about your health, but your information is scattered everywhere. When you try to use tools like ChatGPT, they don't remember your medical history. Every time you ask it something, you have to explain everything from scratch. And the answers are generic – they are not tied to your specific conditions.

However, Meddy is not a doctor and it's not trying to replace one. It's more like having a buddy who remembers every important thing about your health. You upload your records once, Meddy organizes them, and when you have questions, it answers based on your specific situation.

Additionally, building a health assistant in Europe and the US could be complicated because of healthcare laws. On the other hand, a buddy that helps you organize and understand – not diagnose or treat – avoids most of these problems.

Fewer features, better MVP

A minimum viable product is the smallest version of your app that still solves the core problem. I made a list of every idea I had while thinking through the concept and preparing all the context documents for implementation. Then I scored each feature based on three questions:

  1. How much impact would it have on a product?
  2. How confident am I that people want it?
  3. How easy is it to build?

The ideas that ended up highest:

  • Creating your health profile during setup – age, conditions, family history, so that AI could give personalized answers from the first conversation.
  • Storing and organizing records – upload photos and PDFs into four essential categories: lab results, prescriptions, vaccines, and imaging reports. If the app can't organize medical documents well, nothing else would work.
  • Getting high-quality answers – ask questions based on your health data or use voice when you're too stressed to type.

The other ideas were left out. For example, finding good doctors (too complicated, different for every country), managing health of family members (complicates the first version), etc.

Meet your medical buddy

When you first open Meddy, you see a carousel of stories about typical health frustrations. Each card shows a different person dealing with scattered records, confusing results, or midnight panic about unexpected symptoms.

Next, it explains why Meddy is different from basic chatbots. A few animated cards show what you can do with it (store and organize, talk to a buddy who knows your health) and why tools like ChatGPT don't work here. Then you see the pricing and privacy information, explaining how Meddy saves you money and time while keeping your data safe.

You sign in with Google or Apple, no need to remember any new passwords. Also, if you use Apple Health, Meddy can connect to it. This lets the app access health data you've already collected on your iPhone, which simplifies the onboarding by entering several fields automatically.

Now, you select chronic conditions, as well as family health history. You also choose how you want explanations to sound – simple & brief or complex & detailed. This setting affects how your buddy talks to you.

Next, you select which types of reminders you'd like to get automatically (like prescriptions or seasonal health tips). At the end, you see a summary that proves the personalization is real.

To add any new document, you use a button at the bottom. A panel with three options slides up: upload and analyze, speak to your buddy, or type questions.

When you add a medical record, Meddy processes it and creates a clear interpretation.

At the top, a hero image shows your document – if you uploaded multiple images, you can swipe between them. Below that, cards answer the two most crucial questions:

  • "What does it mean?" (explaining the results)
  • "What to do next?" (with recommendations)

You can also open the original record file, share it with your doctor, or ask Meddy other questions about it.

If you go back, you see all the documents organized into four simple categories. Each one shows how many documents are inside and a preview of the most recent one. You can tap "view all" to see everything, or search for a specific record.

Finally, there is the Homepage – your daily overview of everything health-related. Since I haven't fully implemented this part in the MVP yet, here is a look at the concept.

At the top, a "suggestions for you" section shows things you could and should do right now. They change based on what you've recently uploaded into Meddy. Below the suggestions, the Home tab helps you manage your day:

  • Today's reminders show what needs your attention – like scheduling a follow-up appointment, linking back to the record it came from.
  • Latest records show your most recently uploaded documents with Meddy's interpretation right underneath, so you can see at a glance what they mean without opening all the details.
  • Recent chats show your past conversations, also linking to the record you were discussing.

Time to wrap up

This pet-project took around half a year. Most of that time went into preparation: problem statement, technical documentation, prototyping, and testing – way less into coding. Tools like Claude Code made it possible to ship something real while being just a regular designer. However, they didn't make it fast.

Things that worked well

  • Separate chats for each activity – when conversations got too long, Claude started forgetting earlier instructions. Splitting work into multiple chats, with documents uploaded to project knowledge, kept things easier to manage.
  • Testing prototypes before writing code – most respondents said they could do the same thing with a separate chat in the free ChatGPT app. Finding positioning problems early saves you from future headaches.
  • Starting with less, adding when needed – originally, I designed six Figma frames, but even by the end of development, I had only eighteen.
  • Design systems belong in code – I didn't create components or tokens in Figma, because AI handled that better during development.

What didn't work

  • Git mistakes – I messed up a big chunk of the code by ignoring basic rules of working in IDEs.
  • Overthinking development preparation – Having 12 specialized agents turned out to be good only in theory. Simpler setups work better, and the first version of anything is never final.
  • Trusting AI analysis of user research – When I asked Claude to analyze prototype test results, it made up patterns that weren't there. Watching recordings myself first and then comparing them with AI analysis worked much better.

Finally, AI-assisted development is moving so fast and providing so much value that not learning it isn't really an option anymore.

Thanks to these wonderful people

This article wouldn't exist without the people who took the time to read early drafts and share their honest feedback. Special thanks to Igor, Bohdana, Yurii, Davyd, Yuriy, Pavlo, and Daria.