Designer's guide to AI-assisted development
Intro
Something changed
Recently, I noticed a shift in conversations happening on Reddit, YouTube, and other forums. Developers who had been skeptical or even defensive about AI started talking differently – they admitted that AI can write good code, do it quickly, and at a pretty low cost.
For a couple of years, even though everyone was already talking about AI, most developers weren't really using these tools to their full potential. Many stuck with basic Copilot or just chatted with ChatGPT. But now that seems to be changing. Maybe AI finally reached a point where it writes reliable code, maybe developers gave the right tools like Claude a real chance – or perhaps both.
For product designers, this shift matters too. I believe code is just a tool, but the bigger question is what to create with it. And that's where your experience as someone who knows how to build products from users' perspective becomes valuable.
AI is fun
It can help you brainstorm and think through problems, but it cannot make good decisions on its own. Sometimes it appears to, but that usually just means it guessed well. It still requires a human to read the outputs carefully, give feedback, and think creatively throughout the process.
If you're anything like me, you've probably dreamed about building your own product someday. Not just designing static screens for someone else's vision, but creating something that's truly yours. Designers have always wanted to change the world around them, but usually got stuck on typical B2B projects at outsourcing companies whose main goal has always been to make more and more money.
We can finally change that – AI-assisted development might be the way to make our dreams come true. However, I won't pretend it's quick or easy. You can't build a good, complex app in days or weeks, even though lots of ads nowadays tell you so. The app I'll be using as an example throughout this article took me about half a year of work in my free time.
What I'm definitely sure about is that the barrier to building products from 0 to 1 is lower than it has ever been. It's also a really fun process.
Kind reminder for new readers
This article is the third and last in a series.
Before diving into development, it helps to have a clear vision for your product: problem statement, target audience, MVP requirements, information architecture – all the thinking that happens before any building. If you haven't done this work yet, or want to see how I approached it, the first article walks through that entire process based on my previous EdTech project PDPro.
It's also helpful to understand some tech basics: what code editors do, what the backend and frontend are, how they're connected, and so on. My second article in the series covers everything a designer should know as well.
What you'll learn
Last year I created 40+ prototypes, 3 pet projects, and made 713 contributions on GitHub. Essentially, I moved away from static mockups in Figma to interactive prototypes and MVPs. I even started doing basic frontend tasks at my real job. Also, I gave around 20 mentorship sessions, held 6 online lectures, and started a Telegram community about AI and design that now has over 1000 members.
One of the products I've been working on during this period is Meddy, which I'm going to talk about throughout the article. It's a health management app – basically one place for all your medical records (lab results, prescriptions, drug reminders, etc.). We'll cover how to:
- Turn your idea into a rich context for AI
- Finally, start using Cursor and Claude Code
- Set up a development workflow with AI agents
- Manage context so AI doesn't forget details
- Recover from mistakes you'll definitely make
Also, we'll talk about MCPs, Xcode, GitHub, Vercel, when (and when not) to use Figma, and much more. By the end, you'll have examples, prompts, and a clear understanding of how it all works in practice. You don't have to be a developer, but you'll need patience, attention to detail, and a willingness to iterate when things don't work the first time.
The foundation before development
It's okay to spend more time on preparation for development than on development itself – otherwise the result is going to be basic and buggy. For example, I spent more than half of my time on Meddy doing ideation, prototyping, research, and design. By the way, of all the tools, I used Figma the least.
This section covers my workflow on such pre-development activities.
Claude Projects
Claude is an AI assistant made by Anthropic. You can use it in your browser or download a dedicated desktop app. It works like ChatGPT – type a message, get a response, continue your conversation. But there's also "Projects" – a feature that lets you create a space with files and rules that the AI then has access to in every future chat.
You don't need any files ready before creating such a project. With Meddy, I started with just a few-sentence idea: being able to check your medical data anytime you need it. I used the first chat in a completely blank project to help me set it up. There, Claude and I worked together to write:
- Project instructions – the rules that would guide every future chat
- Status tracker – to keep progress through all the planned activities
- Assumptions document – to track what I believe to be true and update it as I learn more
- Problem statement document – the first design deliverable, which defined what problems I am trying to solve with Meddy
I refined each document Claude created dozens of times until it was good to go. After that, these documents became the foundation for everything else. So, though it might sound weird, you can and should use Claude to set up Claude.
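To give you a feel for how simple these foundational documents can be, here's an illustrative sketch of a status tracker (the steps and statuses are invented for the example, not my actual Meddy file):

```markdown
# Status tracker (illustrative sketch)

| Step | Activity          | Status      |
|------|-------------------|-------------|
| 1    | Problem statement | Done        |
| 2    | Target audience   | In progress |
| 3    | Research findings | Planned     |

Current focus: target audience personas.
Last updated: after chat "Step 2 – personas".
```

A table like this is enough for the AI to orient itself at the start of every new chat.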
One activity = one chat
Even with good project context, long conversations become a problem. Claude allows about 200,000 tokens per chat – roughly 140,000 words. That limit includes your messages, AI responses, attached files, and documents that AI creates. When you approach this limit, Claude starts losing context from earlier parts of the conversation. It seems likely that Anthropic will raise it to around 1 million tokens in the near future, but even then, long conversations will run into the same constraint at some point.
Claude also has a "compact chat" feature – when you hit 200k tokens, AI automatically compresses the earlier conversation and continues. This compaction is pretty smart: it summarizes older messages more heavily and keeps recent ones mostly as they are. But you still lose some details, so I prefer to avoid hitting the limit altogether.
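If you want a rough feel for how fast a conversation fills up, here's a back-of-the-envelope sketch in Python. The 1.4 tokens-per-word ratio is just a heuristic derived from the "200,000 tokens ≈ 140,000 words" estimate above – real token counts depend on the tokenizer, language, and formatting:

```python
# Rough estimate of how much of Claude's ~200k-token context window
# a conversation has used. The tokens-per-word ratio is a heuristic
# (200,000 tokens ~= 140,000 words), not an exact tokenizer.

CONTEXT_LIMIT = 200_000
TOKENS_PER_WORD = 1.4  # heuristic from the ~140k-word estimate


def estimate_tokens(text: str) -> int:
    """Approximate token count from a simple word count."""
    return round(len(text.split()) * TOKENS_PER_WORD)


def context_usage(messages: list[str]) -> float:
    """Fraction of the context window a list of messages has used."""
    total = sum(estimate_tokens(m) for m in messages)
    return total / CONTEXT_LIMIT


chat = ["Here is my problem statement...", "Claude's long reply..."]
print(f"~{context_usage(chat):.2%} of the window used")
```

The point isn't precision – it's that attached documents and long AI responses count against the same budget as your own messages, so the window empties faster than you'd expect.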
Ideally, one activity should equal one chat. When you finish (e.g., defining a problem statement), save the deliverable to the project's knowledge by clicking a dedicated "Copy to project" button. This must be done manually, because Claude can't save files on its own – even if you ask it to and it claims it did, don't believe it. Then start a fresh chat for the next activity. AI will access all your previous documents through its knowledge, while having a clean context window.
Don't just trust AI
I set up a clear hierarchy for making decisions at the very beginning:
- My own experience – highest weight, least likely to be wrong
- Project documentation – what I've already decided and written down
- AI suggestions – lowest weight, most likely to contain mistakes
Claude confidently fills in gaps with made-up information if you give it room to do so. Here's an example: during one chat about the business model, I had a hypothesis that quarterly payments might work better than monthly subscriptions. Claude analyzed some research we had gathered and confidently stated that "research proves people accept quarterly payments". But our findings only proved that people don't like monthly subscriptions in similar apps – it said nothing about whether they accept quarterly ones.
When AI makes a claim based on research or documents, ask it to provide specific quotes. Then open those sources and use Cmd+F to search for the exact text. If you can't find anything, Claude probably invented it.
Rules you can adapt
The multi-chat workflow works best when every conversation follows the same rules. I put these in the project instructions so they apply to every chat automatically:
- Project idea – a simple description of what your product is
- Decision hierarchy – whose input matters most when there's disagreement
- Multi-chat workflow – each activity gets its own chat, AI always checks the status tracker first
- Document standards – all documents in markdown, written simply, with executive summaries at the beginning
- Process for each step – AI shares brief thoughts first, asks strategic questions, works through problems together, creates documentation only after it understands the task well enough
- Stay in scope – Claude never jumps ahead, it's always focused on one step only
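As an illustration of how compact these rules can be, here's a condensed sketch of project instructions covering the points above (the wording is invented for the example, not my exact file):

```markdown
# Project instructions (condensed sketch)

## Project idea
Meddy – a health management app: one place for lab results,
prescriptions, and drug reminders.

## Decision hierarchy
1. My own experience  2. Project documentation  3. AI suggestions

## Multi-chat workflow
- One activity per chat. Always read the status tracker first.

## Document standards
- Markdown only, simple language, executive summary at the top.

## Process for each step
- Share brief thoughts, ask strategic questions, work through the
  problem together, and create documentation only once the task is
  well understood.

## Scope
- Never jump ahead; stay focused on the current step only.
```

Even a half-page file like this noticeably changes how every chat in the project behaves.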
Here's a pretty detailed prompt that will generate such rules and other foundational documents for your Claude Project. Just copy and send it to the first chat:
What do you really need to start coding?
The pre-development activities I went through are based on my preferred process as a product designer. Yours might look different depending on your experience, but the list below should give you some ideas on what to include:
- Problem statement – defined the core problems Meddy solves
- Target audience – user personas of who I'm building for
- Project assumptions – tracked what I believed to be true
- Research findings – validation from academic papers, competitor analysis, and Reddit
- StoryBrand – clear product messaging so people understand why they should care about Meddy
- Business plan – how it could make money over time
- MVP requirements – prioritized list of features to build first
- Information architecture – structure of screens, navigation, and user flows in the app
For Meddy, it all served two purposes: first, these activities helped me understand what I was building from every angle. Second, they became a solid foundation for future development. If you want to learn about them in more detail, check out my first article from the series.
You should use Figma way less
Many designers already use AI for prototyping, and starting in Figma doesn't really make sense unless you're working with an uncommon design idea or a non-typical layout. For regular web and mobile apps with standard patterns, you can do it mostly with tools like Claude. And while some say AI makes bad-looking UIs, that's usually because of low-quality prompts. With proper context and the right design process beforehand, you'll get great results.
For those who are still not convinced, there's another practical reason – when you spend days in Figma to design your first iteration of a product, you risk creating a UI that looks great but is technically painful to implement. On the other hand, if AI generates components and provides their code, it means the development will be relatively easy.
AI prototypes for quick tests
Claude Projects can generate simple web prototypes – perfect for quickly testing your ideas. As I described in the beginning, I worked through many project steps before building anything. By step 10, I already had MVP Requirements (what features to build) and an Information Architecture document (navigation, structure and user flow details). When I asked Claude to turn it all into a fully interactive prototype, it worked perfectly.
However, I didn't tell AI to build it immediately – first I asked Claude to share its thoughts, ask me clarifying questions, create a detailed development plan, and wait for my explicit approval. This process took about 90 iterations, because I kept refining what I wanted, and each round of feedback got the prototype closer to the idea I had in my head.
Claude also provides a direct share link for any prototype you create there. So when mine was ready for testing, I copied that link and pasted it into Useberry – a tool for unmoderated user tests. I even created a script for it with Claude's help.
Even if you've never used Useberry before, you can simply ask Claude to help you figure out where to click, how to set things up, what settings to pick. If it tells you to open a page or click something you can't find – that's fine, AI models have knowledge up to a certain date and apps like Useberry change their interfaces from time to time. Just screenshot whatever you're looking at and share it with Claude. This way you get personalized guidance that's specific to your project.
Always analyze results yourself
Don't just share the testing results with Claude and ask for analysis – you'll miss important details and AI will definitely make up things that aren't there.
For example, I started by watching every user test recording first, taking my own notes and forming my own conclusions. I didn't share them with AI right away. Instead, I asked Claude to analyze the raw results without knowing my thoughts – this way it wasn't biased by my interpretation. Only after that did I upload my analysis and ask Claude to identify what might have been missed. Finally, I combined the best of both versions.
In my case, such tests caught a positioning problem – most respondents said they could do the same thing by having a separate chat for medical stuff in ChatGPT. They didn't see enough value in Meddy. However, finding this during research meant I could adjust direction before writing any code.
First I had 6 designs, then 18
Once your product is validated through testing, you can move to a high-fidelity UI. But you don't need to design every screen beforehand. With AI-assisted development, Figma becomes a tool for visual direction – not for documenting every possible state.
Before opening Figma, I spent time on Mobbin looking at healthcare apps to get a sense of common patterns. Nothing complex, just collected references so I wasn't starting from a blank file.
I created 6 screens: one onboarding step, a bottom sheet with actions, voice mode, chat mode, homepage, and a medical record detail page. These were building blocks – screens that set up the visual style Claude could analyze and reuse for other parts of the app.
During development, I gave Claude Code (*Claude Projects ≠ Claude Code) these frames along with my requirements and architecture documents. AI then developed other screens to match my established UI. For example, there was only one onboarding page in Figma, but the app had many more steps defined in the architecture – Claude Code created the rest of them.
After reviewing the results, some screens looked good, while others had UI issues. The solution was to design corrections only for the frames where AI made mistakes, share them via Figma MCP, and let Claude adjust the code based on these new references. By the end, I had 18 screens in my Figma file, but I didn't design all of them upfront.
Components and tokens aren't important
There's no need to name layers properly because Figma now has an AI feature that does it automatically. Additionally, I didn't create any components, color tokens, or a separate design system. I literally had a single-page Figma file for everything.
I'm not saying these things don't matter. You absolutely need a design system – it just doesn't always have to live in Figma, because AI now handles it really well in code. What I learned is that you can give Claude Code a design frame via the Figma MCP, develop the first iteration, and then ask it to refactor the code – clean it up, split big files into smaller ones, create reusable components, color tokens, etc.
What I did focus on was auto-layouts. Claude needs them to understand how to make your code responsive. If you skip auto-layouts and use absolute positioning, AI won't know how elements should behave when screen sizes change.
I'll explain how the Figma MCP works in a later section – the point here is that these tools change what you need to prepare for development, and it's much less than what we've been taught as designers.
Setting up your dev environment
Before you generate your first line of code, you need three things: the right tools, rules that tell AI how to behave, and ideally – prompts to initialize your new coding project. However, there are other pro tips I'll share as well. This section covers the essentials, plus what happens when your tech stack doesn't work out as you planned.
If you've never coded before, you might not know what an IDE is. It stands for Integrated Development Environment, basically a complex text editor designed for writing and previewing code. Think of it like how Figma is specialized for design work – an IDE is the same but for developers.
Meddy, the product I've been working on, is an iOS application, and if you plan to create a native mobile app yourself – you'll need tools such as Cursor, Claude Code and Xcode. Let's explore each one.
Cursor
Cursor is a desktop application, an IDE built specifically for AI-assisted coding. This is very different from web-based prototyping tools like Lovable, because Cursor runs on your computer and has way fewer limitations. You can develop anything: mobile apps, games, Chrome extensions, even Figma plugins, not just web apps or landing pages. Also, Cursor can connect to external tools through Model Context Protocol (MCP). For example, it means Cursor can read your Figma designs (not screenshots), fetch technical documentation from the web, and much more. Basically, you write instructions in natural language, Cursor generates the code, and you iterate from there.
If you're on a low budget, I suggest using the free version. It gives you the main IDE features you need – visualizing your code, making small manual changes, GitHub integration, version history, etc. I don't think it's wise to pay for Cursor's paid plan right now.
About a year ago, they charged by requests – you got 500 requests for $20/month, where a single request was usually one prompt (powerful AI models might have cost 2–4 requests). It was pretty clear how much you could send before hitting the monthly limit. But now, when you pay $20, you literally get $20 worth of tokens. If your request is large, it costs much more than a simple prompt (e.g., 50 cents or more). In my experience, those $20 on the cheapest paid plan disappear very quickly.
There are already many alternatives for different needs and preferences – e.g., Antigravity, Windsurf, or Kiro. However, I like the team behind Cursor. They're young, smart, ship updates frequently, and they look like people who genuinely love their product. It's also a matter of taste – if you prefer a more visual AI experience with a polished interface, pay for Cursor. Additionally, they sometimes offer temporarily free AI models that are less powerful but work fine with simple tasks.
For complex development at a reasonable price, you need the next tool.
Claude Code
Claude Code is Anthropic's AI coding tool that runs in your terminal. If you don't know what a terminal is, or you've heard of it but feel too scared to open one, check out my previous article where I explain it in simple terms. It also costs $20/month for the cheapest plan, but gives you far more generous usage limits. As I'm writing this article, they have a 5-hour rate limit, which means you get a certain number of tokens to use every 5 hours, after which it refreshes. There's also an additional weekly limit they introduced a few months ago. Sometimes it's a bit frustrating to hit such a long wait, but it's still a better option than Cursor's paid plan.
Here's something important to understand – AI models from all the biggest players (Google, Anthropic, OpenAI) are roughly the same in terms of intelligence. Some lean more toward conversational tasks, like ChatGPT, some toward visuals, like Gemini, and some toward coding or complex analysis, like Claude. But overall, they're pretty much the same. What matters more is the tool you use to interact with such AI models. For Claude, the best solution is using the one designed by the same people who created it. In this case, it means Claude Code, which gives you access to three different models: Haiku (lower performance, good for simple tasks), Sonnet (good performance, handles complex tasks), and Opus (the best performance at this moment). To sum up – people who built the model know best how to get the most out of it.
You might also hear about things called "AI tools" in the AI-assisted development context, but to avoid confusion with actual applications (which we can also call tools), let's use the word "capabilities". These are what make some AI products more powerful than others, even when they run the same underlying models. The most powerful capabilities include MCPs, global rules, slash commands, and agents. Nowadays there are also skills and plugins – basically, a new capability gets released every few months, so it's hard to cover everything, but none of them are that complex to learn. I'll explain the most useful ones in a dedicated section, particularly the sub-agent pipeline I tested extensively while working on Meddy. For now, just know that when you run into these terms during development, they're ways to extend what AI can do beyond basic code generation.
What also makes Claude Code particularly powerful is that it runs in your terminal, which has access to everything on your computer. A terminal is a simple app, preinstalled on your device, that performs actions through text commands instead of clicks and gestures. Those commands run much faster, and in bulk, compared to doing the same things manually. With Claude integrated into the terminal, all these actions become something AI can do automatically.
Another way to use Claude Code, which I personally prefer, is to run it inside Cursor's terminal.
Xcode
Xcode is another IDE, this time from Apple. It's their official tool for building applications for iOS, macOS, and their other operating systems. Even if you're using Cursor for all your coding, you still need Xcode to preview the result and test it on real devices (e.g., your iPhone). There's no way around this for iOS development, but Xcode is free and not that complicated to learn.
If you're building just web-based products, you won't need Xcode at all, only Cursor and Claude Code.
Sometimes you have to start over
I began building Meddy with Expo – a framework that lets you write code once and release it to both iOS and Android devices. If you're unfamiliar with terms like "framework", I also explained them in my previous article about programming basics. The reason I chose it was that I already had some experience with Expo from previous pet projects, and I wanted to avoid learning new tools.
Unfortunately, I got stuck trying to customize the iOS safe area. Specifically, I wanted to position a fixed bottom button in my onboarding flow closer to the phone's edge – exactly as I'd designed it in Figma. This sounds like a small detail, but it was important to me. Also, I knew it was possible because I'd seen similar layouts in other apps. But Expo, at least at that time, didn't have enough customization options for the safe area handling.
I searched documentation, asked Claude for help, and tried different approaches. Nothing worked, so eventually I decided to pivot from Expo to native iOS development. Even though I lost a few days of work, looking back, it taught me some things. If you're learning and building pet projects, making mistakes like this is a good thing long term.
I learned Xcode's basics surprisingly fast by using the same approach described in the earlier section with Useberry. I would simply share screenshots of its interface with Claude and ask where and why I should click.
CLAUDE.md and global rules
CLAUDE.md is a markdown file that lives in the root of your development folder. When you use Claude Code, it automatically reads this file and follows the rules you've defined there in every conversation you have with AI. Think of it as instructions that shape how Claude behaves throughout your entire coding project.
Keep in mind that this file doesn't stay static. Mine went through dozens of changes during Meddy's development. It started as a basic document with some tech stack information and grew into a comprehensive set of rules covering everything from a source-of-truth hierarchy to specific warnings about outdated approaches AI had tried and gotten stuck on.
This file is super long, so you probably won't read it word by word. But it's useful as reference – both for understanding what mature global rules look like, and for feeding them into your own AI tools if you want to recreate something similar.
In brief, my rules file covered:
- Tech stack specifications – exact technologies to use, with no substitutions allowed
- Non-negotiable requirements – cite documents when appropriate, never guess, ask for clarification when uncertain
- Development pipeline – a sequence of specialized AI agents that run for each feature
- Conflict resolution – what to do when Figma designs don't match the requirements
- Project status tracking – what's complete, what's in progress, what's planned for later, and where to store all this information
If you decide to use Cursor or any other IDE instead of Claude Code, the idea of global rules is going to be the same – just done differently. In most cases, such instructions are configured through settings rather than a dedicated Markdown file.
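To make the idea concrete, here's a condensed, illustrative sketch of what such a rules file might contain. The section names mirror my list above; the specific technologies and file paths are invented for the example:

```markdown
# CLAUDE.md (condensed sketch)

## Tech stack
- Native iOS: Swift + SwiftUI. No substitutions.

## Non-negotiable requirements
- Cite context documents when making claims; never guess.
- Ask for clarification when uncertain.

## Development pipeline
1. Context analyzer → 2. Backend implementer →
3. Frontend implementer → 4. Design system auditor

## Conflict resolution
- If Figma and the requirements disagree, stop and ask me.

## Status tracking
- Read `docs/development-status.md` before starting, and update it
  after every completed feature.
```

A real file grows much longer than this, but even a skeleton like the one above already prevents the most common failure modes.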
Ask AI for some kick-start prompts
When you open a new project in Cursor, you're basically looking at a blank folder on your computer. If you know nothing about programming, it's going to be hard to understand how to even begin. That's where kick-start prompts can help. At this point you'll have a fully set up Claude Project that already knows a lot about your product from all the previous design activities – just ask it to help with such prompts. They're simply a few starting messages you'll then send to Claude Code in Cursor to set everything up.
The rule of thumb is: don't overcomplicate them. Avoid deep technical details or code snippets. These prompts should be high-level references that point to your context documents – the files you'll download from your Claude Project and paste into that blank folder in Cursor. The same applies to global rules (i.e., CLAUDE.md file).
Even if you know nothing about programming and have no idea what kick-start prompts should cover, you can just ask AI to suggest options with pros and cons for each. For example, I didn't write these prompts from scratch myself – I chatted with Claude inside my project and let it guide me.
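For illustration, a kick-start prompt in this spirit can be as simple as the sketch below (the file names are invented – yours will be whatever you exported from your Claude Project):

```markdown
I'm starting a new iOS project. In this folder you'll find my context
documents: `mvp-requirements.md`, `information-architecture.md`, and
`CLAUDE.md` with the global rules.

Please read them all, then:
1. Summarize your understanding of the product in a few sentences.
2. Ask me any clarifying questions before touching anything.
3. Once I confirm, set up the initial project structure – nothing more.
```

Notice there's nothing technical in it: all the heavy detail lives in the context documents it points to.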
Reuse this template
I created this prompt some time ago, based on experience across multiple projects including Meddy, to help myself prepare for any future AI-assisted development. It generates many of the materials you need in one step: prompts to set up Cursor, global rules, actual development prompts for each big feature, and even agent pipeline files. Use it during your development preparation step (equivalent to Step 15 in my Meddy Claude Project) – after you've completed your context documents but before you start any actual coding.
AI agents 101
An agent is just a fancy name for an AI chat. When tools like ChatGPT first appeared, people used them literally to chat – you asked questions and got answers. Then AI started getting more and more capabilities beyond just responding with text. Now, they can do so much that we simply can't call these tools chatbots anymore.
Sub-agents are the same thing as regular AI agents; the only difference is that they run outside your main conversation. Basically, your parent agent can call others and explain to them what to do, and the sub-agents then do their work separately. The reason we need this extra layer of agents is simple – the same token limitations we discussed at the beginning.
Different tools and IDEs implement this concept in different ways, but I think Claude Code handles it particularly well. You don't need to manually open multiple chats for each agent. Instead, you manage everything within a single conversation. When the AI decides it makes sense to use a sub-agent, it runs one in the background, creating a separate chat that you don't have to see or manage yourself.
They exist because of token limits
Claude Code has the exact same 200,000 token limit as Claude Projects. Complex development workflows that require reading many files, creating plans, writing code – they burn through that limit pretty fast. When it fills up, chat compaction happens and earlier instructions get partially lost.
With sub-agents, each one uses its own set of tokens. For example, one analyzes your project, another creates the plan, a third agent does the development. They all work in separate "chats", with separate limits, but results come back to the main conversation. This way you lose fewer tokens there because all the heavy work happens elsewhere.
Usually, four are enough
During development preparation in my Claude Project, I originally created twelve specialized agents:
- One for analyzing context
- Another for generating clarifying questions
- A third for creating development plans
- A fourth for managing tasks based on these plans
- Then separate ones for backend, frontend, design auditing, and more
In theory it looked perfect, but in practice it fell apart. Maintaining twelve agents (essentially, they're just Markdown files) was hard, and the main one kept forgetting to run some of the sub-agents, or ran them in the wrong order. During actual development, I figured out what worked and what didn't, then asked Claude Code to update the agent files and delete a few of them. After some back and forth, I landed on just four.
If you're going to set up your own system, expect the same iterative process. Your first version won't be final, and that's fine. However, if you want to reduce the number of possible re-writes, feel free to use my agents as your starting point:
1. Context analyzer
This is the mandatory first step before any implementation. It does the following:
- Reads the development status file to understand what's already done (we'll discuss it more in future sections)
- Reviews existing code and context files
- Checks available styles and components
- Finds up-to-date documentation for whatever technologies I'm using
- Analyzes the Figma designs I'm about to implement
The output is a summary of my current project state plus questions for me to answer before coding begins – this trick prevents lots of assumptions.
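If you've never seen an agent file, here's a condensed, invented sketch of what a context-analyzer definition might look like. The frontmatter fields follow Claude Code's agent file format; the body text and file path are illustrative, not my actual file:

```markdown
---
name: context-analyzer
description: Runs before any implementation to gather project context.
---

You are the mandatory first step of the pipeline. Before any coding:

1. Read `docs/development-status.md` to learn what's already done.
2. Review the existing code and context documents.
3. Check which styles and components already exist.
4. Fetch up-to-date docs for the technologies in use.
5. Analyze the Figma frames referenced in the task.

Output: a summary of the current project state, plus a list of
questions the user must answer before implementation begins.
```

As you can see, an "agent" is really just a role description in plain language – nothing a designer can't write and maintain.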
2. Backend implementer
If the frontend is what you see – buttons, pages, animations – then the backend is the invisible part. In my case, it managed how user information in Meddy got stored, how medical records got organized into logical categories, etc.
I made it as a separate agent because backend work is different enough from everything else and mixing it would require Claude to spend too many tokens in one conversation. Keeping agents separate lets each one do its job better. That's a common rule for understanding whether a certain activity from your development workflow should have its own dedicated agent or if it's okay to combine it with the ones that already exist.
3. Frontend implementer
This one is simple. It creates the UI and its core rule is pixel-perfect implementation with zero creative interpretation – it must match the designs from Figma using the official Figma MCP (which loads not just screenshots but code from Dev Mode).
The agent file I show below listed every component that already existed in Meddy, which prevented Claude from recreating things that were already built – an issue that happened constantly before I added these explicit rules. As you might guess, I asked Claude to update this Markdown file every time a new component was created.
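To show what such a component registry can look like inside an agent file, here's a tiny invented excerpt (the component names are hypothetical, not Meddy's real ones):

```markdown
## Existing components – reuse, never recreate
- `PrimaryButton` – filled CTA, uses the accent color token
- `RecordCard` – medical record preview shown on the homepage
- `SectionHeader` – screen title with an optional trailing action
```

A list like this gives the frontend agent a quick lookup table before it writes any new UI code.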
4. Design system auditor
The last agent runs at the end of any big implementation phase to check for violations that the AI has almost certainly made. During Meddy's development, the most common issues were hardcoded values and redundant components. For example, Claude loved to write specific color values instead of reusing the established tokens. Another common issue was components being recreated from scratch every time I shared new designs, instead of reusing ones that already existed.
How they work together in practice
The whole workflow was described in my CLAUDE.md file, so when I sent any development prompt, Claude Code read those rules and started the agentic pipeline automatically:
- It analyzed the project
- Asked me clarifying questions
- Waited for my detailed answers
- Created a development plan
- Paused for my manual approval
- Then did the backend work first
- Proceeded with the front-end code
- Finished with a design system audit
The approval step between planning and implementation matters a lot. After the Context analyzer runs and I answer its questions, Claude Code creates a detailed plan and saves it as a file. Without this, I discovered that Claude would build features I hadn't asked for, or interpret my requirements creatively instead of strictly following them.
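As a rough sketch (the wording and agent names are illustrative, not my literal file), the pipeline can be described in CLAUDE.md like this:

```shell
# Append a pipeline description to the project's global rules.
# The exact wording is a sketch — adapt it to your own agents.
cat >> CLAUDE.md <<'EOF'

## Development workflow
For every development prompt, follow this pipeline in order:
1. Run the context-analyzer agent and ask me clarifying questions.
2. Wait for my answers, then save a detailed plan to a file.
3. STOP and wait for my explicit approval of the plan.
4. Implement backend work first (backend-implementer agent).
5. Implement the UI (frontend-implementer agent).
6. Finish with the design-system-auditor agent.
7. Update status.md with what changed.
EOF
```

The explicit "STOP and wait" step is what creates the manual approval gate described above.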
A good practice is also to ask AI to update any documents or settings when you feel like the project has evolved a lot since the last prompt you sent. You can simply ask Claude to do it all on its own at the end of the agentic pipeline. For example, I had a habit of regularly updating my development status document, global rules and the agent files too.
Even though with this approach everything runs automatically, sometimes you can also call the agents manually. In the case of Meddy, not all the development was about big chunks of work (e.g., implementing the whole homepage). From time to time I did minor UI improvements and asked AI to use the Design system auditor agent on that code.
Why AI capabilities matter more than models
Earlier in the article, I mentioned that AI models from the biggest players like OpenAI or Anthropic are roughly the same in terms of intelligence. A better model gets released every few months, everyone calls it the best in the world, but the differences between them aren't that big. What matters is the tool you use to interact with such models, and more specifically – the capabilities that tool provides. The most common are:
- MCPs – integrations that give AI access to other applications
- Skills – sets of prompts with best practices that make AI better at specific tasks
- Slash commands – quick prompts you can save and reuse
- Plugins – all the above capabilities combined together to be shared with others
Now let's look at each one in more detail.
MCPs are not that complex
MCP stands for Model Context Protocol, but you don't need to remember that. Just think of MCPs as integrations between an AI and another application on the internet or your computer. For example, if you want Claude to access the designs you made in Figma, you need an MCP. There are hundreds if not thousands of them out there, but usually I use just two:
- Context7 – helps Claude Code, Cursor and any other AI tool get up-to-date technical documentation.
Large language models are trained on information up to a cutoff date, and anything that happens after that date, the AI doesn't know about – unless it searches the web, which sometimes surfaces outdated sources too, so I wouldn't rely on that alone. Technical docs for the frameworks you use get updated regularly, so even the most powerful model that came out yesterday can still carry knowledge based on outdated documentation.
To reduce situations where Claude hallucinates, you need to provide the current information. The easiest way to do this is to use Context7. Just make sure to specify this either in your prompt or, even better, in your global rules like I did, because otherwise AI won't usually call this MCP on its own.
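One way to wire Context7 up is a project-scoped `.mcp.json` file in the project root, which Claude Code reads automatically. The package name below is the one Context7 publishes, but treat this as a sketch and verify against their current setup instructions:

```shell
# Project-scoped MCP configuration — a sketch; check Context7's
# own docs for the current package name before relying on it.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
EOF
```

Because the file lives in the repo, anyone who clones the project gets the same integration without extra setup.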
- Figma MCP – gives your AI tool the design information from your frames in Figma. Not screenshots, but data from Dev Mode, so basically it's the same code that developers see when they review your UI before implementing it.
Most don't notice that, but this MCP works pretty smart – it understands the technologies you use in the coding project. In my case it was SwiftUI because I was working on an iOS app, so AI didn't just copy the code from Figma (which might have been related to web development and wouldn't have worked for mobile), but adapted it to the correct framework.
However, if you just share a complex design with Claude and tell it to build everything, the result will typically look bad, especially if it has many components. But if you use the Figma MCP gradually, a few smaller elements at a time, you'll get much better results. Also, from my experience, the best way to recreate your UI is to start with bad designs in code first.
What I mean by that is you should generate something that works well, yet looks ugly. Then design it nicely in Figma, and afterwards give your AI those designs to apply the correct styles without touching the functionality. If you do it vice versa – starting with beautiful designs and then trying to make them functional, it's going to be a pretty complex task for Claude to handle, and usually you'll go through more iterations than the other way around.
Context7 and Figma MCP are the key integrations for the development workflow described in this article. For a deeper dive, you can explore Playwright MCP for testing and a TDD approach – a more advanced topic to tackle once you're comfortable with the basics.
MCPs are not only about development. For example, there's Remotion MCP, which lets you create videos by simply talking to AI: give Claude Code a Figma design and generate a polished animated video – no video editing skills required.
Finally, one of the most powerful MCPs for any designer is the unofficial one called Figma Console. Unlike the official alternative, it doesn't just access design code from Dev Mode – it can actually perform actions in Figma for you. For instance, it can turn raw frames into components with tokens and styles based on your prompt. It's a bit tricky to set up if you're new to Claude, but once learned – incredibly useful. I highly recommend trying it after you've explored the other MCPs.
How skills make AI better at specific things
Skills are an AI capability that came out a few months ago, and at first a lot of people were confused by them. I also didn't really use them because it was hard to understand the difference between skills and AI agents. But the explanation turned out to be pretty straightforward – they are literally skills.
Imagine a person with specific experience, for example, a designer who knows how to create beautiful visuals. This is something that takes a lot of practice to learn. Meanwhile, AI typically does a bad job with UI. Large language models are trained on all the information from the internet – all the books, articles, images, everything – which means they're trained on both excellent and terrible examples.
Without any AI capabilities, Claude Code will give you middle-of-the-road answers: not really bad and not great either, just adequate in most cases. So a skill is one of the ways to save high-quality prompts that capture your own expertise, so that AI can reuse them when applicable.
By the way, you don't need to be super experienced in a specific field to have a skill for it, because you can just reuse what other people have already created. You can explore popular skills at skills.sh.
Slash commands for quick prompts
Let's say you worked with Claude Code for quite a while and noticed that it tends to use title case for all the buttons in your designs, which is a common struggle – I don't actually know why AI does that. Naturally you'll try to fix it by saying "please use sentence case instead of title case" every time the issue appears again and again.
Instead of typing the same prompt multiple times, you could save it as a slash command called "/fix-title-case". That way, anytime AI messes up your writing style, it can be solved in a few clicks.
You could also put something like this inside your global rules. But if you think that's a prompt you might use regularly, not only as part of a big development workflow, but also when working on minor improvements yourself – then it makes sense to save it as a slash command.
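In Claude Code, custom slash commands are just Markdown files in `.claude/commands/` – the file name becomes the command name. A sketch of the example above (the prompt wording is illustrative):

```shell
# Save a reusable prompt as a custom slash command.
mkdir -p .claude/commands
cat > .claude/commands/fix-title-case.md <<'EOF'
Review the UI copy you just generated. Convert any Title Case
button labels and headings to sentence case (capitalize only the
first word and proper nouns). Do not change anything else.
EOF
# Inside Claude Code, this now runs as /fix-title-case.
```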
When to use plugins
Plugins are all these capabilities we talked about above, including agents, packaged together. If you set up your Claude Code settings really well and would like to share them with other people on the internet – you could do it using the plugins feature. Anthropic explains how to do so in their official docs.
Don't install everything you see on the web
After reading all this, you are probably thinking about installing every possible MCP and AI skill you'll find. But this situation is similar to what I explained in the Agents 101 part – when I had 12 of them and it turned out to be just a waste of time. Same goes here. If you have a specific issue, you can try to solve it with a specific capability, but you don't need to use everything you see on the internet.
New capabilities get released every few months or even weeks. Some don't get popular, but others turn out to be really helpful. The important thing is to stay informed, follow sources that provide up-to-date information, and try new approaches in your projects to see if they really help. We have a free Telegram community about AI with 1000+ members, where many skilled designers share daily insights on this topic.
Do things out of order
I know it's difficult to learn and try every new thing, especially if you're just starting and you've never used Claude Code or Cursor before. But it won't get easier by waiting. The best time to learn AI-assisted development was around two years ago and the second best time is while you're reading this.
Also, you must experiment. Most of my experience with the things I'm explaining in this article is one big experiment. That's definitely not something I had to do at my regular job as a product designer, and not what they teach you in UI/UX courses. If I had done everything by outdated theory, only in the "truly correct way" (e.g., using the double diamond and drawing wireframes in Figma), I wouldn't have achieved the results I have now.
Status tracking and GitHub basics
When you work on something complex with AI, there's a problem you'll run into sooner or later – it forgets things. This happens because of how context works in any AI tool.
As we already know, every conversation has a token limit, and when you hit it, earlier parts of your chat get compressed or lost. This means Claude Code might forget about components you already built, decisions you already made, and approaches that didn't work before.
The solution is simple: keep a status file for AI to read at the start of every development task. Unlike the "Project Status Tracker" we used for the design activities at the beginning, this document is made specifically for tracking code implementation in your IDE, though the logic behind it is pretty much the same.
Reduce hallucinations with this trick
Setting it up is pretty straightforward, just create a new Markdown file in the root of your coding project and call it something like "status.md" – it's really not that important how it's named. The crucial thing is that your AI tool has to understand where to find and when to update this file. You don't need to maintain it manually because Claude Code handles everything for you – just explain it as a part of your workflow in the CLAUDE.md file.
However, Claude Code can forget to track the status, even if global rules tell it so. In this case, you need to nudge AI from time to time. Usually, I asked Claude to update my file after every big development phase it completed, basically after any complex prompt I sent it.
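For illustration, a status file can be as simple as this (the sections and entries below are invented examples, not Meddy's actual file):

```shell
# A minimal development status file — structure is illustrative.
cat > status.md <<'EOF'
# Development status

## Done
- Onboarding flow (sign-in, health profile)
- Document upload and categorization

## In progress
- Record interpretation screen

## Decisions & gotchas
- Colors must use design tokens, never hardcoded values.
- Reuse existing components before creating new ones.
EOF
```

The "Decisions & gotchas" section is the most valuable part: it stops Claude from re-trying approaches that already failed.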
If you're wondering why you need another document when you could write down all project progress inside the global rules – the answer is "separation of concerns". It is better to use smaller, dedicated files for cases like this instead of keeping everything in one large CLAUDE.md, because otherwise the AI may ignore certain parts due to context limits.
GitHub is almost like Google Drive
Now let's talk about GitHub, which is another tool you'll need to use alongside Cursor and Claude Code. You may not know what it does in practice, even though most designers have definitely heard of it as something developers use.
Basically, GitHub allows people to collaborate and work on the same codebase together. It is also similar to Google Drive because essentially it's a place where you store your project's code online. Since any codebase is literally just a folder with files and other folders inside, it works like GDrive in that sense. But it also has lots of additional features that allow developers to work together in more sophisticated ways.
Commits, branches, and pull requests
As we already learned, developers store the latest version of their code on GitHub – it's their source of truth. Each member of the team connects it to a preferred IDE (e.g., Cursor) and every time one developer starts working on a new task, they run a specific set of commands in a terminal to get the most up-to-date code from GitHub.
These are the terms you'll typically hear while working with GitHub.
- Commits – snapshots of your updated code, each saved with a small comment about what has changed since the last one. You then "push" your commits (i.e., send them) from the IDE back to GitHub.
- Branches – different versions of the same codebase. There's always a main branch where everything gets uploaded to when it's approved. And there are also feature-related branches where you, as a developer, commit the code while working on a specific task.
- Pull requests (PRs) – when you take the committed code that's already on GitHub (pushed to a dedicated branch) and create a request for someone else from your team to review it.
In order for everyone to understand what's happening and what has changed, other developers need to check each other's code. This is an additional step before applying any new code to the main branch. When someone checks it, they can send the code back for improvements, or approve your PR and do an action called "merge" – which simply uploads that code to the main branch. Then other team members can load it back into their IDEs.
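If you want to see these terms in action without touching a real GitHub account, here's a local simulation where a bare repository stands in for GitHub (the branch and file names are invented for the demo):

```shell
set -e
tmp=$(mktemp -d)
git init --bare -b main "$tmp/github.git"   # stands in for GitHub

git init -q -b main "$tmp/work"             # your local project
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git remote add origin "$tmp/github.git"

echo "hello" > app.txt
git add app.txt
git commit -q -m "Initial version"          # commit = snapshot + comment
git push -q origin main                     # push = send it to "GitHub"

git checkout -q -b feature/new-button       # work on a feature branch
echo "button" >> app.txt
git add app.txt
git commit -q -m "Add new button"
git push -q origin feature/new-button       # teammates could now open a PR

git checkout -q main
git merge -q feature/new-button             # "merging the PR" into main
git push -q origin main
```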
Sometimes there's additional complexity because different people can work on connected things simultaneously. If multiple developers change the same part of the code, it's called a "merge conflict". In this situation, they need to decide what to do next – use one of the versions or combine them into a single solution.
There are lots of videos on YouTube about GitHub. Nowadays many designers already use it, so you can easily find tutorials from people like you who explain it in a more familiar way. However, you don't need to memorize all these fancy terms or Git actions – just practice them from time to time to really understand how the whole system works.
By the way, Claude Code can help you figure out whatever you'd like to know. For example, you can ask it how to create and connect a new GitHub repository to your IDE, or how to save the updated code and push it into a dedicated branch.
Be careful with terminal commands
Most GitHub actions are done through a terminal, so here's an important rule: when AI suggests any command, verify if it's actually good to run.
Usually it won't harm you, but sometimes it can be incorrect and break a part of the project. So be safe and take a moment to review what Claude Code suggests – don't accept all the commands right away, especially when you are just starting to learn AI-assisted development. Also, if you don't know what a specific command does, ask AI to explain it in simple terms.
The same principle of not trusting blindly applies to understanding your project structure. You need to pay attention to what Claude Code is doing when generating code – what files and folders it creates or removes, where it stores them, and what names it uses. Because when something breaks, this knowledge helps you understand how to solve the issues. If you just vibecode without looking into the why behind it, you'll certainly run into many problems.
Use GitHub even if you work alone
You might think "I'm not collaborating with anyone, why do I need GitHub?". The answer is version control, or project history, in other words.
When you open an IDE for the first few times, it's very common to break lots of things. First, you build something great, then you do another iteration and mess it all up to a state where it's impossible to fix. GitHub prevents such situations by letting you go back through the history of changes. If you break something, you can find the last version that worked and revert to it.
The rule of thumb is to commit changes:
- After every feature Claude Code implements
- After any critical bug it finally fixes
- Before working on anything unfamiliar
I once had to recreate an entire project from scratch because I wasn't using GitHub and AI broke everything beyond repair. It was frustrating, but the lesson didn't fully stick until something similar happened with Meddy.
IDEs like Cursor have a dedicated UI for GitHub integration, so you don't always need terminal commands. There's a button to stage your updated code, and another one right next to it that discards all your changes. To understand what went wrong in my case, you need to know what "staging" means (another Git term): when you edit code, the changes exist only on your computer. Before you commit, you stage them – i.e., choose which updated files to include in the commit that will be sent to the GitHub repo. The problem is that in Cursor's interface, the staging and discard actions sit far too close to each other.
That day I was working on several major changes for Meddy – it was pretty late, I was exhausted, and I decided not to commit them. I just wanted to finish the job as fast as possible and go rest. Then I accidentally clicked the discard action after selecting dozens of changed files, when I actually meant to stage them. And I didn't even notice the mistake at first.
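You can reproduce the difference between the two actions safely in a throwaway repository – `git restore` is roughly what a discard button runs under the hood (Cursor's exact implementation may differ):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main .
git config user.email you@example.com
git config user.name "You"
echo "v1" > screen.swift
git add screen.swift && git commit -q -m "Working version"

echo "v2" > screen.swift        # an uncommitted change
git restore screen.swift        # DISCARD: the edit is gone for good
cat screen.swift                # back to "v1"

echo "v2" > screen.swift        # make the change again
git add screen.swift            # STAGE: the edit is kept for the commit
git commit -q -m "Save the new version"
```

One command throws your work away, the other keeps it – which is exactly why having the two buttons side by side is dangerous.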
When I tried to preview the app on my iPhone through Xcode, I saw multiple errors and an outdated UI. Then I started checking Cursor's version control tab, where the GitHub integration is located, but I still didn't know I had made this mistake – I simply misclicked without realizing it.
Even though I didn't use GitHub properly that time, I was able to recover most of the work because of a feature called Timeline. It is like a local history of changes that happens on your computer, separate from GitHub. It's not as powerful, not as easy to use, and also not granular enough to let you revert everything, but it definitely helps in situations like mine. Fortunately, I got back around 70% of the work.
The most frustrating thing about Timeline is that you need to revert each file one by one, which means you have to know what the files are called, where they are located, and what the last correct version of each one was. That's difficult if you're not a classic developer and you rely heavily on AI for your code.
GitHub, on the other hand, allows you to run one command to get back the version of the whole project that was working before – all files at once. Many designers who start working with GitHub have this temptation of doing just one more change, one more feature before creating a commit and pushing it.
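Here's that one-command recovery in a throwaway repository (note that `git reset --hard` itself discards uncommitted work, so commit first):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main .
git config user.email you@example.com
git config user.name "You"
echo "works" > app.txt
git add . && git commit -q -m "Last good version"

echo "broken" > app.txt
git add . && git commit -q -m "AI broke everything"

# One command restores every file in the project at once:
git reset --hard HEAD~1
cat app.txt   # "works"
```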
It usually ends badly.
Your regular job benefits too
From my experience, as a designer you can do development tasks even if your title doesn't explicitly require you to do so. Today such positions are still mostly about Figma, research skills, and critical thinking – nothing about front-end work. But the world is changing.
Here's a common problem: developers often lack the experience to recreate Figma designs really well in code. They do it with some level of inconsistency, and it takes a lot of time for us to do design reviews and iterate in order to make the UI look right.
I know it's both possible and helpful to collaborate with your devs at an actual 9-5 job (not just on pet projects) to improve this process. It's often easier to open Claude Code and fix frontend issues yourself. That's harder with backend work and logic, but for something simple – styles, layouts, or copy – it's worth a shot.
So even if you have no ambitions to become a solo entrepreneur – I still advise learning AI-assisted development by building your dream products and making mistakes along the way. Then you can reuse these skills to be a better design specialist on your primary job, earn more, and get interesting career opportunities.
Now, let's look at Meddy
The idea behind Meddy is simple – all your health records should be in one place, you should be able to understand them easily, and you shouldn't be confused when something feels wrong. This mobile app must feel like your buddy, who's just a few taps away.
Let's take a look at the thinking behind it, as well as a visual showcase.
The core problems
I started with an issue I've experienced myself. All my lab results, doctor notes and especially vaccine records are scattered in different places. Some are stored on dedicated websites like Synevo (a local lab here in Ukraine), others are just lying somewhere in my apartment, printed or written by hand. This is the core problem.
But after lots of thinking and research I've discovered a few additional struggles:
- Nobody tells you when to get checked – you either get too many tests (wasting money) or not enough (missing early warning signs).
- Results are sometimes confusing – you get a checkup, then see numbers marked in red and start to panic. You don't know if it's serious or normal for someone like you.
- Finding good doctors is frustrating – you usually go to specialists you've never heard of. Because online reviews don't exist or there are just a few of them. So you show up not knowing if this person is right for your problem.
- Most can't read what doctors write – handwritten prescriptions are often so difficult to understand that you need another doctor just to tell you what the first one wrote.
Validating assumptions
Before building Meddy, I needed to check if my assumptions were correct. If I was wrong about these pain points, the app wouldn't really help anyone. I did three types of research:
- Reading existing studies – looked at articles about health apps, academic papers about patient experiences, and competitor apps to see what already exists.
- Analyzing online discussions – read through Reddit threads and app reviews where people talk about their frustrations.
- Building a prototype and showing it – created something you could click through and tested it with tens of respondents.
The research confirmed that the market for health apps like this is large and growing, and that no existing product combines medical record organization with AI assistance. I also learned that people prefer one-time payments over monthly subscriptions, and that Europeans care more about privacy while Americans think more about costs.
Additionally, the prototype testing revealed that most people didn't see enough difference between Meddy and ChatGPT. This last finding was pretty interesting and made me rethink how to position the app so people would understand why it's different from regular AI chatbots.
Meet Emma and Henrik
When you're building something, it helps to think about possible people who are going to use it. I created two imaginary personas – Emma and Henrik.
Emma is frustrated by surprise medical bills. She wants to know costs upfront before committing to anything. Henrik cares a lot about privacy and data protection. He's disappointed by long wait times – sometimes it takes months to see a specialist.
To understand how all these problems impact their everyday life, I created short visual stories. Each one shows a frustrating moment – searching for records, panicking over test results, etc.
Finding the right words
Language that you use matters a lot and the same product can feel completely different depending on how you describe it. I ended up talking about Meddy this way:
You want clear answers about your health, but your information is scattered everywhere. When you try to use tools like ChatGPT, they don't remember your medical history. Every time you ask it something, you have to explain everything from scratch. And the answers are generic – they are not tied to your specific conditions.
However, Meddy is not a doctor and it's not trying to replace one. It's more like having a buddy who remembers every important thing about your health. You upload your records once, Meddy organizes them, and when you have questions, it answers based on your specific situation.
Additionally, building a health assistant in Europe and the US could be complicated because of healthcare laws. On the other hand, a buddy that helps you organize and understand – not diagnose or treat – avoids most of these problems.
Fewer features, better MVP
Minimum viable product is the smallest version of your app that still solves the core problem. I made a list of every idea I had while thinking through the concept and preparing all the context documents for implementation. Then I scored each feature based on three questions:
- How much impact would it have on a product?
- How confident am I that people want it?
- How easy is it to build?
The ideas that ended up highest:
- Creating your health profile during setup – age, conditions, family history, so that AI could give personalized answers from the first conversation.
- Storing and organizing records – upload photos and PDFs into four essential categories: lab results, prescriptions, vaccines, and imaging reports. If the app can't organize medical documents well, nothing else would work.
- Getting high-quality answers – ask questions based on your health data or use voice when you're too stressed to type.
The other ideas were left out. For example, finding good doctors (too complicated, different for every country), managing health of family members (complicates the first version), etc.
Meet your medical buddy
When you first open Meddy, you see a carousel of stories about typical health frustrations. Each card shows a different person dealing with scattered records, confusing results, or midnight panic about unexpected symptoms.
Next, it explains why Meddy is different from basic chatbots. A few animated cards show what you can do with it (store and organize, talk to a buddy who knows your health) and why tools like ChatGPT don't work here. Then you see the pricing and privacy information, explaining how Meddy saves you money and time while keeping your data safe.
You sign in with Google or Apple – no need to remember any new passwords. Also, if you use Apple Health, Meddy can connect to it. This lets the app access health data you've already collected on your iPhone, which simplifies onboarding by filling in several fields automatically.
Next, you select chronic conditions and family health history. You also choose how you want explanations to sound – simple & brief or complex & detailed. This setting affects how your buddy talks to you.
Next, you select which types of reminders you'd like to get automatically (like prescriptions or seasonal health tips). At the end, you see a summary that proves the personalization is real.
To add any new document, you use a button at the bottom. A panel with three options slides up: upload and analyze, speak to your buddy, or type questions.
When you add a medical record, Meddy processes it and creates a clear interpretation.
At the top, a hero image shows your document – if you uploaded multiple images, you can swipe between them. Below that, cards answer the two most crucial questions:
- "What does it mean?" (explaining the results)
- "What to do next?" (with recommendations)
You can also open the original record file, share it with your doctor, or ask Meddy other questions about it.
If you go back, you see all the documents organized into four simple categories. Each one shows how many documents are inside and a preview of the most recent one. You can tap "view all" to see everything, or search a specific record.
Finally, there is the Homepage – your daily overview of everything health-related. Since I haven't fully implemented this part in the MVP yet, here is a look at the concept.
At the top, a "suggestions for you" section shows things you could and should do right now. They change based on what you've recently uploaded into Meddy. Below the suggestions, the Home tab helps you manage your day:
- Today's reminders show what needs your attention – like scheduling a follow-up appointment, linking back to the record it came from.
- Latest records show your most recently uploaded documents with Meddy's interpretation right underneath, so you can see at a glance what they mean without opening all the details.
- Recent chats show your past conversations, also linking to the record you were discussing.
Time to wrap up
This pet-project took around half a year. Most of that time went into preparation: problem statement, technical documentation, prototyping, and testing – way less into coding. Tools like Claude Code made it possible to ship something real while being just a regular designer. However, they didn't make it fast.
Things that worked well
- Separate chats for each activity – when conversations got too long, Claude started forgetting earlier instructions. Splitting work into multiple chats, with documents uploaded to project knowledge, kept things easier to manage.
- Testing prototypes before writing code – most respondents said they could do the same thing with a separate chat in the free ChatGPT app. Finding positioning problems early saves you from future headaches.
- Starting with less, adding when needed – originally, I designed six Figma frames, but even by the end of development, I had only eighteen.
- Design systems belong in code – I didn't create components or tokens in Figma, because AI handled that better during development.
What didn't work
- Git mistakes – I messed up a big chunk of the code by ignoring basic rules of working in IDEs.
- Overthinking development preparation – Having 12 specialized agents turned out to be good only in theory. Simpler setups work better, and the first version of anything is never final.
- Trusting AI analysis of user research – When I asked Claude to analyze prototype test results, it made up patterns that weren't there. Watching recordings myself first and then comparing them with AI analysis worked much better.
Finally, AI-assisted development is moving so fast and providing so much value that not learning it isn't really an option anymore.
Thanks to these wonderful people
This article wouldn't exist without the people who took the time to read early drafts and share their honest feedback. Special thanks to Igor, Bohdana, Yurii, Davyd, Yuriy, Pavlo, and Daria.