AI, Development, Workflow, Automation

Stop Wrestling with AI: A Step-by-Step Workflow for Effective AI-Powered Development

Transform your AI coding assistant from a frustrating wildcard into a reliable partner with this practical, detailed process.

By <a href="http://www.youtube.com/@ColeMedin">Cole Medin</a>

We've all felt it: the whiplash when your AI coding assistant goes from genius coder to code-deleting chaos monkey. While AI assistants promise revolutionary productivity, inconsistent results can tank your workflow. This article is for developers who want to harness the power of AI coding tools like Windsurf or Cursor effectively. You'll learn a detailed, step-by-step workflow—complete with golden rules, planning templates, and practical examples—to consistently generate high-quality code and dramatically boost your development speed.

The AI Coding Paradox: Genius or Chaos?

Everyone knows AI coding assistants are changing the game. If you're not using one, you risk falling behind. But here's the catch: simply throwing prompts at an AI IDE often leads to frustration. One minute, it feels like you're pair-programming with a senior engineer; the next, it's like a troop of monkeys randomly mashing your keyboard, deleting crucial code or implementing bizarre features.

You know the pain. To get high-quality, consistent output from AI, you need a well-defined process. If you lack that refined workflow, stick around. By the end of this guide, you'll have a clear roadmap to elevate your AI-assisted development.

I'll walk you through my full workflow, step-by-step, covering the nitty-gritty details. This process is designed to be:

  1. Simple: No overcomplicated setups.
  2. Practical: We'll illustrate it by building a real-world example: a Supabase MCP server.
  3. Universal: Adaptable regardless of your specific tech stack or AI IDE.

Let's dive in!

Your AI Coding Playbook: Process Overview & Resource

This entire workflow is documented in a shareable guide. We'll reference it throughout this article, so feel free to use it as your own resource.

At the heart of this process are the Golden Rules.

The Golden Rules for AI Coding

These principles underpin the entire workflow and are key to getting consistent results:

  1. Use High-Level Markdown Documents: Maintain files like planning.md and task.md containing project plans, tasks, setup instructions, and key documentation links. Use these to provide context to the LLM throughout development.
  2. Don't Overwhelm the LLM: Context length matters. The more you pack into a single conversation, the higher the chance of hallucinations and errors.
    • Keep code files under ~500 lines.
    • Start fresh chat conversations frequently.
    • Ask the LLM to implement only one feature or task per prompt.
  3. Write Tests: Consistently ask the AI to write tests for its code, ideally after implementing each new feature. This is crucial for verifying correctness and maintaining quality.
  4. Be Specific: Provide clear, detailed instructions. Don't just describe the high-level goal; specify technologies, libraries, desired output formats, and constraints.
  5. Write Docs & Comments As You Go: Have the LLM update documentation (both high-level files and inline code comments) continuously. This aids both your understanding and the AI's ability to maintain context.
  6. Implement Security Yourself: Never trust the LLM with sensitive information like API keys or database security configurations. You must understand and manage security aspects yourself. Understand the code the AI produces, especially security-sensitive parts.

Warning: Relying solely on AI for security is dangerous. There are horror stories online – don't become one!

Now, let's see how these rules translate into a phased workflow.

Phase 1: Planning - Setting the Direction

Before writing a single line of code, create two essential Markdown files:

  • planning.md: Captures the high-level vision, architecture, tech stack, constraints, and other key project information. This serves as a persistent context file for the LLM.
  • task.md: Tracks all development tasks – completed, pending, and in progress. The LLM can update this file as it works, allowing you to act as the project manager.

I typically use a chatbot assistant like Claude to generate the initial drafts of these files before even opening my AI IDE. For our Supabase MCP server example, a prompt like this works well:

Plan a project to build a Supabase MCP server in Python.
The server should allow interacting with Supabase tables (create, read, update, delete records).
Use the Brave MCP server implementation as a reference for the structure if possible: [Link to Brave MCP server repo if available, or describe structure].
Output two markdown files:
1. planning.md: Include project overview, scope, technical architecture, technology stack (Python, supabase-py), constraints, and potential challenges.
2. task.md: Outline the necessary steps/tasks to build and test this server. Start with setup, then core CRUD functionalities, testing, and documentation.

Review and refine the AI's output. Remove boilerplate definitions and ensure the plan accurately reflects your goals.

Pro Tip: Use multiple different LLMs (e.g., Claude, Deepseek Coder, etc.) for planning. Give each the same prompt and synthesize the best ideas from their outputs for a more robust plan.

Phase 2: Global Rules - Teaching Your AI Assistant

Global rules act as system prompts for your AI IDE. They provide high-level instructions that the AI should follow consistently without needing them repeated in every prompt. Think of them as standing orders.

For instance, you can instruct the AI to always read planning.md at the start of a new conversation or to always generate tests after implementing a feature.

The process document contains a template you can adapt. Here's how to set them up in Windsurf (other IDEs have similar features):

  1. Copy the global rules text from the template.
  2. In Windsurf, go to "Additional Options" -> "Manage Memories".
  3. Choose either "Global Rules" (apply to all projects) or "Workspace Rules" (apply only to the current project). Workspace rules are often better for project-specific requirements.
  4. Paste your adapted rules.

These rules typically cover using markdown files, adhering to file length limits, testing procedures, coding style, README maintenance, and more. Setting these up simplifies your subsequent prompts significantly.
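As a hedged illustration (adapt the actual template from the process document to your project), workspace rules along these lines cover the points above:

```markdown
### Project Awareness
- Always read planning.md at the start of a new conversation.
- Check task.md before starting work; mark tasks done as you complete them.

### Code Structure
- Keep files under 500 lines; split into modules when they grow.
- Follow the project's coding style (e.g., PEP 8 with type hints for Python).

### Testing & Docs
- After implementing a feature, write unit tests in tests/.
- Update README.md whenever setup or usage changes.
```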

Phase 3: Configure MCP Servers - Extending AI Capabilities

MCP (Model Context Protocol) provides tools (servers) that enhance your AI IDE's abilities, allowing it to interact with external systems like your file system, the web, or Git.

These three MCP servers are essential:

  1. File System Server: Lets the AI access files and folders outside the current project directory.
  2. Web Search Server (e.g., Brave Search API): Enables web lookups for documentation, examples, or research. Some IDEs have built-in search, but dedicated servers can offer more advanced features like AI-powered summarization.
  3. Git Server: Allows the AI to interact with your Git repository (commit, checkout branches, etc.).

Crucial: Always use Git! Set up a repository for every project. Prompt the AI to make frequent commits (Make a git commit to save the current state). AI will break things sometimes; Git is your safety net for reverting to working versions.

The process document includes links and setup instructions for these servers in various IDEs. In Windsurf, you configure them in the MCP config.json file.
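As a rough sketch of the shape of that file (the exact commands and paths depend on your setup; follow the process document for specifics), each server gets an entry under `mcpServers`:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "<your-key>" }
    }
  }
}
```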

Phase 4: The Initial Prompt - Kicking Off Development

With planning, rules, and MCP servers in place, it's time for the crucial initial prompt. Remember Golden Rule #4: Be Specific. Provide context, documentation, and examples.

Here are ways to give the AI context:

  1. Built-in IDE Features: Use commands like @mcp in Windsurf to include specific documentation.
  2. Web Search via MCP: Ask the AI to use the web search server: Search the web for documentation on the Supabase Python client.
  3. Manual Provision: Paste links to docs, GitHub examples, or relevant code snippets directly into the prompt.

Here’s the initial prompt used for the Supabase MCP server example:

Create a Supabase MCP server in Python based on the planning.md and task.md files.
Refer to the official MCP documentation [@mcp] and the Supabase Python client documentation [Search web if needed, or provide link: https://github.com/supabase-community/supabase-py].
Use this existing Python MCP server implementation as a structural example: [https://github.com/different-ai/brave-mcp]
Implement the core functionalities outlined in task.md:
- Connect to Supabase using environment variables (SUPABASE_URL, SUPABASE_KEY - I will handle setting these).
- Implement tools for:
    - Creating records in a specified table.
    - Reading records from a specified table (allow filtering).
    - Updating records in a specified table.
    - Deleting records from a specified table.
Follow Python best practices and include type hints. Generate a requirements.txt file.

An AI IDE like Windsurf will process this by:

  • Reading planning.md and task.md (as per global rules).
  • Incorporating specified context (@mcp).
  • Analyzing provided links/examples.
  • Generating the initial code structure (server.py, requirements.txt, maybe tests/).
  • Updating task.md.

The result in the example was a nearly complete server.py with CRUD tools, demonstrating the power of providing rich context.
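To give a sense of what that generated code looks like, here is a sketch of one CRUD tool. This is illustrative, not the actual generated code: the function name, signature, and the injected `client` parameter are assumptions, based on supabase-py's chainable `table(...).select(...).eq(...).execute()` query API.

```python
from typing import Optional

def read_records(client, table: str, filters: Optional[dict] = None) -> list:
    """Read rows from `table`, optionally filtering on column equality.

    `client` is expected to be a supabase-py Client (or any object with the
    same chainable query interface), injected so the tool is easy to test.
    """
    query = client.table(table).select("*")
    for column, value in (filters or {}).items():
        query = query.eq(column, value)  # add one equality filter per column
    return query.execute().data
```

Injecting the client rather than constructing it inside the tool keeps credentials in your hands (Golden Rule #6) and makes the mocked tests in Phase 7 straightforward.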

Phase 5: Testing the Initial Build

Before iterating, test the first version. For the Supabase MCP server:

  1. Configure: Set up the new server in an MCP-compatible application (like Claude Desktop). This usually involves editing a config file to point to the server script and provide environment variables.
  2. Restart: Reload the application.
  3. Verify: Check if the new Supabase tools appear in the application.
  4. Test: Send a prompt to use one of the tools (e.g., What records do I have in my document_meta_data table?).
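The config entry for step 1 typically follows this shape (a sketch; the path and key values here are placeholders you fill in yourself, per Golden Rule #6):

```json
{
  "mcpServers": {
    "supabase": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "<your-service-key>"
      }
    }
  }
}
```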

In the demonstration, the server worked correctly on the first try – a testament to the structured process.

Phase 6: Version Control with Git - Save Your Progress!

You have a working baseline. Commit it!

  1. Initialize Git: git init (if not already done).
  2. Create .gitignore: Ask the AI to help generate one for Python projects.
  3. Stage files: git add .
  4. Commit: git commit -m "Initial implementation of Supabase MCP server"

Use the Git MCP server or your terminal. This checkpoint is invaluable before adding tests or features.

Phase 7: Iteration and Testing - Refine and Verify

Now, iterate based on your task.md. Remember Golden Rule #2 (don't overwhelm the LLM): one change at a time.

Address any missed items, like tests.

Writing Tests: Your global rules should guide the AI:

  • Use a tests/ directory.
  • Mock external dependencies (databases, APIs) for fast, isolated tests.
  • Cover success paths, error handling, and edge cases.

Prompt the AI:

Create unit tests for server.py in the tests directory. Follow the testing guidelines in the global rules.

The AI should generate test files (often longer than the source code!). Run them (e.g., pytest tests/) and iterate with the AI to fix any failures.
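A test written under those guidelines might look like the sketch below. The `read_records` function here is a hypothetical stand-in for the tool under test; in a real project you would import it from server.py instead of defining it inline.

```python
from unittest.mock import MagicMock

def read_records(client, table: str):
    # Stand-in for the tool under test; normally imported from server.py.
    return client.table(table).select("*").execute().data

def test_read_records_returns_rows():
    # Mock the Supabase client so the test is fast, isolated, and
    # needs no network or real database (per the guidelines above).
    client = MagicMock()
    client.table.return_value.select.return_value.execute.return_value.data = [
        {"id": 1, "name": "alpha"}
    ]
    rows = read_records(client, "documents")
    client.table.assert_called_once_with("documents")
    assert rows == [{"id": 1, "name": "alpha"}]
```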

Phase 8: Further Iteration and Documentation

Continue refining:

  • Add new features one by one.
  • Ask the AI to generate or update the README.md with setup and usage instructions.
  • Ensure planning.md and task.md stay current. This maintains context for the AI, especially across different chat sessions.

Phase 9: Deployment - Sharing Your Creation

Once ready, ask the AI to help package your project. Docker is excellent for this, and LLMs are typically good at generating Dockerfiles.

Example Prompt:

Write a Dockerfile for this Python MCP server. Use requirements.txt for dependencies. Also, provide the Docker commands to build and run the container.

The AI can generate the Dockerfile and necessary commands. Update your README.md with Docker instructions.
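For a typical Python MCP server, the generated Dockerfile comes out along these lines (a sketch; the `server.py` entrypoint name is an assumption):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Secrets (SUPABASE_URL, SUPABASE_KEY) are passed at run time, never baked in.
CMD ["python", "server.py"]
```

Build and run with something like `docker build -t supabase-mcp .` followed by `docker run -e SUPABASE_URL=... -e SUPABASE_KEY=... supabase-mcp`.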

Example Project: You can find the complete Supabase MCP server built using this workflow, including the Dockerfile, on GitHub: https://github.com/coleam00/supabase-mcp

Conclusion: Towards Consistent AI Coding

We've journeyed through a structured workflow for AI-powered development: from meticulous planning with markdown files and golden rules, through specific prompting, rigorous testing, version control, iterative refinement, and finally deployment. This process provides guardrails that help turn unpredictable AI assistants into reliable coding partners, leading to more consistent, high-quality results.

While this workflow provides a solid foundation, remember that the best process is the one that works for you. Experiment, adapt these steps to your specific needs, and enjoy a more productive and less frustrating experience coding with AI.