Windsurf Track
Module 30
Refactoring ShopMate: Six months of rapid growth left ShopMate with a 500-line service file and duplicated prompt templates across three features. The developer uses Windsurf's advanced patterns -- TDD, incremental refactoring, architecture chat -- to clean it up without breaking the live customer-facing features.

Windsurf Advanced Techniques

These patterns separate productive Windsurf users from exceptional ones. Each addresses a professional development workflow that benefits significantly from Cascade's agentic capabilities.

Test-Driven Flows

Describe the desired behaviour; ask Cascade to write failing tests first, then implement to pass them. "Write tests for a rate limiter that allows 100 requests per minute per IP, then implement the middleware." This produces better-structured code and a built-in verification loop.
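As a sketch of what that TDD Flow might produce (all names here are hypothetical, and the implementation is a minimal in-memory sliding window rather than production middleware), the failing tests and the code that makes them pass could look like:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds per IP."""

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # ip -> deque of request timestamps

    def allow(self, ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        hits = self._hits[ip]
        # Drop timestamps that have aged out of the window
        while hits and now - hits[0] >= self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True


# The tests Cascade would write FIRST (they fail until RateLimiter exists):
def test_allows_up_to_limit():
    rl = RateLimiter(limit=3, window=60)
    assert all(rl.allow("1.2.3.4", now=t) for t in (0, 1, 2))
    assert not rl.allow("1.2.3.4", now=3)  # 4th request inside the window

def test_ips_are_independent():
    rl = RateLimiter(limit=1, window=60)
    assert rl.allow("1.1.1.1", now=0)
    assert rl.allow("2.2.2.2", now=0)

def test_window_slides():
    rl = RateLimiter(limit=1, window=60)
    assert rl.allow("1.2.3.4", now=0)
    assert rl.allow("1.2.3.4", now=61)  # the old hit has aged out
```

Writing the tests first pins down the contract (per-IP isolation, window expiry) before any implementation detail exists.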

Iterative Refinement

Treat Flows as conversations. Start broad: "Implement the payment webhook handler." Review the plan, then refine: "Good, but use the strategy pattern for different payment providers, not a big if-else." Cascade updates its approach and re-plans.
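The refined shape the second prompt asks for might look like this (a minimal sketch; the provider classes and payload fields are hypothetical stand-ins, not a real payment API):

```python
from abc import ABC, abstractmethod


class PaymentProvider(ABC):
    """Strategy interface: one class per provider instead of a big if-else."""

    @abstractmethod
    def handle_webhook(self, payload: dict) -> str: ...


class StripeProvider(PaymentProvider):
    def handle_webhook(self, payload: dict) -> str:
        return f"stripe:{payload.get('type', 'unknown')}"


class PaypalProvider(PaymentProvider):
    def handle_webhook(self, payload: dict) -> str:
        return f"paypal:{payload.get('event_type', 'unknown')}"


# A registry maps the webhook's provider field to a strategy object
PROVIDERS = {"stripe": StripeProvider(), "paypal": PaypalProvider()}


def handle_payment_webhook(provider: str, payload: dict) -> str:
    try:
        strategy = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown payment provider: {provider}")
    return strategy.handle_webhook(payload)
```

Adding a new provider now means adding one class and one registry entry -- exactly the property the refinement prompt was steering Cascade toward.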

Architecture Exploration

Use Chat mode for architectural discussions before implementation: "What are the trade-offs between a message queue vs direct API calls for our notification system, given our current infrastructure?" Cascade reasons over your specific codebase context.

Incremental Refactoring

For large-scale refactoring, prefer a series of small incremental Flows to one giant change. "Migrate UserService.ts to the repository pattern" rather than "Migrate all services." Smaller diffs are easier to review and safer to apply -- and easier to roll back if something breaks.

Weak Flow Prompt

Refactor the codebase to use better patterns.

Strong Flow Prompt

Refactor src/services/user.ts to use the repository pattern. Extract all database queries into a UserRepository class. UserService should depend on a UserRepositoryInterface, not the concrete implementation. Do not change the public API of UserService. Add unit tests for both the repository and the service using mocks.
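The target shape that prompt describes -- service depends on an interface, queries live in the repository, public API unchanged -- can be sketched like this (shown in Python for consistency with the ShopMate examples, even though the prompt targets a TypeScript file; all class and method names are illustrative):

```python
from abc import ABC, abstractmethod


class UserRepositoryInterface(ABC):
    """What UserService depends on -- never the concrete implementation."""

    @abstractmethod
    def find_by_id(self, user_id: int): ...


class InMemoryUserRepository(UserRepositoryInterface):
    """Concrete implementation; in production this is where the DB queries live."""

    def __init__(self, users: dict):
        self._users = users

    def find_by_id(self, user_id: int):
        return self._users.get(user_id)


class UserService:
    """Public API unchanged; only data access moved behind the interface."""

    def __init__(self, repo: UserRepositoryInterface):
        self._repo = repo

    def get_display_name(self, user_id: int) -> str:
        user = self._repo.find_by_id(user_id)
        return user["name"] if user else "anonymous"
```

Because the service only sees the interface, the unit tests the prompt asks for can hand it a mock or in-memory repository instead of a database.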

ShopMate -- TDD and Refactoring Flows

Text -- TDD Flow: Review Sentiment Feature
# Test-driven: write the tests first, then the implementation

Add a review sentiment classifier to ShopMate. Write failing tests first.

Feature: classify_review_sentiment(review_text: str) -> dict
Returns: {"sentiment": "positive|negative|neutral", "score": 1-5, "key_issue": str|None}

Tests to write FIRST in tests/test_sentiment.py:
1. "Softest tee I have ever worn, perfect fit" -> sentiment=positive, score>=4
2. "Runs very small, had to return it" -> sentiment=negative, key_issue contains "sizing"
3. "It arrived fine, seems ok" -> sentiment=neutral
4. Empty string input -> raises ValueError
5. Response must be valid JSON (Claude sometimes adds preamble)

Confirm tests FAIL before implementing.

Then implement in shopmate/reviews/sentiment.py:
- Use claude-haiku (cheap for classification)
- System prompt must force JSON output -- handle parse errors
- Call via logged_create(brand_id="internal", feature="review_sentiment")
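The five tests from that Flow prompt can be sketched as below. The classifier here is a keyword-based stand-in so the example runs offline -- an assumption, not the real implementation, which would call claude-haiku via logged_create and parse its JSON defensively:

```python
import json


def classify_review_sentiment(review_text: str) -> dict:
    """Offline stand-in for the Claude-backed classifier (hypothetical logic)."""
    if not review_text:
        raise ValueError("review_text must be non-empty")
    text = review_text.lower()
    if any(w in text for w in ("return", "small", "terrible")):
        result = {"sentiment": "negative", "score": 2,
                  "key_issue": "sizing" if "small" in text else None}
    elif any(w in text for w in ("perfect", "softest", "love")):
        result = {"sentiment": "positive", "score": 5, "key_issue": None}
    else:
        result = {"sentiment": "neutral", "score": 3, "key_issue": None}
    # Test 5's contract is "valid JSON" -- round-trip the dict to prove it
    return json.loads(json.dumps(result))


# Tests 1-4 from the Flow prompt, written before the implementation:
def test_positive():
    r = classify_review_sentiment("Softest tee I have ever worn, perfect fit")
    assert r["sentiment"] == "positive" and r["score"] >= 4

def test_negative_sizing():
    r = classify_review_sentiment("Runs very small, had to return it")
    assert r["sentiment"] == "negative" and "sizing" in r["key_issue"]

def test_neutral():
    assert classify_review_sentiment("It arrived fine, seems ok")["sentiment"] == "neutral"

def test_empty_raises():
    try:
        classify_review_sentiment("")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Against the real Claude-backed version, the same tests verify the contract end to end, including the JSON-parsing failure mode the prompt calls out.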

Text -- Architecture Chat: Email Delivery
# Chat mode before implementing a significant new piece of infrastructure

Given our current ShopMate architecture in @shopmate/api/main.py
and the fact that we send ~500 emails per week:

Should we:
Option A: Generate and send emails synchronously in the FastAPI request
          (simple, but slow -- email sending can take 2-3 seconds)

Option B: Use FastAPI BackgroundTasks to generate and send asynchronously
          (faster response, but harder to handle failures)

Option C: Write emails to a queue (Redis list) and have a separate worker
          process them (most resilient, but more infrastructure)

Our current setup: a single FastAPI process on a single VPS, no Redis yet.
Expected volume: max 50 emails per hour during flash sales.
Which would you recommend and why?
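Whichever option the chat lands on, the queue-and-worker shape of Option C can be sketched in miniature. This uses Python's stdlib queue and threading as an in-process stand-in for Redis plus a separate worker process -- an illustration of the pattern, not the real infrastructure:

```python
import queue
import threading

email_queue = queue.Queue()
sent = []  # stand-in for the SMTP call; a real worker would send and record failures


def enqueue_email(to: str, subject: str) -> None:
    """The request handler stays fast: push the job and return immediately."""
    email_queue.put({"to": to, "subject": subject})


def worker() -> None:
    """A separate worker drains the queue; a None job is the shutdown signal."""
    while True:
        job = email_queue.get()
        if job is None:
            break
        sent.append(job)  # real version: SMTP send, with retry on failure
        email_queue.task_done()


t = threading.Thread(target=worker, daemon=True)
t.start()
enqueue_email("customer@example.com", "Your flash-sale order shipped")
enqueue_email("customer@example.com", "Receipt")
email_queue.put(None)  # shut the worker down once the queue drains
t.join()
```

The request path never blocks on email delivery, and failures are isolated in the worker -- the resilience Option C buys at the cost of running extra infrastructure.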