Team Workflows & Collaboration
Windsurf becomes significantly more effective when a team standardises on shared conventions, a shared .windsurfrules file, and shared prompt patterns. This module covers how to scale AI-native development across an engineering team.
Shared .windsurfrules
Your .windsurfrules file is a team asset. Maintain it in version control, assign an owner (typically the tech lead), and update it in team retrospectives when new patterns or anti-patterns emerge. New joiners onboard faster when Cascade already knows your conventions.
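As a concrete illustration, a team rules file might look like the sketch below. The specific conventions are hypothetical examples, not prescriptions; your own file should encode the patterns your team actually agrees on:

```
# .windsurfrules -- owned by the tech lead, updated in retrospectives
- All API handlers live in src/api/ and return typed responses
- Use pytest for tests; mock external services, never make live network calls
- Never hardcode secrets; read configuration from environment variables
- Prefer small, composable functions; flag any function over 50 lines in review
```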
AI-Assisted Code Review
Before submitting a PR, run a Cascade /review flow: "Review this diff for correctness, security issues, and alignment with our .windsurfrules conventions." Use the output as a self-review checklist before requesting human review -- it catches most mechanical issues automatically.
Flow Templates Library
Maintain a shared library of Flow prompt templates for common tasks: new feature scaffold, test generation, migration flows, documentation generation. Store them in a team wiki or README. Standardised prompts produce standardised output quality.
Onboarding New Developers
New team members can use Chat mode to understand the codebase before making any changes: "Explain the overall architecture", "How does authentication work?", "What is the data flow for a new order?" This dramatically reduces the time from join date to first meaningful contribution.
PR Preparation Flow
Review the staged changes in this branch for:

1. Correctness -- does the implementation match the stated goal?
2. Security -- SQL injection, auth bypasses, exposed secrets, SSRF
3. Performance -- N+1 queries, missing indexes, unbounded operations
4. Convention violations -- check against .windsurfrules
5. Missing tests -- are all new code paths covered?
6. Breaking changes -- does this change any public API or data schema?

For each issue found, provide:

- File and line number
- Severity: critical / warning / suggestion
- Specific fix recommendation

End with a summary: ready to review / needs work, with reasoning.
Cascade-assisted pre-review catches mechanical issues, convention violations, and common security patterns very reliably. It does not catch intent mismatches, business logic errors, or nuanced architectural concerns. Human review remains essential -- AI review reduces the noise so humans can focus on what matters.
ShopMate -- Team Flow Templates
## ShopMate Shared Flow Templates (save in docs/windsurf-templates.md)

## New Claude Feature Template

Add a new ShopMate Claude feature: [FEATURE NAME]

Endpoint: [METHOD] /[path]
Input: [JSON fields]
Output: [JSON fields]
Model: [haiku/sonnet] -- [reason: cheap utility / quality needed]

Requirements:
- Call via logged_create(brand_id=brand_id, feature="[feature_name]")
- Customer-facing replies must use safe_reply()
- System prompt in shopmate/prompts/[name].py as module-level constant
- Tests mock Claude with respx -- no real API calls in tests

Tests:
- Happy path: correct output structure
- Claude API error: graceful fallback
- Brand voice applied correctly (if multi-brand)

## New Brand Onboarding Template

Add a new brand to ShopMate: [BRAND NAME]

1. Add entry to shopmate/config/brands.yaml following threadco structure
2. Create sample product descriptions to test the brand voice
3. Add brand_id to the audit log test fixtures
4. Run: pytest tests/ to verify all existing tests still pass
5. Write 5 sample product descriptions and share with the brand for approval
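To make the feature template concrete, here is a minimal sketch of the module shape it describes. `logged_create` and `safe_reply` are stubbed as hypothetical helpers (their real ShopMate implementations are not shown in this module), and the feature name, prompt constant, and file path follow the `shopmate/prompts/[name].py` convention from the template:

```python
# shopmate/prompts/order_status.py -- hypothetical feature following the template
import logging

log = logging.getLogger("shopmate.audit")

# System prompt lives in the prompts module as a module-level constant
ORDER_STATUS_PROMPT = "You are ShopMate's order-status assistant. Be concise."


def logged_create(*, brand_id: str, feature: str, prompt: str, user_msg: str) -> str:
    """Hypothetical stand-in for ShopMate's audited Claude call.

    The real helper would call the Anthropic API; here the reply is stubbed
    so the module shape can be shown without any network access.
    """
    log.info("claude_call brand=%s feature=%s", brand_id, feature)
    return f"[{brand_id}] stubbed reply to: {user_msg}"


def safe_reply(text: str) -> str:
    """Hypothetical output guard applied to all customer-facing replies."""
    return text.strip()


def order_status_reply(brand_id: str, user_msg: str) -> str:
    # Every Claude call goes through logged_create with brand_id and feature
    raw = logged_create(
        brand_id=brand_id,
        feature="order_status",
        prompt=ORDER_STATUS_PROMPT,
        user_msg=user_msg,
    )
    # Customer-facing text always passes through safe_reply before returning
    return safe_reply(raw)
```

Because the Claude call is isolated behind `logged_create`, tests can replace that single seam with a respx mock (or any stub) and exercise the rest of the feature without real API calls, exactly as the template's test requirements demand.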
# Run this before requesting review on any ShopMate PR
Review the staged changes in this ShopMate PR for:
1. Are all new Claude calls going through logged_create()?
2. Are customer-facing replies going through safe_reply()?
3. Are there any hardcoded brand names or API keys?
4. Do new tests mock the Anthropic API (not making real calls)?
5. Is the brand_id always passed through to logged_create()?
6. Any prompt templates that use forbidden words for a brand?
For each issue: file, line, severity (blocker/warning/suggestion), and fix.
End with: ready to merge / needs fixes.