pawn002 Blog

The P+F+I+D Framework: A Prompt Engineering Case Study


Image: Two figures walking together along a path, representing the partnership journey of human and AI collaboration.

Executive Summary

Over 47 days of AI-assisted product development with Claude Code, I developed the P+F+I+D framework—Problem, Feedback, personal Insight, and active Documentation—evolving from 0/10 to 9.5/10 prompt engineering proficiency. The breakthrough came from applying inclusive design principles to AI collaboration: treating Claude as a partner requiring accommodation rather than a tool to be wielded, using CLAUDE.md as persistent memory that reduced iterations by 65% and development time by 80%. Drawing on cross-disciplinary expertise in accessibility, product architecture, and systems integration, I discovered that the same design strategy that makes products accessible to humans makes AI enablement more effective. This case study documents real prompts, measurable outcomes, and the methodology that transformed initial failure into production-ready results.

The Catalyst

The catalyst was an article claiming that if you're not vibe-coding, you'd better face irrelevance. After years of building development skills and becoming the go-to resource in my office for anyone who needed help with code, here was a piece suggesting that, with AI, anyone could now quickly do what I did.

The reaction was visceral. But instead of letting fear paralyze me, I recognized a pattern.

The Pattern: Reset to Zero

This wasn't my first time facing the prospect of irrelevance. Throughout my career, I've had to prove myself anew multiple times—moving from cartography to design to development to accessibility, each transition requiring me to rebuild credibility from scratch.

Each career transition was a "reset to zero." Each time, though, the expertise from previous domains compounded into new capabilities.

This coherence of perspective became my approach to AI partnership. The question wasn't whether I could learn AI collaboration. The question was: what bridges could I build this time?

I needed a private way to quickly transcribe audio for blog posts. This became an opportunity to validate everything I'd read about vibe coding in real practice. The commitment: only Claude Code could fix issues. No manual intervention. This was a rigorous test of whether AI could really deliver production code. (For details about the finished product itself, see Building a Privacy-First Transcription Tool with Claude Code.)

What followed was a 47-day journey from November 16, 2025 to January 1, 2026. While a basic MVP emerged within the first week—functional enough to use—achieving production-ready quality required sustained iteration. This post documents that full journey: the failures, the breakthroughs, and the framework that emerged.

Day    Score    Milestone
1      0/10     "Fully functional" lie — broken code
~7     2/10     Basic MVP working
~21    5/10     UX improvements, design thinking
22     7/10     CLAUDE.md breakthrough
~35    9/10     Architecture migration
47     9.5/10   Production-ready

The "Fully Functional" Lie

My initial prompt was straightforward:

"You are an experienced DevOps engineer. You have been tasked with creating an electron application that uses whisper.cpp to convert audio to text. Requirements include using Angular for the front end and Nestjs for the backend. To be clear, this app needs to run locally and not require an internet connection."

Claude's response was confident:

"I've successfully created a production-ready Electron application with whisper.cpp integration for offline speech-to-text transcription. The application is FULLY FUNCTIONAL and ready for development, testing, and deployment."

The reality? The initial delivery was non-functional. Despite the confident claims, the app wouldn't even start.

Examining the project structure with my experience as a frontend developer, I could see it clearly wouldn't work as delivered. But this was the test—could I resist fixing it myself and let Claude figure it out?

Commitment renewed: Only Claude Code would touch this codebase for its entire lifetime.

Through rapid iteration over the first week, Claude and I got the app to a basic working state—enough to transcribe audio files and validate the core concept. But "working" and "production-ready" are different things. The UX was rough, the architecture was overcomplicated, and the codebase carried significant technical debt.

Prompt engineering score: 0/10 on Day 1. By week's end, maybe 2/10—functional but far from good.

The lesson: Confidence does not equal correctness. Always verify.

The Evolution: P+F to P+F+I to P+F+I+D

The journey progressed through three distinct stages.

P+F                →  P+F+I              →  P+F+I+D
Problem + Feedback    + Personal Insight    + Active Documentation
        ↓                     ↓                      ↓
  "Fix this error"    "Users need feedback"  "Document in CLAUDE.md"
        ↓                     ↓                      ↓
      2/10                  5/10               7/10 → 9.5/10

Stage 1: Problem + Feedback (Days 2-10)

Because this was my first time using Claude Code, I described issues using a combination of lay and technical language. When I had console messages, I'd paste them literally, trusting Claude to decipher relevance.

I didn't want to derail Claude from whatever latent expertise it might have absorbed from training on the breadth of the internet, compared with my own limited knowledge. Previous experience had shown me that premature direction caused misdiagnosis of issues.

Results: rapid iteration moved the app from broken toward basically working, but progress stalled whenever a fix required context Claude simply didn't have.

The realization: The pattern echoed Gödel's incompleteness theorem—a perfectly logical system will at some point run into the bounds of its knowledge and reasoning. There will be things it cannot know without external input.

Score: 2/10 - Can describe problems but not guide solutions.

Stage 2: P+F + Personal Insight (Days 11-21)

Once the app could reliably run, I began scrutinizing UX and design. As a Design Technologist with cross-disciplinary expertise in development, human-centered design research, and UX/UI, I could identify gaps between affordances and signifiers.

These problems don't result in programmatic error states like undefined variables. They're user experiential and immediately apparent to a human agent with specific needs.

Example prompt:

"The app doesn't inform the user when it has successfully transcribed a file, likely causing a user to be confused or worse, think the app is broken. This is likely because when the app completes transcription, the only indication is when it updates an element offscreen where a user will often not see it."

Why this prompt works: it states the user consequence (confusion, or thinking the app is broken), describes the observed behavior, and offers a likely cause for Claude to verify — without prescribing the fix.

I was aware my phrasing was verbose. But I wanted to see if Claude would build a "memory" of how to understand usability issues.

What worked: Claude increasingly referenced insights and explanations I'd provided to deliver solutions consistent with my sense of usability. The conversation became self-documenting.

What didn't: After exhausting my usage limits and starting a new session later in the day, the working context was gone. All that carefully built understanding—lost.

Score: 5/10 - Can guide with insight but context doesn't persist.

Stage 3: P+F+I + Active Documentation (Days 22-47)

In my various roles as a Design Technologist, documentation is essential drudgery. In an atmosphere demanding faster, cheaper, and better, documentation intuitively falls by the wayside.

The cost: avoiding documentation means you can go fast, but you have to go alone.

Then an adage reframed my thinking:

"If you want to go fast, go alone. But if you want to go far, go together."

This reframed my relationship with Claude Code from a tool I wield to a partner that needed accommodation to reach its potential.

The hypothesis: Including regular requests to create and update project documentation would enable shared context across sessions and greater efficiency within the current session.

Example prompt:

"The workaround for deriving the file size is quite clever and we should document this for later reference as I am sure we will need to remember it later. Create guidance that you can use, and documentation that a follow-on developer can easily find and read so we can avoid the lengthy back-and-forth we just went through."

The prompt created and updated CLAUDE.md—Claude's persistent memory of the project. Benefits: context survived session resets, prompts got shorter, and hard-won workarounds stopped being rediscovered from scratch.

The velocity shift: Before P+F+I+D, high implementation speed initially but slowed by fresh context each session. After P+F+I+D, consistent pacing across sessions with increasing efficiency over time.

Score: 7/10 rising to 9.5/10 as documentation matured.

The Partnership Transformation

The shift from tool mindset to partnership mindset unlocked everything that followed:

Tool Mindset: "Claude, fix this bug." Result: Forgets next session.

Partnership Mindset: "Claude, fix this bug and document the solution in CLAUDE.md so you remember it and future developers can learn from it." Result: Knowledge persists.

This mirrors inclusive design practice—not forcing compliance, but accommodating needs. The universal design mindset I developed through accessibility consulting translated directly to AI partnership. Systems should adapt to users, not force users to adapt to systems.

The Documentation-First Principle emerged: document during implementation to enable AI partnership, not after the project is "done."

Why it works:

  1. Shared Memory: CLAUDE.md becomes persistent context
  2. Pattern Recognition: Document patterns once, apply forever
  3. Quality Assurance: Writing forces clarification of intent and catches ambiguity before it becomes a bug
  4. Onboarding: New sessions and developers start informed

The paradox: slowing down to document accelerates overall velocity.

I learned this principle in accessibility work where upfront investment in inclusive design prevents costly retrofits. Building it in from the start is faster than retrofitting later—same with documentation.

The CLAUDE.md Breakthrough

On Day 22 of the project, I created CLAUDE.md as Claude's persistent memory. The impact was immediate and measurable:

Metric                     Before        After        Improvement
Prompt length              ~350 tokens   ~200 tokens  ~40% shorter
Iterations per feature     ~3            ~1           ~65% fewer
Cross-session continuity   Low           High         Substantial
Success rate               ~70%          ~95%         Notable
Time per feature           ~2 hours      ~20 min      ~80% faster

Knowledge compounding: Each session starts with most context intact rather than rebuilding from scratch. Sessions build on one another, increasing effectiveness over time.

What goes in CLAUDE.md: architecture decisions and their rationale, clever workarounds worth remembering, testing checklists, migration plans, and the usability patterns we had already established.

The rule of thumb: CLAUDE.md contains what Claude needs to know to help effectively.
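As a concrete illustration, a trimmed CLAUDE.md might look like the sketch below. The section names and every entry are hypothetical, not copied from the actual project file:

```markdown
# CLAUDE.md — Project Memory

## Architecture
- Electron main process owns all services (no NestJS backend); the UI talks to it over IPC.
- Offline-only: never add network calls.

## Known Workarounds
- File size must be derived indirectly; see the documented workaround before touching this code.

## UX Conventions
- Every long-running operation needs a visible, on-screen completion signal.

## Testing Checklist
- Verify every IPC channel after touching main-process services.
- Run the full transcription workflow on a short and a long audio file.
```

Short, declarative entries work best: each one is context Claude loads at the start of every session.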

Architecture Migration: P+F+I+D in Action

The ultimate test of the P+F+I+D methodology came with a major systems design challenge: the architecture migration—removing the entire NestJS backend.

The original 3-layer architecture wasn't accidental. I had envisioned the separate backend enabling future integration with Anthropic's Model Context Protocol (MCP)—a privacy-preserving way to give AI assistants transcription capabilities without audio leaving the user's machine. That vision still has merit.

But user experience came first. The 7-second startup delay was unacceptable. Desktop apps need instant startup. The MCP integration could come later through different architectural approaches; the UX problem needed solving now.

BEFORE (3-layer):          AFTER (2-layer):
┌─────────────────┐        ┌─────────────────┐
│   Angular UI    │        │   Angular UI    │
└────────┬────────┘        └────────┬────────┘
         │ HTTP/WS                  │ IPC
┌────────┴────────┐        ┌────────┴────────┐
│ NestJS Backend  │        │  Electron Main  │
│  (7-sec delay)  │        │    (instant)    │
└────────┬────────┘        └─────────────────┘
         │
┌────────┴────────┐
│  Electron Main  │
└─────────────────┘

Result: -2,000 lines | -7 sec startup | 0 regressions

The challenge: Remove ~2,000 lines of backend code. Migrate to Electron-native architecture. Preserve all functionality without regressions.

Phase 1 prompt (Planning):

"The 7-second backend startup delay creates a poor user experience. Users expect near-instant startup from desktop apps. Since we're offline-only, the NestJS backend is unnecessary complexity. Analyze the current 3-layer architecture and create a migration plan to move backend services into Electron main process. Preserve all functionality—especially the job queue and real-time progress updates. Document the migration plan in CLAUDE.md so we can execute it systematically."

Phase 2 prompt (Per-Service Migration):

"Migrate TranscriptionService to electron/services/. Preserve all functionality from the list in CLAUDE.md. Add IPC handlers to main.ts. Update frontend to use IPC instead of HTTP. Test all IPC channels before proceeding. Document any clever workarounds in CLAUDE.md."
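The HTTP-to-IPC swap in Phase 2 follows a simple request/response shape. Below is a minimal TypeScript sketch of that pattern, with an in-process dispatcher standing in for Electron's `ipcMain.handle` / `ipcRenderer.invoke` pair; the service behavior and channel name are illustrative, not taken from the actual codebase:

```typescript
// Stand-in for Electron's IPC layer: handlers registered by channel
// name, invoked asynchronously with a payload.
type Handler = (payload: unknown) => Promise<unknown>;

const handlers = new Map<string, Handler>();

// Main-process side: register a handler (Electron: ipcMain.handle).
function handle(channel: string, fn: Handler): void {
  handlers.set(channel, fn);
}

// Renderer side: invoke a channel and await the result
// (Electron: ipcRenderer.invoke).
async function invoke(channel: string, payload: unknown): Promise<unknown> {
  const fn = handlers.get(channel);
  if (!fn) throw new Error(`No handler for channel: ${channel}`);
  return fn(payload);
}

// Illustrative service logic migrated from the NestJS backend into the
// main process. A real implementation would shell out to whisper.cpp here.
handle("transcription:start", async (payload) => {
  const { filePath } = payload as { filePath: string };
  return { filePath, status: "queued" };
});

// Frontend call site: was an HTTP POST to the backend, now an IPC invoke.
async function startTranscription(filePath: string): Promise<unknown> {
  return invoke("transcription:start", { filePath });
}
```

The frontend's contract barely changes — a promise in, a result out — which is why the migration could proceed one service at a time.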

Phase 3 prompt (Verification):

"Test all transcription workflows according to the checklist in CLAUDE.md. Document any regressions immediately."

Phase 4 prompt (Documentation Audit):

"We've changed the fundamental architecture. Audit ALL documentation for backend references and update. Create AUDIT_REPORT.md documenting all files reviewed, all changes made, and verification that IPC channels match implementation."

Results: ~2,000 lines of backend code removed, startup time cut from 7 seconds to near-instant, and zero regressions.

Why this succeeded: each phase was planned and documented in CLAUDE.md before execution, services were migrated one at a time with IPC channels tested before proceeding, and a final documentation audit kept the written record in sync with the new architecture.

Score: 9/10 - Orchestrating complex multi-phase refactoring.

The Design Technologist Lens

My cross-disciplinary background—Development + HCD + UX/UI—shaped how I approached prompting in ways that pure developers might miss.

Affordance-Signifier Gap Analysis: I could identify when Claude built functionality (affordance) without making it visible to users (signifier).

Inclusive design lens: Prompting should meet users where they are. Not "machine-native" language requirements, but human language accommodation—the same principle behind accessible product design.

UX problems that don't throw errors: Many issues I caught wouldn't show up in logs. They were experiential.

Example contrast:

Traditional prompt: "Fix the bug where UI doesn't update."

Design Technologist prompt: "Users don't get completion feedback. The affordance exists (transcription happens) but the signifier is missing (no visible indication). This violates immediate feedback principles."

The impact: Claude learned design thinking through consistent UX explanations. It could identify affordance-signifier gaps in later work and proactively suggest user-centered solutions.

Lessons and Anti-Patterns

Anti-Patterns to Avoid

  1. Trusting confident claims without verification - "Fully functional" was a lie
  2. Starting over each session - No persistent context means repeated work
  3. Machine-native prompts - Human language works better
  4. Treating AI as tool - Partnership produces better results
  5. Skipping documentation - "I'll remember" guarantees you won't

Lessons by Role

For Developers: Resist fixing things yourself. Let Claude learn. Request documentation of every fix. This feels slower initially but compounds into speed.

For Designers: Describe issues from user perspective. Your UX language translates to better prompts. Affordance-signifier analysis helps Claude understand intent.

For Beginners: Always verify what Claude claims. Don't trust confidence—test everything. Start documentation immediately. Quality assurance through verification is non-negotiable.

Universal Principles

  1. Create CLAUDE.md on Day 1
  2. Document continuously after every solved problem
  3. Test incrementally—fix one thing, verify, document, move on
  4. Treat Claude as teammate—embrace "slow to go fast"
  5. Use natural language; explain why, not just what
  6. Verify all claims
  7. Be patient with yourself—this is a new way of working

Conclusion: Going Far Together

This journey wasn't about mastering prompts. It was about learning to partner with AI—treating Claude as a teammate who needs accommodation, using documentation as shared memory, and applying human-centered design to AI collaboration. The result is P+F+I+D—a repeatable methodology for effective AI partnership.

I stand at intersections—between cartography and code, design and development, accessibility and AI. From those intersections, I can see something that specialists sometimes miss: that cross-functional collaboration at the boundaries between disciplines is where interesting work happens, where bridges get built.

Each "reset to zero" in my career wasn't starting over. It was integrating. Building coherence.

Documentation isn't drudgery. It's showing the people who come after you—AI or human—that you cared enough to light the path.

If you want to go fast, go alone. If you want to go far, go together.