
Chamfer

AI-powered resume tailoring that selects, not generates

[Screenshot: Chamfer showing JD analysis and resume differentiators]

TL;DR

AI resume tailoring that treats your career as a Master CV and composes the optimal subset of accomplishments for each job. Multi-stage LLM pipeline (not a single mega-prompt), real-time streaming, one-page PDF output. 45 minutes per application down to 5.

The Problem

Every job application demands a tailored resume. Not just a different filename — a genuinely rewritten document where your experience is reframed through the lens of what that specific company cares about. The bullets that matter for a growth PM role at a fintech startup are completely different from the ones that matter for a platform PM role at an enterprise SaaS company, even though they come from the same career.

The typical workflow is brutal: open your resume, re-read the job description, mentally map your experience to their requirements, rewrite 15-20 bullets to mirror their language, figure out what to cut to stay on one page, and hope you didn't miss a keyword their ATS is scanning for. Repeat for every application.

I was doing this myself — spending 30-45 minutes per application on resume tailoring alone — and realized the bottleneck wasn't writing ability. It was the combinatorial problem of selecting and reframing the right subset of accomplishments from a much larger pool of experience.

The Bet

The hypothesis: if you maintain a comprehensive "Master CV" with every accomplishment you've ever had — well-structured, categorized, and metric-rich — then an AI can do the combinatorial selection and reframing work for you. Not generate fiction. Not hallucinate experience. Just pick the strongest 15-20 bullets from your pool of 30-50+, reword them to mirror the job description's exact terminology, and compose a one-page resume that reads like you wrote it specifically for that role.

I decided to build Chamfer — named after the machining term for a precision-angled edge cut. The product would do three things:

1. Help users build a comprehensive, structured Master CV from existing resumes.
2. Analyze job descriptions to extract what actually matters (not just parse keywords).
3. Compose a tailored one-page resume that maximizes match to requirements.

Key Constraint

The AI should never invent experience. Every bullet in the output must trace back to something real in the Master CV. The value is in selection and framing, not fabrication.

Building It

The MVP came together around a core loop: paste a job description, get a tailored resume, download a PDF. No accounts, no persistence — just the transformation.

The first working version had three API routes: analyze the JD to extract requirements, tailor the resume from a Master CV, and render a PDF. The resume data was hardcoded as a JSON fixture. It worked well enough to validate the core idea — the AI was genuinely good at selecting relevant bullets and mirroring JD language.

From there, the product grew in layers. Each iteration added one capability and shipped: streaming responses and outreach message generation, then job application tracking, then PDF upload so users could import existing resumes instead of entering data manually, then AI bullet rewriting during import, then CV health scoring with inline editing, and finally a Supabase backend with auth replacing localStorage.

Scope Decision: One Page Only

I deliberately constrained the output to a single-page resume. Multi-page resumes are a different product with different layout challenges. The one-page constraint also forced better bullet selection — the AI can't just include everything; it has to make real tradeoffs about what earns its spot.

Scope Decision: No Resume Templates

Instead of offering 15 template styles, Chamfer generates a single clean format — Times New Roman, traditional layout, optimized for ATS parsing. Template variety is a distraction from the core value proposition, which is content selection and framing.

Technical Decisions

Next.js App Router with SSE Streaming

The tailoring process takes 10-15 seconds with Claude, which is too long to show a loading spinner. Every AI-powered route uses Server-Sent Events to stream results in real-time. The user sees their resume materialize section by section — summary first, then experience bullets, then skills — which makes the wait feel productive rather than anxious.

The SSE implementation sends heartbeat events every 10 seconds to prevent Cloudflare from killing the connection with a 524 timeout on longer operations like PDF parsing. The streaming approach also simplified error handling — partial results are visible even if the stream fails mid-way.
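The shape of that heartbeat-wrapped stream can be sketched with the Web Streams API available in a Next.js route handler. This is an illustrative sketch, not Chamfer's actual code — `streamWithHeartbeat`, `sseEvent`, and the event names are assumed for the example:

```typescript
// Encode one SSE frame: "event: <name>\ndata: <json>\n\n"
function sseEvent(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Wrap long-running work in an SSE Response that emits periodic
// comment-only frames so proxies (e.g. Cloudflare) don't time out
// the idle connection with a 524.
function streamWithHeartbeat(
  work: (emit: (event: string, data: unknown) => void) => Promise<void>,
  heartbeatMs = 10_000,
): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      const heartbeat = setInterval(() => {
        // SSE comment line; clients ignore it, proxies see traffic.
        controller.enqueue(encoder.encode(": heartbeat\n\n"));
      }, heartbeatMs);
      const emit = (event: string, data: unknown) =>
        controller.enqueue(encoder.encode(sseEvent(event, data)));
      work(emit)
        .catch((err) => emit("error", { message: String(err) }))
        .finally(() => {
          clearInterval(heartbeat);
          controller.close();
        });
    },
  });
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    },
  });
}
```

Because each section arrives as its own named event, a failed stream still leaves the client with every section emitted before the failure.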

The PCA Bullet Framework

Early on, I found that AI output quality varied wildly depending on input quality. A vague bullet like "Worked on product improvements" gives the AI nothing to work with. So I designed a structured bullet format — Performance-Context-Achievement (PCA):

[Action Verb] — [Context/Impact] — [Result → Metric]

Example: "Led SDK development from concept to scale — secured Venmo partnership and drove 20% message engagement — 3B+ daily messages"

When users upload a PDF, Claude Vision extracts their bullets and simultaneously rewrites each one into PCA format. The original text is preserved, but the rewritten version becomes the input for tailoring. This single decision dramatically improved output consistency — the tailoring system gets well-structured inputs every time, regardless of how the user originally wrote their resume.
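As a minimal sketch, the PCA shape can be expressed as a typed structure plus a rough well-formedness check; the field names and the heuristic are assumptions for illustration, not Chamfer's schema:

```typescript
// Performance-Context-Achievement bullet as a typed structure.
interface PcaBullet {
  action: string;  // leading action-verb phrase
  context: string; // context / impact
  result: string;  // result with a metric
}

function formatPca(b: PcaBullet): string {
  return `${b.action} — ${b.context} — ${b.result}`;
}

// Rough heuristic: three dash-separated segments, with at least
// one digit in the result segment (a metric).
function looksLikePca(text: string): boolean {
  const parts = text.split(" — ");
  return parts.length === 3 && /\d/.test(parts[2]);
}
```

A validator like this is what lets the pipeline assume well-structured inputs regardless of how the original resume was written.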

10-Step Tailoring Pipeline

Rather than giving Claude a single "tailor this resume" instruction, the tailoring prompt walks through a 10-step optimization process:

1. Mirror exact ATS key phrases verbatim.
2. Write a summary addressing every must-have with quantified proof.
3. Select bullets prioritizing metrics and requirement matches.
4. Ensure every responsibility has a corresponding bullet.
5. Order bullets with the strongest proof at the top.
6. Edit bullets to use the JD's exact terminology.
7. Frame experience through the company's strategic lens.
8. Bridge gaps with transferable experience.
9. Curate skills to only JD-relevant ones.
10. Score confidence honestly with keyword coverage metrics.

The structured pipeline makes the AI's reasoning transparent and consistent. The gap-bridging step is particularly important — when a user doesn't have direct experience with a requirement, the AI reframes adjacent experience rather than omitting it or admitting a gap.
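The steps above can be sketched as an ordered list composed into one prompt; the exact wording of Chamfer's prompts is not shown here, so the step strings and `buildTailoringPrompt` are illustrative paraphrases:

```typescript
// Ordered tailoring steps, paraphrased from the pipeline description.
const TAILORING_STEPS: readonly string[] = [
  "Mirror exact ATS key phrases from the JD verbatim.",
  "Write a summary addressing every must-have with quantified proof.",
  "Select bullets prioritizing metrics and requirement matches.",
  "Ensure every listed responsibility has a corresponding bullet.",
  "Order bullets with the strongest proof at the top.",
  "Edit bullets to use the JD's exact terminology.",
  "Frame experience through the company's strategic lens.",
  "Bridge gaps with transferable experience.",
  "Curate skills to only JD-relevant ones.",
  "Score confidence honestly with keyword coverage metrics.",
];

// Compose the steps, JD, and Master CV into a single structured prompt.
function buildTailoringPrompt(jd: string, masterCv: string): string {
  const steps = TAILORING_STEPS.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `Follow these steps in order:\n${steps}\n\nJOB DESCRIPTION:\n${jd}\n\nMASTER CV:\n${masterCv}`;
}
```

Keeping the steps as data rather than frozen prose also makes it easy to reorder or A/B individual steps without rewriting the whole prompt.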

Confidence Scoring and Auto-Optimization

After tailoring, the system returns a confidence assessment: an overall score (0-100), per-requirement match levels (strong/moderate/weak/gap), ATS keyword coverage, and specific suggestions for improvement.

If the initial score lands below 90%, the system automatically runs a second optimization pass — silently generating targeted instructions to upgrade moderate matches, bridge gaps, and weave in missing keywords. This happens in the background while the user reads their first draft.
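The trigger logic can be sketched as a pure function from the assessment to a list of follow-up instructions; the type names, field names, and instruction wording are assumptions for illustration:

```typescript
type MatchLevel = "strong" | "moderate" | "weak" | "gap";

interface Assessment {
  overall: number; // 0-100 confidence score
  requirements: { text: string; level: MatchLevel }[];
  missingKeywords: string[];
}

// Below the threshold, derive targeted instructions for a second
// optimization pass; at or above it, do nothing.
function optimizationInstructions(a: Assessment, threshold = 90): string[] {
  if (a.overall >= threshold) return [];
  const out: string[] = [];
  for (const r of a.requirements) {
    if (r.level === "moderate" || r.level === "weak")
      out.push(`Strengthen the match for: ${r.text}`);
    if (r.level === "gap")
      out.push(`Bridge with transferable experience: ${r.text}`);
  }
  if (a.missingKeywords.length > 0)
    out.push(`Weave in missing keywords: ${a.missingKeywords.join(", ")}`);
  return out;
}
```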

The user then sees "Fix" buttons on individual requirements and "Apply" buttons on suggestions. Each generates a narrowly scoped prompt that modifies only the relevant bullets — solving a critical problem where asking an AI to "improve one thing" typically causes it to rewrite the entire document.

Every refinement prompt starts with an explicit constraint header: "TARGETED FIX — Do NOT modify bullets, summary text, or skills that are unrelated to this specific instruction." Without this, Claude would helpfully restructure the entire resume every time the user asked it to strengthen one bullet. The constraint keeps refinement predictable and trustworthy.
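Mechanically this is just a prefix on every refinement prompt; the header wording follows the quoted text above, while the builder function is an illustrative sketch:

```typescript
// Scope constraint prepended to every refinement prompt, as quoted above.
const CONSTRAINT_HEADER =
  "TARGETED FIX — Do NOT modify bullets, summary text, or skills " +
  "that are unrelated to this specific instruction.";

// Wrap a narrowly scoped instruction in the constraint header.
function buildTargetedFixPrompt(instruction: string): string {
  return `${CONSTRAINT_HEADER}\n\nINSTRUCTION: ${instruction}`;
}
```

Centralizing the header in one constant means every "Fix" and "Apply" button goes through the same guardrail rather than each call site remembering to include it.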

Evolution

After the MVP proved the core tailoring loop worked, the product grew along three axes:

Depth of Tailoring Intelligence

The initial version did basic bullet selection. Over time, I added ATS keyword extraction (exact n-gram phrases, not just individual words), company thesis generation (framing experience through what the company cares about strategically), nice-to-have matching, responsibilities coverage, and gap bridging. The confidence scoring system made this improvement measurable — early versions typically scored 60-70%, while the current pipeline consistently hits 85-95%.
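The n-gram idea can be illustrated with simple whitespace tokenization; Chamfer's actual extraction is LLM-driven, so this sketch only shows why exact phrases differ from individual words:

```typescript
// Extract unique n-word phrases from text, lowercased and stripped of
// punctuation. "Product led growth" as a bigram pool yields exact
// phrases like "product led", not just the words in isolation.
function extractNgrams(text: string, n: number): string[] {
  const tokens = text
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, " ")
    .split(/\s+/)
    .filter(Boolean);
  const grams = new Set<string>();
  for (let i = 0; i + n <= tokens.length; i++) {
    grams.add(tokens.slice(i, i + n).join(" "));
  }
  return [...grams];
}
```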

CV Building as a First-Class Experience

Originally, the Master CV was just an input to the tailoring system. I realized that helping users build a better Master CV was equally valuable — most people's resumes are underselling their experience. Adding PDF upload with AI rewriting, health scoring across 7 competency categories, and a deep-dive conversational feature (where the AI asks targeted questions to surface accomplishments you forgot to include) turned CV building into a product in itself.
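One way to picture category-based health scoring is coverage across competency buckets. This is a hypothetical sketch — the seven category names and the coverage formula are my assumptions, not Chamfer's actual rubric:

```typescript
// Hypothetical competency categories; Chamfer's real list is not shown
// in this writeup.
const CATEGORIES = [
  "leadership",
  "execution",
  "strategy",
  "metrics",
  "collaboration",
  "technical",
  "communication",
] as const;

// Score = percentage of categories with at least one supporting bullet.
function healthScore(bulletsByCategory: Record<string, number>): number {
  const covered = CATEGORIES.filter((c) => (bulletsByCategory[c] ?? 0) > 0).length;
  return Math.round((covered / CATEGORIES.length) * 100);
}
```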

Application Lifecycle Tracking

Tailoring a resume is one step in a longer workflow. The application tracker, outreach message generation (hiring manager, recruiter, referrer), Q&A answer generation for application forms, and pipeline visualization turned Chamfer from a single-use tool into a persistent workspace.

What I'm Not Building

- Resume templates — one clean format, optimized for ATS.
- Cover letter generation — different enough to deserve its own product thinking.
- Multi-page support — the one-page constraint forces better content decisions.

What I Learned

The Core Insight

The core insight isn't "AI can write resumes." It's that resume tailoring is a selection and reframing problem, not a generation problem. The value comes from having a comprehensive pool of real accomplishments and an intelligent system for composing the optimal subset for each opportunity. This distinction — curation over creation — is what makes the output trustworthy. Every bullet traces back to something the user actually did.

Prompt Architecture > Prompt Engineering

Working with Claude across multiple product surfaces taught me that prompt architecture matters more than prompt engineering. A single mega-prompt asking "tailor this resume" produces mediocre results. Breaking it into a structured pipeline — analyze the JD first, then tailor against specific extracted requirements, then score, then auto-optimize gaps — produces dramatically better output.

The other hard-won lesson: scope discipline is everything in iterative AI workflows. When users refine their resume through multiple rounds of feedback, the AI's instinct is to rewrite broadly. Explicit constraint headers make the difference between a tool that feels predictable and one that feels chaotic.

Shipping Fast, Intentionally

Chamfer went from idea to working product in about two weeks of focused building, then continued evolving through 27 PRs. The pace was possible because of deliberate scope constraints — one resume format, JSONB storage instead of normalized schemas, client-side Master CV in API requests instead of server-side data fetching. Each of these was a conscious trade of theoretical elegance for shipping speed.

Building quickly means being willing to delete quickly too. I built an invite code system, shipped it, realized it was friction with no upside at the current stage, and removed it entirely. The migrations are still in the repo as artifacts.