Automation AA-014

Job Apply Assistant

Semi-automated job application pipeline with 5 stages and 3 human decision gates — Playwright scrapes postings across ATS platforms, analyzes requirements against a master profile, tailors resumes and cover letters, exports to 4 formats, and navigates application forms without auto-submitting.

01 — Problem

The Mechanical Burden of Applying for Work

Job applications exist in a peculiar purgatory — too consequential to automate fully, too repetitive to justify the manual labor they demand. Each application requires the same ritual: copy the job description into a document, read it carefully for keywords, rewrite the resume to mirror the posting’s language, draft a cover letter that sounds both personalized and professional, export to 4 formats (MD, TXT, DOCX, PDF), then navigate an application form that asks for everything already contained in the documents just uploaded. A single thoughtful application consumes 45–60 minutes. Multiply by 30 applications in an active search, and you’ve lost an entire work week to mechanical repetition.

I didn’t need full automation — the judgment calls (which jobs to pursue, which experiences to emphasize, how to frame a career transition) require a human. I needed a system that handled the 80% that doesn’t: the scraping, the reformatting, the document generation, and the form navigation. A pipeline that preserves human decision-making while eliminating the labor that surrounds it.

02 — Architecture

Five Stages, Human-in-the-Loop at Every Gate

The pipeline operates as a semi-automated assembly line with explicit human decision points between each stage:

Stage 1 — Job Description Scraping (Playwright)

Given a URL, Playwright navigates to the posting, extracts the structured content (title, company, requirements, qualifications, description), and normalizes it into a standard JSON schema. Handles dynamic rendering (SPAs, lazy-loaded content) and multiple ATS platforms (Workday, Greenhouse, Lever, iCIMS). The scraper adapts selectors per platform rather than relying on a generic extractor — specificity beats generality when every ATS renders differently.
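The per-platform approach can be sketched as a selector map plus a normalizer into the standard schema. Everything below is illustrative: the selector strings and field names are assumptions, not the production values, and in the real pipeline Playwright resolves each selector (e.g. via `page.text_content(selector)`) before normalization runs.

```python
from dataclasses import dataclass, asdict, field

# One selector map per ATS platform — specificity beats generality.
# These selectors are hypothetical examples, not the real ones.
SELECTORS = {
    "greenhouse": {"title": "h1.app-title", "company": "span.company-name",
                   "description": "#content"},
    "lever": {"title": ".posting-headline h2", "company": ".main-header-text",
              "description": ".section-wrapper"},
}

@dataclass
class Posting:
    """Standard JSON schema every platform normalizes into."""
    title: str = ""
    company: str = ""
    requirements: list = field(default_factory=list)
    qualifications: list = field(default_factory=list)
    description: str = ""

def normalize(raw: dict) -> dict:
    """Map raw scraped fields (selector results) into the standard schema."""
    return asdict(Posting(
        title=raw.get("title", "").strip(),
        company=raw.get("company", "").strip(),
        requirements=raw.get("requirements", []),
        qualifications=raw.get("qualifications", []),
        description=raw.get("description", "").strip(),
    ))
```

A new ATS platform is supported by adding one entry to `SELECTORS`; the downstream stages only ever see the normalized schema.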

Stage 2 — Requirements Analysis

The scraped job description is analyzed against a master candidate profile — a structured document containing all skills, experiences, certifications, and accomplishments. The system identifies which requirements match existing profile entries (direct hits), which have partial overlap (reframeable), and which are gaps. This analysis drives the tailoring decisions in the next stage. Human gate: review the match analysis and decide whether to proceed or skip this posting.
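The three-bucket classification can be sketched as simple keyword matching against the master profile. This is a minimal sketch under assumed data shapes: `partial_map` is a hypothetical mapping from a posting keyword to a related profile entry (the "reframeable" signal); the real analysis is richer than substring matching.

```python
def classify_requirements(requirements, profile_skills, partial_map=None):
    """Bucket posting requirements into direct hits, partial overlaps, and gaps.

    requirements:   list of requirement strings from the scraped posting
    profile_skills: skill strings from the master candidate profile
    partial_map:    assumed keyword -> related-profile-entry mapping,
                    used to flag requirements that can be reframed
    """
    partial_map = partial_map or {}
    skills = {s.lower() for s in profile_skills}
    buckets = {"direct": [], "partial": [], "gap": []}
    for req in requirements:
        r = req.lower()
        if any(s in r for s in skills):
            buckets["direct"].append(req)      # matches a profile entry
        elif any(k.lower() in r for k in partial_map):
            buckets["partial"].append(req)     # reframeable via related skill
        else:
            buckets["gap"].append(req)         # genuine gap
    return buckets
```

The human gate then reviews these buckets: many gaps suggests skipping the posting; many partials suggests the tailoring stage has reframing work to do.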

Stage 3 — Document Tailoring

Using the match analysis, the pipeline generates a tailored resume (reordering and emphasizing relevant experience) and a cover letter (addressing specific requirements by name). Both documents reference the master profile as source material but restructure it to mirror the posting’s language and priorities. Human gate: review and edit both documents before export.
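The reordering step can be sketched as scoring each experience entry by overlap with the posting's matched terms. The `(bullet, tags)` shape and the tag-overlap heuristic are assumptions for illustration; the full tailoring also rewrites wording, not just ordering.

```python
def reorder_experience(experiences, match_terms):
    """Sort experience entries so those hitting the posting's language lead.

    experiences: list of (bullet_text, tags) pairs from the master profile
    match_terms: terms surfaced by the requirements analysis
    """
    terms = {t.lower() for t in match_terms}

    def relevance(entry):
        _, tags = entry
        return len(terms & {t.lower() for t in tags})

    # Stable sort: equally relevant entries keep their profile order.
    return sorted(experiences, key=relevance, reverse=True)
```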

Stage 4 — Multi-Format Export

The reviewed documents are exported simultaneously to Markdown, plain text, DOCX, and PDF. Each format is generated from the same source to prevent version drift. The DOCX and PDF outputs use branded templates with consistent typography and layout.

Stage 5 — Form Navigation Assist

Playwright opens the application form and pre-fills fields from the candidate profile: name, email, phone, work history, education. It does not auto-submit — the browser pauses at each page for human review. Human gate: verify all pre-filled data and click submit manually. This stage saves the most tedious minutes while preserving full accountability for what gets submitted.
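The pre-fill step can be sketched as a selector-to-profile mapping resolved before the browser types anything. The selector strings below are hypothetical (real ATS forms vary per platform); in the live run, Playwright calls `page.fill(selector, value)` for each resolved pair and then pauses for the human gate rather than submitting.

```python
# Assumed form-field selector -> profile-key mapping for a typical ATS form.
FIELD_MAP = {
    "input[name='first_name']": "first_name",
    "input[name='last_name']": "last_name",
    "input[name='email']": "email",
    "input[name='phone']": "phone",
}

def prefill_values(profile: dict) -> dict:
    """Resolve each form selector to the value that would be typed into it.

    Missing profile keys resolve to "" so the human reviewer sees an
    empty field instead of a guessed value — nothing is fabricated.
    """
    return {sel: profile.get(key, "") for sel, key in FIELD_MAP.items()}
```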

Key Design Decisions

Why human-in-the-loop instead of full automation? Fully automated job applications are both ethically questionable and practically ineffective. Mass-submitting generic applications signals desperation to recruiters and dilutes the quality of every application. The HITL model ensures each submission reflects deliberate judgment about fit, framing, and emphasis. The pipeline accelerates the labor; it doesn’t replace the thinking.

Why Playwright instead of HTTP requests for scraping? Modern ATS platforms render job descriptions via JavaScript — the content doesn’t exist in the initial HTML response. Playwright runs a real browser, waits for dynamic rendering, and extracts the fully-rendered DOM. This adds ~3 seconds per scrape but reliably captures content that request-based scrapers miss entirely.

03 — Outcomes

Measured Results

5 — Pipeline Stages
from URL input to submitted application with human gates

70% — Time Reduction
per application — from 45–60 min to 12–15 min average

4 — Output Formats
MD, TXT, DOCX, PDF — generated from single source

3 — Human Decision Gates
match review, document edit, and submission verification

04 — Reflection

Automation With Accountability

The philosophical question at the center of this project is one I think about constantly: where does useful automation end and harmful abdication begin? The job search is one of the highest-stakes activities in adult life — the consequences of each application land on a real person’s career trajectory. Automating the mechanical parts (scraping, formatting, form-filling) is clearly net-positive. Automating the judgment parts (which jobs to pursue, how to frame your story, whether this role serves your long-term trajectory) would be net-negative. The pipeline’s explicit HITL gates aren’t a limitation. They’re the design.

What I’d change: the requirements analysis stage currently treats all listed qualifications as equally weighted. In reality, the first 3 requirements in a posting are typically must-haves while the last 3 are nice-to-haves. Adding a positional weighting model — where requirements earlier in the posting carry more weight — would produce more accurate match scores and better tailoring decisions.
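The positional weighting idea can be sketched with a geometric decay, where requirement i carries weight proportional to decay^i and weights are normalized to sum to 1. The decay factor of 0.85 is an assumed starting point, not a tuned value.

```python
def positional_weights(n: int, decay: float = 0.85) -> list:
    """Weights for n requirements in posting order: earlier ones dominate.

    Assumed geometric decay, normalized so the weights sum to 1.
    """
    raw = [decay ** i for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_match_score(requirement_hits: list, decay: float = 0.85) -> float:
    """Match score from 0/1 hit flags listed in posting order.

    A hit on the first (likely must-have) requirement is worth more
    than a hit on the last (likely nice-to-have) one.
    """
    weights = positional_weights(len(requirement_hits), decay)
    return sum(w * h for w, h in zip(weights, requirement_hits))
```

Under this scheme, matching only the first of six requirements scores higher than matching only the last, which is the behavior the current equal-weight analysis misses.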

“The question isn’t whether to automate the job search. It’s which parts deserve your attention and which parts are stealing it. Automate the theft. Keep the judgment.”

Outcomes

5 pipeline stages with 3 human decision gates; 70% time reduction per application (45–60 min to 12–15 min); 4 simultaneous output formats; multi-ATS scraping via Playwright