I Applied to 250+ Jobs a Month with AI
Companies are using AI to screen you out before a human reads your resume. I built my own AI to apply on my behalf, for roles I actually want.
Job hunting has a volume problem. Networking and tailored outreach matter, and they work. But in a competitive market where most applications disappear into ATS filters before a recruiter ever sees them, volume widens your odds: more high-fit applications means more chances that one lands in front of a human. I built AppCopilot during my last job search to multiply those odds. Building it turned out to be more fun than the job hunt itself, and I have been sharing it with friends and family going through their own searches ever since.
AppCopilot scrapes major ATS platforms (Ashby, Greenhouse, Lever, Wellfound, LinkedIn, and SmartRecruiters), scores each listing against my resume using keyword-based fit scoring, and surfaces only the roles worth reviewing. The system does not replace networking or tailored outreach. It multiplies the probability that my resume gets seen by increasing the volume of high-fit applications, without sacrificing the quality of each individual submission.
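The scoring step can be sketched as simple keyword overlap between the resume and the job description. This is a minimal illustration, not AppCopilot's actual implementation; the function names and tokenization rules are my assumptions here.

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts; drops very short tokens like 'a' and 'of'."""
    return Counter(t for t in re.findall(r"[a-z0-9+#.]+", text.lower()) if len(t) > 2)

def fit_score(resume: str, job_description: str) -> float:
    """Fraction of job-description keywords that also appear in the resume."""
    resume_terms = tokenize(resume)
    jd_terms = tokenize(job_description)
    if not jd_terms:
        return 0.0
    matched = sum(count for term, count in jd_terms.items() if term in resume_terms)
    return matched / sum(jd_terms.values())
```

A real pipeline would add recency weighting and synonym handling on top, but even this crude ratio is enough to rank a day's scrape and discard obvious mismatches.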
The review interface is modeled on a swipe-based card stack. Each card shows the job summary, all required application fields, the AI's drafted answers, and a confidence score per field. When everything looks right, I swipe right. The AI submits. When something is off, I leave a short correction note: what was wrong, what should change, and the system redrafts and returns the card. That feedback loop trains the system over time.
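The card described above reduces to a small data model: per-field drafted answers with confidence scores, and a swipe decision that either submits or routes a correction note back for redrafting. The schema below is illustrative, not the actual AppCopilot code.

```python
from dataclasses import dataclass, field

@dataclass
class DraftedField:
    question: str    # the application field, e.g. "Why this company?"
    answer: str      # the AI's drafted answer
    confidence: float  # per-field confidence estimate, 0.0 to 1.0

@dataclass
class ReviewCard:
    job_summary: str
    fields: list[DraftedField] = field(default_factory=list)

    @property
    def overall_confidence(self) -> float:
        """Weakest field dominates: one bad answer sinks the whole card."""
        return min((f.confidence for f in self.fields), default=0.0)

def handle_swipe(card: ReviewCard, swipe_right: bool, note: str = "") -> str:
    """Swipe right submits; swipe left sends a correction note for redrafting."""
    if swipe_right:
        return "submit"
    return f"redraft: {note}"
```

Taking the minimum rather than the mean for the card's overall confidence is one plausible design choice: a single low-confidence answer is exactly the thing a reviewer needs to catch.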
Demo Video
Agentic Workflow
flowchart TD
A["`**Job Scraper**
Scans ATS platforms: Ashby, Greenhouse,
Lever, Wellfound, LinkedIn, SmartRecruiters`"]
B["`**Fitness Scoring Engine**
Resume-to-JD keyword matching
Fit score + recency weighting`"]
C["`**Application Packet Builder**
Identifies all required fields
Drafts answers per question`"]
D["`**Confidence Estimator**
Per-field confidence score
Overall go/no-go rating`"]
E["`**Review UI (Swipe Interface)**
Card stack: job summary + drafted answers
Swipe right = submit | Swipe left = feedback`"]
F["`**Feedback Loop**
Open-text correction
Reprocess and re-draft`"]
G["`**Autonomous Submission Layer**
High-confidence applications bypass review
Scales to 250/month`"]
A -->|Scraped listings| B
B -->|Qualified listings| C
C -->|Pre-filled packet| D
D -->|High confidence| G
E -->|Green light| G
E -. Correction feedback .-> F
F -. Re-draft .-> C
D -->|Low confidence| E
class A source;
class B,C,D ai;
class E,F feedback;
class G control;
classDef ai fill:#dbeafe,stroke:#2563eb,stroke-width:1.5px,color:#0f172a;
classDef source fill:#fef9c3,stroke:#ca8a04,stroke-width:1.5px,color:#0f172a;
classDef feedback fill:#dcfce7,stroke:#16a34a,stroke-width:1.5px,color:#0f172a;
classDef control fill:#e5e7eb,stroke:#6b7280,stroke-width:1.5px,color:#0f172a;
linkStyle 0,1,2,3,4,7 stroke:#334155,stroke-width:2.5px;
linkStyle 5,6 stroke:#7c3aed,stroke-width:2px,stroke-dasharray: 6 4;

I built a deliberate trust ladder into the architecture. Early on, I reviewed every application before submission. As the feedback loop tightened and the AI's drafts improved, I shifted from human-in-the-loop to human-on-the-loop: the AI runs autonomously for high-confidence applications, and I check in on edge cases rather than every card.
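The trust ladder boils down to a confidence gate: cards above a threshold are auto-submitted, the rest go to review, and the threshold itself tightens or relaxes based on how often recent drafts needed correction. A minimal sketch, with the threshold value and adjustment rule as illustrative assumptions:

```python
def route(card_confidence: float, threshold: float = 0.85) -> str:
    """Human-on-the-loop gating: only low-confidence cards reach the reviewer."""
    return "auto_submit" if card_confidence >= threshold else "human_review"

def adjust_threshold(threshold: float, correction_rate: float) -> float:
    """Raise the bar when swipe-left corrections are frequent; relax it when rare."""
    if correction_rate > 0.10:  # more than 10% of recent cards needed a redraft
        return min(threshold + 0.02, 0.99)
    return max(threshold - 0.01, 0.50)
```

Starting with a high threshold (review almost everything) and letting the correction rate drive it down is what turns the early 5-8 reviews a day into an oversight role over a much larger volume.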
The outcome was measurable. In the early review-everything phase, I processed 5 to 8 applications per day, limited by my own bandwidth rather than by the system. Once the confidence threshold stabilized and I shifted to oversight mode, the system was submitting around 250 applications per month, all to roles I had pre-qualified as worth pursuing. My own search has since wrapped up, but the system stayed in use with friends and family working through theirs.
Key Outcomes
- Application volume: 5-8/day (review phase) → 250/month (autonomous phase)
- Trust model: human-in-the-loop → human-on-the-loop as confidence threshold stabilized
- All applications pre-qualified by fit score