The Problem
Career development for engineers is kind of a mess. You spend hours manually tracking what you’ve accomplished, never really know if you’re operating at the right level, and rarely get concrete advice on what to actually work on next. I wanted to build something that solved this in a way that felt natural - what if your actual work could just… inform your career growth automatically?
Eight days later, that’s exactly what I shipped.
What I Built
Paveway analyzes your GitHub activity, daily work logs, and resume to give you objective career level assessments and tell you what to focus on next. Think of it like having a career coach who actually understands the IC1-IC6 ladder and can tell you exactly where you stand based on what you’re actually doing.
The stats are pretty wild: 309 files changed, 43,000+ lines of code, 50+ commits, and 7 major features. But honestly, the real story is in the decisions that let me move this fast.
How I Built It Fast
The Stack
I went with a pretty opinionated stack because I didn’t want to waste time making trivial decisions:
- Next.js 15 with App Router and React Server Components
- Supabase for database, auth, and realtime
- TypeScript in strict mode (more on this later)
- Shadcn/UI for components
- Vercel for deployment
This gave me auth, database migrations, and a component library immediately. I literally spent zero time configuring webpack or setting up auth flows. Just started building features.
Database First, Always
Here’s something that saved me: every single feature started with a migration. Before touching any UI code, I’d design the schema, add Row Level Security policies, and think through the data model.
This sounds boring but it prevented so much pain. The database grew from 4 tables to 15+ and I never had to do emergency schema fixes or stop mid-feature to redesign things.
The database evolved from 4 foundational tables to 15+ specialized tables across three weeks:
-- Week 1: Foundation
User profiles, Daily logging, Prompt system, User responses
-- Week 2-3: Feature expansion
GitHub integration, Repository tracking, Webhook processing,
AI analysis results, Career insights, Subscription management,
Billing history, Resume analysis, User onboarding,
Access control, Activity auditing
Every migration was documented and tested before I deployed. Zero downtime, zero rollbacks.
Server Components Everywhere
I went all-in on React Server Components. If something didn’t need client-side interactivity, it was a Server Component. Period.
This was huge for performance. The billing page used to have this annoying flash where it would load, check subscription status client-side, then re-render. I moved the subscription check to the server and now users just see the correct state immediately. Plus it’s more secure since the subscription validation isn’t happening in the browser.
The GitHub Integration
This is the coolest part of the whole thing. Let me walk through how it actually works.
OAuth and Token Handling
I implemented the standard OAuth 2.0 authorization code flow with GitHub. The callback handler validates the response, exchanges the authorization code for an access token server-side, and stores the token securely; the client secret and access tokens never touch the browser.
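The post doesn't show the handler itself, so here's a minimal sketch of the CSRF-protection half of that flow. The function names and the cookie-storage detail are my own illustration, not Paveway's actual code:

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Generate an unguessable state value before redirecting to GitHub,
// stored server-side or in an httpOnly cookie.
export function createOAuthState(): string {
  return randomBytes(32).toString("hex");
}

// On callback, compare the state GitHub echoed back against the stored
// one using a constant-time comparison to avoid timing side channels.
export function isValidState(returned: string, stored: string): boolean {
  if (returned.length !== stored.length) return false;
  return timingSafeEqual(Buffer.from(returned), Buffer.from(stored));
}

// Only after the state checks out does the server POST the code to
// https://github.com/login/oauth/access_token with the client secret --
// which is why this exchange must never happen in the browser.
```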
Analyzing Your Code
Users pick which repos to analyze - you have full control over what Paveway sees. The analysis system fetches relevant GitHub data through their API and evaluates multiple signals including commit patterns, pull request activity, and code review participation to map to the IC1-IC6 framework.
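To make "multiple signals mapped to IC1-IC6" concrete, here's a toy heuristic with made-up weights and thresholds. The real assessment happens in the AI layer; this only illustrates the kind of mapping involved, and every number here is hypothetical:

```typescript
// Hypothetical shape of the signals pulled from the GitHub API.
interface ActivitySignals {
  commits: number;       // commits in the analysis window
  prsOpened: number;     // pull requests authored
  reviewsGiven: number;  // code reviews on others' PRs
  reposTouched: number;  // breadth across repositories
}

// Toy heuristic: weight collaboration and breadth more heavily than raw
// commit count (capped so commit volume can't dominate), then bucket the
// score into an IC band.
export function roughLevelEstimate(s: ActivitySignals): string {
  const score =
    Math.min(s.commits, 50) * 0.5 +
    s.prsOpened * 2 +
    s.reviewsGiven * 3 +
    s.reposTouched * 4;
  if (score < 40) return "IC1-IC2";
  if (score < 100) return "IC3";
  if (score < 180) return "IC4";
  return "IC5-IC6";
}
```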
Real-time Updates via Webhooks
I built a webhook endpoint that GitHub calls on every push or PR, enabling real-time career tracking as you work.
Each delivery is verified against its signature header (GitHub signs every payload with an HMAC of the shared webhook secret) and rate limited to prevent abuse. Every event is logged and processed asynchronously so bursts of activity don't slow down the request path.
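Signature verification is the part worth getting right. GitHub sends an `X-Hub-Signature-256` header containing an HMAC-SHA256 of the raw request body, keyed with your webhook secret. A minimal sketch (function names are mine):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute the signature GitHub would send for a given raw body.
export function signBody(rawBody: string, secret: string): string {
  return "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Compare against the X-Hub-Signature-256 header in constant time.
export function verifyGitHubSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  const expected = Buffer.from(signBody(rawBody, secret));
  const received = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check length first.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

One gotcha: the HMAC must be computed over the raw bytes of the request body, before any JSON parsing or re-serialization, or verification will fail intermittently.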
The AI Analysis Engine
Getting the AI to actually give good career assessments was harder than I expected.
Making Prompts Work
Getting accurate assessments required multiple prompt iterations. My approach provides the IC1-IC6 framework as context, includes relevant work examples, and requests evidence-based reasoning with confidence scores.
The system analyzes patterns in technical contributions and provides structured assessments including current level evaluation, demonstrated skills, gap analysis, and actionable growth recommendations.
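As a sketch of what that prompt assembly might look like (the actual prompts and framework wording are the author's; everything below is illustrative):

```typescript
// Hypothetical work-example shape fed into the prompt.
interface WorkExample {
  source: string;   // where the evidence came from, e.g. a PR or daily log
  summary: string;  // short description of the contribution
}

// Assemble the assessment prompt: framework context, evidence, then an
// explicit request for evidence-based reasoning with a confidence score.
export function buildAssessmentPrompt(examples: WorkExample[]): string {
  const framework =
    "Levels range from IC1 (guided execution of small tasks) to " +
    "IC6 (org-wide technical direction).";
  const evidence = examples
    .map((e, i) => `${i + 1}. [${e.source}] ${e.summary}`)
    .join("\n");
  return [
    "You are assessing an engineer against an IC1-IC6 career ladder.",
    framework,
    "Work examples:",
    evidence,
    "Return: the most likely level, the evidence supporting it, " +
      "identified gaps, and a confidence score from 0 to 1.",
  ].join("\n\n");
}
```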
I’m still iterating on this. The assessments are pretty good but not perfect yet.
Keeping Costs Under Control
AI analysis requires careful cost management. I implemented several optimizations including smart context windows, result caching, batch processing where appropriate, and comprehensive token tracking for cost monitoring.
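The highest-leverage of those is caching: if the underlying activity hasn't changed, there's no reason to pay for a second model call. One plausible shape, keying the cache on a hash of the analysis input (an in-memory Map here for illustration; a real version would presumably persist to the database):

```typescript
import { createHash } from "node:crypto";

const cache = new Map<string, string>();
let modelCalls = 0;

// Stand-in for the expensive AI API request.
function fakeModelCall(input: string): string {
  modelCalls++;
  return `analysis of ${input.length} chars`;
}

// Hash the input; identical inputs hit the cache instead of the model.
export function analyzeWithCache(input: string): string {
  const key = createHash("sha256").update(input).digest("hex");
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const result = fakeModelCall(input);
  cache.set(key, result);
  return result;
}

export function callCount(): number {
  return modelCalls;
}
```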
The current pricing model provides healthy unit economics while delivering weekly analyses and on-demand assessments to subscribers.
Adding Monetization
I needed to validate that people would actually pay for this, so I built Stripe integration in week two.
The Stripe Setup
The integration handles everything:
- Checkout: Click upgrade, get redirected to Stripe Checkout with a trial
- Webhooks: Stripe tells us about subscription changes
- Access Control: Middleware checks subscription on every request
- Billing Portal: Users manage their subscription through Stripe
The webhook handler processes subscription events securely and updates our database accordingly, handling subscription creation, updates, and cancellations.
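A sketch of that event fan-out, assuming subscription state is mirrored into a local record. The event names are real Stripe event types; the record shape and field names are my own simplification:

```typescript
type SubscriptionStatus = "active" | "trialing" | "canceled" | "none";

interface LocalSubscription {
  status: SubscriptionStatus;
}

// Apply one Stripe webhook event to the locally mirrored record,
// treating Stripe's reported status as the source of truth.
export function applyStripeEvent(
  record: LocalSubscription,
  eventType: string,
  stripeStatus?: string
): LocalSubscription {
  switch (eventType) {
    case "customer.subscription.created":
    case "customer.subscription.updated":
      return { status: stripeStatus === "trialing" ? "trialing" : "active" };
    case "customer.subscription.deleted":
      return { status: "canceled" };
    default:
      // Unhandled events are acknowledged but change nothing.
      return record;
  }
}
```

The real handler also has to verify Stripe's webhook signature and handle out-of-order deliveries, which is where most of that day of edge-case testing goes.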
Read-Only Mode for Free Users
Non-subscribers can see their data but can’t create new logs or trigger analyses. This was trickier than it sounds - I needed consistent subscription state across the whole app.
I built a useSubscription hook that checks once per session:
const { isSubscribed, isTrialing, needsUpgrade } = useSubscription();

if (needsUpgrade) {
  return <UpgradeBanner />;
}
Simple but effective.
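The hook's internals aren't shown here, but the flag derivation is the part that has to be consistent everywhere. One plausible version as a pure function (the status values are my assumption):

```typescript
type Status = "active" | "trialing" | "canceled" | "none";

// Derive the three UI flags from a single subscription status, so every
// component agrees on what "subscribed" means.
export function deriveFlags(status: Status) {
  const isSubscribed = status === "active" || status === "trialing";
  return {
    isSubscribed,
    isTrialing: status === "trialing",
    needsUpgrade: !isSubscribed,
  };
}
```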
Building Admin Tools Early
Look, you’re going to need admin tools eventually. Build them now while you actually understand the data model.
The admin dashboard has:
- User Management: See everyone, update roles, manage accounts
- GitHub Monitoring: Track connections, repos, webhook success rates
- Analytics: Usage metrics, subscription stats, engagement
- Audit Logs: Full activity tracking
- Prompt Management: Configure AI prompts, monitor performance
- Resume Reviews: See all resume analyses
I added role-based access control with middleware protection on admin routes. Nothing fancy, just a boolean check on every request.
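"Just a boolean check" might look something like this as a pure function, with the session shape and route prefix assumed rather than taken from Paveway's code:

```typescript
interface SessionUser {
  id: string;
  isAdmin: boolean;
}

// Gate admin routes on a role flag; everything else passes through.
export function canAccessAdminRoute(
  user: SessionUser | null,
  path: string
): boolean {
  if (!path.startsWith("/admin")) return true; // non-admin routes: open
  return user !== null && user.isAdmin;        // admin routes: role required
}
```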
Making It Fast
Speed matters. Here’s what I did.
Server Components for Real
Moving components to the server eliminated so many waterfall requests. The dashboard used to make five separate API calls. Now it’s one round-trip. Huge difference.
Lazy Loading Charts
I use Recharts for viz, but only load chart components when you actually need them. The dashboard shows summary stats immediately, then hydrates charts when you interact with them.
Proper Database Indexes
Every query has indexes. I added composite indexes for common patterns:
CREATE INDEX idx_logs_user_date
ON daily_logs(user_id, created_at DESC);
CREATE INDEX idx_webhooks_repo_created
ON webhook_events(repo_id, created_at DESC);
Query times dropped from 200ms to 15ms for the daily log timeline.
What Actually Worked
Shipping every single day. I committed to visible progress daily. This kept scope tight and prevented over-engineering. If you couldn’t see it by end of day, it didn’t happen.
Database-first design. Seriously, do your migrations first. I never had to stop and redesign the schema halfway through a feature.
TypeScript strict mode from day one. Yeah it’s annoying at first, but it caught so many bugs before they hit production. Made refactoring way less scary too.
Ship complete features. Each feature got built end-to-end: UI, API, database, admin tools, docs. I never shipped half-finished stuff and had to come back to it later.
What Was Hard
Stripe is complex. The Customer Portal integration and webhook handling took a full day just to test all the edge cases. Subscription state management is no joke.
GitHub webhooks aren’t well documented. The signature verification part was annoying to figure out. I ended up just reading GitHub’s official examples a lot.
AI prompt engineering. Getting accurate career assessments took multiple tries. Still not perfect honestly.
Accessibility. I had to do a whole pass fixing ARIA labels, keyboard nav, and screen reader stuff. Easy to forget when you’re moving fast.
What’s Next
I’m actually three weeks ahead of schedule somehow. The foundation works and the core features are solid. Now I’m focusing on:
- Beta testing with real engineers to validate the assessment accuracy
- Vector search for semantic search over your career history
- Better AI context using embeddings
- Scale testing to make sure this works under load
Goal is to launch publicly in January 2026.
Advice for Other Builders
If you’re building something similar, here’s what I’d recommend:
Use boring tech. Pick proven tools with good docs. I never fought the stack, I just built features.
Design your database first. Schema changes are expensive later. Get it right early.
Ship daily. Visible progress creates momentum and keeps you honest about what actually matters.
Build admin tools now. You’ll need them anyway. Build them while the data model is fresh in your head.
Server Components are underrated. They’re faster, simpler, and more secure. Use them by default.
Add payments early. Building Stripe integration after launch sucks. I validated people would pay in week two.
Don’t automate until you’ve done it manually. I manually analyzed GitHub activity before building automation. That informed the whole feature design.
Wrapping Up
Eight days from empty repo to feature-complete product. Daily logging, GitHub integration, AI analysis, subscriptions, onboarding, and admin tools all working.
The foundation is solid. The economics work. The tech stack scales. Now I just need to validate with real users and keep refining.
Building fast doesn’t mean building bad. It means making good decisions, staying disciplined about scope, and shipping something real every day.
Just getting started.
Paveway is currently in private beta. Want early access? Join the waitlist
This is part of a series documenting the 90-day build of Paveway. Follow along as we ship weekly and share what we learn building an AI-powered career platform.