Implementation Blueprints

How to Build Each Project

For each of the 10 shortlisted projects: the impact target, tech stack, step-by-step implementation plan, key milestones, and how to talk about it in a college essay.


01

No-Ad News Aggregator

AI / ML · Full-Stack Systems
🎯 Impact Target

1,000+ daily unique readers getting factual, ad-free news summaries generated by a local LLM from free RSS feeds. Real analytics. Real return traffic. Evidence of sustained readership over 4+ weeks.

Tech Stack
Python · Ollama (local LLM) · Feedparser / RSS · Flask / FastAPI · PostgreSQL · React / Next.js · Cloudflare Pages · Plausible Analytics
Key Constraints
  • LLM must run locally (Ollama) — no OpenAI API calls
  • No ads, no tracking scripts, no paywall
  • Must hit 1,000 daily users to count as impact
  • RSS feeds must be free and public domain
Step-by-Step Implementation
Phase 1 — Foundation
Weeks 1–4
  • → Set up Ollama on local machine (M1/M2/M3 Mac or Linux box)
  • → Pull a summarization-capable model (Mistral 7B or Phi-3)
  • → Build RSS feed ingestion script (Feedparser + scheduling; see the sketch after this list)
  • → Test summarization prompts; evaluate quality manually
  • → Set up PostgreSQL schema for articles + summaries
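
A minimal sketch of the ingestion-plus-summarization loop from this phase, assuming the feedparser and ollama Python packages; the feed URL, model name, and prompt wording are placeholders to adapt.

    import feedparser
    import ollama

    FEEDS = ["https://feeds.npr.org/1001/rss.xml"]  # placeholder; use your own vetted list

    def summarize(text: str) -> str:
        # Ask the local model for a short, neutral summary.
        response = ollama.chat(
            model="mistral",
            messages=[{
                "role": "user",
                "content": f"Summarize in three neutral, factual sentences:\n\n{text}",
            }],
        )
        return response["message"]["content"]

    for url in FEEDS:
        for entry in feedparser.parse(url).entries[:10]:
            summary = summarize(entry.get("summary", entry.title))
            print(entry.title, "->", summary)  # Phase 1: print; later, INSERT into PostgreSQL

Manual quality evaluation in this phase is simply reading these printouts against the source articles.
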
Phase 2 — Product
Weeks 5–8
  • → Build frontend: clean reading UI (Next.js, mobile-first)
  • → Add email newsletter (Buttondown or ConvertKit free tier)
  • → Integrate Plausible Analytics (privacy-friendly)
  • → Add personalization: topic preferences, reading time
  • → Set up cron job for twice-daily digest generation (sketched after this list)
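
One hedged way to wire the twice-daily digest: a standalone script driven by cron (schedule shown as a comment). The table names and the send_newsletter helper are assumptions standing in for your own Phase 1 schema and your newsletter provider's API.

    # crontab: 0 6,18 * * *  /usr/bin/python3 /path/to/generate_digest.py
    import datetime
    import psycopg2

    def build_digest() -> str:
        conn = psycopg2.connect("dbname=news")
        cur = conn.cursor()
        cur.execute(
            "SELECT title, summary FROM summaries WHERE created_at > %s",
            (datetime.datetime.now() - datetime.timedelta(hours=12),),
        )
        body = "\n\n".join(f"{title}\n{summary}" for title, summary in cur.fetchall())
        conn.close()
        return body

    if __name__ == "__main__":
        digest = build_digest()
        # send_newsletter(digest)  # hypothetical wrapper around your provider's API
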
Phase 3 — Growth
Weeks 9–14
  • → Deploy frontend to Cloudflare Pages (free)
  • → Seed initial audience: post on Reddit, Hacker News, and Indie Hackers
  • → Iterate on prompt quality based on user feedback
  • → Add "submit a feed" feature for user-suggested sources
  • → Hit 1,000 daily users; document with analytics screenshot

📝 The Essay Angle

"I noticed my grandparents were falling for misinformation. Facebook's algorithm wouldn't help them — it profited from their confusion. So I built a news aggregator with no ads, no algorithm, no manipulation — just facts, twice a day. Getting to 1,000 daily readers meant learning not just how to code, but how to write a newsletter people actually want to open."

🏆 What This Proves

Full-Stack Engineering · Systems Administration · Product Thinking · Marketing & Growth · LLM Operations

Verifiable via: live site URL, Plausible analytics dashboard, GitHub repo, newsletter subscriber count.

02

Local Tutoring Marketplace

Web / Mobile · Leadership
🎯 Impact Target

100+ high school tutors onboarded. 300+ younger students matched with tutors. $5,000+ in combined tutor earnings. Real revenue (even token amounts) flowing through the platform. All within one school year.

Tech Stack
Next.js · Supabase (Auth + DB) · Stripe Connect · Tailwind CSS · Vercel (free tier) · Calendly API
Key Constraints
  • Tutors must be verified (teacher recommendation required)
  • Payments must work for HS students (Stripe Connect restrictions)
  • Must recruit 100+ tutors — building it isn't enough
  • Must match 300+ students to count as real impact
Step-by-Step Implementation
Phase 1 — Recruitment + Research
Weeks 1–3
  • → Interview 10 students: What would make you tutor? What would make you pay?
  • → Get teacher sign-off for tutor verification process
  • → Recruit first 5 tutors (friends, IB students, NHS members)
  • → Design brand, name, and value proposition
Phase 2 — Build MVP
Weeks 4–8
  • → Set up Supabase: auth, profiles (tutor/student), subjects
  • → Build search + match UI (subject, availability, rating)
  • → Integrate Stripe for payments, even if payouts are just gift cards to start (see the sketch after this list)
  • → Add Calendly for session scheduling
  • → Launch with 5 tutors, 20 students (beta)
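
For the payment leg, a hedged sketch using Stripe's Python library with a Connect "destination charge" (the student pays the platform; funds route to the tutor's connected account). The fee percentage and account ID are illustrative, and Stripe's onboarding and age rules for connected accounts still apply.

    import stripe

    stripe.api_key = "sk_test_..."  # test-mode key during beta

    def charge_for_session(tutor_account_id: str, amount_cents: int):
        # Destination charge: platform collects the payment, transfers it to the
        # tutor's connected account, keeping an optional platform fee.
        return stripe.PaymentIntent.create(
            amount=amount_cents,
            currency="usd",
            application_fee_amount=amount_cents // 20,  # 5% fee (illustrative)
            transfer_data={"destination": tutor_account_id},
        )
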
Phase 3 — Scale
Weeks 9–18
  • → Recruit school counselor as advisor (credibility)
  • → Expand to 2 more schools (poster campaign + teacher referral)
  • → Add group tutoring feature (3 students, lower price)
  • → Hit 100 tutors, 300 students, $5K tutor earnings
  • → Document revenue and student grade improvement data

📝 The Essay Angle

"I noticed the kids who needed tutoring most couldn't afford it. So I built a platform where the smartest kids in school could teach — and earn something doing it. But the hardest part wasn't the code. It was convincing 100 of my classmates to trust a stranger with their little siblings. That required 40 cold emails, 12 in-person meetings, and a working Stripe integration."

🏆 What This Proves

Full-Stack Development · Marketplace Thinking · Sales & Recruitment · Revenue Generation · Community Leadership

Verifiable via: live site, Stripe dashboard (redacted), tutor count, student testimonials.

03

Girls Who Code Chapter

Community · Curriculum
🎯 Impact Target

30+ active chapter members. 50+ girls completing the curriculum. 10 members continuing to AP Computer Science the following year. 20 HS volunteers recruited and trained as instructors. Official Girls Who Code affiliation (brand credibility + resources).

Tech Stack (for the chapter tool)
Replit (for curriculum) · Google Sites · Notion (chapter management) · Scratch / Python · GitHub Pages
What Makes This Real
  • Must be an official GWC chapter (not just "inspired by")
  • Must track: enrollment, completion, AP CS continuation
  • The code component must be meaningful (not just tutorial completion)
  • Student must be founder AND primary instructor
Step-by-Step Implementation
Phase 1 — Founding
Weeks 1–4
  • → Apply for official Girls Who Code chapter affiliation
  • → Secure faculty advisor (CS teacher or librarian)
  • → Design 12-week curriculum: Python basics → final project (a sample first exercise follows this list)
  • → Recruit first 10 members (via NHS, counselor referral)
  • → Build chapter website with curriculum overview + impact tracker
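
To make the "meaningful code component" concrete, a possible week-one exercise (contents illustrative): members start from a working program and change it, rather than facing a blank page.

    # Week 1 starter quiz: run it, then add your own questions and a scoring twist.
    questions = {
        "What does print() do in Python?": "displays output",
        "What symbol starts a comment?": "#",
    }

    score = 0
    for question, answer in questions.items():
        guess = input(question + " ")
        if guess.strip().lower() == answer:
            score += 1

    print(f"You got {score} of {len(questions)} right!")
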
Phase 2 — Execution
Weeks 5–14
  • → Run weekly 90-min sessions (after school or lunch)
  • → Recruit HS volunteers as teaching assistants (10 to start, 20 by year's end)
  • → Train volunteers: pedagogy basics, lesson delivery
  • → Members build final projects in small groups
  • → Final project showcase (invite parents, principal, press)
Phase 3 — Proof
Weeks 15–24
  • → Track AP CS enrollment: how many GWC members enrolled?
  • → Collect testimonials from members and parents
  • → Submit impact report to Girls Who Code national
  • → Document curriculum on GitHub Pages (open source)
  • → Apply for chapter awards (NCWIT, etc.)

📝 The Essay Angle

"My AP Computer Science class had 4 girls in it — including me. I couldn't change the class, but I could build something outside it. Girls Who Code wasn't just about teaching Python. It was about showing 30 girls that code was a place they belonged. The hardest part wasn't the curriculum. It was convincing my school that it was worth the classroom space on a Tuesday afternoon."

🏆 What This Proves

Founding Leadership · Curriculum Design · Volunteer Management · Advocacy · Teaching

Verifiable via: GWC national affiliation, faculty advisor letter, member count, AP CS enrollment data.

04

Open Source Bug Fixes

Systems Engineering
🎯 Impact Target

3+ merged pull requests in real, actively used open source projects. PRs must touch meaningful code (not just docs fixes). Each PR must have gone through code review. Changelog / release note acknowledgment preferred. Target projects: Rust, Python, Vue.js, React, Node.js.

How to Find Good Issues
  • github.com/python/cpython "good first issue" label
  • github.com/rust-lang/rust "E-easy" tag
  • github.com/facebook/react "good first bug"
  • github.com/vitejs/vite "help wanted"
  • Look for: memory leaks, error handling gaps, test coverage, type errors
What Counts
  • ✅ Bug fix with test + regression prevention (example after this list)
  • ✅ Performance improvement with benchmark
  • ✅ Error message improvement (UX for developers)
  • ❌ Docs-only (doesn't count as engineering)
  • ❌ Tutorial-level changes (won't be merged)
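
What "bug fix with test + regression prevention" looks like in miniature, as a hedged generic example (the function and bug are invented; a real PR follows the target project's own test conventions):

    def parse_port(value: str) -> int:
        """Fixed: previously raised ValueError on surrounding whitespace ('8080 ')."""
        return int(value.strip())

    def test_parse_port_strips_whitespace():
        # Regression test: pins the fixed behavior so it cannot silently break again.
        assert parse_port(" 8080 ") == 8080
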
Step-by-Step Implementation
Phase 1 — Setup + First Contribution
Weeks 1–4
  • → Set up development environment for chosen project(s)
  • → Spend 5+ hours reading the codebase before touching anything
  • → Build the project from source; run the test suite
  • → Find and claim a "good first issue" (comment: "I can work on this")
  • → Submit first PR; respond to review feedback within 48hrs
Phase 2 — Build Momentum
Weeks 5–10
  • → Get first PR merged; study the review comments closely
  • → Submit 3 more PRs in the same or related project
  • → Build relationships with maintainers (be helpful, not demanding)
  • → Read project governance: how are decisions made?
  • → Contribute to issue triage (help label and verify issues)
Phase 3 — Document Impact
Weeks 11–14
  • → Collect PR links, merge confirmations, changelog entries
  • → For each PR: write what it fixed, why it mattered, what you learned
  • → Ask maintainer for brief testimonial (most are happy to)
  • → Create a portfolio page showing all contributions with links

📝 The Essay Angle

"Contributing to Python's standard library felt like walking into a room full of the most rigorous engineers in the world and asking to help carry something. My first PR was rejected twice before it was good enough to merge. The third time, a core developer said it was 'exactly the kind of contribution the project needed.' I framed that comment."

🏆 What This Proves

Production Code Quality · Code Review Navigation · Codebase Reading · Community Citizenship · Persistence

Verifiable via: github.com/[username]?tab=overview&from=YYYY-MM-DD, merged PR links, release notes.

05

Club Leadership Hub

Web / Mobile · Leadership
🎯 Impact Target

15 school clubs using the platform. 500+ total student users across clubs. The student's own club is the pilot; 14 other clubs adopt it voluntarily. Evidence of improved club governance: attendance, event planning, election integrity.

Tech Stack
Next.js · Supabase · Tailwind CSS · Clerk Auth · Vercel
Minimum Viable Features
  • Club profile pages with description + leadership team
  • Event calendar (create, RSVP, reminders)
  • Attendance tracking (who showed up)
  • Officer elections (secret ballot, vote tallying; tally sketched after this list)
  • Announcements feed per club
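
The tally itself is small; the secret-ballot property comes from the schema, not the math. A sketch assuming ballots are stored as bare candidate choices with no voter identifier attached:

    from collections import Counter

    def tally(ballots: list[str]) -> list[tuple[str, int]]:
        # Each ballot is only a candidate name; no voter ID is ever stored with it.
        return Counter(ballots).most_common()

    print(tally(["Priya", "Sam", "Priya"]))  # -> [('Priya', 2), ('Sam', 1)]
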
Step-by-Step Implementation
Phase 1 — Build for Self
Weeks 1–5
  • → Build the platform for the student's own club first
  • → Run it for one semester: test attendance, event planning
  • → Collect feedback from club officers and members
  • → Iterate on UX based on real usage (not guesses)
  • → Document baseline: how was this managed before?
Phase 2 — Expansion
Weeks 6–12
  • → Approach 3 other clubs: offer free migration + training
  • → Customize the platform slightly for each club type
  • → Build a "club admin handbook" (so you're not support forever)
  • → Recruit 3 club presidents as advocates/reference letters
  • → Expand to 10 clubs total
Phase 3 — Proof
Weeks 13–18
  • → 15 clubs, 500+ users (capture analytics screenshot)
  • → Document: which clubs, how many events planned, elections run
  • → Collect testimonials from club presidents
  • → Create a case study: "How [School Name] clubs run on my platform"

📝 The Essay Angle

"Our student council had no way to track which clubs were active, when they met, or who was actually showing up. I built a platform to fix it for my own club. Then I realized: if it worked for me, it could work for everyone. Convincing 14 other club presidents to switch was harder than the code. I learned that the best software solves a problem people already know they have."

🏆 What This Proves

Full-Stack Development · User Research · Product Adoption · Stakeholder Management · Scaling a System

Verifiable via: live site, analytics (500+ users), school records of club participation, club president testimonials.

06

Study Buddy AI Tutor

AI / ML · EdTech
🎯 Impact Target

200+ students used the tutor over one semester. Before/after grade data for at least 50 students showing measurable improvement. Concept mastery tracking proving the AI adapted to individual student weaknesses. IRB approval if publishing data.

Tech Stack
Python / FastAPI · Ollama (local LLM) · SQLite / PostgreSQL · React Native / Expo · JWT Auth · GitHub Actions (CI)
Key Features Required
  • Concept mastery tracking (not just "completed")
  • Adaptive difficulty: adjusts to student's success rate
  • Multi-modal explanations (text + worked example + analogy)
  • Spaced repetition (review weak concepts automatically)
  • Progress report exportable as PDF (for student portfolios)
Step-by-Step Implementation
Phase 1 — Research + Design
Weeks 1–4
  • → Study learning science: spaced repetition (SM-2 algorithm; sketched after this list), Zone of Proximal Development
  • → Define concept trees: Algebra I (47 concepts), Biology (60 concepts), etc.
  • → Design prompt templates for each concept type (factual, procedural, conceptual)
  • → Set up Ollama with Phi-3 or Mistral for reasoning quality
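
A compact sketch of one SM-2 update step, following the published formulas (quality is a 0-5 recall grade; the function returns the next review interval in days, the repetition count, and the updated ease factor):

    def sm2(quality: int, repetitions: int, interval: int, ease: float):
        """One SM-2 step. quality: 0 (total blackout) to 5 (perfect recall)."""
        if quality < 3:
            return 1, 0, ease  # failed: review tomorrow, reset the streak
        ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        return interval, repetitions + 1, ease

    print(sm2(4, 2, 6, 2.5))  # hesitant-but-correct on a third review -> (15, 3, 2.5)
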
Phase 2 — Build + Pilot
Weeks 5–10
  • → Build mobile app: login, concept browser, quiz, explanation view
  • → Implement mastery tracking (Leitner box system or SM-2)
  • → Add adaptive difficulty: track correct/incorrect per concept
  • → Pilot with 20 students from AP classes; collect feedback
  • → Iterate on prompt quality based on student confusion signals
Phase 3 — Scale + Measure
Weeks 11–18
  • → Expand to 200+ students (word of mouth + counselor referral)
  • → Collect before/after grade data (anonymized, with consent)
  • → Run a paired t-test to verify the improvement is statistically significant (sketched after this list)
  • → Generate individual progress reports (PDF export)
  • → Write up methodology and present at school science fair
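
Since the same students are measured before and after, a paired t-test is the natural choice; a sketch with SciPy and invented grades:

    from scipy import stats

    before = [68, 72, 75, 61, 80]  # percent grades; 50 entries in practice
    after  = [74, 78, 79, 70, 83]

    t_stat, p_value = stats.ttest_rel(after, before)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05: unlikely to be chance
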

📝 The Essay Angle

"My friend was failing Algebra II and too embarrassed to ask for help. I built her an AI tutor that remembered she struggled with quadratic equations specifically — not just 'math.' By December, she'd moved from a D+ to a B-. But the more important thing I learned: an AI that adapts to the learner is fundamentally different from one that just delivers content."

🏆 What This Proves

LLM Integration · Learning Science · Mobile Development · Research Methodology · Statistical Analysis

Verifiable via: live app, grade improvement data (with consent), concept mastery dashboard.

07

Campus Lost & Found 2.0

Computer Vision · Full-Stack
🎯 Impact Target

200+ lost items reunited with owners in one school year. 50+ image match queries per day during peak periods. Adopted by the full school district (3+ schools). Average match time under 24 hours from photo to reunion.

Tech Stack
Python / FastAPI · CLIP (OpenAI) / Faiss · React / Next.js · PostgreSQL + pgvector · Cloudflare R2 (images) · District SSO (Clever)
Technical Challenges
  • Image embeddings: CLIP is the right model (object-agnostic)
  • Faiss index for fast nearest-neighbor search at scale
  • False positive management: 3 similar items → rank by recency
  • District data sharing agreements (FERPA compliance)
  • Photo quality: handle blurry/partial photos
Step-by-Step Implementation
Phase 1 — District Partnership
Weeks 1–5
  • → Approach district IT: present CV matching concept, data handling plan
  • → Get IT security review + FERPA compliance sign-off
  • → Set up data architecture: what is stored, for how long, who can access
  • → Build admin dashboard for school lost & found staff
  • → Pilot at one school (the student's own school)
Phase 2 — Build the Matching Engine
Weeks 6–11
  • → Fine-tune or prompt CLIP for item image similarity
  • → Build image ingestion pipeline: upload → embed → index (see the sketch after this list)
  • → Build matching UI: camera → compare top 5 → confirm match
  • → Implement automated owner notification (email/SMS)
  • → Add privacy controls: blur faces, no personal identifiers stored
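
A hedged sketch of the embed-and-match core using the Hugging Face CLIP port and Faiss; the checkpoint and flat index are reasonable defaults at school scale, not the only options:

    import faiss
    import numpy as np
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    index = faiss.IndexFlatIP(512)  # inner product on unit vectors = cosine similarity

    def embed(path: str) -> np.ndarray:
        inputs = processor(images=Image.open(path), return_tensors="pt")
        features = model.get_image_features(**inputs).detach().numpy().astype("float32")
        faiss.normalize_L2(features)  # normalize in place
        return features

    index.add(embed("found_item_042.jpg"))  # index each found-item photo on upload
    scores, ids = index.search(embed("lost_jacket.jpg"), 5)  # top-5 candidate matches

At district scale, the same embeddings can instead live in PostgreSQL via pgvector, which keeps the index next to the item metadata.
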
Phase 3 — Scale + Measure
Weeks 12–18
  • → Expand to 3 schools in district
  • → Train staff at each school on photo quality standards
  • → Collect match rate data: how many matches per 100 reports?
  • → Document reunion stories (with permission): names, photos
  • → Present to school board for recognition

📝 The Essay Angle

"The lost and found was a cardboard box in the front office. By December, it had 200 items nobody ever claimed. I thought: what if we could just take a photo? A computer can match a photo to the thing you're looking for — that's CLIP. The hardest part wasn't the code. It was convincing the district's IT department that a high schooler could be trusted with student photos."

🏆 What This Proves

Computer Vision · Vector Search / Faiss · District Partnership · Privacy Engineering · Full-Stack Development

Verifiable via: live site, district adoption letter, 200+ item reunion log.

08

Mental Health Check-In Bot

Safety-Critical AI · Social Impact
🎯 Impact Target

School-board-approved pilot. 100+ students using the bot weekly. 12+ crisis situations detected and escalated to school counselors. Zero incidents of data breach or mishandling. Anonymous, confidential, FERPA/HIPAA compliant design.

Tech Stack
Python / FastAPI · Ollama (local LLM) · Telegram Bot API · PostgreSQL (encrypted) · PHQ-4 Screening Instrument · School SSO integration
Safety Requirements
  • Must use a validated screening instrument (PHQ-4, GAD-7)
  • No PII stored — anonymous by design
  • All data encrypted at rest and in transit
  • School counselor escalation pathway: explicit handoff protocol
  • Mandatory school board + counselor approval before launch
Step-by-Step Implementation
Phase 1 — Approval + Research
Weeks 1–5
  • → Consult with school counselor: what does responsible escalation look like?
  • → Present to school board: data handling, privacy design, safety protocols
  • → Get formal approval in writing (legal protection + credibility)
  • → Research PHQ-4 / GAD-7 screening instruments for depression/anxiety
  • → Design: how does the bot know when to escalate?
Phase 2 — Build with Safety
Weeks 6–12
  • → Implement crisis detection: keyword matching + PHQ-4 score threshold (sketched after this list)
  • → Build escalation pathway: bot → counselor alert (no student PII in alert)
  • → Anonymous design: no login, no data that identifies student
  • → Content safety: lock the system prompt and red-team it against jailbreak attempts
  • → Beta test with counselor present: simulate 10 crisis scenarios
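
The escalation trigger should stay simple and auditable. A sketch of PHQ-4 scoring (four items scored 0-3; the first two form the anxiety subscale, the last two the depression subscale, with 3+ on a subscale the published positive-screen cutoff). The keyword list and final policy are assumptions to be set with the counselor, not defaults to ship:

    CRISIS_KEYWORDS = {"hurt myself", "end it all", "suicide"}  # counselor-approved list in practice

    def phq4_screen(answers: list[int]) -> dict:
        """answers: four 0-3 item scores, anxiety items first, then depression."""
        anxiety, depression = sum(answers[:2]), sum(answers[2:])
        return {
            "total": anxiety + depression,   # 0-12 overall
            "anxiety_flag": anxiety >= 3,    # published subscale cutoff
            "depression_flag": depression >= 3,
        }

    def should_escalate(answers: list[int], message: str) -> bool:
        screen = phq4_screen(answers)
        keyword_hit = any(k in message.lower() for k in CRISIS_KEYWORDS)
        return keyword_hit or screen["anxiety_flag"] or screen["depression_flag"]
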
Phase 3 — Pilot + Measure
Weeks 13–20
  • → Soft launch: counselor referral only (not open enrollment)
  • → Track: weekly check-ins, PHQ-4 scores over time, escalations
  • → Document 12 crisis detections and counselor follow-ups
  • → Publish anonymized impact report for school board
  • → Present at school board meeting with data

📝 The Essay Angle

"I almost didn't build this because the stakes were too high. A chatbot that mishandles a suicide risk assessment could make things worse, not better. So I spent three weeks talking to counselors before writing a single line of code — understanding what responsible escalation actually means. The code came last. The ethical framework came first."

🏆 What This Proves

Safety-Critical Systems · LLM Safety Engineering · Ethical Design · Stakeholder Research · Psychology Literacy

Verifiable via: school board approval letter, counselor testimonial, escalation count (12+), privacy audit.

09

Automated Grading System

CI/CD · EdTech
🎯 Impact Target

5 computer science teachers using the system. 2,000+ assignments graded. Integration with GitHub Classroom (so teachers don't change workflow). Coverage for Python, Java, and JavaScript. Test suite quality comparable to teacher-created rubrics.

Tech Stack
GitHub Actions · Python (grading scripts) · Docker · GitHub Classroom API · PostgreSQL (results DB) · React (teacher dashboard)
What Makes This Credible
  • Must integrate with GitHub Classroom (not a standalone tool)
  • Test suites must be reviewed and approved by teachers
  • Must detect common plagiarism patterns (code similarity; a baseline is sketched after this list)
  • Feedback must be actionable: not just "wrong," but "why wrong"
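
For the similarity check, a hedged standard-library baseline (real plagiarism tools such as MOSS tokenize and normalize first; the 0.9 threshold is illustrative):

    import difflib

    def similarity(code_a: str, code_b: str) -> float:
        # 0.0-1.0 ratio of matching subsequences; naive but a useful first pass.
        return difflib.SequenceMatcher(None, code_a, code_b).ratio()

    def flag_pairs(submissions: dict[str, str], threshold: float = 0.9):
        names = list(submissions)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if similarity(submissions[a], submissions[b]) >= threshold:
                    yield a, b  # flag for human review; never auto-penalize
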
Step-by-Step Implementation
Phase 1 — Grading Engine
Weeks 1–5
  • → Study: how do CS teachers actually grade? (interview 3 teachers)
  • → Build test runner: executes student code against teacher test cases (sandboxed runner sketched after this list)
  • → Add style checker (PEP8, Checkstyle equivalent)
  • → Build feedback generator: "Your function returned X but expected Y"
  • → Handle edge cases: infinite loops, runtime errors, security
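
A hedged sketch of the sandboxed runner: the submission executes in a network-less, memory-capped Docker container with a hard timeout, which covers the infinite-loop and security edge cases above. The image name is a placeholder for a prebuilt grader image with pytest installed; the limits are illustrative:

    import subprocess

    def run_submission(repo_dir: str, timeout_s: int = 10) -> dict:
        """Run the teacher's test suite against one submission, sandboxed."""
        try:
            result = subprocess.run(
                ["docker", "run", "--rm",
                 "--network=none",   # no internet access
                 "--memory=256m",    # cap memory
                 "--pids-limit=64",  # blunt fork-bomb protection
                 "-v", f"{repo_dir}:/work:ro",
                 "grader-image",     # placeholder: your image with pytest preinstalled
                 "python", "-m", "pytest", "/work/tests", "-q"],
                capture_output=True, text=True, timeout=timeout_s,
            )
            return {"passed": result.returncode == 0, "output": result.stdout}
        except subprocess.TimeoutExpired:
            return {"passed": False, "output": "Timed out - check for infinite loops."}
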
Phase 2 — GitHub Integration
Weeks 6–10
  • → Register as GitHub Classroom app (OAuth flow)
  • → Watch for assignment submissions via GitHub Classroom webhook
  • → Run grading in Docker container (sandboxed, timed)
  • → Post results back as GitHub PR review comments
  • → Build teacher dashboard: view all assignments, re-grade, override
Phase 3 — Adoption + Scale
Weeks 11–18
  • → Sign up first 2 CS teacher beta users (start with own CS teacher)
  • → Iterate based on teacher feedback: what are edge cases?
  • → Add Java and JavaScript support
  • → Expand to 5 teachers, 2,000+ assignments
  • → Document: time saved per teacher, accuracy vs. manual grading

📝 The Essay Angle

"My CS teacher spent 6 hours a week grading. I thought: this is a software engineering problem. If we can write tests for production code, we can write tests for student code. But I learned the hard way that a test suite that's too strict drives students crazy, and one that's too loose lets bugs slide. Calibration took as long as the engineering."

🏆 What This Proves

CI/CD Pipelines · Docker & Sandboxing · API Integration · Teacher-User Research · Software Engineering

Verifiable via: GitHub Classroom app listing, 5 teacher testimonials, 2000+ graded assignment count.

10

College Essay Feedback Engine

LLM / NLP · EdTech
🎯 Impact Target

500+ students used the tool for real college essays. Evidence of tool-assisted essays submitted and accepted at top-choice schools (anonymized testimonials shared with permission). Feedback quality validated against a human college counselor (formal comparison study).

Tech Stack
Python / FastAPI · Ollama (fine-tuned model) · Common App Essay Rubric · Next.js · Supabase · PDF generation
What Makes Feedback Credible
  • Must be trained on the admissions officer (AO) perspective, not generic feedback
  • Feedback must reference specific criteria: narrative arc, voice, specificity
  • Must distinguish between "good essay" and "good college essay"
  • Student must prove: tool users got into schools (testimonials)
Step-by-Step Implementation
Phase 1 — Research the Domain
Weeks 1–4
  • → Study: 50 successful college essays (Harvard, Stanford, etc.)
  • → Study: AO perspectives on what makes essays memorable
  • → Research: Common App rubric, Coalition App criteria
  • → Interview 3 college counselors: what do you look for?
  • → Design feedback categories: narrative, voice, structure, specificity
Phase 2 — Build the Engine
Weeks 5–11
  • → Fine-tune or engineer prompts for LLM: AO persona + rubric criteria (prompt loop sketched after this list)
  • → Build essay upload: text input or PDF extraction
  • → Generate multi-section feedback: one per rubric criterion
  • → Add comparative analysis: "This is stronger than X, weaker than Y"
  • → Generate feedback report as PDF (printable for application prep)
  • → Run blind test: does AI feedback match counselor feedback?
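
A hedged sketch of the rubric-driven loop with Ollama: one focused call per criterion rather than one giant prompt, which keeps feedback easy to validate criterion by criterion. The criteria wording, system prompt, and model choice are illustrative:

    import ollama

    CRITERIA = ["narrative arc", "voice", "structure", "specificity"]

    SYSTEM = ("You are an experienced admissions officer. Evaluate the essay on ONE "
              "criterion only. Quote the essay, explain what works and what does not, "
              "and suggest one concrete revision.")

    def feedback(essay: str) -> dict[str, str]:
        report = {}
        for criterion in CRITERIA:
            response = ollama.chat(
                model="mistral",
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": f"Criterion: {criterion}\n\nEssay:\n{essay}"},
                ],
            )
            report[criterion] = response["message"]["content"]
        return report
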
Phase 3 — Adoption + Validation
Weeks 12–20
  • → Recruit first 50 beta users (friends, NHS members, Reddit)
  • → Collect testimonials: "I used this and got into X school"
  • → Formal comparison study: 20 essays, AI vs. counselor, correlation (sketched after this list)
  • → Scale to 500 users; document with anonymized testimonials
  • → Present methodology: how did you validate feedback quality?
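
For the comparison study, one workable design: have the AI and the counselor each score the same 20 essays on a 1-10 scale, blind to each other, then correlate. A sketch with SciPy and invented scores:

    from scipy import stats

    ai_scores        = [7, 5, 8, 6, 9]  # 20 essays in practice; values illustrative
    counselor_scores = [6, 5, 7, 7, 9]

    r, p = stats.pearsonr(ai_scores, counselor_scores)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # report r plus the biggest disagreements
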

📝 The Essay Angle

"I didn't build this because I thought AIs could write essays better than humans. I built it because I noticed my peers were getting generic feedback — 'good job!' — that didn't actually tell them why their essay worked or didn't. I wanted to give them the AO's internal rubric, translated into actionable advice. The validation question — does the AI actually match a counselor? — became the core of my research."

🏆 What This Proves

LLM Fine-tuning · NLP / Text Analysis · Research Methodology · Domain Expertise · User Validation

Verifiable via: 500 user testimonials, school acceptance letters (with permission), methodology comparison study.