For each of the 10 shortlisted projects: the impact target, tech stack, step-by-step implementation plan, key milestones, and how to talk about it in a college essay.
1,000+ daily unique readers getting factual, ad-free news summaries generated by a local LLM from free RSS feeds. Real analytics. Real return traffic. Evidence of sustained readership over 4+ weeks.
"I noticed my grandparents were falling for misinformation. Facebook's algorithm wouldn't help them — it profited from their confusion. So I built a news aggregator with no ads, no algorithm, no manipulation — just facts, twice a day. Getting to 1,000 daily readers meant learning not just how to code, but how to write a newsletter people actually want to open."
Verifiable via: live site URL, Plausible analytics dashboard, GitHub repo, newsletter subscriber count.
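The core loop here — pull feeds, drop duplicate wire stories, hand headlines to a local model — can be sketched in a few lines. This is an illustrative sketch, not the project's actual code: the entry dicts mimic what an RSS parser returns, and the local LLM is abstracted as any text-in/text-out callable (e.g. a thin wrapper around an Ollama-served model).

```python
def dedupe_entries(entries):
    """Drop duplicate stories by normalized title (feeds often syndicate the same wire story)."""
    seen, unique = set(), []
    for e in entries:
        key = e["title"].lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

def build_digest_prompt(entries, max_items=10):
    """Assemble a neutral-summary prompt for the local model."""
    lines = [f"- {e['title']}: {e['summary']}" for e in entries[:max_items]]
    return (
        "Summarize each headline below in one factual sentence. "
        "No opinion, no speculation.\n" + "\n".join(lines)
    )

def make_digest(entries, generate):
    """`generate` is any text-in/text-out callable wrapping the local LLM."""
    return generate(build_digest_prompt(dedupe_entries(entries)))
```

Keeping the model behind a plain callable makes the pipeline testable without a GPU, and lets you swap models without touching the feed logic.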
100+ high school tutors onboarded. 300+ younger students matched with tutors. $5,000+ in combined tutor earnings. Real revenue (even token amounts) flowing through the platform. All within one school year.
"I noticed the kids who needed tutoring most couldn't afford it. So I built a platform where the smartest kids in school could teach — and earn something doing it. But the hardest part wasn't the code. It was convincing 100 of my classmates to trust a stranger with their little siblings. That required 40 cold emails, 12 in-person meetings, and a working Stripe integration."
Verifiable via: live site, Stripe dashboard (redacted), tutor count, student testimonials.
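Matching 300+ students to 100+ tutors is a small assignment problem. A minimal sketch of one workable approach (data shapes and field names are illustrative, not from the actual platform): greedy first-fit on subject and shared availability, always picking the least-loaded eligible tutor.

```python
def match_students(students, tutors):
    """Pair each student with a tutor who covers the subject and shares at
    least one free time slot; balance load by preferring less-loaded tutors
    and capping each tutor's student count."""
    assignments = {}
    load = {t["name"]: 0 for t in tutors}
    for s in students:
        candidates = [
            t for t in tutors
            if s["subject"] in t["subjects"]
            and set(s["slots"]) & set(t["slots"])
            and load[t["name"]] < t.get("max_students", 3)
        ]
        if candidates:
            best = min(candidates, key=lambda t: load[t["name"]])
            assignments[s["name"]] = best["name"]
            load[best["name"]] += 1
    return assignments
```

Greedy matching is not optimal, but it is transparent enough to explain to a skeptical parent — which matters more here than squeezing out a few extra pairings.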
30+ active chapter members. 50+ girls completing the curriculum. 10 members continuing to AP Computer Science the following year. 20 HS volunteers recruited and trained as instructors. Official Girls Who Code affiliation (brand credibility + resources).
"My AP Computer Science class had 4 girls in it — including me. I couldn't change the class, but I could build something outside it. Girls Who Code wasn't just about teaching Python. It was about showing 30 girls that code was a place they belonged. The hardest part wasn't the curriculum. It was convincing my school that it was worth the classroom space on a Tuesday afternoon."
Verifiable via: GWC national affiliation, faculty advisor letter, member count, AP CS enrollment data.
3+ merged pull requests in real, actively used open-source projects. PRs must touch meaningful code (not just docs fixes). Each PR must have gone through code review. Changelog / release note acknowledgment preferred. Target projects: Rust, Python, Vue.js, React, Node.js.
"Contributing to Python's standard library felt like walking into a room full of the most rigorous engineers in the world and asking to help carry something. My first PR was rejected twice before it was good enough to merge. The third time, a core developer said it was 'exactly the kind of contribution the project needed.' I framed that comment."
Verifiable via: github.com/[username]?tab=overview&from=YYYY-MM-DD, merged PR links, release notes.
15 school clubs using the platform. 500+ total student users across clubs. The student's own club is the pilot; 14 other clubs adopt it voluntarily. Evidence of improved club governance: attendance, event planning, election integrity.
"Our student council had no way to track which clubs were active, when they met, or who was actually showing up. I built a platform to fix it for my own club. Then I realized: if it worked for me, it could work for everyone. Convincing 14 other club presidents to switch was harder than the code. I learned that the best software solves a problem people already know they have."
Verifiable via: live site, analytics (500+ users), school records of club participation, club president testimonials.
200+ students used the tutor over one semester. Before/after grade data for at least 50 students showing measurable improvement. Concept mastery tracking proving the AI adapted to individual student weaknesses. IRB approval if publishing data.
"My friend was failing Algebra II and too embarrassed to ask for help. I built her an AI tutor that remembered she struggled with quadratic equations specifically — not just 'math.' By December, she'd moved from a D+ to a B-. But the more important thing I learned: an AI that adapts to the learner is fundamentally different from one that just delivers content."
Verifiable via: live app, grade improvement data (with consent), concept mastery dashboard.
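"Remembered she struggled with quadratic equations specifically" implies per-concept state, not per-subject state. One common adaptive-tutor heuristic — shown here as a hedged sketch, with names and the smoothing constant chosen for illustration — is an exponential moving average of correctness per concept, so recent answers outweigh old ones:

```python
class MasteryTracker:
    """Track per-concept mastery as an exponential moving average of correctness."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # weight given to the newest answer
        self.mastery = {}           # concept -> score in [0, 1]; unseen = 0.5

    def record(self, concept, correct):
        prev = self.mastery.get(concept, 0.5)
        new = 1.0 if correct else 0.0
        self.mastery[concept] = (1 - self.alpha) * prev + self.alpha * new

    def weakest(self):
        """The concept to drill next: lowest estimated mastery."""
        return min(self.mastery, key=self.mastery.get) if self.mastery else None
```

The same per-concept scores double as the "concept mastery dashboard" evidence: they show the tutor steering toward each student's actual weak spots.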
200+ lost items reunited with owners in one school year. 50+ image match queries per day during peak periods. Adopted by the full school district (3+ schools). Average match time under 24 hours from photo to reunion.
"The lost and found was a cardboard box in the front office. By December, it had 200 items nobody ever claimed. I thought: what if we could just take a photo? A computer can match a photo to the thing you're looking for — that's CLIP. The hardest part wasn't the code. It was convincing the district's IT department that a high schooler could be trusted with student photos."
Verifiable via: live site, district adoption letter, 200+ item reunion log.
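The CLIP step reduces to nearest-neighbor search over image embeddings: embed every found-item photo once, embed the query photo, rank by cosine similarity. A minimal sketch of the ranking half (the embeddings themselves would come from a CLIP model; the tiny 2-D vectors here are placeholders):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_matches(query_emb, inventory, top_k=3):
    """Rank found-item photos by embedding similarity to the query photo.
    `inventory` maps item id -> precomputed embedding."""
    scored = sorted(
        inventory.items(),
        key=lambda kv: cosine(query_emb, kv[1]),
        reverse=True,
    )
    return [item_id for item_id, _ in scored[:top_k]]
```

At a few hundred items, brute-force cosine ranking is instant; a vector index only becomes worth it at district scale.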
School-board-approved pilot. 100+ students using the bot weekly. 12+ crisis situations detected and escalated to school counselors. Zero incidents of data breach or mishandling. Anonymous, confidential, FERPA/HIPAA-compliant design.
"I almost didn't build this because the stakes were too high. A chatbot that mishandles a suicide risk assessment could make things worse, not better. So I spent three weeks talking to counselors before writing a single line of code — understanding what responsible escalation actually means. The code came last. The ethical framework came first."
Verifiable via: school board approval letter, counselor testimonial, escalation count (12+), privacy audit.
5 computer science teachers using the system. 2,000+ assignments graded. Integration with GitHub Classroom (so teachers don't change workflow). Coverage for Python, Java, and JavaScript. Test suite quality comparable to teacher-created rubrics.
"My CS teacher spent 6 hours a week grading. I thought: this is a software engineering problem. If we can write tests for production code, we can write tests for student code. But I learned the hard way that a test suite that's too strict drives students crazy, and one that's too loose lets bugs slide. Calibration took as long as the engineering."
Verifiable via: GitHub Classroom app listing, 5 teacher testimonials, 2000+ graded assignment count.
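The calibration lesson above lives in the grading harness: too-strict suites fail students on float rounding, too-loose suites pass bugs. A hedged sketch of the scoring core (the function and case shapes are illustrative, not the project's real API), with a tolerance for floats and partial credit per passed case:

```python
def grade(func, cases, tolerance=1e-6):
    """Score a student function against (args, expected) cases.
    A crashing submission fails that case, not the whole run."""
    passed = 0
    for args, expected in cases:
        try:
            result = func(*args)
            if isinstance(expected, float):
                ok = abs(result - expected) <= tolerance
            else:
                ok = result == expected
            passed += ok
        except Exception:
            pass
    return passed / len(cases)
```

The float tolerance and per-case partial credit are exactly the "calibration" knobs: tightening or loosening them changes perceived fairness far more than any engineering choice.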
500+ students used the tool for real college essays. Evidence of tool-assisted essays submitted and accepted at top-choice schools (anonymous testimonials with permission). Feedback quality validated against a human college counselor (formal comparison study).
"I didn't build this because I thought AIs could write essays better than humans. I built it because I noticed my peers were getting generic feedback — 'good job!' — that didn't actually tell them why their essay worked or didn't. I wanted to give them the AO's internal rubric, translated into actionable advice. The validation question — does the AI actually match a counselor? — became the core of my research."
Verifiable via: 500 user testimonials, school acceptance letters (with permission), methodology comparison study.
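The validation study reduces to a concrete question: when the AI and a counselor score the same essays, how often do they agree? Two simple statistics cover it — correlation of scores and the share of essays where the two differ by at most a point. A sketch (metric choices are an assumption; a formal study might add Cohen's kappa):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between AI scores and counselor scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def agreement_rate(a, b, within=1):
    """Share of essays where the two raters differ by at most `within` points."""
    return sum(abs(x - y) <= within for x, y in zip(a, b)) / len(a)
```

Reporting both numbers matters: correlation can be high even when the AI is systematically a point harsher, which the agreement rate would catch.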