Methodology
How we research a playbook.
Multi-model AI research, human verification, every claim sourced.
Every playbook runs through the same four-step process. First, the intake form collects the student’s profile: academics, activities, heritage, intended major, geographic preferences, target schools, and family financial context. Second, a multi-model AI research pass, running Claude, ChatGPT, and Gemini simultaneously (the same Ask Three AI approach Paul Takisaki built to reduce hallucinations), surfaces scholarship candidates, stacking-rule references, and institutional merit tiers. Third, a human analyst verifies every candidate against its source, kills anything the AI got wrong, and builds the ranked list, the stacking analysis, the pursue / conditional / drop decisions, and the realistic aid-range estimates per school. Fourth, the draft gets a second-reader pass for voice, accuracy, and actionability, then it ships to the family. The whole process runs in 48 to 72 hours. We do not sell a one-click AI report. Every playbook has a human’s name on the delivery.
The four steps in detail
Intake
The intake form takes about 15 to 20 minutes to complete. It collects the full student profile: academics (GPA, test scores, class rank), activities, heritage and faith background, intended major, geographic preferences, target schools, and family financial context. Every field has a purpose. The depth of the playbook depends on the depth of the intake.
Multi-model AI research
We run the research pass through Claude, ChatGPT, and Gemini in parallel. This is the same Ask Three AI methodology Paul built to reduce hallucinations: any one model will invent or misremember details, but three models running the same query disagree in revealing ways. Where they agree, the answer is usually right. Where they disagree, that’s exactly where a human needs to verify. The AI surfaces scholarship candidates, references for stacking rules, and institutional merit tiers at every target school on the list.
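As an illustration only, the triangulation logic can be sketched in a few lines of Python. The model functions here are stand-in stubs, not our actual tooling; the point is the routing rule: unanimous answers pass through, split answers get flagged for a human.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def triangulate(question, models):
    """Ask every model the same question in parallel, then classify:
    unanimous -> likely right, split -> route to human verification."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda ask: ask(question), models))
    counts = Counter(answers)
    top, n = counts.most_common(1)[0]
    if n == len(answers):
        return {"status": "agree", "answer": top}
    return {"status": "verify", "answers": answers}

# Stubs standing in for the real Claude / ChatGPT / Gemini calls.
model_a = lambda q: "active"
model_b = lambda q: "active"
model_c = lambda q: "discontinued"   # one model disagrees

result = triangulate("Is the Acme Heritage Scholarship still offered?",
                     [model_a, model_b, model_c])
# A split answer routes the claim to the human verification step.
```

The design choice is deliberate: disagreement is not treated as noise to be averaged away, it is the signal that tells the analyst exactly where to spend verification time.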
Human verification
An analyst takes the AI research pass and verifies every candidate against the source. Scholarships that don’t exist get killed. Scholarships with changed criteria get updated. Stacking-rule claims get cross-checked against the school’s own financial aid page or Common Data Set filing. Then the analyst builds the ranked list, the school-by-school stacking analysis, the pursue / conditional / drop decisions, and the realistic aid-range estimates per school. This is where the playbook actually gets written, not assembled.
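A minimal sketch of that triage, with hypothetical field and award names (this is illustrative, not our internal schema): unverified candidates are killed outright, and what survives is ranked by the analyst's pursue / conditional / drop call.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    source_url: str          # the page the claim is checked against
    verified: bool = False   # True only after the analyst confirms it
    decision: str = "drop"   # "pursue" / "conditional" / "drop"

def build_ranked_list(candidates):
    """Kill anything unverified, then rank survivors by decision."""
    order = {"pursue": 0, "conditional": 1, "drop": 2}
    kept = [c for c in candidates if c.verified]
    return sorted(kept, key=lambda c: order[c.decision])

apps = [
    Candidate("Acme Heritage Award", "https://example.edu/aid", True, "pursue"),
    Candidate("Phantom Grant", "https://example.org", False, "drop"),
    Candidate("Regional STEM Award", "https://example.edu/stem", True, "conditional"),
]
ranked = build_ranked_list(apps)
# The unverified Phantom Grant never reaches the family's list.
```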
Second-reader pass
A second reviewer reads the draft for voice, accuracy, and actionability. Every quantified claim gets a second check. Every recommendation gets a sanity check against the student’s actual profile. Then the playbook ships to the family with the analyst’s initials in the delivery.
What AI does well, and where it fails
Multi-model AI research is good at three things: cross-referencing institutional aid policies across many schools quickly, surfacing scholarship candidates the family hasn’t heard of, and catching contradictions between sources. It’s bad at fact-checking specific dollar amounts, distinguishing scholarships that still exist from ones that quietly shut down, and applying judgment about whether a $500 award is worth a 10-hour application. Those failure modes are exactly what the human analyst pass is designed to catch.
The combination is what makes the methodology work. AI without human verification is a fast way to publish wrong information at scale. Human research without AI is what takes families 40 to 80 hours of work to do themselves. Multi-model AI plus a human analyst is the only way to deliver real depth in 72 hours without sacrificing accuracy.
What we refuse to do
We don’t sell one-click AI reports. We don’t publish “average savings” or “success rate” statistics, because averages lie and the category is full of fabricated numbers. We don’t claim financial aid credentials Paul doesn’t have. We don’t guarantee outcomes. We don’t recommend scholarships we haven’t verified are still active. And we don’t build databases, because the world has enough of those already.
See what the methodology produces: view a real sample playbook, or start your student’s playbook.