What the notebook is for
Orientation — introduces all ten rubric criteria. Read together on day one.
Objective
Convince every member of your team that the engineering notebook is how the team thinks, not a chore that happens at the end of the season, and introduce the ten rubric criteria that will be referenced in every subsequent notebook section.
Concept
The engineering notebook is not a diary. It is not a journal of what you did today. It is not the thing you backfill on the flight to a regional event. It is the single most-read artefact your team produces, and it has three audiences.
The first audience is a panel of judges at a competition, who will spend about fifteen minutes with it while you sit across from them in an interview. The second audience is your future team — a returning student six months from now who wants to know why you picked the intake design you did, so they do not redo the thinking. The third audience, and the one that matters most day-to-day, is you, right now, because writing something down is how engineering thinking clarifies. A decision you have not written is a decision you have not fully made.
📐 Engineering tip. The notebook is scored against ten rubric criteria. A team that treats the notebook as a chore loses points on all ten. A team that treats it as how they think wins on all ten — often by a wide margin — because the discipline that writes good notebook entries is the same discipline that builds good robots.
The ten rubric criteria
The criteria are: Identify the Problem, Brainstorm, Select the Best Solution, Build / Programme the Solution, Test the Solution, Repeat / Iterate, Independent Inquiry, Usability / Completeness, Team / Project Management, and Notebook Format. Every section in this chapter is tagged with one or more of these. You will see them again in N1.4, in every Tier 2 section, in every Tier 3 section, and in the Innovate Award walkthrough in N5.1. Memorise them now. Judges will.
| # | Criterion | What it rewards |
|---|---|---|
| 1 | Identify the Problem | Clear problem statements with symptoms, causes, and success criteria |
| 2 | Brainstorm | Multiple distinct concepts considered in parallel before committing |
| 3 | Select the Best Solution | A defensible, documented selection process (e.g. decision matrix) |
| 4 | Build / Programme the Solution | Detailed build and code logs showing what was done, by whom, when |
| 5 | Test the Solution | Quantitative test data with hypotheses, procedures, and conclusions |
| 6 | Repeat / Iterate | Evidence of multiple completed design-loop iterations across the season |
| 7 | Independent Inquiry | Research beyond handed-to-you materials, citing outside sources |
| 8 | Usability / Completeness | Navigation layer: table of contents, cross-references, page numbers |
| 9 | Team / Project Management | Roles, timelines, values, and evidence of team development |
| 10 | Notebook Format | Consistency: dates, signatures, pagination, continuation markers |
The trap every team falls into once
The trap, the one every team falls into exactly once, is writing the notebook at the end of the season. It always looks the same: week 18, the team realises the notebook is due at an event, and someone sits down to reconstruct five months of decisions from memory. The reconstruction always reads the same — smoothed-over, vague, dateless, unsigned, and obviously backfilled. Judges recognise backfill instantly. They do not explicitly score on "was this backfilled," but they do not need to: a backfilled notebook is vague on the details that make every rubric criterion legible.
No team has ever done this twice. Every experienced team has done it once. Read this paragraph and decide your team will be the exception. You will not be, unless the team has a ritual that makes not-backfilling cheaper than backfilling. That ritual is the rest of this chapter.
⚡ Competition tip. Thirty to forty-five minutes per day spent writing notebook entries in the moment — a PIL entry when a problem is identified, a build log at the end of a session, a test log when data is recorded, a reflection page at the end of an iteration — buys back an entire event's worth of panic. And it produces a notebook judged better than a backfilled one written by more talented writers, because the entries are contemporaneous, specific, and continuously cross-referenced. Process beats talent.
Guided practice
Read this short case aloud together as a team.
A team walks into a regional event on a Saturday morning. They placed well in practice, the robot is working, and the strategy is solid. At 9:00 am the judges take their notebook for review. At 11:15 am the team sits down in the interview. The judges ask: "Can you walk us through how you chose this drivetrain?"
The build lead freezes for a second. She knows they considered three drivetrains in August, and she knows why they chose this one, but the brainstorm sketches never made it into the notebook. The decision matrix exists as a spreadsheet tab nobody printed. The build log exists as phone photographs on three different phones. She gives a good answer from memory and the judges nod politely, but they cannot cross-check any of it against the notebook. The team finishes with no judged award. In the car home, someone says: "We had the work. We just never wrote it down."
This is the most common failure mode in notebook-dependent awards. The work happened. The notebook did not keep up.
As a team, go around the circle and each answer one question: last season (or this current week), what did your team do that you are proud of, and where in the notebook could a judge find it? Be honest about gaps.
Independent exercise
Each team member writes one paragraph in response to the prompt: "The notebook is not a diary. It is _____." Share around the circle. Do not debate. The paragraphs you produce become the orientation statement at the front of your notebook this season — literally, paste the best one onto page 2 of your book, the page N1.3 reserves for it.
Common pitfalls
- Treating the notebook as one person's job. The notebook is everyone's job; the writer is a coordinator, not a solo author.
- Believing you will "just remember" and write it up later. Memory does not work that way in week 14.
- Writing the notebook in polished prose at the end of each session instead of rough, contemporaneous entries throughout. The rough entries are worth more.
- Picking a beautiful notebook format with ten colours and calligraphy, then abandoning it in week three. A rough format you can sustain beats an elegant format you cannot.
- Assuming the notebook is only for judges. The second and third audiences — your future team and your present self — are the ones who benefit every single week.
Where this points next
N1.2 compares three real notebook formats and helps your team pick one.
📐 Reflection prompt (notebook-ready)
- Can you name at least six of the ten rubric criteria from memory, right now, without looking back at the list?
- Can you name one of the three audiences and explain what that audience needs from the notebook?
- Can you name the one trap in this section and explain what ritual prevents it?
Next up: N1.2 — Three notebook formats compared.
Three notebook formats compared
Pick one of three proven formats and commit to it for the season.
Objective
Pick one of three proven notebook formats and commit to it for the season, so that every subsequent entry this chapter teaches you to write has a consistent home.
Concept
There are three notebook formats that have all produced award-winning notebooks at the highest levels of competition. They are not the only formats, but they are the three most common working patterns among top teams, and they trade off cleanly against one another. Picking one early matters more than picking the right one, because the worst notebook format is the one you switch midway through the season. Consistency is the load-bearing variable in criterion 10. A notebook that is handwritten with a colour key for the first ten weeks and then switches to typed sections reads as two notebooks — and neither is scored as a complete artefact.
Format A — Handwritten with a colour key
The notebook is a physical bound book, filled by hand, using a consistent colour key to mark sections (red for problems, green for decisions, blue for build logs, black for running text, etc.). The colour key appears on the first or second page and is referenced in every subsequent entry. This format works on a field with no laptop, requires no power, and is visually clear once the reader knows the key. Its strength is that entries cannot be "perfected later" — you write what you write, you draw what you draw, and the time-stamping is unambiguous. Its weakness is that it scales poorly if multiple team members need to write simultaneously (only one person can have the book open), and corrections are awkward — you draw a single strikethrough line and initial it, you do not erase.
Format B — Typed and section-structured
The notebook is a digital document, usually produced in a collaborative editor (Google Docs, Notion, or a set of markdown files), organised into sections that map to the rubric criteria. Each entry is a dated, signed block under its section. The finished notebook is printed and bound before events. This format is the most version-controllable, the easiest to edit, the easiest to scale to multiple writers, and the most forgiving of new team members who have not yet learnt a handwriting style. Its weakness is that the very editability that makes it convenient also makes it vulnerable to "I'll polish it later" procrastination and to last-minute backfilling.
Format C — Typed with extensive photography
A variant of Format B where almost every entry is anchored by one or more dated photographs of the robot, the whiteboard, the test setup, or the field state. Photographs are captioned and cross-referenced. This format is visually the most impressive, and judges respond to the density of evidence. Its weakness is operational overhead: someone has to take the photographs, file them, import them, caption them, and verify they all ended up in the right entry. If that discipline slips, the notebook has gaps where photographs were supposed to be.
📐 Engineering tip. The curriculum's default recommendation is Format B — typed and section-structured. It is the most version-controllable, the most scalable to multiple contributors, and the most forgiving if your team adds or loses a member mid-season. If your team has strong discipline and a student who loves photography, Format C produces the most impressive final artefact. If your team is small or simply prefers working with paper and pen, Format A is perfectly competitive.
Guided practice
Below are three short entries showing the same content — a PIL entry for a generic intake stall — rendered in each of the three formats. Look at the differences, not the contents.
🖼 Image brief
- Alt: Three-panel comparison showing the same PIL entry rendered as a handwritten colour-keyed page, a typed section-structured page, and a typed page with embedded captioned photographs.
- Source: Scan or mock-up of each format using the intake-stall example content.
- Caption: Same content, three formats. Pick the one your team can sustain for twenty weeks.
Format A (handwritten, colour key). Imagine a scanned page from a bound notebook. A red header reads "PROBLEM — Intake Stalls (Entry #14)." The date appears in the top-right corner in black ink. Below, in black running text: a symptom paragraph, underlined where the writer wants emphasis. Green ink marks the "How might we…" reframe. Blue ink marks a small marginal sketch. The page ends with a signature and a page number in a pre-printed footer. A small legend on the inside cover tells the reader: red = problems, green = decisions, blue = build, black = running text. The visual effect: on a flipped-through book, red and green leap off the page, so a judge skimming can find decisions instantly.
Format B (typed, section-structured). A cleanly formatted digital entry with a heading, date, owner, status, rubric tag, and related-pages field, followed by structured paragraphs for Symptom, Context, Candidate cause, How-might-we reframe, Constraints, Goals, Success criteria, and Next action. The visual effect: consistent section headers, unambiguous cross-references, searchable in the source document before it is printed.
Format C (typed with photography). The same structure as Format B, but every entry is anchored by one or more captioned photographs — baseline test images, close-ups of the failure mode, annotated overlays. The visual effect: a judge cannot skim past this entry without absorbing the evidence. Every claim is backed by a visible artefact.
Independent exercise
As a team, pick one of the three formats today and commit to it in writing on page 1 of your notebook. Include a one-paragraph justification: why this format, why your team can sustain it, and who is the primary notebook owner (the person who keeps the table of contents current). This commitment is binding for the season — not because format-switching is impossible, but because format-switching mid-season costs criterion 10 points every time.
Common pitfalls
- Picking Format C because it looks the most impressive in exemplars, without anyone on the team willing to own the photography pipeline.
- Picking Format A because it is "classic" without checking whether the team can actually keep a single physical book with them at every session.
- Picking Format B because it is easy, then using the editability to defer every entry by "just a few days."
- Switching formats in week 6 because the first choice "was not working." Almost always, the format was not the problem — the ritual was. Fix the ritual.
- Using mixed formats (some handwritten, some typed) within one notebook. Judges will score that as two incomplete notebooks.
Where this points next
N1.3 walks through setting up the first pages of your chosen format — title page, team identity, table of contents, colour key or section map.
📐 Reflection prompt (notebook-ready)
- Your team has picked exactly one format, written on page 1, signed and dated. Which format and why?
- Who is the primary notebook owner? Not two owners. Not "everyone." One name.
- How does your chosen format handle corrections?
Next up: N1.3 — Setting up your notebook on day one.
Setting up your notebook on day one
Set up the first twelve pages so that criteria 8, 9, and 10 are satisfied from day one.
Objective
Set up the first pages of your notebook so that criterion 8 (Usability), criterion 9 (Team), and criterion 10 (Format) are satisfied from day one and remain satisfied with minimal upkeep.
Concept
The first twelve pages of a notebook do a disproportionate share of the rubric work. A judge opens the notebook, sees a title page, a team identity section, a table of contents, and a format key — and in the first thirty seconds they have already scored three criteria. Conversely, a notebook that opens with "Day 1: built the drivetrain" skips past three rubric criteria before it even starts the work. Setting up the first pages is a ninety-minute investment that pays out for the entire season. Do it once, at the first team meeting of the season, with everyone present.
These first pages are not where the design work lives. They are the navigation layer over the design work. A judge who knows the navigation layer well can find the interesting entries in the back of the book in seconds. A judge who opens a notebook with no navigation layer will score the navigation criteria at zero regardless of how good the later entries are.
Guided practice
Below is a complete setup checklist with every page you write on day one. Follow it in order. The page numbers are suggested — adapt to your chosen format.
Page 1 — Title page
Contents: team number, team name, school or programme affiliation, season identifier (e.g. "2025–26 season"), start date, and book number (if you expect to fill more than one). A single photograph or logo if your format supports it. A judge sees this before anything else. It orients them.
Page 2 — Why we run this notebook
One short paragraph — two to four sentences — stating how this team uses the notebook. The best version is the independent-exercise paragraph from N1.1, transcribed onto this page. Signed by every founding member of the team. This satisfies criterion 9 (team values) and signals to the judge that the notebook is a team artefact, not one writer's hobby.
Page 3 — Team photograph and roster
Photograph of the full team. Underneath: every member's name, role, and year level. Direct criterion 9 credit.
🖼 Image brief
- Alt: Example team photograph page from a notebook, with the photograph centred above a neatly formatted roster listing each member's name, role, and year level.
- Source: Photograph of a sample notebook page or a mock-up in the team's chosen format.
- Caption: Page 3 of a day-one notebook: the team photograph anchors the roster.
Pages 4–6 — Individual member biographies
One paragraph per member, plus a self-evaluation across four axes: coding, design / build, test / drive, game analysis. Each axis scored 1–5 by the member themselves. This four-axis format is used by strong notebook teams because it makes skill distribution visible at a glance.
Page 7 — Team roles and responsibilities
A short table: role, owner, description, primary rubric criterion. Every row is one owner, not two. Shared roles become coordination debt by week four.
Pages 8–9 — Table of Contents
Reserve these two pages for a running table of contents. In a bound book, leave them blank and add entries as you write pages. In a digital format, auto-generate or maintain manually. Structure of each entry: Page [N] — [Title] — [Date first written] — [Last updated] — [Rubric criterion tag]. The ToC must be kept current — this is the single most-missed item at pre-event checks (see N5.3).
Page 10 — Format key
A half-page reference of how your team uses colours, sections, icons, or heading levels. For Format A, this is your colour legend. For Format B and C, this is your section-header scheme and any abbreviations you use. Criterion 10 credit, and it gives a judge a key for decoding the rest of the book.
Page 11 — Season goals
Three to seven concrete goals for the season. Each goal has an owner. At least one goal should be judge-facing ("win a judged award at an event"), at least one should be competitive ("qualify for the regional championship"), and at least one should be internal ("every member can explain every subsystem"). Revisit this page at every iteration reflection.
Page 12 — Engineering design process diagram
A single-page diagram of the design loop your team will follow. Most versions are some variant of: Identify → Research → Brainstorm → Select → Build → Test → Reflect → back to Identify. Draw it. Label every node. Put the page number of the first PIL entry under "Identify" so a judge can flip straight to it. Criterion 8 (navigation), and it makes your loop visible as a loop, which is exactly what criterion 6 later rewards.
🖼 Image brief
- Alt: A hand-drawn or digitally rendered circular design-process diagram with seven labelled nodes: Identify, Research, Brainstorm, Select, Build, Test, Reflect, with an arrow from Reflect back to Identify.
- Source: Photograph of a team's actual design-process diagram from their notebook, or a clean mock-up.
- Caption: Page 12: the design loop, drawn once, referenced all season.
Independent exercise
Block off ninety minutes with your entire team in one room. Set up pages 1 through 12 in one sitting. Every member writes their own biography paragraph and their own skill self-evaluation. The notebook lead types or writes the format key and the ToC structure. The coach or mentor reviews the finished section before anyone leaves the room. Do not break this into four sessions. One sitting, ninety minutes, everyone present — the alternative is that these pages get half-done across three weeks and never properly signed.
Common pitfalls
- Skipping the biographies because "we all know each other." The notebook is for the judges, not for the team. They do not know you.
- Putting the ToC off until "when we have more entries." The ToC needs to exist empty; filling it is easier when the skeleton is already there.
- Letting one person write all the biographies. Every member writes their own, in their own voice. This is both more credible and directly scored under criterion 9.
- Skipping the season-goals page because it feels aspirational. The season goals page is where you hang your criterion-9 accountability from later in the year.
- Treating the format key as decorative. Judges use it — and if it exists but the rest of the notebook does not follow it, that is worse than not having one.
Where this points next
N1.4 gives you the per-page format rules that keep every subsequent entry compliant with criterion 10.
📐 Reflection prompt (notebook-ready)
- Are pages 1–12 all written, in order, on day one?
- Has every member filled in their own biography and skill evaluation — no placeholders?
- Is the Table of Contents set up and usable, even if mostly empty at this stage?
Next up: N1.4 — Format rules.
Format rules that satisfy criterion 10
The ten per-page rules every notebook entry must follow.
Objective
Name the ten per-page rules every notebook entry must follow, and teach the team to enforce them so that criterion 10 is satisfied automatically once the habit sticks.
Concept
Criterion 10 is the easiest criterion to lose and the easiest to win. It rewards consistency — dates, signatures, pagination, continuation references, consistent headers, no blank spaces. None of those things require engineering skill. They require habit. A team that enforces the format rules in week one will satisfy criterion 10 for the rest of the season without thinking about it. A team that does not enforce them in week one will spend an afternoon before every event retrofitting page numbers and signatures.
The format rules are mechanical, not creative. They are the same rules used by professional engineering notebooks in industry, for the same reason: a notebook that can be audited is a notebook that can be used as evidence. At a competition, "used as evidence" means a judge can flip to a specific claim and verify when it was written, by whom, and in what sequence.
Guided practice
These are the ten rules. Each rule has a one-line statement and a short note on why it matters.
Rule 1 — Every page has a date. The date of the first writing on that page, in a consistent format. If a page spans multiple days, add the date of the subsequent addition next to the new text. Judges cannot score iteration (criterion 6) without dates. Pages without dates are unscorable for any time-based criterion.
Rule 2 — Every page has a page number. Sequential, starting from 1. Cross-references depend on stable page numbers. A notebook where page numbers shift mid-season breaks every "see page N" reference.
Rule 3 — Every page has a signature. The author initials (or signs in full) every page they write. Co-authors sign with their own initials next to the primary author's. Signatures are an attribution claim. Judges ask "who wrote this?" at interviews — the signature is the answer.
Rule 4 — Every page has continuation markers where needed. If an entry continues on the next page, the bottom of the current page says "continued on p. [N+1]" and the top of the next page says "continued from p. [N]". Both directions. Without continuation markers, a judge flipping past a mid-entry page break has no way to know the entry continues.
Rule 5 — No blank spaces left in the middle of an entry. If you finish writing halfway down a page, draw a single diagonal line through the unused space to the bottom of the page (Format A) or insert an explicit "end of entry" marker (Format B, C). An unfilled space in the middle of a signed entry allows (or appears to allow) backfilling.
Rule 6 — Corrections are single strikethroughs, initialled, never erased. Draw one line through the old text, write the new text nearby, and initial the change. Do not erase. Do not scribble out. In a digital format, use a documented edit marker or track-changes-style notation. An erased correction looks like hiding something. A strikethrough correction looks like engineering.
Rule 7 — Consistent section headers. Every entry of the same type uses the same heading structure. Every PIL entry uses the same heading format. Every build log uses the same heading format. The templates in N2 and N3 exist to make this automatic. A judge reading a notebook where every PIL entry is laid out differently has to re-learn the structure every time.
Rule 8 — Cross-references are real and resolve. Every "see page N" reference must point at an existing page N that actually contains the referenced content. A broken cross-reference is the single fastest way to lose credibility with a judge. They will check at least one.
Rule 9 — The Table of Contents is updated the same day a page is written. Not weekly. Not monthly. Same day. The ToC is the index — if it is three weeks behind, it is useless for any purpose.
Rule 10 — If you skip a day, you do not backfill. If you miss a build session and the log was not written, the log is not written. Do not pretend it was. Write the next entry as the next entry, and note at the top "[no entry for 3 October — session cancelled]" if the absence is worth explaining. A notebook full of same-day entries with one honest gap is stronger than a notebook with no gaps where three entries are visibly backfilled.
📐 Engineering tip. Spend twenty minutes now learning these rules; save an afternoon every month. The teams that spend an afternoon before every event retrofitting page numbers and signatures are the teams that did not enforce the format rules in week one.
Independent exercise
Take the next three entries you write this week. Review each one against the ten rules above. If any rule is missing from any entry, fix it before the end of the day. Then appoint one team member (the notebook lead from N1.3) as the format auditor, whose job it is to scan every week's entries against these ten rules at the weekly stand-up. A five-minute audit per week is enough to keep criterion 10 fully satisfied.
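For teams on Format B who keep their draft entries as a set of markdown files, part of that weekly audit can be automated. The sketch below is illustrative only: the header fields (Date:, Page:, Signed:) and the "see page N" cross-reference convention are hypothetical stand-ins for whatever your own format key specifies, and the script checks only the mechanical rules (1, 2, 3, and 8), leaving the judgement calls to the human auditor.

```python
# A minimal sketch of an automated format audit, assuming a hypothetical
# Format B notebook kept as markdown files in which every entry carries
# "Date:", "Page:" and "Signed:" header lines and writes cross-references
# as "see page N". Adapt the patterns to your own format key.
import re
from pathlib import Path

DATE_RE = re.compile(r"^Date:\s*\d{4}-\d{2}-\d{2}", re.MULTILINE)
PAGE_RE = re.compile(r"^Page:\s*(\d+)", re.MULTILINE)
SIGNED_RE = re.compile(r"^Signed:\s*\S+", re.MULTILINE)
XREF_RE = re.compile(r"see page (\d+)", re.IGNORECASE)

def audit(entry_dir: str) -> list[str]:
    """Return human-readable violations of Rules 1, 2, 3 and 8."""
    entries = [(p.name, p.read_text(encoding="utf-8"))
               for p in sorted(Path(entry_dir).glob("*.md"))]
    known_pages = {int(m.group(1))
                   for _, text in entries for m in PAGE_RE.finditer(text)}
    problems: list[str] = []
    for name, text in entries:
        if not DATE_RE.search(text):
            problems.append(f"{name}: no date (Rule 1)")
        if not PAGE_RE.search(text):
            problems.append(f"{name}: no page number (Rule 2)")
        if not SIGNED_RE.search(text):
            problems.append(f"{name}: no signature (Rule 3)")
        for m in XREF_RE.finditer(text):
            if int(m.group(1)) not in known_pages:
                problems.append(f"{name}: 'see page {m.group(1)}' does not resolve (Rule 8)")
    return problems

if __name__ == "__main__":
    for problem in audit("notebook/entries"):
        print(problem)
```

Rules 4 through 7, 9, and 10 still need a pair of human eyes; a script only catches the omissions that are cheapest to miss.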
Common pitfalls
- Treating format as something you will "clean up later." Later never comes; judges arrive on schedule.
- Erasing instead of striking through. Every erased correction costs criterion 10 credit.
- Skipping the signature because "we know who wrote it." Judges do not.
- Letting the ToC lag. Rule 9 says same day; a few days of lag is recoverable, a month of lag is not.
Where this points next
Tier 2 begins. N2.1 takes the PIL framework from Chapter V and turns it into a notebook entry.
📐 Reflection prompt (notebook-ready)
- Does every page written this week have a date, a page number, a signature, and consistent headers?
- Is every cross-reference in this week's entries resolving to an actual page?
- Is the Table of Contents current as of today?
Next up: N2.1 — PIL entries as notebook entries.
PIL entries as notebook entries
Turn a raw Problem Identification Log row into a judge-ready notebook entry.
Objective
Turn a raw Problem Identification Log row into a judge-ready notebook entry that documents the same artefact in the voice the rubric rewards.
Concept
Chapter V PM.4 already taught you what a Problem Identification Log is and how a team runs one. The PIL is the spreadsheet — the living row-per-problem artefact your team updates in real time, pairs with planner tasks, and reviews at stand-up. That section is about the operational tool. This section is about the notebook version of the same artefact. Same problem, same constraints, same success criteria — different voice, different reader. The spreadsheet row is written for the team. The notebook entry is written for a judge who has never met your team, your robot, or your season.
That distinction matters because a PIL spreadsheet and a PIL notebook entry fail in different ways. The spreadsheet fails when it is three weeks behind reality and nobody can remember what "Intake jam v2" referred to. The notebook entry fails when it reads like a spreadsheet — one-line shorthand, unexplained jargon, no context, no date, no owner, no "why this matters." A judge reading the spreadsheet row alone would have no idea whether your team understood the problem or just typed words into a cell. A notebook entry is what proves you understood. It is the artefact judges score under Identify the Problem, the very first criterion on the rubric, and it is also the artefact your team will cross-reference from every brainstorm, matrix, build log, and design review that follows. The PIL entry is the spine of the design process in the notebook. Everything else hangs off it.
📐 Engineering tip. A judge-rewarded PIL notebook entry does three things a weak one does not. First, it separates symptom from cause. Second, it reframes the problem as a design challenge ("How might we…"), not a complaint. Third, it names its own success criteria up front, before any solution is chosen, and it makes those criteria measurable.
The other thing to understand about PIL entries is that they have a lifecycle. The same entry is written once as an open problem — symptom, cause hypothesis, constraints, goals, success criteria, no resolution yet. It is revisited as an in-progress entry that cross-references the brainstorm page, the decision matrix, the chosen solution, and the build log. It is finally closed with a resolution entry that points at the test data and states whether the original success criteria were met. A strong notebook shows this arc. A weak notebook shows twenty open entries and no closures — every problem orphaned, every decision dangling. Resolution closes the loop. It also feeds criterion 6 (Repeat / Iterate), because a closed PIL entry is evidence that a loop actually finished.
Guided practice
Below is a fully written PIL notebook entry for a generic intake subsystem. Read it as a template. Every heading, every guiding sentence, every field is something you will reuse.
🖼 Image brief
- Alt: Scan or mock-up of a filled PIL notebook page showing the full template: heading, date, owner, status, rubric tag, related pages, symptom paragraph, context paragraph, candidate cause, How-might-we reframe, constraints list, goals list, success criteria, and next action.
- Source: Scan or mock-up of a filled PIL page in the team's chosen notebook format.
- Caption: A complete PIL notebook entry. Six minutes longer to write than the spreadsheet row, and worth every second.
The template: PIL Entry #[N] — [descriptive title]. Date opened, owner (name and role), status (Open / In progress / Resolved), rubric tag (Identify the Problem, criterion 1), related notebook pages (brainstorm, matrix, build log, test log — cross-referenced by page number). Then the body: Symptom (what the driver saw), Context (conditions under which it occurred), Candidate cause (framed as a hypothesis, not a certainty), How might we… (design-challenge reframe), Constraints (real constraints you are operating under), Goals (short-term and long-term), Success criteria (measurable — a percentage, a time, a count), and Next action (one sentence pointing at the brainstorm phase).
Now compare that to a weak version of the same entry: "Intake keeps jamming when we pick up two objects. Need to fix. — Alex." The weak version has no date, no conditions, no measurement, no hypothesis, no constraints, no success criteria, no "how might we" reframe, and no next action. A judge reads the first version and gives the team credit. A judge reads the second and gives none. Both took a similar amount of thinking. The first took about six minutes longer to write down.
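For Format B teams who draft entries digitally before printing, the same field list can be kept as a structured record so that no field is forgotten in the rush of a build session. The sketch below is a hypothetical illustration, not a required tool; the field names simply mirror the template above.

```python
# A hypothetical sketch of the PIL entry fields as a structured record, for
# Format B teams drafting digitally. Field names mirror the template above;
# no rubric requires this structure.
from dataclasses import dataclass, fields

@dataclass
class PILEntry:
    number: int = 0
    title: str = ""
    date_opened: str = ""        # e.g. "2025-10-03"
    owner: str = ""              # name and role
    status: str = ""             # Open / In progress / Resolved
    rubric_tag: str = "Identify the Problem (criterion 1)"
    related_pages: str = ""      # brainstorm, matrix, build log, test log
    symptom: str = ""            # what the driver saw
    context: str = ""            # conditions under which it occurred
    candidate_cause: str = ""    # framed as a hypothesis, not a certainty
    how_might_we: str = ""       # design-challenge reframe
    constraints: str = ""
    goals: str = ""              # short-term and long-term
    success_criteria: str = ""   # measurable: a percentage, a time, a count
    next_action: str = ""        # one sentence pointing at the brainstorm

def unfinished_fields(entry: PILEntry) -> list[str]:
    """Fields still blank -- fill each one, or write
    'Not yet known -- will update by [date]' before signing the page."""
    return [f.name for f in fields(entry)
            if getattr(entry, f.name) in ("", 0)]
```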
Independent exercise
Pick one open problem from your team's current PIL spreadsheet — ideally one that has been sitting in the "in progress" column for more than a few days. Write it up as a full notebook entry using the template above. Every heading present. Every field filled. If a field is genuinely not knowable yet, write "Not yet known — will update by [date]" rather than leaving it blank. Paginate and sign the page. Add it to your table of contents. Then go update the spreadsheet to match. The notebook is the source of truth; the spreadsheet is the index.
Common pitfalls
- Writing the symptom and stopping there. A symptom without a candidate cause is a complaint. A candidate cause — even an admittedly wrong one you later revise — is engineering.
- Copying the spreadsheet row verbatim. The spreadsheet is shorthand; the notebook is long-form. Judges cannot score shorthand.
- Omitting success criteria because "we'll know it when we see it." If you cannot state a measurable success criterion, you cannot close the loop.
- Leaving the PIL entry in the "open" state for ever. Resolution is what turns an Identify-the-Problem entry into a Repeat / Iterate entry.
- Writing the PIL entry the week before a competition, backfilled from memory. A PIL entry written on the same day the problem appeared is worth five written in hindsight.
Where this points next
N2.2 turns the "how might we" question from this entry into a brainstorm page with three annotated sketches.
📐 Reflection prompt (notebook-ready)
- Are symptom and candidate cause written as separate paragraphs, with the candidate cause framed as a hypothesis?
- Is the "How might we…" reframing present and does it point towards a design challenge rather than a complaint?
- Are success criteria measurable and do they include a number — a percentage, a time, a count?
Next up: N2.2 — Brainstorming pages.
Brainstorming pages
Turn a “How might we…” question into a brainstorm page with at least three labelled, annotated concepts.
Objective
Turn a "How might we…" question from a PIL entry into a single brainstorm page that shows at least three distinct, labelled, annotated concepts a judge can tell were considered in parallel.
Concept
A brainstorm page is the notebook entry that sits between a problem and a decision. It exists to prove two things to a judge. First, that your team actually considered more than one approach before committing. Second, that no idea was dismissed too early. These are the two failure modes the rubric's Brainstorm criterion was written to catch.
A brainstorm page with one sketch is evidence the team made up its mind before thinking. A brainstorm page with three sketches where two are obviously throwaway strawmen is evidence of a team pretending to have thought. Judges can tell. The fix is to brainstorm genuinely, then document the brainstorm with the same care you would give a build log.
Why three. Two forces a comparison but still feels like a false binary — the "either this or that" that teams use to justify a choice they had already made. Three forces genuine exploration because it is much harder to write three plausible options than two. If you can only produce two, your search space is still too narrow — widen it before you leave the brainstorm. You can, and often should, produce more than three; three is the floor, not the ceiling.
No idea is dismissed on this page. The brainstorm page is divergent. Its job is to widen the search space, not narrow it. Rejection happens on the next page, the decision matrix, where you can show your reasoning for picking one option over another. If you cross ideas out on the brainstorm page, you are collapsing divergence and convergence into one artefact, and the judge reading it cannot tell whether the rejected ideas were considered or just added for show.
📐 Engineering tip. Sketches can be hand-drawn, photographed from a whiteboard, or rendered digitally. Judges do not score on draftsmanship. They score on whether the sketch is labelled — arrows to named components, callouts for unfamiliar terms, a clear indication of what is moving and what is fixed. An unlabelled sketch is worth about the same as a blank page.
🖼 Image brief
- Alt: A brainstorm page showing three annotated concept sketches side by side, each with labelled arrows, a pros list, a cons list, and an open question. A "How might we" question is written across the top.
- Source: Scan or mock-up of a brainstorm page in the team's chosen notebook format.
- Caption: Three concepts, all labelled, none dismissed. The decision matrix on the next page will pick the winner.
Guided practice
Below is a worked brainstorm page for the "How might we…" question inherited from the PIL entry in N2.1. Copy the structure; replace the mechanism.
The template: Brainstorm Page — [subsystem, problem name]. Date, owner (with collaborators named), cross-reference to the originating PIL entry, and the "How might we…" question reproduced at the top. Then three or more concept blocks, each containing: a labelled sketch (or photograph of a whiteboard sketch), a one-line mechanism description, a pros list, a cons list, and one open question. At the bottom: a note stating that all concepts will advance to the decision matrix, with no concept dismissed at this stage. Signed, dated, page-numbered.
Independent exercise
Take the "How might we…" question from your own PIL entry in N2.1. On a single notebook page, produce at least three concept sketches, each with a one-line mechanism description, a pros list, a cons list, and one open question. Every sketch must be labelled — arrows, component names, a clear sense of what moves. Photograph a whiteboard if that is faster. Sign and date the page. Cross-reference it back to the originating PIL entry.
Common pitfalls
- Producing three sketches where two are obvious strawmen. Judges recognise false choices instantly and score the page as if it had one idea.
- Writing a brainstorm page after the team has already started building. If the build log precedes the brainstorm page in the notebook, the rubric credit is gone.
- Unlabelled sketches. A beautiful unlabelled drawing is worth less than an ugly labelled one.
- Mixing brainstorm and decision-matrix reasoning on the same page. Keep the pages separate so each one can be scored cleanly on its own criterion.
- Limiting brainstorming to solutions you know how to build. Independent inquiry (criterion 7) rewards researching something you do not yet know how to build.
Where this points next
N2.3 turns these three concepts into a weighted decision matrix that makes a defensible choice.
📐 Reflection prompt (notebook-ready)
- Are there three or more distinct concepts, not three variations of the same concept?
- Does every sketch have labels and arrows — can a judge identify what each piece is without asking?
- Is no idea dismissed on this page — does dismissal happen in the decision matrix?
Next up: N2.3 — Decision matrices.
Decision matrices
Turn a brainstorm page into a weighted decision matrix with a built-in honesty test.
Objective
Turn a brainstorm page into a weighted decision matrix that a judge can read and trust — with a built-in honesty test that prevents the matrix from being a story told in hindsight.
Concept
A decision matrix is a table. Rows are options, columns are evaluation criteria, cells are numeric scores, and a total column picks a winner. It is the artefact that satisfies rubric criterion 3, Select the Best Solution, and it is the bridge between the divergent thinking of a brainstorm page and the convergent commitment of a build log. Without it, your notebook has ideas and it has a robot, and no documented reason why one became the other.
The honesty test. The trap is that teams fill in matrices to justify decisions already made. You know the option you want to build. You pick scoring values that nudge it to the top. The matrix becomes a prop, not a tool. The fix: before you write any numbers, each team member privately writes down which option they think will win and why. Then score. Then compare. If the matrix agrees with everyone's prediction, consider widening the criteria or the options — you may be falling into groupthink. If the matrix disagrees with a prediction, the disagreement is the most valuable data the matrix produced, and it should be written up alongside the matrix itself.
The six required columns. Effectiveness (does it solve the problem), simplicity and reliability (how likely is it to just work), buildability (can you build it with current tools, skills, and parts), rule compliance (does it fit the constraints of the rules you are operating under), strategic value (does it align with your game plan for the season), and innovation or advantage (does it give you a unique edge). Use a one-to-five or one-to-ten scale; pick one and be consistent within the season. If some criteria matter more than others, assign weights — a one-to-three weight multiplier per column is enough. Weights are decided before scoring, not after.
📐 Engineering tip. The matrix should be followed by a paragraph of prose that interprets it. Numbers alone do not demonstrate thinking; a paragraph that says "the winner was Concept 2 on effectiveness and innovation but lost on buildability, so we committed to Concept 2 and built a parallel buildability-risk mitigation plan" demonstrates thinking. Judges score the paragraph as much as they score the table.
Guided practice
Below is a decision matrix picking between the three concepts from the N2.2 brainstorm page. Scores are one-to-five. Effectiveness and rule compliance are weighted ×2 because those are the make-or-break criteria this sprint.
| Criterion | Weight | C1: Asymmetric wheel compression | C2: Widened throat + passive funnel | C3: Active pre-funnel w/ extra motor |
|---|---|---|---|---|
| Effectiveness | ×2 | 3 → 6 | 3 → 6 | 5 → 10 |
| Simplicity / reliability | ×1 | 4 | 4 | 2 |
| Buildability (this sprint) | ×1 | 5 | 3 | 1 |
| Rule compliance | ×2 | 5 → 10 | 3 → 6 | 5 → 10 |
| Strategic value | ×1 | 3 | 3 | 4 |
| Innovation / advantage | ×1 | 3 | 2 | 4 |
| Weighted total | | 31 | 24 | 31 |
Predictions vs. matrix. Concepts 1 and 3 tied at the top. Concept 2 came in clearly lowest — a surprise to the team member who predicted it would win. The reason is that Concept 2's widened throat scored 3 on rule compliance because of a field-sizing constraint risk, and rule compliance carried a ×2 weight. If that constraint turns out to be non-binding, Concept 2 would climb by four points. Worth checking before finalising.
Tiebreaker reasoning. Concept 3 scores higher on effectiveness but cannot be built this sprint under the current motor port allocation. Concept 1 scores lower on effectiveness but can be built and tested within the week. Commit to Concept 1 for this iteration; open a new PIL entry to revisit Concept 3 when the iteration-three motor port budget is re-allocated.
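To make the arithmetic behind the weighted totals explicit, here is a minimal sketch; a spreadsheet column does the same job. The weights and scores are copied from the table above, and the helper function is illustrative, not part of any required tooling.

```python
# A minimal sketch of the weighted-total arithmetic in the matrix above.
# Weights and scores are copied from the table; everything else is illustrative.
weights = {"effectiveness": 2, "simplicity": 1, "buildability": 1,
           "rule_compliance": 2, "strategic_value": 1, "innovation": 1}

scores = {
    "C1 asymmetric wheel compression":
        {"effectiveness": 3, "simplicity": 4, "buildability": 5,
         "rule_compliance": 5, "strategic_value": 3, "innovation": 3},
    "C2 widened throat + passive funnel":
        {"effectiveness": 3, "simplicity": 4, "buildability": 3,
         "rule_compliance": 3, "strategic_value": 3, "innovation": 2},
    "C3 active pre-funnel w/ extra motor":
        {"effectiveness": 5, "simplicity": 2, "buildability": 1,
         "rule_compliance": 5, "strategic_value": 4, "innovation": 4},
}

def weighted_total(option: dict[str, int]) -> int:
    return sum(weights[criterion] * score for criterion, score in option.items())

for name, option in scores.items():
    print(name, weighted_total(option))   # 31, 24, 31

# Sensitivity check from the prose above: if C2's field-sizing constraint is
# non-binding, its rule-compliance score rises from 3 to 5 and its weighted
# total climbs by 4, to 28 -- still below the 31 shared by C1 and C3.
scores["C2 widened throat + passive funnel"]["rule_compliance"] = 5
print(weighted_total(scores["C2 widened throat + passive funnel"]))   # 28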
Independent exercise
Take the brainstorm page you produced in N2.2. Before you do anything else, write your private pre-score prediction on a scrap of paper. Then build the matrix with your team, using the six criteria above plus any weightings you want to set. Score each cell deliberately, one at a time. Total the weighted column. Compare the winner to your prediction. If they match, widen your criteria and run the matrix again. If they do not match, write a paragraph explaining the disagreement. Sign and date the page.
Common pitfalls
- Filling in the matrix after the solution is already half-built. If the build log date is earlier than the matrix date, the matrix is retroactive and judges will read it that way.
- Letting the winner drift as you score. If you find yourself nudging a cell from 3 to 4 because "otherwise Concept 2 would lose," stop and run the whole matrix again cold.
- Dropping criteria you find inconvenient. If buildability is going to kill your favourite option, you need buildability in the matrix, not absent from it.
- Listing alternatives in the matrix that were not on the brainstorm page. Every row must trace back to a sketched, annotated concept.
- Using a matrix with no weightings when some criteria obviously matter more. Unweighted matrices hide trade-offs; weighted matrices make them visible.
Where this points next
N2.4 documents the design review that converts this matrix into a committed solution with recorded feedback.
📐 Reflection prompt (notebook-ready)
- Are weights set before scoring and written at the top of each column?
- Are pre-score predictions recorded on the same page — the honesty test?
- Does a paragraph of prose follow the table, naming a winner, a second place, and a reason?
Next up: N2.4 — Design review documentation.
Design review documentation
Turn a design review meeting into a notebook entry with named feedback and committed action items.
Objective
Turn a design review meeting — which PM.7 in Chapter V taught you to run — into a notebook entry that captures named feedback, committed action items, and the decisions the meeting actually produced.
Concept
A design review is the team ritual where a subsystem lead presents a problem, a brainstorm, and a decision matrix to the rest of the team (and ideally a mentor or an adjacent team) for critique before building. PM.7 teaches the meeting. This section teaches the page. The page matters because the meeting is ephemeral — feedback said in a room is gone within an hour unless someone wrote it down.
A weak design review entry is a bulleted list of what was said. A strong one is structured so that every piece of feedback is traceable to a named source and every action item is assigned to a named owner with a deadline. "We got some feedback on the intake" is not a design review entry. "J. Patel raised concern about the calibration being too sensitive to battery voltage and recommended running the test-log baseline at two voltage levels — action: R. Garcia to re-test by 4 October" is a design review entry. The second one is scored under criteria 3 and 9 and feeds forward into the next test log.
📐 Engineering tip. The rubric rewards design reviews because they prove the team operates as a team. A single student can brainstorm, matrix, and build alone. Multiple students producing coordinated decisions through a named, dated meeting is evidence of the kind of project management the Team Management criterion specifically looks for. A design review entry is therefore double-scored — once for the decision quality (criterion 3) and once for the team coordination (criterion 9).
Guided practice
Template: Design Review — [subsystem, PIL reference]. Date and time, location, attendees (names and roles, including external reviewers), cross-references (PIL, brainstorm, matrix). Then seven numbered sections:
- Problem brief — summary of the PIL entry, presented by the subsystem lead.
- Review of brainstormed ideas — the concepts from the brainstorm page, presented.
- Decision matrix summary — the matrix outcome, presented.
- Proposed solution — what the team intends to build.
- Feedback received (direct attribution) — every piece of feedback attributed to a named person, each with an action item.
- Action items (summary) — checkbox list, one owner per item, each with a due date.
- Decision status — unambiguous: committed, tentative-pending-X, or deferred.
Signed by every attendee. Page-numbered.
Independent exercise
After your next team design review, write up the meeting using this exact template. Every section present. Every piece of feedback attributed to a named person. Every action item owned by one person with a due date. Cross-reference the PIL, brainstorm, and matrix pages the review discussed. Sign the page with the initials of every attendee.
Common pitfalls
- Writing the entry after the fact from memory. Take notes during the meeting, on the page itself.
- Grouping feedback into a single "discussion" bullet. The rubric rewards named attribution; anonymous bullets score as zero named feedback events.
- Action items assigned to "us" or "the team." If everyone owns it, nobody does. One name per action item.
- Burying the decision in a paragraph. End the entry with an explicit one-line decision status.
- Treating the design review as optional once the team is "moving fast." The faster you are moving, the more the review matters.
Where this points next
N3.1 picks up the decided solution and turns it into a daily build log.
📐 Reflection prompt (notebook-ready)
- Are attendees named, not grouped as "the team"?
- Is every piece of feedback attributed to one specific person?
- Does every action item have one owner, a verb, and a due date?
Next up: N3.1 — Build logs.
Build logs
Produce a build log entry that a team member six months from now could use to rebuild the same subsystem.
Objective
Produce a build log entry that a team member six months from now could use to rebuild the same subsystem without having been there when you built it.
Concept
A build log is the running record of what was actually built, by whom, in what session, with what problems encountered and what decisions made in the moment. It is the artefact that satisfies rubric criterion 4, Build / Programme the Solution, and it is the single easiest entry to write badly. The default failure mode is writing "Worked on the intake today" and treating that as a log. That is not a log. It is a timestamp.
The rubric rewards build logs because judges cannot watch you build. They have to infer the work from what you wrote down. That inference fails if the log says "adjusted the mounting" and it succeeds if the log says "moved the left-side mounting bracket outward by one hole position (from position 4 to position 5) because the C-channel was fouling the flex wheel at full compression; photograph taken before and after; change made permanent with nylock nut; screw replaced with a longer one from the hardware bin."
🔧 Build tip. Build logs are daily. This rule matters more than any other in Tier 3. Missing a day is acceptable; batching a week's worth of logs at the end of the week is not. A log written the same day the work happened contains details — which hardware bin, which exact screw, which driver was present, which battery was used — that vanish from memory within forty-eight hours.
🖼 Image brief
- Alt: A build log page showing the full template: iteration tag, date, session length, attendees, subsystem, cross-references, before state, goal, what-was-built numbered list, challenges encountered, on-the-fly decisions, after state, what's next, and signature line.
- Source: Scan or mock-up of a build log page in the team's chosen notebook format.
- Caption: A complete build log. The "challenges encountered" and "on-the-fly decisions" sections are where the engineering detail lives.
Guided practice
Template: Build Log — Iteration [N] — [subsystem] — [descriptive title]. Date, session length, present (names), subsystem, iteration reference, cross-references (decision matrix, design review, PIL). Then the body: Before state (what the subsystem looked like before this session, with a photograph reference), Goal for this session (what you set out to do), What was built (numbered list, specific enough to reproduce), Challenges encountered (problems that arose during the build and how you dealt with them), On-the-fly decisions (departures from the plan and why), After state (what the subsystem looks like now, with a photograph reference), What's next (pointer to the next entry). Signed by every contributor. Page-numbered.
Independent exercise
At the end of your next build session, before you leave the team room, write a build log entry using the structure above. Every heading. Every field. Photograph the before, the during, and the after state; caption each photograph with a reference code that appears in the log. Sign and date the page. Add it to the table of contents.
Common pitfalls
- Writing "worked on the intake." Not a log.
- Batching a week of logs on Friday. Retroactive logs are always visibly thinner than contemporaneous ones.
- Omitting photographs. A build log without photographs requires the judge to take your word for the work.
- Smoothing over the on-the-fly decisions. The moment you noticed the friction tape was not sticking and switched to cleaning the surface is exactly the kind of engineering detail the rubric rewards.
- Using "we" without naming who. One builder, one owner per action.
Where this points next
N3.2 turns the built subsystem into a test log with quantitative results.
📐 Reflection prompt (notebook-ready)
- Is the date today, not last week?
- Are before state and after state both documented, ideally with photograph references?
- Is the "what was built" section specific enough that a team member who was not there could reproduce it?
Next up: N3.2 — Test logs.
Test logs
Produce a test log with quantitative results that a judge reads as evidence of engineering rigour.
Objective
Produce a test log entry with quantitative results that a judge can read as evidence of engineering rigour rather than "we tried it."
Concept
A test log is the notebook entry that converts a built subsystem into data. It is the artefact that satisfies rubric criterion 5, Test the Solution, and the key word in the criterion is quantitative. "It worked" is not a test log. "Three of five cycles under eight seconds, best cycle 6.2 seconds, worst cycle 12.1 seconds; the 12.1-second run had visible chain slip at the 4-second mark" is a test log. The difference between those two sentences is the difference between an unscored entry and a scored one.
Quantitative data is non-negotiable but it is not enough on its own. A test log also needs a hypothesis written before the test runs, a procedure specific enough to reproduce, conditions recorded (battery voltage, field state, driver, which subsystem revision), and a conclusion that explicitly compares the results to the hypothesis. The hypothesis matters because a test without one is just a demonstration — and demonstrations are not tests.
📐 Engineering tip. Judges have seen thousands of test logs. The ones they remember are the ones where the team clearly expected a particular result, measured carefully, and then wrote about what actually happened — including when the result was different from the hypothesis. A surprising result documented honestly is worth more than a confirming result written up casually.
Guided practice
Template: Test Log — Iteration [N] — [subsystem] — [descriptive title]. Date, session time, driver (same driver as baseline if possible), conditions (battery voltage, field state, subsystem revision), cross-references (PIL, build log, baseline test log). Then the body:
- Hypothesis — specific and falsifiable, written before the test.
- Procedure — numbered steps, specific enough that a different team member could run the same test tomorrow.
- Quantitative results — a table of individual runs (not just a summary), with columns for run number, outcome, cycle time, battery voltage, and notes.
- Summary — aggregate statistics: stall rate, mean cycle time, best, worst (a short computation sketch follows this template).
- Qualitative results — observations not captured by numbers (visible deformation, audible chain slip, inconsistent behaviour at low battery).
- Conclusion vs. hypothesis — explicitly state whether the hypothesis was supported, and what the next action is.
Signed. Page-numbered. Cross-referenced.
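To make the Summary field concrete, here is a minimal sketch of the aggregate statistics computed from the run-by-run table. The run data below is invented purely for illustration (it echoes the five-run example in the Concept section) and the field layout is hypothetical.

```python
# A minimal sketch of the Summary statistics for a test log, computed from an
# invented run-by-run table: (run number, stalled?, cycle time s, battery V).
runs = [
    (1, False, 6.2, 12.6),
    (2, False, 8.6, 12.5),
    (3, True, 12.1, 12.4),   # visible chain slip at the 4-second mark
    (4, False, 7.9, 12.4),
    (5, False, 6.8, 12.3),
]

cycle_times = [t for _, _, t, _ in runs]
stall_rate = sum(1 for _, stalled, _, _ in runs if stalled) / len(runs)

summary = {
    "runs": len(runs),
    "stall_rate": f"{stall_rate:.0%}",
    "mean_cycle_time_s": round(sum(cycle_times) / len(cycle_times), 1),
    "best_cycle_s": min(cycle_times),
    "worst_cycle_s": max(cycle_times),
}
print(summary)
# {'runs': 5, 'stall_rate': '20%', 'mean_cycle_time_s': 8.3,
#  'best_cycle_s': 6.2, 'worst_cycle_s': 12.1}
```

The raw run-by-run table still goes in the notebook; the summary supplements it, it does not replace it.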
Independent exercise
Pick one thing on your robot that you believe works. Write a hypothesis stating the level at which you believe it works — a specific number, not "it works." Design a procedure with at least twenty repetitions. Run the procedure. Record every run in a table. Write a conclusion that explicitly compares results to hypothesis. Sign and date. Cross-reference the build log for the subsystem under test.
Common pitfalls
- "It worked" as the entire results section. Unscored.
- A summary statistic with no underlying runs. Judges want to see the raw data, not just the mean.
- A hypothesis written after the test. If the hypothesis happens to match the result perfectly and the log is dated the same day, it reads as backfill.
- Doing three runs and calling it a test. Small-sample tests hide variance. Twenty is a reasonable floor.
- Recording results without conditions. A cycle time measured on a full battery and a cycle time measured on a dying battery are different data.
Where this points next
N3.3 synthesises the build and test logs for this iteration into an iteration reflection.
📐 Reflection prompt (notebook-ready)
- Is the hypothesis written before the test, and is it specific enough to be checked against a number?
- Is the procedure specific enough that a different team member could run the same test tomorrow?
- Does the conclusion explicitly name whether the hypothesis was supported?
Next up: N3.3 — Iteration reflections.
Iteration reflections
Close the loop on an iteration with a five-question reflection page.
Objective
Produce an iteration reflection page at the end of every iteration that closes the loop on the iteration's goals, names what was learnt, and seeds the next iteration.
Concept
Criterion 6 of the rubric — Repeat / Iterate — rewards evidence that your team actually ran iteration loops across the season. Build logs prove work happened. Test logs prove measurement happened. Iteration reflections prove learning happened. Without a reflection page, a judge reading your notebook sees work and data but no explicit moment where the team turned that data into a better version of the robot.
An iteration reflection is written once per iteration, not daily. An iteration is a deliberate unit — often 2–3 weeks, often bounded by a scrimmage or a design review — and the reflection page is the capstone that closes it. The next iteration's PIL entries and goals should be seeded directly from this reflection, creating an explicit trail across the notebook. When a judge reading the notebook can trace that chain through several loops, criterion 6 is satisfied.
📐 Engineering tip. A strong reflection answers five questions, in order. What was the goal of this iteration? What did we actually build, programme, or change? What worked? What did not? What will we change in the next iteration, and why? The last question is load-bearing — if the reflection names what will change and does not explain why, it reads as arbitrary.
Guided practice
Template: Iteration Reflection — Iteration [N] — [descriptive title]. Date (end of iteration), iteration window (from date X to date Y), authors, cross-references (PILs, brainstorm, matrix, design review, build logs, test logs, previous iteration reflection). Then five numbered sections:
- What was the goal of this iteration?
- What did we build / programme / change? — specific, dated, cross-referenced.
- What worked?
- What did not work? — at least one item. If this section is blank, the reflection is not credible.
- What will we change in the next iteration, and why? — the "why" is the connective tissue.
Optionally: a "Lessons that apply beyond this iteration" paragraph for insights the team wants to carry forward to all future sprints. Signed by every contributor. Page-numbered.
Independent exercise
At the end of your current iteration, write a reflection page using the five-question structure above. Answer each question with at least one specific, dated, cross-referenced fact. Name at least one thing that did not work. Name at least one thing you will change in the next iteration, and the why for that change. Sign the page with every contributor's initials.
Common pitfalls
- Writing the reflection as a victory lap. If every iteration reflection is positive, none are credible.
- Skipping the "why" on the changes. "We will redesign the intake next iteration" is not a reflection; "we will redesign the intake because the close-pair stall mode accounts for 10% of all failures and will not respond to further tuning" is.
- Writing one reflection for the whole season instead of per-iteration. The iteration is the unit.
- Not cross-referencing forward. The reflection should seed the next iteration's PIL entries by filing them as part of the reflection write-up.
- Omitting retrospective honesty about design-review debt, timing pressure, or bad calls made under deadline.
Where this points next
N3.4 turns match data from a completed event into the next iteration's PIL entries through a competition review.
📐 Reflection prompt (notebook-ready)
- Are all five questions answered, in order, on one page?
- Is at least one "did not work" item named honestly?
- Does the "what will we change" section include a why for each change?
Next up: N3.4 — Competition reviews.
Competition reviews
Turn the data and notes from a completed event into new PIL entries that seed the next iteration.
Objective
Turn the data, video, and notes from a completed event into a notebook entry that generates new PIL entries and seeds the next iteration.
Concept
A competition review is the notebook's converter between the chaos of a live event and the calm of the next sprint. Matches happen fast, nothing gets written down in the moment, drivers and builders see different things, and everyone is tired by the end of the day. Without a structured post-event entry, all of that signal leaks. A competition review captures it in one sitting, within 24 hours of the event, and turns it into specific PIL entries your team can act on the following week.
The entry is scored under Test the Solution (every match is a test under the most adversarial conditions you have all season) and under Repeat / Iterate (the new PIL entries are the visible output of the loop closing).
⚡ Competition tip. The discipline is twofold. First, the review is written quickly — within 24 hours, while memory is still fresh. A competition review written three weeks after the event is indistinguishable from a story. Second, the review is written completely — every section, even the ones that feel uncomfortable, because the uncomfortable sections are where the next iteration's best PIL entries come from.
Guided practice
Template: Competition Review — [Event], [Division]. Date of event, date this review written (within 24 hours), attendees, cross-references (current iteration reflection, current PIL entries, match video folder). Then eight numbered sections:
- Event summary — one-sentence record and outcome.
- Match-by-match observations — a table with columns for match, result, stall events, autonomous result, and notes (an aggregation sketch follows this template).
- What broke — specific failure modes with conditions. At least one item.
- What worked — specific confirmations of prior test data.
- Alliance-partner feedback — attributed where possible.
- Driver reflections — first-person notes from the driver.
- New PIL entries generated by this review — each with an owner.
- Reflection note — overall assessment and sprint-weighting recommendation for the next iteration.
Signed by every attendee. Page-numbered.
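For section 2, the match table is easiest to compare against prior test data once it is aggregated. A minimal sketch, assuming the observations were captured as a simple list of per-match records; every value shown is a placeholder.

```python
# Minimal sketch: turn match-by-match observations into aggregates that can be
# compared against the most recent bench test log. All data is illustrative.
matches = [
    {"match": "Q12", "result": "W", "stalls": 0, "auton": "scored"},
    {"match": "Q23", "result": "L", "stalls": 2, "auton": "missed"},
    {"match": "Q31", "result": "W", "stalls": 1, "auton": "scored"},
    {"match": "Q45", "result": "W", "stalls": 0, "auton": "scored"},
]

total_stalls = sum(m["stalls"] for m in matches)
auton_rate = sum(m["auton"] == "scored" for m in matches) / len(matches)

print(f"Matches played: {len(matches)}")
print(f"Stall events per match: {total_stalls / len(matches):.1f}")
print(f"Autonomous success: {auton_rate:.0%}")
```

If the in-match stall rate is worse than the most recent bench test, that gap is a ready-made PIL entry for section 7.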
Independent exercise
After your next event, within 24 hours, write a competition review using this template. Every section present. Every match listed. At least one "what broke" item. Every observation turned into either an addition to an existing PIL entry or a new one. File the new PIL entries as part of the review.
Common pitfalls
- Writing the review a week later. Memory decays; specific failure conditions become "I think the intake did something weird in one match."
- Skipping the "what broke" section because it felt like a good day. Every event has at least one observation that should seed next iteration's work.
- Not attributing alliance-partner feedback. The rubric rewards named sources.
- Leaving the review as a dead document instead of filing new PIL entries from it.
- Reconstructing the match-by-match observations from memory at the end of the day. Have one person take notes during matches if at all possible, even just on a phone.
Where this points next
N4.1 turns research into an Independent Inquiry page — the rubric criterion 7 entry that many notebooks never attempt.
📐 Reflection prompt (notebook-ready)
- Is this review written within 24 hours of the event?
- Does the "what broke" section have at least one item naming conditions specifically?
- Are new PIL entries filed as a direct output of the review, with owners and due dates?
Next up: N4.1 — Independent inquiry / research entries.
Independent inquiry / research entries
Write a research entry that cites outside sources and connects the learning to a specific choice on your robot.
Objective
Write a research entry that cites outside sources, summarises what was learnt, and connects the learning to a specific choice on your robot — satisfying the criterion most notebooks underweight.
Concept
Criterion 7, Independent Inquiry, is the rubric criterion that distinguishes a good notebook from a great one. It rewards research beyond what the team was handed. A good notebook documents the design process on the robot. A great one also documents the reading, the outside sources, the investigations the team undertook to learn something nobody told them to learn. This is also the criterion that most teams underweight or skip entirely, which means that a team that does it well is scored much higher than average almost automatically.
A research entry is not a homework assignment. It is a short investigation on a question the team chose, informed by sources the team cited, with a conclusion that connects to a real design decision the team made. "We looked at drivetrain types" is not a research entry. "We investigated three drivetrain configurations, cited two external sources (one paper, one referenced team post), summarised the trade-offs we learnt, and used the findings to inform our decision matrix on page 44" is a research entry. The difference is sources, and the difference is the connection back to a real decision.
📐 Engineering tip. Sources can be outside papers, textbooks, manufacturer datasheets, referenced posts on public forums, publicly posted notebooks from other teams, video tutorials, or direct conversations with mentors. The format of the citation does not need to be rigorous academic style, but every source must be identifiable enough that a judge could look it up themselves. "YouTube video" is not a citation. "Tutorial video titled [X] by [channel name], accessed September 2025" is.
The load-bearing part of a research entry is the connection paragraph at the end. This is the paragraph where you say: because of what we learnt from these sources, we made this specific choice on this specific page of our notebook. Without that connection, the research is a book report. With it, the research is engineering.
Guided practice
Template: Research Entry — [topic]. Date, owner, and a "Why this question" paragraph connecting the research to a current PIL entry or design question. Then: Sources consulted (numbered list, at least two, each identifiable enough to look up), Summary of findings (the team's own synthesis, not a copy of one source), Connection to our robot's design (the paragraph that makes this research engineering rather than a book report), and What we still want to learn (research is never complete, and saying so is credible). Signed. Page-numbered.
Independent exercise
Pick one question your team is currently facing that is not obvious from standard materials. Spend 60 minutes reading outside sources on it. Take notes. Then write the research entry using the structure above. At least two outside sources. One connection paragraph. One "what we still want to learn" list. Sign and date.
Common pitfalls
- Writing a research entry that does not cite anything. A research entry without sources is an opinion entry, and opinion entries are not scored under criterion 7.
- Citing one source and calling it research. One source is a reference; two or more sources with a synthesis is research.
- Pasting summaries from sources without synthesis. Judges can tell when a paragraph is a rewrite of a source; they look for the team's own voice.
- Skipping the connection paragraph. Research with no connection to the robot scores under criterion 7 but weakly.
- Treating the research entry as optional. Criterion 7 is one of the ten; teams that skip it are capped at 90% of the rubric regardless of how good the other nine are.
Where this points next
N4.2 turns the game analysis into a living, versioned document that contributes to criteria 1 and 7.
📐 Reflection prompt (notebook-ready)
- Are at least two outside sources cited in a form that someone else could look up?
- Does a connection paragraph link the research to a specific decision on a specific page of the notebook?
- Does the entry end with a "what we still want to learn" section?
Next up: N4.2 — Game analysis entries.
Game analysis entries
Maintain a versioned game analysis section that evolves through the season.
Objective
Maintain a versioned game analysis section that evolves through the season as the team learns more about the game — contributing to both criterion 1 and criterion 7.
Concept
Game analysis is the part of the notebook where your team documents its understanding of the game itself — the scoring priorities, the rules that affect design, the strategic priorities, the autonomous considerations, the endgame trade-offs. It is one of the few notebook sections that is not pegged to a specific robot subsystem, and it is one of the few that is expected to change across the season as the team learns.
Unlike a build log, which is written once and never edited, a game analysis is versioned. Version 0.1 is the team's first-pass understanding in the first week. Version 1.0 is after the first competition, when real data comes in. Version 2.0 might be mid-season after a rule clarification or a strategic realisation. Each version is dated, authored, and left in the notebook — the old versions are not deleted, because the history is the evidence of learning.
⚡ Competition tip. A strong game analysis section serves two rubric criteria simultaneously. It feeds criterion 1 (Identify the Problem) because it is where the team names the strategic problems the robot must solve. And it feeds criterion 7 (Independent Inquiry) because good game analysis involves research the team chose to do: watching matches from other regions, reading rule clarifications, analysing published strategy posts.
Guided practice
Template: Game Analysis — Version [X.Y] ([description]). Date, owner (with inputs from named team members), status. Then nine sections:
- Game overview — one-paragraph summary in plain language.
- Field layout — labelled diagram with every significant zone, scoring target, and starting position.
- Game objects — table of every object type, quantity, value, and handling rules.
- Scoring breakdown with priority ranking — the value-dense part. Distinguish between point values (rules) and team priorities (opinions); a rate-calculation sketch follows this template.
- Rule analysis — specific rules that affect design choices, cited by rule number, each followed by one sentence on design impact.
- Autonomous period strategy — what the autonomous period should accomplish.
- Endgame strategy — including the trade-off between scoring and endgame actions.
- Strategic priorities for our robot — three to five concrete sentences that subsequent PIL entries can cite.
- Open questions for next version — everything you do not yet know and plan to investigate.
For subsequent versions (1.0, 2.0, etc.): add a What changed since Version [previous] section at the top, explicitly naming the prior version, the date of change, and the reason. Signed. Cross-referenced to the previous version's page number.
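The distinction in section 4 between point values and priorities is easiest to defend with a rough rate calculation. A minimal sketch; the actions, point values, and cycle times are entirely hypothetical and not taken from any real game.

```python
# Minimal sketch: rank scoring actions by points per second rather than by raw
# point value. Every action, point value, and cycle time here is hypothetical.
actions = [
    {"action": "low goal",  "points": 1,  "cycle_s": 2.0},
    {"action": "high goal", "points": 5,  "cycle_s": 12.0},
    {"action": "endgame",   "points": 20, "cycle_s": 30.0},  # once per match
]

for a in sorted(actions, key=lambda a: a["points"] / a["cycle_s"], reverse=True):
    rate = a["points"] / a["cycle_s"]
    print(f"{a['action']:<10} {a['points']:>3} pts  {a['cycle_s']:>5.1f} s/cycle  "
          f"{rate:.2f} pts/s")
```

In this made-up example the low-value target out-scores the high-value one per second of cycle time, which is exactly the kind of opinion a priority ranking should record and justify.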
🖼 Image brief
- Alt: A hand-drawn or digitally rendered field layout diagram with labelled zones, scoring targets, starting positions, and reference numbers for cross-referencing.
- Source: Photograph of a team's actual field-layout diagram from their notebook, with zones clearly numbered.
- Caption: Section 2 of a game analysis page: every zone labelled, every target numbered.
Independent exercise
Write a Version 0.1 game analysis for your current season using the nine-section template above. If you are early in the season, block two hours and do it thoroughly, with the whole team in the room for sections 4 and 8 at minimum. If you are mid-season, write a new version that explicitly names what has changed since the previous version. Every version is signed, dated, and cross-referenced to the previous one.
Common pitfalls
- Writing one game analysis in August and never updating it. Criteria 6 and 7 both reward versioning.
- Confusing point values (fixed by the rule book) with priorities (the team's opinion). A priority ranking that matches the point values exactly will convince a judge the team has not yet formed a strategic opinion.
- Skipping the rule analysis because "we know the rules." The rule analysis is where the notebook proves the team understood the rules well enough to let them constrain the design.
- Deleting earlier versions when the strategy changes. Keep them. The evolution is the evidence.
Where this points next
N4.3 turns the team identity and project-management section into its own rubric-scored artefact.
📐 Reflection prompt (notebook-ready)
- Are all nine sections present and filled in?
- Does the scoring priority ranking distinguish between point values (rules) and team priorities (opinions)?
- If this is not the first version, does the "what changed" section explicitly name the prior version and the date of change?
Next up: N4.3 — Team identity and project management sections.
Team identity and project management sections
Turn the team identity section into a living project-management record that demonstrates criterion 9 across the full season.
Objective
Turn the team identity section from N1.3 into a living project-management record that demonstrates criterion 9 across the full season — roles, values, timeline, and honest reflection on team development.
Concept
Criterion 9, Team and Project Management, is scored on more than just the team roster. A judge looking for criterion 9 evidence wants to see: clear roles that people actually occupy, a season timeline the team is working against, a set of values or norms the team has written down and is accountable to, and evidence of reflection on how the team itself is developing as a group. The roster alone — the thing you wrote on day one in N1.3 — is the start of the criterion, not the whole of it.
📐 Engineering tip. The four-axis skill self-evaluation introduced in N1.3 (coding / design-build / test-drive / game analysis) becomes especially valuable if the team re-runs it mid-season and again at the end of the season. The delta between the August scores and the March scores is the team's own visible development curve, and it is exactly what the rubric rewards.
Guided practice
Six elements to add on top of the N1.3 foundation:
- Roles (updated from N1.3) — current role table, with dated change notes where roles have shifted mid-season.
- Season timeline (Gantt chart) — a month-by-month bar chart showing the five phases of the engineering cycle across the season, per subsystem, with known event dates overlaid (a plotting sketch follows this list). Updated at the start of each iteration to reflect what slipped, what advanced, and what was added.
- Team values and collaborative norms — not vague platitudes ("we value respect") but operational norms ("if two members disagree on a design decision, we run a decision matrix before committing"; "every stand-up is ten minutes and we enforce the time"). Written early, revisited mid-season, amended when a norm is not working.
- Skill self-evaluation over time — the four-axis table re-run at three points: week 1, mid-season, end of season. The deltas are the visible development curve.
- Reflection on team development — short paragraph entries, written every month, naming what the team has noticed about itself. These are the most personal entries in the notebook and the highest-leverage ones for criterion 9.
- Season goals — revisited — the season-goals page from N1.3, re-visited with current progress notes. Every goal marked as on-track, at-risk, or achieved.
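If the team prefers a plotted timeline to a hand-drawn one, the Gantt chart is a few lines of matplotlib. A minimal sketch; the subsystems, month spans, and event dates are placeholders, and matplotlib is assumed to be available.

```python
# Minimal sketch: a season Gantt chart with matplotlib. Subsystems, phase
# spans, and event months are placeholders, not a real season plan.
import matplotlib.pyplot as plt

# (start_month, duration_months) per phase, counted from the season start.
plan = {
    "Drivetrain": [(0, 2), (2, 3)],
    "Intake":     [(1, 2), (3, 3)],
    "Autonomous": [(2, 4)],
}
event_months = [3, 6]  # placeholder event dates, marked as milestones

fig, ax = plt.subplots(figsize=(8, 2.5))
for row, (subsystem, spans) in enumerate(plan.items()):
    ax.broken_barh(spans, (row - 0.3, 0.6))
for m in event_months:
    ax.axvline(m, linestyle="--")

ax.set_yticks(range(len(plan)))
ax.set_yticklabels(plan.keys())
ax.set_xlabel("Months since season start")
plt.tight_layout()
plt.savefig("season_gantt.png")  # print and paste into the notebook
```

A hand-drawn chart earns exactly the same credit. What the rubric rewards is that the chart exists, reflects reality, and is revised when the plan slips.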
🖼 Image brief
- Alt: A Gantt chart showing five subsystems (drivetrain, intake, scoring mechanism, endgame mechanism, autonomous) across eight months, with event dates marked as milestones.
- Source: Photograph or mock-up of a team's season timeline from their notebook.
- Caption: A season timeline: honest about slips, updated at every iteration start.
Independent exercise
Take your existing team identity section from N1.3 and extend it with the six elements above. Draw the Gantt chart for your actual season — real months, real subsystems, real event dates. Re-run the four-axis skill evaluation if you did it in August and you are now at least eight weeks later. Write at least one team-development reflection paragraph about something real that changed on your team. Sign the page with everyone's initials.
Common pitfalls
- Writing the team-identity section once and leaving it frozen. Frozen sections do not earn criterion-9 credit.
- Using vague team values ("we respect each other"). Judges reward operational norms, not platitudes.
- Drawing the Gantt chart once and never updating it when plans slip. A Gantt chart that has never been revised is a plan, not a project-management artefact.
- Skipping the reflection paragraphs because they feel awkward. They are the single highest-leverage entry in the section.
- Running the skill self-evaluation only once. Without a second column, there is no visible development.
Where this points next
N4.4 extends the team identity section into outreach and programme identity.
📐 Reflection prompt (notebook-ready)
- Does the role table reflect the team as it exists today, not as it was on day one?
- Has the Gantt chart been updated at least once since it was first drawn?
- Is at least one team-development reflection paragraph present and dated?
Next up: N4.4 — Outreach and programme identity.
Outreach and programme identity
Document the team’s role in the larger ecosystem — outreach, mentorship, programme pipeline.
Objective
Document the team's role in the larger ecosystem — outreach, mentorship, programme pipeline — in a way that makes your team visibly distinct from an isolated robot-only operation.
Concept
Outreach documentation is the optional-but-judge-impressive part of the notebook. It is not directly tied to a single rubric criterion the way the design-process spine is. But it contributes to criterion 9 (Team / Project Management) and it is specifically what judges consider for Excellence-type awards, which reward programmes that extend beyond the single team.
The pitfall is writing this section as marketing. "We love to share our passion for robotics" is a marketing sentence, not a notebook entry. The fix is the same as every other rubric criterion: specificity, dates, names, and outcomes. "On 15 October we hosted a two-hour introductory workshop for six students from our school's younger programme. Workshop materials in appendix. Attendance photographed. Feedback collected: three students asked about joining the programme next year" is an entry. It is scoreable, it is real, and a judge can follow up on any part of it.
📐 Engineering tip. This section is usually short — three to six pages at most — and it does not need to be updated daily. It is updated when an outreach activity actually happens. If no outreach has happened, the section is small and honest; do not fabricate outreach to fill the page.
Guided practice
Template: Programme Identity and Outreach Section. Date first written, last updated, owner. Then six sub-sections:
- Programme context — one page introducing the broader programme your team operates within (if any).
- Pipeline activities — mentoring younger students — running log of activities supporting younger tiers. Each entry names: date, activity, who from your team led, who attended, what was covered, and one concrete outcome.
- Peer-to-peer mentorship — log of activities supporting or collaborating with other competitive teams.
- Community outreach — log of activities engaging with a non-robotics community (school open houses, public demos, career fairs).
- Events hosted or co-hosted — list of events your team organised, if any.
- Reflection on programme role — short paragraphs about how the team thinks about its role in the larger ecosystem.
Independent exercise
Write a programme identity section of whatever length is honest to your team's current activities. If your team does no outreach, write a short section that describes the programme context honestly and names one outreach commitment you plan to make this season — then revisit that commitment at the end of the season. If your team does a lot of outreach, log each event as it happens, using the per-entry structure above. Do not inflate; judges can tell.
Common pitfalls
- Writing the section as a marketing brochure. Judges filter marketing sentences out almost instantly.
- Inflating the outreach log with activities that did not really happen or that double-count one event as several.
- Skipping the section entirely because "we don't do outreach." A short, honest section is better than an absent one.
- Treating outreach as purely a judging tactic. Teams that do outreach for the rubric burn out quickly; teams that do it because they value the programme pipeline sustain it.
- Forgetting to capture photographs and attendance. One photograph per event is a huge credibility boost.
Where this points next
Tier 5 begins. N5.1 is the Innovate Award submission walkthrough — the capstone entry that references artefacts from all four other strands.
📐 Reflection prompt (notebook-ready)
- Does every logged activity have a date, a named lead from your team, and a concrete outcome?
- Is the section honest about the scale of your team's outreach?
- Does at least one reflection paragraph name what the team has learnt about its own role?
Next up: N5.1 — Innovate Award submission walkthrough.
Innovate Award submission walkthrough
Write a one-page Innovate Award submission that a panel of judges reads as the tip of a documented iceberg.
Objective
Write a one-page Innovate Award submission that a panel of judges reads as the tip of a documented iceberg — a novel aspect of your design backed by specific notebook pages across every stage of the engineering process.
Concept
The Innovate Award is given to a team whose design contains a genuinely novel element and who can prove it. The award is not given for the coolest idea on the field. It is given for the best-documented novel idea. A team with a mediocre novel system and an immaculate notebook beats a team with a brilliant novel system and a notebook that cannot point at it. Judges have seen every version of every mechanism before. What they have not seen is your team's engineering process around one specific choice. That process, documented, is the submission.
The submission itself is short. The form asks for three things: a brief description of the novel aspect, the page numbers where its development is documented, and an explanation of why the submission is unique. That is it. One page. Everything else that makes the submission work lives outside that one page, on the referenced notebook pages, and has been accumulating since the season started.
⚡ Competition tip. The best teams pick a candidate novel aspect by the end of the first iteration and then deliberately over-document that aspect for the rest of the year. There is no way to write this submission well at the beginning of the season, because the evidence is not there yet. But there is also no way to write it well at the end of the season if the team did not know in advance what evidence they were accumulating.
The four tests of a strong submission
A strong Innovate submission passes four tests:
- Actually novel. Not "we used a lift" but "we used a lift with a driver-facing haptic feedback loop that we could not find documented anywhere else on the VEX Forum, GitHub, or any publicly posted notebook."
- Cites the space it searched. A team that writes "we believe this is unique" without naming where they looked is making a weaker claim than a team that writes "we searched the VEX Forum, GitHub, and eight publicly posted notebooks from the past two seasons and found no comparable implementation."
- Continuous documentation trail. A single page saying "we built this" is not a trail. A problem-identification entry that seeded it, a brainstorm page that considered alternatives, a decision matrix that picked this path, a build log that walks the reader through assembly, and a test log with measured results is a trail.
- Explains what the aspect does for the robot. Novelty without function is a stunt. Novelty that measurably improves a cycle time, a reliability rate, a driver workload, or a match outcome is an award.
🖼 Image brief
- Alt: A one-page Innovate Award submission form showing the three required sections: brief description, page references, and uniqueness explanation, with handwritten cross-reference arrows pointing to PIL, brainstorm, matrix, build, and test pages.
- Source: Mock-up of a filled Innovate Award submission page with the three sections and cross-reference annotations.
- Caption: The submission is one page. The evidence it points at is the whole notebook.
Guided practice
Template: Innovate Award Submission — Team [number]. Event name and date. Then three sections:
1. Brief description of the novel aspect. One tight paragraph naming the feature, what it does, the inputs it runs on, the outputs it produces, and the driver's relationship to it. No marketing language. No claims of uniqueness yet. A judge finishing this paragraph should be able to explain the feature to a colleague in two sentences.
2. Page numbers and sections where documentation lives. A cross-referenced map of the entire design process: PIL entries, research notes, brainstorm page, decision matrix, build/programming logs, test logs, iteration reflections. The references should span at least five different rubric criteria. A judge scanning this list sees that every stage of the engineering design cycle produced documentation for this one feature.
3. Why this submission is unique. Three things at once: where you looked (the search space), what you found (the existing alternatives, named and briefly described), and what your solution does differently — closing with quantitative impact. The search space is the difference between "we think this is unique" and "we searched and this is what we found."
Independent exercise
Pick one candidate novel aspect of your design. Write the three sections of the submission with real page numbers pointing at your actual notebook. For each referenced page that does not yet exist, note "PLANNED" and write one sentence describing what will go on that page when it is written. This is your Innovate Award roadmap. Review it at the end of each sprint.
Common pitfalls
- Submitting the coolest-looking subsystem instead of the most-documented one. If your mechanism is novel but only appears on three pages, and your localisation code is less flashy but appears on twelve, submit the localisation.
- Claiming uniqueness without a search. "We believe this is unique" is scored the same as saying nothing.
- Writing the submission as a single-page story with no page references. The cross-references are the entire point.
- Picking the novel aspect in the final week and backfilling the notebook. The backfill always shows.
- Treating the Innovate Award as separate from the rest of the notebook. The submission is a cover sheet for work you have already done for other reasons.
Where this points next
N5.2 surveys the other judged awards — Design, Excellence, Build, Amaze, Think — and how the same notebook supports each one.
📐 Reflection prompt (notebook-ready)
- Does the page reference list cite pages across at least five different rubric criteria?
- Does the uniqueness claim name the search space rather than asserting uniqueness without evidence?
- Does the submission close with measured impact, expressed quantitatively?
Next up: N5.2 — Design, Excellence, and other judged awards.
Design, Excellence, and other judged awards
Understand what each major judged award evaluates and how the same notebook supports all of them.
Objective
Understand what each of the major judged awards evaluates, and how the same notebook supports all of them — so the team can target awards deliberately rather than hoping.
Concept
There are several judged awards beyond the Innovate Award, and each one evaluates a different subset of what the notebook already contains. Teams that treat all judged awards as interchangeable — "we hope to win a judged award" — are less likely to win than teams that know which award their notebook is actually strongest for and prepare accordingly. This section does not teach you to write new material. It teaches you to point the material you already have at the right award.
| Award | What it evaluates | Notebook sections weighted heaviest | Primary rubric criteria |
|---|---|---|---|
| Innovate Award | One novel aspect of the design, backed by a continuous documentation trail | PIL → research → brainstorm → matrix → build → test → reflection, all pointing at one novel feature | 3, 7, and cumulative |
| Design Award | The strongest complete engineering-design-process story | Full design-process spine (Tier 2) across multiple subsystems, plus iteration reflections | 1, 2, 3, 4, 5, 6 |
| Excellence Award | The overall programme — robot, notebook, interview, performance, and outreach | Every section, with extra weight on team identity, season timeline, outreach, and final-iteration polish | All ten |
| Build Award | Construction quality of the physical robot, with notebook as supporting evidence | Build logs, iteration reflections, photographs of each iteration | 4, 6, 8 |
| Amaze Award | High on-field performance combined with solid notebook and interview | Game analysis, test logs, competition reviews, driver reflections | 1, 5, 6, 9 |
| Think Award | Programming innovation — autonomous routines, control systems, algorithms | Programming sections, control system research, autonomous development logs, test logs with code references | 4, 5, 7 |
⚡ Competition tip. At the start of the season, name one primary award target and one secondary. Do not target everything — spreading preparation thin produces a notebook that is weak at every award. For your primary target, identify which notebook sections the judges will weight heaviest and commit to over-documenting those sections.
Guided practice
How to use the table above:
- At the start of the season, name one primary and one secondary award target.
- For your primary target, identify which notebook sections judges will weight heaviest and commit to over-documenting those sections.
- At each event, review the target. If the primary has been won, pivot to the secondary.
- Update the table of contents before each event to reflect the current target. Put the sections that support the primary target first or highlight them.
Cross-award shared infrastructure. Most of the work you do for any one award serves multiple awards. A strong PIL → brainstorm → matrix → build → test loop serves Innovate and Design and Excellence simultaneously. You do not write a different notebook for each. You write one good notebook and point it at the right award in the interview.
Independent exercise
As a team, pick a primary award target for your next event and a secondary target. Write this decision on a single page in the notebook, with a one-sentence justification per award. Then review your notebook's current state against the "sections weighted heaviest" column for your primary target. Any gaps? File them as PIL entries.
Common pitfalls
- Targeting every award. No team wins every award; notebooks that try to look equally strong everywhere look equally thin everywhere.
- Picking Innovate as a primary without a genuinely novel aspect. Innovate is hard to win without a real innovation.
- Picking Excellence as a primary when the team is under-resourced. Excellence rewards programmes with reach; a small team with a great robot is better positioned for Design or Think.
- Not revisiting the target mid-season. A target set in August is based on assumptions; a target reviewed in November is based on data.
Where this points next
N5.3 is the pre-event checklist that converts all of this preparation into an event-ready artefact.
📐 Reflection prompt (notebook-ready)
- Does your team have one primary and one secondary award target, written down and signed?
- Have the notebook sections weighted heaviest for the primary target been reviewed for gaps?
Next up: N5.3 — Preparing the notebook for an event.
Preparing the notebook for an event
A printable pre-event checklist that catches dumb-mistake rubric losses before the judges find them.
Objective
Produce a pre-event checklist that catches the dumb-mistake rubric losses — the missing page numbers, the broken cross-references, the lagging ToC — before the judges find them.
Concept
Every experienced team has lost rubric points at an event for reasons that had nothing to do with engineering. A cross-reference to page 73 that does not exist because the pagination shifted. A table of contents three weeks out of date. A PIL entry from the last sprint that never got a notebook page because everyone assumed someone else would write it. These are not engineering failures. They are hygiene failures, and they come off the rubric just as hard.
The pre-event notebook review is the hygiene pass that catches them. It is done the week before every event, not the night before, because some of the findings require an additional hour of writing, not just a format fix.
⚡ Competition tip. Two hours, one team member leading, every finding logged and resolved before the notebook leaves the team room. The first time the team runs this checklist, it will take three hours and find ten problems. By the third time, it takes ninety minutes and finds two.
Guided practice
The pre-event checklist. Copy it onto its own notebook page and run it before every event.
Section A — Format compliance (criterion 10)
- Every page has a date.
- Every page has a page number, and page numbers are sequential with no gaps or duplicates.
- Every page is signed.
- Every entry that spans multiple pages has continuation markers on both sides.
- No blank half-pages are left open in the middle of entries.
- All corrections are single strikethroughs with initials, not erasures.
- Section headers are consistent within each entry type.
Section B — Table of Contents and cross-references (criterion 8)
- The Table of Contents lists every page in the notebook.
- The ToC is current — no entries written in the last week are missing.
- Every cross-reference ("see p. N") is spot-checked on at least 10 entries picked at random. Every one resolves (typed notebooks can automate this pass; see the sketch after Section G).
- Any "TBD" or placeholder references are resolved or explicitly marked as open.
Section C — Sprint and iteration completeness (criteria 1, 4, 5, 6)
- Every PIL entry from the most recent sprint has a corresponding notebook page.
- Every open PIL entry has a current status.
- The most recent iteration has a reflection page (N3.3 format).
- The most recent competition review (if there was a prior event) is complete and has generated new PIL entries.
- The current iteration's build log, test log, and reflection are all in place for every active subsystem.
Section D — Team identity and programme management (criterion 9)
- The team identity section reflects the team as it exists now.
- The season timeline / Gantt chart has been updated at least once this iteration.
- Outreach section (N4.4) has been updated if any outreach events occurred since the last review.
Section E — Award submission readiness (if applicable)
- If targeting the Innovate Award, the submission form (N5.1) is written, the referenced pages exist, the uniqueness claim is defensible, and every cited page number resolves.
- If targeting another award, the sections weighted heaviest for that award (see N5.2) have been reviewed specifically for completeness.
Section F — Physical preparation
- If using a handwritten notebook, the book is physically intact — no loose pages, no water damage.
- If using a typed notebook, the current printed copy is up to date with the digital source, and the printed copy is readable.
- Any supporting materials (photographs, supplementary pages, tabbed dividers) are in place.
Section G — Interview preparation (supporting N5.4)
- The notebook lead can find any page in the notebook within 30 seconds, starting from the ToC.
- At least two team members have rehearsed the 30-second pitch.
- At least two team members have rehearsed the 2-minute design story.
- Each team member can name at least one "failure" entry they would walk a judge through if asked.
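Teams that keep a typed notebook can automate parts of Sections A and B. A minimal sketch, assuming the notebook is exported as a single plain-text file in which every page begins with a line like "Page 37" and cross-references read "see p. 44"; the file name and both conventions are assumptions, not a standard format.

```python
# Minimal sketch: check pagination (Section A) and cross-references (Section B)
# in a typed notebook export. The page-header and reference formats are assumed.
import re

with open("notebook_export.txt", encoding="utf-8") as f:
    text = f.read()

pages = [int(n) for n in re.findall(r"^Page (\d+)\s*$", text, flags=re.MULTILINE)]
refs = [int(n) for n in re.findall(r"see p\.\s*(\d+)", text)]

# Section A: page numbers are sequential, with no gaps or duplicates.
expected = set(range(min(pages), max(pages) + 1)) if pages else set()
missing = sorted(expected - set(pages))
dupes = sorted({n for n in pages if pages.count(n) > 1})

# Section B: every cross-reference resolves to a page that exists.
broken = sorted({n for n in refs if n not in set(pages)})

print(f"Pages found: {len(pages)}  Missing: {missing}  Duplicates: {dupes}")
print(f"Cross-references: {len(refs)}  Broken: {broken}")
```

The script replaces the random spot-check with a full pass, but it does not replace the human review: a reference can resolve to a page that exists and still point at the wrong entry.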
Independent exercise
Before your next event, block two hours and run this checklist end to end. Log every finding in a findings table: finding number, section, description, owner, resolved. Resolve every finding before closing the review. Sign and date the review page. Keep the completed review page in the notebook as a permanent record — it is itself scoreable under criterion 8.
Common pitfalls
- Running the checklist the night before the event. Too late — several findings require a full work session to fix.
- Splitting the review across four people. The value comes from one person holding the whole notebook in their head during the pass.
- Skipping the spot-check on cross-references. This is the single most common source of dumb-mistake losses at events.
- Ignoring the findings log. A finding without an owner is a finding that will not be fixed in time.
Where this points next
N5.4 is the last section in this chapter: how to walk a judge through the notebook you just finished preparing.
📐 Reflection prompt (notebook-ready)
- Has the checklist been run at least once before every event?
- Does every finding have an owner and a resolution date?
- Are the completed review pages kept in the notebook, not thrown away?
Next up: N5.4 — The judging interview.
The judging interview
Walk judges through your notebook in a 10–15 minute interview, confidently and specifically.
Objective
Walk judges through your notebook in a 10–15 minute interview, confidently and specifically, so that the notebook's rubric claims are confirmed in real time rather than left to be inferred.
Concept
The judging interview is a ten-to-fifteen-minute conversation between a small panel of judges and your team, usually at an event, with your notebook on the table between you. The judges have sometimes skimmed the notebook in advance. Sometimes they have not. Either way, their time with your team is short, and their decision about which award to give will be influenced by what they remember from the interview — which is usually two or three specific moments, not an overall impression. Your job in the interview is to make those specific moments good ones.
The notebook is not a book you show to a judge; it is a book you navigate with a judge. "We worked on the intake a lot" is not navigating. "We worked on the intake across three iterations — here on page 51 is the first version, on page 64 is the second, and on page 78 is what we landed on. Can I walk you through one decision from page 44?" is navigating. The second response hands the judge something concrete to score and gives your team control of the conversation.
⚡ Competition tip. Two pieces of rehearsed content matter more than any other. The 30-second pitch is the elevator-pitch version of the team and the robot. The 2-minute design story is a single end-to-end walk through one design decision, from problem identification to test results. Every team member should have one of these memorised.
Guided practice
The 30-second pitch (template)
"Hi, I'm [name] from [team number]. We are a [year level] team from [school]. This season we focused on [one to two subsystem priorities] and our primary award target is [Innovate / Design / Excellence / etc.]. The core of our engineering process this year was [one-sentence summary]. We would love to walk you through our notebook."
Time yourself. It should be under 30 seconds. If it runs over, cut the least-specific sentence.
The 2-minute design story (template)
Pick one subsystem with a complete design-process spine in your notebook. Walk the judges through it end-to-end, using page numbers. Every sentence points at a page. Practise this until it is natural. Approximately 100–120 seconds of speech.
"Tell me about a failure" answer (template)
The hardest question judges ask. Teams that rehearse for it do well. Teams that do not rehearse say "nothing, it all worked" and lose the rubric credit the question was offering. The right answer is a specific, dated, cross-referenced failure from your notebook, with a one-sentence lesson learnt. Name the failure, cite the page, state the lesson, and say what changed as a result.
"What would you do differently?" answer (template)
Similar to the failure question but forward-looking. Pick one genuine thing and be specific. Close with what you would change for next season.
Practice protocol
- One team member plays interviewer. Use the four rehearsed prompts above plus three questions from the question bank below.
- Interviewed member has the notebook open. Every answer must reference at least one page number.
- Interviewer times each answer. Anything over 3 minutes is too long.
- After the interview, both critique: which answers landed, which were vague, which page references were wrong.
- Swap roles. Re-run.
- Repeat until every team member has practised with at least two different interviewers.
Judge question bank
- "Walk us through your robot."
- "Can you tell us about one design decision and why you made it?"
- "How did your team decide on [subsystem]?"
- "What is the most novel part of your design?"
- "Tell us about a failure or something that did not work."
- "What would you do differently if you started over?"
- "How does your team work together? What are the roles?"
- "What does your autonomous routine do?"
- "How do you test your robot?"
- "What did you learn from your last competition?"
- "How does the notebook help your team?"
- "If we wanted to find [specific thing] in your notebook, where would it be?"
Every one of these is a page-reference question. Treat them as such.
Independent exercise
Block 60 minutes with at least one teammate. Run the practice protocol at least twice, with each person playing interviewer once and interviewee once. Use the full question bank. Time every answer. After the practice, each team member identifies one weak spot and writes a one-sentence improvement plan. Add a "rehearsal log" page to the notebook listing who practised, when, and what they are working on.
Common pitfalls
- Answering "tell me about a failure" with "nothing, it all worked." Unscored, and slightly insulting to the judges who asked in good faith.
- Not referencing page numbers in answers. Answers without page references force the judges to trust your memory; answers with page references let them verify your claims.
- Memorising one story and delivering it regardless of the question asked. Judges notice immediately.
- Having only one team member who can talk about the notebook. Every member should be able to carry at least a two-minute segment of the interview.
- Skipping practice because "we'll know what to say on the day." Interview quality at events is almost perfectly correlated with how much the team practised.
Where this points next
This is the final section of the final chapter. The notebook strand, running from N1.1 through N5.4, has now covered every one of the ten rubric criteria and every piece of judging-ready preparation. The next time a judge opens your notebook, every page they find should be one you deliberately prepared them to find.
📐 Reflection prompt (notebook-ready)
- Can every team member deliver the 30-second pitch without notes?
- Does every team member have at least one 2-minute design story memorised with correct page references?
- Does every team member have a prepared answer to "tell me about a failure" that references a real notebook page?
- Has the team practised the full protocol at least once per month in season?
You have completed the curriculum. All six chapters — Foundations, Coding, Building & Engineering, CAD, Project Management, and Engineering Notebook — are done. Take what you have learnt and go win.