Chapter V of VI · Agile for robotics

Project Management

This chapter runs first. Before you touch a line of PROS code, open Onshape, or pick up a single C-channel, read PM.1 through PM.4 and set up your Problem Identification Log. The technical strands in Chapters II, III, and IV assume you have a working PIL, a sprint cadence, and named roles before the first build session. Expect roughly ten hours end-to-end for all twelve sections; the first four can be done in a single afternoon.

Why robotics needs project management

Start here. Read this aloud with the whole team on day one.

~30 min

Objective

Convince every member of your team that thirty minutes a day of explicit process buys back ten hours a week of wasted work, and set the expectation that this season is run like an engineering project, not a hobby.

Concept

A competitive VRC season is a year-long software project with a physical product attached to it. It has more moving parts than any single student can hold in their head: a drivetrain, a scoring mechanism, sensors, autonomous routines, driver skills, a notebook, a budget, a competition calendar, and four-to-eight humans with conflicting schedules. The teams that punch above their weight — the ones with the ranked skills runs and the judging awards — do not do so because one student is a genius builder. They do it because they run real process. They are the teams that do not burn out, ship broken robots to events, or rebuild the same subsystem four times.

Without process, predictable failures compound. The programmer tunes a lateral PID on a drivetrain that the builder has not squared yet, and spends two sessions chasing ghosts. The builder redesigns the intake on Tuesday while the CAD still shows Monday's version, so the mounting holes do not line up. The notebook writer is three weeks behind reality and has to reconstruct decisions from memory on the flight to a regional event. Two team members independently “fix” the same problem in incompatible ways. Someone orders a part that was already on order. These are not character flaws. They are the default outcome when five smart students work in parallel on a shared artefact without an agreed cadence.

📐 Engineering tip. If the last sentence made you think “that is exactly what happened to us last year,” you are in the right chapter. The frameworks here are cheap — a stand-up costs ten minutes, a sprint planning meeting costs an hour, a PIL entry costs ninety seconds. The alternative — uncoordinated work, forgotten decisions, undocumented failures — costs entire sessions and occasionally entire iterations of the robot.

Agile project management — sprints, stand-ups, retrospectives, explicit problem tracking — is the connective tissue that prevents these failures. It is not paperwork. It is the minimum coordination needed to let technical work actually compound.

The pitch for this strand, in one line: thirty minutes of process a day buys back ten hours of wasted work a week. A team that adopts the PIL, a weekly sprint, and a daily ten-minute stand-up will out-ship a team with twice the raw talent and no process. That is the deal. The rest of this chapter teaches the rituals and artefacts that make it real.

⚡ Competition tip. Judging panels do not award the Innovate or Design awards to teams that “just built a good robot.” They award them to teams that can show a documented, reflective engineering process. Every framework in this chapter produces a notebook artefact. Process is not overhead — it is evidence.

Guided practice

Read this short case aloud together as a team.

A team enters week six of the season. The lead builder has rebuilt the intake twice. The programmer has tuned the drivetrain PID three times because the wheelbase geometry kept changing underneath them. The notebook writer has four pages of “TBD — ask Jamie what we decided.” At a scrimmage they realise the autonomous routine in the event code does not match the robot on the field, because someone pulled an old branch to the brain Tuesday night. Nobody remembers why.

None of these failures came from lack of skill. Every one of them came from a missing handoff: builder to programmer, CAD to builder, whoever-decided-the-thing to whoever-wrote-it-down, latest-code to the brain. The robot is fine. The team ran out of time.

As a team, list the last three things your robot did badly at a practice session. For each one, ask: was the root cause a skill problem, or a coordination problem? Be honest.

Independent exercise

Each team member, in five minutes of silent writing, completes this sentence three times: “Last season, we lost time because nobody _____.” Then share around the circle. Do not debate. The list you produce is your motivation for every other section in this chapter — tape it to the wall where the team meets.

Common pitfalls

  • Treating process as something the notebook writer does alone. Process is a team sport; every role participates.
  • Adopting every framework at once and exhausting the team in week two. Start with PM.4 (the PIL) and PM.7 (stand-ups). Add the rest as bandwidth allows.
  • Letting process become theatre — stand-ups that nobody speaks at, retros that nobody acts on. A ritual you do not honour is worse than none.
  • Assuming “we already know each other” is a substitute for written coordination. It is not. Memory is not an artefact.
  • Picking tools before picking rituals. The ritual is the thing; the tool serves it.

Where this points next

PM.2 introduces the sprint as the container that holds every other PM practice.

📐 Reflection prompt (notebook-ready)

  • Open your engineering notebook to the Team Identity section. Write one paragraph titled Why we run process this season that names one specific thing that broke last year because of missing coordination, and one specific ritual from this chapter you intend to adopt in week one.
  • This paragraph is judge-facing — it shows that your team reflects on its own practice.

Next up: PM.2 — Agile, sprints, and design cycles.

Agile, sprints, and design cycles

Start here alongside PM.1. Three interlocking containers of time that make every task bounded instead of infinite.

~45 min Prereqs: PM.1

Objective

Install three interlocking containers of time — the sprint, the design cycle, and the iteration — so that every technical task on the team happens inside a known cadence instead of floating in an unbounded week.

Concept

Agile is the philosophy of shipping small working things often and adapting based on what you learn. The opposite of Agile is “we will build the whole robot, then test it at the end,” which is how teams arrive at a competition with a robot they have never actually run a full match on. In practice, Agile on a VRC team means three nested time containers: the sprint, the design cycle, and the iteration.

A sprint is a fixed-length block of time — usually two weeks during build season, three weeks during pre-competition tuning, one week during event prep — with a defined goal, a set of committed tasks, and a working demonstration at the end. “Defined goal” is the part that matters. A sprint goal is not “work on the robot.” A sprint goal is “land a squared drivetrain that holds a straight-line PID tune within two inches over six feet.” At the end of the sprint, you either did that or you did not, and the team is honest about which.

A design cycle is the inner loop that lives inside every sprint and inside every subsystem decision: identify the problem, brainstorm options, decide using a decision matrix, build or programme or model the chosen option, test it against the success criteria, reflect and feed the result back into the Problem Identification Log. This is the same loop the judging rubric expects to see in the engineering notebook, and it is the same loop real engineering teams run. A sprint typically contains several design cycles running in parallel on different subsystems.

An iteration is a numbered version of the whole robot. Iteration 1 is your first competitive build. Iteration 2 is the rebuild you did after the first scrimmage told you the intake geometry was wrong. Iterations are big — they contain multiple sprints — and they are the unit at which the team makes structural decisions about the robot architecture. Over a season, a strong team will typically run four to five iterations. Naming them explicitly (“we are in Iteration 3”) is what keeps the team from endlessly tinkering without acknowledging that a rebuild is happening.

📐 Engineering tip. The failure mode this structure prevents is unbounded work. Without a sprint, a task “takes as long as it takes,” which in practice means it takes until a panic deadline. Without a design cycle, decisions happen by whoever grabs the keyboard first. Without named iterations, the team pretends small changes are small when they are actually a rebuild. Putting explicit time containers around work forces honesty about how much is really getting done.

Guided practice

A sample sprint definition for a mid-season team. Copy this shape for your own first sprint.

Sample sprint definition — Sprint 3
  • Dates: 2 weeks, Mon week 7 through Fri week 8
  • Goal: Demonstrate a squared drivetrain that holds a straight-line PID tune, and a working intake v2 prototype that resolves the dual-element stall from the PIL
  • Demo at sprint end: A recorded skills run showing (a) six-foot straight-line drive within two inches of target on three consecutive runs, and (b) ten consecutive dual-element intake cycles with no stalls
  • Design cycles: Cycle A: drivetrain squaring — identify → measure current deviation → decide on bracing change → rebuild → retest. Cycle B: intake redesign — PIL entry → brainstorm three geometries → decision matrix → CAD → prototype → test
  • Iteration: Still Iteration 2 (tuning the existing architecture, not a new robot)
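If your team tracks sprints digitally, the sprint definition above is structured data and can be captured as a record. A minimal sketch — the `Sprint` class and its field names are illustrative, not part of any official tool:

```python
from dataclasses import dataclass, field

@dataclass
class Sprint:
    number: int
    weeks: int                  # fixed length; a sprint that runs long is a missed sprint
    goal: str                   # demonstrable, not aspirational
    demo: str                   # the one-sentence demo shown at sprint end
    iteration: int              # which numbered robot version this sprint belongs to
    design_cycles: list = field(default_factory=list)

# The sample Sprint 3 from the table above, condensed.
sprint3 = Sprint(
    number=3,
    weeks=2,
    goal="Squared drivetrain holding straight-line PID within 2 in over 6 ft; "
         "intake v2 resolves the dual-element stall",
    demo="Recorded skills run: 3 consecutive 6 ft drives within 2 in of target, "
         "10 stall-free dual-element intake cycles",
    iteration=2,
    design_cycles=["Cycle A: drivetrain squaring", "Cycle B: intake redesign"],
)
```

The point of the record is the same as the point of the table: if you cannot fill in `demo` in one sentence, the goal is too vague.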

Independent exercise

Draft a sprint definition for your team's next two weeks using the shape above. Pick exactly one primary goal. Write down the demo you will show at the end. If you cannot describe the demo in one sentence, your goal is too vague — tighten it. Decide whether you are in a new iteration or continuing the current one, and name it.

Common pitfalls

  • Sprint goals that are aspirational rather than demonstrable. “Improve the intake” is not a goal; “ten consecutive dual-element cycles with no stalls” is.
  • Letting sprint length drift. A two-week sprint that runs three weeks is not a long sprint — it is a missed sprint. Call the miss, retrospect, reset.
  • Treating the design cycle as optional for “small” changes. Small changes are how undocumented decisions pile up.
  • Refusing to name iterations because “it is still the same robot.” If you replaced the drivetrain and the intake, it is not.
  • Planning sprints in isolation from the Problem Identification Log. The PIL is where sprint content comes from.

Where this points next

PM.3 defines the roles that own each strand of work inside a sprint.

📐 Reflection prompt (notebook-ready)

  • In the notebook's design process section, create a Sprint Log subsection. For each sprint this season, record: dates, goal, demo result (honest — shipped, partial, or missed), and one line on what you learned.
  • This Sprint Log is a judging artefact on its own because it shows evidence of iterative process over time.

Next up: PM.3 — Team roles and responsibilities.

Team roles and responsibilities

Start here alongside PM.1 and PM.2. Assign named roles so every decision has exactly one owner.

~45 min Prereqs: PM.1, PM.2

Objective

Assign explicit named roles to every member of the team so that for every decision, task, and artefact on the robot, exactly one person is accountable for driving it — and so every other team member knows who to ask.

Concept

A team without named roles defaults to “whoever is free picks it up,” which sounds generous and is in practice chaos. The programmer starts a build fix because the builder is busy, gets it half done, leaves it, and the builder comes back to a half-modified subsystem with no context. Two team members independently own the auton route because nobody was the owner. The notebook is nobody's first job, so it is nobody's job. Unowned work quietly becomes everyone's resentment.

Named roles prevent this. A role is not a hierarchy and it is not a box you hide inside. It is one person's name attached to a set of deliverables, so that when something in that area breaks, the team knows who is driving the fix and every other member knows whose door to knock on. Roles are explicit, visible (written on a wall, not remembered from week one), and reviewed at sprint boundaries.

📐 Engineering tip. Roles do not mean no cross-training. In fact, the opposite: every role should learn enough of every other role to ask useful questions. The lead programmer should be able to square a drivetrain. The lead builder should be able to read a PID trace. The notebooker should understand odometry well enough to know what question to ask when the build log is vague. Cross-training is what makes a role a concentration, not a silo.

On teams of fewer than six students, one person wears multiple hats. That is fine, but it must be explicit: when the captain is making a build decision, they say “wearing my builder hat” so the rest of the team knows which perspective they are speaking from. The hat is stated out loud and written in the stand-up notes.

Guided practice

The standard role set, with responsibilities and a representative day. Copy this as a role card set for your team; change the names, keep the structure.

Standard role cards
Team captain / project lead
  • Owns: Sprint goal, PM strand, PIL triage, cross-strand dependency map, escalation point for blockers
  • Representative Tuesday: Stand-up at start, floats between strand leads to unblock, facilitates the afternoon design review, closes the day with a ten-minute wrap and PIL update

Lead programmer
  • Owns: Code quality, build system, competition auton, driver control code; signs off on any code merge to the event branch
  • Representative Tuesday: PID tuning on a rebuilt drivetrain, logs telemetry to the notebook, updates the odometry calibration, reviews the junior programmer's PR

Lead builder
  • Owns: Structural integrity, subsystem assembly, friction audits, hardware bill of materials; signs off on any physical change to the robot
  • Representative Tuesday: Rebuilds the front intake per yesterday's CAD, runs a friction audit, updates the build log with bolt sizes and torques, flags a part order for Wednesday

Lead CAD
  • Owns: Master assembly, subsystem CAD, CAD-to-build handoffs; keeps the CAD in sync with the real robot
  • Representative Tuesday: Updates the master assembly to match yesterday's rebuild, exports a drawing for Thursday's part fabrication, reviews the builder's proposed geometry change

Notebooker / documentation lead
  • Owns: Engineering notebook, decision matrices in the notebook, judging materials; attends stand-ups and design reviews
  • Representative Tuesday: Writes up Monday's PID tuning result, drafts a decision matrix page from yesterday's brainstorm, reviews the build log for gaps

Driver / field lead
  • Owns: Driver practice, field setup, pre-match checklist, feedback into the PIL from match observation
  • Representative Tuesday: Twenty minutes of driver practice, logs three PIL entries from things that felt wrong, walks the team through match video

Shared responsibilities — every role does these:

  • Log PIL entries for anything weird observed on the robot, immediately, not later.
  • Attend the daily stand-up.
  • Close the session with a ten-minute wrap (notebook + PIL).
  • Read the cross-strand dependency map and know who they block and who blocks them.

Independent exercise

As a team, write a role card for each member: name, primary role, secondary role if any, one-sentence “what I own” statement, and one line on who to escalate to when blocked. Post the cards on the wall of your team meeting space. If two roles are claimed by the same person, write “hat one / hat two” and agree on how they will announce which hat they are wearing in stand-ups.

Common pitfalls

  • Roles as titles instead of responsibilities. “Lead programmer” on a badge is theatre. The test is whether, when the code breaks, the team knows exactly one person who owns the fix.
  • Unassigned work. Anything that is “everyone's job” is nobody's job. If a deliverable does not have a name next to it, it does not happen.
  • Silos. The opposite failure: the lead programmer refuses to touch anything that is not code. Cross-training is non-negotiable.
  • Hat-switching without announcing it. On small teams, one person wearing two hats must state the hat every time.
  • Never revisiting roles. Roles change as the season progresses and as students grow. Review them at every sprint retrospective.

Where this points next

PM.4 introduces the Problem Identification Log — the shared artefact that sits above all the roles and makes sure observations from any role get captured and triaged by the team captain.

📐 Reflection prompt (notebook-ready)

  • In the notebook's Team Identity section, paste the role cards and add a short paragraph titled How we divide the work that names each role, its owner, and one example of a decision that role makes unilaterally without needing a team vote.
  • This is judging evidence that the team runs as an organisation, not as a free-for-all.

Next up: PM.4 — The Problem Identification Log.

The Problem Identification Log

Start here — Chapter I sends you to this section before any technical work. Set up the single highest-leverage artefact in the curriculum.

~60 min Prereqs: PM.1, PM.2

Objective

Stand up a shared Problem Identification Log that forces every observed robot problem through a five-column discipline — symptom, design challenge, constraints, goals, measurable success criteria — before anyone is allowed to start fixing it.

Concept

The Problem Identification Log, or PIL, is the single highest-leverage artefact in this strand. It is one shared document — a spreadsheet, a database, a structured issue tracker — where every problem observed on the robot, in practice, in a match, in a CAD review, anywhere, gets logged as a row. Before the team works on a problem, the row is filled out in five columns: Understanding the Problem, Design Challenge, Constraints, Goals, and Success Criteria. Only then does it become a sprint task with an owner.

This discipline sounds like paperwork. It is not paperwork. It is the forcing function that prevents the most expensive failure mode in a competitive season: starting work on a problem that nobody has actually defined. A programmer “fixes” intake stalling by lowering the motor cap, but the real issue was a geometry problem that put the second element in the wrong channel — and the lowered cap now means the intake cannot grab a single element at high speed either. A builder “solves” a chassis flex problem by adding a crossbar that now fouls the lift. Every one of these failures shows up in teams that skip problem definition. The PIL exists so it cannot be skipped.

📐 Engineering tip. The five columns are not arbitrary. Each one kills a specific failure mode. Understanding the Problem forces a concrete symptom instead of a vibe. Design Challenge reframes the symptom as a “How might we…” question that stays solution-open. Constraints surfaces the rule book, motor count, size box, and interactions with other subsystems before anyone spends four hours on a non-starter. Goals names what “better” actually means this sprint. Success Criteria commits to a measurable test, so at the end of the work the team knows whether it worked — not whether it feels better.

The workflow is simple and unforgiving. Something weird happens at practice — log it in the PIL immediately, even if the entry is only one column filled in. At the next stand-up or sprint planning, the team triages open entries: which do we fix this sprint, which do we defer, which do we close as won't-fix. For every entry that gets picked up, the five columns must be completed before the task becomes actionable. When the work is done and the success criterion is measured, the entry is marked resolved with a one-line note. The PIL is reviewed weekly in team meetings and referenced during design reviews. It is also the primary evidence a judging panel looks for when asking “how does your team make decisions?”

⚡ Competition tip. The PIL is not a duplicate of the notebook — it is the upstream source the notebook is written from. The Understanding column becomes the problem-statement paragraph in a notebook entry. The Constraints column becomes the design-requirements bullet list. The Success Criteria column becomes the test-plan section. Write the PIL first; narrate it into the notebook second.

Tool-wise, the PIL is tool-agnostic. A shared spreadsheet works. A database works. A structured issue tracker with labelled templates works. A task board with structured checklists on every card works. Pick one, train the team, do not switch tools mid-season. What matters is that every team member can open the PIL from their device in under ten seconds and log an entry before they forget.
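If your team takes the plain-spreadsheet route, the skeleton takes a few lines to generate. A minimal sketch, assuming Python is available; the file name `pil.csv` and the seed row are arbitrary illustrations, not part of the PIL format itself:

```python
import csv

# The five discipline columns from this section, plus the bookkeeping
# fields used in the worked example below.
COLUMNS = [
    "Entry ID", "Date opened", "Logged by", "Owner", "Status", "Sprint",
    "Understanding the Problem", "Design Challenge",
    "Constraints", "Goals", "Success Criteria",
]

def create_pil(path="pil.csv"):
    """Write an empty PIL with one half-filled seed row — which the
    workflow explicitly allows: log the symptom immediately, finish the
    remaining columns at triage."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerow([
            "PIL-001", "Tue wk 6", "Sam", "", "Open", "",
            "Intake stalled 6/10 on dual-element pickups at full drive speed",
            "", "", "", "",
        ])

create_pil()
```

Upload the resulting file to whatever shared drive the team already uses; the ten-second-access rule matters more than the tooling.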

Guided practice

Below is a complete worked PIL entry for a realistic mid-season problem. This is the shape every entry on your team's PIL should take. Copy it as a template.

Worked PIL entry — PIL-014
  • Entry ID: PIL-014
  • Date opened: Tue, week 6
  • Logged by: Sam (lead builder)
  • Owner: Sam (build) + Riley (code)
  • Status: In progress
  • Sprint: Sprint 3

Understanding the Problem. During Monday's practice session, the intake stalled on 6 of 10 attempts when the drivetrain ran toward a pair of elements at full forward speed. Stalls occurred only when two elements entered the intake mouth within roughly 150 ms of each other. Single-element pickups from the same approach succeeded 10 of 10 times. Stall signature in the motor telemetry showed a hard current spike at the moment the second element contacted the upper roller, followed by a stalled-at-zero-velocity trace for 300–500 ms before the over-current protection cut power. The stall does not happen at low drive speed, and it does not happen when the second element is delayed by at least half a second. Video review shows the second element wedging between the upper roller and the intake side-plate rather than riding onto the conveyor.

Design Challenge. How might we redesign the intake path so that two elements arriving simultaneously at full drive speed are both accepted onto the conveyor without wedging or stalling, while preserving single-element pickup performance and without adding a motor?

Constraints.

  • Must fit inside the starting-size envelope with the intake in its deployed position.
  • Intake remains limited to a single motor; no additional motor budget is available this sprint.
  • Must not interfere with the scoring mechanism's travel path on deploy.
  • Upper roller axle cannot move inboard by more than 6 mm without fouling the drivetrain crossbar added in Iteration 2.
  • The side-plate geometry is shared with the CAD master assembly; any change must be reflected there before the build happens (Strand 3 dependency).
  • Rule compliance: any added compliant surface must remain legal under current robot-construction rules. Check the rule reference before ordering parts.
  • The fix must not regress the single-element pickup success rate below its current 10/10 baseline.

Goals.

  • Short-term (this sprint): eliminate the dual-element stall at full drive speed so that driver-controlled matches stop losing 3–5 seconds per stall recovery.
  • Medium-term (next sprint): instrument the intake current trace and publish a simple current-vs-time chart for the notebook so the team can spot regression automatically on future rebuilds.
  • Long-term (iteration 3 scope): evaluate whether a redesigned intake geometry frees the side-plate for a lighter bracket, saving weight on the lift subsystem.

Success Criteria.

  • Measurable: 18 of 20 consecutive dual-element pickups at full drive speed succeed without a stall. A success is defined as both elements reaching the conveyor within 400 ms of intake contact, with no over-current cut and no manual reset.
  • Regression guard: 20 of 20 single-element pickups still succeed from the same approach, at the same speed.
  • Telemetry evidence: a logged current-vs-time trace for at least five of the dual-element test runs, attached to this PIL entry, showing no sustained stall region above the over-current threshold.
  • Qualitative: driver confirms in a post-test ride that the intake “feels the same or better” on single elements.
  • Hard stop: if two full build sessions pass without reaching 18/20, reopen this entry and re-triage — do not keep iterating silently.
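The measurable criteria above can be checked mechanically rather than by feel. A minimal sketch of the two countable criteria from PIL-014 — the function name and the boolean-trial representation are illustrative assumptions, not part of the PIL format:

```python
def evaluate_pil_014(dual_trials, single_trials):
    """Check PIL-014's two countable success criteria.

    dual_trials:   booleans for dual-element pickup attempts (True = success;
                   both elements reached the conveyor, no over-current cut).
    single_trials: booleans for single-element regression runs.
    Passes only if 18/20 dual attempts succeed AND 20/20 singles still succeed.
    """
    dual_ok = len(dual_trials) >= 20 and sum(dual_trials) >= 18
    regression_ok = len(single_trials) >= 20 and all(single_trials)
    return dual_ok and regression_ok

# Example: 19/20 dual successes, 20/20 single successes -> criteria met.
dual = [True] * 19 + [False]
single = [True] * 20
print(evaluate_pil_014(dual, single))  # True
```

The telemetry and qualitative criteria still need a human; the point of encoding the countable ones is that nobody can argue the intake “feels fixed” at 15/20.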
🖼 images/05-pil-spreadsheet.png PIL spreadsheet with five columns and colour-coded status labels

🖼 Image brief

  • Alt: Screenshot of a PIL spreadsheet with five named columns (Understanding the Problem, Design Challenge, Constraints, Goals, Success Criteria), one row filled in, and colour-coded status labels.
  • Source: Take a screenshot of your team's actual PIL once it has its first entry.
  • Caption: The PIL is the upstream source every sprint task and every notebook entry is written from.

Independent exercise

Open whichever tool your team has committed to. Create the PIL with the five columns above plus Status, Owner, Sprint, Date opened, and Entry ID. Pin it somewhere every team member can reach it from their device in under ten seconds. Then, as a team, pick one real problem your robot had in the last practice session and write its PIL entry end-to-end, following the worked example above. Do not start fixing the problem. The exercise is the entry, not the fix. When you have filled every column — including a measurable success criterion and a hard stop — the exercise is complete.

Common pitfalls

  • Vague symptom lines. “Intake is bad” or “drivetrain feels off” are not PIL entries. If the Understanding column cannot name a specific observable symptom with conditions, the entry is not ready.
  • Skipping the design challenge reframe. Going straight from symptom to solution (“we need a new intake”) locks the team into the first idea. The “How might we…” question keeps options open until the decision matrix (PM.5) narrows them.
  • Success criteria that are not measurable. “Works better” is not success. A number, a ratio, or a pass/fail test with a specific condition is success.
  • Filling in the PIL after the work is done. The whole point is that the entry exists before the work, so the work is bounded. Retrofitted entries are theatre.
  • Letting the PIL become one person's job. Every team member logs their own observations. A PIL that only the notebook writer touches is already dead.
  • No hard stop. An entry without a “stop and re-triage if X” clause quietly consumes sprint after sprint.

Where this points next

PM.5 introduces the decision matrix — the structured way to pick between the options a completed PIL entry lets you brainstorm.

📐 Reflection prompt (notebook-ready)

  • Your completed PIL entry is the notebook artefact. In your notebook's Problem Identification section, paste the Understanding, Design Challenge, Constraints, Goals, and Success Criteria columns directly, reworded into the notebook's formal voice.
  • The PIL is the source of truth; the notebook is the narrated version. Cross-link them by entry ID — for example, write “See PIL-014” in the notebook and “See notebook p. 23” in the PIL's resolution notes.

Next up: PM.5 — Decision matrices.

Decision matrices

The structured step between brainstorming and building, so every non-trivial design choice is scored, not argued.

~45 min Prereqs: PM.4

Objective

Adopt a weighted decision matrix as the required step between brainstorming and building, so that every non-trivial design choice is made against explicit criteria instead of whoever argued loudest.

Concept

A completed PIL entry ends with a “How might we…” design challenge. Brainstorming then produces three to five candidate solutions. The question is how to pick one. Teams without a process pick by volume — the student with the strongest opinion, or the first idea sketched, wins. Teams with a process score the candidates against explicit criteria, weight the criteria by importance, and let the numbers carry the decision. That is a decision matrix.

The failure mode this prevents is subtle and common. A team brainstorms three intake geometries. One student has already built a half-version of option B at home and advocates hard for it. The team “decides” on option B in ten minutes of conversation, nobody writes down why, and two weeks later — when option B fails its success criterion — there is no record of what options A and C were, what trade-offs were considered, or whether the failure mode that killed B was something the team had already flagged as a risk. A decision matrix produces a dated artefact that says “here are the options, here are the criteria, here is the score, here is the decision, here is the reason.” When B fails, the team can go back to the matrix and pick up A or C without re-running the entire brainstorm.

📐 Engineering tip. The criteria are not optional and not up to taste: every matrix on your team uses the same six-criterion starter set, so matrices are comparable across subsystems and across iterations. Weights change per decision. A drivetrain rebuild weights reliability and buildability heavily; a scoring mechanism might weight strategic value and innovation. What does not change is that rule compliance is a hard-gate: any option scoring 1 on rule compliance is disqualified before weighted totals are tallied, even if its weighted score would have won.

Required criteria, always present:

  • Effectiveness — how well does this option actually solve the problem stated in the PIL entry?
  • Simplicity / reliability — how many failure points does it introduce? How hard is it to diagnose when it breaks?
  • Buildability — can this team build this option with the parts, tools, and skills on hand, this sprint?
  • Rule compliance — does it pass the current rule book, size box, motor count, and construction rules?
  • Strategic value — does it align with the team's season strategy, not just this subsystem in isolation?
  • Innovation or advantage — does it create a differentiator, or is it a me-too copy of what every team at the event will have?
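The scoring arithmetic, including the rule-compliance hard gate, is simple enough to sketch in code. This is a minimal illustration; the function name and the small two-option example at the bottom are made up for demonstration:

```python
def score_matrix(weights, options):
    """Weighted decision matrix with a hard gate.

    weights: dict of criterion -> numeric weight, or the string "gate"
             for hard-gate criteria like rule compliance.
    options: dict of option name -> dict of criterion -> score
             (gated criteria score "pass" or "fail").
    Returns option -> weighted total; gate failures come back as None,
    disqualified before totals are tallied.
    """
    totals = {}
    for name, scores in options.items():
        total = 0
        disqualified = False
        for criterion, weight in weights.items():
            if weight == "gate":
                if scores[criterion] != "pass":
                    disqualified = True  # out regardless of weighted score
            else:
                total += weight * scores[criterion]
        totals[name] = None if disqualified else total
    return totals

# Tiny illustrative example: B would win on raw effectiveness,
# but fails the rule-compliance gate and is disqualified.
weights = {"Effectiveness": 5, "Simplicity": 4, "Rule compliance": "gate"}
options = {
    "A": {"Effectiveness": 3, "Simplicity": 4, "Rule compliance": "pass"},
    "B": {"Effectiveness": 5, "Simplicity": 2, "Rule compliance": "fail"},
}
print(score_matrix(weights, options))  # {'A': 31, 'B': None}
```

Agree the weights as a team before any cell is scored; the code only tallies what the conversation decided.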

Guided practice

A worked matrix for PIL-014 (the dual-element intake stall from PM.4). The team brainstormed three candidate geometries: (A) widen the intake mouth with a compliant guide, (B) add a passive upper deflector ramp, (C) re-geometry the side-plate to lift the upper roller axle 4 mm outboard.

Decision matrix — PIL-014 intake redesign
Criterion                  | Weight | A: wider mouth + guide | B: passive deflector | C: re-geometry side-plate
Effectiveness              | 5      | 3                      | 4                    | 5
Simplicity / reliability   | 4      | 4                      | 3                    | 2
Buildability (this sprint) | 4      | 5                      | 4                    | 2
Rule compliance            | gate   | pass                   | pass                 | pass
Strategic value            | 3      | 3                      | 4                    | 4
Innovation / advantage     | 2      | 2                      | 3                    | 4
Weighted total             |        | 64                     | 66                   | 61

Reading the matrix: option B wins on weighted total. Option A runs close behind and is actually the strongest on buildability. Option C has the highest pure effectiveness score but loses on buildability, because the side-plate change cascades into CAD master updates the team cannot land this sprint.

Decision: move forward with option B. Rationale: highest weighted total, acceptable buildability inside the current sprint, does not block option C for a future iteration if B proves insufficient. Document options A and C in the matrix record so they remain available as fallbacks.

Independent exercise

Take the PIL entry your team wrote in PM.4's exercise. Brainstorm three candidate solutions on a whiteboard — not polished, just sketches and a sentence each. Then fill out a matrix using the six criteria above. Agree on weights before you start scoring, not after. Score every cell as a team, not as individuals. Tally the weighted totals. The decision is whichever option wins, unless a rule-compliance gate disqualifies it. Write down the decision, the date, and the people present. Attach the matrix to the PIL entry.

Common pitfalls

  • Weighting after scoring. Teams unconsciously tune weights to make their favourite option win. Agree weights first, before a single score is written.
  • Scoring criteria the team does not understand. If nobody on the team can honestly score “buildability” for option C, the team is not ready to build it.
  • Matrices that always produce the same answer. If every matrix on your team says “go with the simplest option,” your weights are probably wrong — or your brainstorm is not producing real alternatives.
  • Treating the matrix as binding after the situation changes. A matrix is dated. If a constraint changes, re-run it.
  • Skipping the matrix “because it is obvious.” The cases where it feels obvious are the cases where the team is about to miss a constraint.

Where this points next

PM.6 puts the PIL and the decision matrix into the sprint planning meeting, where triage and commitment actually happen.

📐 Reflection prompt (notebook-ready)

  • The filled-in matrix is a direct notebook artefact. Paste the matrix into the notebook page alongside a one-paragraph rationale naming the winner, the runner-up, and the reason the winner won despite any lower individual scores.
  • Cross-link the matrix to its PIL entry by entry ID so the judging panel can trace the thread from problem to decision.

Next up: PM.6 — Sprint planning.

Sprint planning

A repeatable sixty-minute meeting that turns the PIL, decision matrices, and dependency map into committed sprint tasks.

~60 min Prereqs: PM.2, PM.4, PM.5

Objective

Run a repeatable sixty-minute sprint planning meeting at the start of every sprint that turns the PIL, the decision matrices, and the dependency map into a committed set of sprint tasks with owners, estimates, and a demo.

Concept

Sprint planning is the meeting where a team decides what to do and — more importantly — what not to do. The failure mode it prevents is over-commitment: a team that enters a sprint with thirty tasks and finishes ten is a team that told itself twenty lies on Monday morning. The fix is not to work harder. The fix is to commit smaller, honestly, and actually ship what you committed.

A good sprint planning meeting is short (sixty minutes), structured (same agenda every time), and physical (someone stands at a whiteboard or shares a sprint board). The inputs are known before the meeting starts: the current PIL, the cross-strand dependency map, last sprint's velocity, and any competition-driven constraints. The output is a sprint board with columns To Do / In Progress / Blocked / Done, populated with owned and estimated tasks, and a sprint goal written at the top.

📐 Engineering tip. The hardest skill in sprint planning is saying no. Every team wants to tackle every open PIL entry this sprint. Every team is wrong. Capacity is finite; velocity (PM.12) tells you what the honest ceiling is. The team captain's job in planning is to protect the sprint from aspirational over-commitment.
🖼 images/05-sprint-board.png Sprint board with four columns and sticky notes

🖼 Image brief

  • Alt: Photograph of a sprint board on a whiteboard with four columns (To Do, In Progress, Blocked, Done), sticky notes with task names and owner initials, and the sprint goal written at the top.
  • Source: Photograph your team's sprint board after the first sprint planning meeting.
  • Caption: The sprint board is the single source of truth for what the team is working on this sprint.

Guided practice

The sprint planning meeting agenda, sixty minutes, every sprint. Copy this. Do not deviate.

  1. Review the last sprint (5 minutes). One sentence: did we hit the demo, partially hit it, or miss it? Read off last sprint's velocity (number of tasks committed versus shipped). Do not retrospect — that happens separately in PM.9.
  2. State the sprint goal (5 minutes). One sentence, demo-able. If the goal does not describe a demo, tighten it.
  3. Triage the PIL (15 minutes). Walk the open entries. For each, one of four decisions: in scope for this sprint, deferred to next sprint, deferred indefinitely, or closed as won't-fix. Write the decision next to the entry. Do not debate the fix — only the triage.
  4. Break in-scope entries into tasks (15 minutes). For each entry pulled into the sprint, list the code tasks, build tasks, CAD tasks, and notebook tasks it generates. One task is one owner and one deliverable. Assign owners as you go.
  5. Estimate (5 minutes). Each task gets a t-shirt estimate: S (under two hours), M (half a day), L (a full session), XL (re-scope — break it down further). XL tasks are not allowed on the sprint board.
  6. Check the dependency map (5 minutes). For every task, confirm the upstream work from PM.10 is done. If it is not, the task moves to Blocked or gets re-sequenced.
  7. Commit to a realistic subset (5 minutes). Sum the estimates. Compare against last sprint's velocity. Cut tasks until the sum matches velocity. This is the hardest step; expect discomfort.
  8. Write the demo sentence (5 minutes). One sentence describing what will be shown at the end of the sprint. Tape it to the wall.

Independent exercise

Schedule your team's next sprint planning meeting for Monday of sprint start. Send the agenda above to every member twenty-four hours in advance. Run the meeting from the agenda without deviation. At the end, take a photo of the sprint board and the demo sentence and post it in the team chat so every absent member sees the commit.

Common pitfalls

  • Running planning without the PIL in front of you. The PIL is the input. Without it the meeting defaults to “what do we feel like doing this week.”
  • Over-committing because last sprint's velocity “does not count” for some reason. Velocity always counts. The reason your last sprint was slow is probably the reason this one will be too.
  • Skipping estimation because it feels fake. Bad estimates calibrate into good estimates over a few sprints. No estimates calibrate into panic.
  • Letting the meeting run past sixty minutes. A planning meeting that runs ninety is a planning meeting that will be skipped next sprint.
  • No demo sentence. Without the demo, there is no falsifiable claim that the sprint succeeded.

Where this points next

PM.7 introduces the stand-up — the daily ten-minute check that keeps the sprint board honest between planning meetings.

📐 Reflection prompt (notebook-ready)

  • Create a Sprint Plan page in the notebook process section for this sprint. Paste the sprint goal, the demo sentence, the list of committed tasks with owners, and the list of deferred entries with reasons.
  • At the end of the sprint, add a one-paragraph “how we did” note next to the plan — this pairing of plan and outcome is direct judging evidence.

Next up: PM.7 — Stand-ups and working sessions.

Stand-ups and working sessions

A ten-minute daily blocker-surfacing meeting and the two-hour session shape that wraps around it.

~30+ min Prereqs: PM.3, PM.6

Objective

Install a ten-minute daily stand-up that surfaces blockers fast enough to resolve them the same session, and a structured two-hour working session shape that keeps the notebook and the PIL caught up to reality.

Concept

A stand-up is ten minutes, standing (the standing is the point — it keeps the meeting short), at the start of every working session. Each team member answers three questions in under ninety seconds: what did I do since the last stand-up, what will I do before the next one, and what is blocking me. The meeting is not a status report to the captain. It is a blocker-surfacing mechanism for the team. The question that matters is the third one.

The failure mode this prevents is silent blocking. A builder can spend an entire two-hour session waiting on a CAD export that the CAD lead thought was not needed until Thursday. A programmer can spend a session debugging an IMU that the lead builder already knew was loose but never mentioned. A stand-up is ten minutes of preventing those two-hour holes. A team that does stand-ups every day loses fewer sessions to silent blocking than a team that does not — that is the entire ROI, and it is enormous.

⚡ Competition tip. A team that can show a Stand-up Log — one line per stand-up, with date, attendees, blockers raised, blockers resolved — for a month straight is a team that produces irrefutable evidence of daily process. Judging panels notice.
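The Stand-up Log described in the tip is one line per stand-up, so it pays to fix the line format once. A minimal sketch of an entry record and formatter (the field layout and method names are assumptions; adapt to whatever your team chat or notebook accepts):

```python
# Sketch: a one-line Stand-up Log entry — date, attendees, blockers raised,
# blockers resolved — as described in the competition tip. Field layout and
# names here are assumptions, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class StandupEntry:
    date: str                      # ISO date, e.g. "2024-11-05"
    attendees: list[str]
    blockers_raised: list[str] = field(default_factory=list)
    blockers_resolved: list[str] = field(default_factory=list)

    def line(self) -> str:
        """The single log line that goes in the notebook."""
        return (f"{self.date} | {len(self.attendees)} present | "
                f"raised: {len(self.blockers_raised)} | "
                f"resolved: {len(self.blockers_resolved)}")

    def unresolved(self) -> list[str]:
        """Blockers surfaced but not resolved by session end — stand-up failures."""
        return [b for b in self.blockers_raised if b not in self.blockers_resolved]

entry = StandupEntry(
    date="2024-11-05",
    attendees=["Jordan", "Sam", "Riley", "Alex", "Morgan", "Kai"],
    blockers_raised=["side-plate drawing", "bolt-torque spec"],
    blockers_resolved=["side-plate drawing"],
)
```

The `unresolved` check is the useful part: anything it returns at session close is a blocker that was surfaced and then ignored, which is the failure mode the stand-up exists to prevent.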

The working session around the stand-up has a shape too. Stand-up at the start, focused work for roughly ninety minutes, a fifteen-minute close where the notebook and PIL catch up to what actually happened, and a short hand-off note for the next session. The close is as non-optional as the stand-up. A session that ends without a notebook and PIL update is a session whose evidence is lost.

Guided practice

A sample ten-minute stand-up transcript for a mid-season team. Copy the shape. Notice how the captain redirects non-stand-up discussion.

Captain (Jordan): Stand-up, everyone up. Sam first.

Sam (build): Yesterday I finished the front intake rebuild per the Monday CAD. Today I am running a friction audit on the drivetrain and reinstalling the upper roller bearing that came loose in Friday's practice. Blocker: I need the updated side-plate drawing from Alex by noon or I cannot start the friction audit.

Captain: Alex, noted. Riley?

Riley (code): Yesterday I ran three PID tunes on the Monday wheelbase and logged the traces. Today I am refactoring the auton route to use the new motion profile. No blockers — the PIL entry for the old route is already scoped.

Captain: Alex?

Alex (CAD): Yesterday I updated the master assembly and the intake v2 subassembly. Today I am exporting the side-plate drawing Sam needs and then starting the lift bracket revision. Sam, you will have it by eleven. I also want to talk about the bracket redesign — I think the current approach will not —

Captain: Take it offline. Park it. You and Jordan after the stand-up. Morgan?

Morgan (notebook): Yesterday I wrote up Monday's PID tuning session and drafted the decision matrix page for the intake v2 options. Today I am pulling data from Riley's logs into a tuning chart, and reviewing the build logs for gaps. Blocker: Sam, I need the bolt-torque spec for the front intake — I could not find it in Friday's log.

Kai (driver): Yesterday I ran twenty minutes of driver practice and logged two PIL entries from things that felt off. Today I am running another practice block after lunch and writing up the pre-match checklist draft. No blockers.

Captain: Two blockers logged — Alex gets the drawing to Sam by eleven, Sam sends Morgan the torque spec. Alex and I on the bracket redesign right after this. Stand-up done, back to work.

Elapsed time: under eight minutes. Every blocker identified. One “take it offline” redirect. No status theatre.

Session shape around the stand-up
Time | Activity
0:00 | Stand-up (10 minutes)
0:10 | Focused work (90 minutes)
1:40 | Pre-close wrap (15 minutes) — notebook entries written, PIL entries logged, tomorrow's opener noted
1:55 | Hand-off note posted to the team chat

Independent exercise

Run a stand-up at the start of your team's next working session, following the transcript shape above. Time it — the timer is the tool that keeps the meeting honest. At the end, write down every blocker that was surfaced. After the session closes, check: did each blocker get resolved, or explicitly parked? Any blocker that was surfaced and then ignored is a stand-up failure. Retrospect on why.

Common pitfalls

  • Stand-ups that become status reports. The captain asking “what did you do” is fine; the third question “what is blocking you” is the one that earns the meeting's cost.
  • Sitting down. Once the team sits down, the stand-up doubles in length. Stand up.
  • Letting debate into the meeting. Park it. “Take it offline” is a skill.
  • Skipping the session close. A session that ends without the fifteen-minute wrap is a session whose evidence evaporates.
  • No stand-up on “quick” sessions. There is no such thing as a quick session. The ten minutes a stand-up costs are always earned back before the session ends; the ten minutes saved by skipping it are not.

Where this points next

PM.8 puts a structured design review at the sprint boundary, so the decisions a sprint produces are reviewed before they commit.

📐 Reflection prompt (notebook-ready)

  • In the notebook process section, keep a Stand-up Log page with one line per stand-up: date, attendees, blockers raised, blockers resolved by end of session.
  • A team that does this for a month produces irrefutable evidence of daily process.

Next up: PM.8 — Design reviews.

Design reviews

A repeatable meeting at the end of each sprint where every non-trivial decision is presented, challenged, and signed off before metal is cut.

~45+ min Prereqs: PM.4, PM.5

Objective

Run a repeatable design review meeting at the end of each sprint so that every non-trivial build or redesign decision is presented, challenged, and signed off before metal is cut or code is merged.

Concept

A design review is the checkpoint between “we decided to build this” and “we are building this.” It is a scheduled meeting — internal to the team, peer-to-peer with another team at your organisation, or mentor-led with a coach — at which a student presents the problem, the options considered, the decision matrix, and the proposed solution, and the room gets one chance to push back before the work commits. The outcome is either sign-off to build, sign-off with changes, or a re-scope back to the PIL.

The failure mode this prevents is the silent expensive mistake: a subsystem that gets built, tested, and only then revealed to be incompatible with a constraint someone else on the team already knew about. The CAD lead did not know the programmer was planning to add a third tracking wheel. The driver did not know the intake redesign would block their quick-deploy. The mentor would have spotted in thirty seconds that the proposed mechanism violates a construction rule added in the latest rule revision. A design review is the thirty minutes where all of those checks happen, on purpose, before the cost gets baked in.

📐 Engineering tip. There are three flavours, and a serious team runs all three over a season. Internal reviews happen weekly or bi-weekly inside your own team at the end of a sprint — fast, informal, high-trust, focused on catching stupid mistakes. Peer-to-peer reviews happen between two teams at the same organisation — slower, more formal, high-value because a fresh set of eyes catches things your own team has gone blind to. Mentor-led reviews happen monthly with a coach or alumnus — the slowest, most rigorous, and the review most likely to surface strategic issues invisible from inside the sprint.

Guided practice

The design review slide outline — copy this for every review. Each numbered item is one slide or one whiteboard panel.

  1. Problem brief (1 slide, 60 seconds). State the PIL entry ID and paste the Understanding-the-Problem column. No context, no history — just the symptom and the conditions.
  2. Constraints recap (1 slide, 30 seconds). Paste the Constraints column from the PIL entry. Call out any constraint that is tighter than it was last sprint.
  3. Brainstormed options (1 slide, 90 seconds). Sketches of every candidate considered. Ugly hand-drawings are better than polished CAD at this stage — they keep the conversation on the idea, not the rendering.
  4. Decision matrix (1 slide, 2 minutes). The filled-in matrix from PM.5, with weights visible. Walk through the scores briefly and name the winner and the runner-up.
  5. Proposed solution (2–3 slides, 3–4 minutes). What is being built, at what level of detail. For a build decision: CAD thumbnail plus a bill of materials sketch. For a code decision: a rough structure diagram plus the public interface.
  6. Success criteria (1 slide, 30 seconds). Paste the Success Criteria column from the PIL entry. The room needs to agree this is what “done” looks like.
  7. Risks and unknowns (1 slide, 1 minute). The presenter names the two or three things they are not sure about. This is the slide that makes the review valuable — it invites targeted feedback instead of vague objections.
  8. Proposed next steps (1 slide, 30 seconds). Who does what, by when, and which strand each task belongs to.
  9. Feedback and open floor (5–10 minutes). Capture every comment in a single shared document — the feedback notes become PIL updates.
  10. Decision (1 minute). Sign-off to build, sign-off with changes, or re-scope. Recorded explicitly, with names attached.

Timing: 20 minutes for an internal review, 30–40 minutes for peer-to-peer, 45–60 minutes for mentor-led. If an internal review is running past 25 minutes, the presenter is either under-prepared or the problem is too big for one review — split it.

Independent exercise

Schedule the next design review on your team's calendar now, even if it is ten days away. Assign a presenter (rotate — not always the same student). The presenter uses the ten-slide outline above to prepare a review for one PIL entry currently in progress. Run the review. Capture the feedback in a shared document and update the PIL entry with every action item that came out. If nothing changed in the PIL as a result of the review, the review was theatre — run the next one harder.

Common pitfalls

  • Presenting a polished solution instead of options. If the matrix is not visible, the review is a show-and-tell, not a review.
  • Letting the loudest voice in the room rewrite the decision on the spot. Capture the objection, update the PIL, re-score the matrix if needed, and make the change deliberately — not in the meeting's heat.
  • Skipping the risks slide. Presenters duck the risks slide because it feels like admitting weakness. It is the most valuable slide on the deck.
  • No recorded decision. A review that ends “let us think about it” is a review that cost the team forty-five minutes and produced nothing. Every review produces one of: sign-off, sign-off with changes, re-scope.
  • Reviewing work that is already half-built. The review goes before the build. If metal is already cut, the review becomes a post-mortem, not a checkpoint.

Where this points next

PM.9 handles the other half of the sprint close — the retrospective on the team's own process, and the competition review that turns event evidence into fresh PIL entries.

📐 Reflection prompt (notebook-ready)

  • The design review feeds two notebook artefacts. Paste the slide outline and the captured feedback into a notebook page, headed with the sprint, iteration, and attendee list.
  • Any CAD changes the review produced are a direct input to the master assembly update log. Cross-link by PIL entry ID.

Next up: PM.9 — Retrospectives and competition reviews.

Retrospectives and competition reviews

Two meetings that close the loop: a sprint retro and a post-event review, so that process failures and match-play evidence feed directly back into the PIL.

~45+ min Prereqs: PM.2, PM.4, PM.6

Objective

Run an honest retrospective at the end of every sprint and a structured competition review after every event, so that process failures and match-play evidence both feed directly back into the PIL as new entries for the next sprint.

Concept

Two meetings, one purpose: close the loop on what the team learned. The retrospective asks “how did we work this sprint, and what will we change next sprint?” The competition review asks “what did the event tell us about the robot and the strategy, and what new PIL entries does it generate?” Neither meeting is a victory lap. Neither is a blame session. Both are the team systematically turning experience into inputs for the next sprint.

The failure mode both meetings prevent is the same: expensive signal getting lost. A sprint that was chaotic for a fixable reason is a lesson, but only if the team names the reason. A competition that revealed the intake cannot handle a real opponent's defensive pressure is a gift, but only if the observation lands in the PIL before the team moves on. Teams that skip retrospectives and competition reviews repeat the same failure modes every sprint and every event, because nothing is in place to convert experience into change.

⚡ Competition tip. The competition review runs within forty-eight hours of every scrimmage, tournament, or league event. It is longer — sixty to ninety minutes — because the event is the most expensive signal a team gets all season and deserves to be mined slowly. Put the review on the calendar before the event, not after.

Guided practice

Retrospective template (45 minutes)

Ground rule: separate the idea from the person.

  1. What went well? (10 minutes). Each person names one or two. Write them on the board. No discussion yet.
  2. What did not? (15 minutes). Same structure. Bias toward specific, observable facts — “we missed three stand-ups” not “we were not disciplined.”
  3. Themes (5 minutes). Facilitator groups the “did not” items into themes.
  4. Changes for next sprint (10 minutes). The team picks at most three concrete changes. Each change has an owner and a test — how will you know next sprint whether it worked?
  5. Log it (5 minutes). Write the three changes into the next sprint's plan page. Not the retro document — the plan page. Retros that only live in retro documents never change anything.
🖼 images/05-retro-board.png Retrospective whiteboard with three columns and sticky notes

🖼 Image brief

  • Alt: Whiteboard divided into three columns: Went Well, Did Not, Changes. Sticky notes in each column with handwritten items.
  • Source: Photograph your team's first retrospective board.
  • Caption: At most three changes per retro. More will not stick.

Competition review template (60–90 minutes)

  1. Match-by-match (30 minutes). One row per match. Columns: match number, result, what worked, what broke, what the driver wished they had. Do not skip losses. Losses are where the signal is.
  2. Quantitative data (15 minutes). Cycle times, scoring rates, auton success rate, skills run scores. Compare to pre-event targets from the sprint plan.
  3. Qualitative data (15 minutes). Driver observations, alliance partner comments, observations about opponents, judging interaction feedback.
  4. PIL generation (15 minutes). Every observation above becomes either a new PIL entry or a note against an existing one. The team captain drives this step.
  5. Season goal check (5 minutes). Given what you learned, are you still on track for the season goal? Yes, no, or modify. Record the decision.

Independent exercise

Run a retrospective at the end of your team's current sprint, using the template above, with a rotating facilitator who is not the team captain. Produce at most three concrete changes and write them into the next sprint plan page. After your next event, schedule the competition review within forty-eight hours — put it on the calendar before the event, not after. Run it from the template.

Common pitfalls

  • Turning a retro into a blame session. The rule “separate the idea from the person” exists for this. Facilitators enforce it or the meeting dies.
  • Retros that produce ten changes. Three or fewer. More will not stick.
  • Competition reviews held a week after the event. The signal decays fast. Within forty-eight hours or the review is a ghost.
  • Skipping the losses in the match-by-match. The losses are where the value is.
  • No owner for a change. A change without a named owner is a wish.

Where this points next

PM.10 zooms out from the sprint and the event to the whole season's cross-strand sequencing — the dependency map that every sprint plan should be checked against.

📐 Reflection prompt (notebook-ready)

  • Both artefacts are notebook-ready. Paste the retro output into a Sprint Retrospective page with the date, facilitator, went-well / did-not / changes sections, and owners for each change.
  • Paste the competition review into an Event Review page with the match-by-match table, the quantitative summary, and the list of new PIL entries generated. Over a season these pages are the strongest evidence of reflective practice in the notebook.

Next up: PM.10 — The cross-strand dependency map.

The cross-strand dependency map

A living map of every lesson that depends on another lesson in a different strand, so sprint planning sequences work in the order that does not require rework.

~60 min Prereqs: PM.2, PM.4, PM.6

Objective

Build and maintain a living map of every lesson in Strands 1, 2, 3, and 5 that depends on another lesson in a different strand, so that sprint planning sequences work in the order that does not require rework.

Concept

Every technical problem in a VRC season looks like a single-strand problem until you start building it. Then you discover the lateral PID you are trying to tune in the Coding strand is unreachable because the drivetrain in the Building strand is not square yet, so the code is chasing a build problem that nobody flagged as a build problem. You discover the IMU drift you are trying to filter is a mounting problem. You discover the pneumatic CAD is blocked because the team has not worked through the pneumatics physics in the Building strand. These are not surprises once you are halfway through them — they are predictable, structural, and repeat every season.

The cross-strand dependency map is the artefact that makes them visible before they bite. It is one document — a table, a graph, a wall of sticky notes, whichever your team will actually maintain — that lists every lesson or task in one strand whose success depends on a lesson or task in another strand.

📐 Engineering tip. There are two kinds of dependency, and your team's sprint planner must know both. Hard dependencies mean the upstream lesson must be complete before the downstream lesson is attempted; skipping the order produces work that has to be redone. Soft dependencies mean the work can happen in either order, but one order is cheaper — usually because catching mistakes at the upstream stage is an order of magnitude cheaper than catching them at the downstream stage.

Guided practice

A starter dependency map, shaped for a team running all strands. Adopt this wholesale as a starting point and extend it as new lessons come online.

Hard dependencies — upstream must be complete first
Upstream                        | Downstream                    | Why it is hard
Building: frame first, squaring | Coding: lateral PID tuning    | An unsquared drivetrain makes lateral PID untuneable; the code chases geometry ghosts.
Building: screw theory, boxing  | Coding: tracking wheels       | Tracking pod mounting is a build problem first. Mis-mounted pods produce odometry drift no code can filter.
Building: electronics placement | Coding: IMU reading           | A poorly-mounted IMU generates noise and vibration artefacts that look like software bugs.
Building: pneumatics physics    | CAD: pneumatic assembly       | Modelling a pneumatic subsystem without understanding bore, stroke, and duty cycle produces CAD that has to be redone.
PM: PIL (PM.4)                  | PM: decision matrices (PM.5)  | The matrix operates on the candidate options for a problem — no problem, no matrix.
Any build work on the robot     | Notebook: build logs          | If it happens on the robot, it gets logged. No undocumented build work.
Soft dependencies — either order works, but one is cheaper
First              | Then                      | Why this order is cheaper
CAD: sketching     | Building: drivetrain      | Mistakes in a sketch cost a pencil line. Mistakes in metal cost a weekend.
Coding: PID theory | Coding: PID tuning        | Tuning without the theory produces numbers that work on one chassis and fail on the next.
Coding: PID        | Coding: motion libraries  | Library configuration assumes the student already reads PID terms fluently.
PM: PIL (PM.4)     | PM: design reviews (PM.8) | A design review without a PIL entry has nothing concrete to review.
🖼 images/05-dependency-map.png Dependency graph with four swim lanes and hard/soft dependency arrows

🖼 Image brief

  • Alt: Directed graph showing the four strands (Coding, Building, CAD, Notebook) as swim lanes, with arrows between nodes representing hard and soft dependencies. Hard dependencies are solid arrows; soft dependencies are dashed.
  • Source: Draw in a diagramming tool or on a whiteboard and photograph.
  • Caption: The dependency map tells sprint planning which tasks must be sequenced and which can run in parallel.

How to read the map in a sprint planning meeting: for every task the team is about to commit to, the sprint planner asks “what does this depend on from another strand, and is that upstream work already done?” If the upstream is not done, the task gets deferred or re-scoped until it is. If the upstream is a soft dependency and the team is willing to eat the rework cost, the decision is made explicitly and recorded, not defaulted into.
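The planner's question — "is that upstream work already done?" — can be sketched as a lookup against the map. A minimal example (the edges follow the starter map's hard dependencies; the function and variable names are illustrative):

```python
# Sketch: the sprint planner's readiness check — for each candidate task, is
# every hard upstream dependency already done? Edges follow the starter map
# above; the helper names are illustrative.

HARD_DEPS = {
    "Coding: lateral PID tuning": ["Building: frame first, squaring"],
    "Coding: tracking wheels":    ["Building: screw theory, boxing"],
    "Coding: IMU reading":        ["Building: electronics placement"],
    "CAD: pneumatic assembly":    ["Building: pneumatics physics"],
}

def triage(task: str, done: set[str]) -> str:
    """Return 'commit' if all hard upstreams are done, else 'blocked'."""
    missing = [up for up in HARD_DEPS.get(task, []) if up not in done]
    return "blocked" if missing else "commit"

# Usage: the drivetrain is squared, but electronics placement is not done yet.
done_work = {"Building: frame first, squaring"}
status_pid = triage("Coding: lateral PID tuning", done_work)
status_imu = triage("Coding: IMU reading", done_work)
```

Soft dependencies deliberately stay out of the lookup: they never block a task outright, so the map leaves them as a cost argument for the humans in the planning meeting rather than a gate for the check.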

Independent exercise

Take this starter map and, as a team, add three edges your own team has hit in the past season — failures where a downstream lesson broke because an upstream lesson in another strand was incomplete. For each, decide whether it is a hard or soft dependency, write the “why” column, and merge it into the map. Post the map on the wall where the sprint planning meeting happens.

Common pitfalls

  • Treating the map as a one-time setup. Dependencies appear as the season unfolds. A map that is never updated is a map that stops matching reality by week four.
  • Hiding soft dependencies behind “we will just be careful.” Careful is not a plan. Sequence or accept the rework cost explicitly.
  • Strand leads who do not read the map. The point of the map is that the programmer knows the builder is blocking them. If strand leads do not read it, the map does not exist.
  • Confusing a missing lesson with a missing dependency. If the upstream lesson is not done because nobody has scheduled it, that is a planning problem, not a dependency problem — schedule it.
  • Mistaking “we always do it this way” for a dependency. Habits are not dependencies. Write down the actual failure mode that justifies the edge.

Where this points next

PM.11 takes the dependency map and schedules it into the actual week — the cadence at which a team runs, day by day, so that upstream work stays one step ahead of downstream work without anyone having to think about it.

📐 Reflection prompt (notebook-ready)

  • The dependency map lives in the team's internal documents, not the judging notebook directly. But every time a sprint retrospective identifies a failure that traces back to a dependency the team skipped, log that failure against the map in the notebook's process reflection section.
  • The “how we learned to sequence work” narrative is real evidence of engineering process.

Next up: PM.11 — Cadence: how a working week runs.

Cadence: how a working week actually runs

A default weekly shape with stand-ups, strand-aligned work blocks, deliberate cross-strand handoffs, and a Friday close.

~45 min Prereqs: PM.2, PM.6, PM.7, PM.10

Objective

Install a default weekly cadence — with stand-ups, strand-aligned work blocks, deliberate cross-strand handoffs, and a Friday close — so the dependency map from PM.10 turns into an actual schedule the team runs without re-arguing it every week.

Concept

PM.10 told you which lessons depend on which. PM.11 tells you when in the week each of them happens, so the upstream work is reliably one step ahead of the downstream work without anyone having to re-plan it. A cadence is the difference between a team that has good practices and a team that is good. Practices are things a team believes in. A cadence is things that happen this Tuesday at 4:15 whether or not anyone is feeling inspired.

The failure mode this prevents is cadence collapse — the week where “we will do the stand-up tomorrow” becomes three weeks without a stand-up, the sprint retrospective is skipped because everyone was tired, the notebook writer catches up on Friday night instead of logging daily, and the team arrives at the next event three sprints behind where the plan said they would be. Every one of those slips is individually reasonable. Collectively they cost the season. A written, posted cadence that every member of the team has seen and agreed to is the defence.

📐 Engineering tip. A cadence is not a rigid schedule. It is a default shape the week takes in the absence of a reason to deviate. When there is a reason — a competition, a critical rebuild, exam week — the team deviates deliberately and re-runs the cadence when they come back. The cadence is also phase-aware: early-season prototyping runs differently from mid-season iteration, which runs differently from a competition week, which runs differently from a post-event learning week.

Guided practice

Four sample weeks, one per season phase. Pick the one that matches your team's current phase and run it.

Mid-season iteration week (the default shape)

Mid-season cadence — the default shape
| Day | Coding | Building | CAD | Notebook | PM activity |
|-----|--------|----------|-----|----------|-------------|
| Mon | Tune lateral PID on new wheelbase | — | Update chassis assembly to match rebuild | Build log + PID tuning data table | Stand-up, triage PIL, confirm sprint focus |
| Tue | Rewrite motion profile for intake v2 | Rebuild front intake per CAD | Update intake assembly | Brainstorm page for failed intake v1 | Stand-up |
| Wed | Auton route refactor | — | — | Decision matrix: intake v2a vs v2b vs v2c | Stand-up, design review (end of sprint half) |
| Thu | — | Install intake v2b; friction audit | — | Build log + test log with cycle time data | Stand-up |
| Fri | Skills run test + telemetry capture | — | — | Post-test analysis, pre-event checklist draft | Stand-up, end-of-week retro if sprint boundary |

Note how the handoffs are deliberate: Monday the CAD catches up to Friday's rebuild, so Tuesday's build session has a current reference. Wednesday's decision matrix is notebook work that feeds Thursday's install. Thursday's test data feeds Friday's skills run. Every cross-strand transition is planned, not emergent.

Phase variations

Early-season prototyping week — shift weight toward CAD and brainstorm, away from tuning. Stand-ups stay daily. Sprint length is typically two weeks. Design reviews are short and internal; decisions are cheap because nothing is committed. The notebook focuses on ideation pages and decision matrices.

Competition week — freeze design changes. No CAD updates unless a part breaks. Stand-ups become shorter (five minutes) but more frequent (morning and mid-afternoon). The week's goal is pre-match checklists, driver practice, auton tuning, and the pre-event notebook sweep. Design reviews are replaced with match-by-match review sessions. Retros are deferred until the post-event review.

Post-competition learning week — the opposite of a competition week. No building except triage fixes. The week is dominated by competition review (the event is the expensive signal), new PIL entries generated from match footage, and a long retrospective. Sprint planning for the next phase happens on the Friday of this week, not the Monday of next week — the learning must land before the plan.

⚡ Competition tip. Freezing design changes the week before an event feels wrong, but unfrozen design the week before an event is how teams arrive with an untested robot. Freeze early, test often, arrive confident.

Rules that hold in every phase:

  • Stand-up every day, even if brief. A missed stand-up is a leading indicator of cadence collapse.
  • Notebook work is daily, not weekend-batched.
  • Cross-strand transitions are scheduled on specific days, not “whenever.”
  • Friction audits and pre-test checks happen before testing, not after a failed test.
  • Sprint boundaries fall on the same day of the week for every sprint in a phase.

Independent exercise

Pick the phase your team is currently in, take the matching sample week above, and rewrite the coding / building / CAD / notebook / PM columns for your team's next seven days. Fill in real task names from your PIL. Post the filled-in week on the wall of the team meeting space on Sunday night. On Friday, compare what actually happened to what you posted. The gap is what you retrospect on.
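If your team keeps the posted week in a shared spreadsheet or text file, the Friday planned-versus-actual comparison can be mechanised. This is a minimal sketch under assumed conventions (day names mapped to task lists); `week_gap` is a hypothetical helper, not part of the curriculum:

```python
def week_gap(planned, actual):
    """Compare the posted week to what actually happened.

    planned, actual: dicts mapping day name -> list of task names.
    Returns (missed, unplanned): planned tasks that never happened,
    and work that happened without being on the posted week.
    """
    missed = {day: [t for t in tasks if t not in actual.get(day, [])]
              for day, tasks in planned.items()}
    unplanned = {day: [t for t in actual.get(day, []) if t not in planned.get(day, [])]
                 for day in actual}
    return missed, unplanned

# Illustrative two-day example
planned = {"Mon": ["tune PID", "stand-up"], "Tue": ["rebuild intake"]}
actual = {"Mon": ["stand-up"], "Tue": ["rebuild intake", "fix broken axle"]}
missed, unplanned = week_gap(planned, actual)
print(missed)     # {'Mon': ['tune PID'], 'Tue': []}
print(unplanned)  # {'Mon': [], 'Tue': ['fix broken axle']}
```

Both outputs are retro material: `missed` is the gap you retrospect on, and `unplanned` is often a sign of PIL entries that never made it onto the board.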

Common pitfalls

  • Writing a perfect cadence and then not posting it. If the week is not visible where the team meets, the week does not exist.
  • Re-planning the cadence every week from scratch. The cadence is a default shape; only the task names change.
  • Skipping the Friday close. The Friday close is where the notebook and PIL catch up to reality. Skip it once and the team is a week behind by Wednesday.
  • Letting a competition week run like a mid-season week. The design freeze exists for a reason; a team that keeps iterating into event week arrives with an untested robot.
  • No phase awareness. A team that runs the same week in week 3 and week 23 is not learning.

Where this points next

PM.12 adds estimation and velocity — the discipline that calibrates how much work a team can actually fit into the sprint-shaped containers the cadence schedules.

📐 Reflection prompt (notebook-ready)

  • Your filled-in weekly cadence is a notebook artefact. In the notebook's process section, keep a running “weekly cadence” page — one row per week, showing planned versus actual, with a one-line note on any deliberate deviation.
  • Over a season this becomes clear evidence to a judging panel that the team runs on a schedule, not on vibes.

Next up: PM.12 — Estimation and velocity.

Estimation and velocity

Calibrate how much work your team can actually ship per sprint, so commitments match reality instead of aspiration.

~45+ min Prereqs: PM.6, PM.9

Objective

Calibrate the team's ability to predict how much work fits into a sprint by recording estimate-versus-actual on every task and using the running average (velocity) to size the next sprint honestly.

Concept

Estimation is the skill of predicting, before you start a task, how long it will take. Velocity is the running measurement of how many tasks the team actually finishes per sprint. Together they let a team commit to a sprint that matches its real capacity instead of its aspirational one. Teams without estimation either over-commit (and finish half the sprint) or under-commit (and coast). Teams with estimation converge, over three or four sprints, on a realistic sprint size and then get a lot done.

Two rules do most of the work. First: estimates are predictions, not commitments. If the estimate turns out to be wrong, the team revises it — it does not pretend the task is still on track by working through a weekend. Second: record estimate and actual on every task. Without the pair, there is no calibration and estimation becomes superstition. Keep it simple: a two-column note on the task card — “estimate: M, actual: L” — is enough.

📐 Engineering tip. The failure mode this prevents is the most common one in every new team: over-commitment as a habit. The sprint planning meeting puts twenty tasks on the board; the team finishes ten; the retro asks why; someone says “we were not focused enough.” The real reason is almost never focus. The real reason is that the team's velocity is ten tasks per sprint and they committed to twenty. The fix is not harder work. The fix is committing to ten next sprint and actually shipping them.

T-shirt sizes are easier than hours. S is under two hours. M is half a day. L is a full working session. XL is “break it down further — not allowed on the board.” Over a few sprints the team develops a feel for each size. More precise estimates (in hours or points) are possible and some teams like them, but the t-shirt scale is low-friction and calibrates surprisingly fast.
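For teams that want to sanity-check a sprint commit against available bench time, the t-shirt scale converts to rough hours in a few lines. This is an illustrative sketch, not part of the curriculum: the hour values are assumptions (S ≈ 2 h, M ≈ 4 h, L ≈ 8 h), and `committed_hours` is a hypothetical helper name.

```python
# Assumed hour values per t-shirt size; tune these to your own sessions.
SIZE_HOURS = {"S": 2, "M": 4, "L": 8}  # XL deliberately absent: break it down first

def committed_hours(sizes):
    """Sum the rough hours a sprint commit represents.

    Raises KeyError on "XL" by design: an XL is not allowed on the board.
    """
    return sum(SIZE_HOURS[size] for size in sizes)

# Sprint 2's commit from the worked example below: 5 S, 6 M, 2 L
commit = ["S"] * 5 + ["M"] * 6 + ["L"] * 2
print(committed_hours(commit))  # 5*2 + 6*4 + 2*8 = 50 rough hours
```

If 50 rough hours lands well above the team's actual weekly bench time, that is the over-commit warning before the sprint even starts.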

Guided practice

A worked three-sprint calibration for a new team. This is the shape estimation actually takes — an over-commit, a correction, and a convergence.

Three-sprint velocity calibration
| Sprint | Committed | Shipped | Retro finding |
|--------|-----------|---------|---------------|
| Sprint 1 — baseline over-commit | 22 tasks (8 S, 10 M, 3 L, 1 XL) | 11 tasks (6 S, 4 M, 1 L) | Team committed to nearly twice its real capacity. The XL task was actually three L tasks hiding together. |
| Sprint 2 — correction | 13 tasks (5 S, 6 M, 2 L) | 12 tasks (5 S, 5 M, 2 L) | Estimate-vs-actual on individual tasks: six tasks were off by one t-shirt size (three over, three under). The team is calibrating. |
| Sprint 3 — convergence | 13 tasks (4 S, 7 M, 2 L) | 13 tasks | Individual estimates now match actuals on roughly 70% of tasks. The team is reliable enough to start planning two sprints ahead. |
Running velocity chart
| Sprint | Committed | Shipped | Notes |
|--------|-----------|---------|-------|
| 1 | 22 | 11 | Baseline over-commit; one XL snuck in |
| 2 | 13 | 12 | First honest sprint |
| 3 | 13 | 13 | Converged; team can now plan with confidence |
| 4 | 14 | ? | Small stretch — is it real growth or wishful thinking? |

The team's honest capacity is 12–13 tasks per sprint. Sprint 4's 14-task commit is a deliberate stretch with a specific reason (a new member joined; one of the L tasks is really an M now that a dependency cleared). Stretches are fine when they have a reason. Stretches for vibes are how sprints fall apart.
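The arithmetic behind the chart is small enough to sketch. This is a minimal illustration under stated assumptions: velocity is taken as the mean shipped count over completed sprints, the last completed sprint's shipped count caps the next commit, and `velocity` / `next_commit_ceiling` are hypothetical helper names.

```python
def velocity(history):
    """Mean tasks shipped per completed sprint.

    history: list of (committed, shipped) pairs; shipped is None for an open sprint.
    """
    shipped = [s for (_, s) in history if s is not None]
    return sum(shipped) / len(shipped)

def next_commit_ceiling(history):
    """Last completed sprint's shipped count caps the next sprint's commit."""
    completed = [s for (_, s) in history if s is not None]
    return completed[-1]

# (committed, shipped) per sprint from the chart; sprint 4 is still open.
history = [(22, 11), (13, 12), (13, 13), (14, None)]
print(velocity(history))             # 12.0 over the three completed sprints
print(next_commit_ceiling(history))  # 13 -- arguing with this number is how sprints miss
```

Note the ceiling uses the most recent completed sprint rather than the season-long mean, so early over-commits (sprint 1's 11) stop dragging the number down once the team has calibrated.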

Independent exercise

For your next sprint, write an estimate next to every committed task using the t-shirt scale. At the end of every session, for every task that was touched, write the actual beside the estimate. At the end of the sprint, count estimate-matches-actual, total shipped, and total committed. Use shipped as your velocity target for the next sprint's commit. After three sprints, your team has a real number.
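The end-of-sprint count is one division. A minimal sketch, assuming task cards are recorded as (estimate, actual) t-shirt pairs; `calibration` is a hypothetical helper name:

```python
def calibration(cards):
    """Fraction of tasks whose actual size matched the estimate.

    cards: list of (estimate, actual) t-shirt pairs, e.g. ("M", "L").
    """
    matches = sum(1 for est, act in cards if est == act)
    return matches / len(cards)

# Illustrative sprint: 7 of 10 estimates matched their actuals.
cards = [("S", "S"), ("M", "L"), ("M", "M"), ("L", "L"), ("S", "M"),
         ("M", "M"), ("L", "L"), ("S", "S"), ("M", "M"), ("M", "S")]
print(f"{calibration(cards):.0%} of estimates matched actuals")  # 70%
```

A rate around 70%, like sprint 3 in the worked example, is the point where planning two sprints ahead becomes credible.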

Common pitfalls

  • Treating estimates as commitments. An estimate that turns out to be wrong gets revised, not hidden.
  • Padding estimates to “play it safe.” Padding destroys calibration. Estimate honestly; let velocity account for variance.
  • No estimate-actual comparison. Without the comparison, estimation is ritual without learning.
  • Adding an XL to the board. XL means the task is not defined well enough to estimate. Break it down before committing.
  • Ignoring a bad velocity in sprint planning. Last sprint's velocity is the next sprint's ceiling. Arguing with the number is how sprints miss.

Where this points next

You have reached the end of Chapter V. Return to PM.1 and re-read it: it is a different document now that you have installed the frameworks. The next step is using them — sprint by sprint, for a season. When you are ready, move on to Chapter VI — Engineering Notebook, where every artefact this chapter produced becomes a notebook page.

📐 Reflection prompt (notebook-ready)

  • Keep a Velocity Log page in the notebook process section. One row per sprint with committed, shipped, and velocity.
  • Over a season the chart is direct evidence the team plans and executes with discipline.

Next up: Chapter VI — Engineering Notebook.