Chapter II of VI · C++ / PROS / LemLib

Coding

This chapter takes you from a single-file VEXcode program to a multi-file PROS project with LemLib, tuned PID controllers, odometry, and a state-machine autonomous routine that survives partial failure on the field. Thirty-nine sections across seven tiers. Tier 0 is a refresh; Tier 1 migrates you off VEXcode; Tiers 2–4 build the core; Tier 5 adds advanced techniques; Tier 6 makes it all competition-ready.

VEXcode C++ syntax quick reference

A rapid refresher on the C++ you need for a working robot program — motor declarations, the control loop, joystick reading, and the wait that holds it all together.

~20 min

Objective

Write a small VEXcode C++ program that configures a motor, spins it at a controller-determined speed in a loop, and stops cleanly on a button press.

Concept

This tutorial is a refresher on the C++ you need to write a working robot program — not a full language introduction. If you have written anything on a V5 Brain before, you can skim it. If you have not, you will get the four things you need most often: declaring a motor, looping with a delay, reading the V5 Controller, and structuring the main function.

A VEXcode text project is built around a main function that runs once when the program starts. You declare hardware at the top, set up any initial state, and enter a while (true) loop that reads inputs and drives outputs. wait(milliseconds) pauses so the loop does not hog the CPU.

Controller joysticks report values from −100 to 100 (or −127 to 127, depending on the API). A small deadband around zero is always a good idea because at rest the joystick does not report zero exactly.
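The deadband check is small enough to test away from the robot. Here is a plain-C++ sketch — `applyDeadband` is an illustrative name, not part of the VEXcode API — assuming the −100 to 100 range:

```cpp
#include <cstdlib>  // std::abs

// Treat any reading inside the threshold as zero; pass everything else through.
// The default of 5 assumes the -100..100 VEXcode range; scale it up for -127..127.
int applyDeadband(int stick, int threshold = 5) {
    return (std::abs(stick) < threshold) ? 0 : stick;
}
```

Wrapping the check in a helper keeps the condition out of your main loop logic, e.g. `leftMotor.spin(forward, applyDeadband(stick), percent);`.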

💡 Coding tip. Three things every program needs: hardware declarations, a loop, and a delay. Miss any one of the three and the robot misbehaves in a way that looks like a hardware fault.

Guided practice

#include "vex.h"
using namespace vex;

// Declarations (at the top of main.cpp or in the config)
motor leftMotor = motor(PORT1, ratio18_1, false);
controller master = controller(primary);

int main() {
    while (true) {
        int stick = master.Axis3.position();   // -100..100
        if (std::abs(stick) < 5) stick = 0;    // deadband
        leftMotor.spin(forward, stick, percent);
        wait(10, msec);
    }
}

What each piece does:

  • motor leftMotor = motor(PORT1, ratio18_1, false) — declares a motor on port 1, 200 RPM gearing, not reversed.
  • controller master = controller(primary) — the primary controller.
  • master.Axis3.position() — reads the left vertical joystick, returns −100 to 100.
  • leftMotor.spin(forward, stick, percent) — spins the motor at stick per cent of its maximum velocity.
  • wait(10, msec) — 10 ms delay so the loop runs at 100 Hz.

Never run a while (true) loop without a wait inside it. Without the wait, the loop consumes the CPU and blocks anything else from running.

Independent exercise

Write a small program on your own robot that:

  1. Declares two motors for a left and right drivetrain side.
  2. Reads two joystick axes.
  3. Applies a 5% deadband to each.
  4. Spins each motor at its corresponding joystick value.
  5. Waits 10 ms per loop.

Success criterion: both drivetrain sides respond to the joysticks smoothly and the robot does not creep when the sticks are released.

Common pitfalls

  • Missing the wait. Everything else breaks.
  • Forgetting the deadband. The robot creeps.
  • Declaring motors inside main instead of at the top of the file. Scoping gets confusing fast.

Where this points next

L0.2 is the sensor refresh — bumper, distance, rotation, sonar — which you will port to PROS in L2.4.

💡 Reflection prompt (notebook-ready)

  • Why does the loop need a wait? What happens to the robot if you remove it?

Next up: L0.2 — Sensor API refresh.

Sensor API refresh

Declare and read the four most common V5 sensors — bumper switch, distance sensor, rotation sensor, sonar — and use a reading in a conditional.

~25 min

Objective

Declare and read from the four most common V5 sensors — bumper switch, distance sensor, rotation sensor, sonar — and use a reading in a conditional to change robot behaviour.

Concept

Every V5 sensor follows the same pattern: declare it on a port at the top of your code, call a method on it to get its current reading, and use the reading in an if or a while to change behaviour. The specific method name depends on the sensor, but the shape is identical.

  • Bumper switch — digital, pressed or not. .pressing() returns true when the switch is down. Good for “has the robot hit a wall?” and “is something in the receiver?” checks.
  • Distance sensor — reports distance in millimetres out to about 2 metres. Noisy at long range; always check confidence or filter before trusting a reading.
  • Rotation sensor — reports both position (continuous, unbounded) and angle (wrapped 0–360°). This is the sensor you put behind tracking wheels and on lift pivots.
  • Sonar — older ultrasonic distance sensor. Slower than the V5 distance sensor, but useful for long ranges.

Two rules for all of them:

  1. Declare once, read many times. Put the declaration at the top of the file. Call its methods from inside your loop. Never declare a sensor inside a loop.
  2. Validate readings before acting on them. A noisy reading that triggers an action causes more bugs than a silent one that never triggers. Range-check the value, confidence-check it if possible, and debounce state transitions.
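Rule 2 can be captured in a small helper that is easy to test off the robot. This is a hypothetical sketch — `DebouncedRange` is not a VEX or PROS type — showing one way to combine a range check with a debounce counter:

```cpp
// Only report "detected" after the raw reading has agreed for several
// consecutive loop iterations. Names and thresholds are illustrative.
struct DebouncedRange {
    int min_mm, max_mm;  // readings outside this window are rejected
    int required;        // consecutive in-range reads before we trust it
    int streak = 0;

    bool update(int reading_mm) {
        if (reading_mm >= min_mm && reading_mm <= max_mm) {
            if (streak < required) ++streak;
        } else {
            streak = 0;  // one bad reading resets the debounce
        }
        return streak >= required;
    }
};
```

Feed it the raw millimetre reading every loop iteration and act only when `update()` returns true; a single noise spike can no longer trigger the action.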

Guided practice

#include "vex.h"
using namespace vex;

// Declarations (leftMotor and rightMotor are declared as in the previous lesson)
bumper frontBumper = bumper(Brain.ThreeWirePort.A);
distance frontDistance = distance(PORT2);
rotation liftSensor = rotation(PORT3);

int main() {
    while (true) {
        // Bumper — stop when pressed
        if (frontBumper.pressing()) {
            leftMotor.stop();
            rightMotor.stop();
        }

        // Distance — only trust short readings
        int dist = frontDistance.objectDistance(mm);
        if (dist < 500) {
            Brain.Screen.printAt(10, 10, "Close: %d mm", dist);
        }

        // Rotation — read lift position
        double liftAngle = liftSensor.position(rotationUnits::deg);
        Brain.Screen.printAt(10, 40, "Lift: %.1f deg", liftAngle);

        wait(20, msec);
    }
}

Three sensors, three reads, three uses. None of them touches another; each one is read and acted on independently.

Using a condition on a sensor

The power of a sensor comes from making the robot do different things based on the value. A classic pattern:

while (frontDistance.objectDistance(mm) > 100) {
    // creep forward until within 100 mm of a wall
    leftMotor.spin(forward, 20, percent);
    rightMotor.spin(forward, 20, percent);
    wait(10, msec);
}
leftMotor.stop();
rightMotor.stop();

The loop condition re-reads the sensor every iteration. The robot keeps moving until the sensor tells it to stop.

Independent exercise

On your own robot, pick two sensors you have wired and write a small program that uses each one to change a motor’s behaviour:

  1. A bumper that stops the drivetrain on contact.
  2. A rotation or distance sensor that prints its value to the brain screen continuously.

Drive the robot around, press the bumper into something, and confirm the drivetrain stops. Move the rotation sensor’s shaft (or move an object in front of the distance sensor) and confirm the screen value changes.

Success criterion: both sensors visibly affect the robot, and you can explain what each one is reading.

Common pitfalls

  • Declaring sensors inside the loop. The declaration resets the sensor every iteration.
  • Reading distance-sensor values at maximum range and trusting them. Always range-check.
  • Using a bumper for a state that should be continuous. Bumpers are good for “pressed right now” and bad for “pressed some time in the last minute.”

Where this points next

Tier 1 begins with L1.1 — the case for leaving VEXcode and moving to PROS. The sensor knowledge from this lesson carries directly over; only the syntax changes.

💡 Reflection prompt (notebook-ready)

  • Which sensor on your robot is the most important one, and what decision does your code make based on it?

Next up: L1.1 — Why we leave VEXcode.

Why we leave VEXcode

The four capabilities a professional C++ toolchain gives you that VEXcode cannot, and why the migration is not optional for a competitive team.

~30 min

Objective

Explain, in your own words, why a competitive team graduates from VEXcode to a professional C++ toolchain, and name the four capabilities you gain by doing so.

Concept

VEXcode is training wheels. That is not an insult. Training wheels exist because you need them at the start, and you outgrow them the moment you stop needing them. If you are reading this tutorial, you have probably already hit the wall that every returning team hits eventually: your program is getting too big, you cannot find the line you wrote last week, the autonomous only has one file, and the moment two of you try to work on the same project the whole thing falls apart.

That wall is not a skill problem. It is a tool problem. VEXcode is built around a single-file, single-author, single-session model. A competitive robot is none of those things. It is a multi-subsystem machine, built by a multi-person team, across a whole season. Your tools need to match that reality.

There are four things a real toolchain gives you that VEXcode cannot:

  1. A real editor. Visual Studio Code gives you full-project search, jump-to-definition, find-all-references, refactoring, extensions, split views, and a dozen more keyboard shortcuts you will use hundreds of times a day. You stop reading code by scrolling and start reading code by navigating.
  2. A real build system. PROS is a proper C++ toolchain. It compiles your code the same way professional embedded software is compiled. It gives you clear error messages, optional warnings, and the ability to split your program into as many files as you want.
  3. A real library ecosystem. Once you are on PROS, you can pull in LemLib for motion control, other community libraries for specialised hardware, and your own reusable code from previous seasons. VEXcode is a walled garden; PROS is a shared workshop.
  4. A real version-control story. Git lets every team member work on their own branch, merge their changes when they are ready, and roll back instantly when something breaks. It also gives you a permanent history of why every change was made, which is gold when you are trying to figure out who changed the intake speed before the last qualifier.

These four things compound. A team using VS Code + PROS + LemLib + Git in week two moves faster than a team using VEXcode in week ten, because every week the gap widens. The toolchain is not an accessory. It is the thing that lets you keep working on the robot instead of fighting the robot.

💡 Coding tip. The rest of this tier installs the stack, ports your existing tank drive over, and gets your code into version control. By the end, you will have exactly the same robot behaviour you had in VEXcode, but in a workspace you can actually grow into.

Guided practice

There is no code for this lesson. Instead, do this experiment:

  1. Open your most recent VEXcode project (any season, any robot).
  2. Count how many lines are in your longest file.
  3. Try to find every place you set a motor voltage. Use whatever search tool VEXcode gives you.
  4. Now imagine splitting that file into an intake.cpp, a drivetrain.cpp, and an autonomous.cpp. Notice how hard that would be to do inside VEXcode.

That friction is the problem you are about to solve.

Independent exercise

Write one paragraph in your engineering notebook describing a moment this season (or last season) when you wished your coding tools had done something they could not. Be specific. What were you trying to do? What got in the way? What did you end up doing instead?

Success criterion: another student on your team can read the paragraph and recognise the frustration.

Common pitfalls

  • Treating this as “just a different UI.” It is not. The workflows are different, and that is the point.
  • Assuming returning teams do not need the refresh. Every returning team has at least one member who has never used a real IDE.
  • Skipping Git because “we only have one programmer.” One programmer today; two programmers the week before your regional.
  • Being intimidated by the command line. You will use exactly four commands this tier and then it becomes invisible.

Where this points next

L1.2 installs the full stack on your machine so you have somewhere to write your first PROS project.

💡 Reflection prompt (notebook-ready)

  • Which of the four capabilities (editor, build system, libraries, version control) do you predict will change how your team works the most? Write your prediction down now, and come back to check it at the end of Tier 2.

Next up: L1.2 — Installing the stack.

Installing the stack

Visual Studio Code, the PROS CLI, the PROS extension, and V5 USB drivers — installed, verified, and smoke-tested.

~60 min

Objective

Have Visual Studio Code, the PROS CLI, and the PROS VS Code extension installed, see your V5 Brain over USB, and run a smoke-test terminal command.

Concept

You need four things before you can write PROS code: Visual Studio Code (the editor), the PROS CLI (the command-line tool that builds and uploads your project), the PROS VS Code extension (a friendly front-end for the CLI), and the V5 USB drivers (so your machine and the brain can actually talk). These tools are all free, all still maintained, and the canonical installation guides for each one live at the upstream project’s own documentation.

One thing to watch for: PROS moves. The canonical install instructions live at the PROS “New Users” page (pros.cs.purdue.edu/v5/getting-started/new-users.html). If anything in this tutorial has drifted from reality, trust the upstream docs over this page. The goal of this lesson is to get you past the spots the upstream docs assume you already know, not to duplicate step-by-step instructions that will rot.

💡 Coding tip. Every install page linked below detects your environment and walks you through the right steps. Follow the upstream instructions — they are always more current than anything printed here.

Guided practice

Step 1 — Install Visual Studio Code

Go to code.visualstudio.com and follow the installer. Launch it once and make sure it opens. No extensions yet.

Step 2 — Install the PROS CLI

Follow the official PROS “New Users” install page: pros.cs.purdue.edu/v5/getting-started/new-users.html. The page detects your setup and walks you through the right installer. Finish the steps there, then come back here.

When you are done, open a new terminal window (important — an old one will not see the updated PATH) and run:

pros --version

If you see a version number, you are good. If you see “command not found,” your PATH is not set up — close the terminal, open a new one, and try again. If it still fails, the PROS docs have a troubleshooting section that covers the common cases.

Step 3 — Install the PROS VS Code extension

In VS Code, open the Extensions panel (the four-square icon in the sidebar) and search for “PROS.” Install the one published by “purdue-acm-sigbots.” Restart VS Code.

You should now see a small PROS icon in the sidebar. Click it — you will get a welcome page with “Create Project,” “Open Project,” and so on.

🖼 images/02-vscode-pros-extension.png VS Code with PROS sidebar icon and welcome page visible

🖼 Image brief

  • Alt: VS Code editor window showing the PROS extension icon in the sidebar and the PROS welcome page with Create Project, Open Project, and other options
  • Source: Screenshot of VS Code with PROS extension installed, sidebar expanded to show the PROS welcome panel
  • Caption: VS Code with the PROS extension installed. The sidebar icon (circled) opens the welcome page where you create and manage projects.

Step 4 — Drivers and firmware

Plug your V5 Brain into your machine with a USB-A-to-micro-USB cable (not USB-C — yet). Power the brain on. Then run:

pros lsusb

You should see a line that mentions a VEX V5 device. If you do not:

  • First, try a different cable. The single most common cause of this failure is a power-only cable that charges the brain but does not carry data.
  • If a new cable does not help, check the PROS “New Users” troubleshooting section — it lists the driver and permissions steps for your environment.

Make sure the brain firmware is current. The easiest way is to let VEX’s own tool update it once. Out-of-date firmware causes upload failures that look like software bugs, so rule it out early.

Step 5 — Smoke test

pros terminal

You should see something like “Connected to V5” and then nothing else. That is correct — there is no program running yet, so there is no output. Press Ctrl+C to exit. If you saw “Connected to V5,” the whole stack works.

Independent exercise

Take a screenshot of your VS Code window with the PROS sidebar visible and the PROS welcome page open. Paste it into your engineering notebook with a caption stating the PROS CLI version you are on (pros --version).

Success criterion: another student could use your screenshot to confirm their install matches yours.

Common pitfalls

  • Old terminal windows not seeing the new PATH. Always open a fresh terminal after installing the CLI.
  • Power-only USB cables. They charge the brain but do not transmit data.
  • Stale V5 firmware. Update it once through VEX’s own tool, then forget about it.
  • Installing the wrong PROS extension — there are forks. Use the one published by “purdue-acm-sigbots.”
  • Treating this page as canonical. When the PROS docs and this page disagree, the PROS docs win.

Where this points next

L1.3 creates your first PROS project and tours the file layout so you know what each directory is for.

💡 Reflection prompt (notebook-ready)

  • Which step took the longest? If a teammate was installing the same stack tomorrow, what one sentence of advice would you give them to save time?

Next up: L1.3 — Your first PROS project.

Your first PROS project

Create a PROS project, learn the directory layout, and upload a minimal program to the V5 Brain.

~45 min

Objective

Create a new PROS project, explain what each top-level directory is for, and upload a minimal program to the V5 Brain.

Concept

A PROS project is not a single file. It is a small, opinionated directory tree that separates your source code from the generated build artefacts, the library headers, and the project metadata. Once you know what lives where, you never have to think about it again.

Here is the canonical layout you will get from pros c new:

my_project/
├── Makefile           # build rules — do not touch
├── common.mk          # shared build settings — do not touch
├── project.pros       # project metadata — PROS manages this
├── firmware/          # precompiled PROS kernel — do not touch
├── include/           # header files (.h, .hpp) — you write these
│   ├── api.h          # pulls in the PROS API
│   ├── main.h         # your project-wide declarations
│   └── pros/          # PROS headers
├── src/               # source files (.c, .cpp) — you write these
│   └── main.cpp       # the default entry point
└── static/            # images, configs, SD-card files — optional

🖼 images/02-pros-project-explorer.png VS Code file explorer showing PROS project directory tree

🖼 Image brief

  • Alt: VS Code file explorer panel showing a PROS project with folders firmware, include, src, and static, plus files Makefile, common.mk, and project.pros at the root level
  • Source: Screenshot of VS Code file explorer with a freshly created PROS project expanded to show all top-level directories and files
  • Caption: A fresh PROS project in the VS Code file explorer. You will work exclusively in include/ and src/.

You will spend all of your time in include/ and src/. Everything else is the build system’s territory, and touching it without a good reason will break things in interesting ways.

Your program has five entry points that PROS calls for you at specific moments. Knowing which runs when is half the battle:

  • initialize() — runs once when the brain boots. Use this for sensor calibration and anything that only needs to happen once.
  • disabled() — runs while the robot is disabled between autonomous and driver control. You almost never need this.
  • competition_initialize() — runs once before a competition round starts, after initialize(). Use this to pick an auton route.
  • autonomous() — runs during the 15-second autonomous period.
  • opcontrol() — runs during the driver-control period. When you are in the pits testing without a competition switch, PROS treats everything as opcontrol.

If you do not understand a function yet, leave its body empty. The build will still succeed.

Guided practice

Step 1 — Create the project

From the PROS sidebar in VS Code, click “Create Project.” Pick a folder, name the project something like first-pros, and accept the default kernel version. Alternatively, from a terminal:

pros c new first-pros
cd first-pros
code .

Wait for the PROS CLI to finish fetching the kernel. When it is done, open src/main.cpp.

Step 2 — Write a minimal program

Replace the contents of src/main.cpp with:

#include "main.h"

void initialize() {
    pros::lcd::initialize();
    pros::lcd::print(0, "Hello from PROS");
}

void disabled() {}
void competition_initialize() {}
void autonomous() {}

void opcontrol() {
    pros::Controller master(pros::E_CONTROLLER_MASTER);
    pros::Motor left_motor(1);   // port 1
    pros::Motor right_motor(2, true);  // port 2, reversed

    while (true) {
        int fwd = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
        left_motor.move(fwd);
        right_motor.move(fwd);
        pros::delay(10);
    }
}

Change the port numbers to match your robot. The true on right_motor reverses it so “both motors forward” actually moves the robot forward.

Step 3 — Build and upload

In VS Code:

  • Click the PROS “Build” button in the sidebar. Watch the terminal. You should see it compile main.cpp and link the firmware. If there are errors, fix them before moving on.
  • Plug your brain in. Click “Upload.” The CLI will transmit the program over USB.

Step 4 — Run it

On the brain screen, pick your program and start it. Move the left joystick forward — both motors should spin. The brain screen should say “Hello from PROS.”

Independent exercise

Modify the program so the right joystick controls the right motor independently (tank drive), and the brain screen prints the current joystick Y value for the left stick every loop iteration. Upload and confirm.

Success criterion: left stick drives the left motor, right stick drives the right motor, and you can watch the brain screen number change as you move the stick.

Common pitfalls

  • Forgetting #include "main.h" at the top of a new source file. It pulls in the PROS API.
  • Wrong motor port. The number must match the physical port on the brain.
  • Not reversing one side of the drivetrain — the robot spins in place instead of moving forward.
  • Editing Makefile or common.mk. Do not. If the build breaks, delete bin/ and rebuild.
  • Leaving pros::delay(10) out of your while (true) loop. Without it you starve other PROS tasks and the brain behaves unpredictably.

Where this points next

L1.4 walks you through porting an existing VEXcode tank drive into this project, side by side.

💡 Reflection prompt (notebook-ready)

  • Compare the file layout of your PROS project to the layout of your old VEXcode project. Which layout do you think will scale better as your program grows, and why?

Next up: L1.4 — Porting a VEXcode tank drive to PROS.

Porting a VEXcode tank drive to PROS

Take an existing VEXcode tank drive and reproduce the same behaviour in a PROS project, including joystick deadband.

~45 min

Objective

Take an existing VEXcode tank drive and reproduce the same behaviour in a PROS project, including joystick deadband.

Concept

VEXcode C++ and PROS C++ are the same language. What differs is the library wrapped around the robot hardware. VEXcode uses the vex:: namespace and objects like vex::motor and vex::controller. PROS uses the pros:: namespace with pros::Motor and pros::Controller. Everything else — the while loop, the if statements, the arithmetic — is literally identical.

So porting a tank drive is a mechanical translation. You change the namespace, change the class names, change a few method names, and you are done. The behaviour does not change. The robot does not care which library drove the motor. The motor just spins.

There is one piece of polish you should add while you port: a deadband on the joystick. Controllers drift. A stick at rest often reads 2 or 3 instead of 0. Without a deadband you hear the drivetrain humming during a “stopped” state, and worse, you slowly roll across the pit floor when you set the controller down. A deadband says “any reading under this threshold counts as zero.” Ten per cent of stick range is a sensible starting point.
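Since PROS sticks report −127 to 127, a percentage deadband has to be converted into raw stick units. A quick sketch — the helper name is illustrative, not a PROS API:

```cpp
// Convert a percent deadband to a raw threshold for a given stick range.
// 10% of the 127-unit PROS range works out to 12 raw units (integer division).
int deadbandThreshold(int full_range, int percent) {
    return full_range * percent / 100;
}
```

`deadbandThreshold(127, 10)` gives 12 — the kind of value you would hard-code as your deadband constant in the port below.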

💡 Coding tip. PROS joystick values are −127 to 127, not −100 to 100. That matches the units that pros::Motor::move() takes, so you do not need to scale them.

Guided practice

Step 1 — Write down the VEXcode reference

Here is a typical VEXcode tank drive:

// VEXcode
vex::motor leftFront(vex::PORT1, vex::gearSetting::ratio18_1, false);
vex::motor leftBack(vex::PORT2, vex::gearSetting::ratio18_1, false);
vex::motor rightFront(vex::PORT3, vex::gearSetting::ratio18_1, true);
vex::motor rightBack(vex::PORT4, vex::gearSetting::ratio18_1, true);
vex::controller master;

int main() {
    while (true) {
        int leftStick = master.Axis3.position();   // -100..100
        int rightStick = master.Axis2.position();
        leftFront.spin(vex::forward, leftStick, vex::percent);
        leftBack.spin(vex::forward, leftStick, vex::percent);
        rightFront.spin(vex::forward, rightStick, vex::percent);
        rightBack.spin(vex::forward, rightStick, vex::percent);
        vex::wait(20, vex::msec);
    }
}

Step 2 — Translate to PROS

Here is the same behaviour in PROS, written inside opcontrol():

// PROS
#include "main.h"
#include <cstdlib>   // std::abs

void opcontrol() {
    pros::Controller master(pros::E_CONTROLLER_MASTER);
    pros::Motor left_front(1);
    pros::Motor left_back(2);
    pros::Motor right_front(3, true);   // reversed
    pros::Motor right_back(4, true);

    const int DEADBAND = 12;  // ~10% of the 127-unit stick range

    while (true) {
        int left_stick  = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);   // -127..127
        int right_stick = master.get_analog(pros::E_CONTROLLER_ANALOG_RIGHT_Y);

        if (std::abs(left_stick)  < DEADBAND) left_stick  = 0;
        if (std::abs(right_stick) < DEADBAND) right_stick = 0;

        left_front.move(left_stick);
        left_back.move(left_stick);
        right_front.move(right_stick);
        right_back.move(right_stick);

        pros::delay(10);
    }
}

🖼 images/02-vexcode-vs-pros-comparison.png Side-by-side code comparison of VEXcode and PROS tank drive

🖼 Image brief

  • Alt: Two-column comparison showing VEXcode tank drive code on the left and equivalent PROS code on the right, with colored highlights linking corresponding lines: motor declarations, joystick reads, motor commands, and delay calls
  • Source: Diagram in Figma or code-diff tool. Place VEXcode on the left, PROS on the right, with arrows or color-coded highlights connecting equivalent lines (vex::motor to pros::Motor, Axis3.position() to get_analog, etc.)
  • Caption: VEXcode (left) vs PROS (right). Same logic, different API. Color-coded lines show the one-to-one translation between namespaces and method names.

Two things to notice:

  • PROS joystick values are -127..127, not -100..100. That matches the units that pros::Motor::move() takes, so you do not need to scale them.
  • The PROS motor constructor’s second argument is bool reversed. There is no separate gear-cartridge enum — the gearset is a configurable property (set_gearing) that defaults to green (18:1).

Step 3 — Upload and test

Build and upload. Verify:

  • Both sticks at rest: motors silent, robot still.
  • Left stick forward: left side spins forward.
  • Right stick forward: right side spins forward.
  • Both sticks forward: robot drives straight.
  • Sticks opposite: robot turns.

Independent exercise

Start from your own VEXcode tank drive (or the team’s most recent one). Write a side-by-side table in your notebook with two columns: VEXcode line and PROS equivalent. Port every line. Then type the PROS version into a new opcontrol(), upload, and drive the robot around your practice field for two minutes.

Success criterion: the PROS robot drives identically to the VEXcode robot.

Common pitfalls

  • Forgetting the deadband. The robot creeps.
  • Mixing up axis numbers. On PROS, LEFT_Y is the up/down on the left stick. Verify with a brain-screen printout.
  • Not reversing one side of the drivetrain. Classic “spins in place” bug.
  • Using pros::Motor::move_velocity() instead of move(). move_velocity is RPM, not a voltage per cent — very different feel.
  • Leaving vex:: constants in your PROS code by accident. The compiler will complain; trust it.

Where this points next

L1.5 shows you how to get useful debug output out of your running robot using the PROS terminal and the brain screen.

💡 Reflection prompt (notebook-ready)

  • Your side-by-side port table is a translation dictionary. Which VEXcode idiom did not have a clean one-to-one translation, and what did you do about it?

Next up: L1.5 — The PROS terminal, printf, and brain-screen output.

The PROS terminal, printf, and brain-screen output

Print values from a running robot to the PROS terminal and the V5 brain screen, and know when each is the right tool.

~30 min

Objective

Print values from a running robot to both the PROS terminal and the V5 brain screen, and explain when each is the right tool.

Concept

Debugging on an embedded system is different from debugging a desktop program. You cannot set a breakpoint and poke around. You have to make the robot tell you what it is doing. That means adding telemetry — print statements that let you watch the robot’s state change in real time.

You have two places to print to, and they are good at different things:

  • The PROS terminal. Your machine connected over USB to the running brain. Anything you printf goes there. It can handle a firehose of data — you can print every loop if you want. The catch: you have to be tethered.
  • The brain screen. The little LCD on the V5 Brain itself. It has five lines of text and updates slowly. It is the right place for slow, glanceable state — “which auton is selected,” “is the IMU calibrated,” “current heading.” The driver can see it on the field.

Rule of thumb: brain screen for humans, terminal for programmers. If a driver needs to know it during a match, put it on the screen. If you are tuning a PID at your desk, use the terminal.

🖼 images/02-pros-terminal-brain-screen.png Split view: PROS terminal output on laptop and brain screen on V5 Brain

🖼 Image brief

  • Alt: Left side shows a laptop terminal window with rapidly scrolling printf output (y=0, y=12, y=-45). Right side shows the V5 Brain LCD screen displaying a stable heading value and tick counter.
  • Source: Composite photo or screenshot: capture the PROS terminal window in VS Code showing streaming printf output, and photograph the V5 Brain screen showing lcd::print output. Combine side by side with labels.
  • Caption: Two telemetry channels. The PROS terminal (left) streams fast data for tuning. The brain screen (right) shows slow, glanceable state for the driver.

💡 Coding tip. Trailing spaces in lcd::print format strings are not a typo — they overwrite leftover digits from a previous, longer number. Without them, “100” followed by “7” reads as “700” on the screen.
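You can see why the trailing spaces matter by simulating the screen off-robot. This sketch models an LCD line as a character buffer that only overwrites the characters you print — a simplified stand-in, not the PROS implementation:

```cpp
#include <cstring>
#include <string>

// A 20-character "LCD line". Printing copies characters into place but never
// clears what was already there -- the failure mode the tip describes.
std::string lcdLine(20, ' ');

void printAtStart(const char* text) {
    std::memcpy(&lcdLine[0], text, std::strlen(text));
}
```

Print `"Val: 100"` and then `"Val: 7"`, and the line reads `Val: 700` because the stale `00` survives; print `"Val: 7  "` with trailing spaces and the old digits are wiped.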

Guided practice

Step 1 — Brain screen output

Open main.cpp. In opcontrol(), print the controller Y-axis value every loop:

void opcontrol() {
    pros::Controller master(pros::E_CONTROLLER_MASTER);
    pros::lcd::initialize();

    while (true) {
        int y = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
        pros::lcd::print(0, "Left Y: %d    ", y);
        pros::delay(20);
    }
}

Upload and run. Move the left stick. The brain screen’s first line updates live.

Step 2 — Terminal output with printf

Add a printf call to the same loop:

while (true) {
    int y = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
    pros::lcd::print(0, "Left Y: %d    ", y);
    printf("y=%d\n", y);
    pros::delay(20);
}

Then, with the robot running, from a terminal:

pros terminal

You should see y=0 repeating, then changing as you move the stick. Press Ctrl+C to exit the terminal (the robot keeps running — you are just detaching from the output stream).

Step 3 — Choose the right tool

Add an IMU-style heading read (or fake it with a counter) and print it to the brain screen only. Meanwhile, keep printing the raw joystick Y to the terminal at full speed. This is the pattern you will use for the rest of the season: slow human telemetry on the brain, fast programmer telemetry in the terminal.

int tick = 0;
while (true) {
    int y = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
    pros::lcd::print(1, "Tick: %d    ", tick);       // slow, for you to watch
    printf("y=%d tick=%d\n", y, tick);               // fast, for later analysis
    tick++;
    pros::delay(20);
}

Independent exercise

Add a third print that counts the number of times your loop has run since the program started and prints it to line 3 of the brain screen. Run for 60 seconds. Compare the counter value to what you would expect from a 20 ms loop. Are you losing any iterations? If so, why might that be?

Success criterion: you can explain the relationship between your pros::delay value and the loop count you observed.
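
For the comparison, the expected count is one division (and note that pros::delay waits after the loop body finishes, which is one reason the real count can fall short):

```cpp
// A 60-second run with a 20 ms delay per iteration:
constexpr int runMs = 60 * 1000;
constexpr int delayMs = 20;
constexpr int expectedIterations = runMs / delayMs;
static_assert(expectedIterations == 3000, "60 s / 20 ms = 3000 iterations");
```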

Common pitfalls

  • Forgetting pros::lcd::initialize() before the first print. Prints silently fail.
  • No trailing spaces in lcd::print format strings. Leftover digits make numbers look wrong.
  • Printing too much to the brain screen. The screen can only update so fast; you starve the loop.
  • Forgetting to detach the terminal (pros terminal stays attached until you press Ctrl+C). Not harmful, but confusing.
  • Using std::cout. It works but is awkward on embedded; stick to printf.

Where this points next

L1.6 puts your whole project under version control so you stop losing work and start collaborating.

💡 Reflection prompt (notebook-ready)

  • Pick one value on your own robot that you want to be able to see during a practice session. Decide whether it belongs on the brain screen or in the terminal, and justify the choice in one sentence.

Next up: L1.6 — Git for robotics teams.

Git for robotics teams

Put your PROS project under Git, push to a shared GitHub repository, write a sensible .gitignore, and adopt a branch-and-pull-request workflow for the rest of the season.

~75 min

Objective

Have a PROS project under Git, pushed to a shared GitHub repository, with a sensible .gitignore and a branch-and-pull-request workflow you can use the rest of the season.

Concept

Version control exists because “final_v3_REAL_final.cpp” is not a workflow. It is a cry for help. Git gives you three things VEXcode could never give you:

  1. History. Every change, who made it, when, and why. When the intake stops working the day before a qualifier, you can run one command and see every commit that touched the intake file. You will find the bug in thirty seconds.
  2. Branches. Two people can work on two features at the same time without overwriting each other. When a feature is ready, you merge it in. When a feature is half-broken, you leave it on its branch until it works.
  3. Pull requests. The team reviews each other’s code before it hits the main branch. This catches bugs, spreads knowledge, and makes the whole team better at reading code.
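
For example, the "one command" that shows every commit touching the intake is a path-filtered log (src/intake.cpp is an example path — substitute your own):

```shell
# Show every commit that touched the intake source file, newest first.
git log --oneline -- src/intake.cpp
```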

The single most important habit is one repo per team, branch per feature. Main is sacred — main always builds, main always drives the robot. Features live on branches like feature/intake-state-machine or fix/heading-wraparound until they are reviewed and merged.

🖼 images/02-git-branch-workflow.png Diagram of Git branch-and-merge workflow for a robotics team

🖼 Image brief

  • Alt: Git workflow diagram showing a main branch as a horizontal line with two feature branches (feature/intake-state-machine and fix/heading-wraparound) branching off, each with commits, and merging back into main via pull requests
  • Source: Diagram in Figma. Horizontal main branch with commit dots, two feature branches diverging and converging. Label the branch points, commits, PR review step, and merge points. Use the project accent color for main.
  • Caption: The branch-per-feature workflow. Main always builds. Features live on branches until reviewed and merged.

The second most important habit is do not commit binaries. PROS generates a bin/ directory full of .o and .d files on every build. Those files are enormous, change constantly, and are regenerated from your source automatically. Committing them bloats your repo and causes merge conflicts on files nobody should ever be editing. The .gitignore file tells Git to ignore them.

💡 Coding tip. If your team is still emailing main.cpp around, freeze the next build session and make Git the only thing you set up that day. Every hour you spend unblocking the team from “who has the latest?” is an hour you could have spent tuning a PID.

Guided practice

Step 1 — Install Git and make a GitHub account

  • Install Git from git-scm.com — the page detects your environment and shows the right installer.
  • Make a GitHub account. Ask your team mentor to create a team organisation and invite everyone.
  • Run git --version to confirm the install.

Set your identity once:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"

Step 2 — Write a .gitignore

In your PROS project root, create a file called .gitignore with these contents:

# PROS build artifacts
bin/
*.o
*.d
*.elf
*.bin
*.a
hot.package.bin
cold.package.bin

# VS Code user-specific
.vscode/

# OS-generated junk files
.DS_Store
Thumbs.db

# PROS cache
.pros-project

# Editor swap files
*.swp
*~

Anything listed here Git will pretend does not exist. If you accidentally committed one of these before adding it here, run git rm --cached <file> to untrack it without deleting the local copy.

Step 3 — Initialise and push

From the project root:

git init
git add .
git status

git status should list your source files, headers, Makefile, project.pros, and your new .gitignore. It should not list anything in bin/. If it does, check your .gitignore.

git commit -m "Initial PROS project"

On GitHub, create a new empty repository under your team organisation (do not initialise it with a README — you already have files). Then:

git remote add origin https://github.com/your-team/your-repo.git
git branch -M main
git push -u origin main

Refresh the GitHub page. Your project is now in version control.

Step 4 — Branch, edit, pull-request

Here is the workflow you will use for every change the rest of the season:

git checkout -b feature/tank-deadband
# edit files
git add src/main.cpp
git commit -m "Add joystick deadband to tank drive"
git push -u origin feature/tank-deadband

Then on GitHub, click “Compare & pull request.” Write a one-sentence description. Ask a teammate to review it. When they approve, click “Merge pull request.” Then locally:

git checkout main
git pull

You are now back on main with the latest code, ready to start the next feature.

Step 5 — Resolving a conflict

Sometimes two branches edit the same line. Git will tell you when you try to merge:

Auto-merging src/main.cpp
CONFLICT (content): Merge conflict in src/main.cpp

Open the file. You will see markers:

<<<<<<< HEAD
left_motor.move(left_stick);
=======
leftMotor.move(leftStick);
>>>>>>> feature/rename-motors

Pick the version you want (or merge them by hand), delete the marker lines, save, and commit. Conflicts are not a disaster. They are Git telling you “I cannot make this decision for you.”

Independent exercise

Take your ported tank drive from L1.4, put it in a repo, push it to GitHub, then create a branch called feature/exponential-curve, modify the joystick reading to square the input (preserving sign), and open a pull request against main. Have a teammate review it. Merge it.

Success criterion: GitHub’s pull request page shows your change merged into main, with at least one reviewer’s approval.
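
If you want to sanity-check the curve math before opening the pull request, the signed square is pure arithmetic (integer version shown; the helper name is illustrative):

```cpp
#include <cstdlib>

// Square a joystick value while preserving its sign, renormalised so that
// full deflection (±127) still maps to ±127.
inline int signedSquare(int raw) {
    int sign = (raw < 0) ? -1 : 1;
    return sign * (raw * raw) / 127;
}

// signedSquare(127) -> 127, signedSquare(-127) -> -127, signedSquare(64) -> 32
```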

Common pitfalls

  • Committing bin/ by accident — your first push is huge and the repo is permanently bloated. Fix it with git rm --cached -r bin/ and a new commit.
  • Force-pushing to main. Do not. Ever. It rewrites history and your teammates lose work.
  • Committing directly to main. Start every change on a branch, even tiny ones.
  • Not pulling before you start a new branch. You will build on stale code.
  • Giant commits. One logical change per commit; the history is only useful if you can read it.

Where this points next

Tier 1 is complete. In Tier 2 you start carving your growing code into subsystems, writing your first Drivetrain class, and adding continuous integration so a broken build is caught before it reaches the robot.

💡 Reflection prompt (notebook-ready)

  • Describe, in three sentences, the branching workflow you and your team will use for the rest of the season. Be specific about who reviews pull requests and when they get merged.

Next up: L2.1 — Project structure for a competitive season.

Project structure for a competitive season

One .cpp plus one .hpp per subsystem, a Robot facade that owns everything, and a main.cpp that does nothing but wire entry points.

~60 min

Objective

Lay out a PROS project with header/source splits per subsystem, a Robot facade that owns the subsystems, and a main.cpp that does nothing but wire entry points to subsystem methods — and defend this structure against the pull of the single-file approach.

Concept

Your first PROS project put everything in main.cpp. That is correct for week one. It stops being correct the moment your code crosses 300 lines, and it is actively hostile by 500. Here is what fails:

  • You cannot find anything. Search stops being enough; you start scrolling.
  • Two people cannot edit the file at once without merge conflicts.
  • The intake logic and the drivetrain logic live next to each other and share nothing but whitespace. Change one, break the other.
  • You cannot unit-test, reuse, or rip out a subsystem without rewriting half the file.

The fix is boring and effective: one .cpp plus one .hpp per subsystem, a Robot facade that owns every subsystem, and a main.cpp that only wires the framework entry points to the facade. No logic in main.cpp. None.

The header/source split

Every subsystem has two files. The header (.hpp) declares what the subsystem is and what it exposes. The source (.cpp) defines how. Other files include the header, never the source.

// include/drivetrain.hpp
#pragma once
#include "lemlib/api.hpp"

class Drivetrain {
public:
    Drivetrain(lemlib::Chassis* chassis);
    void tankDrive(double leftVolts, double rightVolts);
    void straightlineTo(double x, double y, int timeoutMs);
    lemlib::Pose getPose() const;
private:
    lemlib::Chassis* chassis;
};
// src/drivetrain.cpp
#include "drivetrain.hpp"

Drivetrain::Drivetrain(lemlib::Chassis* chassis) : chassis(chassis) {}

void Drivetrain::tankDrive(double leftVolts, double rightVolts) {
    // implementation
}

The pattern repeats for every subsystem: intake, lift, clamp, indexer, whatever your robot has.

The Robot facade

All the subsystems get owned by a single Robot class. Anywhere in the code that needs the drivetrain gets it through the facade, not through a free-floating global.

// include/robot.hpp
#pragma once
#include "drivetrain.hpp"
#include "intake.hpp"

class Robot {
public:
    Robot();
    Drivetrain& drivetrain();
    Intake& intake();
private:
    Drivetrain drivetrain_;
    Intake intake_;
};

extern Robot robot;
// src/robot.cpp
#include "robot.hpp"

extern lemlib::Chassis chassis;  // defined in main.cpp

Robot robot;

Robot::Robot()
    : drivetrain_(&chassis), intake_(...) {}

Drivetrain& Robot::drivetrain() { return drivetrain_; }
Intake& Robot::intake() { return intake_; }

Now everywhere else in the codebase: robot.drivetrain().straightlineTo(24, 48, 3000);. The facade is the single point of truth. Change the drivetrain implementation, and every caller stays the same.

main.cpp as pure wiring

With the facade in place, main.cpp has almost no logic. It exists only to hold the hardware globals, construct the LemLib chassis, and route the PROS entry points to subsystem methods.

// src/main.cpp
#include "main.h"
#include "robot.hpp"
#include "autons.hpp"

pros::MotorGroup leftMotors({-1, -2, -3}, pros::MotorGearset::blue);
pros::MotorGroup rightMotors({4, 5, 6}, pros::MotorGearset::blue);
pros::Imu imu(10);

lemlib::Chassis chassis(/* ... */);

void initialize() {
    chassis.calibrate();
}

void autonomous() {
    autons::redLeft();
}

void opcontrol() {
    pros::Controller master(pros::E_CONTROLLER_MASTER);

    while (true) {
        int l = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
        int r = master.get_analog(pros::E_CONTROLLER_ANALOG_RIGHT_Y);
        robot.drivetrain().tankDrive((l / 127.0) * 12000, (r / 127.0) * 12000);
        pros::delay(10);
    }
}

Each entry point is a few lines at most. If opcontrol() is growing past 40 lines, move the body into an opcontrol.cpp file. Same with autonomous() — autons live in autons.cpp/autons.hpp as free functions in an autons namespace.

A reference folder layout

include/
  robot.hpp
  drivetrain.hpp
  intake.hpp
  opcontrol.hpp
  autons.hpp
  skills.hpp
src/
  main.cpp
  robot.cpp
  drivetrain.cpp
  intake.cpp
  opcontrol.cpp
  autons.cpp
  skills.cpp
🖼 images/02-season-project-structure.png Folder tree diagram of a well-organized PROS project for a full season

🖼 Image brief

  • Alt: Folder tree diagram showing include/ with robot.hpp, drivetrain.hpp, intake.hpp, opcontrol.hpp, autons.hpp, skills.hpp and src/ with matching .cpp files. Arrows show which files include which headers. main.cpp is highlighted as the thin wiring layer.
  • Source: Diagram in Figma. Draw the folder tree with file icons, color-code header/source pairs, and add dependency arrows from main.cpp to robot.hpp and from robot.hpp to each subsystem header.
  • Caption: A season-ready project layout. Each subsystem is a header/source pair. The Robot facade owns them all, and main.cpp is pure wiring.

This scales. Add a lift.hpp/lift.cpp pair for a new subsystem; nothing else changes. Add a new auton routine to autons.cpp and call it from main.cpp's autonomous(). No file grows without bound.

💡 Coding tip. If main.cpp is past 500 lines, you are overdue for a split. A week-two commit that moves opcontrol into its own file is ten minutes. A week-ten commit is a day.

Guided practice

  1. Create the folders. From your project root, ensure include/ and src/ exist (they should from pros c new).
  2. Extract the drivetrain. Create include/drivetrain.hpp and src/drivetrain.cpp using the templates above. Move any drivetrain code out of main.cpp into the new files.
  3. Create the Robot facade. Create include/robot.hpp and src/robot.cpp. Declare one subsystem to start — the drivetrain you just extracted.
  4. Thin out main.cpp. For every piece of logic that could live in a subsystem method, move it. main.cpp should end the session with roughly 50 lines.
  5. Build and verify. pros mu. The robot should behave identically to before.

Independent exercise

Extract one more subsystem from main.cpp into its own .hpp/.cpp pair. Intake is usually the easiest first target. Add it to the Robot facade. Confirm the robot still works identically.

Success criterion: your main.cpp is under 100 lines, you have at least two subsystem classes, and the compile still produces a working upload.

Common pitfalls

  • Including .cpp files instead of .hpp files. Never include a source file.
  • Creating a subsystem with no purpose because you think you might need it. Subsystems follow hardware; if there is no hardware, there is no subsystem.
  • Putting logic in main.cpp “just for now.” Just for now lives forever.
  • Making the Robot facade singleton-enforced with elaborate patterns. An extern Robot robot; in the header and a single definition in the source is enough.
  • Declaring everything public. Private fields force you to go through the API, which is the point.

Where this points next

L2.2 turns the drivetrain stub into a real class that wraps pros::MotorGroup, with voltage/velocity commands, brake modes, and a tank-drive opcontrol binding.

💡 Reflection prompt (notebook-ready)

  • List your subsystems as they exist today. For each one, write a one-sentence description of what it does. If any sentence contains “and” more than twice, the subsystem is doing too much — plan to split it.
  • Which file is currently the largest in your project? What would it look like to cut that file in half?

Next up: L2.2 — The Drivetrain class.

The Drivetrain class

Wrap pros::MotorGroup in a class that exposes voltage and velocity commands, configures brake modes and current limits, and binds a tank-drive opcontrol method.

~45 min

Objective

Wrap pros::MotorGroup in a Drivetrain class that exposes voltage and velocity commands, configures brake modes and current limits, and binds a tank-drive opcontrol method the rest of the code can call.

Concept

A Drivetrain class is a thin wrapper around two pros::MotorGroup objects — one for the left side, one for the right — with a well-defined set of operations the rest of the robot is allowed to perform. The point is not to hide the motors; the point is to guarantee that every motor command goes through one place, so you can change the drivetrain’s behaviour without hunting through a dozen files.

The class exposes a small surface:

  • Voltage commands — raw millivolt control. move_voltage(mV) on each side. This is what PIDs use and what tank drive uses.
  • Velocity commands — target-RPM control. move_velocity(rpm). LemLib uses voltage underneath; you rarely need velocity unless you are writing your own closed-loop controller.
  • Brake modes — what the motors do when commanded to zero. BRAKE_COAST (drift), BRAKE_BRAKE (resist rotation), BRAKE_HOLD (actively hold position).
  • Current limits — how much current the motors can draw before cutting back. Protects the breaker. 2500 mA is the VEX default.
🖼 images/02-drivetrain-motor-layout.png Top-down diagram of drivetrain motor layout showing left and right motor groups

🖼 Image brief

  • Alt: Top-down schematic of a six-motor drivetrain. Three motors on the left side (ports 1, 2, 3 reversed) grouped as leftMotors, three on the right (ports 4, 5, 6) grouped as rightMotors. Arrows show forward spin direction for each side.
  • Source: Diagram in Figma. Top-down rectangle representing the chassis, with motor icons on each side. Label port numbers, group names, and spin directions. Use red for left group, blue for right group.
  • Caption: A typical six-motor drivetrain layout. The Drivetrain class wraps both MotorGroups and guarantees every command goes through one place.

Voltage vs velocity

Voltage is “apply this many millivolts to the motor and let the load determine the RPM.” Velocity is “use an internal PID to hold this RPM regardless of load.” For a drivetrain, voltage is what you want — putting another PID underneath your PID adds a second dynamic that fights the first. Voltage is direct. Voltage is predictable. Velocity shines for things like a flywheel where you want a constant RPM regardless of disturbances.

Brake modes in practice

  • pros::E_MOTOR_BRAKE_COAST — driver control. The motors release when the joystick is released. The robot glides to a stop.
  • pros::E_MOTOR_BRAKE_BRAKE — autonomous. When a PID commands zero voltage at the end of a motion, the motors resist rotation and the robot stops promptly.
  • pros::E_MOTOR_BRAKE_HOLD — holding position under external force. Rare on drivetrains; common on lifts and clamps.
🔧 Build tip. Set brake mode in the constructor and switch it between auton and opcontrol. Coast for driving, brake for autonomous.

Guided practice

The header

// include/drivetrain.hpp
#pragma once
#include "pros/motor_group.hpp"

class Drivetrain {
public:
    Drivetrain(pros::MotorGroup* left, pros::MotorGroup* right);
    void tankDrive(double leftVolts, double rightVolts);
    void moveVoltage(double leftVolts, double rightVolts);
    void stop();
    void setBrakeMode(pros::motor_brake_mode_e_t mode);
    void setCurrentLimit(int mA);
private:
    pros::MotorGroup* left;
    pros::MotorGroup* right;
};

The source

// src/drivetrain.cpp
#include "drivetrain.hpp"
#include <algorithm>

Drivetrain::Drivetrain(pros::MotorGroup* left, pros::MotorGroup* right)
    : left(left), right(right) {
    left->set_brake_mode_all(pros::E_MOTOR_BRAKE_COAST);
    right->set_brake_mode_all(pros::E_MOTOR_BRAKE_COAST);
    left->set_current_limit_all(2500);
    right->set_current_limit_all(2500);
}

void Drivetrain::tankDrive(double leftVolts, double rightVolts) {
    leftVolts  = std::clamp(leftVolts,  -12000.0, 12000.0);
    rightVolts = std::clamp(rightVolts, -12000.0, 12000.0);
    left->move_voltage(leftVolts);
    right->move_voltage(rightVolts);
}

void Drivetrain::stop() {
    left->move_voltage(0);
    right->move_voltage(0);
}

void Drivetrain::setBrakeMode(pros::motor_brake_mode_e_t mode) {
    left->set_brake_mode_all(mode);
    right->set_brake_mode_all(mode);
}

Wiring into opcontrol

void opcontrol() {
    robot.drivetrain().setBrakeMode(pros::E_MOTOR_BRAKE_COAST);
    pros::Controller master(pros::E_CONTROLLER_MASTER);

    while (true) {
        int l = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);
        int r = master.get_analog(pros::E_CONTROLLER_ANALOG_RIGHT_Y);

        if (std::abs(l) < 5) l = 0;
        if (std::abs(r) < 5) r = 0;

        double leftVolts  = (l / 127.0) * 12000.0;
        double rightVolts = (r / 127.0) * 12000.0;

        robot.drivetrain().tankDrive(leftVolts, rightVolts);
        pros::delay(10);
    }
}

Raw joystick ints become millivolts before hitting the drivetrain method. The class never sees joystick units — it only knows about volts.
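
Because that mapping is pure arithmetic, it can be factored into a small helper and checked off-robot before it ever touches a motor (the function name is illustrative, not part of PROS):

```cpp
#include <cstdlib>

// Map a raw joystick reading (-127..127) to drivetrain millivolts
// (-12000..12000), zeroing anything inside the deadband.
inline double joystickToMillivolts(int raw, int deadband = 5) {
    if (std::abs(raw) < deadband) return 0.0;
    return (raw / 127.0) * 12000.0;
}
```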

Switching to brake for auton

void autonomous() {
    robot.drivetrain().setBrakeMode(pros::E_MOTOR_BRAKE_BRAKE);
    // run the auton
}

Independent exercise

Add two methods to your Drivetrain class:

  1. double getAverageEncoderInches() const; — returns the average of left and right encoder positions converted to inches.
  2. void resetEncoders(); — zeros both sides’ encoders for a fresh motion.

Use them to write a dead-simple encoder-based drive that drives forward until the average encoder reads 24″ and then stops.

Success criterion: the method compiles, the encoder-based drive runs, and the robot stops roughly at the right distance.

Common pitfalls

  • Exposing pros::MotorGroup* as a public getter. Every caller now bypasses your API.
  • Mixing voltage and velocity commands in the same movement. Pick one per motion.
  • Forgetting to clamp voltage. A math error can send 24000 mV to move_voltage and PROS ignores it silently.
  • Setting the current limit too low. 1500 mA makes the drivetrain sluggish under load.
  • Keeping brake mode on coast during auton. Every motion will overshoot.

Where this points next

L2.3 reads the inertial sensor into the same architecture — a clean, tested heading source is the foundation for every PID and every odometry call downstream.

💡 Reflection prompt (notebook-ready)

  • Describe, in your own words, why the Drivetrain class hides motor objects rather than exposing them. Give one concrete scenario where that encapsulation would save you from a bug.
  • What brake mode does your robot use in auton, and why? What happens if you swap to coast and run the same auton?

Next up: L2.3 — Inertial sensor and first heading readings.

Inertial sensor and first heading readings

Calibrate a V5 inertial sensor, read its heading, print it everywhere, and handle the 0°/360° wrap-around correctly.

~45 min

Objective

Calibrate a V5 inertial sensor, read its heading into your code, print it to both the terminal and the brain screen, and handle the 0°/360° wrap-around correctly in your own maths.

Concept

Heading is the most important number on your robot. Every PID turn reads it, every odometry update uses it, every auton that needs to face a direction depends on it. A wrong heading corrupts every downstream calculation. The inertial sensor — the thing that produces heading — has to be rock-solid before you write a single line of control code.

Step one is mounting. The IMU must be flat, low, on a stiff crossbrace, and not near a vibrating subsystem. Software never fixes hardware.

🖼 images/02-imu-mounting.png Photo/diagram of correct inertial sensor mounting on a VEX robot chassis

🖼 Image brief

  • Alt: Annotated photo of a V5 Inertial Sensor mounted flat on a steel crossbrace near the center of a VEX chassis. Callouts highlight: flat orientation, stiff mounting point, low position, and distance from vibrating mechanisms like intakes.
  • Source: Annotated photo of a competition robot. Photograph the IMU from above, add callout arrows pointing to the flat mounting, the rigid crossbrace, and the distance from the nearest vibrating subsystem.
  • Caption: Correct IMU mounting: flat, low, on a stiff crossbrace, away from vibrating subsystems. Software never fixes hardware.

Step two is calibration. Every time the robot powers on, the IMU measures its gyroscope bias and the direction of gravity. Calibration takes about two seconds and must happen with the robot completely stationary. If the robot is bumped during calibration, the IMU starts with a biased reading and drifts forever. Put calibration in initialize(), and do not let any motor run until is_calibrating() returns false.

Step three is reading. The IMU exposes get_heading() (0 to 360°) and get_rotation() (unwrapped, unbounded). Both are useful:

  • get_heading() — use for absolute orientation. “Face 90°” turns are easiest in this form.
  • get_rotation() — use when you need to know how many degrees the robot has turned relative to a starting point, without wrap-around confusion.

Wrap-around is the trap. If the robot is at 350° and you want it to turn to 10°, the naive error = target − current gives you 10 − 350 = −340°. The PID thinks it needs to spin backward 340°. In reality the shortest path is +20°. You must wrap the error into the range [−180, 180] before feeding it to the controller.

💡 Coding tip. Never compute heading error without wrapping. Make this a reflex. The first 359°→1° transition will expose the bug.

Guided practice

Step 1 — declare the sensor

pros::Imu imu(10);  // port 10 — match your wiring

Step 2 — calibrate in initialise

void initialize() {
    pros::lcd::initialize();
    imu.reset();  // starts calibration
    while (imu.is_calibrating()) {
        pros::lcd::print(0, "Calibrating IMU...");
        pros::delay(10);
    }
    pros::lcd::print(0, "IMU ready          ");  // trailing spaces clear the old text
}

Step 3 — print the heading everywhere

void headingMonitor(void*) {
    while (true) {
        double h = imu.get_heading();
        printf("heading=%.2f\n", h);
        pros::lcd::print(2, "H: %.2f   ", h);
        pros::delay(50);
    }
}

void initialize() {
    pros::lcd::initialize();
    imu.reset();
    while (imu.is_calibrating()) pros::delay(10);
    pros::Task monitor(headingMonitor);
}

Step 4 — the wrap-around helper

// include/util.hpp
#pragma once

inline double wrapError(double error) {
    while (error > 180.0)  error -= 360.0;
    while (error < -180.0) error += 360.0;
    return error;
}

Use it in the PID loop: double error = wrapError(target - imu.get_heading());
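
Because the helper is pure arithmetic, you can verify it off-robot before trusting it inside a PID — including the 350° to 10° case from above:

```cpp
// Same helper as above: wrap a heading error into [-180, 180] so the
// controller always takes the short way around.
inline double wrapError(double error) {
    while (error > 180.0)  error -= 360.0;
    while (error < -180.0) error += 360.0;
    return error;
}

// wrapError(10.0 - 350.0) -> 20.0   (turn +20°, not -340°)
// wrapError(350.0 - 10.0) -> -20.0  (the reverse short path)
```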

Independent exercise

Write a small diagnostic routine that runs in opcontrol() on a button press. The routine should:

  1. Record the current heading as startHeading.
  2. Print “Rotate me 90° clockwise and press A again” to the brain screen.
  3. On the second press, read the current heading and compute wrapError(current − startHeading).
  4. Print the result.

Test at five different starting headings (0°, 45°, 179°, 181°, 350°). The measured rotation should always be close to 90°.

Success criterion: at five different starting headings, the measured rotation is within 2° of 90°.

Common pitfalls

  • Running motors during is_calibrating(). Any vibration biases the sensor.
  • Not waiting for calibration in initialize(). The first motion reads a garbage heading.
  • Using get_heading() where get_rotation() is needed.
  • Forgetting wrapError in PID. The first 359°→1° transition exposes the bug.
  • Assuming a mounted-but-flexing IMU is “good enough.” Software never fixes hardware.

Where this points next

L2.4 ports the Tier 0 sensor lessons (distance, rotation) into PROS idiom so your subsystem classes have a full set of sensor reads to work with.

💡 Reflection prompt (notebook-ready)

  • Why does the IMU have to calibrate with the robot stationary? What would a biased calibration do to a 10-second auton?
  • Describe, in your own words, why heading error must be wrapped to [−180, 180]. Give a specific example where not wrapping breaks the robot.

Next up: L2.4 — Rotation and distance sensors in PROS.

Rotation and distance sensors in PROS

Declare, configure, and read from a V5 rotation sensor and a V5 distance sensor in PROS, convert raw readings to meaningful units, and wrap each into a clean subsystem method.

~45 min

Objective

Declare, configure, and read from a V5 rotation sensor and a V5 distance sensor in PROS, convert raw readings to meaningful units, and wrap each into a clean subsystem method.

Concept

You already know how to use these sensors from the Tier 0 refresh — the conceptual content is the same. The difference is syntax. PROS exposes every sensor as a C++ class with methods. You declare the sensor with its port number and call methods on it.

Rotation sensor

The V5 rotation sensor is a rotary encoder with 0.01° resolution that plugs into a smart port. It is the sensor you put behind a tracking wheel, on a lift pivot, on a flywheel — anything rotational that is not a motor.

#include "pros/rotation.hpp"

pros::Rotation liftSensor(11);  // port 11

Core methods:

  • get_angle() — returns the current angle in centidegrees (0–36000 for one full rotation). Divide by 100.0 for degrees.
  • get_position() — returns the total rotation in centidegrees, unwrapped. Useful for tracking wheels.
  • set_position(value) — zero or set the position.
  • reset() — zeros the position.
  • set_reversed(bool) — flips the direction without rewiring.

All values are in centidegrees. Always convert to degrees (÷100) or to inches (via wheel circumference) before handing the number to any other code.

Distance sensor

#include "pros/distance.hpp"

pros::Distance frontDistance(12);

Core methods:

  • get() — returns distance in millimetres. Range is roughly 20–2000 mm.
  • get_confidence() — returns 0–63. Below about 20, do not trust the number.
  • get_object_size() — returns the apparent size of the reflecting object.

The sensor is cheap-but-noisy at range. Always check confidence before using a reading for anything that matters.

Unit conversions

Your subsystem code should never hold raw sensor units. Convert at the sensor boundary.

// Rotation to inches of linear travel (tracking wheel)
double travelInches = (liftSensor.get_position() / 100.0)  // degrees
                    / 360.0                                // revolutions
                    * (M_PI * wheelDiameterInches);        // inches per rev

// Distance sensor millimetres to inches
double distanceInches = frontDistance.get() / 25.4;
📐 Engineering tip. These conversions go inside a method on the subsystem class, not scattered through the rest of the code. A value in centidegrees in one place and degrees in another is a unit bug waiting to bite you.

Guided practice

A lift subsystem with a rotation sensor

// include/lift.hpp
#pragma once
#include "pros/motors.hpp"
#include "pros/rotation.hpp"

class Lift {
public:
    Lift(pros::Motor* motor, pros::Rotation* sensor);
    void moveVoltage(int mV);
    double getAngleDegrees() const;
    void tareAngle();
private:
    pros::Motor* motor;
    pros::Rotation* sensor;
};
// src/lift.cpp
#include "lift.hpp"

Lift::Lift(pros::Motor* motor, pros::Rotation* sensor)
    : motor(motor), sensor(sensor) {
    sensor->reset();
}

void Lift::moveVoltage(int mV) { motor->move_voltage(mV); }

double Lift::getAngleDegrees() const {
    return sensor->get_position() / 100.0;
}

void Lift::tareAngle() { sensor->reset(); }

Callers see lift.getAngleDegrees() — they never touch centidegrees.

A distance-sensor wall-finder

bool isNearWall(pros::Distance& sensor, double maxInches) {
    if (sensor.get_confidence() < 20) return false;
    double inches = sensor.get() / 25.4;
    return inches < maxInches;
}

while (!isNearWall(frontDistance, 4.0)) {
    robot.drivetrain().moveVoltage(4000, 4000);
    pros::delay(10);
}
robot.drivetrain().stop();

Independent exercise

Pick one rotation sensor and one distance sensor on your own robot. For each:

  1. Wire it through a subsystem class — never directly accessed from opcontrol or autonomous.
  2. Expose a single method returning a meaningful unit (degrees, inches).
  3. Print the value to the terminal at 10 Hz during opcontrol().
  4. Move the sensor and confirm the value changes smoothly.

Success criterion: the values printed change continuously and correctly, and the main loop is not reading the sensor directly.

Common pitfalls

  • Forgetting that the rotation sensor returns centidegrees. A silent 100× error.
  • Trusting the distance sensor at max range. Low confidence readings are not “2000 mm” — they are “I have no idea.”
  • Exposing the raw pros::Rotation* through a subsystem getter. Unit conversions leak everywhere.
  • Calling sensor.reset() every loop iteration instead of once in initialize.
  • Using a distance sensor aimed at a moving mechanism and interpreting the noise as real change.

Where this points next

L2.5 writes your first auton — deliberately with timed motions and zero sensor feedback — so you can feel the drift and understand exactly why every downstream lesson exists.

📐 Reflection prompt (notebook-ready)

  • Why do we convert sensor units at the subsystem boundary rather than in the caller? Give one example of a bug that this convention prevents.
  • At what distance does your distance sensor’s reading start to get noisy? What will that mean for its usefulness in auton?

Next up: L2.5 — Your first real auton (and why it is terrible).

Your first real auton (and why it is terrible)

Write a deliberately fragile timed auton, measure its run-to-run variance across ten trials, and articulate in concrete numbers why sensor-based feedback control exists.

~45 min

Objective

Write a deliberately fragile timed auton, measure its run-to-run variance across ten trials, and articulate in concrete numbers why sensor-based feedback control exists.

Concept

This lesson is the motivating failure. You are going to write the worst kind of auton — raw voltage commands for raw time intervals — and then run it ten times and watch it fail differently each time. The failure is the point. Everything in Tier 3 and Tier 4 is a response to the specific way this kind of code falls apart.

A timed auton looks like this:

void autonomous() {
    robot.drivetrain().moveVoltage(10000, 10000);
    pros::delay(1000);
    robot.drivetrain().stop();
}

Drive at 10 volts for one second. The code is trivial. The robot will move. It will also do something different the next time, and the reasons are:

  • Battery voltage droop. A fresh battery pushes harder than a half-empty one.
  • Motor temperature. Cold motors are more efficient than hot ones. Your first run is faster than your fiftieth.
  • Starting conditions. A robot pushed into its starting position at 88° vs 92° drifts to different endpoints.
  • Tile friction variations. The far end of the field has slightly different grip than where you practise.
  • Load. Anything hanging off the robot changes mass distribution and therefore acceleration.

A timed auton is open-loop — it commands an action without any measurement of whether the action is happening. Open-loop control on a mechanical system with this many disturbances is structurally broken. The fix is feedback: a sensor that measures where the robot actually is, a controller that compares that to where you want it to be, and a command that updates every 10 milliseconds. That is PID. That is Tier 3.
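The core of that feedback idea fits in one function. This is a toy sketch, not yet the full PID of Tier 3: measure, compare, command in proportion to the error, clamp.

```cpp
#include <algorithm>

// One feedback update: convert "how far off am I" into a clamped command.
// kP and the 12000 mV clamp are illustrative values, not tuned constants.
double feedbackCommand(double target, double current, double kP) {
    double error = target - current;                 // measure and compare
    double command = kP * error;                     // command scales with error
    return std::clamp(command, -12000.0, 12000.0);   // V5 motors take +/-12000 mV
}
```

Run this every 10 ms with a fresh sensor reading and the command shrinks as the robot approaches the target. That shrinking is what open-loop code can never do.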

🖼 images/02-timed-auton-field-diagram.png Field diagram showing intended vs actual paths of a timed auton across multiple trials

🖼 Image brief

  • Alt: VEX field diagram (top-down view) showing one intended autonomous path as a bold line and five actual paths from different trials as scattered, diverging lines. The endpoints are spread across a large area, illustrating run-to-run variance.
  • Source: Diagram in Figma. Draw a half-field top-down with the starting position marked. Show the intended path as a solid bold line and five trial paths as dashed lines that diverge progressively, ending in scattered endpoints with position markers.
  • Caption: Five runs of the same timed auton. Same code, same starting position, five different endpoints. Open-loop control cannot correct for battery voltage, motor temperature, or tile friction.
⚡ Competition tip. But first you have to feel the problem. Write the timed auton. Run it ten times. Measure the variance. Write the number down.

Guided practice

Step 1 — write the auton

void autonomous() {
    robot.drivetrain().setBrakeMode(pros::E_MOTOR_BRAKE_BRAKE);

    // forward
    robot.drivetrain().moveVoltage(10000, 10000);
    pros::delay(1500);
    robot.drivetrain().stop();
    pros::delay(200);

    // turn
    robot.drivetrain().moveVoltage(8000, -8000);
    pros::delay(500);
    robot.drivetrain().stop();
    pros::delay(200);

    // forward again
    robot.drivetrain().moveVoltage(10000, 10000);
    pros::delay(1000);
    robot.drivetrain().stop();
}

Step 2 — set up the measurement

Put the robot at the same starting pose every time — use tape on the tiles to mark its starting position and heading. Pick an intended endpoint and mark it too.

Step 3 — run ten trials

Run the auton ten times. Record, in your notebook:

  • Final X position (inches from start, measured with a tape).
  • Final Y position (inches from start).
  • Final heading (degrees, estimated visually).
  • Any subjective observations — did the robot seem faster? Slower? Drift more?

Step 4 — compute the variance

From your ten trials, compute range of final X (max − min), range of final Y, and range of final heading. These three numbers are the uncertainty of your current auton.
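If you type the ten endpoint measurements into a vector, the range computation is one pass. A small helper, assuming you log the trials in code rather than on paper:

```cpp
#include <algorithm>
#include <vector>

// Range (max - min) of a set of trial measurements: the uncertainty
// number this lesson asks you to write down.
double range(const std::vector<double>& trials) {
    auto [lo, hi] = std::minmax_element(trials.begin(), trials.end());
    return *hi - *lo;
}
```

Compute it once each for final X, final Y, and final heading.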

Independent exercise

Modify the auton to use slightly different voltages. Run ten more trials and measure the new variance. Note whether the variance is smaller, larger, or about the same. The answer is “about the same, within noise” — changing the constant does not fix the open-loop problem.

Write a one-paragraph prediction of how much this variance would shrink with a PID controller reading the IMU. You will revisit this prediction after Tier 3.

Success criterion: you have written, in specific numbers, the X/Y/heading variance of your open-loop auton, and you have a written prediction for how PID will affect it.

Common pitfalls

  • Running five trials and declaring the variance “acceptable.” Five is not enough; run ten.
  • Resetting the robot imprecisely between trials. You are measuring auton variance, not starting-position variance.
  • Tweaking the constants until one run lands on target and calling it done. That run is a lucky accident.
  • Believing the variance is caused by “the code being wrong.” The code is doing exactly what you told it to do. The world is wrong — open-loop cannot see the world.
  • Skipping this lesson because “I already know timed autons are bad.” Feel the number. The number is the lesson.

Where this points next

L2.6 puts a heading-lock assist into opcontrol using a PID loop — the same kind of controller you will generalise in Tier 3 — so driver control becomes less fragile.

⚡ Reflection prompt (notebook-ready)

  • Which of the five disturbances listed above did you observe most clearly across your ten trials? What did you see?
  • Your competition window is a specific physical region on the field. How much of that region does your current auton cover in the worst case? Is it enough?

Next up: L2.6 — Driver control ergonomics.

Driver control ergonomics

Implement joystick deadbands, exponential stick curves, button-held macros, and a one-button heading-lock assist that holds the robot pointed in a fixed direction using a heading PID.

~60 min

Objective

Implement joystick deadbands, exponential stick curves, button-held macros, and a one-button heading-lock assist that holds the robot pointed in a fixed direction using a heading PID.

Concept

Your driver controls matter more than you think. A tank drive wired raw-stick-to-raw-voltage is usable, but it is not comfortable. Small stick wiggles become small robot wiggles. A full push gives full speed, but so does a 60% push — the driver has almost no resolution at low speeds. Every top team puts effort into driver ergonomics.

Deadbands

A V5 joystick at rest does not read exactly 0. It reads 2, −1, 3. The robot creeps when the driver is not touching the sticks.

int stick = master.get_analog(ANALOG_LEFT_Y);
if (std::abs(stick) < 5) stick = 0;
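Both sticks, and the heading-lock branch later in this lesson, need the same check, so it is worth a tiny helper. A sketch; the threshold of 5 matches the snippet above, but tune it to your controller:

```cpp
#include <cstdlib>

// Zero out stick values inside the deadband so the robot cannot creep.
int applyDeadband(int raw, int threshold = 5) {
    return (std::abs(raw) < threshold) ? 0 : raw;
}
```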

Exponential curves

A linear stick maps 50% push to 50% voltage. An exponential curve reshapes the stick so small pushes produce very small outputs and full pushes produce full output.

double stickCurve(int raw) {
    double x = raw / 127.0;
    double curved = x * x * x;   // cubic
    return curved * 12000.0;     // millivolts
}
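Cubing preserves the stick's sign automatically. To experiment with curve strength, a parametric version needs std::copysign so that even exponents do not lose the sign. A sketch, with the exponent as a tunable:

```cpp
#include <cmath>

// Power curve with adjustable strength. exponent = 1 is linear,
// 3 matches the cubic above; copysign keeps negative sticks negative.
double stickCurvePow(int raw, double exponent) {
    double x = raw / 127.0;
    double curved = std::copysign(std::pow(std::fabs(x), exponent), x);
    return curved * 12000.0;  // millivolts
}
```

A full push still gives full output at any exponent; only the low-speed resolution changes.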

Button-held macros

if (master.get_digital(DIGITAL_R1)) {
    robot.intake().move(12000);
} else if (master.get_digital(DIGITAL_R2)) {
    robot.intake().move(-12000);
} else {
    robot.intake().brake();
}

Toggled macros

static bool clampDown = false;
if (master.get_digital_new_press(DIGITAL_L1)) {
    clampDown = !clampDown;
    robot.clamp().set(clampDown);
}

Heading-lock assist

When the driver presses a button, the robot records its current heading and enters an “assisted” state where it actively holds that heading using a PID, regardless of disturbances.

bool headingLockOn = false;
double lockedHeading = 0;

void opcontrol() {
    while (true) {
        if (master.get_digital_new_press(DIGITAL_X)) {
            headingLockOn = !headingLockOn;
            if (headingLockOn) lockedHeading = imu.get_heading();
        }

        int fwd = master.get_analog(ANALOG_LEFT_Y);
        if (std::abs(fwd) < 5) fwd = 0;
        double fwdVolts = (fwd / 127.0) * 12000.0;

        double turnVolts;
        if (headingLockOn) {
            double error = wrapError(lockedHeading - imu.get_heading());
            const double kP = 150;
            turnVolts = kP * error;
            if (turnVolts > 6000)  turnVolts = 6000;
            if (turnVolts < -6000) turnVolts = -6000;
        } else {
            int turn = master.get_analog(ANALOG_RIGHT_X);
            if (std::abs(turn) < 5) turn = 0;
            turnVolts = (turn / 127.0) * 8000.0;
        }

        robot.drivetrain().tankDrive(fwdVolts + turnVolts, fwdVolts - turnVolts);
        pros::delay(10);
    }
}
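The loop above calls wrapError, which this lesson never defines. A minimal sketch: wrap any angle difference into [−180, 180] so the robot always takes the short way around.

```cpp
// Wrap a heading error into [-180, 180]. Without this, an error of 350
// degrees commands a near-full rotation instead of a -10 degree nudge.
double wrapError(double error) {
    while (error > 180.0)  error -= 360.0;
    while (error < -180.0) error += 360.0;
    return error;
}
```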

Guided practice

In your opcontrol() function, implement the following in order:

  1. Deadband — reject values below 5 on both sticks.
  2. Cubic curve — apply stickCurve to both sticks.
  3. Button held macro — bind intake to R1 (in) and R2 (out).
  4. Toggle macro — bind a toggle to L1 for any latching subsystem.
  5. Heading lock assist — bind to X, toggled, using the snippet above.

Test each feature one at a time. Do not wire them all at once.

Independent exercise

Hand the controller to another team member and watch them drive for five minutes. Tune your deadband, your curve power, and your kP based on their feedback.

Success criterion: a second driver can complete a full opcontrol run — drive across the field, operate one subsystem, press the heading-lock — without any instructions from you.

Common pitfalls

  • Using integer arithmetic for the curve. (stick * stick * stick) / (127 * 127) in int truncates to zero for small values. Use double.
  • Setting kP for heading lock way too high. The robot will oscillate.
  • Forgetting the deadband inside the heading-lock branch.
  • Toggled macros without get_digital_new_press. Using get_digital toggles every loop iteration.
  • Declaring heading-lock finished after testing on a polished tile. Test it with someone pushing the robot from the side.

Where this points next

L2.7 sets up continuous integration with GitHub Actions so every push automatically verifies the build.

💡 Reflection prompt (notebook-ready)

  • Describe the difference between raw stick control and cubic-curve control in your own words. What did the driver say about the two options?
  • When does heading-lock assist help your driver, and when would it get in the way?

Next up: L2.7 — Continuous integration with GitHub Actions.

Continuous integration with GitHub Actions

Set up a GitHub Actions workflow that builds your PROS project on every push, identify and resolve a failed build from the Actions log, and download the compiled artefact as a fallback during an event.

~45 min

Objective

Set up a GitHub Actions workflow that builds your PROS project on every push, identify and resolve a failed build from the Actions log, and download the compiled artefact as a fallback during an event.

Concept

CI — continuous integration — is the single tool that prevents “it works on my laptop” from costing you a practice session. Every push to the team’s Git repository is automatically compiled on a clean machine. If the build fails, GitHub emails whoever pushed the change.

The LemLib team maintains a pre-built action — pros-build — that runs a PROS toolchain inside a Docker container on GitHub’s servers. You drop a short YAML file into your repository and GitHub runs your project’s make on every push.

Two things make this especially valuable for a VRC team:

  1. Catches merge breakages before competition. A teammate pushes a branch that compiles on their machine but breaks when combined with main. CI catches it inside of a minute.
  2. Artefact fallback. Every successful build uploads the compiled .bin file as a GitHub artefact. If your laptop dies on event day, you can download the binary from the last green commit.
⚡ Competition tip. The artefact fallback has kept teams from scratching matches. Keep CI green and you always have a deployable binary, even if every laptop in your pit catches fire.

Guided practice

Step 1 — create the workflow file

In your repository, create the path .github/workflows/build.yml.

name: Build

on:
  push:
    branches: [ "**" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Build PROS project
        uses: LemLib/pros-build@master

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: pros-build
          path: |
            bin/hot.package.bin
            bin/cold.package.bin
            bin/*.bin

Step 2 — commit and push

git add .github/workflows/build.yml
git commit -m "ci: add PROS build workflow"
git push

Step 3 — watch the build

Go to your repository on GitHub. Click the Actions tab. You should see a workflow run labelled “Build.” A successful build ends with a green check. A failed build ends with a red X.

Step 4 — download an artefact

On a successful run’s summary page, scroll to the “Artifacts” section. Download the zip — inside are the hot and cold .bin files.

Step 5 — read a failed build

Deliberately break your code. Add undeclared_thing; to a random line. Commit. Push. Watch the Actions tab go red. Open the failed run to see the compiler output with the path and line number.

Independent exercise

Set up the workflow on your own team repository. Verify three things:

  1. A clean commit produces a green build and an uploadable artefact.
  2. A deliberate compile error produces a red build and a specific error message you can map back to the source.
  3. The failed-build notification email reaches the pusher.

Success criterion: a member of your team who did not set up CI can go to the Actions tab, see the build status, download an artefact, and map a failed build back to its source line without help.

Common pitfalls

  • Putting the workflow file at the wrong path. It must be .github/workflows/build.yml, with the dot.
  • Branch filter set too narrowly. branches: [main] means feature branches never get built. Use branches: ["**"].
  • Expecting the action to upload firmware to a physical brain. It builds; the .bin still has to reach the brain via USB.
  • Blaming the action for a failure that is a bug in the code. Read the log.
  • Ignoring the email when a build fails. Fix it before the next push.

Where this points next

Tier 3 begins with L3.1 — the first-principles explanation of what PID actually does and why the tuning order is PDI, not PID.

⚡ Reflection prompt (notebook-ready)

  • Describe one scenario — past or hypothetical — where CI would have saved your team a practice session. Be specific about the failure mode and when you would have caught it.
  • What would happen if your repository did not have CI for the last week before a major event? Write down the risk in concrete terms.

Next up: L3.1 — What PID actually does.

What PID actually does

Explain the P, D, and I terms from first principles, derive why the correct tuning order is PDI (not PID), and write a minimal heading PID loop in C++ that runs on a V5 brain.

~75 min

Objective

Explain the P, D, and I terms from first principles, derive why the correct tuning order is PDI (not PID), and write a minimal heading PID loop in C++ that runs on a V5 brain.

Concept

Your timed auton drifted. You already know why — batteries sag, motors heat up, tiles vary, and a fixed voltage for a fixed duration never produces a fixed motion. You need a controller that watches the robot and adjusts the command in real time. That controller is PID.

Start with a dumber controller. Suppose you want the robot to turn to 90°. The simplest idea: drive the motors at full voltage until the IMU reads 90°, then stop. This is called bang-bang. It fails every time: the robot blows past 90° because at the instant it crosses 90° it still has enormous angular momentum. Overshoot.
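You can feel the overshoot without a robot. Here is a toy simulation of bang-bang aiming for 90°; every physics constant in it is invented for illustration:

```cpp
// Toy bang-bang turn: full command until the heading crosses 90 degrees,
// then cut. Momentum (velocity) carries the simulated robot past the target.
double simulateBangBang() {
    double heading = 0.0, velocity = 0.0;   // degrees, degrees per second
    const double dt = 0.01;                 // 10 ms loop
    const double accel = 400.0;             // made-up angular acceleration
    const double drag = 0.92;               // made-up per-step friction
    for (int i = 0; i < 1000; ++i) {        // 10 simulated seconds
        if (heading < 90.0) velocity += accel * dt;  // bang: full power
        velocity *= drag;                   // bang off: coast, with drag
        heading  += velocity * dt;
    }
    return heading;  // ends past 90 degrees: the overshoot described above
}
```

The return value lands several degrees past 90. Cutting power at the target does nothing about the angular momentum already built up.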

What we actually want is a command that scales with how far we are from the target. Close to the target, small command. Far from the target, big command. That is P.

P — proportional

Define error = target − current. The command is V = kP × error. When you are 90° away, you floor it. When you are 1° away, you barely push. kP is the scale factor that converts “degrees of error” into “volts of command.”

P-only controllers have one flaw: momentum. The robot arrives at its target still moving. P tries to stop it with a tiny command, and momentum carries the robot past. It then reverses and overshoots in the other direction. P-only oscillates.

D — derivative

The derivative of error is how fast the error is changing. If the robot is hurtling at the target, d(error)/dt is a large negative number. D says: when the error is changing quickly in the right direction, subtract some command. Slow the robot down before it arrives. V = kP × error + kD × d(error)/dt. P points at the target; D keeps the robot from overshooting.

P + D, tuned correctly, handles about 99% of what you actually want a drivetrain controller to do.

I — integral

The integral of error is the area under the error-versus-time curve. If the error stays at 0.5° for a full second, the integral accumulates. V = kP × error + kD × d(error)/dt + kI × ∫error dt. I grows when the robot is stuck, until the command is large enough to break static friction.

I has a known failure mode: at the start of a big motion, the integral accumulates an enormous area, and the robot overshoots spectacularly. The fix is a startI threshold: do not start integrating until the error has shrunk below a small value (say, 5° for a heading PID).
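The startI guard in isolation, as a sketch of one integral update using the lesson's 5° example threshold:

```cpp
#include <cmath>

// Accumulate the integral only once the error is small. On a 90 degree
// turn the first ~85 degrees contribute nothing, so the integral cannot
// wind up and cause the spectacular overshoot described above.
double updateIntegral(double integral, double error, double startI = 5.0) {
    if (std::fabs(error) < startI) integral += error;
    return integral;
}
```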

💡 Coding tip. Tuning order is PDI, not PID. Tune P first, alone. Then add D to kill the oscillation. Then add I to close the last fraction. Trying to tune P with D already enabled means you cannot see whether P is too weak or D is too strong.
🖼 images/02-pid-control-loop.png PID control loop block diagram: setpoint, error, P/I/D terms, output, plant, feedback

🖼 Image brief

  • Alt: Block diagram of a PID control loop. A setpoint (target heading) enters a summing junction, producing error. Error feeds into three parallel blocks: P (proportional), I (integral), and D (derivative). Their outputs sum to produce a voltage command sent to the plant (drivetrain motors). The plant output (current heading from IMU) feeds back to the summing junction.
  • Source: Diagram in Figma. Standard control-loop block diagram with labeled blocks for P, I, D, summing junction, plant, and sensor feedback. Use the project accent color for the signal flow arrows.
  • Caption: The PID control loop. The setpoint is where you want to be, the error is how far you are, and the three terms (P, I, D) produce the motor command that closes the gap.

Guided practice

Here is a minimal heading PID. This is not LemLib — this is the code you need to understand before LemLib’s controller is anything more than magic.

#include <cmath>

#include "pros/imu.hpp"
#include "pros/motor_group.hpp"
#include "pros/rtos.hpp"

pros::Imu imu(10);
pros::MotorGroup leftDrive({-1, -2, -3});
pros::MotorGroup rightDrive({4, 5, 6});

void turnTo(double target) {
    const double kP = 0.0;   // you will tune these in L3.2
    const double kI = 0.0;
    const double kD = 0.0;
    const double startI = 5.0;
    const double exitError = 0.5;

    double error = 0, prevError = 0, integral = 0, derivative = 0;

    while (true) {
        double heading = imu.get_heading();
        error = target - heading;
        while (error > 180) error -= 360;
        while (error < -180) error += 360;

        derivative = error - prevError;
        if (std::fabs(error) < startI) integral += error;

        double voltage = kP * error + kI * integral + kD * derivative;
        if (voltage > 12000) voltage = 12000;
        if (voltage < -12000) voltage = -12000;

        leftDrive.move_voltage(voltage);
        rightDrive.move_voltage(-voltage);

        if (std::fabs(error) < exitError) break;

        prevError = error;
        pros::delay(10);
    }

    leftDrive.move_voltage(0);
    rightDrive.move_voltage(0);
}

Read every line. Map each one back to the paragraphs above. The startI check is not a formality; delete it, crank kI, and watch the robot spin into the wall.

Independent exercise

Paste this function into your PROS project. Do not tune it yet — that is L3.2. In opcontrol(), call turnTo(90.0) on a button press and print the error every 10 ms. Observe: with kP = 0, the robot does nothing. Set kP = 2.0 and observe the oscillation. Set kP = 0.5 and observe the crawl.

Success criterion: you can describe, in one sentence each, what happens at kP = 0.5, kP = 2.0, and kP = 20.0, and you can predict which one will overshoot most.

Common pitfalls

  • Writing kP × error where error is in one unit and kP is scaled for another. PID is unit-bound.
  • Forgetting to wrap heading error to [−180, 180]. An unwrapped error of 350° tells the robot to take the long way around.
  • Omitting the startI check. The integral will explode on the first long movement.
  • Tuning P with D enabled. You cannot see what you are doing.
  • Trusting “example” values from another team’s code. Their kP is their robot’s kP. Yours is yours.

Where this points next

L3.2 walks you through tuning kP, kD, and kI in sequence on a real heading PID, with the exact failure mode to watch for at each step.

💡 Reflection prompt (notebook-ready)

  • In your own words, why is bang-bang control unusable for a robot with momentum? Reference P, D, and the order they compensate in.
  • What is your prediction for which term you will find hardest to tune on your own drivetrain, and why?

Next up: L3.2 — Tuning a heading PID.

Tuning a heading PID

Tune a heading PID to sub-degree accuracy in three passes — P, then D, then I — and identify the specific failure mode that indicates each gain is wrong.

~90 min

Objective

Tune a heading PID to sub-degree accuracy in three passes — P, then D, then I — and identify the specific failure mode that indicates each gain is wrong.

Concept

Tuning is not guessing. Tuning is a controlled experiment where you change one variable and watch one output. The output you watch is the error trace printed to your terminal. The variables you change, in strict order, are kP, then kD, then kI.

You need a repeatable trigger (a button), a repeatable starting heading (zeroed before each run), a repeatable target (a 90° turn is conventional), and the error printed every loop iteration. Your eyes will lie to you; the number will not.

💡 Coding tip. Set your maximum voltage to 12000 mV (full power) during tuning. If you tune at 6000 mV and then increase the cap for a match, your gains are wrong again.
🖼 images/02-heading-pid-tuning-graph.png Graph showing three PID tuning responses: oscillating (P too high), overdamped (D too high), and well-tuned

🖼 Image brief

  • Alt: Three time-series graphs stacked vertically, each plotting heading (degrees) vs time (ms) for a 90-degree turn. Top graph: kP too high, showing sustained oscillation around 90 degrees. Middle graph: kD too high, showing a sluggish approach that stops short at 88 degrees. Bottom graph: well-tuned PD, showing a smooth approach with one small overshoot settling precisely at 90 degrees.
  • Source: Diagram in Figma or generated from sample data in a spreadsheet. Three separate plots with a horizontal target line at 90 degrees. Label each with its failure mode and the gain that caused it.
  • Caption: Three heading PID responses. Top: kP too high (oscillation). Middle: kD too high (sluggish, stops short). Bottom: well-tuned (smooth, one small overshoot, settles at target).

Guided practice

Step 0 — the rig

pros::Controller master(pros::E_CONTROLLER_MASTER);

void opcontrol() {
    imu.reset();
    while (imu.is_calibrating()) pros::delay(10);

    while (true) {
        if (master.get_digital_new_press(DIGITAL_A)) {
            imu.set_heading(0);
            turnTo(90.0);
        }
        pros::delay(10);
    }
}

Inside turnTo, add a print so you see the error every loop:

printf("t=%u err=%.3f V=%.1f\n", pros::millis(), error, voltage);

Step 1 — tune kP

Set kP = 1.0, kI = 0, kD = 0. Press A. You are looking for the robot to overshoot 90°, swing back, and overshoot once more: that one-and-a-half-oscillation response means your P is approximately right.

  • Robot crawls and stops short: kP too low. Double it.
  • Robot slams past and oscillates for five seconds: kP too high. Halve it.
  • Robot overshoots once, returns, overshoots once more, with the ringing staying small: good enough for P alone. D will kill the ringing. Freeze kP.

Step 2 — tune kD

Keep kP frozen. Set kD = 1.0. Press A. You are trying to kill the oscillation.

  • Robot overshoots and rings as before: kD too low. Double it.
  • Robot creeps to the target and gets stuck short: kD too high. Halve it.
  • Robot sweeps smoothly to 90° and sits within about a degree without oscillation: perfect. Freeze kD.

Run six trials. D is the most sensitive gain; small changes produce visible changes.

Step 3 — tune kI

Set kI = 0.1, leave the others alone, set startI = 5.0. Press A.

  • Robot settles as before, never closes the last half-degree: kI too low. Double it.
  • Robot settles at 90°, then slowly creeps past and oscillates: kI too high. Halve it.
  • Robot settles within half a degree, makes one small correction, and stops: perfect.

Step 4 — verify

Turn the exit error back to 0.5°. Run the turn ten times. You want a distribution tighter than ±0.5°.

Independent exercise

Follow Steps 1–4 end to end on your own robot. Record your gains in a comment:

// Heading PID, tuned 2026-04-11
// kP = 3.2  (P found at Step 1, robot rings once)
// kD = 18   (D killed oscillation at ~20, backed off to 18)
// kI = 0.2  (I closes ~0.4° in one correction, startI = 5)

Success criterion: across ten trials at 90° turns, maximum final error is under 0.5° and average final error is under 0.2°.

Common pitfalls

  • Not printing the error every loop. You cannot tune a controller you cannot see.
  • Starting with kP = 0.01 and slowly creeping up. Start at 1.0 and aggressively halve or double.
  • Touching kP after you start tuning kD. Every time you change P, D is wrong again. Commit.
  • Tuning on a low battery. Retune on a fresh battery and watch the numbers change.
  • Declaring the PID “done” because the robot stopped in the right spot once. Run it ten times.

Where this points next

L3.3 tunes the lateral PID. The process is the same, but there is one catastrophic mistake that costs teams weeks.

💡 Reflection prompt (notebook-ready)

  • Write down your final kP, kD, kI values and the behaviour you observed at each tuning step.
  • Which of the three gains was hardest to find, and why?

Next up: L3.3 — Tuning a lateral PID.

Tuning a lateral PID (and the mistake that costs teams weeks)

Tune a lateral (forward/back) PID to sub-inch accuracy, and diagnose the single most common catastrophic failure — printing the wrong value.

~90 min

Objective

Tune a lateral (forward/back) PID to sub-inch accuracy, and diagnose the single most common catastrophic failure — printing the wrong value — before it wastes a week of tuning.

Concept

The lateral PID is structurally identical to the heading PID from L3.2. Same loop, same three terms, same PDI order, same startI trick. The difference is the unit: inches instead of degrees. The test distance is 24″ instead of 90°, and startI is 2″ instead of 5°.

The mistake: printing the wrong value. A student writes driveDistance(24) and tunes the PID. They print “y position” to the terminal and twiddle kD until y position stops oscillating. The robot drifts, the PID will not settle, the tuning makes no sense. Weeks pass.

The problem: driveDistance is not reading y position. It is reading the average of the left and right drivetrain motor encoders. The student’s PID is controlling one variable, but they are watching another. Y position and averaged encoders can easily disagree by two or three inches under wheel slippage.

💡 Coding tip. Rule: print the exact variable the PID uses as current. Not a “nearby” variable. Not a “representative” one. The one inside the control loop.

The Readings/Robot/Ground distinction

When the PID says “24″” and the tape measure says “23.6″”, there are two possible problems:

  1. Readings↔Robot problem. The PID is tuned badly, and the readings say “24” while the control loop has given up early. Fix: tune the PID.
  2. Readings↔Ground problem. The PID is perfect, the readings say “24”, and the robot is at 23.6″ anyway. Fix: not the PID. Check for wheel slippage, bad unit conversions, wrong wheel diameter, or an out-of-square frame.
🖼 images/02-lateral-pid-tuning-graph.png Graph showing lateral PID response: position (inches) vs time for a 24-inch drive

🖼 Image brief

  • Alt: Time-series graph plotting distance (inches) vs time (ms) for a 24-inch lateral PID drive. Shows three overlaid traces: an oscillating response (kP too high), an overdamped response that stops at 23.5 inches (kD too high), and a well-tuned response that settles smoothly at 24 inches. A horizontal dashed line marks the 24-inch target.
  • Source: Diagram in Figma or spreadsheet chart. Three color-coded curves on one plot with the target line at 24 inches. Annotate each curve with its failure mode.
  • Caption: Lateral PID tuning. The well-tuned response (green) reaches 24 inches with minimal overshoot. The oscillating response (red) and overdamped response (yellow) show common failure modes.

Guided practice

Step 0 — the rig

void driveDistance(double inches) {
    const double kP = 0.0;
    const double kI = 0.0;
    const double kD = 0.0;
    const double startI = 2.0;
    const double exitError = 0.25;

    leftDrive.tare_position();
    rightDrive.tare_position();

    double error = 0, prevError = 0, integral = 0, derivative = 0;
    const uint32_t startTime = pros::millis();

    // Hard timeout: a stalled or blocked robot must not push forever
    while (pros::millis() - startTime < 5000) {
        double leftPos  = leftDrive.get_position();
        double rightPos = rightDrive.get_position();
        double avgTicks = (leftPos + rightPos) / 2.0;
        double travelled = ticksToInches(avgTicks);
        double current   = travelled;

        error = inches - current;
        derivative = error - prevError;
        if (std::fabs(error) < startI) integral += error;

        double voltage = kP * error + kI * integral + kD * derivative;
        if (voltage > 12000) voltage = 12000;
        if (voltage < -12000) voltage = -12000;

        leftDrive.move_voltage(voltage);
        rightDrive.move_voltage(voltage);

        // Print the exact value the loop is using
        printf("t=%lu current=%.3f err=%.3f V=%.1f\n",
               (unsigned long)pros::millis(), current, error, voltage);

        if (std::fabs(error) < exitError) break;
        prevError = error;
        pros::delay(10);
    }
    leftDrive.move_voltage(0);
    rightDrive.move_voltage(0);
}
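
The rig above leans on a ticksToInches helper that the snippet does not define. A minimal sketch — the constants here (wheel diameter, units per revolution, gear ratio) are placeholders you must replace with your own drivetrain's measured values, and it assumes get_position reports degrees, the PROS default:

```cpp
#include <cmath>

// Placeholder constants — measure and replace with your drivetrain's values.
constexpr double WHEEL_DIAMETER_IN   = 3.25;   // nominal; you calibrate this in L4.5
constexpr double UNITS_PER_WHEEL_REV = 360.0;  // degrees per rev (PROS default encoder units)
constexpr double GEAR_RATIO          = 1.0;    // wheel revs per motor rev, e.g. 36.0 / 60.0
constexpr double PI                  = 3.14159265358979323846;

// Convert averaged encoder readings to inches travelled at the wheel.
double ticksToInches(double ticks) {
    const double wheelRevs = (ticks / UNITS_PER_WHEEL_REV) * GEAR_RATIO;
    return wheelRevs * WHEEL_DIAMETER_IN * PI;
}
```

One full wheel revolution (360 units here) comes out to diameter × π inches; if your terminal distances are consistently off by a fixed percentage, these constants are the first suspects.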

Steps 1–3

Same process as L3.2. Start at kP = 1.0. Aim for one overshoot. Tune kD. Tune kI with startI = 2.0.

Step 4 — verify on the ground

Put the robot at a known starting line. Run the PID. Measure with a tape.

  • PID says 24.0, tape says 24.0: you are done.
  • PID says 24.0, tape says 23.5: do not retune. You have a readings↔ground problem. Check wheel slippage, tick-to-inches conversion, wheel diameter under load, and frame squareness.

Independent exercise

Build a diagnostic table in your notebook. For ten trials of driveDistance(24), record terminal final error, tape-measure position, and classify each row as a readings↔robot or readings↔ground problem.

Success criterion: you can point to every row and say whether the issue was “PID didn’t settle” or “PID settled but the robot was in the wrong spot.”
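
The classification can be made mechanical. A hypothetical helper — the name and thresholds are illustrative, not part of any library — that labels a trial from the two numbers you record:

```cpp
#include <cmath>
#include <string>

// Classify one driveDistance(24) trial.
// terminalError: the final |error| the PID printed (readings vs target).
// tapeError: |tape-measured position - target| in inches (ground vs target).
// The 0.25" threshold mirrors the rig's exitError — set it from your own loop.
std::string classifyTrial(double terminalError, double tapeError) {
    const double exitError = 0.25;
    if (std::fabs(terminalError) > exitError) {
        return "readings-robot";   // the PID never settled: tune the PID
    }
    if (std::fabs(tapeError) > exitError) {
        return "readings-ground";  // PID settled but the robot is elsewhere:
                                   // check slip, unit conversion, wheel diameter
    }
    return "ok";
}
```

A row where the terminal shows 0.1″ of error but the tape shows 0.5″ is a readings↔ground problem; no amount of gain tweaking will fix it.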

Common pitfalls

  • Printing y position instead of the loop’s current. The canonical mistake.
  • Tuning lateral PID on an unsquared frame. You get arcs, not straight lines.
  • Using L3.2’s gains as a starting point. Different units, different gains.
  • Over-tuning kD to suppress a drift that is actually wheel slippage.
  • Declaring success on one trial. Always run ten.

Where this points next

L3.4 is the conceptual payoff: once every movement has a tight exit error, you can count up how much error stacks across a full auton.

💡 Reflection prompt (notebook-ready)

  • Describe in your own words the difference between a readings↔robot problem and a readings↔ground problem. Give a concrete example of each from your own robot.
  • Before you started this lesson, what variable did you think driveDistance would be reading? What does it actually read?

Next up: L3.4 — Exit errors, stacking, and why odometry exists.

Exit errors, stacking, and why odometry exists

Why tight exit errors create time penalties, how small per-movement errors stack across a full auton, and why odometry — assigning movements to points instead of distances — is the fix.

~60 min

Objective

Explain why tight exit errors create time penalties, how small per-movement errors stack across a full auton, and why odometry — assigning movements to points instead of distances — is the fix.

Concept

You tuned your heading PID to 0.5° and your lateral PID to 0.25″. Accurate — and slow. Every movement sits at the exit condition for half a second while the controller closes the last fraction. Over a ten-movement auton, you spend five seconds settling. Five seconds is a huge slice of a 60-second skills run.

So you loosen the exit errors. The robot finishes each movement faster — but the robot ends up a foot away from where it should be. What happened?

Stacking error

Every movement ends inside a window around the intended endpoint. The next movement begins from wherever the previous one ended — but the code does not know that. The code assumed the previous movement ended exactly at the target. Chain six movements together and the robot is wildly offset from where the code thinks it is, even though every single movement individually was “close enough.”

🖼 images/02-error-stacking-diagram.png Diagram showing how small per-movement errors accumulate across a multi-step auton

🖼 Image brief

  • Alt: Top-down field diagram showing a six-movement autonomous path. The intended path is drawn as a solid line with precise waypoints. The actual path is drawn as a dashed line that gradually diverges, with error circles growing at each waypoint. The final position is several inches away from the intended endpoint, with the total accumulated error highlighted.
  • Source: Diagram in Figma. Draw a half-field with a six-step path. At each waypoint, draw a small error circle around the intended point; make each circle slightly larger than the previous one to show accumulation. Draw the actual path drifting through the edges of these circles.
  • Caption: Error stacking across six movements. Each movement ends inside a small error window, but the next movement starts from wherever the previous one actually ended. By the sixth waypoint, the robot is inches off target.

Aligners break the chain

Physical alignment to a fixed field feature zeroes out both coordinates and the heading at one instant. Whatever drift had accumulated is gone. But aligners cost time and constrain your routes.

The real fix: track the robot’s actual position continuously

If you always know where the robot is in absolute field coordinates, you can send it to a point instead of a distance. “Move forward 24 inches” inherits the previous movement’s error. “Move to (24, 36)” is absolute: the code asks “where am I right now?”, computes the vector, and drives it. The error does not stack.

That is what odometry is. Odometry uses sensors you already have (motor encoders, tracking wheels, the IMU) to keep a continuously updated estimate of the robot’s (x, y, heading) in field coordinates.

💡 Coding tip. Before odometry, you think in relative distances. After odometry, you think in absolute points. Every lesson in Tier 4 is built on this reframing.
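
The reframing is a few lines of geometry. A sketch of what a go-to-point movement computes each iteration from the tracked pose — illustrative names, not LemLib's API; it assumes the chapter's heading convention of 0° = +Y, clockwise positive:

```cpp
#include <cmath>

struct Pose { double x, y, theta; };  // field coordinates; theta in degrees

// Distance from the robot's current pose to an absolute field point.
// Both inputs are absolute, so the result does not inherit previous errors.
double distanceTo(const Pose& p, double tx, double ty) {
    return std::hypot(tx - p.x, ty - p.y);
}

// Heading to face the point, in degrees (0 = +Y, clockwise positive).
double headingTo(const Pose& p, double tx, double ty) {
    return std::atan2(tx - p.x, ty - p.y) * 180.0 / 3.14159265358979323846;
}
```

"Move forward 24″" bakes the previous endpoint into the command; distanceTo and headingTo recompute from wherever the robot actually is, so each movement's error dies with that movement.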

Guided practice

Do the maths on your own robot. This is a pencil-and-paper exercise. Assume your lateral PID exits at ±1″ and your heading PID exits at ±2°. You are running a six-movement auton:

  1. Forward 24″
  2. Turn to 90°
  3. Forward 18″
  4. Turn to 180°
  5. Forward 30″
  6. Turn to 270°

In the worst case, how far is the robot from its intended final position? Work through the geometry — with the errors stacking in the same direction, the final position can easily be several inches off target after only six movements, even though no single movement missed by more than its exit window.

Now rerun with odometry. The code tracks the actual position after Movement 1, recalculates the vector from there, and drives to the correct absolute point. The final error is only the error of the final movement, not the sum of all previous errors.
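
You can check your pencil-and-paper answer with a short worst-case model. A sketch under stated assumptions — every drive runs distErr inches long, and the heading error grows by headErr degrees per movement, as with relative turns stacking in one direction:

```cpp
#include <cmath>

// Gap between intended and actual endpoints after n drives.
// dists[i]: intended drive length (inches); headings[i]: intended absolute
// heading for that drive (degrees, 0 = +Y, clockwise positive).
double stackedError(const double dists[], const double headings[], int n,
                    double distErr, double headErr) {
    const double DEG = 3.14159265358979323846 / 180.0;
    double ix = 0, iy = 0;  // where the code thinks the robot is
    double ax = 0, ay = 0;  // where the robot actually is
    for (int i = 0; i < n; ++i) {
        const double ih = headings[i] * DEG;
        const double ah = (headings[i] + headErr * (i + 1)) * DEG;
        ix += dists[i] * std::sin(ih);
        iy += dists[i] * std::cos(ih);
        ax += (dists[i] + distErr) * std::sin(ah);
        ay += (dists[i] + distErr) * std::cos(ah);
    }
    return std::hypot(ax - ix, ay - iy);
}
```

With the three drives above (24″, 18″, 30″ at headings 0°, 90°, 180°) and ±1″/±2° windows, the gap approaches 2″ after just three drives — and it grows with every additional movement and with adversarial sign choices. That is how "close enough" per movement becomes inches off overall.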

Independent exercise

Write down your current auton’s movements. For each one, estimate the exit-error window. Then estimate the worst-case position error at the final movement.

Success criterion: your notebook contains a specific number — “my current auton has a worst-case stacking error of roughly X inches” — that you can point to and defend.

Common pitfalls

  • Assuming tight exit errors “solve” the problem. They reduce per-movement error; they still stack.
  • Believing stacking error is linear. A small heading error is multiplied by every inch driven after it, so two 2° heading errors can produce far more position error than simple addition suggests.
  • Thinking aligners make odometry unnecessary. An aligner zeroes the drift at one instant; it starts stacking again on the very next movement.
  • Skipping this lesson because “we’ll just use LemLib.” Students who do not understand the problem cannot debug the solution.

Where this points next

Tier 4 installs LemLib and teaches odometry as the implementation of the ideas you just worked through. L4.1 starts with installation.

💡 Reflection prompt (notebook-ready)

  • Write down the difference between a movement that ends at a distance and a movement that ends at a point, in your own words.
  • Identify the first movement in your current auton where stacking error actively costs you points. What is downstream of that movement?

Next up: L4.1 — Installing LemLib.

Installing LemLib into a PROS project

Install LemLib using pros c add-depot and identify where in the library’s source tree to look when you need to understand what a LemLib call is actually doing.

~30 min

Objective

Install LemLib into an existing PROS project using pros c add-depot and identify where in the library’s source tree to look when you need to understand what a LemLib call is actually doing.

Concept

LemLib is a C++ library for VEX V5 motion control. It wraps a tuned PID, an odometry tracker, and a small set of movement primitives into a single lemlib::Chassis object. You do not write the control loop; you describe your drivetrain and let the library drive. This is the payoff for Tier 3 — your understanding of PID is how you tune LemLib’s controllers from inside.

LemLib is not a black box. It is open-source C++ that lives in a folder inside your PROS project. When something misbehaves, you can open the library’s source files and read the exact function the chassis is calling. Treat LemLib as readable code from day one.

The library is organised as:

  • include/lemlib/ — public headers. The API. Start in chassis.hpp.
  • src/lemlib/ — implementation. PID loops, odometry maths, exit-condition logic. Start in chassis/chassis.cpp.

Guided practice

Step 1 — add the LemLib depot

pros c add-depot lemlib https://raw.githubusercontent.com/LemLib/LemLib/depot/stable.json

Step 2 — fetch and apply the template

pros c fetch --no-check lemlib
pros c apply lemlib

Step 3 — verify it builds

Add #include "lemlib/api.hpp" to src/main.cpp. Build and upload with pros mu. If it compiles, LemLib is installed.

Step 4 — tour the source

include/lemlib/api.hpp              — the single include
include/lemlib/chassis/chassis.hpp  — Chassis class declaration
include/lemlib/pid.hpp              — PID controller declaration
src/lemlib/chassis/chassis.cpp      — Chassis method implementations
src/lemlib/chassis/motions/moveToPoint.cpp — the motion primitive source

Open moveToPoint.cpp and scroll. You will see a PID loop almost identical to the one you wrote in L3.1. The library is not hiding anything from you.

Independent exercise

Open chassis.hpp. Find the declaration of moveToPoint. Copy its full signature into your engineering notebook. Next to each parameter, write one sentence about what you think it does.

Success criterion: you can point to chassis.hpp on disk and you have written something next to every parameter of moveToPoint.

Common pitfalls

  • Forgetting the pros c apply step. Fetch downloads the template; apply installs it.
  • Mixing LemLib versions between team members. Commit the firmware/lemlib.a file and relevant headers to Git.
  • Skipping the source-code tour because “I’ll look at it if I need to.” Finding files under debugging pressure is the worst time.
  • Trying to use LemLib before setting up lemlib::Chassis. The library does nothing until you give it a configured chassis — that is L4.2.

Where this points next

L4.2 configures the lemlib::Chassis object with your drivetrain geometry, the PID gains you tuned in Tier 3, and the sensors you will use for odometry.

💡 Reflection prompt (notebook-ready)

  • Why is it important that LemLib is readable C++ rather than a compiled black box? Give a concrete scenario where you would open the source.
  • What is your current mental model of how LemLib fits into your existing Tier 2 project — does it replace the Drivetrain class, sit alongside it, or wrap it?

Next up: L4.2 — The LemLib Chassis object.

The LemLib Chassis object

Construct a fully configured lemlib::Chassis object with correct drivetrain geometry, tuned lateral and angular PID controllers, and an odometry sensor setup.

~60 min

Objective

Construct a fully configured lemlib::Chassis object with correct drivetrain geometry, tuned lateral and angular PID controllers, and an odometry sensor setup — and explain what each parameter means in terms of your physical robot.

Concept

The Chassis object is the single thing that represents “the drivetrain” to LemLib. It owns the motors, the sensors, the PID controllers, and the odometry tracker. Every LemLib motion call is a method on this object.

Chassis construction takes four big pieces:

  1. Drivetrain geometry — which motors, what wheel size, what track width, what gear ratio.
  2. Lateral controller settings — the PID gains and exit conditions for forward/back motion (Tier 3 lateral gains).
  3. Angular controller settings — the PID gains and exit conditions for turns (Tier 3 heading gains).
  4. Sensors — IMU, tracking wheels (if any), and the odometry configuration.

Every parameter corresponds to a physical measurement or a tuned number you already have. No guessing. Measure the robot, copy in the values, compile.

Guided practice

Step 1 — declare hardware

#include "main.h"
#include "lemlib/api.hpp"

pros::MotorGroup leftMotors({-1, -2, -3}, pros::MotorGearset::blue);
pros::MotorGroup rightMotors({4, 5, 6}, pros::MotorGearset::blue);
pros::Imu imu(10);

Step 2 — describe the drivetrain geometry

lemlib::Drivetrain drivetrain(
    &leftMotors,
    &rightMotors,
    11.5,                       // track width (inches, wheel-centre to wheel-centre)
    lemlib::Omniwheel::NEW_325, // wheel type — 3.25" new-style omni
    450,                        // drivetrain RPM at the wheel
    2                           // horizontal drift
);

Step 3 — lateral controller

lemlib::ControllerSettings lateralController(
    3.2,  // kP — from L3.3
    0.2,  // kI
    18,   // kD
    3,    // anti-windup
    1,    // small error range (inches)
    100,  // small error timeout (ms)
    3,    // large error range
    500,  // large error timeout
    20    // max acceleration, slew
);

Step 4 — angular controller

lemlib::ControllerSettings angularController(
    2.0,  // kP — from L3.2
    0.1,  // kI
    12,   // kD
    3, 1, 100, 3, 500,
    0     // no slew on turns
);

Step 5 — sensors

lemlib::OdomSensors sensors(
    nullptr,  // vertical tracking wheel 1
    nullptr,  // vertical tracking wheel 2
    nullptr,  // horizontal tracking wheel 1
    nullptr,  // horizontal tracking wheel 2
    &imu      // inertial sensor
);

Step 6 — construct the Chassis and initialise

lemlib::Chassis chassis(drivetrain, lateralController, angularController, sensors);

void initialize() {
    pros::lcd::initialize();
    chassis.calibrate();  // calibrates IMU and zeroes odom
}

Step 7 — a smoke-test motion

void autonomous() {
    chassis.setPose(0, 0, 0);
    chassis.moveToPoint(0, 24, 3000);
}

Independent exercise

Measure your own robot’s track width. Plug it in. Pick the correct Omniwheel:: constant. Set wheelRPM from your gear ratio. Copy in your Tier 3 gains. Compile. Run the smoke test.

Success criterion: the robot executes moveToPoint(0, 24, 3000) without timing out and ends up between 22″ and 26″ along the Y axis.

Common pitfalls

  • Wrong track width. Measure wheel-centre to wheel-centre, not outer edge to outer edge.
  • Wrong wheel constant. NEW_325 and OLD_325 differ only slightly in diameter, but the difference shows up as steady odometry drift.
  • Forgetting chassis.calibrate() in initialize().
  • Copying another team’s gains. Use your Tier 3 gains as the starting point.
  • Passing motor groups by value instead of by pointer. &leftMotors, not leftMotors.

Where this points next

L4.3 explains why motor-encoder odometry (the null-tracking-wheel configuration you just built) is the correct starting choice for most teams.

📐 Reflection prompt (notebook-ready)

  • List every parameter in your Chassis construction and describe the physical measurement or tuned value it represents.
  • Which parameters will need to be retuned when you add tracking wheels? Which ones will stay?

Next up: L4.3 — Motor-encoder odometry.

Motor-encoder odometry, the default choice

Why motor-encoder odometry is the correct default for most competitive teams, the one operational constraint it imposes, and when tracking wheels become worth the trouble.

~40 min

Objective

Explain why motor-encoder odometry is the correct default for most competitive teams, identify the one operational constraint it imposes on your auton, and justify moving to tracking wheels only when that constraint becomes unacceptable.

Concept

There are two ways to run odometry on a VEX robot.

Option A: motor-encoder odometry. LemLib reads the built-in encoders inside your drivetrain motors and computes position from how far each side has spun. No extra sensors. No extra wiring. You already have the hardware.

Option B: tracking-wheel odometry. You install unpowered 2.75″ or 2″ omni wheels on dedicated rotation sensors — one tracking forward motion, one tracking sideways motion — and LemLib uses those readings instead.

For most teams, most of the time, motor-encoder odometry is the correct choice. The VEX motors’ built-in encoders are high-resolution and accurate. Teams blame motor encoders for problems that are actually bad code, a bad drivetrain, or an untuned wheel diameter.

🖼 images/02-odometry-geometry.png Diagram of odometry geometry: wheel positions, coordinate frame, and position tracking

🖼 Image brief

  • Alt: Top-down diagram of a differential drivetrain showing the field coordinate frame (X right, Y forward) overlaid on the robot. Left and right wheel arcs are drawn with encoder tick marks. Arrows show how left/right encoder differences produce forward translation and rotation estimates. The robot's current pose (x, y, theta) is labeled.
  • Source: Diagram in Figma. Draw a top-down robot with left/right wheels, the field coordinate axes, encoder arc segments, and vector arrows showing how encoder readings translate to (x, y, theta) updates.
  • Caption: Motor-encoder odometry. The left and right encoder readings combine with the track width to estimate the robot's position and heading in field coordinates.

Motor-encoder odometry does have one hard constraint: avoid arc movements, and come to a complete stop before turning. During a pure forward motion, both sides spin equally and the distance estimate is clean. During a pure in-place turn, the sides spin in opposite directions and the rotation estimate is clean. But during an arc — simultaneous forward motion and rotation — the drivetrain wheels scrub sideways, the encoders over- and under-count, and the position estimate drifts in ways the maths cannot correct.

The fix is to decompose every movement into a clean turn followed by a clean straight line. This is exactly what L4.8’s “straightlining” approach does.

💡 Coding tip. Start with motor-encoder odometry. Ship a working auton. Revisit the question when you have evidence — from actual runs, with actual logs — that the sensor is the bottleneck.
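
A sketch of the underlying update — standard differential-drive dead reckoning, not LemLib's exact implementation — makes the constraint concrete. Translation comes from the average of the two sides, rotation from their difference; during an arc both terms are nonzero at once, and sideways scrub biases dL and dR, corrupting both estimates simultaneously:

```cpp
#include <cmath>

struct OdomPose { double x, y, thetaRad; };

// One odometry update from incremental left/right wheel travel (inches).
// Convention: 0 rad faces +Y, clockwise positive, matching this chapter.
// trackWidth is wheel-centre to wheel-centre, in inches.
OdomPose odomStep(OdomPose p, double dL, double dR, double trackWidth) {
    const double forward = (dL + dR) / 2.0;         // pure turn: this is zero
    const double dTheta  = (dL - dR) / trackWidth;  // pure drive: this is zero
    const double mid = p.thetaRad + dTheta / 2.0;   // midpoint heading approximation
    p.x += forward * std::sin(mid);
    p.y += forward * std::cos(mid);
    p.thetaRad += dTheta;
    return p;
}
```

Decomposing paths into clean turns and clean straights keeps each update in the regime where one term is zero, which is exactly what straightlining (L4.8) buys you.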

Guided practice

Your Chassis is already configured for motor-encoder odometry, because you passed nullptr for all four tracking-wheel slots in L4.2. Add a position print to opcontrol():

while (true) {
    auto pose = chassis.getPose();
    printf("x=%.2f y=%.2f theta=%.2f\n", pose.x, pose.y, pose.theta);
    pros::delay(50);
}

Drive forward 72″ (three tiles). The reported Y should be close to 72. Spin the robot in place and drive back — the position should return to roughly (0, 0). Now try a gentle arc with simultaneous forward and rotation. Watch the position drift. That drift is what L4.8 avoids.

Independent exercise

Write two autons. The first decomposes the path into three discrete steps: drive forward 48″, turn in place to 90°, drive forward 24″. The second attempts a single moveToPoint(24, 48). Run each five times and record final pose.

Success criterion: you can observe and describe the difference in repeatability, and explain it in terms of motor-encoder odometry’s limitations.

Common pitfalls

  • Assuming motor-encoder odometry is “the cheap option.” For most teams it is the correct option.
  • Running arcs on motor-encoder odometry and blaming the controller when the robot drifts.
  • Switching to tracking wheels before exhausting the easier fixes: tuning, wheel diameter, squaring the frame, straightlining.
  • Mixing tracking wheels and motor encoders in a single Chassis. Pick one.

Where this points next

L4.4 is the lesson for teams that have actually earned the upgrade to tracking wheels. It is a build problem as much as a code problem.

💡 Reflection prompt (notebook-ready)

  • Which of the two autons above was more repeatable? What does that tell you about your drivetrain and your sensors?
  • Write a one-sentence rule you will apply to your own motion planning for the rest of Tier 4.

Next up: L4.4 — Tracking wheels (and doing them right).

Tracking wheels (and doing them right)

Mount tracking wheels at zero offset from the drivetrain’s turning axes, configure them in LemLib, and verify correctness by spinning the robot in place.

~90 min

Objective

Mount tracking wheels at zero offset from the drivetrain’s turning axes, configure them in LemLib, and verify correctness by spinning the robot in place and confirming the tracking wheels report zero displacement.

Concept

Tracking wheels go wrong more often than they go right. Not because the code is complicated — the LemLib TrackingWheel class is five lines of setup — but because a tracking wheel is a mechanical subsystem that has to be built to a very specific standard.

This lesson has one thesis: mount both tracking wheels at zero offset from the respective axes of your coordinate system, or do not bother installing them.

What “offset” actually means

The correct definition: the offset is the perpendicular distance from the tracking wheel’s line of travel to the axis it is tracking.

  • Vertical pod (tracks Y motion): offset is measured left-to-right, from the pod’s line of travel to the Y axis. Zero offset means the pod sits on the Y axis.
  • Horizontal pod (tracks X motion): offset is measured top-to-bottom, from the pod’s line of travel to the X axis.

🖼 images/02-tracking-wheel-mounting.png Annotated photo/diagram of tracking wheel pods mounted at zero offset on a VEX chassis

🖼 Image brief

  • Alt: Annotated diagram or photo showing a VEX robot from below with two tracking wheel pods. The vertical pod (2.75-inch omni wheel on a rotation sensor) sits on the robot's Y axis. The horizontal pod sits on the X axis. Measurement lines show zero perpendicular offset for each pod. Labels identify the rotation sensors, omni wheels, and axes.
  • Source: Annotated photo of the underside of a competition robot, or a Figma diagram showing a bottom-up view with the two tracking wheel pods, their axes of travel, and offset measurement lines reading zero.
  • Caption: Tracking wheel pods at zero offset. The vertical pod sits on the Y axis; the horizontal pod sits on the X axis. Zero offset eliminates arc-induced drift during rotation.

Why zero offset anyway

LemLib’s maths handles non-zero offsets, but the physics does not. When a tracking wheel sits off-axis and the robot rotates, the pod swings through an arc. VEX omni wheels do not slide cleanly sideways under low loads — the pod’s readings become non-repeatable. Zero offset eliminates the problem by eliminating the arc.
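
The size of the problem is easy to quantify: a pod mounted at offset r from its axis sweeps an arc of length r·θ during a rotation of θ radians — phantom travel the wheel either rolls through or scrubs through unpredictably. A quick check (pure geometry, not LemLib code):

```cpp
#include <cmath>

// Arc length a pod sweeps during an in-place rotation.
// offsetInches: perpendicular distance from the pod's line of travel to its axis.
// rotationDeg: how far the robot turned in place.
double phantomTravel(double offsetInches, double rotationDeg) {
    const double PI = 3.14159265358979323846;
    return offsetInches * rotationDeg * PI / 180.0;
}
```

A "small" 1″ offset produces about 6.3″ of phantom travel over one full spin — and whether the wheel rolls or scrubs through it changes run to run, which is why off-axis readings are non-repeatable rather than merely biased.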

🔧 Build tip. 2.75″ omni is the best choice. Good resolution, circular profile, no cogging. Both pods must be unpowered, solidly boxed, and correctly constrained.

Guided practice

Step 1 — physically verify zero offset

Mark the tracking wheels’ initial positions with tape. Spin the robot slowly in place by hand. If either tracking wheel rotates, your offset is not zero. Remount and try again.

Step 2 — configure in code

pros::Rotation verticalRotation(11);
pros::Rotation horizontalRotation(12);

lemlib::TrackingWheel verticalTracking(
    &verticalRotation,
    lemlib::Omniwheel::NEW_275,
    0.0   // offset — zero.
);
lemlib::TrackingWheel horizontalTracking(
    &horizontalRotation,
    lemlib::Omniwheel::NEW_275,
    0.0
);

lemlib::OdomSensors sensors(
    &verticalTracking, nullptr,
    &horizontalTracking, nullptr,
    &imu
);

If you cannot physically mount a horizontal pod at zero offset, do not install one. Pass nullptr and let motor encoders fill the gap. A missing sensor is better than a lying sensor.

Step 3 — software verification

chassis.setPose(0, 0, 0);
while (true) {
    auto pose = chassis.getPose();
    printf("x=%.3f y=%.3f theta=%.2f\n", pose.x, pose.y, pose.theta);
    pros::delay(100);
}

Spin the robot in place. The theta should go up; x and y should both stay within 0.1″ of zero. If they drift, your offset is not zero — regardless of what a measuring tape claims. The robot is telling you.

Independent exercise

Install one tracking wheel (start with vertical) and run the in-place-spin test ten times. Record the final drift.

Success criterion: ten consecutive spins, all with x and y drift under 0.2″.

Common pitfalls

  • Treating offset as distance-to-turning-centre. It is distance-to-the-respective-axis.
  • Installing a horizontal pod at “small” offset. A 1–2″ offset is worse than no pod at all.
  • Skipping the in-place-spin verification because “the mount looks straight.” The robot is the ground truth.
  • Running a moveToPoint auton before verifying the pods. If the pods lie, LemLib lies.
  • Ignoring the build quality of the pod’s box. A flexing carriage makes readings unrepeatable under load.

Where this points next

L4.5 calibrates the wheel diameter so the tracking wheels’ distance readings match ground truth. The diameter on the box is a lie; you measure your own.

🔧 Reflection prompt (notebook-ready)

  • Describe the mounting location of each of your tracking wheels and explain, in your own words, why that location gives zero offset.
  • What was the hardest part of the mount to get right — the build, the tuning, or the verification?

Next up: L4.5 — Calibrating wheel diameter.

Calibrating wheel diameter

Measure and correct the effective wheel diameter of your drivetrain or tracking wheels so that reported odometry matches ground-truth distance to within 0.25″ per 72″.

~30 min

Objective

Measure and correct the effective wheel diameter so reported odometry matches ground-truth distance to within 0.25″ per 72″.

Concept

The wheel diameter printed on the VEX product page is a nominal value derived from unloaded CAD geometry. Your wheel is not unloaded. Three things eat diameter on a real robot:

  • Load compression. Rubber tyres compress under the weight of the robot, reducing the rolling radius.
  • Tyre wear. The tread wears during practice, shrinking the wheel further.
  • Manufacturing tolerance. Two wheels from the same box can differ slightly.

A percentage point off on wheel diameter means a percentage point off on every distance reading. Over 72″, 1% is 0.72″. Over a full skills run, it compounds into inches.

🖼 images/02-wheel-diameter-calibration.png Diagram showing the wheel diameter calibration method: drive a known distance, compare reported vs measured

🖼 Image brief

  • Alt: Side-view diagram showing a VEX wheel rolling across three tiles (72 inches). A tape measure runs along the ground from start to finish. Two callout boxes compare the reported distance (from the terminal: 72.0 inches) with the tape-measured distance (71.4 inches). Below, the correction formula is shown: corrected diameter = nominal diameter times (actual / reported).
  • Source: Diagram in Figma. Show a side-view wheel rolling along a measured surface. Add callout boxes for the terminal reading and tape measurement. Include the correction formula below.
  • Caption: Wheel diameter calibration. Drive exactly 72 inches, compare the terminal reading to a tape measure, and apply the correction formula. One or two iterations gets you within 0.25 inches.

Guided practice

Step 1 — set up the test

Push the robot against a flat surface. Zero the pose. Mark the starting position on the tile.

void autonomous() {
    chassis.setPose(0, 0, 0);
    chassis.moveToPoint(0, 72, 5000);
    pros::delay(500);
    auto finalPose = chassis.getPose();
    printf("final y = %.3f\n", finalPose.y);
}

Step 2 — measure ground truth

Run the auton. Measure from the tape mark to the robot with a tape measure. Record both reported and actual distances.

Step 3 — compute the correction

correctedDiameter = nominalDiameter × (actualDistance / reportedDistance)

Example: nominal 3.25″, terminal reports 72.0″, tape says 71.4″. corrected = 3.25 × (71.4 / 72.0) = 3.223″.

Step 4 — apply the correction

lemlib::Drivetrain drivetrain(
    &leftMotors, &rightMotors,
    11.5,
    3.223,   // was Omniwheel::NEW_325, corrected from measurement
    450, 2
);

Step 5 — verify

Rerun the auton. The tape-measured distance should now match the reported distance within 0.25″ over 72″. One or two iterations is normal.

Independent exercise

Run the calibration on your own robot. Record: the nominal diameter, the reported and actual distances on your first run, the corrected diameter, and the final error.

Success criterion: your odometry agrees with ground truth to within 0.25″ over 72″, on at least three consecutive trials.

Common pitfalls

  • Relying on a single trial. Three trials minimum.
  • Measuring from the wrong reference point. Pick one edge, mark it, use it both ends.
  • Recalibrating PID after editing wheel diameter. Leave the gains alone.
  • Skipping calibration on tracking wheels because “they are new.” New 2.75″ omnis are notoriously out of tolerance.
  • Running the test on a slick tile. Calibrate on the same surface you plan to compete on.

Where this points next

L4.6 steps through the LemLib movement primitives — moveToPoint, moveToPose, turnToHeading, turnToPoint — now that your odometry agrees with the ground.

📐 Reflection prompt (notebook-ready)

  • What was the difference between your nominal and corrected wheel diameter? Convert it to a percentage. How much would that error contribute over a full skills run?
  • When will you recalibrate? Set a trigger (weekly, per-practice, after-replacement) and write it down.

Next up: L4.6 — LemLib movement primitives.

LemLib movement primitives

Call each of LemLib’s four core motion primitives — moveToPoint, moveToPose, turnToHeading, turnToPoint — with correct parameters, and choose appropriate exit conditions for each movement.

~60 min

Objective

Call each of LemLib’s four core motion primitives with correct parameters, understand blocking vs async calls, and choose appropriate exit conditions for each movement in an auton.

Concept

LemLib ships with a small set of motion primitives. Every auton is a sequence of calls to these functions.

  • turnToHeading(angle, timeout) — rotate in place until the IMU reports angle.
  • turnToPoint(x, y, timeout) — rotate in place until the front points at (x, y).
  • moveToPoint(x, y, timeout) — drive to field coordinate (x, y) with heading correction.
  • moveToPose(x, y, theta, timeout) — drive to (x, y) and arrive at heading theta. Uses a boomerang path.
🖼 images/02-lemlib-movement-primitives.png Field diagram showing paths for moveToPoint, moveToPose, turnToHeading, and turnToPoint

🖼 Image brief

  • Alt: Top-down VEX field diagram showing four labeled movement paths from a common starting pose. turnToHeading: robot rotates in place (circular arrow). turnToPoint: robot rotates to face a target point (arc arrow toward a dot). moveToPoint: robot drives a straight-ish line with heading correction to a target coordinate. moveToPose: robot follows a curved boomerang path arriving at a target with a specified heading.
  • Source: Diagram in Figma. Quarter-field with four color-coded paths originating from the same start pose. Label each path with its function name and key parameters.
  • Caption: LemLib's four motion primitives. Each serves a different use case: in-place turns, point-to-point drives, and heading-critical arrivals via boomerang.

Blocking vs async

By default, every LemLib motion blocks until the movement exits. The last parameter is a boolean async:

chassis.moveToPoint(24, 36, 3000);           // blocks
chassis.moveToPoint(24, 36, 3000, {}, true); // returns immediately

Async is how you do two things at once. Blocking is how you sequence moves.

Exit conditions

Each call accepts a params struct overriding defaults per-movement: earlyExitRange, maxSpeed, minSpeed. Timeouts are a hard upper bound.

Guided practice

Primitive 1 — turnToHeading

chassis.setPose(0, 0, 0);
chassis.turnToHeading(90, 1500);

Primitive 2 — turnToPoint

chassis.turnToPoint(36, 48, 1500);
// reverse:
chassis.turnToPoint(36, 48, 1500, {.forwards = false});

Primitive 3 — moveToPoint

chassis.moveToPoint(0, 24, 3000);
// with params:
chassis.moveToPoint(24, 36, 3000, {
    .forwards = true,
    .maxSpeed = 80,
    .earlyExitRange = 1
});

Primitive 4 — moveToPose

chassis.moveToPose(24, 48, 90, 4000, {
    .lead = 0.6,
    .maxSpeed = 100
});

Async in practice

chassis.moveToPoint(24, 36, 3000, {}, true);  // async
intake.move(127);
chassis.waitUntilDone();
intake.brake();

Independent exercise

Write an auton that uses all four primitives at least once. Start at (0, 0, 0). Print the pose after each movement.

Success criterion: the auton runs end to end without timing out and the final pose lands within 2″ and 3° of the intended target.

Common pitfalls

  • Forgetting chassis.setPose() at the start. LemLib remembers the last pose.
  • Using moveToPose when moveToPoint would do. moveToPose is harder to tune.
  • Async calls without waitUntilDone(). The next motion starts while the first is still running.
  • Timeout-free calls that hang forever.
  • Shipping a blocking call for a motion that should happen in parallel with an intake spin-up.

Where this points next

L4.7 teaches motion chaining — loosening exit conditions and using minSpeed so movements flow into each other.

💡 Reflection prompt (notebook-ready)

  • For each of the four primitives, write one sentence describing the scenario it is best suited to.
  • Which primitive felt the most predictable? Which felt most fiddly?

Next up: L4.7 — Motion chaining.

Motion chaining

Chain multiple movements so the robot flows through waypoints without pausing, using increased exit conditions, minSpeed, and perpendicular-line-cross exits.

~60 min

Objective

Chain multiple movements so the robot flows through waypoints without pausing, using increased exit conditions, minSpeed to keep the drivetrain moving, and perpendicular-line-cross exit conditions where needed.

Concept

The auton from L4.6 stops at every waypoint. Those pauses cost a skills run 3–5 seconds. Motion chaining is the fix: exit a movement early — as soon as the robot is “close enough” — and start the next movement before the first has fully stopped.

Technique 1: increase exit conditions. Loosen the exit window on every non-final movement. Stacking error does not bite because each movement targets absolute field coordinates — odometry measures the remaining error from wherever the previous movement actually left the robot.

Technique 2: add minimum speed. minSpeed puts a floor on the command so the robot never decelerates to zero during a chained movement. It blows through the waypoint at speed.

⚡ Competition tip. Earn aggressive chaining by shipping the conservative version first. If your PID is not tuned well, the robot will fly past the waypoint on a bad arc.
🖼 images/02-motion-chaining.png Field diagram comparing stop-and-go waypoints vs smooth chained waypoints

🖼 Image brief

  • Alt: Top-down field diagram with two paths side by side. Left path: three waypoints with full stops at each (indicated by square stop markers and pause icons), total time 8 seconds. Right path: same three waypoints but the robot flows through the first two with smooth rounded transitions (indicated by curved arrows and earlyExitRange circles), stopping only at the final waypoint, total time 5.5 seconds.
  • Source: Diagram in Figma. Same three waypoints drawn twice. Left side shows full-stop behavior with pause indicators. Right side shows chained behavior with rounded transitions at intermediate waypoints and a time comparison label.
  • Caption: Stop-and-go (left) vs motion chaining (right). Loosening exit conditions at intermediate waypoints saves seconds without sacrificing final accuracy.

Guided practice

A conservative chain

void autonomous() {
    chassis.setPose(0, 0, 0);
    chassis.moveToPoint(0, 24, 2000, {.earlyExitRange = 3});
    chassis.moveToPoint(24, 48, 3000, {.earlyExitRange = 3});
    chassis.moveToPoint(48, 48, 3000);  // last movement — precise exit
}

An aggressive chain

void autonomous() {
    chassis.setPose(0, 0, 0);
    chassis.moveToPoint(0, 24, 2000, {.earlyExitRange = 6, .minSpeed = 70});
    chassis.moveToPoint(24, 48, 3000, {.earlyExitRange = 6, .minSpeed = 70});
    chassis.moveToPoint(48, 48, 3000);
}

Combining async with chaining

chassis.moveToPoint(0, 24, 2000, {.earlyExitRange = 3, .minSpeed = 60});
intake.move(127);   // spin up as we approach the pickup
chassis.moveToPoint(24, 48, 3000, {.earlyExitRange = 3, .minSpeed = 60});
intake.brake();
chassis.moveToPoint(48, 48, 3000);

Independent exercise

Rewrite your L4.6 auton twice: conservative chain (add earlyExitRange) and aggressive chain (add minSpeed). Measure total run time and final-pose repeatability over ten trials.
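For the timing half of the comparison, the bookkeeping is a few lines. A sketch (the summarise helper is hypothetical; on the brain you would capture pros::millis() before and after each run and push the difference into the vector):

```cpp
#include <algorithm>
#include <vector>

// Summarise a batch of timed trials: mean and worst-case run time.
struct TrialSummary {
    double meanMs;
    double worstMs;
};

TrialSummary summarise(const std::vector<double>& runTimesMs) {
    double total = 0;
    for (double t : runTimesMs) total += t;
    return {total / runTimesMs.size(),
            *std::max_element(runTimesMs.begin(), runTimesMs.end())};
}
```

Compare the two chains on the worst case, not the mean — the worst case is what shows up at a qualifier.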

Success criterion: aggressive chain is at least 30% faster, and the final pose is within 3″/5° on at least 8 of 10 trials.

Common pitfalls

  • Setting minSpeed without tightening maxSpeed. The robot slams through waypoints.
  • Chaining with a loose final movement. Always tighten the last exit.
  • Declaring the chain done after one good run. Ten trials minimum.
  • Copying earlyExitRange values from another team. The right value depends on your PID and your route.

Where this points next

L4.8 reframes everything in Tier 4: the default movement should be “turn and drive straight,” not “arc through points.”

⚡ Reflection prompt (notebook-ready)

  • What was the time saving on your aggressive chain versus the default? Was the repeatability cost worth it?
  • Identify one waypoint where you had to use minSpeed. What was happening there?

Next up: L4.8 — Straightlining is the default.

Straightlining is the default movement

Write every auton movement as a turn-to-point followed by a drive-in-a-straight-line, defend this choice over drive-to-point and boomerang, and identify the specific cases where an exception is justified.

~60 min

Objective

Write every auton movement as a turn-to-point followed by a drive-in-a-straight-line, defend this choice, and identify the specific cases where an exception is justified.

Concept

If you cannot articulate a specific reason to use something else, straightline. Turn to the target, then drive in a straight line. Every movement in your auton, unless you have a particular reason to deviate.

Why moveToPoint is worse than it looks

moveToPoint applies a heading PID continuously while the robot drives. Near the target, a tiny heading error triggers jerky wobbles because the heading PID is still trying to correct an angle that barely matters.

Straightlining sidesteps the problem. The heading correction happens once, at the start, when the robot is stationary. Then the robot drives a clean straight line and the lateral PID does its one-dimensional job.

The straightline primitive

chassis.turnToPoint(targetX, targetY, turnTimeout, {
    .forwards = true,
    .earlyExitRange = 5
});
chassis.moveToPoint(targetX, targetY, driveTimeout);

Wrap it as a helper:

void Drivetrain::straightlineTo(double x, double y, int timeout) {
    chassis.turnToPoint(x, y, timeout / 3, {.earlyExitRange = 5});
    chassis.moveToPoint(x, y, (timeout * 2) / 3);
}

The maths of repeatability

A movement that works 9 times out of 10 is worth more than a movement that works 6 times out of 10 and is half a second faster. Your auton score is set by the worst case, not the fastest case. Straightlining wins the tail.
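The claim can be put in numbers. A back-of-envelope sketch (the probabilities and point values are illustrative), treating the auton as a chain that only scores if every movement lands:

```cpp
#include <cmath>

// Expected score of an n-movement chain where the run scores only if every
// movement succeeds. perMove is the per-movement success probability.
double expectedScore(double perMove, int n, double points) {
    return std::pow(perMove, n) * points;
}
```

With six chained movements, 0.9 per move keeps about 53% of full value; 0.6 per move keeps about 5%. The half-second the flashier movement saves never pays back that tail.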

When to deviate

  • Long movements where you need to throw the robot fast.
  • Arrive-with-heading (the robot needs to end facing a specific direction).
  • A tight chain around an obstacle where the turn-straight decomposition requires a detour.

Guided practice

Rewrite your L4.7 aggressive-chain auton as pure straightline movements:

void autonomous() {
    chassis.setPose(0, 0, 0);
    drivetrain.straightlineTo(0, 24, 2500);
    drivetrain.straightlineTo(24, 48, 3500);
    drivetrain.straightlineTo(48, 48, 2500);
}

Run it ten times and compare repeatability to the L4.7 version.

Independent exercise

Rewrite your current competition auton as pure straightlines. Run both versions ten times each and record total run time, final pose error, and “visible wobble” count.

Success criterion: your notebook contains a side-by-side comparison table and a one-sentence defence of which version you would take to a qualifier.

Common pitfalls

  • Setting the turn exit too tight. You do not need to settle the turn; let the drive do the precision work.
  • Using moveToPoint for the turn half. Use turnToPoint.
  • Skipping the turn on short movements. A 3″ drive with a 10° heading error lands an inch off.
  • Complaining about the 0.5-second time cost while running a 3-out-of-10 auton. Repeatability first.

Where this points next

L4.9 covers the exceptions — drive-to-point and boomerang — as tools for specific jobs, not competing defaults.

💡 Reflection prompt (notebook-ready)

  • Describe one movement where straightlining was clearly right, and one where a continuous arc was better. What was different?
  • Was the time penalty of straightlining smaller or larger than you expected?

Next up: L4.9 — Drive-to-point and boomerang.

Drive-to-point and boomerang (exceptions to the default)

Identify the specific conditions under which moveToPoint and moveToPose beat straightlining, configure each primitive’s key parameters correctly, and accept that both are less repeatable.

~45 min

Objective

Identify the specific conditions under which moveToPoint and moveToPose beat straightlining, configure each primitive’s key parameters, and accept that both are less repeatable than the default.

Concept

Drive-to-point (moveToPoint)

moveToPoint drives toward a target while continuously adjusting heading. It is the “throw the robot in a rough direction as fast as possible” primitive.

When it beats straightlining: the movement is long (48″+), precision at the endpoint is not critical, and you cannot afford the 0.3–0.5 second turn penalty.

chassis.moveToPoint(48, 72, 3500, {
    .forwards = true,
    .maxSpeed = 127,
    .minSpeed = 50,
    .earlyExitRange = 4
});

Boomerang (moveToPose)

moveToPose adds a final heading requirement. A moving “carrot point” leads the robot into a curved path.

When it beats straightlining: the robot needs to end facing a specific direction without time for a final turn. The lead parameter controls the curve: small lead (0.3–0.4) is gentle, large lead (0.6–0.8) is sharper.

chassis.moveToPose(24, 48, 90, 4000, {
    .forwards = true,
    .lead = 0.6,
    .maxSpeed = 90
});
🖼 images/02-straightline-vs-boomerang.png Diagram comparing straightline path, moveToPoint path, and boomerang (moveToPose) path

🖼 Image brief

  • Alt: Top-down field diagram showing three paths from the same start to the same end. Path A (blue): turn-then-straightline, two discrete segments with a stop between them. Path B (green): moveToPoint, a gentle arc with continuous heading correction. Path C (orange): moveToPose boomerang, a pronounced curve arriving at the target at a specified heading. Labels show time cost and repeatability rating for each.
  • Source: Diagram in Figma. Same start/end points, three color-coded paths. Add small robot icons at start and end showing heading. Label each path with its function name, approximate time, and a repeatability star rating.
  • Caption: Three ways to reach the same point. Straightlining (blue) is most repeatable. moveToPoint (green) saves time on long legs. moveToPose (orange) handles heading-critical arrivals but is hardest to tune.
💡 Coding tip. Both primitives rely on continuous heading correction near the target and are sensitive to initial conditions. Use them for what they are good at; do not force them into the straightline slot.

Guided practice

A long traversal with moveToPoint

// long crossing, no finesse required
chassis.moveToPoint(0, 120, 4500, {.maxSpeed = 127, .minSpeed = 60});
// follow up with a precise straightline
drivetrain.straightlineTo(24, 120, 2000);

A heading-critical arrival with moveToPose

// before: straightline + turn
drivetrain.straightlineTo(24, 48, 3000);
chassis.turnToHeading(90, 1500);
// after: single boomerang
chassis.moveToPose(24, 48, 90, 4000, {.lead = 0.6, .maxSpeed = 90});

Run each ten times. Log the final pose. If the boomerang wins on time and does not lose too much repeatability, keep it. Otherwise revert.

Independent exercise

Identify one leg for a long moveToPoint and one for a moveToPose in your auton. Swap them in. Measure the time saved and the repeatability change. If either swap fails, revert.

Success criterion: your auton contains at most two non-straightline movements, and you can defend each one with a specific reason written as a comment.

Common pitfalls

  • Defaulting to moveToPoint for short precise movements. Use straightline.
  • Treating lead as a universal constant. It is per-route.
  • Chasing moveToPose for a small heading adjustment that a quick turnToHeading would handle.
  • Running a boomerang at full speed. Boomerang is unstable near the target; cap maxSpeed.
  • Skipping the repeatability test. Both fail more interestingly than straightline.

Where this points next

Tier 5 picks up where Tier 4 leaves off: feedforward, sensor-driven resets, macros, and threaded tasks. L5.1 starts with the controller term you have not yet met — feedforward.

💡 Reflection prompt (notebook-ready)

  • For each non-straightline movement in your auton, write a one-sentence justification. If you cannot write one, delete the movement.
  • What was the tuning experience of lead like compared to PID gains?

Next up: L5.1 — Feedforward for drivetrain control.

Feedforward for drivetrain control

Explain the low-error problem that P-only control solves badly, add kS (static friction) and kV (velocity) feedforward terms, and configure LemLib’s lateral feedforward parameters.

~60 min

Objective

Explain the low-error problem that P-only control solves badly, add kS and kV feedforward terms to a controller, and configure LemLib’s lateral feedforward parameters to match your drivetrain.

Concept

PID is a feedback controller. It works beautifully when the error is large. It works badly at the end of a motion, when the error is small and kP × error is below the motor’s break-away threshold. The robot has 0.3″ left to go, the controller commands 500 mV, static friction eats every volt, and the robot just sits there until the integrator winds up.

Feedforward is different. Instead of reacting to error, it predicts what command the robot should need:

  • kS — static friction compensation. The minimum voltage to overcome static friction and get the motors moving. A constant: kS × sign(error).
  • kV — velocity feedforward. The voltage required to hold a given velocity. Linear: kV × desiredVelocity.

The full controller: V = kP × error + kI × ∫error dt + kD × d(error)/dt + kS × sign(error) + kV × targetVelocity.

💡 Coding tip. Feedback corrects the error. Feedforward provides the base command. The result: the robot moves smoothly throughout the motion, including at the ends where P alone would crawl.
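To make the sum concrete, here is a minimal one-step sketch of the combined controller (gains and sign conventions are placeholders — LemLib wires this up internally; you only supply the constants):

```cpp
#include <cmath>

// One update of a PID + feedforward controller:
// V = kP·e + kI·∫e + kD·de/dt + kS·sign(e) + kV·v_target
struct PIDF {
    double kP, kI, kD, kS, kV;
    double integral = 0, lastError = 0;

    double step(double error, double targetVelocity, double dt) {
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        double sign = (error > 0) - (error < 0);  // kS pushes in the direction of travel
        return kP * error + kI * integral + kD * derivative
             + kS * sign + kV * targetVelocity;
    }
};
```

Note what kS does at the end of a motion: with 0.3″ of error and kP = 10, feedback alone commands 3 mV — feedforward adds the full break-away voltage on top.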

Guided practice

Step 1 — measure kS

// Ramp the voltage slowly while watching the drivetrain.
for (int v = 0; v < 3000; v += 100) {
    leftDrive.move_voltage(v);
    rightDrive.move_voltage(v);
    pros::delay(200);
    printf("v=%d\n", v);  // note the first v at which the wheels creep
}

The voltage at which the drivetrain just starts to roll is your kS. Typical value: 1000–1300 mV.
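If you log velocity during the ramp instead of eyeballing it, picking kS out of the samples is mechanical. A sketch (the sample format is an assumption; on the brain the velocity would come from something like leftDrive.get_actual_velocity()):

```cpp
#include <utility>
#include <vector>

// Each sample: (applied millivolts, measured velocity).
// kS estimate: the first voltage at which the drivetrain actually moved.
int estimateKs(const std::vector<std::pair<int, double>>& samples,
               double moveThreshold = 1.0) {
    for (const auto& [mv, vel] : samples)
        if (vel > moveThreshold) return mv;
    return -1;  // never moved — raise the ramp ceiling and retry
}
```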

Step 2 — measure kV

leftDrive.move_voltage(6000);
rightDrive.move_voltage(6000);
pros::delay(1500);
double startPos = leftDrive.get_position();
pros::delay(500);
double endPos = leftDrive.get_position();
leftDrive.move_voltage(0);
rightDrive.move_voltage(0);
double velocityInches = ticksToInches((endPos - startPos) / 0.5);  // ticksToInches(): your encoder-units-to-inches helper
printf("at 6000 mV, velocity = %.2f in/s\n", velocityInches);

Compute kV = 6000 / velocityInches. Typical: 100–200 mV per inch/second.

Step 3 — plug into LemLib

LemLib’s ControllerSettings struct has fields for feedforward. Add them to your lateral configuration.

Step 4 — verify

Run a 3″ movement before and after adding feedforward. Before: the robot crawls out of the starting position. After: the robot moves crisply from the first millisecond.

Independent exercise

Measure kS and kV on your own drivetrain. Compare a short-movement auton (three 4″ hops) before and after.

Success criterion: the feedforward version is measurably faster on short movements and does not overshoot.

Common pitfalls

  • Guessing kS and kV instead of measuring them. They are robot-specific.
  • Setting kS too high. The robot starts with a jolt that overshoots the first degree of motion.
  • Forgetting to update kS after a drivetrain change.
  • Using feedforward to hide a broken PID. Tune kP first.
  • Assuming kV is linear at all speeds. It saturates near the motor’s top speed.

Where this points next

L5.2 tackles sensor-driven odometry resets — the full 16-case framework for using distance sensors to zero out accumulated drift.

💡 Reflection prompt (notebook-ready)

  • Describe the difference between feedback and feedforward terms. Give one scenario where feedforward is essential.
  • Measure your kS on a fresh battery and on a half-depleted battery. Does it change?

Next up: L5.2 — Sensor-driven odometry resets.

Sensor-driven odometry resets at 90° headings

Implement a distance-sensor-based odometry reset framework that handles all 16 quadrant/heading combinations, uses only current-quadrant readings, and correctly handles the sensor-to-axis offset.

~90 min

Objective

Implement a distance-sensor-based odometry reset framework that handles all 16 quadrant/heading combinations, uses only current-quadrant readings to avoid noise, and correctly handles the sensor-to-axis offset using a slope-1 line through a known calibration point.

Concept

Motor-encoder odometry drifts. Tracking-wheel odometry drifts less, but still drifts. You can correct the drift by reading a distance sensor pointed at a fixed wall and resetting the odometry coordinate to match.

The basic idea

When the robot is facing a cardinal direction and a sensor points at a wall, the reading directly gives one coordinate. A single sensor resets one coordinate at a time.

The 16 cases

Four possible headings (0°, 90°, 180°, 270°) times four possible quadrants = 16 combinations. Each determines which sensor is aimed at a wall, which coordinate it reads, and the sign.
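The first gate for every one of the 16 cases is the same: is the robot actually at a cardinal heading? A small helper, sketched here with an assumed 5° tolerance:

```cpp
#include <cmath>

// Snap a heading to the nearest cardinal (0/90/180/270). Returns -1 if the
// robot is more than `tol` degrees off-cardinal — no reset in that case.
int snapToCardinal(double headingDeg, double tol = 5.0) {
    double snapped = std::round(headingDeg / 90.0) * 90.0;
    if (std::fabs(headingDeg - snapped) > tol) return -1;
    // normalise into [0, 360): 360 -> 0, -90 -> 270, etc.
    double norm = std::fmod(std::fmod(snapped, 360.0) + 360.0, 360.0);
    return static_cast<int>(norm);
}
```

The quadrant check (sign of x, sign of y) then selects one of four cases per heading.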

The “only current quadrant” rule

A distance sensor at max range is noisy. Only reset using sensors that are close enough to give a high-confidence reading. Check distance (<1000 mm) and confidence (>40).

Sensor-to-axis offset

For every sensor, there is a line of slope 1 through a known (reading, coordinate) point. Calibrate once: position the robot at a known coordinate, record the sensor reading. The relationship is coord = reading − calibrationConstant.
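The relationship in code, with illustrative numbers matching the shop calibration in the guided practice (robot parked at y = −24 while the sensor read 50″):

```cpp
// Calibrate once: with the robot parked at a known coordinate, the constant
// is (reading - coordinate). Afterwards, coord = reading - constant.
double calibrationConstant(double readingAtKnown, double knownCoord) {
    return readingAtKnown - knownCoord;
}

double coordFromReading(double reading, double calConstant) {
    return reading - calConstant;
}
```

The slope is 1 because sensor reading and coordinate change inch-for-inch; only the offset is robot-specific.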

Guided practice

Step 1 — declare your sensors

pros::Distance frontDist(15);
pros::Distance backDist(16);
pros::Distance leftDist(17);
pros::Distance rightDist(18);

Step 2 — per-sensor calibration

// calibrated in shop
// robot at y=-24, facing 0°, backDist reads 50"
constexpr double BACK_AT_FACING_0_CAL = 74.0;
// Y = reading - BACK_AT_FACING_0_CAL

Step 3 — a reset function

void tryDistanceReset() {
    auto pose = chassis.getPose();
    double heading = pose.theta;
    double snapped = std::round(heading / 90.0) * 90.0;
    if (std::fabs(heading - snapped) > 5.0) return;
    // normalise so 360 matches the 0° case and -90 matches 270°
    snapped = std::fmod(std::fmod(snapped, 360.0) + 360.0, 360.0);

    bool qPosX = pose.x > 0;
    bool qPosY = pose.y > 0;

    // example: facing 0°, negative Y quadrant, backward sensor
    if (std::fabs(snapped) < 1.0 && !qPosY) {
        if (backDist.get_confidence() > 40) {
            double reading = backDist.get() / 25.4;
            if (reading < 60.0) {
                double newY = reading - BACK_AT_FACING_0_CAL;
                chassis.setPose(pose.x, newY, pose.theta);
            }
        }
    }
    // ... 15 more cases
}

Step 4 — wire it into auton

chassis.turnToHeading(0, 1500);
tryDistanceReset();
chassis.moveToPoint(0, 24, 3000);

Independent exercise

Pick your two distance sensors. Calibrate each at a known point. Implement the reset function for your case set. Write an auton that drifts deliberately, stops at a cardinal heading, calls the reset, and drives again. Compare final pose with reset enabled vs disabled across five trials.

Success criterion: with the reset enabled, drift is visibly smaller (half or less).

Common pitfalls

  • Trusting a low-confidence reading. Always check get_confidence().
  • Skipping the calibration step. Sensor mounting offset is unique to every robot.
  • Resetting the wrong coordinate for the heading.
  • Calling setPose without pausing the odometry task. Race condition.
  • Resetting during a motion. The sensor is tilted by rotation. Reset only at stops.

Where this points next

L5.3 covers the alternative — physical alignment to field features — and when to use it instead of or alongside distance-sensor resets.

📐 Reflection prompt (notebook-ready)

  • Why is the “only current quadrant” rule necessary? Give a specific example of a reading your code should ignore.
  • How many of the 16 cases do your sensors actually cover? What does that tell you about sensor placement?

Next up: L5.3 — Resetting off field features.

Resetting off field features

Use physical alignment to a fixed field feature as an odometry-reset mechanism, and judge when to use a physical reset versus a distance-sensor reset.

~45 min

Objective

Use physical alignment to a fixed field feature as an odometry-reset mechanism, and judge when to use a physical reset versus a distance-sensor reset.

Concept

An aligner is shaped hardware on the robot that makes contact with a known field feature and forces the robot into a precise pose. Your code then calls chassis.setPose(knownX, knownY, knownHeading). Whatever the odometry thought before that moment is wiped.

When to use an aligner

  • Flat-faced features at known locations on the field.
  • Skills runs, where you design the route around the alignment.
  • When distance sensors are unreliable in your season’s field layout.

When distance sensors win

  • Speed. An aligner requires physically driving into a feature. A distance-sensor read is instantaneous.
  • Avoiding obstacles. If an alliance partner is blocking your aligner, you cannot align.
  • Partial resets. An aligner resets the robot to a single pose; distance sensors can reset one coordinate and leave the other alone.
⚡ Competition tip. The right answer for most advanced teams is both. Distance sensors for fast mid-movement corrections; aligners for the big hard resets between phases of a skills run.

Guided practice

Step 1 — design the aligner

Two points of contact, separated laterally by slightly more than the robot’s pivot radius, so that contact forces the chassis square against the feature. Ideally spring-loaded or rubberised.

Step 2 — drive-and-push

drivetrain.straightlineTo(alignApproachX, alignApproachY, 3000);

robot.drivetrain().setBrakeMode(pros::E_MOTOR_BRAKE_COAST);
robot.drivetrain().moveVoltage(3000, 3000);
pros::delay(500);
robot.drivetrain().stop();
robot.drivetrain().setBrakeMode(pros::E_MOTOR_BRAKE_BRAKE);

chassis.setPose(knownX, knownY, knownHeading);

Step 3 — verify

Run the alignment sequence 10 times from different starting positions. After each run, measure the robot’s actual pose. It should be the same every time.

Independent exercise

Identify a flat-faced feature on your current field. Design an aligner (pencil sketch). Write the reset sequence in code with placeholder coordinates.

Success criterion: the code compiles and you can describe verbally how the aligner would contact the feature.

Common pitfalls

  • Treating an aligner as a software problem. It is a build problem finalised with a setPose call.
  • Pushing too hard during the align. The aligner can deflect the robot. Coast or low voltage.
  • Skipping the coast-during-push step. Brake mode locks the wheels; the robot grinds instead of snapping flat.
  • Trusting a single alignment. Run ten trials.
  • Relying on an aligner that an alliance partner will always be blocking. Have a plan B.

Where this points next

L5.4 puts a sensor-driven state machine into a subsystem — the classic colour-sort intake.

🔧 Reflection prompt (notebook-ready)

  • For each reset point in your auton, which method would you use — aligner, distance sensor, or both? Why?
  • What is the mechanical failure mode of your aligner design? What does the code do if it happens?

Next up: L5.4 — Sensor-driven subsystem logic.

Sensor-driven subsystem logic

Build a sensor-driven state machine for a subsystem — an optical or distance sensor that detects a game object and routes it — with debouncing, thresholding, and false-positive rejection.

~60 min

Objective

Build a sensor-driven state machine for a subsystem — using an optical or distance sensor to detect a game object and route it — with debouncing, thresholding, and false-positive rejection.

Concept

Subsystems get smarter when they react to sensors. Instead of “run the intake while the button is held,” you can write “run the intake until the sensor detects a game object, then stop.” A sensor-driven subsystem feels almost autonomous.

The state machine

enum class IntakeState {
    Idle, Intaking, Detecting, Routing_Accept, Routing_Reject
};

Debouncing

An optical sensor’s reading flickers. Require the sensor to report the same value for a minimum number of consecutive cycles before you act on it.

int redCount = 0;
if (optical.get_hue() > 340 || optical.get_hue() < 20) {
    redCount++;
} else {
    redCount = 0;
}
if (redCount > 5) {
    triggerAcceptRoute();
    redCount = 0;
}

Thresholding

The optical sensor reports hue as 0–360. Red is roughly 340–360 or 0–20; blue is roughly 200–260. Never use equality; always use a range. Calibrate on a real object under real lighting.
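Because red wraps around 0 on the hue circle, its range check needs two branches. A sketch using the rough ranges above (calibrate your own):

```cpp
// Hue is 0-360. Red wraps around 0, so it needs two sub-ranges; blue does not.
bool isRed(double hue)  { return hue > 340.0 || hue < 20.0; }
bool isBlue(double hue) { return hue >= 200.0 && hue <= 260.0; }
```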

False positives

Check get_proximity() before trusting get_hue():

if (optical.get_proximity() < 100) return;   // nothing in view

Guided practice

The Intake class

// include/intake.hpp
#pragma once
#include "pros/motors.hpp"
#include "pros/optical.hpp"

enum class IntakeState { Idle, Intaking, Detecting, Accepting, Rejecting };

class Intake {
public:
    Intake(pros::Motor* motor, pros::Optical* optical, pros::Motor* sortGate);
    void start();
    void stop();
    void update();
    IntakeState getState() const { return state; }
private:
    pros::Motor* motor;
    pros::Optical* optical;
    pros::Motor* sortGate;
    IntakeState state = IntakeState::Idle;
    int detectCount = 0;
    int routeEndTime = 0;
    bool isAllianceColor(double hue);
};

The update loop

void Intake::update() {
    switch (state) {
        case IntakeState::Idle:
            motor->brake();
            break;
        case IntakeState::Intaking:
            motor->move_voltage(12000);
            if (optical->get_proximity() > 150) {
                state = IntakeState::Detecting;
                detectCount = 0;
            }
            break;
        case IntakeState::Detecting:
            detectCount++;
            if (detectCount > 5) {
                bool ours = isAllianceColor(optical->get_hue());
                state = ours ? IntakeState::Accepting : IntakeState::Rejecting;
                routeEndTime = pros::millis() + 400;
            }
            break;
        case IntakeState::Accepting:
            motor->move_voltage(12000);
            sortGate->move_voltage(0);
            if (pros::millis() > routeEndTime) state = IntakeState::Intaking;
            break;
        case IntakeState::Rejecting:
            motor->move_voltage(12000);
            sortGate->move_voltage(12000);
            if (pros::millis() > routeEndTime) state = IntakeState::Intaking;
            break;
    }
}

Run it in a task

void intakeTask(void*) {
    while (true) {
        robot.intake().update();
        pros::delay(10);
    }
}

void initialize() {
    pros::Task intakeUpdater(intakeTask);
}

Independent exercise

Implement a sensor-driven subsystem on your own robot. Calibrate thresholds. Debounce. Run ten cycles and record the false positive and false negative rates.

Success criterion: zero false triggers across ten cycles, response within 100 ms.

Common pitfalls

  • Using the sensor’s raw reading without proximity filtering. Noise triggers everything.
  • Using equality on a float (e.g. hue == 0). Use ranges.
  • Not debouncing. A single-frame reading is noise.
  • Long state transitions that block the update loop. Use timestamps.
  • Hardcoding alliance colour. Make it configurable.

Where this points next

L5.5 covers threaded tasks and macros in depth — pros::Task, pros::Mutex, and state-machine-macro patterns.

💡 Reflection prompt (notebook-ready)

  • What was your debounce window, and how did you pick it? What happens at half? At double?
  • Describe one false positive scenario that your threshold rules out. Describe one that slips through.

Next up: L5.5 — Macros, button binds, and threaded tasks.

Macros, button binds, and threaded tasks

Spawn a pros::Task for asynchronous subsystems, protect shared state with pros::Mutex, and implement button-held, toggled, and state-machine-macro patterns cleanly.

~60 min

Objective

Spawn a pros::Task for asynchronous subsystems, protect shared state with pros::Mutex, and implement button-held, toggled, and state-machine-macro patterns cleanly.

Concept

PROS is multitasking. You can spawn new tasks that run independently, at the same time as your main loop. Used well, this lets a subsystem “run itself.” Used badly, it produces race conditions and flaky behaviour.

pros::Task

void myTask(void*) {
    while (true) {
        // do the thing
        pros::delay(10);
    }
}
pros::Task myTaskHandle(myTask);

Spawn it in initialize(). Every task must have a pros::delay or it starves the scheduler.

pros::Mutex

When two tasks share data, you need a mutex:

int counter = 0;           // shared state
pros::Mutex counterMutex;  // guards counter

// Task A:
counterMutex.take();
counter++;
counterMutex.give();

// Task B:
counterMutex.take();
int value = counter;
counterMutex.give();

Every piece of shared state has a mutex, and every access takes and gives it.
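The same discipline, demonstrated with standard C++ threads so you can watch the race disappear off-robot (pros::Mutex’s take()/give() plays the role std::mutex’s lock()/unlock() plays here):

```cpp
#include <mutex>
#include <thread>

// Two threads bump a shared counter; the mutex makes each increment atomic.
int counter = 0;
std::mutex counterMutex;

void bump(int times) {
    for (int i = 0; i < times; i++) {
        counterMutex.lock();    // pros equivalent: counterMutex.take();
        counter++;
        counterMutex.unlock();  // pros equivalent: counterMutex.give();
    }
}
```

Run two threads of bump(100000) and counter lands at exactly 200000. Delete the lock/unlock pair and it usually lands short — that shortfall is the race condition.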

Button-held macros

if (master.get_digital(DIGITAL_R1)) {
    robot.intake().start();
} else {
    robot.intake().stop();
}

Toggle macros

static bool intakeOn = false;
if (master.get_digital_new_press(DIGITAL_R1)) {
    intakeOn = !intakeOn;
    if (intakeOn) robot.intake().start(); else robot.intake().stop();
}

State-machine macros

enum class ScoringPhase { Idle, Lifting, Extending, Releasing, Retracting };
ScoringPhase phase = ScoringPhase::Idle;
int phaseStart = 0;

void scoringTask(void*) {
    while (true) {
        int elapsed = pros::millis() - phaseStart;
        switch (phase) {
            case ScoringPhase::Idle: break;
            case ScoringPhase::Lifting:
                robot.lift().moveToAngle(90);
                if (elapsed > 800) { phase = ScoringPhase::Extending; phaseStart = pros::millis(); }
                break;
            case ScoringPhase::Extending:
                robot.extender().out();
                if (elapsed > 400) { phase = ScoringPhase::Releasing; phaseStart = pros::millis(); }
                break;
            case ScoringPhase::Releasing:
                robot.clamp().open();
                if (elapsed > 300) { phase = ScoringPhase::Retracting; phaseStart = pros::millis(); }
                break;
            case ScoringPhase::Retracting:
                robot.extender().in();
                robot.lift().moveToAngle(0);
                if (elapsed > 1000) { phase = ScoringPhase::Idle; }
                break;
        }
        pros::delay(20);
    }
}

// in opcontrol:
if (master.get_digital_new_press(DIGITAL_Y) && phase == ScoringPhase::Idle) {
    phase = ScoringPhase::Lifting;
    phaseStart = pros::millis();
}

Guided practice

  1. Move a subsystem updater onto a task. Create a task function and spawn it in initialize.
  2. Implement one state-machine macro. Pick a multi-step sequence. Write it as a task-driven state machine.
  3. Bind it to a button with the idle guard.

Independent exercise

Write three macros: a held macro, a toggle macro, and a state-machine macro on its own task. Test each while driving.

Success criterion: the driver can run all three macro types without any delay or lag in drivetrain control.

Common pitfalls

  • Spawning a task without a delay. It starves everything else.
  • Using shared state without a mutex. The race condition bites at the worst time.
  • Writing pros::delay(800) inside a phase instead of using timestamps.
  • Forgetting the Idle guard on the trigger. The macro restarts every tap.
  • Using tasks for everything, including things that do not need concurrency.

Where this points next

L5.6 begins the stretch lessons — 1D motion profiling — for teams that have exhausted the straightline playbook.

💡 Reflection prompt (notebook-ready)

  • Which subsystems on your robot would most benefit from running on their own task?
  • Describe one scenario where a mutex is essential on your robot. Which two tasks share the protected state?

Next up: L5.6 — Stretch: 1D motion profiling.

Stretch: 1D motion profiling

Describe a trapezoidal velocity profile, identify the situations where it beats PID, and decide whether your team actually needs it.

~45 min

Objective

Describe a trapezoidal velocity profile, identify the specific situations where it beats PID, and decide whether your team actually needs it.

Concept

A motion profile is a pre-computed plan for how the robot should move over time. The most common shape is the trapezoidal profile: accelerate at a constant rate to a cruise velocity, cruise, then decelerate back to zero.

Profiling earns its place in two cases:

  1. Long movements with acceleration limits. If your drivetrain slips when you command full voltage from rest, a trapezoid prevents slip without losing cruise speed.
  2. Timing-sensitive chains. When two subsystems must coordinate in time, a profiled motion gives predictable timing.

For everybody else, PID and feedforward suffice.

The maths

#include <cmath>  // for sqrt

struct TrapezoidalProfile {
    double d, vMax, a;
    double tAcc, tCruise, tTotal;

    void init(double distance, double maxVel, double maxAcc) {
        d = distance; vMax = maxVel; a = maxAcc;
        tAcc = vMax / a;
        double dAcc = 0.5 * a * tAcc * tAcc;
        if (2 * dAcc > d) {
            tAcc = sqrt(d / a);
            tCruise = 0;
        } else {
            tCruise = (d - 2 * dAcc) / vMax;
        }
        tTotal = 2 * tAcc + tCruise;
    }

    double velocityAt(double t) {
        if (t < tAcc) return a * t;
        if (t < tAcc + tCruise) return vMax;
        if (t < tTotal) return a * (tTotal - t);
        return 0;
    }
};
💡 Coding tip. Most VRC teams never need motion profiling. LemLib’s PID + feedforward + slew handles 95% of what profiling would provide. This lesson exists so you know what the tool is.

Independent exercise

Skip unless you have hit the wall. If you have, implement the TrapezoidalProfile struct and compare smoothness to a LemLib moveToPoint on the same distance.

Success criterion: you can articulate a specific reason your drivetrain needed the profile.

Common pitfalls

  • Implementing profiling before tuning feedforward.
  • Assuming profiling is “better than PID.” It is different, and worth it only in specific cases.

Where this points next

L5.7 covers 2D Bézier paths and Ramsete — the path-following approach for teams at the very top of the skills ceiling.

💡 Reflection prompt (notebook-ready)

  • Did your robot actually need motion profiling, or did you want to implement it because it sounded interesting? Be honest in writing.

Next up: L5.7 — Stretch: 2D Bézier paths and Ramsete.

Stretch: 2D Bézier paths and Ramsete

Describe a Bézier curve as a smooth path between waypoints, recognise the Ramsete controller as a path-following algorithm, and judge whether your team benefits from 2D path following versus straightlining.

~45 min

Objective

Describe a Bézier curve, recognise the Ramsete controller as a path-following algorithm, and judge whether your team actually benefits from 2D path following.

Concept

A Bézier curve is a smooth parametric curve defined by control points. A cubic Bézier has four control points: two endpoints and two “handles” that define curvature. The curve bends toward the handles. You chain cubic Béziers to describe a full path.

Ramsete drives the robot along such a path. At every moment, it computes pose error relative to the nearest point on the path and outputs velocity commands that drive the robot back onto the path.

Why most teams should not bother

Straightlining wins on repeatability. Bézier + Ramsete wins on raw traversal time — maybe. The cost: more parameters, harder tuning, handle placement via a GUI or trial and error, more dramatic failure modes.

If you are not in the top ten of the skills rankings at your regional, you do not need this. Straightlining plus the occasional moveToPoint gets you to a championship.

When it is the right tool

  • Skills runs at the very top, when every tenth of a second matters.
  • Paths around obstacles that cannot be decomposed into turn-then-drive segments.
  • Very long traversals across multiple field lengths.

LemLib exposes path following as chassis.follow(path, lookahead, timeout). Under the hood LemLib uses a pure-pursuit follower with a lookahead distance rather than Ramsete, but the trade-offs in this lesson apply to both.

Independent exercise

If you are in the use-cases above: use LemLib’s path editor to draw a two-curve path, feed the waypoints into chassis.follow, and compare run time and repeatability to the equivalent straightline chain.

Success criterion: you have data — not a feeling — that justifies the complexity.

Common pitfalls

  • Using Bézier + Ramsete because it is cool. Cool costs time.
  • Comparing to an untuned straightline baseline. Tune the straightline first.
  • Assuming the path editor is perfect. Handle placement is part of the tuning.

Where this points next

L5.8 is the overview-level introduction to Monte Carlo localisation.

💡 Reflection prompt (notebook-ready)

  • Is your team at the skills ceiling where this tool earns its place? Describe the concrete score you are trying to beat.

Next up: L5.8 — Stretch: Monte Carlo localisation.

Stretch: Monte Carlo localisation

Describe Monte Carlo localisation (particle-filter localisation) at a conceptual level, identify the specific problem it solves better than odometry + distance resets, and understand why it is a research topic for VRC.

~30 min

Objective

Describe Monte Carlo localisation at a conceptual level, identify the problem it solves better than odometry + distance resets, and understand why it is a research topic for VRC, not a competitive necessity.

Concept

Monte Carlo localisation is a probabilistic method for estimating pose by maintaining a large set of “particles” — hypothetical robot poses — and weighting each by how well its predicted sensor readings match the actual readings. Over time, particles that match accumulate weight; particles that do not are culled and resampled. The weighted mean of the survivors is the estimated pose.

A well-tuned particle filter can even recover from the classic “kidnapped robot” problem: pick the robot up, move it, and within seconds the filter converges on the correct new pose.

Why it does not matter for VRC

The VRC field is small, rigid, and largely unchanging during a match. A 15-second auton does not accumulate the kind of drift MCL was designed to correct. Distance-sensor resets plus well-built tracking wheels handle drift to within VRC scoring precision.

MCL is also computationally expensive. Teams who have published MCL implementations in VRC notebooks do so as a research project.

When you might actually want it

  • Research. Understanding MCL transfers directly to professional mobile robotics.
  • Unusual field conditions. Many moving obstacles, unpredictable lighting, significant wheel slippage.
  • You have exhausted the playbook. Odometry tuned, resets implemented, straightlining default, and you still need more.
💡 Coding tip. For most teams: know what it is, know it exists, and leave it for later.

Independent exercise

Read two or three published VRC team notebooks that describe MCL implementations. Write a one-paragraph summary in your notebook of what each was trying to solve.

Success criterion: you can explain to a teammate why MCL is a research topic and not a competition necessity.

Common pitfalls

  • Implementing MCL because it sounded advanced in a notebook. Copy the reason, not the implementation.
  • Expecting MCL to fix a broken odometry setup. It depends on sensor readings that are already reasonable.

Where this points next

Tier 5 ends here. Tier 6 begins with L6.1 — auton routes as state machines — which is how you make any of these advanced techniques survive a real match.

💡 Reflection prompt (notebook-ready)

  • What is the simpler, cheaper technique you would implement first? Under what conditions would that simpler technique fail?

Next up: L6.1 — Auton routes as state machines.

Auton routes as state machines

Convert a straight-script auton into a state machine with an enum of states, a transition function, and a run loop — with conditional branches that recover when an upstream movement times out.

~75 min

Objective

Convert a straight-script auton into a state machine with an enum of states, a transition function, and a run loop, and add conditional branches that recover when an upstream movement times out.

Concept

A naive auton is a script. If line one times out, line two runs anyway from the wrong pose. If the intake jams, the code still tries to score a game object that was never picked up. Scripts cannot handle partial failure. In a real match, partial failure is normal.

A state machine is the fix. Each state does one thing. Each state knows what comes next on success and what comes next on failure. Failures become explicit branches instead of silent disasters.

The three parts

  1. An enum of states. Every distinct phase is one state.
  2. A transition function. Given current state and outcome, pick the next state.
  3. A run loop. Execute the current state until it reports an outcome, then transition.
🖼 images/02-auton-state-machine.png State machine diagram with nodes for each auton phase and success/failure transitions

🖼 Image brief

  • Alt: State machine diagram with rounded-rectangle nodes: Start, DriveToPickup, CollectObject, DriveToScore, DepositObject, Retreat, Done, Failed. Green arrows labeled "Success" connect the happy path from Start through DepositObject to Done. Red arrows labeled "Timeout" branch from DriveToPickup and CollectObject to Retreat, and from DriveToScore to Failed. Retreat connects to Done.
  • Source: Diagram in Figma. Standard state-machine notation with color-coded transitions: green for success, red for failure. Each node is a rounded rectangle with the state name inside.
  • Caption: An auton as a state machine. Green arrows are the happy path. Red arrows are failure branches. Every state knows what comes next on success and on failure.

Guided practice

Step 1 — declare states

enum class AutonState {
    Start, DriveToPickup, CollectObject, DriveToScore,
    DepositObject, Retreat, Done, Failed
};

Step 2 — a result type

enum class StateResult { Success, Timeout, SensorError };

Step 3 — state handlers

StateResult driveToPickup() {
    bool done = robot.drivetrain().straightlineTo(24, 36, 3000);
    if (!done) return StateResult::Timeout;
    return StateResult::Success;
}

StateResult collectObject() {
    robot.intake().move(12000);
    int start = pros::millis();
    while (pros::millis() - start < 2000) {
        if (robot.intake().hasObject()) {
            robot.intake().brake();
            return StateResult::Success;
        }
        pros::delay(10);
    }
    robot.intake().brake();
    return StateResult::Timeout;
}

Step 4 — the transition function

AutonState nextState(AutonState current, StateResult result) {
    switch (current) {
        case AutonState::Start:
            return AutonState::DriveToPickup;
        case AutonState::DriveToPickup:
            return result == StateResult::Success
                ? AutonState::CollectObject
                : AutonState::Retreat;
        case AutonState::CollectObject:
            return result == StateResult::Success
                ? AutonState::DriveToScore
                : AutonState::Retreat;
        case AutonState::DriveToScore:
            return result == StateResult::Success
                ? AutonState::DepositObject
                : AutonState::Failed;
        case AutonState::DepositObject:
            return AutonState::Done;
        case AutonState::Retreat:
            return AutonState::Done;
        default:
            return AutonState::Done;
    }
}

Step 5 — the run loop

void autonomous() {
    AutonState state = AutonState::Start;
    StateResult result = StateResult::Success;

    while (state != AutonState::Done && state != AutonState::Failed) {
        printf("entering state %d\n", (int)state);
        switch (state) {
            case AutonState::Start:          result = StateResult::Success;   break;
            case AutonState::DriveToPickup:  result = driveToPickup();        break;
            case AutonState::CollectObject:  result = collectObject();        break;
            case AutonState::DriveToScore:   result = driveToScore();         break;
            case AutonState::DepositObject:  result = depositObject();        break;
            case AutonState::Retreat:        result = retreat();              break;
            default: break;
        }
        state = nextState(state, result);
    }
    printf("auton ended in state %d\n", (int)state);
}
⚡ Competition tip. If you print the state transitions to the SD card (L6.2), you get a timeline of the auton’s decisions you can replay after a match.

Independent exercise

Take your current auton. List every phase as a state. For each, identify what follows on success and on failure. Rewrite as a state machine. Run ten times and log which states each trial passed through.

Success criterion: at least one trial hits a failure branch and recovers gracefully.

Common pitfalls

  • Collapsing the state machine into one monolithic state. The point is decomposition.
  • Forgetting timeouts in state handlers. A state that loops forever hangs the whole auton.
  • Using strings or raw ints instead of an enum. Typos compile and crash.
  • Writing the transition function inline inside the run loop. Separate it — the function is the auton’s structure.
  • Skipping the Retreat state. Every auton needs a way to fail safely.

Where this points next

L6.2 adds telemetry and SD-card logging so the state machine’s decisions leave a trace you can analyse after the match.

⚡ Reflection prompt (notebook-ready)

  • Which state in your auton is the most fragile? What is its recovery path?
  • Did the state machine version run slower, faster, or the same as the script? What did the overhead buy you?

Next up: L6.2 — Telemetry, logging, and SD-card replay.

Telemetry, logging, and SD-card replay

Write timestamped telemetry to the V5 SD card during auton and opcontrol, retrieve the log afterward, and use it to diagnose a bug that only manifests on the field.

~60 min

Objective

Write timestamped telemetry to the V5 SD card during auton and opcontrol, retrieve the log afterward, and use it to diagnose a bug that only manifests on the field.

Concept

Logs are the only way to debug a match. You cannot see the terminal during a match. You cannot stop the robot. The robot does what it does, the match ends, and you are left asking “why did the intake jam at second twelve?” with no data. Unless you wrote the data to the SD card.

Three things make a log useful:

  1. Timestamps on every line. pros::millis() at the start of every entry.
  2. Structured format. A few fixed columns — time, state, x, y, theta, sensor1, sensor2 — that you can paste into a spreadsheet.
  3. A sane logging rate. 10 Hz for state and pose; 50 Hz only when you are chasing sensor jitter. Above 100 Hz you fight the SD card's write speed.

Guided practice

Step 1 — open the log file

#include <cstdio>

FILE* logFile = nullptr;

void initialize() {
    pros::lcd::initialize();
    if (pros::usd::is_installed()) {
        logFile = fopen("/usd/match.log", "w");
        if (logFile) {
            fprintf(logFile, "time,x,y,theta,state,intakeV\n");
            fflush(logFile);
        }
    }
}

Step 2 — a logging task

void logTask(void*) {
    while (true) {
        if (logFile) {
            auto pose = chassis.getPose();
            fprintf(logFile, "%u,%.2f,%.2f,%.2f,%d,%d\n",
                pros::millis(),
                pose.x, pose.y, pose.theta,
                (int)currentAutonState,
                robot.intake().getVoltage());
            fflush(logFile);
        }
        pros::delay(100);  // 10 Hz
    }
}

// in initialize():
pros::Task logger(logTask);

fflush after every write is essential. SD cards buffer writes; if the match ends or the code crashes, unflushed entries are lost.

Step 3 — flushing at match end

void disabled() {
    // disabled() also runs between auton and driver control, so flush
    // here but keep the file open; closing it would end logging early.
    if (logFile) {
        fflush(logFile);
    }
}

Step 4 — after the match

Pop the SD card. Open match.log in a spreadsheet. Scroll to the timestamp where the bug appeared and read the sensor values.

Step 5 — a log analysis exercise

In a spreadsheet: plot X vs Y (the robot’s path), plot theta vs time (when did it turn?), plot state vs time (when did each phase start and end?). Cross-reference with video.

🖼 images/02-telemetry-log-graphs.png Screenshot of spreadsheet graphs from SD-card telemetry: X-Y path plot and theta vs time

🖼 Image brief

  • Alt: Two spreadsheet charts from SD-card telemetry data. Left chart: an X-Y scatter plot showing the robot's path across the field during an autonomous routine, with waypoints marked. Right chart: a time-series plot showing theta (heading) and auton state over time, with vertical dashed lines marking state transitions.
  • Source: Screenshot of a real or mock spreadsheet (Google Sheets or Excel) with two charts generated from sample match.log CSV data. Include visible column headers (time, x, y, theta, state) in the background.
  • Caption: Telemetry from the SD card, plotted in a spreadsheet. The X-Y path (left) and heading timeline (right) reveal exactly what the robot did and when each state transition occurred.
⚡ Competition tip. Students who do this once stop guessing at auton failures. The log is your only window into the last match. Read it every time.

Independent exercise

Add logging to your current auton. Run it five times. Load all five logs into a spreadsheet. Look for variance in final pose, timing differences on each state, and any sensor reading that disagrees between runs.

Success criterion: you can point to at least one specific numeric difference between two runs and explain what caused it.

Common pitfalls

  • Forgetting to fflush. Half the log disappears on the final crash.
  • Opening the file in mode "a" without ever truncating. The log grows forever until the card fills.
  • Logging at 1000 Hz. The SD card cannot keep up and the main loop slows down.
  • Not including a state or mode column. You can see the pose moved, but not which part of the auton was running.
  • Reading the log on the brain instead of on a laptop. The brain does not have a spreadsheet.

Where this points next

L6.3 is the practical workflow for an event: what to tune between qualifiers, what to freeze, and how to recover from a sensor failure mid-day.

⚡ Reflection prompt (notebook-ready)

  • What is the first question you would ask the log the next time an auton fails? Be specific about the columns you would look at.
  • How large did your log file get after a two-minute match? At your current rate, how long could a log run before filling a reasonable SD card?

Next up: L6.3 — Competition-day tuning workflow.

Competition-day tuning workflow

Execute a disciplined event-day workflow: a pre-match checklist, parameters that are safe to change between qualifiers, parameters that must be frozen, and a recovery procedure for mid-event sensor failure.

~90 min

Objective

Execute a disciplined event-day workflow: a pre-match checklist, a known set of parameters safe to change between qualifiers, a set that must be frozen, and a recovery procedure for a mid-event sensor failure.

Concept

Every programming lesson up to this point taught you how to build a working auton in the shop. This lesson is about keeping that auton working at a tournament, where the variables change every hour and you cannot rebuild from scratch.

The pre-match checklist

Run this every single time, without exception. Attach a printed copy to your tool box.

  1. Battery check. Swap to a fresh battery before every match.
  2. Controller check. Full charge, or plugged in at the station.
  3. SD card present. The brain refuses to log without it.
  4. Firmware versions match. Mismatched firmware causes silent sensor failures.
  5. Auton selector set. Confirm the right routine before the match starts. This is the single most common programming failure at a tournament.
  6. Drivetrain spins free. Listen for grinding. A seized wheel kills the auton.
  7. Clean motor status. Run a 0.5-second low-voltage drive and watch the current draw.
  8. IMU calibrated. Power-cycle, wait for calibration, drive gently, watch heading stability.
  9. Last green build. Your CI build is green. If it is not, you are about to upload broken code.

Nine items. Four minutes if you are slow. This is the ritual. Never skip it.

What you can change between qualifiers

  • Auton selection. Free to change every round.
  • Small exit-condition adjustments. Bump one movement’s earlyExitRange by an inch. Low risk.
  • Small timeout adjustments. Increase a movement’s timeout by half a second. Low risk.
  • Auton route ordering. Swap in a pre-written alternative if a target is unreachable.

What you must freeze

  • PID gains. Retuning on match day makes things worse.
  • Wheel diameter. Your wheels have not changed.
  • Drivetrain geometry. Track width, gear ratio, motor ports.
  • Tracking-wheel offsets.
  • The state machine structure. Adjusting transition logic on the fly introduces bugs.
⚡ Competition tip. Change one parameter at a time, one movement at a time, and watch the next run carefully. If it is worse, revert immediately. Git makes this a single command.
🖼 images/02-competition-day-workflow.png Flowchart of the competition-day tuning and debugging process

🖼 Image brief

  • Alt: Flowchart showing the competition-day workflow. Start: Pre-match checklist (9 items). Arrow to: Run match. Arrow to: Pop SD card, read log. Decision diamond: Problem found? If no, arrow to: Next match. If yes, decision diamond: Route problem or robot problem? Route problem arrow to: Swap auton selection, dry-run in pit, then Next match. Robot problem arrow to: Check cables first, then fix or freeze, then dry-run in pit, then Next match. A red stop sign on the side reads: Do NOT retune PID gains.
  • Source: Diagram in Figma. Standard flowchart with decision diamonds, process boxes, and arrows. Use green for the happy path, yellow for route adjustments, red for robot problems. Include the "Do NOT retune PID" warning prominently.
  • Caption: The competition-day loop. Run the checklist, run the match, read the log, classify the problem, and make exactly one small change. Never retune PID gains at an event.

Recovery from a sensor failure mid-event

Odometry drifts by inches: check tracking-wheel mounts, check the IMU cable. Fall back to motor-encoder odometry if needed — set tracking-wheel pointers to nullptr, recompile.

IMU reports 0° forever: power-cycle the brain. Swap the smart cable. Cables fail ten times more often than sensors.

Subsystem sensor reports nonsense: check the cable, check the lens for debris, fall back to a timed version.

General rule: cables first, code last. The probability that the problem is a physical cable is vastly higher than a code bug at a tournament.

The between-match log review

  1. Pop the SD card, read the log.
  2. Find the first failure-branch state transition. Note the timestamp and pose.
  3. Cross-reference with video.
  4. Write a one-sentence note: “Match 3 — auton died at CollectObject, pose (22, 34), likely alliance partner in the way.”
  5. Decide: route problem (change auton next round) or robot problem (fix now or freeze)?

Three-minute ritual. The single most valuable thing you can do between matches.

Guided practice

Run through the checklist on your robot right now. Time it with a stopwatch. Practise until you can execute it in under five minutes.

Then run a mock scenario: ask a teammate to unplug one sensor cable without telling you which. Run your auton. When it fails, execute the recovery procedure — find the failure in the log, identify the cause, swap the cable, re-upload the fallback branch, and re-run. Time the whole recovery.

Independent exercise

Build an event-day binder for your team:

  1. The printed checklist, marked up for your robot.
  2. A list of your available autons with one-line descriptions.
  3. A “frozen parameters” card — exact PID gains, wheel diameters, track widths. Printed. Not on a laptop.
  4. A “known fallback branches” list.
  5. A contact list for who owns each subsystem during an event.

Success criterion: another team member, given only the binder and the last green build from CI, can prepare the robot for a match without asking you for help.

Common pitfalls

  • Tuning PID between matches. By round six your auton is worse than round one.
  • Uploading code without testing in the pit. Test every time.
  • Skipping the battery swap. “It was fine last round” is the voice of the team about to lose the round after next.
  • Treating the log as optional. Read it every time.
  • Changing auton selection without a dry run.
  • Not committing pit-session changes to Git. If the laptop dies, the changes die with it.

Where this points next

This lesson is the last in the Coding strand. The next strand you should engage with is the Engineering Notebook, which takes the logs, test data, and tuning decisions you are now generating and turns them into the narrative a judge will read.

⚡ Reflection prompt (notebook-ready)

  • Describe a time when your team made a match-day change to code and regretted it. What did you change, what went wrong, and what would the rule from this lesson have told you to do instead?
  • Which sensor on your robot is most likely to fail at an event? What is your pre-built fallback?

Next up: Chapter III — Building & Engineering.