A live, room-code-based testing platform built for the classroom — teachers compose question banks; students join with a QR scan; results stream back in real time.
TestLab grew out of a very practical classroom problem: paper tests are slow to grade, and most "ed-tech" alternatives lock student data behind opaque cloud platforms. The brief was simple — build something that runs on a teacher's own laptop, gets students into a test in under twenty seconds, and stores nothing about them beyond a chosen nickname.
The teacher composes a session by picking a class, a question bank, and a test mode; the app generates a six-character room code, prints a scannable QR, and waits in the lobby as students join. Once the test starts, every keystroke is streamed back to the teacher's dashboard via WebSockets — per-student progress bars, completion stats, and the option to end early.
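The join flow above hinges on a short, typo-resistant room code. A minimal sketch of how such a code could be generated — the alphabet (which skips look-alike characters) and the collision policy here are assumptions for illustration, not TestLab's actual implementation:

```javascript
// Alphabet omits easily-confused characters (0/O, 1/I/L) so codes
// survive being read aloud or copied off a projector.
const CODE_ALPHABET = 'ABCDEFGHJKMNPQRSTUVWXYZ23456789';

function generateRoomCode(activeCodes = new Set()) {
  let code;
  do {
    code = Array.from({ length: 6 }, () =>
      CODE_ALPHABET[Math.floor(Math.random() * CODE_ALPHABET.length)]
    ).join('');
  } while (activeCodes.has(code)); // retry if a live session already uses it
  return code;
}
```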
Grading happens in a dedicated Review & Grade view: closed-form questions are auto-graded; open-form fill-in-the-blank and short-answer responses are sent to a locally running LLM via LM Studio for AI assessment, with a teacher always in the loop to confirm or override. A per-student proctor timeline visualises focus events — tab-switches, idle gaps, suspicious back-to-back activity — so flagged sessions can be reviewed at a glance.
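The split between auto-grading and teacher-confirmed AI grading can be sketched as a small merge step. The object shapes, field names, and question types below are hypothetical illustrations, not TestLab's real data model:

```javascript
// Closed-form answers are scored immediately; open-form answers carry an
// AI-suggested score that stays pending until the teacher acts on it.
function gradeResponse(question, response) {
  if (question.type === 'multiple-choice') {
    return {
      score: response.answer === question.correct ? question.points : 0,
      status: 'auto',
    };
  }
  // Open-form: the AI score is only a suggestion at this point.
  return { score: response.aiScore ?? null, status: 'pending-review' };
}

function applyTeacherDecision(grade, override) {
  // override === null means "confirm the suggested score as-is"
  return { score: override ?? grade.score, status: 'confirmed' };
}
```

The key design point is that no open-form score reaches the student record with status `auto`; every AI-assessed answer passes through `applyTeacherDecision` first.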
Architecturally it is intentionally minimal: an Express server, a Socket.IO bridge, and a single static-served front-end with no build step. The AI grading runs against a local model on the same machine. The whole thing self-hosts on a Mac mini in a closet, which is the point.
A six-character code and a scannable QR appear in the lobby. Students open the link, type a nickname, and they’re in — no accounts, no installs.
Banks are organised by class and subject, drag-and-drop importable as JSON, and individually previewable before being assigned to a session.
Three timing modes: no timer, a single global countdown, or per-question pacing. The teacher chooses per session, and mid-test pauses handle classroom interruptions.
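Pauses interact with the countdown in one obvious way: paused time should not count against the limit. A sketch of that bookkeeping, with illustrative field names rather than TestLab's actual timer model:

```javascript
// Remaining time = limit - (wall-clock elapsed - total paused time).
// A pause with no `end` is still open, so it runs up to `now`.
function remainingSeconds(session, now) {
  if (session.mode === 'untimed') return Infinity;
  const pausedMs = session.pauses.reduce(
    (sum, p) => sum + ((p.end ?? now) - p.start), 0);
  const elapsed = (now - session.startedAt - pausedMs) / 1000;
  return Math.max(0, session.limitSeconds - elapsed);
}
```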
Per-student progress bars, completion counts, and average pace stream back via Socket.IO. Spot a stuck student before the bell rings.
Open-form answers are graded by a local LLM via LM Studio — running on the teacher’s own machine, never leaving the network. The teacher confirms or overrides every score.
A per-student timeline records focus events — tab switches, idle gaps, suspicious bursts — so flagged sessions surface immediately during grading.
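Turning a raw event stream into flags like these is a single pass over consecutive event pairs. The thresholds and event names below are assumptions chosen for the sketch, not TestLab's tuned values:

```javascript
// Scan consecutive events: a long silence becomes an idle-gap flag, and an
// answer arriving right after a tab-return becomes a suspicious-burst flag.
function flagEvents(events, { idleGapMs = 60000, burstMs = 2000 } = {}) {
  const flags = [];
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1], curr = events[i];
    const gap = curr.t - prev.t;
    if (gap >= idleGapMs) {
      flags.push({ type: 'idle-gap', at: prev.t, durationMs: gap });
    }
    if (prev.kind === 'tab-return' && curr.kind === 'answer' && gap <= burstMs) {
      flags.push({ type: 'suspicious-burst', at: curr.t });
    }
  }
  return flags;
}
```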
Open sessions accept any nickname; roster sessions restrict the join screen to a class list, eliminating typo-driven mismatches at grading time.
Real names are never stored — only the chosen nickname. AI grading runs on a local model. Sessions auto-expire, rate limits guard the join endpoint, and admin routes sit behind hardened session auth.
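A join-endpoint rate limit of the kind mentioned above can be as small as a per-IP sliding window. The window size and cap here are placeholders, and a production deployment would more likely reach for middleware such as `express-rate-limit`; this is just the shape of the idea:

```javascript
// Per-IP sliding-window limiter: allow at most `max` join attempts per
// `windowMs`. Timestamps outside the window are dropped on each check.
function makeJoinLimiter({ windowMs = 10000, max = 5 } = {}) {
  const hits = new Map(); // ip -> timestamps of recent attempts
  return function allow(ip, now = Date.now()) {
    const recent = (hits.get(ip) ?? []).filter(t => now - t < windowMs);
    if (recent.length >= max) {
      hits.set(ip, recent);
      return false; // over the limit: reject this join attempt
    }
    recent.push(now);
    hits.set(ip, recent);
    return true;
  };
}
```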
TestLab runs live on this domain — you can open the student-side join screen yourself.
Open TestLab