The Problem VivaEdu Solves

Education providers are facing a new mismatch. Students can now produce high-quality submissions at speed, but a polished submission is no longer reliable evidence of understanding.

The Problem, Quantified

[Survey statistics cited from HEPI 2025, Wiley 2024, and Elon 2025.]

Generative AI changed the economics of assessment

Generative AI has become embedded across education and training. The models are strong enough to produce convincing essays, lab reports, code, and reflections across most subjects. For students under pressure, it is an obvious cognitive shortcut. A large share of the work that used to force learning is now easy to outsource.

That matters because education is not only a credentialing system. It is where students build the ability to reason, to write, to learn independently, and to sustain focused work over time. If assessment mainly measures output, AI breaks the link between that output and those outcomes. We risk producing graduates who have a polished portfolio of submissions but cannot clearly explain what they did, why they did it, or what it means.

Detection is not a strategy

Institutions have tried to respond with AI detection. In practice, detection is adversarial and unreliable. Students can prompt models to rewrite until detectors are satisfied. Meanwhile, false positives carry severe consequences, especially for non-native writers and students with atypical writing styles. The result is a whack-a-mole dynamic that erodes trust and consumes staff time without actually verifying understanding.

Higher education does not need another arms race. It needs assessment patterns that remain valid even when AI is everywhere. The goal is not to punish tool usage. The goal is to confirm learning.

Understanding is the invariant

There is one thing AI cannot do on a student’s behalf: show that the student actually understands their own submission. Understanding is visible in explanation. It is visible in the ability to justify a claim, defend a design choice, clarify a definition, and respond to follow-up questions.

This is why oral defence has always been a gold standard. If a student wrote the work and understands it, they can talk through it. If they relied on a tool without learning, the gap shows up quickly. Not through suspicion or policing, but through a simple requirement: articulate your thinking.

The real barrier is scale

The obvious response to AI-era submissions is a short viva. Picture a 15-minute conversation where an instructor asks, “Why did you make this claim in paragraph two?” or “What would change if your key assumption fails?” A student who understands the work can answer. A student who does not will struggle, immediately.

The problem is feasibility. Instructors already manage large cohorts, heavy marking loads, and limited time. Live oral examinations do not scale to 200 students. Scheduling alone becomes a second job.

The requirement: online, asynchronous, cohort-ready

The answer is not to abandon oral verification. It is to scale it. VivaEdu makes oral defence practical by moving it online and making it asynchronous. Instructors can flag individual submissions during grading, or assign verification vivas cohort-wide. Students respond on their own time. Responses are recorded and transcribed, so instructors can review them quickly.
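
To make that workflow concrete, here is a minimal sketch in TypeScript. It is illustrative only: the type names, fields, and assignVivas function below are assumptions made for the example, not VivaEdu's actual API.

    // Illustrative only: these names are assumptions, not VivaEdu's real API.
    // The types model the flow described above:
    // flag -> assign -> student responds -> transcribe -> review.

    type VivaStatus = "assigned" | "responded" | "transcribed" | "reviewed";

    interface Viva {
      submissionId: string;  // the submission being verified
      studentId: string;
      questions: string[];   // follow-up questions about the student's own work
      status: VivaStatus;
      recordingUrl?: string; // set once the student records a response
      transcript?: string;   // set once transcription completes
    }

    // Assign the same verification questions to one flagged submission,
    // or fan them out across a whole cohort.
    function assignVivas(
      submissions: { submissionId: string; studentId: string }[],
      questions: string[],
    ): Viva[] {
      return submissions.map((s) => ({ ...s, questions, status: "assigned" }));
    }

    // Example: a cohort-wide verification viva.
    const vivas = assignVivas(
      [
        { submissionId: "essay-014", studentId: "stu-201" },
        { submissionId: "essay-015", studentId: "stu-202" },
      ],
      [
        "Why did you make this claim in paragraph two?",
        "What would change if your key assumption fails?",
      ],
    );
    console.log(`${vivas.length} vivas assigned`);

Each later step (recording, transcription, review) would advance the status field; that single piece of state is what lets the process run asynchronously rather than as a scheduled live exam.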

This creates a healthier adaptation to AI. Students can use modern tools, but they still have to understand and explain what they submit. That is good for academic integrity, and it is good for the deeper purpose of higher education: developing capable graduates who can think, communicate, and own their work.
