
Introduction
Traditional assessments such as quizzes, exams, and standardized tests have long required students to recall information under controlled conditions. While these assessments are efficient, seemingly objective, and familiar to both instructors and students, they often answer only one question: What do you know? Alternative assessment shifts that question to something more meaningful: What can you do with what you know? It also places that question in contexts that matter for students’ future lives.
Alternative assessments are not just an intriguing way to engage students. They reflect a deeper set of beliefs about what learning is for and who it should serve.
Scholars across education have long asked us to consider what our assessments actually measure and whose ways of knowing they tend to reward. Bourdieu, for instance, observed that formal assessments are never purely neutral instruments; they carry values, and they tend to reward students whose backgrounds already align with institutional expectations. Freire similarly invited educators to consider whether assessment positions students as passive recipients of knowledge or as active participants in its creation. These observations are not indictments but invitations: they encourage us to design assessments that give students more agency in demonstrating what they know, that create multiple entry points into learning, and that open evaluation up as a conversation rather than a verdict. Alternative assessment strategies are one way to accept that invitation.
Why Now?
The need to rethink assessment arises amid several converging pressures that have pushed educators to examine whether traditional testing is still fit for purpose. The COVID-19 pandemic forced a rapid and largely uncomfortable experiment in remote assessment, exposing just how dependent traditional testing is on controlled environments. When those conditions disappeared, so did much of the logic holding recall-based assessment together. More recently, the widespread availability of generative AI has made the case even harder to ignore. When a student can produce a passable exam response with a few keystrokes, the question is no longer whether students can retrieve information, but whether they can think critically, apply knowledge, and make sound judgments in complex situations. Alongside these technological shifts, student-centered educators have a renewed opportunity to address the longstanding concern that standardized assessment tends to reward students whose prior experiences and backgrounds most closely match institutional expectations. Together, these pressures make the conversation about alternative assessment not just timely, but necessary.

Seven Alternative Assessment Strategies Worth Knowing
The following strategies represent a range of approaches to assessing student learning in meaningful ways. Each entry describes what the strategy is, how it works, and one implementation challenge to keep in mind.
Performance-Based Assessment
Students demonstrate learning through doing rather than telling. Think presentations, debates, lab demonstrations, and artistic performances. The assessment is the learning activity. Authenticity is the key variable: the closer the task mirrors real-world application, the stronger the assessment. While this strategy reveals what students can do in meaningful contexts, scoring consistency across raters remains a persistent validity concern.
Authentic Assessment
Authentic assessment is similar to performance-based assessment in that it also requires students to demonstrate learning through doing rather than simply recalling information. What makes it distinct is that authentic assessment insists that the task, audience, and purpose be genuinely real-world rather than simulated: the work could exist and matter beyond the classroom walls, not just resemble something that might. For instance, a student in an environmental studies course might write a letter to a city council member about water quality. That student is doing far more than completing an assignment; they are producing something that could actually make a difference. This is the distinction between simulated performance and genuine participation in a discipline. Designing truly authentic tasks at scale, however, is resource-intensive, so before planning, consider that this approach can be unevenly accessible in under-resourced classrooms.
Portfolio Assessment
Portfolio assessment tracks growth over time rather than capturing a snapshot of performance. Students curate artifacts of their learning, often paired with reflective commentary. The metacognitive dimension is what makes portfolios especially powerful: students are not just producing work, they are analyzing their own development. While this strategy captures growth effectively, it risks privileging students who are already skilled self-advocates and reflective writers.
Peer and Self-Assessment
Peer and self-assessment redistribute evaluative authority. When students assess each other’s work or their own, they internalize criteria, develop critical thinking, and become co-owners of learning standards. In practice, this might look like students using a shared rubric to evaluate a classmate’s draft essay before revision, or completing a structured reflection on their own contribution to a group project. This strategy builds shared ownership of quality, but students may lack the evaluative calibration to assess fairly, and social dynamics can quietly distort the quality of feedback exchanged.
Case-Based Assessment
In case-based assessment, students apply knowledge to complex, often ambiguous real-world scenarios. A medical student might work through a patient diagnosis with incomplete information, while an undergraduate business student might analyze a company’s ethical decision-making during a crisis. Case studies resist single correct answers, which is precisely the point, because they assess judgment, reasoning, and the ability to navigate complexity. Common in law, medicine, and business, this strategy is highly transferable to other college courses. However, case studies can be time-consuming to design well and can inadvertently advantage students with more prior exposure to professional contexts. You can learn more about Case Studies in our Techniques Video Library.
Project-Based and Problem-Based Learning (PBL) Assessments
PBL assessments are embedded within an extended inquiry process. Students are evaluated not just on a final product but on the quality of their process: questioning, iterating, collaborating, and revising. Though often grouped together, project-based and problem-based learning are meaningfully distinct. Project-based learning typically starts with a driving question or challenge and results in a student-created product or artifact. The outcome is somewhat open, and students have voice and choice in how they demonstrate learning. Problem-based learning, by contrast, typically presents students with a specific, ill-structured real-world problem to solve. The problem is defined up front, but the solution remains open-ended because the problem is messy and complex by design; students have to figure out what they need to learn in order to address it. In problem-based work, a public health student might encounter an unexplained rise in illness rates in a local community and have to determine what questions to ask, what resources to find, and how to propose a response, while a project-based learner in the same course might design and present an original community health awareness campaign. Both approaches share a commitment to evaluating learning in motion rather than learning as a finished artifact, and both position process as central to what counts as quality work. Assessment reliability may suffer, however, if rubrics fail to adequately distinguish individual contribution from group output. Learn about Triple Jump, a three-step technique that requires students to think through and attempt to solve a real-world problem, in our Techniques Video Library.

Where to Begin
Before selecting an alternative assessment strategy, it helps to start with a few clarifying questions:
- What do I most want students to be able to do by the end of this course, and does my current assessment actually ask them to do that?
- When students leave my course, what kind of thinkers and practitioners do I want them to be, and how would I know if they got there?
- Does my assessment ask students to demonstrate the same kind of thinking my course is trying to develop?
The answers to these questions might point naturally toward an alternative. If growth over time is the priority, portfolios may be the right entry point. If real-world application matters most, authentic or performance-based assessment may fit best. If building collaborative thinking is the goal, peer assessment may be worth exploring. Matching the strategy to actual learning goals, rather than adopting a method simply because it sounds engaging, is the most important first step.
From there, starting small is wise. Swap one traditional assessment for an alternative version before redesigning an entire course. Draft a simple rubric, share it with students in advance, and treat the first attempt as a pilot to be refined over time. AI tools can be genuinely useful in the design phase for generating rubric criteria, drafting case scenarios, or anticipating student questions about a new format, as long as learning objectives remain the driving force and you review the outputs as the content expert.
For those considering a larger shift toward end-of-term alternatives, the KP Cross Academy offers practical guidance on using final projects in your course.
And if the relational and collaborative dimensions of alternative assessment resonate with you, their piece on feedback loops and teaching is a natural next read.
Conclusion
Rethinking assessment is not about lowering rigor; it is about relocating it. When the bar moves from recalling information to applying, demonstrating, and reflecting on knowledge in meaningful contexts, both the assessment and the learning become more purposeful.
The strategies described here offer a range of starting points, each with genuine strengths and honest challenges. No single approach fits every course or every learner, but the willingness to ask better questions about how we measure learning is itself a meaningful step forward.
Suggested Citation
Morris, S.J. (n.d.). Beyond the quiz: Rethinking classroom assessment. CrossCurrents. https://kpcrossacademy.ua.edu/beyond-the-quiz-rethinking-classroom-assessment/







