The Grading Conference
Presentation Abstracts
43 Learning Outcomes in 15 Days: Lessons Learned from Standards-Based Grading in a J-Term Course
Arden Ashley-Wurtmann Novel Implementations in Math Courses
Thursday, June 18, 2:15 — 3:30 PM EDT
Much of the discourse on implementing alternative grading structures centers on an assumption of a standard 15-week semester. For this reason, when I was first offered an opportunity to teach a developmental math course over a 17-day “J-Term,” I brushed off the idea of alternative grading as too complicated for such a short period of time. Regretting this decision, the next year I committed myself to structuring the course to allow for standards-based grading... even when I found out that the next year’s schedule would require me to teach the class in only 15 days... and even when I later found out that additional content was being added to the course.
In this talk, I will discuss how I structured a 15-day course with 43 Learning Outcomes, how the students responded to the system, and what lessons I learned from the experience. As the University of St. Thomas is located in St. Paul, Minnesota, and the course took place in January of 2026, I will also particularly highlight evidence of how the flexibility of this system impacted students whose education was disrupted by Operation Metro Surge.
My hope is that I can offer guidance to other educators who are interested in trying alternative grading in short-term courses.
A Choose Your Own Adventure course design that actually works: The surprising synthesis of cumulative- and mastery-based grading
Peter Aeschbacher Novel Implementations
Wednesday, June 17, 2:00 — 3:15 PM EDT
Broad enrollment courses often have a “gym class” problem: some students are excited, others simply want to pass, and some would rather be out behind the bleachers. Meanwhile, instructors strive for student accomplishment, but mastery-based grading still has the gym class problem: students lack choice of assessment, and percentage scoring deducts points that cannot be recovered. Cumulative grading provides additive achievement but not student options. Self-directed learning (a Choose Your Own Adventure model) can require additional teaching materials and assessments. But what if students had choice, mastery, and accomplishment while instructors had a manageable course? Surprisingly, combining mastery-based grading with cumulative grading offsets the individual drawbacks of each. This presentation reports on the successful implementation of this synthesis across two iterative years of a mid-sized lab course. In-class activities provide a shared foundation, proficiency levels, and sufficient points for a passing grade. A wide range of additional self-selected activities evidences “mastery” (or better: creativity) as students tackle novel contexts. Instructor content remains constant; the additional work lies in devising opportunities for a broader, more inclusive range of student interests, abilities, and situations. Grading is no longer deduction-based: more than 240 points are available on a 100-point scale. Students report great satisfaction with the ability to proactively determine their final grade, “deep dive” options that speak to their strengths and interests, and being assessed on accomplishment rather than error. Faculty find it refreshing to score only the projects that students have chosen to engage with. Challenges include overcoming ingrained models of assessment in students, instructors, and online Learning Management Systems. Proven solutions for each will be presented, as will a clear breakdown of the course design structure.
A Standards-Based Grading System that Plays Nicely with Canvas
Megan Gibson Scaling Implementation
Tuesday, June 16, 1:00 — 2:15 PM EDT
The standards-based grading system described in this talk was implemented in four sections of a Prealgebra course at Ferris State University in Fall 2025 and Spring 2026.
In this grading system, 60% of the students’ grades come from learning objectives for the course. The other 40% is from daily quizzes, daily journals, and small projects. These categories are set up in Canvas with weighted grades.
The instructor identified 20 learning objectives for the course. Each objective is set up as an “assignment” worth 4 points. The objectives are assessed during class, in a testing environment. Each objective is assessed with 2 questions worth 2 points each.
Students who do not receive full credit for any objective can reattempt the objective on a subsequent testing date by completing corrections on their original assessment and completing a short reflection form.
One goal was to create a grading system that is completely contained within the Learning Management System, without requiring anything additional to track student grades. Another goal was to allow students to demonstrate learning on prior topics at any point in the course. Test anxiety is common among this student population, and the instructor wanted to design a method of assessing students that reduces the anxiety. The instructor also wanted to move from a “test corrections” approach to one that requires students to demonstrate knowledge on new questions. The grading system was successful in these aspects.
Based on student feedback, the instructor increased the number of assessment days during the spring semester. The instructor found that many students did not reattempt objectives. For future iterations, it would be worthwhile to consider how to better encourage students to do this. The instructor would also like to consider how to assess the impact of this grading system on students.
A study of SBG’s effect on motivation, anxiety and understanding in Calculus II
Nichole Barta, Katharine Shultis, Vesta Coufal, Danielle Teague Student Motivation
Tuesday, June 16, 1:00 — 2:15 PM EDT
This study of Standards-Based Grading (SBG) in Calculus II at Gonzaga University was based on surveys administered both early and late in the Spring 2025 semester (63 and 53 respondents, respectively) in a total of three class sections taught by two of the authors. This was the second semester the authors were employing SBG in their Calculus II courses, and they were curious about student perception of the new grading system. The research questions were: How does SBG affect students’ motivation to engage with math concepts and persist in their learning efforts throughout the semester? To what extent does SBG reduce students’ anxiety related to assessments in the course? And how do students perceive SBG as influencing their understanding of math concepts? The surveys included both Likert and open-ended questions. Quantitative data were analyzed using logistic regression to test relationships between SBG features and student-reported outcomes. Thematic coding was used for the qualitative responses. The results indicated that students were more motivated to study and to put more effort into challenging material, that students experienced less anxiety about their grades in the course, and that students felt they achieved greater comprehension of concepts. Students also indicated that they struggled to track their grades during the term. Overall, students overwhelmingly supported SBG. We are continuing to refine our SBG implementation and have improved our student grade-tracker.
Alternative Grading and Student Motivation
Charles McKenna Student Motivation
Tuesday, June 16, 1:00 — 2:15 PM EDT
As faculty experiment with alternative grading methods, it is critical that the student experience is documented and understood to maximize method effectiveness. This presentation will briefly highlight existing problems with grades and grading and examine student experiences with alternative grading methods, specifically ungrading and contract grading, through the lens of self-determination theory (SDT). The author conducted a qualitative phenomenological research study with 20 students across four classes at a large, public, research-intensive institution in the mid-Atlantic region of the United States. Student opinions about their experiences were mixed but generally positive. Notably, students focused on their increased sense of autonomy and the value of feedback, contrasted their experience of relatedness with alternative grading versus traditionally graded classes, and discussed the ways in which alternative grading helped to influence their thinking and approach to learning. The experience with alternative grading and the consequent rise in autonomy, competence, and relatedness resulted in increased motivation, deeper engagement with class content, and a perceived increase in enjoyment of learning. Implications for practice will be discussed in relation to these findings.
Alternative Grading and the Ethical Use of AI: Pedagogical Tensions and Opportunities
Ishika Rathi, Haleema Welji Panel
Tuesday, June 16, 2:45 — 4:00 PM EDT
College writing programs face significant pedagogical challenges from widespread use of AI technologies, including ChatGPT, Grammarly, and other LLM-based writing assistants. Research shows that AI-assisted writing can diminish students’ ownership, reduce students’ ability to recall ideas, and homogenize the creative diversity of their final output. Alternatively, some studies suggest potential benefits for English as a Second Language (ESL) writers, and experts stress the importance of developing AI literacy in writing classrooms. Writing programs now face the challenge of integrating AI in ways that guide students through strategies of ethical, responsible, and equitable use while avoiding potential drawbacks to student writing, learning, critical thinking, and long-term success.
At UC San Diego, several first-year writing programs use alternative grading strategies, such as labor-based contract grading and specifications grading. Research shows these approaches enhance intrinsic motivation, metacognition, and autonomy. Students report that alternative grading systems allow them greater freedom to experiment, take risks, and develop individual voices, shifting the focus toward growth and learning rather than the subjective quality of a final product.
Given the potential contradictory outcomes of AI-assisted writing and alternative grading, this panel brings together first-year college writing programs to consider what tensions exist between AI-assisted writing and the pedagogical values guiding alternative grading; what opportunities might arise through these tensions for teaching responsible and equitable use of AI; and how we might guide students to use AI in ways that center their ownership, creativity, and reflective, critical thinking. Discussions consider the value of labor vs. efficiency in learning and writing, examine the use and impact of AI-assistance for ESL writers, and explore assignments that serve as opportunities for critical engagement with AI.
An Introduction to Alternative Grading: Key principles and characteristics
Adriana Streifer Presentation
Tuesday, June 16, 2:45 — 4:00 PM EDT
This session introduces the foundational principles and characteristics of alternative grading schemes. Whether you’re curious about alternative grading but unsure where to start, or you have experimented with alternative grading but haven’t quite found your preferred approach, this session will help you understand the practical “whats” and “hows” of alternative grading. After introducing core concepts and vocabulary, the session will describe three common alternative grading methods – specifications grading, collaborative grading, and standards-based grading. These methods are better understood as broad categories within which instructors can experiment and innovate, rather than rigid structures with rules that must be followed. All three approaches emphasize transparency, student growth, meaningful assessment, and improved instructor-student communication. There will be ample time for discussion and Q&A at the end of the presentation.
Asynchronous Grading Calibration Exercises to Improve Graduate Teaching Assistant Grading Accuracy
William Howitz Integrating Teaching Teams
Thursday, June 18, 12:30 PM — 1:45 PM EDT
Post-secondary institutions employ graduate teaching assistants (GTAs) to support STEM lecture and laboratory courses. Currently, professional development programs to ensure high-quality GTA assessment of undergraduate student work are underdeveloped, and previous studies underscore the importance of developing long-term, course-specific interventions to reduce grading discrepancies between GTAs and faculty. To support GTA professional development with regard to grading accuracy, we implemented a series of asynchronous grading calibration exercises in a lower-division organic chemistry laboratory course that uses a specifications grading system. While the initial implementation of the grading calibration exercises using a completion-based model did not improve GTA grading accuracy, an adjustment to a threshold-based model did. GTAs received the exercises positively, highlighting their utility in constructing feedback, recognizing misconceptions, and developing confidence in their grading abilities. These outcomes exemplify the benefits of implementing grading calibration exercises using a threshold-based model as part of GTA professional development training to ensure high-quality GTA assessment of undergraduate student work and to minimize grading discrepancies between GTAs and faculty.
Beyond Detection: Bernard Stiegler's Authentic Thinking as a Framework for Grading in the Age of AI
Trent Kays Supporting Student Identity Development
Thursday, June 18, 2:15 — 3:30 PM EDT
The proliferation of generative AI has created an assessment crisis with institutions responding through detection software and restrictive policies. These approaches treat AI as poison to be eliminated rather than engaging deeper pedagogical questions about what we value in student work. Bernard Stiegler's philosophy of technics (particularly his concept of authentic thinking) offers a more productive framework for reimagining grading in response to AI.
This theoretical exploration investigates how Stiegler's concepts of authentic versus inauthentic thinking and individuation can inform both traditional grading's failures and alternative grading's potential to foster genuine intellectual engagement in AI-saturated environments.
Stiegler distinguishes authentic thinking (individual engagement requiring struggle and active participation) from inauthentic thinking (passive consumption of pre-formed thoughts). Traditional grading, which emphasizes standardized products and algorithmic evaluation, already functions as an "industrial temporal object" that synchronizes consciousness and discourages authentic engagement. AI tools simply make visible what has long been true: traditional grading often fails to require or reward authentic thinking.
Alternative grading practices, such as ungrading, specifications grading, and process-based assessment, align with Stieglerian authentic thinking by: (1) emphasizing iterative revision over single-draft products, (2) valuing learning processes over polished products, (3) using qualitative feedback rather than quantitative ranking, and (4) creating conditions for genuine individuation rather than synchronized performance. The question shifts from “how do we prevent AI use?” to “how do we design grading that requires authentic thinking?”
This framework offers alternative grading practitioners philosophical grounding for why their approaches aren't merely pragmatic but pedagogically sound in fostering authentic intellectual development.
Can Peers Give Helpful Feedback as Part of the Feedback Loop?
Marney Pratt Integrating Peers and Social Media
Tuesday, June 16, 2:45 — 4:00 PM EDT
Traditional grading often lacks opportunities for reattempts or revisions of work, and students are frequently not given helpful feedback before a final opportunity for assessment. Even when faculty realize how key feedback loops are in the learning process, there may not be the capacity to give helpful feedback and allow revisions because of large class sizes and/or high teaching loads. One potential way to increase feedback without the instructor giving it all individually is to have students peer review each other’s work. However, some may hesitate to use peer review with introductory-level students, since those students may not have the expertise to give useful feedback. In this talk, I will share my experience using peer review effectively in a 100-level introductory biology lab course with 35–60 students per semester. I have taught over 800 students in this same course over 12 years. I will share changes I made in the Fall of 2025 that added more structure to the peer review process, resulting in noticeably improved assignments and substantial time saved during grading. Peer review can provide useful feedback at the introductory level if students are given clear criteria and the peer review process is highly structured.
Customizing Specifications Grading for Writing Courses
Laura Vernon Panel
Tuesday, June 16, 1:00 — 2:15 PM EDT
This panel session will feature six writing faculty who are experienced in using specifications grading in their writing courses. The goal is to generate a robust discussion that provides meaningful insights and practical advice for beginners as well as veterans seeking to change, elevate, or refine their practice. Specifications grading is an alternative assessment method used in many disciplines, especially in STEM, but it does not get much attention in writing studies, which has more readily embraced contract, standards-based, and ungrading approaches. Whether used as the primary grading approach or in concert with other alternative grading frameworks, specifications grading breaks student work down into objective, discrete requirements, limiting subjective assessment and increasing transparency. This panel will showcase how specifications grading works in a variety of writing courses and will demonstrate the high value that specifications grading brings to writing studies.
Effects of standards-based grading on undergraduate student performance
Ariel Wygant, Lauren Graham Impacts on Faculty and Students
Wednesday, June 17, 2:00 — 3:15 PM EDT
Alternative grading practices are growing in popularity in K-12 school districts and are beginning to catch on in higher education as well. Relatively little research has been done on the effects of alternative grading, but there is evidence to suggest that these methods are more beneficial for students’ learning and emotional wellbeing, and that they are more equitable than points-based grading. This quantitative study focused on one alternative grading method, standards-based grading (SBG), and its effect on student performance in an undergraduate biopsychology course. We specifically wondered whether grade outcomes would change based on grading method and whether grading affects the academic opportunity gap between different student populations. In this study, we measured the effects of two grading schemes (points- and standards-based) on student final grades and measured opportunity gaps between marginalized and/or oppressed groups (underrepresented students; URS) and dominant and/or majority groups. We hypothesized that students' final grades would be higher when graded with standards than with points, and that the existing opportunity gap would be smaller in the class graded using standards compared to the class graded with points. We found no significant difference in grade outcomes between the grading methods. Although there was a significant difference in grades between URS and non-URS in both classes, the gap between the two groups did not differ by grading method. Our results suggest that a change in overall grades is not a natural outcome of using SBG compared to points-based grading. Additionally, manipulating the grading method alone is not likely to remedy the disparity in grades between demographic groups.
Enhancing Clinical Reasoning in DPT Education Through Gamification and Specifications Grading
Teresa Chen, Karen Lomond Novel Implementations
Wednesday, June 17, 2:00 — 3:15 PM EDT
Clinical practice in physical therapy requires students to connect pathomechanics, assessments, interventions, and risk factors—yet traditional instruction often emphasizes isolated knowledge acquisition. In a doctoral physical therapy Pathokinesiology course (N=41), we designed a semester-long gamification project using specifications grading to provide choice and iterative feedback while building clinical reasoning.
The MSK-Connect board game project has three phases grounded in Self-Determination Theory (SDT). In Creation, students select diagnoses and design game cards with four interconnected clinical "corners" through four iterative drafts evaluated via specifications grading—cards either meet requirements or are revised. In Consolidation, an eight-week interval between creation and gameplay promotes retention. In Retrieval and Synthesis, team gameplay requires connection-making across body regions using crowdsourced cards from the entire class. The project aligns with SDT by fostering autonomy (choosing diagnoses, designing cards), competence (iterative drafts with clear specs, mastery during gameplay), and relatedness (peer review, team play). Final grades reflect how many drafts meet specifications.
This project concludes May 2026. Data sources include the Intrinsic Motivation Inventory (IMI; subscales: enjoyment, perceived competence, effort, pressure, value) assessing motivation, the Gameful Experience Scale (GAMEX; subscales: enjoyment, absorption, creative thinking, activation, absence of negative affect, dominance) assessing the game experience, and custom questions on exam anxiety, preparation time, and perceived retrieval. Pre/post exam items and focus groups will also assess retrieval and connection-making.
By the conference, we will present complete results and practical recommendations for implementing gamification with specifications grading, including the project structure, spec criteria, and crowdsourced materials as adaptable resources.
Evaluating the impact of specifications grading on student motivation and self-efficacy in organic chemistry: a quasi-experimental study
Brandon Yik, Joseph Houck, Eric Nacsa Student Motivation
Tuesday, June 16, 1:00 — 2:15 PM EDT
Specifications grading is an alternative approach designed to shift emphasis from point accumulation to mastery of clearly defined learning outcomes. Advocates suggest it may enhance students’ motivation and self-efficacy by clarifying expectations and reducing competition. However, empirical evidence examining these affective claims remains limited, particularly in large-enrollment science, technology, engineering, and mathematics (STEM) courses. This quantitative quasi-experimental study examines whether specifications grading is associated with changes in students’ motivation and self-efficacy compared to traditional grading. Grounded in motivational theory and social-cognitive theory of self-efficacy, we asked: (1) Does students’ motivation toward chemistry change over a semester under specifications grading relative to traditional grading? and (2) Does students’ chemistry self-efficacy change differentially across grading systems? Participants were enrolled in large-enrollment Organic Chemistry I courses taught using either traditional or specifications grading. Students completed validated measures of motivation and self-efficacy at the beginning and end of the semester. We analyzed changes using descriptive statistics, factorial multivariate analysis of covariance (MANOVA), and a difference-in-differences (DiD) approach to account for baseline differences. Results indicate minimal changes in motivation and self-efficacy across both grading conditions. DiD analyses show no statistically significant differential effect of specifications grading relative to traditional grading on either outcome. These findings suggest that modifying grading structures alone may be insufficient to meaningfully influence students’ affective experiences in large-enrollment STEM contexts. Implications for grading reform research and instructional practice are discussed.
From Compliance to Mastery: Using Threshold Grading to Support Learning in a Large Social-Media-Integrated Biology Course
Michael Shavlik Integrating Peers and Social Media
Tuesday, June 16, 2:45 — 4:00 PM EDT
Implementing alternative grading systems in large college classrooms (200+ students) remains a challenge for many. Strategies and schemes that work well in small classrooms with a few dozen students often face difficulties when scaling up to large classes. For example, instructors are often challenged to implement key staples of alternative grading systems, such as feedback loops and student-specific re-attempts at assignments. In this talk, I will share my classroom experience implementing a threshold-style grading system in a large, introductory biology course designed for non-majors. Since this course acts as a “check-the-box” science credit for many departments on campus, students come to this class ranging from freshmen to seniors across 30+ majors. Moving away from a traditionally graded, textbook-based, gen-ed class, I re-designed this course for the 2025-2026 school year with an emphasis on interweaving STEM-focused social media content with traditional biology topics relevant to society. Students take two quizzes each week, one focused on content and the other assessing skills for analyzing a STEM social media post. Additionally, there is an optional midterm and an optional final exam based solely on content knowledge. Meeting a threshold score on a quiz or an exam contributes positively to a student’s grade, which is determined by the number of times the threshold has been met by the end of the semester. In-class participation and real-time questions can yield a “+” to the letter grade if a certain amount of credit has been earned by the end of the semester. Students received this grading scheme quite positively, though it initially felt foreign and difficult to understand, as confirmed by informal survey data. In this talk, I will also share my own reflections on this scheme and provide recommendations for Canvas usage, feedback and re-take opportunities, and how to pitch alternative grading to students.
From Isolation to Collaborative Innovation: Building Institutional Communities of Practice for Equitable Grading Reform
Iris S. De Lis, Shoshana Zeisman-Pereyo Workshop
Wednesday, June 17, 3:45 — 5:00 PM EDT
Traditional grading leaves many faculty feeling “shackled to grades,” resentful, and unsure how to move from isolated panic to coordinated change, especially at public, access-oriented institutions serving first-generation, transfer, working, and non-traditional students. This interactive workshop shares a concrete, research-informed model for an institutional grading Community of Practice (CoP) that supports faculty in designing one syllabus-ready alternative-grading pilot aligned with assessment-for-learning, equity, and student well-being. Drawing on the GOAL Framework’s focus on belonging, agency, and visibility, the SCIENCE Collaborative’s Student-Centered Grading CoP, and literature on equity-minded faculty CoPs that reduce isolation and sustain pedagogical reform, we outline the design of Portland State University’s “Rethinking Learning Assessment” CoP and translate it into adaptable design moves for other campuses.
Targeted to faculty developers, center directors, assessment leaders, and department chairs, the session assumes only basic familiarity with alternative grading (e.g., ungrading, specs, standards-based grading). Participants will: (1) surface emotional and structural barriers to grading reform at their own institutions; (2) examine PSU’s five-session CoP arc (“The Why,” “The What,” “The How,” “The Who,” “The Launch”) as one case of campus-level design; and (3) draft a one-page launch plan for a local alternative-grading CoP, including recruitment, focus, session structure, and shared artifacts. Using Zoom breakout rooms, collaborative documents, and peer feedback, attendees will experience key CoP design principles—psychological safety, cross-disciplinary dialogue, and focus on one bounded pilot—while building a transferable playbook they can adapt to their own institutional and disciplinary contexts.
From Points to Possibility: Faculty-Led Grading Innovations for Generation Alpha Learners
Elle Corvette, Kristi Rittby, Jennifer Blush, Wade Newhouse Organized Session
Tuesday, June 16, 4:30 — 5:45 PM EDT
What happens when a generation raised on instant feedback, endless choice, and algorithmic curation walks into classrooms still organized around letter grades, percentages, and one-shot high-stakes exams? As higher education welcomes a new cohort of Generation Alpha students (digitally fluent, feedback-oriented, and attuned to equity and belonging), faculty are reexamining grading practices to align with these evolving learner profiles (García & Weiss, 2022; Harrington, 2025; Mackh, 2024; Miller, 2023). This collaborative presentation highlights three faculty members’ classroom innovations across different disciplines, each centering specifications- and standards-based approaches to grading: specifications grading in mathematics, standards-based grading in psychology, and a blended specifications/standards model in an intensive writing course. This work emerges from a small, teaching-focused liberal arts university that primarily serves undergraduate students across a range of disciplines. Grounded in literature that challenges traditional grading models (Blum, 2021; Feldman, 2019; Nilson, 2015; Guskey & Brookhart, 2019), this session explores theory-to-practice insights that prioritize feedback, transparency, and motivation for Generation Alpha learners.
Grades as a Site of Sensemaking: How Students Process Shifts in Identities and Expectations
Melissa Ko, Rachel Weiher, Kai Korporaal, Lynn Chien, Anika Yu Supporting Student Identity Development
Thursday, June 18, 2:15 — 3:30 PM EDT
Many instructors have sought to reform how grades are assigned in their courses. However, as long as grades exist, the question remains: How do students make sense of grades, particularly when receiving low grades damages their self-concept? At a large, highly selective R1 university, we characterized student beliefs when confronted with the shock and disappointment of unexpected low grades. Through a mixed-methods analysis of undergraduate students representing diverse academic and social identities, we captured individual expectations around grades and group dialogue in which participants processed experiences of low grades and attempted to make sense of how and why this happened. Our data surfaced that students previously identifying as high achievers had to contend with shifts in this identity in a new, highly rigorous institution. Students identified multiple reasons for low grades centered on a disconnect with instructors. Several students described unfair application of grading standards or unspoken assignment expectations as determinants for low grades, not a lack of effort or capability. Others shared specific stories wherein students believed instructors acted in bad faith when assigning grades. Ultimately, student conversations explored possible reasons for low grades as participants attempted to make sense of the dissonance between their perceived capacity to learn and their actual academic performance. While grades remain a fixture in higher education, students struggle with the college transition, particularly when grading expectations shift from what was familiar. Attention to how students process the emotional shock of a low grade, and how they may gravitate towards certain cause-and-effect explanations, can help instructors to guide students towards productive and resilient responses to hardship. Moreover, these findings suggest instructors can more clearly communicate the reason for low grades in ways that students can understand and accept.
Holistic Grading: Aligning Instructional Methods with Course Content
Leigha McReynolds, Alexandra Harlig Workshop
Wednesday, June 17, 3:45 — 5:00 PM EDT
This workshop offers participants the opportunity to consider how critical grading practices can directly and thematically support student competency in the values, skills, and methods of their course’s topic and discipline. Equally, themes and concerns of the course content can bolster student buy-in to the assessment paradigm and ethos.
This workshop is suitable both for instructors looking for guidance to implement critical grading for the first time, and veteran practitioners who want their grading practice to be a more holistic and integrated aspect of their pedagogy.
We both have several years' experience with critical grading in our own classes – including labor-based contract grading and ungrading – and have offered critical grading workshops for our department and university as well as at last year’s Grading Conference. This workshop builds on our previous work around values-based grading, moving from a discussion of policies and assignments to grounding pedagogical choices in a specific class context. We came to critical grading through our scholarly work; Dr. Harlig first used labor-based grading in a class on labor as another way for students to think about how labor is valued. Dr. McReynolds began ungrading in a class on disability and eugenics because both conversations critique dividing people based on numerical values. We want to help others come to critical grading grounded in their expertise. As facilitators, we encourage instructors to think concretely about implementation without prescribing a specific practice.
The workshop will include a reflection, a brief explanation of how this has worked in our classes, and activities – such as breakout room discussions, time for independent work, and general Q&A – for participants to work through and implement the material. Participants should leave with possible thematic connections, at least one practical strategy, and the inspiration to reflect on the ethos underlying their class or discipline.
How Educators Build and Adapt Technological Systems to Support Alternative Grading: A Mixed-Methods Study
Jacob Adler Systems of Scale
Thursday, June 18, 12:30 PM — 1:45 PM EDT
Alternative grading systems may have implementation requirements that differ from those used for traditional points-based grading, yet little research has examined how educators build or use tools to support these systems. Using a hybrid TPACK and TAM/UTAUT framework, we surveyed educators who self-identified as implementing alternative grading. We examined what technologies they use (such as Learning Management System (LMS) features, external spreadsheets, or custom tools), how they develop or adapt these systems, and how perceived usefulness shapes their design choices. Quantitative results show that more than half of participants report that setting up their alternative grading system requires more work than their traditionally graded courses. A little over half consulted with colleagues within or beyond their institutions, and some received help from instructional support staff. Initial qualitative analyses indicate that many educators use hybrid systems that combine LMS tools for submissions and feedback with external spreadsheets to track grade progress. Others mentioned using physical paper or custom-coded software. Educators who rely on LMS-only approaches often do so by using specific LMS tools, formulas, simple grading techniques (such as complete/incomplete), or even points-based grading to remain within the LMS. Some emphasized that revision and reassessment are central to their pedagogy. When the LMS gradebook cannot accommodate these processes, a hybrid system becomes necessary. Some also noted the complexity and effort of LMS setup and concerns about data management. Overall, there is a mismatch between the pedagogical needs of alternative grading and the capabilities of many LMS gradebooks. Educators build supplemental or external systems to support their practices. The results highlight the importance of institutional support and collaboration in course design and suggest a need to expand and improve LMS features to better support alternative grading pedagogy.
How Students Experience an Ungraded Classroom
Sarah Beal Panel
Wednesday, June 17, 3:45 — 5:00 PM EDT
This panel will bring together three students from two different ungraded courses that were taught during the proposer’s first implementation of an alternative grading approach. In these ungraded courses, students received formative feedback throughout the semester and engaged in guided grading reflections. They then met with the instructor individually at the end of the semester to self-assign their own final grades.
The student panelists will be invited to provide an honest account of their experiences in these courses. They will share their first impressions, discomforts, and uncertainties. They will also share how this grading approach impacted their motivation in the course and their perception of academic challenge. The panel will include students from different majors, different background experiences, and different neurotypes to emphasize how experiences can vary across diverse identities.
This panel is for instructors who are interested in experimenting with alternative grading practices or those already engaged in such approaches. They will gain insight into how students perceive self-evaluation, the impact that new grading approaches have on students’ perceptions of workload and anxiety, and where there are opportunities to better support students, especially those who are neurodivergent. The primary objective of this session is to emphasize the role of students as active agents in shaping alternative grading approaches.
Questions:
1.) What was your first impression of the ungraded format and how did your perception of self-assessment change throughout the semester?
2.) What was challenging about the grading approach? How did you adapt or what support from your instructor was most helpful?
3.) Do you think that this grading approach enhanced your learning experience?
4.) What do you wish the instructor would have done differently?
5.) What advice would you give to future instructors who use this approach? What advice would you give to future students?
How and Why to Use Specifications Grading: The Latest Research
Linda Nilson Workshop
Tuesday, June 16, 4:30 — 5:45 PM EDT
A grading system should operate transparently, motivate students to learn and excel, uphold high standards, promote student use of feedback, and allow you to have a life. And it shouldn’t generate conflict with students or undue stress for you or them. How well does our current system perform? No wonder alternative systems are gaining a following.
This workshop is for all faculty, new or experienced, and requires only one semester of traditional points grading. It presents an alternative grading system, specifications (specs) grading, and provides evidence from surveys, videos, and publications of 120 users that it restores rigor, motivates students, and reduces grading time, as claimed in the subtitle of my 2015 1st-edition book that introduced it. It also helps students develop career competencies and transition into the workforce. These findings appear in the 2nd edition, Specifications Grading 2.0: Restoring Rigor, Motivating Students, Saving Faculty Time, and Developing Career Competencies (2026), by myself and Joe Packowski.
The system works effectively because it gives students choices and control while holding their work to high academic standards. Specs grading is based on acceptable/unacceptable grading of assignments and tests, tokens for limited re-do’s and extensions, and bundles of assessments linked to learning outcomes and final letter grades. Participants will hear or see actual course examples of specs for various assignments and assessment bundles varying by the amount and/or challenge of the work required. For activities, participants will develop specs for an assignment and bundles for final course grades. By the end, they will be able to:
• Summarize new evidence on specs grading's merits and challenges
• Explain how specs grading works
• Implement it, in whole or in part, in their own courses.
In addition, most students prefer specs grading to points-based grading, and it’s easy to implement in any size class and on any platform.
Implementation of Specifications Grading in a Master’s Level Public Health Course
Miruna Buta, Anya Kazanjian Novel Implementations
Wednesday, June 17, 2:00 — 3:15 PM EDT
The University of Washington requires all Master of Public Health students to take 6 core courses designed to provide foundational public health competencies. One of these courses is PHI 511: Foundations of Public Health. In Autumn 2025, we implemented a specifications grading system in the online version of PHI 511.
The course included 11 weekly writing practices graded Pass/Revise, 10 weekly knowledge checks graded 0-100%, and a group project graded Excellent/Satisfactory/Needs Improvement. The course met synchronously on Zoom 6 times over the quarter.
We created 6 assignment bundles corresponding to grades between 4.0 and 2.7. The 4.0 bundle included 11 writing practices graded Pass, 10 knowledge checks scored 80% or above, engagement in 6 Zoom sessions, and Excellent group project participation. Other bundles included various combinations of these assignments.
Analysis of key quantitative measures from student evaluations (2021 to present) revealed that the Autumn 2025 offering of the course received higher-than-average scores. The Overall Summative Rating of the course was 4.7, compared to a historical mean of 4.5 (range: 4.2-4.7). The Challenge and Engagement Index was 5.3, compared to a historical mean of 4.8 (range: 4.5-5.2). In qualitative comments, students overwhelmingly described grading as fair, supportive, and focused on learning. They valued the Pass/Revise approach and the chance to revise writing after constructive feedback but requested clearer writing expectations and quicker grading turnaround time.
The Pass/Revise grading system created anxiety for some students and made it difficult for instructors to highlight excellent work. The different types of assignments made bundles quite complex and challenging to convert to the 4.0 system. In future offerings, we intend to provide more rationale for the system, reword rubrics to help reframe incompletes as a growth opportunity, and explore ways to simplify bundles.
Implementation of Point-Free Mastery-Based Grading in a 200-Level Genetics Course at Ithaca College
Rebecca Brady Integrating Peers and Social Media
Tuesday, June 16, 2:45 — 4:00 PM EDT
Traditional grading schemes use numerical scores to measure mastery, enforce completion of formative assignments, and provide feedback. However, this can penalize learning by lowering grades on early formative assignments. These systems also create a heavy grading load and often shift students’ focus from learning the material to simply earning points.
Beginning in Fall 2023, I implemented a non-numerical mastery-based grading approach in a 200-level Genetics course at Ithaca College. This course serves Biology and related majors with enrollments of ~20 students. Lectures are taught in a flipped format and primarily assessed with in-class exams. In the new system, students receive a Mastered, Progressing, or Not Yet on exams for each content area based on pre-determined rubrics. Students have three opportunities to demonstrate mastery. Semester grades are based on the total number of content areas mastered. Formative homework assignments do not count toward the final grade.
Students must submit a problem set on each content area before taking the exam, with written feedback given for each problem set. Reflective corrections on both the exam and the problem set are required before students may re-assess for that content area. Additional study guides and quizzes are provided but not required.
This system promotes continuous student engagement, as a student is never limited due to past performance. In 2025, all 16 students who had not yet mastered every content area took the third reassessment, and all but one improved their mastery score. In the last 3 years (n=53), students mastered an average of 6 out of 9 content areas, with 58% mastering 7 or more. Many student evaluations mention that this design prioritizes their learning. The reduced grading burden allows me to refocus my time on providing individual support. Future iterations of this system will focus on incorporating a wider variety of assessment types and finding effective ways to apply this system in larger courses.
Implementing Pointsification: Surveying Student Perceptions of Flexibility with Alternative Grading
Lauren DiSalvo Finding Flexibility, Equity, and Joy
Thursday, June 18, 4:00 — 5:15 PM EDT
This talk will focus on an alternative grading schema for lower-level, general education classes with high enrollment. The alternative grading schema that I created most comfortably fits under the classification of pointsification, which is the use of points to motivate students to engage with assignments. In my classes, students only ever accumulate points as they pick and choose which type of assignment to complete from an offering of variable assignments that offer more potential points than are required for the top grade in the class. I have adopted this alternative grading system to give students more flexibility and control over their grades.
In addition to detailing the structure of the points system, I will cover strategies for onboarding students, keeping students informed about points remaining, working around LMS gradebook constraints, and providing students with individualized updates about the grades outside of the LMS.
While many alternative grading studies measure student motivation, engagement, or outcomes, I wanted to measure student perceptions of flexibility around four criteria: managing deadlines, managing course load, grade outcomes, and assignment choice. Additionally, the survey includes questions about how students manage their assignments, since the success of a flexible system depends on that skill.
Official data from the Spring 2026 survey are forthcoming, but I have already tested the survey for validity. Preliminary results suggest that students perceive high amounts of flexibility. Students rated themselves only average on their perception of their own assignment management abilities, which might explain the inverse bell curve for a question about whether the system provided students with too much flexibility. The results suggest that while students enjoy the flexibility, I need to build in strategies to help students manage assignments for this to truly be a successful system.
Implementing Standards-Based Grading in Coordinated Precalculus Pilots
Luvreet Sangha Novel Implementations in Math Courses
Thursday, June 18, 2:15 — 3:30 PM EDT
To reduce DFW rates in Precalculus, our department piloted two new courses: a "supported" model featuring active-learning discussion sections and a "stretch" model spanning two quarters. Both use a coordinated Standards-Based Grading (SBG) framework with unified learning objectives and a 0-4 proficiency scale.
This presentation discusses the logistics of aligning assessment problems and standards within multiple coordinated models. While quantitative data is forthcoming, early student narrative evaluations highlight how SBG’s transparency and reassessment flexibility provide vital support for vulnerable learners in these courses.
Participants will explore strategies for:
(1) Maintaining consistency and equity through shared standards.
(2) Coordinating SBG implementation across varying course lengths and structures.
(3) Leveraging qualitative student feedback to refine the courses.
Learning from Helpful Feedback: Strategies for Self-graded Mathematics Homework
Suzanne Dorée Enhancing Feedback and Communication
Thursday, June 18, 4:00 — 5:15 PM EDT
One pillar of Alternative Grading is providing students with helpful feedback. How often do we provide students with what we believe is helpful feedback only to find that they have not looked at the feedback (or did not understand it)? To help close the feedback loop, for the past few years I have been experimenting with several methods for having students grade their own homework in midlevel and advanced university mathematics courses (Discrete Mathematical Structures, Linear Algebra, Abstract Algebra, and Graph Theory) in class sizes of 20-30 students. One method is based on annotated solutions I provide. The other uses generative AI (LLMs). My findings are that students are capable of doing this work, they report increases in their learning (versus having comments from the instructor, a student grader, or online homework), they accept self-grading as a valid learning activity, and they are less likely to inappropriately use AI or other resources. It also saves me considerable time, which allows me to support students in other ways. In this talk, I will share the implementation logistics, recommendations to new adopters, assessment results, and my own reflections.
Lessons Learned from Standards Based Grading in Large Multivariable Calculus
Hunter Lehmann Scaling Implementation
Tuesday, June 16, 1:00 — 2:15 PM EDT
In this talk, I will discuss the strategies I used to successfully implement standards-based grading in a large lecture (200 student), loosely coordinated multivariable calculus course. The talk will focus on ways to mitigate the particular challenges of a large class, including managing assessment time and reattempts, leveraging (and falling victim to) technology, and training teaching assistants to grade consistently in this context. Particular takeaways include judicious use of multiple-choice testing to keep grading workload manageable, consistency of assessment design to help with consistency among graders, and use of a cumulative test framework to handle the challenge of proctoring large numbers of students without a university testing center.
Lessons from Faculty Learning Communities in Alternative Grading
Sarah Justice, Sarah Klanderman Impacts on Faculty and Students
Wednesday, June 17, 2:00 — 3:15 PM EDT
Alternative grading practices hold tremendous promise for improving student learning and equity, yet implementation remains challenging for many instructors. At Marian University, a private, Catholic, primarily undergraduate institution in Indianapolis, we developed a learning community (LC) that supports educators through both the design and implementation phases of adopting alternative grading approaches. This session will share practical lessons learned from facilitating these communities and offer a framework for institutions seeking to support systemic grading reform. We will address critical components of successful LCs focused on alternative grading, including the importance of support during both design and implementation, the value of LC participants coming from diverse disciplines and career stages, and how to generate buy-in. Attendees will leave with strategies for launching or strengthening LCs at their own institutions, including tips for effective facilitation, sample LC resources, and ways to sustain momentum beyond initial enthusiasm.
Mastery-based grading for a coordinated large first-year course
Xinli Wang, Jamie de Jong, Michelle Davidson Scaling Implementation
Tuesday, June 16, 1:00 — 2:15 PM EDT
In Fall 2025 we adapted a first-year course, MATH1010 (Applied Finite Mathematics), at the University of Manitoba to incorporate mastery-based grading. This course is commonly taken by students who are not confident in their mathematical abilities to meet the university's mathematics requirement. The course was previously run using a dual-track system in which students could use one "redo" opportunity to switch from track A to track B. The dual-track system was resource intensive, requiring two large rooms and two instructors in the same timeslot. It also resulted in long waitlists, which prompted us to look for alternatives.
The core idea of mastery-based grading is allowing students to have multiple opportunities to demonstrate their learning without being penalized for mistakes they made. Once all 20 learning outcomes were clearly defined for this course, we were able to find a reasonable testing schedule which allowed students to attempt each learning outcome at least three times within the 12- to 13-week term. This adaptation has created more flexibility for students in their learning, while allowing the course to be expanded to meet the enrolment needs.
This course design was very successful in allowing students who take more time to grasp the material the flexibility to reattempt the assessments and choose how to best focus their efforts. On the instructor side, we were able to focus on teaching the students the needed skills, rather than on evaluating partial credit. There were some challenges, including a high workload for the initial implementation and helping students understand the new assessment structure.
In this talk we will discuss in more detail the motivation and results from our work with mastery-based grading in MATH1010, the challenges faced by the teaching team, the adjustments that have been made for the winter term, and the changes that are being made to fit the structure to the six-week offering in the summer.
Modeling Equity Through Ungrading: Negotiating Tensions, Structures and Discourses in Teacher Preparation
Rachel Silva, Maizie Dyess, Chrissy Johansen Integrating Teaching Teams
Thursday, June 18, 12:30 PM — 1:45 PM EDT
Implementing alternative grading systems within teacher education programs provides an opportunity to model one method of creating equitable and culturally responsive classrooms for developing educators. Using practitioner inquiry methods, three teacher educators teaching a course centered on equity and culturally responsive education explored their experiences using ungrading to create more equitable education systems in their courses. This study was guided by the question: How did three first-time faculty members negotiate the tensions they experienced while integrating ungrading into a course based in equity and culturally responsive education? Through qualitative critical discourse analysis of faculty reflective journals, transcripts of collaborative planning meetings, and concept maps created during arts-based data analysis workshops, researchers found that although there was support for ungrading within their institution, university structures such as portfolio-based evaluations and systems related to state teaching licensure created tensions in their work. Conversations regarding grading also illuminated how their experiences with grading as students informed complex discourse regarding who “deserves” particular grades, and what those grades signify. From these findings, the researchers suggest universities expand opportunities for faculty to discuss discourse regarding grading and its connection to perpetuating inequities in education. These conversations about grading can explore how reform-based grading initiatives can further inform conversations about disrupting inequities in education; however, they must also be complemented by structures that support alternative grading within universities, and more specifically, professional licensure granting programs.
More Than One Way to Capstone: A Tiered, Mastery-Based Approach to Equitable Grading
Ashley Black Systems of Scale
Thursday, June 18, 12:30 PM — 1:45 PM EDT
Traditional history capstones often culminate in a single, high-stakes research paper. At my small, rural HSI, this model can function less as a demonstration of mastery and more as a barrier to completion. My students are largely first-generation and Pell-eligible; many are from mixed-status families, work multiple jobs, and are caretakers. For some, the central goal is simply to earn the C required to pass the course and complete their degree—a reasonable goal that traditional capstone structures often fail to accommodate. With this in mind, in Spring 2026 I redesigned my Senior Seminar to better align expectations with varied student circumstances.
This presentation describes the design and implementation of a tiered, mastery-based grading structure inspired by specifications grading. The course begins with a C Tier focused on seven core disciplinary competencies, such as document analysis and construction of a scholarly argument. Students must demonstrate proficiency (assessed as Pass/Revise) in each competency to earn a C. Once mastery is achieved, students may elect to pursue either a B Tier research paper (roughly 10 pages) or an A Tier research paper (roughly 20 pages). Both pathways require proficiency-level work; they differ only in scope and ambition, not standards.
The structure is designed to respect that some students need a viable path to completion, while preserving rigorous expectations and encouraging others to pursue more ambitious projects. Early student feedback suggests increased clarity about expectations, a stronger sense of control over learning, and greater willingness to attempt the A pathway when a B-level alternative remains available. I argue that capstone redesign should begin by identifying core disciplinary competencies and building grading structures outward from demonstrated mastery rather than from a single scaled assignment. The session concludes with instructor reflections on lessons learned and future iterations.
Preparing Faculty for Un-grading
Elizabeth Harsma, Kevin Dover, Kelly Moreland Workshop
Tuesday, June 16, 4:30 — 5:45 PM EDT
In this workshop, presenters will share insights from their forthcoming book, Preparing Faculty for Equitable Assessment: A Guide for Un-grading Professional Development. The book builds on a professional development certificate they offered to university faculty in Summer 2023. This workshop will explain their flexible approach to un-grading professional development that is incremental, interdisciplinary, and collaborative. Presenters will also offer practical instructional design steps that model equitable facilitation and un-grading assessment, and address common challenges. As a next step, participants will then adapt the example program for use in their own contexts. This workshop is for participants with some experience in un-grading and an interest in facilitating un-grading professional development. By the end of the workshop, participants will be able to (1) explain the incremental, interdisciplinary, and collaborative approach to un-grading professional development, (2) review different types of un-grading assessment methods, (3) describe strategies for preparing faculty to respond to un-grading challenges, and (4) explore ways to implement this example program in their own context.
Project-Based Grading in an Abstract Algebra Classroom
Sayonita Ghosh Hajra Novel Implementations in Math Courses
Thursday, June 18, 2:15 — 3:30 PM EDT
This paper presents a project-based grading approach implemented in an undergraduate Abstract Algebra course. The course design emphasizes collaborative learning, mathematical communication, and authentic assessment. Instead of relying on exams, students earned a portion of their course grade through structured group projects.
Students worked in groups of three to explore abstract algebraic concepts in depth, synthesize their understanding, and communicate their findings through multiple formats. Each project included clearly defined milestones, such as draft and final submissions, and culminated in the creation of mathematical posters. Students presented their work at college-wide symposiums, providing opportunities to communicate complex mathematical ideas to both mathematical and non-specialist audiences. To support student success, presentation practice and formative feedback were provided during office hours prior to the symposiums.
Assessment focused on conceptual understanding, collaboration, clarity of communication, and reflection, using detailed rubrics and ongoing feedback throughout the semester. Assessment results and instructor reflections indicate increased student engagement, improved communication skills, and deeper conceptual understanding of abstract algebra topics. Student reflections further suggest that public presentations enhanced confidence and motivation, while collaborative projects supported peer learning and accountability.
The paper concludes with recommendations for instructors interested in adopting project-based grading in upper-division mathematics courses, including strategies for scaffolding projects, managing group dynamics, aligning grading criteria with learning objectives, and refining the approach for future course iterations.
Rediscovering Joy: My Journey from Adversary to Ungrading Facilitator
Michelle Abbott Finding Flexibility, Equity, and Joy
Thursday, June 18, 4:00 — 5:15 PM EDT
Battling mid-career burnout led me on a journey from strict late policies I did not enforce and detailed rubrics to comprehensive ungrading. Instead of functioning as an adversary with grade-based power over students, I am now a facilitator of student learning in courses where academic rigor is driven by student curiosity and growth mindset. In this session I will describe my experimentation with specifications grading, eliminating weekly deadlines, token systems, TILTing assignments, and ungrading in both Freshman Composition and Sophomore Literature courses, all of which I teach online, 95% asynchronously, at an open-access institution. My current approach incorporates written self-reflection after each major unit, a self-assigned final course grade, and grade-free, descriptive instructor feedback on drafts and revisions for all assessments (essays and multimodal projects only; no exams). Students are not prepared for ungraded learning spaces, so they do not take full advantage of this approach until I demonstrate my interest in their academic success and introduce them to ungrading research. Questions shift from, “What is my grade? What do I need to do to pass this class?” to “Should I move this quotation to paragraph two when I revise?” and “What can I do when I’m hit with Writer’s Block?” Pioneering research from Stommel, Nilson, Blum, and others has proven ungrading to be a beneficial approach to assessment and grading, but I could not have envisioned how much I look forward to “grading” assessments now that my only goal is helping students improve. I love teaching composition and literature again, and I am hoping that sharing what I have learned will give faculty in all phases of their careers encouragement to explore ungrading themselves.
Reducing Cognitive Anchoring on Partial Credit: A Growth-Mindset Approach to Feedback in a Weighted High School SBG System
Jason Elsinger Enhancing Feedback and Communication
Thursday, June 18, 4:00 — 5:15 PM EDT
Students often focus on points rather than revision, treating partial credit as a stopping point instead of an opportunity for growth. This presentation describes a points-based standards grading system implemented within a traditional weighted high school grading structure, and the communication strategies necessary to sustain it. Grounded in a growth mindset framework, the modification is designed to reduce cognitive anchoring on partial credit and redirect student attention toward written feedback and reassessment.
In earlier iterations of my standards-based grading (SBG) system, assessments used four marks (0–3). In the revised high school model, I use three marks (0–2). When students earn partial credit, the numeric fraction is not written on the assessment. Students must instead interpret feedback, identify misconceptions, and revise their thinking before reassessment. This intentional friction promotes metacognitive engagement rather than passive acceptance of a score.
To examine the impact of this change, students will be surveyed about their attention to feedback in this course compared to traditionally graded courses. In addition, the STAI-5 and the Abbreviated Math Anxiety Scale (AMAS) will be administered at the beginning and end of the course to explore changes in test and math anxiety over time. Comparisons between college and high school implementations will highlight differences in communication, institutional constraints, and student perceptions of reassessment.
Implications for instructors balancing mastery-based assessment within traditional grading systems will be discussed.
Retesting without penalty to promote student learning in large-enrollment introductory courses
Danielle Condry Panel
Wednesday, June 17, 2:00 — 3:15 PM EDT
For most students, learning is not instantaneous—it develops through practice, feedback, reflection, and revision. Decades of research in the Learning Sciences demonstrate that students learn more deeply when they actively engage with instructor feedback and use metacognitive strategies to diagnose errors, monitor understanding, and adjust their approaches. Yet in many courses, traditional exam structures and fast-paced curricula turn assessments into endpoints: once grades are posted, the opportunity to learn from feedback has largely passed.
This panel explores retesting without penalty as an instructional practice that repositions exams as part of the learning process rather than the conclusion. Panelists will share diverse experiences implementing retesting across courses that vary in size, modality, disciplinary content, grading structures, and institutional constraints. Through these case-based perspectives, the panel will examine how retesting can create structured opportunities for students to engage meaningfully with feedback, revisit misconceptions, and demonstrate improved reasoning and accuracy on subsequent assessments.
Panelists will discuss evidence of student learning drawn from patterns of feedback engagement and qualitative improvements in retake performance, as well as instructional tradeoffs, sustainability, and lessons learned. The session is designed to serve instructors, instructional designers, and educational leaders seeking practical, evidence-informed assessment strategies that support inclusive and learning-centered teaching.
Rethinking the Grade: Problems with Traditional Grading
Ashleigh Fox Presentation
Tuesday, June 16, 1:00 — 2:15 PM EDT
This session situates alternative grading within decades of research critiquing traditional grading practices as ineffective and, in many cases, harmful to students. The talk will define “traditional grading” and provide a brief overview of its history before turning to the literature on the problems traditional grading poses for learning, particularly regarding motivation (Blum, 2020; Butler, 1998; Chamberlin et al., 2018), consistency (Brookhart et al., 2016; Starch & Elliott, 1912), equity (Ashby-King et al., 2021; Chemaly, 2015; Inoue, 2020; Lince, 2021; Rapchak et al., 2023), mental health (Bouchrika, 2020; De Luca et al., 2016; Reinberg, 2018), and self-efficacy (Anderson, 2018; Lake et al., 2018). In light of these limitations, the session concludes by suggesting that alternative grading methods may better support learning by reflecting real-world contexts and recognizing the diverse ways individuals acquire and demonstrate knowledge.
Scaling Alternative Grading with a Computer-Based Testing Facility
Craig Zilles Systems of Scale
Thursday, June 18, 12:30 PM — 1:45 PM EDT
A keystone in many alternative grading approaches is the notion of "re-takes" because we are less concerned with _when_ a student demonstrates a learning objective than _that_ they demonstrate the learning objective. These re-take opportunities create an additional administrative and grading burden for faculty and their staff, which is a barrier to adoption for many faculty who otherwise see the value in offering re-takes.
For the past 12 years, the University of Illinois has been running a Computer-Based Testing Facility (CBTF) that drastically reduces the effort of running exams, greatly facilitating alternative grading practices. For example, in the 300-student computer organization class that I (solo) teach in CS, we run 7 mid-terms (every 2 weeks) and in the off weeks we run re-take exams, plus a comprehensive 2-hour final. In Fall 2026, 65 courses used the facility, and we ran over 120,000 exams. This idea has withstood the tests of time and scale and is being adopted by a number of other universities (e.g., UBC, UC Berkeley, UC San Diego, Michigan, NYU).
In this talk, I'll explain the four key ideas that make the CBTF work: (1) the ability to ask sophisticated questions (we don't want to dumb down exams) and, because you are capturing student work in a digital form, (deterministically) auto-grade a broad range of questions, (2) instead of writing individual questions, writing reusable "question generators" that use code/randomization to create a collection of unique questions, (3) running exams asynchronously (students self-schedule) to alleviate dealing with conflicts, and (4) using dedicated testing space, proctors, and institutional computers (so we can prevent the use of genAI). Furthermore, our CBTF effortlessly supports upwards of 98% of student testing accommodations in these classes.
Striking a balance between Completion-Based Grading and Grading for Accuracy in a High-Structure Organic Chemistry Course
Olivia Crandell, Kelsey Metzger Enhancing Feedback and Communication
Thursday, June 18, 4:00 — 5:15 PM EDT
High-structure course design situates learning opportunities across three phases: a preparatory phase (pre-class assignments), a practice phase (student-centered, in-class activities), and a polish phase (homework and review assignments). We have implemented this course structure in an Organic Chemistry I course with the goal of shifting ~70% of content delivery to the “prepare” phase to maximize student-centered learning opportunities during class. However, establishing strong student buy-in and engagement with preparatory phase assignments is essential for success in this high-structure model. Preparatory assignments were graded using a combination of completion-based grading and accuracy-based grading throughout the semester, depending on the targeted learning objectives. This study aims to investigate the ways that the varying grading structures for preparatory assignments impact students’ perceptions of the assignments, as measured by quantitative survey data, and to examine trends between students’ perceptions and students’ actual response scores under both grading structures. We have evidence suggesting strong student buy-in for preparatory assignments as measured by students’ perceptions of the value of preparatory assignments. We hope to understand how varying between completion-based and accuracy-based grading approaches played a role in student buy-in and overall success with preparatory assignments. We will conclude with implications for teaching, discussing the pedagogical goals guiding our decisions to frame certain assignments and learning objectives under each grading structure.
Student Agency Through Collaborative Grading
Christopher Sarkonak Workshop
Thursday, June 18, 4:00 — 5:15 PM EDT
Classrooms should be student-centered, and students need to feel that they truly have ownership of their learning environment. This workshop looks at a high school physics teacher's implementation of a portfolio-based, collaborative grading approach that centers student agency and well-being. We will explore this approach, the successes and failures along the way, and examples of how it can be modified and what it can look like in other disciplines. Along the way, participants will work together to brainstorm ideas for how students can be more involved in the assessment process in their own contexts, and will walk away with strategies that can be further developed and implemented at the start of the new term. Teachers who have had previous issues with student buy-in, or who are not sure where to start on co-constructing assessment with students, can walk away with insights and strategies to help them succeed in this shift. There will also be some exploration of the research that has been done in this classroom, insights into the considerations behind the decisions that were made, and time for a general Q&A to answer participants' questions.
The +1 Framework for Pedagogical Change: Supporting Instructors in Small Moves toward Grading Equity
Melissa Ko, Rachel Weiher Finding Flexibility, Equity, and Joy
Thursday, June 18, 4:00 — 5:15 PM EDT
Instructors interested in alternative grading often encounter dissonance between the discourse of transformational grading reform and the practical realities of their own teaching contexts. Departmental norms, inherited courses, limited capacity, and tensions with professional identity can all complicate a large-scale redesign. In this session, we describe the “+1” philosophy of our instructor development approach, which emphasizes making one manageable and possibly reversible change tailored to one’s teaching. This model draws on Vygotsky’s Zone of Proximal Development (ZPD) and the Circle of Control to help instructors find the overlap: we support instructors in identifying a change that both fits in their individual sphere of control and presents a reasonable level of challenge given their pedagogical knowledge. Implemented through structured group workshops at large research universities in response to past participant feedback, our programming begins with diagnostic self-reflection and conversations that help surface instructors’ own constraints, autonomy, and readiness. In the spirit of differentiated instruction, instructors explore and choose from a menu of small moves toward a more equitable grading system. This choose-your-own-adventure structure normalizes incremental progress and low-risk experimentation for instructors navigating significant barriers. Early outcomes suggest that an empathetic and flexible approach, rather than a prescriptive one, leads to less resistance and overwhelm among instructors. Through the lens of the expectancy-value theory of motivation, our instructors need institutional support that bolsters their self-efficacy rather than an appeal to their values. This session concludes with recommendations for how educational developers can meet instructors where they are. We advocate for the incremental or “+1” approach as a scalable path to grading reform across any large, decentralized institution.
The Immediate and Long-Term Impact of Using Specifications-Based Grading in General Chemistry
Erin Wilson Impacts on Faculty and Students
Wednesday, June 17, 2:00 — 3:15 PM EDT
This presentation will focus on the short- and long-term outcomes of eight years of implementing a specifications-based grading practice in general chemistry at a small, liberal arts college. Outcomes for students who took general chemistry with traditional grading were compared with those who took general chemistry with specifications-based grading over this time period. The groups were compared on outcomes including general chemistry grade, grades and overall passing rate in subsequent chemistry courses, and direct measures of learning through nationally-normed American Chemical Society (ACS) exams. The results demonstrate that students who took general chemistry with specifications-based grading achieved higher grades in that and subsequent chemistry courses. This cohort of students also had a greater chance of passing subsequent chemistry courses, especially organic chemistry. Finally, significant improvements in ACS exam scores were observed in the specifications-based grading cohort. Of particular interest is that the improvements in both grades and learning measures are greater for men than for women. I hypothesize this may reflect reported differences in academically-important skill development between men and women. Girls and women in secondary and undergraduate schooling have consistently scored higher than boys and men in academic skills such as self-discipline, organization, and goal-oriented behavior (collectively, conscientiousness). Specifications-based grading may develop skills in these areas, allowing men to close that skill gap and improve their performance throughout their college chemistry education.
The Original Portfolio: Leveraging Arts Practices in Alternative Grading
Kimberly Hall Panel
Thursday, June 18, 2:15 — 3:30 PM EDT
This panel explores arts grading practices that emerged from an educational tradition that was almost completely without grading until art schools joined universities in the 1980s (Muhammad 2023). Grading in art schools was rarely organised like that of academic subjects, with few exams and papers and more portfolios and objects. Current practice in teaching the arts can offer interesting examples of innovative approaches to learning and assessment that would benefit a wider audience. Assessment in the arts presents unique challenges: aesthetic judgement, subjectivity, creativity, and personal approach can feel impossible to grade clearly or objectively (Fleming 2012), but academics in other disciplines might find useful insight into the complexity of assessment from a new vantage point.
In my own experience of art school there were no grades at all, only portfolio reviews and progression decisions (or not). That experience has shaped my own approach to teaching and assessment and brought me to alternative grading practices. This panel brings together a range of teaching artists to discuss grading in an arts context, both to examine practices in the discipline and to share tools and approaches with the wider teaching audience. We will discuss portfolio presentations, the special tools artist-teachers develop, how writing fits into art school assessment, and what artists think of rubrics and matrices.
Fleming, M., ‘Assessment’, in The Arts in Education, Taylor and Francis, 2012.
Muhammad, Z., ‘The Entire History of Art School’, The White Pube, 2023. Available at: https://thewhitepube.co.uk/texts/2023/history-art-skl/
Un-Grading for Equity: Implementing Contract Grading for Inclusive Assessment in a Spanish as a Heritage Language (SHL) Writing Classroom
Andrea Hernandez, Christian Puma Ninacuri Supporting Student Identity Development
Thursday, June 18, 2:15 — 3:30 PM EDT
Assessment practices significantly influence how SHL learners understand their writing abilities and construct their identities as bilingual writers. Traditional grading models often reinforce linguistic hierarchies that conflict with students’ lived language practices. This presentation examines the implementation of contract grading (Danielewicz & Elbow, 2009; Inoue, 2015) as an inclusive assessment framework that promotes students’ critical language awareness (CLA) (Fairclough, 1992). Rather than emphasizing the quality of a final product, contract grading repositions assessment as a collaborative, process-oriented practice that values effort, reflection, and growth.
Drawing on a study conducted in an SHL course at a liberal arts college in Maine, this presentation explores how instructors can design and sustain un-grading systems that affirm linguistic diversity while maintaining academic rigor. Using a qualitative mixed-methods design (including pre- and post-course surveys and reflective journals), the study illustrates both the opportunities and challenges of shifting from traditional assessment to contract grading. The analysis focuses on pedagogical design decisions, instructor mediation of linguistic ideologies, and the ways un-grading can foster a more equitable classroom ecology. Preliminary findings highlight specific strategies that supported students’ engagement and agency: clearly defined effort- and revision-based expectations, structured opportunities for reflection on writing choices, and explicit discussion of linguistic norms and ideologies. Students reported greater confidence experimenting with voice, register, and translanguaging.
Attendees will leave with actionable ideas for designing contract grading systems, promoting reflective writing, and supporting students’ bilingual voices in their HL classes, along with guidance for navigating common challenges in implementing these non-traditional assessment practices.
Ungrading for an Entrepreneurial Mindset: A Self-Determination Theory Approach to Business Education
Rosemary Fisher, Taylor Grogan, James Williams, Aron Perenyi, Richard Laferriere, Nikki Wragg Integrating Teaching Teams
Thursday, June 18, 12:30 PM — 1:45 PM EDT
In an era of rapid technological change and uncertain career paths, business schools increasingly recognise that an entrepreneurial mindset - encompassing innovation, adaptability, and informed risk-taking - is essential for all graduates. Yet traditional grading systems may undermine the very mindset they seek to develop, creating extrinsic motivation and risk-averse learning behaviours incompatible with entrepreneurial thinking. Drawing on Self-Determination Theory, this quasi-experimental study examines whether Assessment for Learning (A4L), an approach that replaces grades with comprehensive formative feedback, can better support the psychological conditions necessary for entrepreneurial development. We compared traditional grading with A4L across four semesters in a core entrepreneurship subject mandated for all business students (N ≈ 160), measuring changes in basic psychological need satisfaction (autonomy, competence, relatedness) and entrepreneurial self-efficacy, alongside satisfaction and feedback engagement. Results examining changes in autonomy, competence, relatedness, and entrepreneurial self-efficacy across assessment conditions will be presented, alongside patterns in feedback engagement and learning satisfaction. This study provides quasi-experimental evidence on A4L effects in business education, enhanced with reflections from the teaching team, offering evidence-based guidance for educators seeking assessment approaches aligned with entrepreneurial learning objectives.
Values-Based Grading Workshop – Align Your Values with the Values Imbued in Grading!
Christopher Creighton, Sara Friedman Workshop
Tuesday, June 16, 2:45 — 4:00 PM EDT
Everything that you provide to students reflects us, our professions, and our pedagogy, and shapes our students and their future work -- all of it embedded with values. Is your grading system (and teaching) congruent with the values you want to communicate and see in the next generation?
This interactive 75-minute workshop features individual exercises, small group analysis, and full group problem-solving and knowledge-sharing discussions to help you identify your professional values; begin determining whether your grading (and teaching) processes embody and align with your belief systems; and receive peer support on how to begin bridging any values-to-grading disconnects you identify. We will cover some basic techniques for values alignment work and review some inherent human challenges related to these processes. This is intended to be an initial conversation about grading and values that can evolve alongside your praxis.
Instructors and educational developers from all disciplines, with any amount of teaching experience and at any point on their values journey, are welcome and appreciated in this interactive format. The methodology and exercises employed in this workshop are aligned with the evidence-based social work practice of Motivational Interviewing, a clinical skill set dedicated to identifying and ranking behaviors in relation to values in order to achieve self-defined goals, complemented by faculty development expertise in course design and pedagogy.
As an instructor and faculty developer at UCCS, Dr. Chris Creighton has practiced ungrading in math courses, works with faculty across disciplines to reform their grading, and maintains research in ungrading. Sara Friedman is a Colorado Licensed Clinical Social Worker who utilizes a values-based approach in practice and training. She is a Doctor of Social Work candidate at Tulane University, researching values and empowerment in social work education, and teaches social work at UCCS.
What Becomes Possible When Grades Don't Exist: Faculty Perspectives from Evergreen State College
Julia Metzker Panel
Thursday, June 18, 12:30 PM — 1:45 PM EDT
What becomes possible when a learning ecosystem is designed around personalized narrative evaluations of learning?
Most conversations about alternative grading begin with the problems of traditional letter grades and explore reforms within existing systems. But what if we started from a different foundation entirely—one where narrative evaluation is a founding design principle?
Evergreen State College has operated with narrative evaluations as its core assessment practice since its founding in 1971. For over five decades, faculty have planned curriculum and guided learning within an ecosystem built on evaluation practices that position students as active participants in assessment. This ecosystem includes faculty narrative evaluations of student learning, student self-evaluations, evaluation conferences where faculty and students engage in dialogue about learning progress, and academic statements where students synthesize and articulate their educational journey. This panel is an opportunity to examine what pedagogical culture, practices, and possibilities emerge when personalized feedback replaces the role of grades.
This panel brings together faculty from diverse disciplinary backgrounds (Math/Linguistics, Psychology, Art/Media) to explore how this evaluation ecosystem shapes their teaching, their relationships with students, and the learning culture in their classrooms. Their experiences reveal thematic insights about assessment practices, student metacognition, classroom culture, and the institutional structures that support narrative evaluation at scale.
The panel will address both the pedagogical possibilities and the practical realities of teaching within a narrative evaluation system, offering honest reflections on what works, what challenges emerge, and what becomes visible about learning when assessment centers on writing about student learning rather than comparative ranking.
“Problems with Grades:” Student Reflections on a Syracuse University Honors Seminar about Grades
Jessamyn Neuhaus Panel
Tuesday, June 16, 4:30 — 5:45 PM EDT
Currently, for the spring 2026 semester, I am teaching a class called “Problems with Grades” -- a one-credit seminar in the Honors program at Syracuse University. For this proposed panel, some of the students who took this course will answer questions about their experiences in the class. After I give a short overview of the course (the learning outcomes, reading assignments, and assessment structure), students will reflect on how studying the history of traditional grading, with an emphasis on contemporary critiques of grading systems, changed their own views and perceptions of educational systems generally and of their individual and unique educational experiences specifically.
Some initial Q & A questions for the student panel will include: What drew you to the “Problems with Grades” class? Why were you interested in critiques of traditional grading systems? In your view, what were some of the most important takeaways from our discussions about the assigned course reading, Joshua Eyler’s Failing Our Future: How Grades Harm Students and What We Can Do About It? In what ways did taking this course change how you viewed your previous experiences in school and in college? In what ways did taking this course change how you view your planned future experiences in college and your post-college plans?
Our panel has two objectives: First, the panel will increase student voices in the scholarly discussion about grading practices. Taking into account student perspectives and experiences is a vital part of grading reform efforts, and students need to be more widely included in academic debates and forums like the Grading Conference. Second, this discussion will give conference participants insights into how we as teaching practitioners may facilitate, within the structures of higher education systems, students’ metacognitive understanding of grades and grading as part of their lived experiences in higher education.
Poster Abstracts
View Poster Gallery
"There's Nothing You Can Do To Make Me Less Focused on My Grades": Alt Assessment in The Age of AI (#79)
Jason Gulya
In this presentation, I will cover the assessment system I use in my course. I have eliminated traditional grades on assignments. This is what I do instead:
(1) I have grouped all assignments under specific skills. In my Literature classes, for example, I include Collaborating, Communicating, Managing a Writing Process, and Reflecting as core skills.
(2) I give assignments either a Complete (meaning that the student showed they're practicing that skill) or a Try Again.
(3) I work with my students to set a threshold for what percentage of Completes they need to demonstrate that they're working on that skill. For example, we might say that getting Completes on 80% of our collaborations means that they can collaborate.
(4) I assign final grades (since I do have to submit something, at the end of the day) based on how many skills they've hit. In my courses, I often include 4 core skills. If a student hits 1 skill, they get a D. If they hit 2 skills, they get a C. If they hit 3 skills, they get a B. If they hit 4 skills, they get an A.
My goal here is to challenge the transactional model of education, encouraging students to focus less on the grades themselves and more on the learning process. I will argue that doing this is one of the most important ways we can adapt to The Age of AI -- especially given the rise of Large Language Models (LLMs), agentic AI, and AI graders.
3-2-1 Ungrading Reconnaissance (#10)
Michael Forder, Jenny Inker
This abstract describes a 3-2-1 ungrading reconnaissance exercise in a 15-week, asynchronous online graduate-level gerontology course. In week 1, students are introduced to ungrading via syllabus descriptions and supplementary linked resources. The instructor describes how ungrading will work and why it is used (to deepen student learning and promote learner autonomy). Students then respond to a discussion prompt sharing three key highlights (important points they think are essential to understand about ungrading), two questions or concerns they have about it, and one source of inspiration about the course. The instructor responds personally to all initial posts, providing clarification and reassurance. At the end of week 1, all student questions and instructor answers are collated and shared via the course LMS page for reflection. This process reveals a range of student reactions - from enthusiasm to anxiety - and enables the instructor to address areas of concern such as how students will know how they are doing in the course without grades. Peer responses further support students, as those with prior ungrading experience typically share their initial apprehension about ungrading and their eventual appreciation for the approach's emphasis on learner autonomy and learning depth.
Instructor reflections indicate that the 3-2-1 exercise eases students into ungrading, reduces anxiety, fosters confidence, encourages early engagement with course principles, and promotes a collaborative learning environment. It is recommended that instructors provide opportunities for students to express excitement and concerns and ensure timely, individualized feedback. While the 3-2-1 reconnaissance is a strong starting point, ongoing support and variations in timing, prompts, or reflective assignments can further enhance student understanding and comfort with ungrading.
Alternative Grading Beyond the Gradebook (#36)
Ryne VanKrevelen, Heather Barker, Nicholas Bussberg
Three statistics instructors at a mid-sized private, liberal arts university have implemented and refined three different alternative grading approaches. These approaches have been used in statistics courses ranging from introductory level through upper-level undergraduate courses. Sections typically contain 20-30 students with an emphasis on engaged learning. The three instructors have implemented grading practices such as specifications, standards-based, ungrading, and combinations of the three. These alternative grading systems were designed to shift students’ focus from point accumulation toward conceptual understanding, revision, and growth in statistical thinking. Preliminary evidence from course artifacts and student reflections suggests these approaches support positive learning experiences and help students focus more intentionally on learning goals rather than course points. One persistent challenge across these alternative grading systems has been helping students clearly understand their current standing in the course. Traditional learning management system gradebooks, such as Moodle on our campus, are often structured around point totals and percentage-based calculations, which do not easily align with nontraditional grading frameworks. As a result, instructors have developed multiple strategies to communicate progress in transparent and meaningful ways. This session will focus on how the instructors have used three different ways to update students on their progress throughout the semester. These methods include a spreadsheet-based progress tracker, milestone report emails, and individual interviews. Data from IRB-approved pre- and post-semester surveys will be shared in our poster, including student reflections on how well informed they felt about their progress.
Alternative Grading as a Gateway to a Framework for Course Design in Engineering (#59)
Sara Wilson
Alternative grading has added a host of grading methods that can be brought to bear to motivate, improve, and reflect student learning. Identifying which methods should be used can be daunting. In this work, we compare two different engineering courses to examine how different alternative grading approaches were brought to bear and what dimensions of the courses impacted these choices.
The first of these courses is a first-year course in computer programming for mechanical engineers. This course includes learning outcomes around specific skills and knowledge as well as learning outcomes around creative problem solving and teamwork. In this course, a mesh of grading approaches was used, including a contract-grading approach to group projects. The second course is a senior-level course in control systems for mechanical engineers, with learning outcomes predominantly focused on acquisition of a knowledge/skill set. This course was transformed to a standards-based grading approach.
From these experiences, it became clear that a few dimensions of a course can be quantified to guide the choice of assessment methods and inform course design. The first dimension is the degree to which grades should reflect effort versus acquisition of a knowledge/skill set. These courses have both, with the first-year course having a 44/56% split between effort/knowledge and the senior-level course having a 30/70% split between effort/knowledge. A higher knowledge dimension supports approaches such as standards-based grading. Another dimension to consider is whether the goals of a course are to bring all students to an end-point (learning target) or to move students along a trajectory (learning growth). The latter supports contract grading as an assessment approach. Other dimensions can include the level of individual versus group work and challenges to student learning such as generative AI. Using such a framework supports a more systematic approach to course design.
Beyond the Contract: Developing An Effort-Forward Framework for Student Success (#83)
Michael Johnson
This poster presents a work-in-progress theoretical model for an “effort-forward” approach to assessment and feedback designed for project-driven courses. Building on a self-assessment system I first piloted in Fall 2023, this expanded framework situates that metacognitive assessment approach within a holistic framework that synthesizes labor-based contracts with elements of specifications grading to provide a rigorous yet equitable approach to learner-first grading.
The goal of this expanded framework is twofold. First, it seeks to enhance student learning by increasing agency, investment, and competence, while fostering a more equitable, inclusive, and empowering course environment. To achieve this, the model theorizes across the three interconnected dimensions of assessment, growth, and pedagogy to holistically support student success. Second, it seeks to address common concerns regarding the ability of contract-based approaches to maintain “standards of excellence.” To achieve this, the model integrates an expectations-based rubric influenced by specification grading practices paired with a requirement for students to demonstrate their learning through metacognitive evaluation focused on their performance, engagement, and growth in the course.
This poster outlines the current framework’s theoretical underpinnings and its implications for learner-first course design. By sharing this work-in-progress, I hope to invite generative conversations about the desirability, viability, and feasibility of this framework, and to solicit feedback on its potential implementation and application across varied course contexts.
Beyond the Grade: Rethinking Assessment for Equity and Engagement (#4)
Kristina Rouech, Holly Hoffman, JoDell Heroux
This poster examines the implementation of alternative grading strategies, including ungrading, contract grading, and portfolio assessment, in undergraduate courses to promote equity, engagement, and authentic learning. Traditional grading systems often create stress and inequities, while alternative approaches prioritize growth, reflection, and student agency. Based on prior implementation in education courses, this work synthesizes best practices and lessons learned, including the importance of transparency, co-created criteria, and iterative feedback. The poster will outline practical steps for adoption, highlight challenges such as institutional constraints, and share preliminary outcomes, including reduced student anxiety, improved intrinsic motivation, and enhanced metacognitive skills. Recommendations for instructors and educational developers include starting small, gathering student feedback, and advocating for departmental support. Future directions involve scaling these practices across programs and assessing long-term impacts on student learning and equity.
Customizing Learning in Introductory Statistics via Specialties (#56)
Brianna Hitt, Katherine Kinnaird
Introductory Statistics courses attract students from all disciplines, each bringing different motivations for learning statistics and varying levels of mathematical preparation. Traditionally, colleges and universities have addressed this diversity by offering multiple versions of introductory statistics (e.g., for pre-med students, mathematics majors, or engineering students). Despite differing emphasis on methods, applications, or advanced concepts, there tends to be substantial overlap in core content across these versions of introductory statistics.
In a new core introductory statistics course at the United States Air Force Academy, we developed a model, called Specialty Medals, that preserves a shared statistical foundation while allowing students to customize part of their learning, all in the same class. The course uses a standards-based grading framework centered on 12 common core competency standards. In addition, each student selects one of four specialty topics, consisting of two additional standards, that emphasize course material aligned with their academic interests or broader learning goals.
This model has been implemented in two formats: first within a timed assessment system and later as a mini-project option. While further iteration continues, we have found that Specialty Medals increase transparency around core statistical competencies, strengthen coherence across sections of introductory statistics, and better align assessment with students’ articulated goals. The design has also generated productive conversation with other departments, who are able to direct their students toward a particular specialty topic aligned with their programs.
As the course expands to approximately 1,000 students per year at the Air Force Academy, with plans for implementation at Smith College, Specialty Medals offer a scalable method for balancing consistency, customization, and proficiency within standards-based grading.
Developing An Alternative Grading Course Design Institute (#57)
Kylie Korsnack, Kitty Maynard, Nisha Gupta, Nancy Chick
For many faculty, making the leap from traditional grading to alternative grading requires significant investment of time and labor. In response to this challenge and with the support of a 2025 Summer Collaboration Grant from the Associated Colleges of the South (ACS), we designed a multi-day course design institute (CDI) to provide faculty and instructional staff with the structure, support, and resources they need to revamp their grading structures through their course design. Drawing on the recently published book Developing High-Impact Course Design Institutes (Troisi et al., 2025) and informed by Specifications Grading (Nilson, 2015) and Grading for Growth (Clark & Talbert, 2023), our CDI aims to guide participants through the process of designing a course that uses specifications grading as its primary assessment structure.
Our poster presentation will offer an overview of our CDI, including learning objectives, module topics, example activities, and strategies for program assessment. Our CDI will not be offered until late June 2026, so we will not be able to share outcomes or participant feedback; however, we offer our planned CDI as a model for others (faculty, staff, educational developers, etc.) who may be interested in developing their own to meet the needs and interests of the faculty at their respective institutions. We hope that increasing access to professional development opportunities will give more faculty the confidence and support they need to integrate alternative grading approaches into their teaching.
Clark, D., & Talbert, R. (2023). Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education.
Nilson, L. B. (2015). Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time.
Troisi, J. D., Palmer, M. S., Wright, M. C., Hostetler, L. A., & Hurney, C. A. (2025). Developing High-Impact Course Design Institutes: A Model for Change.
Embedding Community‑Engaged Philosophy in a Specifications Grading Classroom (#67)
Megan Davis, Daphna Atias
This poster examines the early development of the DC Community Impact Project, a specifications‑graded, community‑engaged assignment in an undergraduate Philosophy & Nonviolence course. The project brings together public philosophy, civic engagement, and justice‑oriented pedagogy, asking students to apply frameworks of nonviolence, structural violence, and community‑based ethics to real conditions faced by DC organizations working for peace and justice. Through needs assessments, fieldwork, and partner‑informed interventions, students translate philosophical commitments into practical, non‑extractive, relational forms of action that support community partners.
The course’s specifications‑based grading model mirrors the iterative, accountability‑driven nature of community engagement by prioritizing process, encouraging creativity, and reducing punitive grading pressure. Students meet explicit criteria for each project stage and revise their work until it aligns with community‑defined expectations, reflecting the responsiveness required for authentic partnerships.
The poster highlights how the project’s design draws on its theoretical foundations. Nonviolence theory establishes commitments to dignity, reciprocity, and harm‑reduction that shape student–partner interactions. Structural violence analysis helps students identify systemic forces underlying community issues, informing designs attentive to both context and root causes. Community‑based ethical frameworks cultivate humility, shared authority, and justice, ensuring that partner priorities—rather than academic convenience—guide the work. Together, these frameworks support student agency, ethical awareness, and the capacity to navigate real‑world complexity.
This poster also seeks feedback on logistical challenges, strengthening ties between theory and grading, enhancing ethical scaffolds for student–partner collaboration, and refining the specifications model to support equitable, community‑rooted learning.
Engagement Points in Large-Enrollment Standards-Based Calculus (#35)
Edison Hauptman
The University of Pittsburgh teaches a few thousand students in its Calculus courses every year, spread out across many 75-person lectures during the fall and spring. Many Calculus students at Pitt are STEM majors, especially in Engineering. Thus, a key objective for my Mathematics department is to measure whether students have established basic competencies in the course material, so they can later apply it in the next Calculus course or in their major.
In my Calculus courses, I use a Standards-Based Grading system where students earn their grade by achieving a minimum standard across all assessment categories. In addition to timed, in-person Progress Checks, I have a wide variety of assignments in my course (including in-class group work, meta-cognitive reflections in students' weekly recitation, and mini-presentations). This academic year, I am only using one assessment category for my non-Progress Check assignments, which I call "Engagement Points".
Using a broad Engagement Points standard with one bucket of points has multiple benefits. These include that any missed assignments can be "made up" by completing a different assignment already on the schedule, and my students can prioritize the assignments that help them learn best or will best prepare them for future work in their major. Also, since the assessment category is broad by design, I recommend this model as a safe environment to try out new assignments.
In this talk, I will describe the assignments that make up my Engagement Points standard and the benefits and drawbacks of this strategy in a large-enrollment, service-oriented context. Using data from my Fall 2025 and Spring 2026 Calculus courses (~200 students across 3 sections), I will share correlations between assignments chosen and final grades, and I will share my advice for implementing a wide range of assignments in a large-enrollment course.
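A single-bucket standard like this is straightforward to compute. The sketch below is purely illustrative: the assignment names and the passing threshold are assumptions, not details taken from the course described above.

```python
# Hypothetical sketch of a single-bucket "Engagement Points" standard:
# points from any mix of assignments accumulate into one total, so a
# missed assignment can be "made up" by any other assignment on the
# schedule. Threshold and assignment names are illustrative.

ENGAGEMENT_THRESHOLD = 40  # assumed minimum points to meet the standard

def engagement_points(submissions: dict[str, int]) -> int:
    """Sum points from any combination of assignments into one bucket."""
    return sum(submissions.values())

def meets_standard(submissions: dict[str, int]) -> bool:
    """The standard is met once the single total clears the threshold."""
    return engagement_points(submissions) >= ENGAGEMENT_THRESHOLD

# A student who missed some group work can recover the points through
# reflections or mini-presentations already on the schedule.
student = {"group_work": 15, "reflections": 18, "mini_presentation": 10}
```

Because the category is one undifferentiated pool, adding an experimental assignment type changes nothing in the grade computation, which is what makes it a low-risk place to pilot new work.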
Enhancing Student Engagement Through Transparent and Feasible AI-Supported Assessment Systems (#74)
Jing Xie
A wide range of assessment practices, increasingly shaped by the development of artificial intelligence (AI) tools, has become prevalent in contemporary education. Among current learning technologies, AI tools are often regarded as effective in enhancing student engagement when integrated with reward-based systems; however, existing evidence indicates limitations in sustaining long-term engagement. Under transparency policies that require systematic, AI-supported assessment to motivate student learning, progress may be hindered by insufficient resources, including the absence of appropriate reward mechanisms. This limitation, in turn, reduces the effective recognition of students’ academic work. Moreover, students may experience emotional distraction, which can negatively affect learning outcomes—particularly for those who require convenience, accessibility, and flexibility to succeed. To improve student engagement and support sustainable learning, this proposal recommends revising the grading system through the integration of an alternative curriculum component aligned with reward policies. This approach aims to promote student development while ensuring both feasibility and transparency in assessment practices.
Extra Credit for Metacognition: Small Points, Meaningful Learning Gains (#65)
Aisling Dugan, Brianna Pham, Bella Malick
Extra credit assignments can empower students to demonstrate effort and initiative, yet they are often criticized for potentially exacerbating equity gaps. We sought to address this tension by designing an extra credit opportunity focused on study strategies, metacognition, and student self-efficacy. This assignment was offered in a large-enrollment undergraduate Introductory Immunology course serving first-year through senior students. To earn credit, students completed a 1–2 hour online module featuring Dr. Saundra McGuire, author of Teach Yourself How to Learn, and then reflected on how they would incorporate new learning strategies into their study practices, either through a written submission or discussion with a Teaching Assistant. Despite accounting for only 3% of an exam grade (approximately 0.45% of the final course grade), 135 of 169 students (80%) chose to participate. Students identified specific challenges they faced in the course and articulated concrete changes they planned to implement in their approach to learning. An end-of-semester survey administered two months later provided additional insight into how students applied these strategies and perceived their impact.
From Checklist to Grade: A Grading Tool for Process-Based Portfolio Assessment in General Psychology (#39)
Beliz Hazan
In this presentation, I describe the design of a process-based assessment system. Initial use suggests the system may support more consistent evaluation across sections and may help students adjust their expectations from product-focused to process-focused evaluation.
Until recently, I assessed General Psychology courses using traditional methods such as multiple-choice and short-answer exams. With the emergence of AI, I began to reconsider this approach. The redesign was informed by practitioner discussions emphasizing process-based pedagogy (e.g., posts by Jason Gulya). While final student work can be assisted or shaped by AI, students' thinking processes remain unique. This shift motivated me to focus on learning processes rather than isolated outcomes.
In practice, translating these principles into a manageable grading system for introductory courses remains a challenge. I implemented a Process Portfolio to make students' thinking visible through Process Logs and AI Logs. Students select and reflect on their own artifacts, preserving individual voice and decision making. The portfolio is evaluated using a mastery-based framework organized around four components: foundational knowledge, application, evaluation, and integration. To support this, I developed a grade calculator that synthesizes instructor evaluations rather than automating grading, using AI as a technical support tool during development. Instructors assess each component using rubrics and checklists, and the calculator makes relationships among components visible by showing how they contribute to a final grade. Instructor judgment remains central.
Before completing the portfolio, students engage in low-stakes formative activities, including Mind Gym assignments and Psychology Diaries. Graded on completion, these help students practice articulating reasoning, reflecting on learning, and documenting AI use, preparing them for the portfolio's reflective demands.
Masters and Mastery: A Critical Appraisal of the Intersections between Alternative Grading and Online Master Courses (#70)
Laura Cruz, Deena Levy
The presenters will discuss the challenges and affordances of integrating alternative grading practices into the online master course model. In this case, the term “master” does not refer to the course level (i.e., a master's degree) or to mastery grading, but rather to a course structure that is determined in advance and then replicated across multiple sections over multiple semesters, often with limited space for instructors to make changes. While often implemented at fully online universities, the master course model is becoming increasingly prevalent at large research universities in the U.S., as the model strives to ensure quality and consistency in the student learning experience.
The decision to implement alternative grading, or not, often involves issues that are broader than course design and implementation, and draw the attention of multiple constituencies.
In this panel, we draw upon the perspectives of multiple instructors, program directors, and instructional designers across varied disciplines at our university (a large research institution) when discussing the possibilities for integrating alternative grading strategies into the master course model.
We will discuss questions such as:
What are the affordances and constraints of integrating alternative grading approaches into master course models?
Is there space for implementing alternative grading approaches in the online master course model?
Are there distinctive approaches to alternative grading that best align with the online master course model?
How might the implementation of alternative grading practices in online master courses affect student perceptions of learning experience quality and/or consistency?
How might the practice of alternative grading in online master courses affect perceptions of learning experience quality and/or consistency by instructors or course coordinators?
In what ways can learning designers support the implementation of alternative grading in this context?
Rethinking Assessment: Student Perceptions of Standards-Based Grading in General Chemistry (#41)
Jessica Thorpe
Traditional grading systems in higher education often emphasize compliance and test performance at the expense of meaningful learning and student motivation. In response, alternative grading practices such as standards-based grading (SBG) are growing in use as they have the potential to positively influence student learning. This qualitative comparative case study investigates undergraduate students’ perceptions of SBG in a general chemistry course at a small, private liberal arts college. Through semi-structured interviews and analysis of final exams, the study explores how students interpret the transparency of SBG and its influence on their motivation and learning. Results suggest that while SBG can enhance student engagement and promote deeper learning, successful adoption requires careful design and communication. This work contributes to the growing body of research that promotes the use of alternative grading in undergraduate chemistry classrooms.
Standards Based Grading in Calc 3 at a SLAC (#2)
Daniel Condon
We describe an implementation of standards based grading in Calculus 3 at a small liberal arts college, which we used during the Spring semester of 2026. We believe the implementation is lightweight, modular, and compatible with weighted grading schemes. We plan to discuss assessment results, student feedback, and the instructor experience.
Standards-Based Grading and Intentional Group Work in a Core Chemical Engineering Course (#61)
Elizabeth Corson
This project examines the implementation of standards-based grading (SBG) and intentional group work in Chemical Engineering Thermodynamics I, a 70-student, sophomore-level core course. The goal was to refine assessment and instructional practices to better support mastery-oriented learning. New learning objectives were written and used to align homework, discussions, and assessments. The course employed seven assessments with multiple opportunities to demonstrate mastery. This structure enabled students to revise their work using targeted feedback. Platforms included Canvas, Gradescope, CATME, and iClicker.
The discussion section incorporated stable, intentionally formed groups paired with structured team building activities. These groups were designed to deepen engagement, promote equitable participation, and support collaborative problem solving. Students were coached on productive learning behaviors such as using feedback effectively, approaching problems actively, and contributing meaningfully during group work.
Impact was assessed through student surveys, peer evaluation surveys, and performance data compared to Spring 2025. Preliminary results indicate that students used reassessment opportunities to improve mastery, demonstrated higher quality problem solving during discussions, and reported greater clarity about expectations. At the instructor level, SBG redirected grading time toward actionable feedback and clarified which concepts students continued to struggle with.
Recommendations for future iterations include revising learning objectives to reduce their number and align with topic order, and ensuring full participation in the CATME survey for efficient group formation. This project highlights how aligned learning objectives, SBG, and purposeful group work can foster deeper learning and more equitable participation in core engineering courses.
Standards-Based Grading in Applied Thermodynamics (#62)
Julie Mendez
This work-in-progress poster describes the use of standards-based grading in an undergraduate applied thermodynamics course required in a mechanical engineering technology program. Fifteen learning targets were developed based on the course learning outcomes. The learning targets were primarily assessed using quizzes and scored using a three-level rubric. The levels were successful (S), revisable (R), and new attempt needed (N). For a learning target to be considered completed, a student had to earn an “S” mark on at least two assessments for that learning target. Work earning a mark of “R” could be revised and potentially converted to an “S” mark. One learning target included the pre-class and homework assignments and participation in the in-class activities, each of which was worth a specified number of points. Completing this learning target required accumulating a minimum number of total points. The final course grade was determined by how many learning targets were completed. Student progress toward learning target completion was tracked in the gradebook in the Brightspace learning management system, and custom formulas were used to calculate the course grade. Three times throughout the course, students were asked to complete brief writing assignments reflecting on their quiz performance. Future work includes analyzing the responses to the reflection assignments to study how students prepared for quizzes and used instructor feedback.
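The completion and grading rules described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual Brightspace formulas: the letter-grade cutoffs and the participation-point minimum are placeholders, since the abstract does not specify them.

```python
# Sketch of the S/R/N learning-target logic described in the abstract.
# Grade cutoffs and the points minimum are assumed, not from the course.

def target_completed(marks: list[str]) -> bool:
    """A learning target is complete with an 'S' on at least two assessments."""
    return marks.count("S") >= 2

def points_target_completed(points_earned: int, minimum: int = 50) -> bool:
    """The participation target requires a minimum point total (assumed 50)."""
    return points_earned >= minimum

def course_grade(n_completed: int) -> str:
    """Map the count of completed targets to a letter grade (cutoffs assumed)."""
    cutoffs = [(14, "A"), (12, "B"), (10, "C"), (8, "D")]
    for minimum, grade in cutoffs:
        if n_completed >= minimum:
            return grade
    return "F"
```

An "R" mark contributes nothing here until it is revised into an "S", which is how the revision pathway feeds back into the completion count.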
Standards-based Grading in Calculus with Precalculus (#51)
Jacquelyn Rische
This poster describes the implementation of a standards-based grading system in Calculus with Precalculus, a two-semester course that intermingles the topics of Precalculus and Calculus (completing both semesters is equivalent to Calculus I). Students take this class because it is a requirement for their major; the most common majors in the class are Biology, Engineering, and Computer Science.
In the standards-based grading system, I determine the “skills” that I want my students to learn by the end of the semester. Completion of these skills is 60% of a student's grade (with the rest of the grade determined by homework and a final exam). Each skill appears on three quizzes in a row, and to complete a skill a student needs to solve its quiz questions correctly two times. Once a skill stops appearing on the quizzes, students can still complete it by doing “retakes.” Skills are graded on a 3-point scale (3 points: the problem is correct without any errors; 2 points: significant progress is made, but there are some minor errors; 1 point: some progress is made, but there are major errors; 0 points: little to no progress is made).
Given my students' diverse backgrounds and preparation levels, the system works well for them. If a student completes a skill after two quizzes, they can skip that question on the third quiz, letting them focus on the skills they are struggling with. Students appreciate being able to come in and get help on (and retake) those skills. Instead of having this class act as a “gatekeeper” to Biology, Engineering, and Computer Science, both combining Calculus with Precalculus and using standards-based grading helps students of all backgrounds learn the material and succeed in the class. This type of standards-based grading system can be applied to classes in other disciplines as well. Building up the question banks can be time consuming the first time, but I find grading much easier.
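As a rough sketch of the completion rule described above, assuming that “solved correctly” means a 3 on the 0–3 rubric (the abstract does not state this explicitly), and with illustrative skill names:

```python
# Hypothetical sketch: a skill's scores accumulate across its three
# scheduled quizzes plus any retakes, and the skill is complete once the
# problem has been fully correct (a 3) twice.

def skill_completed(scores: list[int]) -> bool:
    """True once the skill's quiz question was solved correctly twice."""
    return scores.count(3) >= 2

def skills_portion(skills: dict[str, list[int]]) -> float:
    """Fraction of skills completed; this portion is worth 60% of the grade."""
    done = sum(skill_completed(s) for s in skills.values())
    return done / len(skills)

# Illustrative record: "limits" was completed after two quizzes (so the
# third quiz question can be skipped); "chain_rule" still needs a retake.
record = {"limits": [3, 3], "derivatives": [2, 3, 3], "chain_rule": [1, 2]}
```

The retake mechanism simply appends more scores to a skill's list, so late mastery counts exactly the same as early mastery.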
What Must a Grade Communicate? A Minimal Framework for Institutional Interpretation (#42)
Pujit Mehrotra
Grades invite invalid and problematic inferences as they circulate across institutional contexts, yet proposed alternatives often address either educational quality or institutional legibility, rarely both. This raises the question: what minimal information must a grade communicate to support interpretation across contexts and stakeholders?
This theoretical exploration yields a framework treating grades as structured institutional messages rather than scalar measurements. Through first-principles analytic reasoning about grading as observation and evaluation under conditions of learning, the framework derives a minimal grammar composed of Inputs (actions and activities), Outputs (performances or products), and Difficulty (learning conditions shaping interpretation). These distinctions are proposed as the smallest set required to prevent common inferential failures in grading systems.
Predominant theories serve as analytic and objective constraints. Argument-based validity and evidence-centered design frame the necessity of Inputs and Difficulty as distinct components, while opportunity-to-learn and cognitive residue motivate Difficulty’s conceptual decomposition into mechanical, normative, and epistemic components. The framework is evaluated through inferential stress-testing against familiar academic practices, such as GPAs and narrative evaluation, examining which interpretations they enable, obscure, or distort.
While this framework does not attempt to improve measurement accuracy or directly resolve equity concerns, we illustrate how it can improve downstream institutional inferences and outcomes (e.g., AI-mediated vs. constrained performance contexts)—and how it enables inter-operation of disparate assessment practices.
As such, the framework offers a portable analytic lens for comparing grading paradigms, redesigning grades for clearer institutional signaling, and evaluating students under conditions of rapid technological, political, and economic change.