You not only know what rubrics are, you’ve built them or already use them.
However, you’re not sure why you’re reading about the use of their data. Why is this newsworthy? You share the feedback with the student or use it for program improvement – end of story. Right?
Well, you’re not entirely wrong. Those are two ways to use your data, but given their rich composition and multi-purpose nature, there’s plenty to say about acting on rubric data.
Before I get rolling, a few disclaimers:
- This is the final installment of a three-part series about rubrics. Check out part one for an overview defining and describing benefits of rubrics, as well as part two for how to develop a rubric in five steps.
- While many things fall under the rubric umbrella, we are assuming rubrics are instruments used to measure student learning outcomes by faculty or student affairs practitioners.
- This post continues with the assumption that you’re using an analytic rubric, but many of the considerations would apply to holistic rubric use, too.
Rubrics yield a quantitative score for each rubric dimension, paired with a qualitative description. These descriptions are useful for communicating to students what the scores mean, as well as for interpreting the level of learning demonstrated or observed per dimension. Both the quantitative and qualitative information can be used for student-level and programmatic purposes.
Follow the steps below to move from data to action with your rubrics.
Aggregate your results
Look at results overall for your students per dimension of the rubric. Are there areas where students, overall, are demonstrating higher levels of learning than others?
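If your rubric scores live in a spreadsheet or database, this step is a simple group-by. Here is a minimal sketch using pandas, assuming a long-format table with one row per student per dimension; the column names (`student_id`, `dimension`, `score`) are hypothetical stand-ins for whatever your export uses.

```python
import pandas as pd

# Hypothetical rubric results: one row per student per rubric dimension
scores = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "dimension":  ["critical thinking", "communication"] * 3,
    "score":      [3, 2, 4, 3, 2, 4],
})

# Aggregate: average score per dimension across all students
overall = scores.groupby("dimension")["score"].mean()
print(overall)
```

Comparing the per-dimension means side by side quickly surfaces which outcomes students are demonstrating at higher or lower levels.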
Disaggregate your results
Aggregate data does not tell the complete story (Heiser, Prince, & Levy, 2017), so disaggregate the results by student demographics. Is the intervention impacting all students the same way? What culturally-responsive pedagogy might be necessary to center the lives of marginalized individuals?
Even when subgroups appear as statistical outliers, they are students to be served, and “critical practitioners acknowledge that significance and importance are not synonymous” (Heiser, Prince, & Levy, 2017).
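Disaggregation is the same group-by with a demographic field added. A hedged sketch, again with hypothetical column names (`first_gen` standing in for whatever demographic variables your institutional data provides):

```python
import pandas as pd

# Hypothetical rubric results joined with a demographic flag
scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "dimension":  ["communication"] * 4,
    "first_gen":  [True, True, False, False],
    "score":      [2, 3, 4, 4],
})

# Disaggregate: average score per dimension, split by demographic group
by_group = scores.groupby(["dimension", "first_gen"])["score"].mean()
print(by_group)
```

A gap between subgroup means, as in this toy data, is the kind of signal that should prompt the culturally-responsive follow-up described above, even if the subgroup is small.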
Report and act on your results
- Create an executive summary of findings and implications.
- Identify audiences for sharing based on your and their interests.
- Consult with stakeholders to determine effective content for message and methods for sharing.
- Build a schedule of when you will share the results with whom.
- Share results with various audiences and stakeholders. This should always be an action with any assessment effort.
- Revise content, as needed. This may include foundational information (outcomes, alignment, success standards or targets), the intervention itself, or the rubric instrument.
- Adjust process, as needed. Reflect on your preparation, data collection, or reporting process and determine if changes should be made before the next cycle.
- Follow up with students. This should occur at the individual level for praise or support, as well as for specific populations who may require culturally-responsive support or learning environments.
I can’t stress enough how many stakeholders your rubric results may have implications for. Think intentionally about who those people are and follow up with them accordingly. For example, if you are using a division-wide rubric for student employee evaluations, there could be implications for:
- HR (feedback on evaluation forms or processes)
- Supervisors (to know what behaviors and skills student employees are demonstrating and how)
- Student employees (same reason as above)
- Career services (to be aware of transferable skills students may relate to other experiences or convey on resumes and in cover letters)
- Faculty (to recognize the nature of learning outcomes students gain outside the classroom)
- BI/IR (providers of demographic and/or institutional data)
- Student experience or specific multicultural offices (identity support, culturally responsive practices)
- Assessment folks (in case support is needed related to data collection/reporting processes)
- Marketing (support for branding, internal/external sharing guidelines)
- General students (to be aware of skills that can be gained from student employment)
Just like any assessment method, be intentional in your process and don’t forget to take action in the end. Allow your efforts to advance your practice and impact continuous quality improvement.
While rubrics may not be the right method for every situation, recognize where they are applicable and take care to design, review, and report on ways to spur and enact change for the institution.