So, you have people on campus engaged in assessment work.
That can be a big win and worthy of celebration, but at the same time, someone needs to make sure their work stays meaningful and manageable. Sometimes, the people to rein in are those who are most excited or eager to participate in assessment.
This is certainly a good problem to have, but it’s still worth addressing.
Your biggest assessment champion may want to assess anything and everything. They took the principle of data-informed decision making to heart, so much so that they want to administer five more surveys, conduct spontaneous focus groups whenever students are present, and collaborate with anyone collecting data about student experiences. (And why wouldn’t they? Knowledge is power, right?)
While having multiple data sources is generally great for assessment work, such a high volume of activity may result in more data than can be meaningfully interpreted or acted upon. It can prove challenging to walk the line between encouraging enthusiastic approaches and reinforcing manageable expectations.
You may also connect with people who believe their interventions (such as courses, programs, and other resources) achieve every outcome and satisfy every need of every student. Although they may be stellar experts doing amazing things on your campus, they can be what I call “map happy” when it comes to intervention mapping or alignment. For example, they may believe that a single course or workshop covers all of their department’s, and maybe even all of the institution’s, learning outcomes.
In light of the wondrous benefits they assume students must be receiving, these individuals may plan to assess every aspect of the experience because surely this is an exemplary practice yet to be recognized — or so they believe.
I’m sure you can fill in the gaps or identify people or areas on your campus where some of this activity may be playing out. The underlying problem is that people may be letting their interests and wishes dominate their assessment approach, allowing real data needs, when they are captured at all, to get lost in the shuffle.
Data needs may not generate as much excitement or interest as wishes, but they are needs for a reason and should be treated as such.
I want to provide some tips and considerations for framing data collection. The elements below can help you and your team separate critical needs from mere wants, wishes, or interests.
Before you begin
Remember reporting requirements
Data has inherent implications for multiple audiences and their various perspectives.
Participants, such as students, may want to see all the results and understand the implications for action. Collaborators may only care about the specific data points they added to the gathering effort. The originating office or department likely has specific data required for strategic goals or objectives. Moreover, members of institutional leadership may need certain data to satisfy internal or external reporting needs.
So, a good deal of consideration should go into how you’ll customize results in order to effectively share them with each audience. The more you think about this in the beginning, the more likely you’ll be to capture data in a way that supports your reporting and sharing.
Let decisions prepare the path
Similar to reporting requirements, the types of decisions the data will inform can shape how questions are formed or what is collected. Think about your institution’s budgeting process or your area’s resource allocations; how might data inform those processes?
And don’t be lopsided in your data collection; you have reason to collect data for programmatic outcomes and objectives, as well as student learning outcomes. Capturing these data together can maximize results for decision-making regarding operational impact.
Assemble your team
It helps to have collaborators — such as faculty, staff, and students — supporting your process.
Students can check for student-friendly language and provide pilot data. Faculty and staff can offer outside perspectives on what matters or pertains to them, helping shape instrument design and strategize for future reporting. Student affairs colleagues can also help recruit relevant perspectives for your effort.
Think about who you might reach out to for collaboration in your assessment effort. Consider the roles they might play, such as connectors, pilot respondents, or editors. Be clear on how their involvement supports you and the stakeholders of the data. It’s even better if you can share a personal benefit or incentive for their participation.
Considerations for practice
As you gather folks together and look toward planning, here are a few factors to work through to ensure that you’re all on the same page. You should sort through each of these pieces during the planning phase in order to safeguard against overeager data collection.
Account for alignment
Alignment is relevant to data collection in two ways: the relationship between your outcomes and your activities, and the connections between individual data points (and their sources).
Firstly, it’s important to know how your learning outcomes and programmatic goals relate to your activities. Connections likely exist here in multiple ways.
For example, you may have a programmatic objective (PO) of making career coaching more accessible to students. Plus, you may have student learning outcomes (SLO) associated with what you want students to know, think, or be able to do as a result of the coaching.
Knowing PO and SLO alignment adds meaning to the assessment of career coaching data. And with PO and SLO connections in mind, other data could be brought in to inform the outcomes. Furthermore, this data can roll up to the department, division, or institution based on connections articulated with other outcomes and objectives.
Alignment also has a more literal and direct implication in tying data points to the POs and SLOs. Knowing that the first two questions of a survey relate to PO 1 will direct you to look at data from both questions in order to draw appropriate conclusions. This level of detail ensures that every data point you collect serves a purpose, while also making it easier to spot when intended data sets are lacking.
If, for instance, a survey on career coaching only had questions related to POs, it becomes easy to see that SLOs are not being measured there — which may or may not have been part of the plan.
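For teams that track alignment in a spreadsheet or assessment platform, this kind of gap-spotting can even be automated. Here is a minimal sketch in Python; the question labels, outcome names, and mapping are all invented for illustration:

```python
# Hypothetical alignment map: each survey question is tagged with the
# outcomes (POs/SLOs) it is meant to inform. All names are invented.
alignment = {
    "Q1": ["PO1"],
    "Q2": ["PO1"],
    "Q3": ["PO2"],
    "Q4": ["PO2"],
}

# The outcomes this assessment effort intends to measure.
intended_outcomes = ["PO1", "PO2", "SLO1", "SLO2"]

# Collect every outcome that at least one question maps to.
covered = {o for outcomes in alignment.values() for o in outcomes}

# Any intended outcome with no question mapped to it is unmeasured.
unmeasured = [o for o in intended_outcomes if o not in covered]

print(unmeasured)  # → ['SLO1', 'SLO2']
```

In this sketch, the survey only covers the programmatic objectives, so the check immediately surfaces that neither student learning outcome would be measured, which may or may not have been part of the plan.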
Separate assessment from research
Assessment efforts are meant to be specific and useful for informing decisions. For example, you may collect data to see if your orientation program informed students and made them feel welcome, then analyze the data to improve future versions of the program.
Assessment results are typically shared internally to relevant stakeholders and then used to inform actions for change.
Research is investigatory and intended for application to larger populations. For example, you might collect data to examine how the impact of orientation differs among students by race, as part of a larger conversation about culturally relevant practices for orientation interventions.
Results of research are often shared externally by being published in an article or presented at a conference.
Sometimes the line between assessment and research can get pretty blurry; some people even believe the two practices exist on a spectrum. Either way, it’s important to understand the intent and use of your data.
Research can require internal approvals to collect data, communicate with participants, and complete consent forms. Assessment is typically viewed as regularly occurring work pertinent to job responsibilities.
Regardless of how you classify your work, you should ask about any internal research or review board protocols at your institution if you plan to share or publish the results externally.
Promise action and sharing
In your assessment planning, be sure to develop a timeline and strategy for sharing data with appropriate stakeholders.
Also, consider the timing and seasonality of using data for action or to inform changes. If, for example, you plan to use data to inform budgeting, make sure the data is collected before budget season. Or if you want to use data to inform updates to your leadership program, make sure to give yourself time to make changes before the program begins.
While it may prove time-consuming, considering the implications of each data point in every assessment effort can help clarify who is responsible for using the data you collect.
Stay curious, yet cautious
While the idea of separating needs from wants makes sense conceptually, some folks may find it hard to stifle their own curiosity or to turn away colleagues who request the addition of just one or two questions.
After all, your colleagues might offer different perspectives worth integrating in your effort. Additionally, leveraging curiosity or interests can engage more faculty and staff in assessment work.
Moreover, you don’t know what you don’t know, so today’s interests could become tomorrow’s needs after reflecting on the data.
To be clear, I’m not saying you can’t combine efforts or infuse curiosity into assessment work. You just need to be intentional, so that your team doesn’t balloon an effort into collecting data that may never be used.
You may think it’s no big deal if data doesn’t get used, but it wastes time and effort on your part and, perhaps more importantly, it ignores the voices of respondents. Consider this scenario:
Your assessment effort involves individual interviews with target students.
Now, imagine that as each student responds, you cover your ears when they answer any questions that you don’t intend to act upon. Alternatively, you could respond, “Thanks for answering my question, but I’m not going to do anything with this information.”
In either case, the student would likely be confused as to why you’re asking the questions in the first place. They may become frustrated or angry, which could impact their future responses or give them a reason to abandon the interview.
Simply cutting the unnecessary questions in the first place would take far less time, and you’d be more likely to gather the data you intended and less likely to have awkward encounters.
The scenario above should not be thought of as an outlandish metaphor. It’s a tangible way to consider how time, effort, and resources can be squandered due to insufficient planning.
Commitments for colleagues
We owe it to ourselves, our colleagues, and our students to only collect necessary data. Being good stewards of data, we should use results to inform our decision-making and take action for improvement. This call-to-action extends beyond our own individual efforts; everyone has some level of responsibility as data collectors and/or consumers. To help reinforce the message, we can model appropriate behavior in our work.
When we are involved in data collection efforts, we should ask the assessment coordinator questions to ensure we understand the goal and necessity of the overall effort, as well as of each individual data point to be collected.
Does each data point serve a purpose? Sometimes we can forget about data that was already collected or may be available to us once we know who responded.
Likewise, we may be able to capture data from faculty and staff instead of students. This can reduce how often we ask students for responses and (hopefully) increase the likelihood that employees will provide the data.
You should also consider representation. Think about what populations and perspectives were involved in the planning of your assessment effort. If you’re surveying students, have you checked your efforts against a student perspective? Your language and approach needs to make sense to the respondent, not just the people collecting the data.
Continuing with representation, how do the identities involved in the effort compare with those of the intended participants or the eventual audiences? Adding diverse voices to the process doesn’t automatically broaden and enrich your perspective, and adding people to an effort doesn’t equate to adding data points; in fact, new collaborators may help trim unnecessary questions or adjust language for appropriateness.
You can reflect on and ask these questions not just of your own work, but also of your colleagues’ work, in order to help them respect the process and the integrity of the data. The more intentional we are in data collection, and the more accountable we are for using the data we collect, the more likely we are to bring about the intended change or improvement.
As faculty, staff, and students understand the purpose and use of data, they’ll be more likely to participate in the next request for feedback or collaboration on a project. As data is collected, we can continue to discern between priorities and needs versus “nice to know” or “interesting” information.
Again, acknowledging interests can lead to actionable insight when intentionally integrated. Remember your project’s purpose.