The Cycle of Responsiveness: For Faculty
To be responsive to students, we must:
1. Create opportunities for students to provide us with information on whether and how they have learned what we want them to learn. These are assessments.
2. Collect the information. This is assessment data.
3. Look at the information to see whether students have learned what we want them to learn.
4. Analyze the information to draw informed conclusions about what students have learned, what they have not, and why.
5. Finally, when the information shows us that our students are not learning what we want them to learn, we must use it to enact changes in our courses or programs that we believe, based on the information itself, our teaching experience, and our subject-matter expertise, will help students learn what we want them to learn.
Identify Outcomes
An outcome is what a student will know or be able to do as a result of the course, program, major, or service that you are offering.
Create Assessment Activities
Once you have identified what you want your students to be able to know or do, create assessment activities that will allow the students to show you what they know or what they can do as a result of the course, program, etc.
The number and type of assessment activities will depend on the outcomes.
Create a curriculum or service plan
After you have created the assessment, create the assignments (for courses) or service plan (for College offices) that will teach students how to do what you want them to be able to do, or to know what you want them to know.
A curriculum is aligned with its assessment when the assessment is an accurate test of whether the student has learned what you want them to learn as a result of your course or service.
Collect Assessment Evidence
There are two kinds of evidence, or data, used in assessment: direct and indirect. Indirect methods reveal characteristics associated with learning, but they only imply that learning has occurred; examples include GPA, percent of classroom time spent on active learning, grades, and course evaluations. Direct methods provide concrete evidence of whether a student has command of a specific subject or content area, can perform a certain task, exhibits a particular skill, demonstrates a certain quality in his or her work, or holds a particular value; examples include embedded questions on exams, quizzes, or assignments; research projects; and grades based on an internship or field experience. Assessment in academic areas must include direct assessment of student learning. These assignments can be built into the curriculum and count toward a grade, or assigned solely for the purpose of assessment. Assessment evidence must be evaluated using a rubric.
“You are not trying to achieve the perfect research design; you are trying to gather enough data to provide a reasonable basis for action.” (Walvoord, 2010)
Summarize the Data
Perhaps the assessment process collected data from embedded test questions, or by using a rubric to evaluate student work. The collected data must be organized and summarized, perhaps using simple tallies and percentages, before being shared with colleagues. Tables, graphs, and other visuals may be helpful. The data should be organized around the question(s) it was designed to answer: was this data collection designed to explain, predict, or explore? The faculty members who lead the assessment initiative may wish to consult the Office of Assessment and Evaluation or the Office of Institutional Research for assistance in aggregating the data and presenting it meaningfully. It is often helpful to “keep a sample of good, bad, and mediocre student work on file to provide firsthand evidence of your standards and rigor.” (Suskie, 2009)
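As an illustration only, the kind of simple tally-and-percentage summary described above can be produced with a few lines of code or a spreadsheet. The scores and benchmark below are hypothetical, not drawn from any actual program data:

```python
from collections import Counter

# Hypothetical rubric scores (1-4 scale) from one assignment;
# in practice these would come from faculty scoring sheets.
scores = [4, 3, 3, 2, 4, 1, 3, 2, 3, 4]

tally = Counter(scores)   # count of students at each rubric level
total = len(scores)

# Report the count and percentage at each level.
for level in sorted(tally):
    pct = 100 * tally[level] / total
    print(f"Level {level}: {tally[level]} students ({pct:.0f}%)")

# Percentage meeting a (hypothetical) benchmark of 3 or above.
meeting = 100 * sum(1 for s in scores if s >= 3) / total
print(f"Meeting benchmark (3+): {meeting:.0f}%")
```

The same tallies can then be turned into a table or bar chart for discussion with colleagues.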
Analyze the Findings
Program and department faculty come together to analyze the summarized and tallied assessment findings. What do these results mean? What do these findings say about the way the curriculum is being delivered and the ways in which students are learning? What other stakeholders might need to be included in the consideration of these findings? The answers to questions such as these help faculty members make strategic decisions about ways in which the program or course may be improved. Program faculty may also discover ways to improve the assessment process itself in future iterations.
Identify and Implement Changes
A critical part of doing assessment well is using the results to improve student learning. “Assessment reports that end up briefly perused and then filed without any resulting action are, to be blunt, a waste of time.” (Suskie, 2009) Actively involve all stakeholders in the decision-making process; this may include faculty beyond those teaching within a specific program or discipline. If assessment findings seem disappointing, the situation is ripe for rich conversations. Review the learning goals. Are there too many? Are they appropriate? Does the curriculum need revision? Does the curriculum allow sufficient time on task for effective student learning of key priorities? Are the assignments and activities well matched to the learning goals? Are the teaching techniques well suited to the needs of learners? Are sufficient student supports (such as tutoring) in place? Once these kinds of questions are considered, faculty members can make well-informed changes and improvements.
Assess Impact of Change
The cycle of assessment is an iterative one. If changes are made with the intention of improving the quality of learning or the delivery of a service, it is important to reassess after an appropriate span of time to make sure they have had the desired effect. Is there evidence that the changes have, in fact, resulted in improvements? If so, that is cause for celebration. If not, faculty members will have to once again analyze the findings, then determine and implement further strategies to better support student learning.