Is our feedback comprehensible to students?
Constructive and supportive feedback is an essential part of personal growth. Feedback is only useful when receivers can understand the message and feel motivated to take action to improve their performance. In my opinion, effective feedback should be an interactive process that involves “comprehensible input” in order for self-regulated learning to take place.
I am borrowing the linguistic term “comprehensible input” to examine the feedback process. According to Dr. Stephen Krashen, comprehensible input refers to messages that people understand when acquiring a second language, and it requires meaningful interaction. Communication breaks down when comprehensible language input is not provided or cannot be understood by learners. In a way, the process by which students come to understand feedback for further improvement is quite similar to how we acquire an additional language. In order to make important decisions to advance their learning, students need to understand and develop proficiency in the feedback language that we, teachers, use based on the criteria descriptors.
Teachers often spend tedious hours providing what they perceive as “quality feedback”, and yet many students seem to take little interest in it and benefit little from it. The marking is time-expensive and the return on investment is simply too low. If feedback is seen as a communicative and interactive process, then it is very clear that there is a breakdown in communication. Teachers refer to the assessment criteria and provide accurate feedback, but their feedback is just not “effective” because students do not understand their feedback language. Here is an example given by Dylan Wiliam in his book, Embedded Formative Assessment (page 120):
‘I remember talking to a middle school student who was looking at the feedback his teacher had given him on a science assignment. The teacher had written, “You need to be more systematic in planning your scientific inquiries.” I asked the student what that meant to him, and he said, “I don’t know. If I knew how to be more systematic, I would have been more systematic the first time.” This kind of feedback is accurate – it is describing what needs to happen – but it is not helpful because the learner does not know how to use the feedback to improve. It is rather like telling an unsuccessful comedian to be funnier – accurate, but not particularly helpful advice.’
Does this sound familiar? I have definitely provided this type of feedback to my students in the past. Teachers make good use of the published MYP assessment criteria but focus on the deficiencies of the work submitted rather than on what students can do to improve their future learning. If we want students to act upon feedback to advance their progress, our expectations must be clearly communicated in advance through language that students can understand. In order for feedback to be “effective”, students and teachers must have a shared language, a language that can be understood by the learner and that leads to improvement and empowerment. This leads to my question: “How can we make assessment criteria comprehensible to students so that they understand how they have been assessed and, in turn, understand the feedback provided?”
Making assessment criteria comprehensible
“If you want to become proficient in playing the guitar, probably the worst thing you can do as a beginner is to try and play the guitar like an expert. Instead, it is far more expedient to break the complex domain of an expert guitar player into its constituent parts of scales and chord formations and practise those before moving onto the more complex demands of playing a complete song.” (Hendrick and Macpherson 105).
The descriptors of the published MYP assessment criteria are often general, qualitative value statements about student achievement (MYP: From Principles into Practice, page 88). Teachers can communicate task expectations and requirements through task-specific clarification in different formats. One type of task-specific clarification is the task-specific rubric: an assessment grid adapted by the teacher that identifies more precisely how the general achievement level descriptors can be addressed by the student for a given task. It is not meant to replace the subject-specific criteria, so students should also have access to the originals (MYP: From Principles into Practice, page 133). Here is an example of a task-specific rubric that can be found online:
- MYP Year 5 Individuals and Societies criterion B ii, shared via the Google Plus MYP Coordinators community
A well-crafted task-specific rubric has the potential to give students a clear understanding of expectations in their learning process. It also allows teachers to provide students with more focused feedback about their strengths and areas for improvement in relation to subject-specific learning objectives. However, although a task-specific rubric might identify the topics, tasks, or content knowledge that students are learning and describe varying levels of mastery for each criterion, the language used can still be ambiguous and far from easy for students to understand. If we simply add the topics, tasks, content knowledge, or concepts to the criteria descriptors without explaining the type of thinking needed, the majority of students may still find the feedback confusing and difficult to grasp.
For example, in an advertisement analysis task for an MYP year 5 phase 5 language acquisition class, we might create a task-specific rubric to clarify the expectations for criterion B i (analyse and draw conclusions from information, main ideas and supporting details) at level 5-6, as below:
Criteria descriptor (level 5-6): The student analyses considerably and draws conclusions from information, main ideas and supporting details.

Task-specific rubric (level 5-6): You analyse considerably and draw conclusions from information, main ideas and supporting details from the print ad.
If we used this task-specific rubric to provide feedback, students might still find it difficult to interpret, because they do not know what “analyse considerably” looks like. This highlights the need to help students “visualize” different levels of thinking. So, here are some ideas for making assessment criteria comprehensible:
- Refer back to the global context exploration and the concepts, and identify what content we want students to learn, including knowledge, understanding and skills. Consider what type of assessment task will allow students to demonstrate their understanding of the unit’s statement of inquiry.
- Provide worked examples at various achievement levels. I highly encourage teachers to build their own collections of previously assessed student work that exemplify the descriptors of the various achievement bands. Students can evaluate and critique the worked samples against the assessment criteria. Having students conduct a standardization of assessment can further engage them in interpreting the assessment criteria.
- It is useful for teachers to refer to the definitions of the command terms provided in the subject guide. According to the MYP language acquisition guide (published in 2014, updated in 2017, page 115), the command term analyse means to break down in order to bring out the essential elements or structure (to identify parts and relationships, and interpret information to reach conclusions). Discuss with students what “analyse” means in this given task context. What are the essential elements or structure of a print advertisement? Discuss with students what actions might take place before analysing the print ad, considering depth of knowledge (DOK). For example, we might start with:
- (Level one: recall) outlining the elements of a print advertisement that are used to attract the target audience;
- (Level two: skill/concept) describing how various elements of a print ad are used to express an intended message;
- (Level three: strategic thinking) explaining how the graphic features of a print ad are used to present information, citing specific evidence;
- (Level four: extended thinking) finally, analysing by responding to the different elements, citing specific evidence from the print ad to support conclusions, and making personal connections. The SOAPSTone reading strategy is a useful model to help students find evidence from the print ad to support their analysis and draw conclusions.
- Identifying the knowledge domain to be assessed is an important step in helping students understand how thinking progresses, and it can also support the planning of differentiated instruction. DOK is just one useful tool for planning the cognitive complexity of an assessment task; the SOLO taxonomy is another useful tool that categorizes higher-order thinking. Before students master the essential skills, there should be smaller steps for them to take and practise.
- Students can work in groups to identify levels of mastery (or quality). What do we mean by “has difficulty analysing”, “analyses adequately”, “analyses considerably”, and “analyses thoroughly”?
- Share the task-specific rubric with students and apply it throughout the learning process.
After considering all of the above, I revised the task-specific rubric (see below) and also changed the pronoun from “you” to “I”. The task-specific clarification needs to be understood by students through dialogue and by examining worked samples. It should be shared with students right at the beginning to assist them in setting learning goals, and students can also use it as a checklist before submitting their work. In this way, teachers can provide feedback more efficiently and effectively. Only if students understand the language of the assessment criteria can they take appropriate action to improve their learning. There is also a benefit in spending time co-designing task-specific rubrics with students: the rubrics can then be used for peer assessment and student self-assessment.
Criteria descriptor (level 5-6): The student analyses considerably and draws conclusions from information, main ideas and supporting details.

Task-specific rubric (level 5-6): I analyse considerably by identifying all the elements presented in the print ad, including the speaker, occasion, audience, purpose, subject and tone. I draw conclusions by citing one or more pieces of specific evidence from the print ad in response to each element (SOAPSTone).
Steps for creating task-specific rubrics
References
- Carless, David. “Feedback Loops and the Longer-Term: Towards Feedback Spirals.” Assessment & Evaluation in Higher Education, 2018, doi:10.1080/02602938.2018.1531108.
- Francis, Erik M. “What EXACTLY Is Depth of Knowledge? (Hint: It’s NOT a Wheel!).” ASCD Inservice, ASCD, 9 May 2017, inservice.ascd.org/what-exactly-is-depth-of-knowledge-hint-its-not-a-wheel/.
- Hendrick, Carl, and Robin Macpherson. What Does This Look Like in the Classroom?: Bridging the Gap between Research and Practice. John Catt Educational, 2017.
- International Baccalaureate. MYP: From Principles into Practice.
- Stronge, James H., et al. Designing Effective Assessments. Solution Tree Press, 2017.
- Wiliam, Dylan. Embedded Formative Assessment. Solution Tree Press, 2011.