Using Rubrics to Assess Interpretive Reading

Last night’s #langchat was hopping! One of the most lively discussions had to do with the topic of using rubrics to assess students’ communication in the interpretive mode. So, at the request of @MmeBlouwolff, I’m sharing a few thoughts about how I use rubrics to assess reading in my classes.

Like many of my colleagues, I did not understand how I could use a rubric to assess reading comprehension when I first began using IPAs. It was not until I saw the ACTFL Interpretive template that I realized I didn’t have to assess comprehension with discrete-point measures. After adopting the question types suggested by this guide, the switch to a more holistic grading system made perfect sense. A student’s comprehension is not adequately assessed by the number of questions they answered correctly, any more than their presentational writing can be evaluated by counting spelling errors. Furthermore, our current understanding of the interpretive mode of communication does not limit us to evaluating our students’ literal comprehension of a text. Instead, we are encouraged to assess interpretive skills such as guessing meaning from context, making inferences, identifying the author’s perspective, and making cultural connections. Using a rubric to measure student growth on these skills allows me to show my students what they can do, as well as how they can improve their interpretive strategies.

Here’s a look at a sample of student work and how I used a rubric to assess both the student’s literal and interpretive comprehension. Please note that although I relied heavily on ACTFL’s Interpretive IPA Rubric, I changed the format to make it more similar to the Ohio proficiency rubrics that I use for the interpersonal and presentational modes.  In addition, I modified some of the wording to reflect my own practices and added a numerical score to each column.

As the completed rubric shows, I ask my students to assess themselves by circling the box that best reflects their own understanding of their performance on each section. In addition to providing an opportunity for self-assessment, this step ensures that the students have a clear understanding of the expectations for the assessment and encourages goal-setting for future performances. This process also provides me with important information about the students’ metacognition. In this case, the student seemed to feel very confident about his/her responses to the Guessing Meaning from Context section, in spite of the fact that s/he guessed only one word correctly.

After collecting the assessments and student-marked rubrics, it’s my turn to assess the students.  The use of a rubric streamlines this process considerably, as I can quickly ascertain where each student’s performance falls without the laborious task of tallying each error.  I simply check the appropriate box on the rubric, and then project a key when I return the papers so that each student receives specific feedback on the correct responses for each item.  

When it comes to determining a score on the assessment, as a general rule I assign the score for which the student has met all, or nearly all, of the descriptors. I do consider, however, how the class does as a whole when assigning numerical grades. I am frequently unrealistic in my expectations for the Guessing Meaning from Context section, for example, and as a result I do not weigh this category very heavily when assigning a final score. In the case of this student’s work, I assigned a grade of 9.5/10, as s/he met many of the descriptors for Accomplished and demonstrated greater comprehension than the majority of his/her classmates.
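To make the scoring logic concrete, here is a minimal sketch in Python. The column labels, point values, and section names are hypothetical stand-ins for the actual rubric’s descriptors, and the final grade still reflects teacher judgment about the class as a whole, as the 9.5/10 above shows.

```python
# A minimal sketch of holistic rubric scoring. Column labels, point
# values, and section names are hypothetical stand-ins for the rubric.
COLUMN_SCORES = {"Accomplished": 10, "Strong": 8.5, "Minimal": 7}

def holistic_score(marked_sections, deemphasized=("Guessing Meaning from Context",)):
    """Return the score for the column the student met on all, or nearly
    all, sections, skipping categories that are not weighed heavily."""
    counted = [column for section, column in marked_sections.items()
               if section not in deemphasized]
    # Treat the most frequently marked column as the overall level.
    best = max(set(counted), key=counted.count)
    return COLUMN_SCORES[best]

student = {
    "Key Word Recognition": "Accomplished",
    "Main Idea": "Accomplished",
    "Supporting Details": "Strong",
    "Guessing Meaning from Context": "Minimal",  # deliberately not counted
}
print(holistic_score(student))  # 10; teacher judgment then settled on 9.5
```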

While the use of rubrics for interpretive communication might not work for everyone, I have found that holistic grading provides better opportunities for self-assessment, encourages students by giving feedback on what they can do, and saves me time on grading.

As always, I look forward to your feedback, questions and suggestions!

Image credit: https://commons.wikimedia.org/wiki/File:Rubric.jpg

13 thoughts on “Using Rubrics to Assess Interpretive Reading”

    1. madameshepard Post author

      In the rare instances when a student does not meet minimum expectations, I would ask him/her to try again. Due to the nature of interpretive reading, I would not expect any student to fail unless s/he refused to complete the assessment.

  1. Barb Milliken

    Salut Madame!

    I love the rubrics you also shared earlier this year that are based on the Ohio ones but are designed for first- vs. second-semester students. The question I have is this: is, for example, a 70 the lowest score a student can receive on the rubric you shared in this entry? On the ones you created for semesters, do you ever have more than one rubric copied on each side for the students who may be below standard, or under a C-level grade? I am unsure if I am making sense, so I’ll leave it at that!

    As always, thanks for the inspiration. I needed this.

    Barb

    1. madameshepard Post author

      Salut, Barb! Your question makes perfect sense, and yes, I did end up using a rubric that included the lower proficiency descriptors for those students who weren’t yet meeting the descriptors I had included. To be clear, the Ohio rubrics list descriptors for Novice Mid through Intermediate Mid, and their website also includes a chart with the expected level for the end of each course: Novice Mid for the end of French 1, Novice High for the end of French 2, Intermediate Low for the end of French 3, etc.

      When I decided to use these rubrics to assign grades, I chose 8.5 as the score for the expected level. So I assigned a score of 8.5 for Novice High 2 at the end of French 2, for example. When assigning scores for the first semester, I simply moved two sublevels back (from Novice High 2 to Novice Low 3). As you mentioned, though, I can’t fit all of the levels on one sheet of paper, and, especially at the beginning of the year, not all of the students were meeting the lowest level on the paper, especially in level 3. What I actually ended up doing was putting a copy of the Level 2 rubric on the back, but I changed the score for each column to create a continuum.

      I hope this makes sense! I would also add that I’ve been thrilled with how these rubrics have worked out. I had a student the other day say, “Look, Madame, I can see how much I’ve improved, because every time I get work back my checkmarks are farther to the right.” This was exactly what I was hoping for when I moved to these rubrics! Lisa
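      For anyone who wants the continuum spelled out, here is a rough sketch in Python. Only the 8.5 anchor for the expected level comes from the description above; the sublevel ladder and the half-point step between columns are illustrative assumptions, not the actual Ohio values.

```python
# A rough sketch of scoring rubric columns as one continuum around the
# expected end-of-course sublevel. The ladder and half-point step are
# hypothetical; only the 8.5 anchor comes from the comment above.
SUBLEVEL_LADDER = [
    "Novice Low 3", "Novice Mid 1", "Novice Mid 2", "Novice Mid 3",
    "Novice High 1", "Novice High 2", "Novice High 3", "Intermediate Low 1",
]

def column_score(sublevel, expected="Novice High 2", anchor=8.5, step=0.5):
    """Score a column by its distance from the expected sublevel, so
    columns split across the front and back of the sheet still line up."""
    offset = SUBLEVEL_LADDER.index(sublevel) - SUBLEVEL_LADDER.index(expected)
    return anchor + step * offset

print(column_score("Novice High 2"))  # 8.5, the expected level for French 2
print(column_score("Novice High 1"))  # 8.0, one sublevel back
```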

  2. Kathy Zetts

    Hi Lisa,

    Thanks for another great post!

    You state that using this sort of rubric saves you time on grading… I was using a similar one, where you assign a point value to each of the descriptors, such as:
    5 – identifies all or nearly all of the key words within the context of the text
    4 – identifies the majority of the key words within the context of the text
    3 – identifies about half of the key words…
    down to
    1 (or 2?) – no response (which I change to 0, because I don’t give points where there is no response to assess)

    Oddly enough, I was thinking that this was MORE time-consuming than tallying errors! I seem to end up doing both: tallying errors to assign the numerical score, then totaling the scores, then curving them if necessary (if, for example, the highest score possible was 25 but the top student scored 19).
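    For what it’s worth, here is a quick sketch of that curve in Python. The scale-to-the-top-score rule is an assumption, since the comment doesn’t pin down exactly how the curve is applied.

```python
# A quick sketch of curving tallied scores. The rule that the top
# student's raw score maps to full credit is an assumption.
def curve(raw_scores, points_possible=25):
    top = max(raw_scores)
    return [round(raw * points_possible / top, 1) for raw in raw_scores]

print(curve([19, 17, 12]))  # [25.0, 22.4, 15.8] when the top raw score is 19
```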

    The tallying gets a bit involved because I usually give partial credit in the “Supporting Details” section if the student puts the check mark to show that the detail is present but does not correctly identify the detail, or is only able to identify the detail in French (not English). For example, one question asked “What type of money is used in the United Kingdom?” and the student copied “la livre sterling” (the pound sterling) from the text but did not know the English translation.

    Obviously I am missing something.

    Do you still use the scoring method I described, or have you completely switched over to what you show in this post?

    I do like the idea of having the students self-assess as they take the test.

    Thoughts? Suggestions? As always, merci beaucoup!

    1. madameshepard Post author

      Hi, Kathy. Yes, my assessment methods are continually evolving. When I first started using IPAs, I thought it would be easier to have the rubric for each section included in the assessment directly following that section, so I could mark it while it was fresh in my mind. I found all the tabulating, and then establishing the curve, very time-consuming, so I switched to the full rubric, which I lay next to the student work as I’m assessing. I hope I’ve answered your question!

  3. Rebecca

    Thanks again for writing this post. I will admit that I was surprised by (what I perceive to be) the generosity of this holistic grading method. I’m curious what your typical class average is for an interpretive assignment. I wonder if I were to adopt this rubric, would everyone get an A on every interpretive assessment? Unlike you, I tend to ask fewer questions (in order to do other activities beyond the assessment in a 45-minute period) and write less challenging questions that most students have a shot at answering correctly. This may skew my perspective a bit. My goal for Term 2 is to give one interpretive assignment that I will grade with a rubric!

    1. madameshepard Post author

      Hi, Rebecca. My students definitely do not all get As (although I would be thrilled if they did!). The average is probably about 8.5. I do feel like I need to ask a minimum number of questions to create a range of All, Most, About Half, etc. Although I try to write questions that I think my students will be able to answer, the questions are often more challenging than I expect. They always struggle in Guessing Meaning from Context and usually have difficulty supporting their responses on the inferential sections with details from the text. You might have a very different experience, though. I hope you’ll share your reflections after you give the rubric a try!

      1. Randa Thomas

        I’ve found the same challenges with many students; explaining their thinking is very hard for them. I suspect this correlates directly with their general writing skills in English. Because I am not evaluating them on their writing ability, and because I tend to have small classes, I can often meet one-on-one with students who struggle to express themselves, letting them explain orally while I ask more probing questions to elicit evidence of their understanding. As long as I don’t think I’m leading them to an answer, I feel good about this approach.

  4. Melanie Thomas

    Hi Lisa – What a great post. I had my own learning curve in accepting this transition, but the template provided by ACTFL is very logical, and the various criteria let me choose what is most appropriate based on the nature of my “article.”
    I love how you confess “I am frequently unrealistic in my expectations for the Guessing Meaning from Context,” because I agree that sometimes I think they should be able to determine things that they can’t. I’m proven wrong when the majority can’t do it, which puts my thoughts/assumptions in check and gives me good reflection.
    Thanks for sharing your insight.

