Musings on assessing Interpretive Reading

Although I still have four days left in my current school year, my thoughts are already turning to the changes that I plan on making to improve my teaching for next year.  This great article in a current teen magazine (matin p. 1, matin p. 2, matin p. 3, matin p. 4) prompted me to write this interpretive task for my first French 2 IPA in the fall, as well as to reflect on what has worked and not worked in assessing my students’ proficiency in the interpretive mode.  I have learned a lot after writing 30+ IPA’s this year!

As my readers will have noticed, I create all of my interpretive assessments based on the template provided by ACTFL.  For the most part, I love using this template for assessing reading comprehension.  The tasks on the template encourage my students to use both top-down and bottom-up processes to comprehend what they are reading and in most cases the descriptors in the rubric enable the teacher to pinpoint the students’ level of comprehension based on their responses in each section.  I do, however, have a few suggestions about using this template in the classroom and modifying the rubric in a way that will streamline the assessment process and increase student success. Below, I’ve addressed each section of the template, as well as the corresponding section of the ACTFL rubric.

Key Word Recognition: In this section, the students are given a list of English words or phrases and asked to find the corresponding French words/phrases in the text.  Because I don’t give many vocabulary quizzes, this section helps me identify whether the students have retained the vocabulary they have used in the unit. In the lower levels I also add cognates to this section, so that the students will become accustomed to identifying these words in the text.  I also include some previously-learned words here, to assess the students’ ability to access this vocabulary.  This section of the ACTFL rubric works well, as it objectively assesses the students on how many of these words they are able to identify.  I have found it helpful to identify in advance what range of correct answers will be considered all, the majority, half and few, as these are the terms used to define the various levels of comprehension on the rubric.  In the IPA that I’ve included here, there are 14 key words, so I’ve set the following range for each level: Accomplished (13-14 correct responses), Strong Comprehension (9-12 correct responses), Minimal Comprehension (6-8 correct responses) and Limited Comprehension (5 or fewer correct responses). Establishing these numerical references helps streamline the process of evaluating the students on this section of the interpretive task and ensures that I am assessing the students as objectively as possible.
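For teachers who track IPA scores in a gradebook spreadsheet or script, the numerical ranges above can be expressed as a simple lookup. This is only an illustrative sketch of the ranges I described for a 14-item section; the function name is mine, not part of the ACTFL rubric:

```python
def key_word_level(correct: int) -> str:
    """Map the number of correct key-word identifications (out of 14)
    to a comprehension level, using the ranges described above."""
    if correct >= 13:          # 13-14 correct
        return "Accomplished Comprehension"
    elif correct >= 9:         # 9-12 correct
        return "Strong Comprehension"
    elif correct >= 6:         # 6-8 correct
        return "Minimal Comprehension"
    else:                      # 5 or fewer correct
        return "Limited Comprehension"
```

A section with a different number of key words would of course need its own cut-offs, decided in advance in the same way.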

Main Idea: While this is a very important task, I have found that it is rather difficult to assess individual student responses, due to the variety of ways that the students interpret the directions in this section. In the sample IPA, I would consider the main idea of the text to be “to present the responses of a group of teenagers who were questioned about what gets them up in the morning.”  However, upon examining the rubric, it is clear that a more detailed main idea is required.  According to the rubric, to demonstrate an Accomplished level of comprehension, a student must identify “the complete main idea of the text;” a student who “misses some elements” will be evaluated as having Strong Comprehension and if she identifies “some part of the main idea,” she falls to the Minimal Comprehension category. Clearly, a strong response to the main idea task must include more than a simple statement.  In this example, a better main idea might be “to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”  Clearly, my students need more direction in identifying a main idea in order to demonstrate Accomplished Comprehension in this section.  I think a simple change to the directions might improve their performance here. Here’s my suggestion:

  • Main Idea(s). “Using information from the article, provide the main idea(s) of the article in English” (ACTFL wording) and provide a few details from the text to support the main idea.

An issue that I’ve had in assessing the students’ main ideas is that the descriptors require the teacher to have a clear, specific main idea in mind in order to assess how many “key parts” of the main idea the students have identified.  In my practice I have found that interpreting a text’s main idea is quite subjective.  I have found that students often identify an accurate main idea that may differ considerably from the one I had envisioned.  Therefore, I would suggest the following descriptors for this task.  The information in parentheses suggests what a main idea might look like for the sample text.

  • Accomplished: Identifies the main idea of text and provides a few pertinent details/examples to support this main idea. (“The main idea is to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”)
  • Strong: Identifies the main idea and provides at least one pertinent detail/example to support the main idea. (“The main idea is that a group of kids are telling when they get up in the morning and why.”)
  • Minimal: Provides a relevant main idea but does not support it with any pertinent details or examples. (“It’s about why these kids have to get up in the morning.”)
  • Limited: May provide details from the text but is unable to determine a main idea. (“It’s about what these kids like to do.” or “It’s about what these kids want to be when they grow up.”)

Supporting Details. In my opinion, this section is the meat of an interpretive assessment.  This is where I actually find out how much the students understood about the text.  As a result, I usually include more items here than ACTFL’s suggested five, with three distractors. While I like the general format of this task on the template, I quickly discovered when implementing it that I needed to make some slight changes. Namely, I had to eliminate the directive that the students identify where each supporting detail was located in the text and write the letter of the detail next to the corresponding information. In the real world, when I am grading 50+ IPA’s at a time, checking this task was entirely too cumbersome.  I photocopy the texts separately from the assessment, so that the students are not required to constantly flip through a packet to complete the IPA.  Therefore, if I were to assess this task, I would have to lay each student’s two packets next to each other and refer back and forth to their assessment and text to locate each letter.  I would then have to evaluate whether each letter was indeed placed close enough to the corresponding detail to indicate true comprehension.  I found this to be an extremely time-consuming and subjective task, which did not provide the information I needed to determine how well the student comprehended the details in the text.  As a result, I quickly eliminated this requirement in this section.  I have, however, retained the requirement that students check each detail to indicate whether it was included in the text.  This allows the student who “knows it’s right there” but “doesn’t know what it says” to demonstrate his limited comprehension.  The most important aspect of this section, however, is that the students provide information to support the details they have checked.
Because this is the only section of the template that actually requires the student to demonstrate literal, sentence-level comprehension of the text, I think it’s important to evaluate it very carefully.  In my opinion, the descriptors in the ACTFL rubric do not allow the teacher to adequately assess this section. Consider this descriptor for Minimal Comprehension: “Identifies some supporting details in the text and may provide limited information from the text to explain these details. Or identifies the majority of supporting details but is unable to provide information from the text to explain these details.”  In my opinion, this descriptor creates a false dichotomy between a student’s ability to identify the existence/location of relevant information and his ability to actually comprehend the text.  According to this rubric, a student who is unable to provide any actual information from the text would be considered as meeting expectations. In a real-life example, if a language learner knows that the driver’s manual tells which side of the street to drive on, but does not know whether she is to drive on the left or the right, I would not say she has met expectations for a minimal comprehension of the manual.  Rather than reinforce this dichotomy, I would prefer to delineate the levels of comprehension as:

  • Accomplished: Identifies all supporting details in the text and accurately provides information from the text to explain these details. (same as ACTFL)
  • Strong: Identifies most of the supporting details and provides pertinent information from the text to explain these details.
  • Minimal: Identifies most of the supporting details and provides pertinent information from the text to explain many of them.
  • Limited: Identifies most of the supporting details and provides pertinent information from the text to explain some of them.

As you can see, I expect the student to identify all or most of the supporting details at all levels of comprehension.  Since my students are identifying details by checking a blank, and 70-80% of the blanks will be checked (20%-30% are distractors), a student could randomly check all of the blanks and meet the descriptor for “identifying most of the details.”  Therefore, this part of the descriptor is less relevant than the amount of pertinent information from the text that is provided to explain the details.

Organization Feature: As I mentioned in a previous post about designing IPA’s, I understand the role that an understanding of a text’s organization plays in a student’s comprehension.  However, in practice I often omit this section. The organization of the texts that I use is so evident that asking the students to identify it does not provide significant information about their comprehension.  If, however, I were to use a text that presented an unexpected organizational structure, I think this task would become relevant and I would include it and use the ACTFL rubric to assess the students.

Guessing Meaning from Context.  I love this section!  I think that a student’s responses here could tell me more about their comprehension than any other section of the interpretive task.  However, in practice this is not always the case.  In general, my students tend to perform below my expectations in this section. It may be that I am selecting passages that are above their level of comprehension or it may be that my students don’t take the time to locate the actual sentence in the article.  As a result, the less motivated students simply fill in a cognate, rather than a word that might make sense in the sentence.  Regardless, I think the ACTFL rubric works well here.  I do, however, usually include about five items here, rather than the suggested three.  This allows my students a greater possibility of success as they can score a “Minimal Comprehension” for inferring a plausible meaning for at least three (“most”) items.

Inference. I have found that this section also provides important information about my students’ overall comprehension of the text.  The greatest challenge is encouraging them to include enough textual support from the text to support their inferences.  A slight change I would suggest to the rubric would be to change the wording, which currently assesses students according to how many correct or plausible inferences they make.  Since the template suggests only two questions, it seems illogical that a student who makes a “few plausible inferences” would be assessed as having “Minimal Comprehension.”  In actual practice, I have assessed students more on how well they support their inferences than the number of inferences they have made.  If I were designing a rubric for this section, I would suggest the following descriptors here:

  • Accomplished Comprehension: “Infers and interprets the text’s meaning in a highly plausible manner” (ACTFL wording) and supports these inferences with detailed, pertinent information from the text.
  • Strong Comprehension: “Infers and interprets the text’s meaning in a partially complete and/or partially plausible manner” (ACTFL wording) and adequately supports these inferences with pertinent information from the text.
  • Minimal Comprehension: Makes a plausible inference regarding the text’s meaning but provides inadequate information from the text to support this inference.
  • Limited Comprehension: Makes a plausible inference regarding the text’s meaning but does not support the inference with pertinent information from the text.

Author’s Perspective. Although not all text types lend themselves to this task, I include it whenever possible.  I do, however, deviate somewhat from the suggested perspectives provided by ACTFL.  Rather than general perspectives, such as scientific, moral, factual, etc., I have the students choose between three rather specific possible perspectives.  As with identifying inferences, I believe the most important aspect of the students’ responses on this task is the textual evidence that the students provide to support their choice.  In my opinion, the ACTFL rubric for this section provides good descriptors for determining a student’s comprehension based on the textual support they provide.

Comparing Cultural Perspectives. Although I find these items difficult to write, I think this section is imperative.  One of the most important reasons for using authentic resources is for the cultural information that they contain.  This task allows us to direct the students’ attention to the cultural information provided by the text, as well as to assess how well they are able to interpret this information to acquire new understandings of the target culture.  The first challenge in writing these items is that the teacher must phrase the question in a way that enables the student to connect the cultural product/practice to a cultural perspective.  This is especially difficult for novice learners who may have very little background knowledge about the target culture(s).  Because identifying a cultural perspective is such a sophisticated task, I think it’s important to provide a fair amount of guidance in these items. While the ACTFL template provides some sample questions, it’s important to realize that some of these questions do not allow the students to adequately identify a cultural perspective. In addition, many of the suggested questions assume that the students share a common culture and background knowledge.  I have made many mistaken assumptions when asking my students to compare a French practice to an American one.  Many of my students have not traveled outside of our community, have not had many cultural experiences, and lack basic knowledge about American history.  Therefore, they do not have the background knowledge about U.S. culture to make adequate comparisons. Furthermore, a significant percentage of my students were born outside of the U.S., so any question requiring them to demonstrate knowledge of American culture is unfair.  In the future, when writing a comparison question I will invite my students to compare the French practice to “a practice in a culture they are familiar with.”

My concern with the ACTFL rubric for this section is that a student is assessed mostly on his ability to connect the product or practice to a perspective.  While I think this high-level thinking skill is important, I have not found it to be closely related to the students’ comprehension of the text.  I have students who may wholly comprehend the text, but lack the cognitive sophistication to use what they read to make a connection to perspectives, placing them in the Limited Comprehension category.  In addition, many valuable texts simply don’t include the types of information needed to make these connections. Perhaps the following descriptors might be more realistic?

  • Accomplished Comprehension: Accurately identifies cultural products or practices from the text and uses them to infer a plausible cultural perspective.
  • Strong Comprehension: Accurately identifies at least one product or practice from the text and uses it to infer a plausible cultural perspective.
  • Minimal Comprehension: Accurately identifies at least one product or practice from the text but does not infer a plausible cultural perspective.
  • Limited Comprehension: May identify a superficial product or practice that is not directly related to the text but is unable to infer a plausible cultural perspective.

Please let me know if you’re using the ACTFL template and rubrics to assess Interpretive Communication and how it’s working for you.  I have lots to learn!

 

7 thoughts on “Musings on assessing Interpretive Reading”

  1. Thuy

    Hello Lisa:
    You’re brilliant! Thank you so much for sharing your lessons with us.
    It’s really kind of you.
    Thuy

  2. Rebecca B.

    As always, it’s a pleasure to hear from a teacher who’s a few steps (at least!) ahead of me on this journey and who can put into words many of my still-disorganized thoughts on the process of devising interpretive reading assessments with authentic materials. Something I’ve loved about doing these for the first time this year is that they’re so quick and easy to grade. However, that’s because I’m not using a rubric and just scoring them out of 20 as follows: 1 point for each word in key word recognition (total = 6 points), 2 points for each of the 5 correct supporting details (1 point for circling it and 1 point for marking the letter in the text, total = 10 points), 4 points for student’s written summary. I’m curious, given that you felt the need to go back and add numerical ranges, if you think the rubric adds much to your assessment of students’ performances?

    Teaching kids how to take these well has been a huge learning experience for me. I found that I really needed to do it in English, which I didn’t want to do in order to maintain TL – and then it ended up that I wasn’t really teaching them how to attack the assessment until midyear when I relented and did a good 20-minute session in English about using context, reading through all the statements at the beginning to get a sense of the topic, and answering who/what/where/when/why/how in the summary portion (I also tell them, write with enough detail that I could just read your summary and still be able to discuss this article with a friend).

    Finally, I’ve found that copying these onto 11×17″ paper is really helpful. I put the article on one side and the questions on the other. Then students can easily go between the two, nothing can get lost (at the start of the year kids kept forgetting to hand in the articles), and I can quickly identify the letters they’ve marked for supporting details. I make them circle or underline the corresponding text so that it’s clear what they’re referring to and it just takes a second to assess.

    Onward! Thanks again for sharing your insights here.

    1. madameshepard Post author

      Hi, Rebecca
      I’m glad you asked about the rubric, because you’ve given me something to reflect about. I have lots of questions about your system! On the supporting detail part, do you give any points for what they actually fill in for the supporting detail, or don’t you do that part (do they just circle and label?)? I can see how a point system works well for these sections. Do you do the other sections on the ACTFL IPA template? It seems that those parts would be hard for me to do without a rubric. What a great idea about the way you copy it–I would have never thought of that! I appreciate your point about teaching the kids to take this kind of assessment. I didn’t really do that and I think it would have helped. I never really have time to go over the IPA’s after the kids take them–by the time I have all of the absent kids make them up, and then get them all graded, they seem like ancient history! I’ll definitely do a better job giving them some pointers before their first IPA next year! Thank you so much for your great suggestions!

      1. Rebecca

        Hi again. I only do key word recognition, circling and labeling the supporting details, and summary. When my dept. head introduced us to these, he said those were the key parts for novice learners, so I just left it at that.

        One of my favorite ways to go over these is when I hand them back, I have kids read their summaries aloud in English (!) to a neighbor and note anything their partner included that they left out. Then we go through who, what, where, etc. (different kids volunteering for each little piece) as a class to make sure we’ve identified all the key stuff that they should pack in there. I still have some kids who’ll write “This article is about who wastes food, how much they waste, and why it’s a problem” instead of “This article describes how the typical French person wastes 20 kilos of food a year which uses up finite resources like land, energy, and water,” but as a group they’ve come sooooo far.

        I’ll send you a recent one tonight in case this wasn’t clear.

        Thanks again!

        1. madameshepard Post author

          What a great way to improve their ability to write summaries. I love your example, too. It clarified to me what a good summary is!

  3. Amanda Howard

    I love these clarifications and I was able to implement my first IPA in this style back in December, using a modified version of your rubric to fit our grading scale better. One of the things I had students do was to self-evaluate with their rubric as they were going, and then I graded them on the same exact rubric so that they could see how they did. I was also fortunate enough to spend some time on a day when we came back from break this week to go over IPAs with each student so that they have an idea of why they scored the way they did and how they could improve for next time. I’m still an infant in this whole proficiency and writing IPAs thing, but your work and explanations of your thinking help tremendously! Thank you SO much!

    1. madameshepard Post author

      I think that having the students self-evaluate is a great idea and I’ve been doing the same thing! I definitely need to close the feedback loop, though–I’m horrible about making time to go over the IPA’s with my students. It seems like by the time I have all the absent students make up their work and get everything graded, I’m in the middle of something else. Your comments have definitely motivated me to try harder! Thanks so much for sharing your experiences! Lisa

