Category Archives: Assessment

Using Cartoons to Assess Interpretive Listening with Novice Learners


This week’s #langchat discussion about interpretive listening revealed that we language teachers are very diverse in the way we approach this skill, especially with novice learners. Although I reflected at length on the topic of assessing listening in an earlier post, I’d like to specifically address a few of the questions that came up during Thursday night’s discussion.

Question #1: What resources are appropriate for novice learners? While some teachers are hesitant to use authentic resources with early novices, I have found that first semester French 1 students can successfully interpret carefully selected authentic materials when given level-appropriate tasks. My go-to resources for these students are cartoon videos, for the following reasons:

  1. These videos were made for novice language learners—young children in the target culture! As a result, the vocabulary and sentence structures are relatively simple and the linguistic input is supported by strong visual cues. This is exactly what our novice learners need.
  2. The wide selection of these videos ensures that there are several choices available for any theme we’ve included in our novice curriculum. My favorites for my Level 1 and 2 students are Trotro, Petit Ours Brun and T’choupi et Doudou, because of the broad range of topics covered and their comprehensibility. I also occasionally use Peppa Pig with my level 2 students. Although the series was originally recorded in (British) English, the French translation was clearly intended for French-speaking children, so I do consider these to be authentic resources. However, the target culture is not, of course, represented in these videos.
  3. Cartoons are very engaging to my students. They look forward to their turn at the computer and a few students have even mentioned that they have watched additional episodes of the series at home, “just for fun.”
  4. As authentic resources, these cartoon videos often integrate cultural products, practices and perspectives of the target culture. When Petit Ours Brun puts his shoes under the Christmas tree, his grandfather comments on the delicious turkey, and he wakes up to presents on Christmas morning, my students learn relevant cultural practices regarding Christmas celebrations in France.

Question #2: What types of tasks are appropriate for novice learners? I realized as I participated in Thursday night’s #langchat that I have interpreted ACTFL’s descriptors regarding interpretive listening differently than many of my colleagues. The Novice Mid (my goal for level 1) NCSSFL-ACTFL Can-Do Benchmark for interpretive listening reads, “I can recognize some familiar words and phrases when I hear them spoken.” If I understood my colleagues’ responses correctly, many of us may be assessing listening by having students list the words and phrases that they hear. Because it isn’t clear to me how this type of task would demonstrate interpretation/comprehension, I ask students to answer questions to show comprehension of the video, but phrase these questions in a way that allows the students to use previously-learned words/phrases (along with visual context clues) to respond. This year I am using a multiple choice format for my formative listening assessments using our district’s recently-adopted Canvas learning management system. Although I don’t feel that multiple choice is appropriate for many language tasks, this platform has the advantage of providing immediate feedback to my students. In addition, since creating and assessing these quizzes requires a minimal time commitment on my part, I am able to provide more opportunities for listening than I could with other task types. Lastly, this format provides students with additional context clues. Their listening is more purposeful as they are listening for a specific response, as well as to eliminate distractors. While I typically use open-ended question types on my IPA’s, these multiple choice quizzes, which the students complete individually at a computer, provide the majority of my formative listening assessments.

In order to save time, I create these quizzes directly in Canvas, which unfortunately makes them very difficult to share.  For the purposes of this discussion, I’ve uploaded a Word document of screenshots from a quiz I made this morning for the video, Trotro et les cadeaux de Noel (https://www.youtube.com/watch?v=iRcv1pVaitY ). As this document shows, the questions that I’ve created enable these Novice Low-Mid students to demonstrate their ability to interpret this text using only previously-learned words and phrases and visual clues. While most of the items assess literal comprehension, I’ve included a few questions that require the students to make inferences and guess the meanings of new words using context clues. Here’s a quick explanation of my thought process for each question.

#1: While each of these questions would be appropriate to the context, my students will probably understand “pour moi” when they hear it.  They will also be able to eliminate the 2nd choice, because they know the word for Santa.  Although I’ve used the other question words in class, the students are not using them yet.  I included them in the distractors to encourage the students to start thinking about how questions are asked.

#2: This question is a “gimme.”  The students know the word for book and have visual clues as further support.  I created the question to improve the students’ confidence, enable all students to have some “correct” answers, and to provide more context for further questions.  As you can see, I write LOTS of questions, because I find the questions themselves provide important context and help the students follow along with the video.

#3: “Chouette” is a new word for these students, but it appears in a lot of children’s literature/videos and I think they’ll enjoy using it.  The context should make the meaning of this word clear.

#4/#5: The students have learned the word “jeux-video” so I think they’ll get “jeu.” Also, because Trotro uses “jouer,” I think they’ll understand it’s something to play with rather than listen to.

#6/#7: Students can answer by recognizing the previously-learned words “gros” and “belle.”

#8: Although this question does not assess listening comprehension (the word appears in written form), it does provide a contextualized way to introduce a new vocabulary word.

#9: The students can listen for the word “content” as well as eliminate the distractors based on previously-learned words.

#10: The students have heard “maintenant” repeatedly, but it hasn’t been formally introduced.  If they don’t recognize it, they should still be able to eliminate the other choices.

#11: Although the students will not understand the entire sentence in which it appears, they should be able to answer this question by identifying the word “cadeaux.”

#12: I’m curious what my students will do with this inference-based question.  They should recognize the phrase, “Moi, aussi” which should enable them to infer that Boubou got the same gift.

#13: The students should recognize the word “jouer” as well as be able to eliminate the distractors based on previously-learned vocabulary.

#14: The students should be able to use the visual context to guess the meaning of this new vocabulary.

#15: The phrase “c’est moi” should enable the students to choose the correct response for this one. As with several other items, I’ve included the transcription of the entire sentence to introduce new vocabulary—the verb “gagner.”

#16: Although my students won’t be able to use the linguistic content to answer this question, I’ve included it to encourage inference based on visual context clues.

#17: I’ll be curious how they do with this one.  “Bateau” is an unknown word and although they’ve seen “mer,” I’m not sure they’ll pick up on it.  Some might pick out “pirate” but I’ll be curious how many are able to answer this one correctly.

#18: The students have heard “rigolo” and this word even appears in Trotro’s theme song.  In addition, they should be able to eliminate the distractors based on previously-learned vocabulary.

While there’s nothing especially innovative about this assessment format, after completing many similar tasks during their first semester of language study, most of my level 1 students are pretty accurate when completing this type of formative assessment.

Question #3: How should interpretive listening be assessed? I did want to make a point about grading these formative assessments. Although I do my best to create questions that are mostly at the students’ current proficiency level, with a few items thrown in to encourage “stretch,” I rely heavily on my students’ results to determine how close I came to hitting this target. Therefore, I do not decide how to grade these assessments until I have data on how the class scored as a whole. In other words, this particular formative assessment will not necessarily be worth 18 points. If, for example, the highest score is 16, I might make this the maximum score. For teachers who do not record a score on formative assessments, this isn’t an issue, of course. I only suggest that we expect and allow for student errors when assessing interpretive listening (even using objective evaluations), just as we do when assessing the other modes.
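
For readers who like to see that adjustment spelled out, here is a minimal sketch of the idea in Python. It simply treats the highest score earned by the class as the new maximum for the quiz; the function name and the sample scores are my own illustration, not part of any real gradebook tool.

```python
# Illustrative sketch: use the highest score earned by the class as the
# maximum for a formative listening quiz, rather than the nominal item count.
# Names and numbers are hypothetical.

def rescale_to_class_max(raw_scores):
    """Return (class_max, fractions) where each score is expressed
    as a fraction of the best score earned by the class."""
    class_max = max(raw_scores)          # e.g. 16 on an 18-item quiz
    return class_max, [round(s / class_max, 2) for s in raw_scores]

class_max, fractions = rescale_to_class_max([16, 14, 12, 15, 9])
print(class_max)   # 16 -> this quiz is now graded out of 16, not 18
print(fractions)   # [1.0, 0.88, 0.75, 0.94, 0.56]
```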

I’d love to hear from any of you who are willing to share your experiences and ideas about assessing listening with novice learners!

Image credit: www.gallimard-jeunesse.fr

5 Tips for Grading IPAs

The first grading period ended in my school this week, so there was lots of talk in my department about how time-consuming it is to grade IPA’s. While I am enough of a teacher nerd to actually enjoy creating IPA’s, I cannot say the same for grading them! Here are a few suggestions that have helped me streamline the process and cut down the time I spend on this task.

  1. Assign a rough draft for the Presentational Writing. I often incorporate a series of learning stations before an IPA and one of these stations consists of writing a rough draft for the IPA. Since I have only 8 students at each station per day, the process of providing feedback is less overwhelming. The students benefit from this feedback on this formative assessment and usually do much better on the IPA as a result.
  2. Use rubrics. I began using the Ohio Department of Education rubrics this year and I really like them. Since Ohio has not yet created an Interpretive Rubric, I use the ACTFL rubric, which I’ve modified to meet my needs. (See this post for a detailed explanation.) When grading the reading and writing sections of an IPA, I lay a rubric next to the student’s paper and check the corresponding box, making very few marks on the student’s paper. Since I will go over the interpretive sections with the class, I don’t find it necessary to mark each response on each student’s paper. Likewise, having given specific feedback on the rough drafts, I see no need to do so on this final copy, which I will keep in my files after returning it temporarily for feedback purposes.
  3. Avoid math. After I have checked the appropriate box in each section of the rubric, I determine a score for that section of the IPA. (My gradebook is divided according to language skills—reading, writing, listening, and speaking, so each IPA task gets its own score.) I use a holistic system, rather than mathematical calculations, to determine an overall score for each task. If all of the checks are in the “Good” column, the student earns a 9/10. If there are a few checks in the “Strong” column (and the rest are Good), the student earns a 10/10. If the checks are distributed between the Good and the Developing columns, the student earns an 8. If the checks are all in the Developing column, the student earns a 7. If there are several checks in the Emerging column, the student earns a 6. If a student were unable to meet the criteria for Emerging, I would assign a score of 5/10, the lowest score I record. (A rough sketch of this mapping appears after this list.)
  4. Grade the Interpersonal Speaking “live.” I know that many teachers have their students record their conversations and then listen to them later. If this works for you, you have my admiration. I know myself far too well—I would procrastinate forever if I had 30 conversations to listen to when I got home at night!  It works much better for me to call up two randomly-chosen students to my desk while the rest of the class is working on the presentational writing.  I can usually get most of the class done in one period, in part because I also place a time limit on their conversation— usually about 3 minutes for my novice students and 4-5 for my intermediates. I find that I can adequately assess their performance in that amount of time, and the students are relieved to know that there is a finite period of time during which they will be expected to speak.  I mark the rubric as they’re speaking, provide a few examples, and then write a score as the next pair is on their way to my desk.
  5. Use technology for Interpretive Listening. Each of my IPA’s includes both an Interpretive Reading and an Interpretive Listening. Because I haven’t found the ACTFL Interpretive Template to work well with listening assessments (see this post), I am currently using basic comprehension, guessing meaning from context, and sometimes main idea and inference questions to assess listening.  Although I’ve used a short answer format for these items in the past, I am starting to experiment with creating multiple choice “quizzes” on Canvas (our learning management system).  I know that other teachers have had success creating assessment items using Zaption and other programs.  I’m still reflecting on the use of objective questions to assess listening, but these programs do offer a way for teachers to provide more timely feedback and for students to benefit from additional context to guide their listening.
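
To make the holistic mapping in tip 3 concrete, here is a minimal Python sketch. The column names, function name, and count-based thresholds are my own assumptions; in practice the decision is made by looking at the checked rubric, not by running a formula.

```python
# Hypothetical sketch of the holistic rubric-to-score mapping described above.
# The count-based rules are illustrative only; the real judgment is made by
# eye from the checked rubric.

def holistic_score(strong=0, good=0, developing=0, emerging=0, below_emerging=0):
    """Map the number of rubric checks in each column to a 5-10 score."""
    if below_emerging > 0:
        return 5        # could not meet the criteria for Emerging
    if emerging >= 2:
        return 6        # several checks in the Emerging column
    if developing and not (good or strong):
        return 7        # all checks in Developing
    if developing:
        return 8        # checks split between Good and Developing
    if strong:
        return 10       # a few Strong checks, the rest Good
    return 9            # all checks in Good

print(holistic_score(strong=2, good=3))       # 10
print(holistic_score(good=2, developing=3))   # 8
```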

If you have any tips for grading IPA’s,  please share!

Photo Credits

  • Comstock Images/Comstock/Getty Images

Assessing Proficiency: A SLO Pre-Assessment for French 2 Students

In Ohio, as in an increasing number of states, teachers are now evaluated (in part) on the extent to which their students meet the Student Learning Objectives that have been set for them.  Fortunately, both the Ohio Foreign Language Association and the Ohio Department of Education have encouraged us to develop SLO’s based on student growth in proficiency.  Therefore, within the first couple of weeks of school, I will be giving this pre-assessment (French 2 SLO Pre-Assessment, SLO Article p. 1, SLO Article p. 2) to my French 2 students.  Rather than assessing their work using performance rubrics, as I do for the unit IPA’s, I will use these proficiency rubrics to assess this IPA. I will then give a post-assessment (an IPA that is unrelated to a recent unit of study) to assess student growth.

How do you measure growth in your students?

Bienvenue Partie 2: Designing IPA’s for Novice Low Learners

In conversations about Integrated Performance Assessments, my fellow teachers often share their concerns about using authentic texts with beginners. There seems to be a widespread belief that true beginners cannot derive meaning from texts created by native speakers for native speakers. I hope that these assessments, which will be implemented during the unit I shared in yesterday’s post, will demonstrate that even Novice Low learners can read and listen to authentic texts when the tasks are designed to correspond to their proficiency level.

As I explained in yesterday’s post, I created two separate IPA’s for this unit.  As often happens in real-life school settings, instructional decision-making is influenced by many factors.  Because this unit will not yet be completed before the interim progress report grades are due, I prepared a short IPA to be administered after about three weeks of instruction.  This assessment will provide information to my students and their families regarding their ability to use their brand-new language skills in an authentic context.

IPA #1 (Revised 9/14/2015)

As you can see, I did not follow the exact order (Interpretive-Interpersonal-Presentational) that is recommended in designing IPA’s.  In this case I used an alternative format to better fit the context of the assessment, which was a visit to a Francophone school.  Therefore, in this IPA the students will first listen to an authentic video about holidays and then read an article about France from an authentic children’s magazine (Les Pays…08082015). Next, they will respond to a note from a student in the class.  Lastly, they will answer the school secretary’s questions.  Although all of my previous IPA’s have incorporated student-to-student interaction for the interpersonal task, I will play the role of the school secretary in this instance, as the Novice Low ACTFL Can-Do’s reflect the students’ ability to introduce themselves at this level, but not to interview others. This is the “secretary’s” script:

Bonjour.

Comment ça va?

Tu t’appelles comment?

Comment ça s’écrit ?

Tu as quel âge ?

Quelle est la date de ton anniversaire?

Merci, Bonne journée.

Au revoir.

IPA #2 (Note: the video used for the listening assessment is no longer available, but a search on “Mes fournitures scolaires” on YouTube might provide a similar video.) Edited 9/21/19: The text for my original IPA is no longer available.  However, Stacy Nordquist has generously shared a similar IPA that she created using a recent school supply list: IPA, List

In this summative assessment for the unit, I continued the context by explaining that the students were now preparing for their first day of school in their temporary home in Morocco.  Before the first day they will 1) Read the school’s list of required supplies (Interpretive Reading), 2) Listen to a video in which a student presents her school supplies (Interpretive Listening), 3) Discuss their school supplies with a neighbor (Interpersonal Communication), and 4) Make a list of school supplies they need to buy (Presentational Writing).

French 1 Unit 1 Formatives

As shown in the tentative agenda I included in yesterday’s post, I will administer a quick formative assessment after each lesson.  These quizzes are designed to assess the extent to which the students are able to identify new vocabulary words.  Any student who is not successful on any of these quizzes will be given an opportunity to receive additional instruction and retake the assessment. As with the first IPA, the red text is teacher script and will not appear in the student copy.

Image Credit: http://claire-mangin.eklablog.com/

Grading: A necessary evil?

If it were up to me, I would provide feedback, but not numerical or letter grades, to my students. In my experience, assigning scores to assignments, assessments, and overall achievement often has a negative effect on the learning process. My more ambitious students are so focused on their scores for various assessments that they tend to disregard the feedback provided to help them increase their proficiency. The less motivated students sometimes regard a low score as an excuse to stop trying, rather than directing their attention to constructive feedback that would help them improve on future performances. Over time, the focus shifts to the grade rather than the work itself, and students can feel as though they are never improving.

Grading can have a negative impact on students and, as much as I would like to completely eliminate the process of assigning grades to my students, I know this is not a realistic expectation given my current teaching situation. In my school, as in most large public high schools in the country, grades serve many purposes for the students and stakeholders in their education. Here are a few that immediately come to mind:

  • Some parents use grades to determine the extent to which they need to become more involved in their child’s schoolwork, limit extra-curricular activities, take disciplinary measures, etc.
  • Grades provide input to guidance counselors when making scheduling decisions.
  • Administrators consider grades when placing students in various educational programs.
  • Coaches make decisions about what types of intervention to provide based on student athletes’ grades.
  • Mental health professionals consider students’ grades when diagnosing certain learning differences or mental health issues.
  • Colleges use students’ grades to make decisions about whom to accept or give scholarships to.
  • Students make decisions about work habits and even whether to remain enrolled in a course based on their grades.

For these reasons, I am required to keep an (electronic) gradebook in which I record numerical scores for various assignments and assessments. These scores are then used to determine a numerical average, which is then converted to a letter grade based on the district’s grading scale.

Although I cannot totally eliminate the grading process, I do have a fair amount of autonomy in determining how these grades are tabulated. In my current teaching position, I am able to make the following decisions regarding the grading process:

  • The formula used to convert individual scores into an overall grade
  • The types of assignments/assessments that are graded
  • The methods I use to assign a numerical score to these assignments/assessments

When making choices about these aspects of the grading process, I take many factors into account. First and foremost, it is of utmost importance that my students’ grades reflect what they can do with language (and therefore their proficiency), rather than their compliance, behavior, effort, etc. Secondly, it is important that the scores provide targeted feedback on each student’s strengths and areas for improvement. Lastly, I want my grading system to provide motivation for those students who are grade-driven, yet not be overly punitive for those students who are less motivated by grades. While I continue to tweak my grading system as my understanding of proficiency evolves, this is the grading system I will implement this year.

Formulating a Quarter Average

In order to ensure that my students’ overall grades reflect the extent to which they have met the proficiency goals I have set for them, 80% of each student’s quarter grade is derived from his/her scores on the two or three IPA’s that I administer each quarter. Rather than recording one score for each IPA, however, I assign a separate score for each language skill that is assessed on the IPA. Therefore, each student will earn a Reading score for the interpretive reading task on the IPA, a Listening score for the interpretive listening task, a Speaking score for the interpersonal communication or presentational speaking task, and a Writing score for the presentational writing task. Each of these skill categories is worth 20% of the overall grade. The advantage of recording these scores in separate categories, rather than as a single score, is that I can immediately identify a student’s strengths and weaknesses and provide individualized coaching to help students improve. While some educators use the communicative modes, rather than language skill areas, as their grading categories, my personal experience does not support this configuration. I have found little transfer, for example, between interpretive listening and interpretive reading skills. Likewise, my students with strong presentational speaking skills do not necessarily have the accuracy required to be strong writers. I do find, however, that students are fairly consistent across modes in terms of language skills. For instance, a student who can communicate effectively in a conversation can usually transfer these same skills to an oral presentation.

In addition to these language skill categories, I have a fifth section which includes all other assignments/assessments. Grades on classwork/formative assessments, quizzes, etc. are recorded as Miscellaneous scores. While many teachers don’t record scores on formative assessments, I have found that many of my students are more motivated to complete classwork and to prepare for formative assessments if their scores on these evaluations will appear in the gradebook. Due to the large number of scores in this category, each individual score has only minor mathematical significance. As a result, a poor score on any of these assignments will have very little effect on a student’s overall grade, ensuring that the student’s quarter grade is primarily derived from his/her summative IPA’s.
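
As a rough illustration of how these category weights combine, here is a small Python sketch. The weights follow the percentages described above (four skill categories at 20% each, plus the Miscellaneous category); the category names, data structure, and sample scores are my own assumptions, not an export from any real gradebook.

```python
# Illustrative sketch of the weighted quarter average described above.
# Reading, Listening, Speaking, and Writing each count 20% (80% total from
# IPA tasks), and Miscellaneous counts the remaining 20%. Data is made up.

WEIGHTS = {"reading": 0.20, "listening": 0.20, "speaking": 0.20,
           "writing": 0.20, "miscellaneous": 0.20}

def quarter_average(scores_by_category):
    """scores_by_category maps each category to a list of (earned, possible)."""
    average = 0.0
    for category, weight in WEIGHTS.items():
        earned = sum(e for e, _ in scores_by_category[category])
        possible = sum(p for _, p in scores_by_category[category])
        average += weight * (earned / possible)
    return round(100 * average, 1)

example = {
    "reading": [(9, 10), (8, 10)],       # two IPA interpretive reading tasks
    "listening": [(10, 10), (7, 10)],
    "speaking": [(9, 10), (9, 10)],
    "writing": [(8, 10), (9, 10)],
    "miscellaneous": [(14, 16), (18, 20), (9, 10)],  # classwork, formatives, quizzes
}
print(quarter_average(example))  # 86.8
```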

Assigning Scores to IPA’s

This year I will assess my IPA’s using the Ohio Department of Education’s Presentational Speaking, Presentational Writing and Interpersonal Communication Scoring Guides and the ACTFL IPA rubric for Interpretive Reading (with the modifications discussed in this earlier post). As I assess the IPA’s, I will check the appropriate box in each section of these rubrics in order to provide comprehensive feedback to my students. However, I will not provide a numerical score, in order to ensure that the students remain focused on their learning rather than their grade. As I will need a numerical score for my gradebook, I’ll use these formulas to convert the rubric evaluations into scores for record-keeping purposes.

Interpretive Listening: Because I have not found the ACTFL template to be an effective method of assessing interpretive listening skills (see this post), I am currently using a variety of comprehension questions to assess listening. My method for determining a grade based on student responses to these questions is, however, a work in progress. Although I try to create questions that could be answered using previously-learned vocabulary and context clues, my students’ performances have demonstrated that I am not always realistic in my expectations. It is clearly not reasonable to expect Novice students to answer all questions about an authentic video when “I can understand basic information in ads, announcements, and other simple recordings” is an Intermediate Mid NCSSFL-ACTFL Can-Do statement. Therefore, I used data from my students’ responses on IPA’s (all of which were new last year) to inform my calculations. I then create a table such as this one. Because this process is norm-referenced rather than criterion-referenced, I am not entirely satisfied with it and will continue to reflect on how best to assess my students on interpretive listening.

Assigning Scores to Formative Assessments

While the primary purpose of my formative assessments is to provide feedback, I also assign scores to some of these assignments. Doing so provides additional motivation to some students and encourages absent students to make up their missed work. On most days, my students will complete at least one of the following, which may be scored as a formative assessment. I use these rubrics to formulate a score on the following types of formative assessments.

  1. Presentational Speaking – I sometimes choose 2-3 students to present on a topic that was assigned as homework (Novice) or to present what they have learned from a reading or conversation (Intermediate).
  2. Interpersonal Speaking – I circulate among my students as they are completing the interpersonal speaking activities during the unit. While I cannot spend enough time with each pair/group to adequately assess them, I do choose 3-4 groups to assess during each interpersonal speaking activity.
  3. Presentational Writing – My students complete several presentational writing assignments throughout the unit that are designed to help them practice the skills they will need to be successful on the IPA. While I cannot assess all of these assignments, I will provide feedback (or use peer feedback) as often as possible. In addition, by randomly selecting several papers to score on each assignment, I can ensure that all students will have at least one writing formative assessment score for each unit.
  4. Interpretive Reading/Listening – In many cases, I provide whole class feedback by going over the correct responses to interpretive activities. However, I do sometimes collect student work in order to evaluate and provide feedback on individual performance. Depending on how much time I have available, I might correct all or parts of an interpretive task for feedback purposes and then assign a score using the interpretive formative assessment rubric.

While I will continue to evaluate my grading practices, I hope that this system will allow me to assess my students’ progress on the goals I have established and to provide the necessary feedback that will enable them to make continued progress along the path to proficiency.


Musings on Assessing Interpretive Listening

A couple of weeks ago I shared my thoughts about assessing interpretive reading.  In that post, I gave my opinion that ACTFL’s IPA Template was a generally effective way to design an assessment of reading comprehension and that, with a couple of modifications, their rubric was well-aligned with the tasks on the template. I have reservations, however, about the use of the ACTFL IPA template to assess listening.  Here are a couple of my thoughts about assessing listening; please share yours!

Assessing Interpretive Listening is Important

By defining both listening and reading comprehension as Interpretive Communication, ACTFL has given us an out when writing IPA’s.  We can choose to include either one, but are not required to include both.   My guess is that when given the choice, most of us are choosing authentic written rather than recorded texts for the interpretive portion of our IPA’s.  There are several good reasons why this may be the case.

  1. Authentic written texts are usually relatively easy to find. A quick Google search of our topic + Infographie will often produce a content-filled, culturally-rich text with the visual support that Novice learners need. Picture books, ads, social media posts, etc. provide additional resources.  For our Intermediate students, our options are even greater as their proficiency allows them to read a wider variety of short texts, an unlimited supply of which are quickly located online.
  2. Written texts can be easily evaluated regarding their appropriateness for our interpretive assessment tasks. A quick skim will reveal whether a text being considered contains the targeted language and structures, culturally-relevant content, and appropriate visual support that we are looking for.
  3. Assessments of interpretive reading are easy to administer. We need only a Xerox machine to provide a copy of the text to each student, who can then complete the interpretive task at her own pace. When a student is absent, we can simply hand him a copy of the text and interpretive activity and he can complete the task in a corner of the room or any other area of the building where make-ups are administered.

Curating oral texts and assessing their interpretation, however, is considerably more time-consuming.  While we have millions of videos available to us on YouTube (my personal go-to for authentic listening texts), videos cannot be skimmed like written texts.  We actually have to listen to the videos that our searches produce in order to evaluate whether they are appropriate to the proficiency of the students for whom they are intended.  In some cases, we have to listen to dozens of videos before finding that gem that contains the appropriate vocabulary, cultural content and visual support that our learners need.  When it comes to administering these assessments, we often face additional challenges.  In my school, YouTube is blocked on student accounts.  Therefore, I have to log into 30 computers in a lab (which is seldom available) or my department’s class set of iPads (sometimes available) for all of my students to individually complete a listening assessment at the same time. While many of us play and project our videos to the class as a whole, I think this places an undue burden on our Novice students who “require repetition, rephrasing, and/or a slowed rate of speech for comprehension” (ACTFL Proficiency Guidelines, 2012). A student who has her own device can pause and rewind when needed, as well as slow the rate of speech when appropriate technology is available.

In spite of these challenges to evaluating listening comprehension, I think we have a responsibility to assess our students’ ability to interpret oral texts. As Dave Burgess said at a conference I recently attended, “It’s not supposed to be easy, it’s supposed to be worth it.”  Assessing interpretive listening skills IS worth it. As the adage says, “we teach what we test.”  If we are not evaluating listening, we are not teaching our students what they need to comprehend and participate in verbal exchanges with members of the target culture.  While technology may allow us to translate a written text in nanoseconds, no app can allow us to understand an unexpected public announcement or participate fully in a natural conversation with a native speaker. In my opinion, our assessment practices are not complete if we are not assessing listening comprehension to the same extent as reading comprehension. As a matter of fact, I include separate categories for each of these skills in my electronic gradebook.  While others may separate grades according to modes of communication, I’m not sure this system provides as much information regarding student progress toward proficiency. Although both reading and listening may require interpretation of a text, they are clearly vastly different skills.  Students who are good readers are not necessarily good listeners, and vice versa. In their Proficiency Guidelines, ACTFL clearly differentiates these two skills; don’t we need to do the same when evaluating our students using an IPA?

Designing Valid Interpretive Listening Assessments is Difficult

In my opinion, ACTFL has provided us with very little direction in assessing interpretive listening.  While we are advised to use the same IPA Interpretive template, I find that many of these tasks do not effectively assess listening comprehension. Consider the following:

Key Words. While students can quickly skim a written text to find key words, the same is not true of recorded texts.  Finding isolated key words requires listening to the video multiple times and attempting to isolate a single word in a sentence.  I find this task needlessly time-consuming, as I will be assessing literal comprehension in other tasks.  Furthermore, this task puts some students, especially those with certain learning disabilities, at a significant disadvantage.  Many of these students have excellent listening comprehension, but are not able to accurately transfer what they understand aurally into written form.

Main Idea. Although this task seems fairly straightforward, I question its validity in assessing comprehension for Novice and Intermediate learners. According to the ACTFL Proficiency Guidelines, Novice-level listeners are “largely dependent on factors other than the message itself” and Intermediate listeners “require a controlled listening environment where they hear what they may expect to hear.”  This means that all of my students will be highly dependent on the visual content of the videos I select to ascertain meaning.  Therefore, any main idea they provide will most likely be derived from what the students see rather than what they hear.  A possible solution might be for the teacher to provide a series of possible main ideas (all of which could be extrapolated from the visual information) and have the students choose the best one.  However, this task would certainly be unrealistic for our novice learners who are listening at word level.

Supporting Details.  I think this task on the IPA template is the most effective in providing us feedback regarding our students’ ability to understand a recorded text.  By providing a set of details which may be mentioned, we provide additional context to help our students understand the text and by requiring them to fill in information we are assessing their literal comprehension of what they hear.  In addition, this type of task can easily be adjusted to correspond to the proficiency level of the students. Providing information to support the detail, “Caillou’s age” for example, is a realistic expectation for a novice listener who is watching a cartoon.

Organizational Features. While I see little value in this task for interpretive reading, I see even less for listening.  As previously mentioned, even intermediate listeners need to be assessed using straightforward texts so that they can anticipate the information they will hear.  Having my students describe the organization of a recorded text would not provide additional information about their comprehension.

Guessing meaning from context. As much as I value this task on reading assessments, I do not find it to be a valid means of assessing aural comprehension.  The task requires the teacher to provide a sentence from the text and the student to guess the meaning of an underlined word.  As soon as I provide my students with a written sentence, the task becomes an assessment of their ability to read this sentence, rather than understand it aurally.

Inferences. As with the main idea, I think Novice and Intermediate listeners will be overly dependent on visual cues to provide inferences.  While I believe students should be taught to use context to help provide meaning, I prefer to assess what they are actually able to interpret verbally. ACTFL does suggest providing multiple choice inferences in the IPA template, but again the teacher would have to provide choices whose plausibility could not be derived from visual information in order to isolate listening comprehension.

Author’s Perspective. While I regularly include author’s perspective items on my assessments for my AP students, I feel this is an unrealistic task for Novice and Intermediate Low listeners.  Students who are able to understand words, phrases, and sentence-length utterances will most likely be unable to identify an author’s perspective using only the verbal content of a video.

Cultural Connections. Authentic videos are one of the best tools we have for providing cultural content to our students.  The content provided by the images is more meaningful, memorable, and complete than any verbal information could be.  However, once again it is difficult to isolate the verbal content from the visual, creating a lack of validity for assessment purposes.

Conclusion

For now, I’m planning on using supporting detail or simple comprehension questions when formally assessing my students’ interpretive listening skills in order to ensure that I am testing what I intend to test. When practicing these interpretive skills, however, I plan on including some of the other tasks from the IPA template in order to fully exploit the wealth of information that is included in authentic videos.  I’m looking forward to hearing from you about how you assess your students on interpretive listening!

Checking for Comprehension: Providing feedback on interpretive tasks

A couple of weeks ago I shared the checklists I created to streamline the feedback process with the new Ohio World Language Scoring Guides. These checklists were designed to quickly inform students of their strengths and areas for improvement on Presentational Speaking/Writing and Interpersonal assessments. Although Ohio did not create their own Interpretive scoring guide, I decided to make up a quick checklist to accompany the ACTFL interpretive rubric so that I would have a complete set of these checklists to guide my feedback process on both formative and summative assessments.  Here’s a copy of the checklist: interpretive feedback.

In order to maintain consistency with the other checklists, I wrote the expectations in the middle column.  Most of the wording I used here came from the “Strong Comprehension” column on the ACTFL rubric, although I made a few slight changes, based on my reflections in this previous post.  In the column on the right, I have listed some suggestions that I will check for students who don’t meet the descriptors for Strong Comprehension.  On the left are comments designed to let the students know what their specific strengths were on the task.  As I intend this feedback checklist to be used in conjunction with the ACTFL rubric, I have also included a section at the bottom where I will check which level of comprehension the student demonstrated on the interpretive task being assessed.

Because I rely heavily on authentic resources and corresponding interpretive tasks in designing my units, it was very important for me to be able to provide timely feedback on these assignments/formative assessments.  It is my hope that these checklists will help me quickly give specific feedback that will enable the students to demonstrate greater comprehension on their summative assessments (IPA) for each unit.

Musings on Assessing Interpretive Reading

Although I still have four days left in my current school year, my thoughts are already turning to the changes that I plan on making to improve my teaching for next year.  This great article in a current teen magazine (matin p. 1, matin p. 2, matin p. 3, matin p. 4) prompted me to write this interpretive task for my first French 2 IPA in the fall, as well as to reflect on what has worked and not worked in assessing my students’ proficiency in the interpretive mode.  I have learned a lot after writing 30+ IPA’s this year!

As my readers will have noticed, I create all of my interpretive assessments based on the template provided by ACTFL.  For the most part, I love using this template for assessing reading comprehension.  The tasks on the template encourage my students to use both top-down and bottom-up processes to comprehend what they are reading and in most cases the descriptors in the rubric enable the teacher to pinpoint the students’ level of comprehension based on their responses in each section.  I do, however, have a few suggestions about using this template in the classroom and modifying the rubric in a way that will streamline the assessment process and increase student success. Below, I’ve addressed each section of the template, as well as the corresponding section of the ACTFL rubric.

Key Word Recognition: In this section, the students are given a list of English words or phrases and asked to find the corresponding French words/phrases in the text.  Because I don’t give many vocabulary quizzes, this section helps me identify whether the students have retained the vocabulary they have used in the unit. In the lower levels I also add cognates to this section, so that the students will become accustomed to identifying these words in the text.  I also include some previously-learned words here, to assess the students’ ability to access this vocabulary.  This section of the ACTFL rubric works well, as it objectively assesses the students on how many of these words they are able to identify.  I have found it helpful to identify in advance what range of correct answers will be considered all, the majority, half, and few, as these are the terms used to define the various levels of comprehension on the rubric.  In the IPA that I’ve included here, there are 14 key words, so I’ve set the following range for each level: Accomplished (13-14 correct responses), Strong Comprehension (9-12 correct responses), Minimal Comprehension (6-8 correct responses) and Limited Comprehension (5 or fewer correct responses). Establishing these numerical references helps streamline the process of evaluating the students on this section of the interpretive task and ensures that I am assessing the students as objectively as possible.
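
Spelled out as a quick Python sketch, the cut points above translate into something like the following; the function name is hypothetical and the ranges are simply the ones listed for this 14-item task.

```python
# A small sketch of the numeric ranges set for this 14-item key word task.
# The cut points are the ones given in the post; names are illustrative.

def keyword_level(correct):
    """Map the number of correctly identified key words to a rubric level."""
    if correct >= 13:
        return "Accomplished Comprehension"   # 13-14 correct
    if correct >= 9:
        return "Strong Comprehension"         # 9-12 correct
    if correct >= 6:
        return "Minimal Comprehension"        # 6-8 correct
    return "Limited Comprehension"            # 5 or fewer correct

print(keyword_level(11))  # Strong Comprehension
```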

Main Idea: While this is a very important task, I have found that it is rather difficult to assess individual student responses, due to the variety of ways that the students interpret the directions in this section. In the sample IPA, I would consider the main idea of the text to be “to present the responses of a group of teenagers who were questioned about what gets them up in the morning.”  However, upon examining the rubric, it is clear that a more detailed main idea is required.  According to the rubric, to demonstrate an Accomplished level of comprehension, a student must identify “the complete main idea of the text;” a student who “misses some elements” will be evaluated as having Strong Comprehension; and if she identifies “some part of the main idea,” she falls to the Minimal Comprehension category. Clearly, a strong response to the main idea task must include more than a simple statement.  In this example, a better main idea might be “to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”  My students need more direction in identifying a main idea in order to demonstrate Accomplished Comprehension in this section.  I think a simple change to the directions might improve their performance here. Here’s my suggestion:

  • Main Idea(s). “Using information from the article, provide the main idea(s) of the article in English” (ACTFL wording) and provide a few details from the text to support the main idea.

An issue that I’ve had in assessing the students’ main ideas is that the descriptors require the teacher to have a clear, specific main idea in mind in order to assess how many “key parts” of the main idea the students have identified.  In practice, I have found that interpreting a text’s main idea is quite subjective: students often identify an accurate main idea that differs considerably from the one I had envisioned.  Therefore, I would suggest the following descriptors for this task.  The information in parentheses suggests what a main idea might look like for the sample text.

  • Accomplished: Identifies the main idea of text and provides a few pertinent details/examples to support this main idea. (“The main idea is to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”)
  • Strong: Identifies the main idea and provides at least one pertinent detail/example to support the main idea. (“The main idea is that a group of kids are telling when they get up in the morning and why.”)
  • Minimal: Provides a relevant main idea but does not support it with any pertinent details or examples. (“It’s about why these kids have to get up in the morning.”)
  • Limited: May provide details from the text but is unable to determine a main idea. (“It’s about what these kids like to do.” or “It’s about what these kids want to be when they grow up.”)

Supporting Details. In my opinion, this section is the meat of an interpretive assessment.  This is where I actually find out how much the students understood about the text.  As a result, I usually include more items here than ACTFL’s suggested five, with three distractors. While I like the general format of this task on the template, I quickly discovered when implementing it that I needed to make some slight changes. Namely, I had to eliminate the directive that the students identify where each supporting detail was located in the text and write the letter of the detail next to the corresponding information. In the real world, when I am grading 50+ IPA’s at a time, checking this task was entirely too cumbersome.  I photocopy the texts separately from the assessment, so that the students are not required to constantly flip through a packet to complete the IPA.  Therefore, if I were to assess this task, I would have to lay each student’s two packets next to each other and refer back and forth between the assessment and the text to locate each letter.  I would then have to evaluate whether each letter was indeed placed close enough to the corresponding detail to indicate true comprehension.  I found this to be an extremely time-consuming, as well as subjective, task, which did not provide the information I needed to determine how well the student comprehended the details in the text.  As a result, I quickly eliminated this requirement in this section.  I have, however, retained the requirement that students check each detail to indicate whether it was included in the text.  This allows the student who “knows it’s right there” but “doesn’t know what it says” to demonstrate his limited comprehension.

The most important aspect of this section, however, is that the students provide information to support the details they have checked.  Because this is the only section of the template that actually requires the student to demonstrate literal, sentence-level comprehension of the text, I think it’s important to evaluate it very carefully.  In my opinion, the descriptors in the ACTFL rubric do not allow the teacher to adequately assess this section. Consider this description for Minimal Comprehension: “Identifies some supporting details in the text and may provide limited information from the text to explain these details. Or identifies the majority of supporting details but is unable to provide information from the text to explain these details.”  In my opinion, this descriptor creates a false dichotomy between a student’s ability to identify the existence/location of relevant information and his ability to actually comprehend the text.  According to this rubric, a student who is unable to provide any actual information from the text would be considered to be meeting expectations. In a real-life example, if a language learner knows that the driver’s manual tells which side of the street to drive on, but does not know whether she is to drive on the left or the right, I would not say she has met expectations for a minimal comprehension of the manual.  Rather than reinforce this dichotomy, I would prefer to delineate the levels of comprehension as:

  • Accomplished: Identifies all supporting details in the text and accurately provides information from the text to explain these details. (same as ACTFL)
  • Strong: Identifies most of the supporting details and provides pertinent information from the text to explain these details.
  • Minimal: Identifies most of the supporting details and provides pertinent information from the text to explain many of them.
  • Limited: Identifies most of the supporting details and provides pertinent information from the text to explain some of them.

As you can see, I expect the student to identify all or most of the supporting details at all levels of comprehension.  Since my students are identifying details by checking a blank, and 70-80% of the blanks will be checked (20%-30% are distractors), a student could randomly check all of the blanks and meet the descriptor for “identifying most of the details.”  Therefore, this part of the descriptor is less relevant than the amount of pertinent information from the text that is provided to explain the details.

Organizational Features: As I mentioned in a previous post about designing IPA’s, I understand the role that an understanding of a text’s organization plays in a student’s comprehension.  However, in practice I often omit this section. The organization of the texts that I use is so evident that asking the students to identify it does not provide significant information about their comprehension.  If, however, I were to use a text that presented an unexpected organizational structure, I think this task would become relevant and I would include it and use the ACTFL rubric to assess the students.

Guessing Meaning from Context.  I love this section!  I think that a student’s responses here could tell me more about their comprehension than any other section of the interpretive task.  However, in practice this is not always the case.  In general, my students tend to perform below my expectations in this section. It may be that I am selecting passages that are above their level of comprehension or it may be that my students don’t take the time to locate the actual sentence in the article.  As a result, the less motivated students simply fill in a cognate, rather than a word that might make sense in the sentence.  Regardless, I think the ACTFL rubric works well here.  I do, however, usually include about five items here, rather than the suggested three.  This allows my students a greater possibility of success as they can score a “Minimal Comprehension” for inferring a plausible meaning for at least three (“most”) items.

Inference. I have found that this section also provides important information about my students’ overall comprehension of the text.  The greatest challenge is encouraging them to include enough support from the text for their inferences.  A slight change I would suggest to the rubric would be to change the wording, which currently assesses students according to how many correct or plausible inferences they make.  Since the template suggests only two questions, it seems illogical that a student who makes a “few plausible inferences” would be assessed as having “Minimal Comprehension.”  In actual practice, I have assessed students more on how well they support their inferences than on the number of inferences they have made.  If I were designing a rubric for this section, I would suggest the following descriptors here:

  • Accomplished Comprehension: “Infers and interprets the text’s meaning in a highly plausible manner” (ACTFL wording) and supports these inferences with detailed, pertinent information from the text.
  • Strong Comprehension: “Infers and interprets the text’s meaning in a partially complete and/or partially plausible manner” (ACTFL wording) and adequately supports these inferences with pertinent information from the text.
  • Minimal Comprehension: Makes a plausible inference regarding the text’s meaning but provides inadequate  information from the text to support this inference.
  • Limited Comprehension: Makes a plausible inference regarding the text’s meaning but does not support the inference with pertinent information from the text.

Author’s Perspective. Although not all text types lend themselves to this task, I include it whenever possible.  I do, however, deviate somewhat from the suggested perspectives provided by ACTFL.  Rather than general perspectives, such as scientific, moral, factual, etc., I have the students choose between three rather specific possible perspectives.  As with identifying inferences, I believe the most important aspect of the students’ responses on this task is the textual evidence that the students provide to support their choice.  In my opinion, the ACTFL rubric for this section provides good descriptors for determining a student’s comprehension based on the textual support they provide.

Comparing Cultural Perspectives. Although I find these items difficult to write, I think this section is imperative.  One of the most important reasons for using authentic resources is for the cultural information that they contain.  This task allows us to direct the students’ attention to the cultural information provided by the text, as well as to assess how well they are able to interpret this information to acquire new understandings of the target culture.  The first challenge in writing these items is that the teacher must phrase the question in a way that enables the student to connect the cultural product/practice to a cultural perspective.  This is especially difficult for novice learners who may have very little background knowledge about the target culture(s).  Because identifying a cultural perspective is such a sophisticated task, I think it’s important to provide a fair amount of guidance in these items. While the ACTFL template provides some sample questions, it’s important to realize that some of these questions do not allow the students to adequately identify a cultural perspective. In addition, many of the suggested questions assume that the students share a common culture and background knowledge.  I have made many mistaken assumptions when asking my students to compare a French practice to an American one.  Many of my students have not traveled outside of our community, have not had many cultural experiences, and lack basic knowledge about American history.  Therefore, they do not have the background knowledge about U.S. culture to make adequate comparisons. Furthermore, a significant percentage of my students were born outside of the U.S., so any question requiring them to demonstrate knowledge of American culture is unfair.  In the future, when writing a comparison question I will invite my students to compare the French practice to “a practice in a culture they are familiar with.”

My concern with the ACTFL rubric for this section is that a student is assessed mostly on his or her ability to connect the product or practice to a perspective.  While I think this higher-level thinking skill is important, I have not found it to be closely related to the students’ comprehension of the text.  I have students who may wholly comprehend the text but lack the cognitive sophistication to use what they read to make a connection to perspectives, placing them in the Limited Comprehension category.  In addition, many valuable texts simply don’t include the types of information needed to make these connections. Perhaps the following descriptors might be more realistic?

  • Accomplished Comprehension: Accurately identifies cultural products or practices from the text and uses them to infer a plausible cultural perspective.
  • Strong Comprehension: Accurately identifies at least one product or practice from the text and uses it to infer a plausible cultural perspective.
  • Minimal Comprehension: Accurately identifies at least one product or practice from the text but does not infer a plausible cultural perspective.
  • Limited Comprehension: May identify a superficial product or practice that is not directly related to the text but is unable to infer a plausible cultural perspective.

Please let me know if you’re using the ACTFL template and rubrics to assess Interpretive Communication and how it’s working for you.  I have lots to learn!

 

It’s all about the feedback: Checklists to accompany Ohio’s Scoring Guidelines for World Languages

The arrival of the new Ohio Scoring Guides for World Languages, as well as an excellent post by Amy Lenord, served as an important reminder that I need to improve the feedback that I give my students.  Although I have used a checklist for feedback in the past, I haven’t been completely consistent in using it lately.  Furthermore, my previous checklist was not aligned to these new scoring guidelines.  It was definitely time to do some updating!

Fortunately, the Ohio Scoring Guides for Performance Evaluation provide a great framework for meaningful feedback.  Each of the rubrics includes an additional page that lists the criteria, along with blank spaces for self or teacher feedback. Unfortunately, I know that my written comments do not always meet the students’ needs, especially on speaking assessments. The notes that I jot down while listening to their production are most likely incomprehensible to my students.  My hurried handwriting is illegible, and it is difficult for my students to see the connection between my comments and their success on the performance. In order to address these issues, I prepared a series of checklists that I will incorporate when providing feedback with these rubrics.  For each set of criteria on the ODE rubrics I have added specific comments to target the student’s strengths, as well as a list of comments to identify suggestions for improvement.  By checking off these specific comments, I hope to give legible, focused feedback to my students on both formative and summative performance tasks.  In addition, I envision having the students do their own goal-setting by highlighting specific areas of the rubric that they would like to focus on.

When developing my comments, I considered both the criteria and the comments that I find myself using over and over again.  As a French teacher, I specifically addressed common errors made by English speakers, especially in terms of pronunciation and common structures.  In addition, I have included an “Other” line for strengths/errors that are not specifically addressed on the checklist.  It was important to me that my checklist fit on one sheet of paper for ease of use, so I tried to include only those errors most often made by my students.

It is my hope that these checklists will help my students identify both their strengths and areas for improvement and streamline their progress toward higher levels of proficiency. Here are the checklists; let me know what you think! Feedback checklists

Unpacking the new Ohio Scoring Guides for World Languages


Although it may seem unfathomable to some of the younger teachers out there, I still remember the first time I saw the word “rubric” in the title of a session at a foreign language conference years ago. At the time, I had no idea what a rubric was or how it related to assessing language learners. Needless to say, that session forever changed the way that I evaluated student learning in my classroom.  I was so excited about this new way of assessing students that I started creating rubrics for everything.  At first I preferred analytic rubrics—assigning separate scores to each aspect of a written or oral product just seemed more objective.  However, I eventually realized that the quality of a performance could not necessarily be calculated by adding up the separate parts of a whole, so I switched to a holistic rubric that I tweaked periodically over the years.  I have realized this year, however, that I needed to do some major revising to reflect my current proficiency-based methodology. The descriptors I was using didn’t adequately reflect the elements of proficiency as described by ACTFL. Since my own performance is now being evaluated according to my students’ proficiency, it is important that I am methodical in providing feedback to my students that is clearly related to their progress in proficiency.  Fortunately for me, the state of Ohio has recently published a series of scoring guidelines that will help me do just that!

You can find the rubrics in their entirety here and my comments below.

http://education.ohio.gov/Topics/Ohio-s-New-Learning-Standards/Foreign-Language/World-Languages-Model-Curriculum/World-Languages-Model-Curriculum-Framework/Instructional-Strategies/Scoring-Guidelines-for-World-Languages

 

  1. Performance Evaluation. These are the rubrics designed to be used with end-of-unit assessments. There are three separate rubrics—Presentational Speaking, Presentational Writing, and Interpersonal Communication. I think that these scoring guidelines will be an invaluable asset in my assessment practices for the following reasons:
  • The heading of the rubric provides a means for the teacher to indicate the targeted performance level of the assessment. As a Level 1-5 teacher, I may find it helpful to have one set of guidelines to use with all students rather than a series of level-specific rubrics.  The wording in the descriptors allows the teacher to adjust for the unit content and proficiency level with phrases such as “appropriate vocabulary,” “practiced structures,” “communicative goal,” and “targeted level.” The Interpersonal Communication rubric even includes specific descriptors for both Novice and Intermediate interaction.
  • Each rubric includes a page designed for student self-assessment and/or teacher feedback on each section of the rubric. The overall descriptors are given for each criterion, along with separate columns for strengths and areas for improvement.  I think this format will allow me to provide specific, targeted feedback to my students.  They will know exactly what they need to do in order to progress in their performance. As a result, I anticipate using this page alone to provide feedback on formative assessments.
  • The wording in these rubrics is well-suited to Integrated Performance Assessments. All three guidelines include a descriptor about whether the student’s response was supported with an authentic resource (or detail).
  • These rubrics convey the vital role that cultural content must play in all performances, with a criterion devoted entirely to “Cultural Competence.” The presence of this descriptor will serve as an important reminder to the teacher, who must include a cultural component when developing assessments, and to the student, who must demonstrate that this knowledge has been attained.
  2. Proficiency Evaluation. These are the rubrics designed to assess the students’ overall proficiency level in Presentational Speaking, Presentational Writing, and Interpersonal Communication. A separate rubric is therefore included for each proficiency level targeted in a secondary program (Novice Mid-Intermediate Mid). The design of these rubrics will enable me to clearly identify my students’ proficiency for the following reasons:
  • Each rubric is aligned to the ACTFL descriptors for the targeted proficiency level. I will no longer have to page through the ACTFL documents to find the descriptors that I need for each level.
  • Each rubric also contains Interculturality descriptors, based on the NCSSFL Interculturality Can-Do statements.
  • Each rubric contains descriptors for three sub-levels of the targeted proficiency level. This is vital for those of us who are required to measure growth over less than a year’s time.  In my district, for example, our proficiency post-test must be given in March, before many students are able to demonstrate progress to the next full proficiency level.
  • Although my current understanding is that proficiency can only be measured by unrehearsed tasks that are not related to a recent unit of study, teachers who use proficiency-based grading might use these rubrics throughout the academic year.

Because Ohio has deferred to the ACTFL rubrics for assessing Interpretive Reading and Listening, I look forward to addressing those guidelines in a future post.  In the meantime, I’d love to hear others’ opinions of these new rubrics.