Monthly Archives: May 2015

Musings on Assessing Interpretive Listening

A couple of weeks ago I shared my thoughts about assessing interpretive reading.  In that post I gave my opinion that ACTFL’s IPA Template was a generally effective way to design an assessment of reading comprehension and that, with a couple of modifications, their rubric was well-aligned with the tasks on the template. I have reservations, however, about the use of the ACTFL IPA template to assess listening.  Here are a couple of my thoughts about assessing listening; please share yours!

Assessing Interpretive Listening is Important

By defining both listening and reading comprehension as Interpretive Communication, ACTFL has given us an out when writing IPA’s.  We can choose to include either one, but are not required to include both.   My guess is that when given the choice, most of us are choosing authentic written rather than recorded texts for the interpretive portion of our IPA’s.  There are several good reasons why this may be the case.

  1. Authentic written texts are usually relatively easy to find. A quick Google search of our topic + Infographie will often produce a content-filled, culturally-rich text with the visual support that Novice learners need. Picture books, ads, social media posts, etc. provide additional resources.  For our Intermediate students, our options are even greater as their proficiency allows them to read a wider variety of short texts, an unlimited supply of which are quickly located online.
  2. Written texts can be easily evaluated regarding their appropriateness for our interpretive assessment tasks. A quick skim will reveal whether a text being considered contains the targeted language and structures, culturally-relevant content, and appropriate visual support that we are looking for.
  3. Assessments of interpretive reading are easy to administer. We need only a Xerox machine to provide a copy of the text to each student, who can then complete the interpretive task at her own pace. When a student is absent, we can simply hand him a copy of the text and interpretive activity and he can complete the task in a corner of the room or any other area of the building where make-ups are administered.

Curating oral texts and assessing their interpretation, however, is considerably more time-consuming.  While we have millions of videos available to us on YouTube (my personal go-to for authentic listening texts), videos cannot be skimmed like written texts.  We actually have to listen to the videos that our searches produce in order to evaluate whether they are appropriate to the proficiency of the students for whom they are intended.  In some cases, we have to listen to dozens of videos before finding that gem that contains the appropriate vocabulary, cultural content and visual support that our learners need.  When it comes to administering these assessments, we often face additional challenges.  In my school, YouTube is blocked on student accounts.  Therefore, I have to log into 30 computers in a lab (which is seldom available) or my department’s class set of iPads (sometimes available) for all of my students to individually complete a listening assessment at the same time. While many of us play and project our videos to the class as a whole, I think this places an undue burden on our Novice students who “require repetition, rephrasing, and/or a slowed rate of speech for comprehension” (ACTFL Proficiency Guidelines, 2012). A student who has her own device can pause and rewind when needed, as well as slow the rate of speech when appropriate technology is available.

In spite of these challenges to evaluating listening comprehension, I think we have a responsibility to assess our students’ ability to interpret oral texts. As Dave Burgess said at a conference I recently attended, “It’s not supposed to be easy, it’s supposed to be worth it.”  Assessing interpretive listening skills IS worth it. As the adage says, “we teach what we test.”  If we are not evaluating listening, we are not teaching our students what they need to comprehend and participate in verbal exchanges with members of the target culture.  While technology may allow us to translate a written text in nanoseconds, no app can allow us to understand an unexpected public announcement or participate fully in a natural conversation with a native speaker. In my opinion, our assessment practices are not complete if we are not assessing listening comprehension to the same extent as reading comprehension. As a matter of fact, I include separate categories for each of these skills in my electronic gradebook.  While others may separate grades according to modes of communication, I’m not sure this system provides as much information regarding student progress toward proficiency. Although both reading and listening may require interpretation of a text, they are clearly vastly different skills.  Students who are good readers are not necessarily good listeners, and vice versa. In their Proficiency Guidelines, ACTFL clearly differentiates these two skills; don’t we need to do the same when evaluating our students using an IPA?

Designing Valid Interpretive Listening Assessments is Difficult

In my opinion, ACTFL has provided us with very little direction in assessing interpretive listening.  While we are advised to use the same IPA Interpretive template, I find that many of these tasks do not effectively assess listening comprehension. Consider the following:

Key Words. While students can quickly skim a written text to find key words, the same is not true of recorded texts.  Finding isolated key words requires listening to the video multiple times and attempting to isolate a single word in a sentence.  I find this task needlessly time-consuming, as I will be assessing literal comprehension in other tasks.  Furthermore, this task puts some students, especially those with certain learning disabilities, at a significant disadvantage.  Many of these students have excellent listening comprehension, but are not able to accurately transfer what they understand aurally into written form.

Main Idea. Although this task seems fairly straightforward, I question its validity in assessing comprehension for Novice and Intermediate learners. According to the ACTFL Proficiency Guidelines, Novice-level listeners are “largely dependent on factors other than the message itself” and Intermediate listeners “require a controlled listening environment where they hear what they may expect to hear.”  This means that all of my students will be highly dependent on the visual content of the videos I select to ascertain meaning.  Therefore, any main idea they provide will most likely be derived from what the students see rather than what they hear.  A possible solution might be for the teacher to provide a series of possible main ideas (all of which could be extrapolated from the visual information) and have the students choose the best one.  However, this task would certainly be unrealistic for our novice learners who are listening at word level.

Supporting Details.  I think this task on the IPA template is the most effective in providing us feedback regarding our students’ ability to understand a recorded text.  By providing a set of details which may be mentioned, we provide additional context to help our students understand the text and by requiring them to fill in information we are assessing their literal comprehension of what they hear.  In addition, this type of task can easily be adjusted to correspond to the proficiency level of the students. Providing information to support the detail, “Caillou’s age” for example, is a realistic expectation for a novice listener who is watching a cartoon.

Organizational Features. While I see little value in this task for interpretive reading, I see even less for listening.  As previously mentioned, even intermediate listeners need to be assessed using straightforward texts so that they can anticipate the information they will hear.  Having my students describe the organization of a recorded text would not provide additional information about their comprehension.

Guessing meaning from context. As much as I value this task on reading assessments, I do not find it to be a valid means of assessing aural comprehension.  The task requires the teacher to provide a sentence from the text; the students then guess at the meaning of an underlined word.  As soon as I provide my students with a written sentence, the task becomes an assessment of their ability to read this sentence, rather than understand it aurally.

Inferences. As with the main idea, I think Novice and Intermediate listeners will be overly dependent on visual cues to provide inferences.  While I believe students should be taught to use context to help provide meaning, I prefer to assess what they are actually able to interpret verbally. ACTFL does suggest providing multiple choice inferences in the IPA template, but again the teacher would have to provide choices whose plausibility could not be derived from visual information in order to isolate listening comprehension.

Author’s Perspective. While I regularly include author’s perspective items on my assessments for my AP students, I feel this is an unrealistic task for Novice and Intermediate Low listeners.  Students who are able to understand words, phrases, and sentence-length utterances will most likely be unable to identify an author’s perspective using only the verbal content of a video.

Cultural Connections. Authentic videos are one of the best tools we have for providing cultural content to our students.  The content provided by the images is more meaningful, memorable, and complete than any verbal information could be.  However, once again it is difficult to isolate the verbal content from the visual, creating a lack of validity for assessment purposes.


For now, I’m planning on using supporting detail or simple comprehension questions when formally assessing my students’ interpretive listening skills in order to ensure that I am testing what I intend to test. When practicing these interpretive skills, however, I plan on including some of the other tasks from the IPA template in order to fully exploit the wealth of information that is included in authentic videos.  I’m looking forward to hearing from you about how you assess your students on interpretive listening!

Checking for Comprehension: Providing feedback on interpretive tasks

A couple of weeks ago I shared the checklists I created to streamline the feedback process with the new Ohio World Language Scoring Guides. These checklists were designed to quickly inform students of their strengths and areas for improvement on Presentational Speaking/Writing and Interpersonal assessments. Although Ohio did not create their own Interpretive scoring guide, I decided to make up a quick checklist to accompany the ACTFL interpretive rubric so that I would have a complete set of these checklists to guide my feedback process on both formative and summative assessments.  Here’s a copy of the checklist: interpretive feedback.

In order to maintain consistency with the other checklists, I wrote the expectations in the middle column.  Most of the wording I used here came from the “Strong Comprehension” column on the ACTFL rubric, although I made a few slight changes, based on my reflections in this previous post.  In the column on the right, I have listed some suggestions that I will check for students who don’t meet the descriptors for Strong Comprehension.  On the left are comments designed to let the students know what their specific strengths were on the task.  As it is my intention that this feedback checklist be used in conjunction with the ACTFL rubric, I have also included a section at the bottom where I will check which level of comprehension the student demonstrated on the interpretive task being assessed.

Because I rely heavily on authentic resources and corresponding interpretive tasks in designing my units, it was very important for me to be able to provide timely feedback on these assignments/formative assessments.  It is my hope that these checklists will help me quickly give specific feedback that will enable the students to demonstrate greater comprehension on their summative assessments (IPA) for each unit.

Musings on assessing Interpretive Reading

Although I still have four days left in my current school year, my thoughts are already turning to the changes that I plan on making to improve my teaching for next year.  This great article in a current teen magazine (matin p. 1, matin p. 2, matin p. 3, matin p. 4) prompted me to write this interpretive task for my first French 2 IPA in the fall, as well as to reflect on what has worked and not worked in assessing my students’ proficiency in the interpretive mode.  I have learned a lot after writing 30+ IPA’s this year!

As my readers will have noticed, I create all of my interpretive assessments based on the template provided by ACTFL.  For the most part, I love using this template for assessing reading comprehension.  The tasks on the template encourage my students to use both top-down and bottom-up processes to comprehend what they are reading and in most cases the descriptors in the rubric enable the teacher to pinpoint the students’ level of comprehension based on their responses in each section.  I do, however, have a few suggestions about using this template in the classroom and modifying the rubric in a way that will streamline the assessment process and increase student success. Below, I’ve addressed each section of the template, as well as the corresponding section of the ACTFL rubric.

Key Word Recognition: In this section, the students are given a list of English words or phrases and asked to find the corresponding French words/phrases in the text.  Because I don’t give many vocabulary quizzes, this section helps me identify whether the students have retained the vocabulary they have used in the unit. In the lower levels I also add cognates to this section, so that the students will become accustomed to identifying these words in the text.  I also include some previously-learned words here, to assess the students’ ability to access this vocabulary.  This section of the ACTFL rubric works well, as it objectively assesses the students on how many of these words they are able to identify.  I have found it helpful to identify in advance what range of correct answers will be considered all, the majority, half and few, as these are the terms used to define the various levels of comprehension on the rubric.  In the IPA that I’ve included here there are 14 key words, so I’ve set the following range for each level: Accomplished (13-14 correct responses), Strong Comprehension (9-12 correct responses), Minimal Comprehension (6-8 correct responses) and Limited Comprehension (5 or fewer correct responses). Establishing these numerical references helps streamline the process of evaluating the students on this section of the interpretive task and ensures that I am assessing the students as objectively as possible.
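For teachers who keep their gradebooks in a spreadsheet or script, the score bands above can be sketched as a quick lookup. This is purely my own illustration (the function name and structure are hypothetical, not anything ACTFL provides), using the 14-key-word example from this IPA:

```python
# Hypothetical helper: map a raw key-word score (out of 14) to the
# comprehension levels described above. The bands (13-14, 9-12, 6-8,
# 5 or fewer) mirror "all / the majority / half / few" for this IPA;
# a different total would need different cutoffs.

def keyword_level(correct):
    """Return the rubric level for a key-word score out of 14."""
    if correct >= 13:      # "all" of the key words
        return "Accomplished Comprehension"
    elif correct >= 9:     # "the majority"
        return "Strong Comprehension"
    elif correct >= 6:     # "half"
        return "Minimal Comprehension"
    else:                  # "few"
        return "Limited Comprehension"

print(keyword_level(11))  # Strong Comprehension
```

Having the cutoffs written down once, wherever they live, is what keeps the scoring objective from one class period to the next.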

Main Idea: While this is a very important task, I have found that it is rather difficult to assess individual student responses, due to the variety of ways that the students interpret the directions in this section. In the sample IPA, I would consider the main idea of the text to be “to present the responses of a group of teenagers who were questioned about what gets them up in the morning.”  However, upon examining the rubric, it is clear that a more detailed main idea is required.  According to the rubric, to demonstrate an Accomplished level of comprehension, a student must identify “the complete main idea of the text;” a student who “misses some elements” will be evaluated as having Strong Comprehension and if she identifies “some part of the main idea,” she falls to the Minimal Comprehension category. Clearly, a strong response to the main idea task must include more than a simple statement.  In this example, a better main idea might be “to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”  Clearly, my students need more direction in identifying a main idea in order to demonstrate Accomplished Comprehension in this section.  I think a simple change to the directions might improve their performance here. Here’s my suggestion:

  • Main Idea(s). “Using information from the article, provide the main idea(s) of the article in English” (ACTFL wording) and provide a few details from the text to support the main idea.

An issue that I’ve had in assessing the students’ main ideas is that the descriptors require the teacher to have a clear, specific main idea in mind in order to assess how many “key parts” of the main idea the students have identified.  In my practice I have found that interpreting a text’s main idea is quite subjective.  I have found that students often identify an accurate main idea that may differ considerably from the one I had envisioned.  Therefore, I would suggest the following descriptors for this task.  The information in parentheses suggests what a main idea might look like for the sample text.

  • Accomplished: Identifies the main idea of text and provides a few pertinent details/examples to support this main idea. (“The main idea is to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”)
  • Strong: Identifies the main idea and provides at least one pertinent detail/example to support the main idea. (“The main idea is that a group of kids are telling when they get up in the morning and why.”)
  • Minimal: Provides a relevant main idea but does not support it with any pertinent details or examples. (“It’s about why these kids have to get up in the morning.”)
  • Limited: May provide details from the text but is unable to determine a main idea. (“It’s about what these kids like to do.” or “It’s about what these kids want to be when they grow up.”)

Supporting Details. In my opinion, this section is the meat of an interpretive assessment.  This is where I actually find out how much the students understood about the text.  As a result, I usually include more items here than ACTFL’s suggested five, with three distractors. While I like the general format of this task on the template, I quickly discovered when implementing it that I needed to make some slight changes. Namely, I had to eliminate the directive that the students identify where each supporting detail was located in the text and write the letter of the detail next to the corresponding information. In the real world, when I am grading 50+ IPA’s at a time, checking this task was entirely too cumbersome.  I photocopy the texts separately from the assessment, so that the students are not required to constantly flip through a packet to complete the IPA.  Therefore, if I were to assess this task, I would have to lay each student’s two packets next to each other and refer back and forth to their assessment and text to locate each letter.  I would then have to evaluate whether each letter was indeed placed close enough to the corresponding detail to indicate true comprehension.  I found this to be an extremely time-consuming, as well as subjective, task, which did not provide the information I needed to determine how well the student comprehended the details in the text.  As a result, I quickly eliminated this requirement in this section.  I have, however, retained the requirement that students check each detail to indicate whether it was included in the text.  This allows the student who “knows it’s right there” but “doesn’t know what it says” to demonstrate his limited comprehension.  The most important aspect of this section, however, is that the students provide information to support the details they have checked.
Because this is the only section of the template that actually requires the student to demonstrate literal, sentence-level comprehension of the text, I think it’s important to evaluate it very carefully.  In my opinion, the descriptors in the ACTFL rubric do not allow the teacher to adequately assess this section. Consider this description for Minimal Comprehension, “Identifies some supporting details in the text and may provide limited information from the text to explain these details. Or identifies the majority of supporting details but is unable to provide information from the text to explain these details.”  In my opinion, this descriptor creates a false dichotomy between a student’s ability to identify the existence/location of relevant information and his ability to actually comprehend the text.  According to this rubric, a student who is unable to provide any actual information from the text would be considered as meeting expectations. In a real-life example, if a language learner knows that the driver’s manual tells which side of the street to drive on, but does not know whether she is to drive on the left or the right, I would not say she has met expectations for a minimal comprehension of the manual.  Rather than reinforce this dichotomy, I would prefer to delineate the levels of comprehension as:

  • Accomplished: Identifies all supporting details in the text and accurately provides information from the text to explain these details. (same as ACTFL)
  • Strong: Identifies most of the supporting details and provides pertinent information from the text to explain these details.
  • Minimal: Identifies most of the supporting details and provides pertinent information from the text to explain many of them.
  • Limited: Identifies most of the supporting details and provides pertinent information from the text to explain some of them.

As you can see, I expect the student to identify all or most of the supporting details at all levels of comprehension.  Since my students are identifying details by checking a blank, and 70-80% of the blanks will be checked (20-30% are distractors), a student could randomly check all of the blanks and meet the descriptor for “identifying most of the details.”  Therefore, this part of the descriptor is less relevant than the amount of pertinent information from the text that is provided to explain the details.

Organizational Features: As I mentioned in a previous post about designing IPA’s, I understand the role that an understanding of a text’s organization has in a student’s comprehension.  However, in practice I often omit this section. The organization of the texts that I use is so evident that asking the students to identify it does not provide significant information about their comprehension.  If, however, I were to use a text that presented an unexpected organizational structure, I think this task would become relevant and I would include it and use the ACTFL rubric to assess the students.

Guessing Meaning from Context.  I love this section!  I think that a student’s responses here could tell me more about their comprehension than any other section of the interpretive task.  However, in practice this is not always the case.  In general, my students tend to perform below my expectations in this section. It may be that I am selecting passages that are above their level of comprehension or it may be that my students don’t take the time to locate the actual sentence in the article.  As a result, the less motivated students simply fill in a cognate, rather than a word that might make sense in the sentence.  Regardless, I think the ACTFL rubric works well here.  I do, however, usually include about five items here, rather than the suggested three.  This allows my students a greater possibility of success as they can score a “Minimal Comprehension” for inferring a plausible meaning for at least three (“most”) items.

Inference. I have found that this section also provides important information about my students’ overall comprehension of the text.  The greatest challenge is encouraging them to include enough textual support from the text to support their inferences.  A slight change I would suggest to the rubric would be to change the wording, which currently assesses students according to how many correct or plausible inferences they make.  Since the template suggests only two questions, it seems illogical that a student who makes a “few plausible inferences” would be assessed as having “Minimal Comprehension.”  In actual practice, I have assessed students more on how well they support their inferences than the number of inferences they have made.  If I were designing a rubric for this section, I would suggest the following descriptors here:

  • Accomplished Comprehension: “Infers and interprets the text’s meaning in a highly plausible manner” (ACTFL wording) and supports these inferences with detailed, pertinent information from the text.
  • Strong Comprehension: “Infers and interprets the text’s meaning in a partially complete and/or partially plausible manner” (ACTFL wording) and adequately supports these inferences with pertinent information from the text.
  • Minimal Comprehension: Makes a plausible inference regarding the text’s meaning but provides inadequate information from the text to support this inference.
  • Limited Comprehension: Makes a plausible inference regarding the text’s meaning but does not support the inference with pertinent information from the text.

Author’s Perspective. Although not all text types lend themselves to this task, I include it whenever possible.  I do, however, deviate somewhat from the suggested perspectives provided by ACTFL.  Rather than general perspectives, such as scientific, moral, factual, etc., I have the students choose between three rather specific possible perspectives.  As with identifying inferences, I believe the most important aspect of the students’ responses on this task is the textual evidence that the students provide to support their choice.  In my opinion, the ACTFL rubric for this section provides good descriptors for determining a student’s comprehension based on the textual support they provide.

Comparing Cultural Perspectives. Although I find these items difficult to write, I think this section is imperative.  One of the most important reasons for using authentic resources is for the cultural information that they contain.  This task allows us to direct the students’ attention to the cultural information provided by the text, as well as to assess how well they are able to interpret this information to acquire new understandings of the target culture.  The first challenge in writing these items is that the teacher must phrase the question in a way that enables the student to connect the cultural product/practice to a cultural perspective.  This is especially difficult for novice learners who may have very little background knowledge about the target culture(s).  Because identifying a cultural perspective is such a sophisticated task, I think it’s important to provide a fair amount of guidance in these items. While the ACTFL template provides some sample questions, it’s important to realize that some of these questions do not allow the students to adequately identify a cultural perspective. In addition, many of the suggested questions assume that the students share a common culture and background knowledge.  I have made many mistaken assumptions when asking my students to compare a French practice to an American one.  Many of my students have not traveled outside of our community, have not had many cultural experiences, and lack basic knowledge about American history.  Therefore, they do not have the background knowledge about U.S. culture to make adequate comparisons. Furthermore, a significant percentage of my students were born outside of the U.S., so any question requiring them to demonstrate knowledge of American culture is unfair.  In the future, when writing a comparison question I will invite my students to compare the French practice to “a practice in a culture they are familiar with.”

My concern with the ACTFL rubric for this section is that a student is assessed mostly on his ability to connect the product or practice to a perspective.  While I think this high-level thinking skill is important, I have not found it to be closely related to the students’ comprehension of the text.  I have students who may wholly comprehend the text, but lack the cognitive sophistication to use what they read to make a connection to perspectives, placing them in the Limited Comprehension category.  In addition, many valuable texts simply don’t include the types of information needed to make these connections. Perhaps the following descriptors might be more realistic?

  • Accomplished Comprehension: Accurately identifies cultural products or practices from the text and uses them to infer a plausible cultural perspective.
  • Strong Comprehension: Accurately identifies at least one product or practice from the text and uses it to infer a plausible cultural perspective.
  • Minimal Comprehension: Accurately identifies at least one product or practice from the text but does not infer a plausible cultural perspective.
  • Limited Comprehension: May identify a superficial product or practice that is not directly related to the text but is unable to infer a plausible cultural perspective.

Please let me know if you’re using the ACTFL template and rubrics to assess Interpretive Communication and how it’s working for you.  I have lots to learn!


It’s all about the feedback: Checklists to accompany Ohio’s Scoring Guidelines for World Languages

The arrival of the new Ohio Scoring Guides for World Languages, as well as an excellent post by Amy Lenord, served as an important reminder that I need to improve the feedback that I give my students.  Although I have used a checklist for feedback in the past, I haven’t been completely consistent in using it as of late.  Furthermore, my previous checklist was not aligned to these new scoring guidelines.  It was definitely time to do some updating!

Fortunately, the Ohio Scoring Guides for Performance Evaluation provide a great framework for meaningful feedback.  Each of the rubrics includes an additional page that lists the criteria, as well as blank spaces for self or teacher feedback. Unfortunately, I know that my written comments do not always meet the students’ needs, especially on speaking assessments. The notes that I do jot down while listening to their production are most likely incomprehensible to my students.  My hurried handwriting is illegible, and it is difficult for my students to see the connection between my comments and their success on the performance. In order to address these issues, I prepared a series of checklists that I will incorporate when providing feedback using these rubrics.  For each set of criteria on the ODE rubrics I have added specific comments to target the student’s strengths, as well as a list of comments to identify suggestions for improvement.  By providing these specific comments, I hope to provide legible, focused feedback to my students on both formative and summative performance tasks.  In addition, I envision having the students do their own goal-setting by highlighting specific areas of the rubric that they would like to focus their attention on.

When developing my comments, I considered both the criteria, and the comments that I find myself using over and over again.  As a French teacher, I specifically addressed common errors made by English speakers, especially in terms of pronunciation and common structures.  In addition, I have included an “Other” line, for strengths/errors that are not specifically addressed on the checklist.  It was important to me that my checklist fit on one sheet of paper for ease of use, so I tried to include only those errors that are the most often made by my students.

It is my hope that these checklists will help my students identify both their strengths and areas for improvement and streamline their progress toward higher levels of proficiency. Here are the checklists; let me know what you think! Feedback checklists

Paris: A Novice Unit and IPA

Based on social media posts from my virtual colleagues, it seems that many of us are ending the year with a unit on Paris for our French 1 students.  In my case, I found that this unit was a great way to bring in some vocabulary for places in a city and transportation.  In addition, the students are gaining important knowledge about a city that many of them will have an opportunity to visit at some point in their lives.

Because I think it’s very important for the students to become familiar with various attractions in Paris, I began this unit with a series of learning stations designed to introduce the students to the tourist attractions they might visit on a trip to Paris.  Although these stations are based on authentic resources that I have accumulated over the years (and can’t realistically share here), I’ve included a quick explanation of each station.

Paris Monument Cards: The students read a series of cards, each of which had a photograph of a monument as well as historical information.  These cards were originally attached, forming a fan, but I separated them so that they could be read individually.  I bought the fan in Paris, but it may be available elsewhere.  If anyone else has this resource, here are the questions I wrote: Paris Monument Fan (3/30/16: Here's a link to purchase the fan from

Paris Listening Station: The students watched two authentic videos and responded to questions in English.  Here’s the document I created with the links and comprehension questions: Listening Station

Paris Children’s Books: I have two children’s books that students could read at this station.  They worked with a partner on the comprehension activity, so that four students in the group were able to read a book in its original form, without relying on photocopies.

Paris ID Station: At this station students completed a series of teacher-created activities designed to help them learn to identify each monument visually.  I included a Go Fish game that I made with photographs of the monuments, a matching activity in which they had to identify unlabeled photographs by comparing them to Paris monument postcards, a lotto game in which they had to fill a board by drawing pictures of monuments from a pile, an activity in which they had to identify monuments based on photographs I had taken at unusual angles, etc.

Paris Brochures: Here, the students read brochures that I had brought back from various monuments and answered comprehension questions. These brochures were a great way to reinforce vocabulary for days of the week, months of the year, and telling time!

After the students had completed all 5 stations, we spent a few days reviewing the monuments using a Google Presentation, which I would project and ask questions about. To further reinforce this information, I had the students create Bingo boards on a sheet of paper.  For this very low-tech activity, they made a grid of 5 x 5 squares and wrote the name of an attraction in each one.  I would then give a clue (either a picture or a fact about the monument) and they placed a chip on the square with the appropriate monument.  Note: There are more than 25 attractions, so some students won't have every monument I describe, just as with regular Bingo.  Due to the simple nature of the language used, the students were able to understand my questions and clues for these activities with very little difficulty.

As an additional resource, I created this Google Presentation with photographs only.  Because I will eventually hold the students responsible for identifying the monuments visually, I wanted to provide an easy way to practice identification. (Note: There are currently a few extra attractions on this presentation. When time permits, I'll update it so that the two presentations have exactly the same attractions.) The students have access to both presentations, so that they can further review the facts and images from home.

After the students are able to identify the monuments and know some factual information about each one, they will begin preparing for their IPA.  This document contains a few structures and vocabulary items they'll need to know, as well as a couple of quick activities to practice the skills they will use on the IPA (Paris IPA Practice). Namely, they will list activities they would like to do in Paris, practice discussing these activities with a partner, and then create an itinerary for a trip.

After these practice activities, they should be ready for their IPA (French 1 Paris IPA), which contains the following tasks.

Interpretive Listening: The students will watch a video about 10 Paris attractions and complete an interpretive task.

Interpretive Reading: The students will read several pages (paris 1, paris 2, paris 3, paris 4, paris 5, paris 6, paris 7, paris) from a children's book about Paris and complete comprehension activities.

Interpersonal Communication: The students will discuss possible activities to do in Paris and co-create a simple itinerary.

Presentational Writing: The students will write a letter to a family member in which they describe their itinerary and ask for a financial contribution toward the trip.

This IPA, along with a simple assessment on Paris monuments, will be the final exam for these students.  I'm so proud of what these students have accomplished this year and am looking forward to following their progress in the years to come!




Unpacking the new Ohio Scoring Guides for World Languages


Although it may seem unfathomable to some of the younger teachers out there, I still remember the first time I saw the word "rubric" in the title of a session at a foreign language conference years ago. At the time, I had no idea what a rubric was or how it related to assessing language learners. Needless to say, that session forever changed the way that I evaluated student learning in my classroom.  I was so excited about this new way of assessing students that I started creating rubrics for everything.  At first I preferred analytic rubrics, since assigning separate scores to each aspect of a written or oral product just seemed more objective.  However, I eventually realized that the quality of a performance could not necessarily be calculated by adding up the separate parts of a whole, so I switched to a holistic rubric that I tweaked periodically over the years.  I realized this year, however, that I needed to do some major revising to reflect my current proficiency-based methodology. The descriptors I was using didn't adequately reflect the elements of proficiency as described by ACTFL. Since my own performance is now being evaluated according to my students' proficiency, it is important that I am methodical in providing feedback to my students that is clearly related to their progress in proficiency.  Fortunately for me, the state of Ohio has recently published a series of scoring guidelines that will help me do just that!

You can find the rubrics in their entirety here and my comments below.


  1. Performance Evaluation. These are the rubrics designed to be used with end-of-unit assessments. There are three separate rubrics: Presentational Speaking, Presentational Writing, and Interpersonal Communication. I think that these scoring guidelines will be an invaluable asset in my assessment practices for the following reasons:
  • The heading of the rubric provides a means for the teacher to indicate the targeted performance level of the assessment. As a Level 1-5 teacher, I may find it helpful to have one set of guidelines to use with all students, rather than a series of level-specific rubrics.  The wording in the descriptors allows the teacher to adjust for the unit content and proficiency level with phrases such as "appropriate vocabulary," "practiced structures," "communicative goal," and "targeted level." The Interpersonal Communication rubric even includes specific descriptors for both Novice and Intermediate interaction.
  • Each rubric includes a page designed for student self-assessment and/or teacher feedback on each section of the rubric. The overall descriptors are given for each criterion, along with separate columns for strengths and areas for improvement.  I think this format will allow me to provide specific, targeted feedback to my students.  They will know exactly what they need to do in order to progress in their performance. As a result, I anticipate using this page alone to provide feedback on formative assessments.
  • The wording in these rubrics is well-suited to Integrated Performance Assessments. All three guidelines include a descriptor about whether the student's response was supported with an authentic resource (or detail).
  • These rubrics convey the vital role that cultural content must play in all performances with a criterion devoted entirely to "Cultural Competence." The presence of this descriptor will serve as an important reminder to the teacher, who must include a cultural component when developing assessments, and to the student, who must demonstrate that this knowledge has been attained.
  2. Proficiency Evaluation. These are the rubrics designed to assess the students' overall proficiency level in Presentational Speaking, Presentational Writing, and Interpersonal Communication. Therefore, a separate rubric is included for each proficiency level that is targeted in a secondary program (Novice Mid through Intermediate Mid). The design of these rubrics will enable me to clearly identify my students' proficiency for the following reasons:
  • Each rubric is aligned to the ACTFL descriptors for the targeted proficiency level. I will no longer have to page through the ACTFL documents to find the descriptors that I need for each level.
  • Each rubric also contains Interculturality descriptors, based on the NCSSFL Interculturality Can-Do statements.
  • Each rubric contains descriptors for three sub-levels of the targeted proficiency level. This is vital for those of us who are required to measure growth over less than a year’s time.  In my district, for example, our proficiency post-test must be given in March, before many students are able to demonstrate progress to the next full proficiency level.
  • Although my current understanding is that proficiency can only be measured by unrehearsed tasks that are not related to a recent unit of study, teachers who use proficiency-based grading might use these rubrics throughout the academic year.

Because Ohio has deferred to the ACTFL rubrics for assessing Interpretive Reading and Listening, I'll look forward to addressing those guidelines in a future post.  In the meantime, I'd love to hear others' opinions of these new rubrics.





Implementing a Passion Project with Intermediate Learners

For the first 90% of the school year, I planned units that I thought would be relevant and interesting to most of my students. I was thrilled with the progress they made in their proficiency, and for the most part they remained engaged throughout the year.  As a result, I've decided to try something new with my Intermediate learners. For the last two weeks of the semester (and as their final exam grade), I'm going to put each of my Intermediate learners in charge of designing his or her own curriculum. Each of my French 3, 4, and 5 students will research a topic of their choice and then present what they have learned to their classmates.  Their presentations, as well as the journal entries they will write to document their research, will determine their final exam grade.

Never having implemented this type of project, I did some quick research on Genius Hour and Passion Project ideas.  There are so many great ideas out there!  Based on what I found out, I’ve developed these guidelines: passionprojectdirections

The students will first complete a Google Doc with general questions about their topic, how it relates to a Francophone culture, their big idea question, some beginning research questions, and how they will share what they learn. (Here's a Word version of the document that I made: googledoc) Afterward, they will have 5 class periods to research their topic in class.  I have encouraged each student to consider bringing in a device for this research, and I have 8 classroom computers for those without a smartphone/tablet/laptop, etc. They will only be permitted to read or listen to material about their topic in French while in class.  During the last 10-15 minutes of each class period (or at home), they will complete a blog entry (on the same Google Doc).  As indicated on the project guide, I will randomly select one or more blog entries to grade for each student.  During the second week, the students will create the visual aid for their presentation, create index cards, and practice their presentations.  Finally, they will present their projects to the class.  While I think many of the students will choose a Powerpoint/Google Presentation, I'd also accept videos or other formats that they suggest.

Although we won't begin researching until this week, most of my students are excited about being able to study "anything they want."  While I'm thrilled that they're engaged by this project, I'm also more than a little nervous about putting them in the driver's seat.  Like many teachers, I might have just a tiny bit of a control issue!  As a result, I've decided to assign a daily participation/interpersonal speaking grade. Although I don't normally grade participation, I wanted to make sure to have some documentation of their work on this project. In order to justify this grade to the "proficiency voice inside my head," I added a descriptor about discussing their research with Madame.  As I circulate among the students while they work, I plan on conducting quick interviews to gauge their progress, as well as hold them accountable for staying on task. Even though I'm a little nervous, I can't wait to see what these students come up with!

If you’ve implemented a Passion Project or Genius Hour, I’d love to hear your words of wisdom!