Category Archives: Assessment

Using Rubrics to Assess Interpretive Reading

Last night’s #langchat was hopping!  One of the most lively discussions had to do with the topic of using rubrics to assess students’ communication in the interpretive mode.  So, at the request of @MmeBlouwolff, I’m sharing a few thoughts about how I use rubrics to assess reading in my classes.

Like many of my colleagues, I did not understand how I could use a rubric to assess reading comprehension when I first began using IPAs.  It was not until I saw the ACTFL Interpretive template that I realized I didn’t have to assess comprehension with discrete-point measures.  After adopting the question types suggested by this guide, the switch to a more holistic grading system made perfect sense. A student’s comprehension is not adequately assessed by the number of questions they answered correctly, any more than their presentational writing can be evaluated by counting spelling errors. Furthermore, our current understanding of the interpretive mode of communication does not limit us to evaluating our students’ literal comprehension of a text.  Instead, we are encouraged to assess inferential strategies such as guessing meaning from context, making inferences, identifying the author’s perspective and making cultural connections.  Using a rubric to measure student growth in these skills allows me to show my students what they can do, as well as how they can improve their interpretive strategies.

Here’s a look at a sample of student work and how I used a rubric to assess both the student’s literal and interpretive comprehension. Please note that although I relied heavily on ACTFL’s Interpretive IPA Rubric, I changed the format to make it more similar to the Ohio proficiency rubrics that I use for the interpersonal and presentational modes.  In addition, I modified some of the wording to reflect my own practices and added a numerical score to each column.

As the completed rubric shows, I ask my students to assess themselves by circling the box which best reflects their own understanding of their performance on each section.  In addition to providing an opportunity for self-assessment, this step ensures that the students have a clear understanding of the expectations for the assessment and encourages goal-setting for future performances. This process also provides me with important information about the students’ metacognition. In this case, the student seemed to feel very confident about his responses to the Guessing Meaning from Context section, in spite of the fact that he guessed only one word correctly.

After collecting the assessments and student-marked rubrics, it’s my turn to assess the students.  The use of a rubric streamlines this process considerably, as I can quickly ascertain where each student’s performance falls without the laborious task of tallying each error.  I simply check the appropriate box on the rubric, and then project a key when I return the papers so that each student receives specific feedback on the correct responses for each item.  

When it comes to determining a score on the assessment, as a general rule I assign the score for which the student has met all, or nearly all, of the descriptors. I do consider, however, how the class performs as a whole when assigning numerical grades.  I am frequently unrealistic in my expectations for the Guessing Meaning from Context section, for example, and as a result I do not weigh this category very heavily when assigning a final score.  In the case of this student’s work, I assigned a grade of 9.5/10 as s/he met many of the descriptors for Accomplished and demonstrated greater comprehension than the majority of his/her classmates.

While the use of rubrics for interpretive communication might not work for everyone, I have found that holistic grading provides better opportunities for self-assessment, encourages students by providing feedback on what they can do and saves me time on grading.  

As always, I look forward to your feedback, questions and suggestions!


Providing Direction: A Path to Proficiency Action Plan

As I shared in a recent post, one of my goals for this year is to use proficiency-based rubrics to assess my students’ performance.  I feel that this type of rubric will provide my students with more targeted feedback on where they are on their path to proficiency and what they need to do to make progress on this path.  As I assessed my first stack of papers using these rubrics, I realized that I needed to be able to provide my students with very specific instructions on exactly how they could demonstrate increasing levels of proficiency in their writing. However, first I needed to deepen my own understanding of the terms used in the proficiency descriptors. Although I am embarrassed to admit it, I didn’t know the exact definitions of a “connected sentence,” a “complex sentence,” and a “cohesive device.” Fortunately, ACTFL’s glossary provided most of the information I needed and Google did the rest.  As I continued to study the descriptors used for each proficiency level, I realized that I also needed to reflect on grammatical structures in a more intentional way for the following reasons:

  1. The proficiency descriptors, as well as the rubrics I’ve chosen, repeatedly use the term “practiced structures.”  As a result, I needed to decide exactly which structures I would “practice” (by providing lots of input, pop-up grammar lessons, and communicative contexts) at each level.
  2. Although the descriptors do not mention specific grammatical structures, certain structures are inherent in the process of progressing through the levels.  The difference between “making a reference” to the past and “narrating” in the past seems to require the ability to use the imparfait, passé composé and plus-que-parfait as well as past infinitives for additional cohesiveness. Therefore, I need to expose my students to these structures in a meaningful way.
  3. I needed to provide my students with the language they need to work on these structures independently.  As much as I have eschewed grammatical terminology for the past couple of years, my students need to have a basic vocabulary of grammatical terms if they are to individualize their learning as it relates to proficiency.  

As a result of this research and reflection, I designed this Path to Proficiency Action Plan document for my students.  As the directions indicate, I will give this document to my students throughout the year to help them set goals for their own progress toward proficiency.  Based on the feedback I give the students when assessing their writing, they will create an action plan for progressing to the next level.  Depending on their own individual performance, they may focus on increasing the detail of their responses, creating more sophisticated sentence types, improving their organization, or becoming more accurate in their use of various structures.  In addition, I have provided links to exercises on lepointdufle for each grammatical structure.  While I do not typically use this type of discrete grammar practice in my teaching, I think it is possible that these exercises might benefit some students.  As time permits, I would like to provide my students with a more specific list of activities, as I think some of these exercises are more helpful than others.  It is my hope that the goal-setting my students will do via this document will help them increase their proficiency in writing, as well as take more ownership of their own learning.  In future posts, I hope to share similar action plans for other language skills.

As always, your feedback is appreciated!

Taking the plunge into proficiency-based grading

A couple of years ago when I decided to drastically change what I taught (cultural content instead of vocabulary and structures) and how I taught it (by using authentic resources instead of textbook exercises), I took a close look at my assessment practices.  While I embraced the concept of IPAs, I struggled a bit with how to assign a grade to these assessments.  In the beginning I used my own holistic rubrics and later adopted the Ohio Department of Education’s Performance Scoring Guides for World Languages. Being a rule follower, I chose the Performance rubrics because that’s what ODE’s website said teachers should use for IPAs.  Although I knew that some teachers were linking students’ grades to their proficiency level, this practice didn’t fit with my understanding of proficiency, which I’ve been taught can only be measured by an OPI.  Because my classroom assessments were clearly performances (measurements of what my students had learned as a result of my instruction), I used the Performance rubrics.

While these are great rubrics, as I continue to adapt my instruction I find that I will need to make some changes to my assessment practices in order to meet my goals for this year.  Specifically, I want my students to be more involved in their own learning. Rather than passively waiting for me to assign a numerical score to all of their performances, I want my students to understand their proficiency level, set their own proficiency goals, understand how to meet those goals, and self-assess their progress in reaching these goals. Because the descriptors in ODE’s Performance Rubrics do not reflect different proficiency levels (there is only one scoring guide for each skill/mode), my students were not able to determine their current level of proficiency based on my completing this rubric.  Furthermore, they were not able to determine exactly what they needed to do to improve their proficiency (or grade). 
In the absence of clear descriptors for each level of proficiency, the students were faced with trying to hit a moving target.  As my performance assessments required increasingly greater levels of proficiency, a similar score on a string of assessments did not allow the students to see the progress that they were making.

In order to remedy this situation, I’ve decided to use ODE’s Proficiency Scoring Guides this year. Based on my current understanding of the common language of world language educators, I will be able to describe my students’ performances as exhibiting characteristics of a proficiency level, without implying that I am able to assign a specific proficiency level to an individual student.  But most importantly, because these rubrics contain separate descriptors for each proficiency level, they will enable my students to define their performances as exemplifying a targeted proficiency level.  Not only will my feedback allow them to identify their current level of performance, they will know exactly what they need to do to achieve the next level.  I especially love that these rubrics include three levels for each proficiency level (NH1, NH2, NH3, for example).  As a result, I hope to be able to measure each increment of progress in my students’ path to proficiency.

For many of us, of course, it is not enough to identify a student’s proficiency level; we must also assign a numerical (or letter) grade for each performance.  After reading many outstanding teachers’ methodologies for doing so, I’ve determined the following guidelines for implementing my proficiency-based grading system.

  1. Students who reach the ACTFL proficiency target will earn an 85% (B).  Because it seems unfair and unrealistic to expect students to reach an end-of-course target first semester, I have (somewhat arbitrarily) determined that the first-semester goal will be two sublevels below the end-of-course target.  For example, since Novice High 2 is the targeted proficiency level for the end of French 2, Novice Mid 3 is the targeted level for first semester.  This table shows what score a student will earn for each proficiency level. (The numerical scores reflect my preferred maximum score of 10 rather than 100 [a percentage].)
  2. In order to more easily implement this system, I have prepared a first-semester and a second-semester rubric for each course. As indicated on the rubrics, the language is taken directly from the ODE scoring guides for each skill/mode. I simply chose which 5 columns I felt would be the most likely to cover the range of levels for a particular course and typed them on a single page, with an additional column for comments. I also took the liberty of creating a separate rubric for each Presentational skill and removed the comments about pronunciation from the Writing rubric in order to streamline the feedback process. I can easily use a lower-level rubric (changing the scores accordingly) for those students who are unable to meet the lowest level on the rubric for their course.  Note: I have not included a 2nd-semester rubric for French 4, as the ODE rubrics stop at Intermediate Mid 3. I’ll use my own judgment in assigning a score for any students who exceed this level.
  3. Because ODE does not have an Interpretive rubric (they provide only a link to the ACTFL IPA Interpretive Rubric), I will use the ACTFL rubric for interpretive reading tasks at each level. Because it is the task, rather than the level of performance, that demonstrates a student’s proficiency in interpretive assessments, the same rubric is appropriate for all levels. I will assign the following numerical scores to each level on the rubric: Limited Comprehension (7), Minimal Comprehension (8), Strong Comprehension (9) and Accomplished Comprehension (10).  A student who does not meet the descriptors for Limited Comprehension will earn a 6.
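If, like me, you keep your gradebook in a spreadsheet or a small script, this level-to-score mapping is simple to automate. Here’s a minimal Python sketch; the function name and structure are my own invention for illustration, and only the scoring rule itself comes from the guidelines above.

```python
# Score mapping for the four ACTFL IPA Interpretive Rubric levels,
# using the 10-point values described above.
RUBRIC_SCORES = {
    "Accomplished Comprehension": 10,
    "Strong Comprehension": 9,
    "Minimal Comprehension": 8,
    "Limited Comprehension": 7,
}

def interpretive_score(level):
    """Return the numerical score (out of 10) for a rubric level.

    A student who does not meet the descriptors for Limited
    Comprehension earns a 6, so that is the fallback value.
    """
    return RUBRIC_SCORES.get(level, 6)

print(interpretive_score("Strong Comprehension"))  # 9
```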

I’m sure that I’ll make modifications to these guidelines as I implement proficiency-based grading, so if you’re assessing according to proficiency, I’d love to know how it’s working in your classes!

Resources for Planning and a Food Unit for Intermediate Low French Students


As regular readers may have noticed, I ended up taking a hiatus from blogging this spring.  It all started when I welcomed an awesome student teacher to my classroom who was so skilled in proficiency-based instructional methods that I didn’t need to create any new lessons for several weeks. Then I decided to relocate closer to family, creating a whirlwind of life changes which included finding a new position, selling a house, buying a new house, moving and setting up a new household.  Needless to say, I had to put aside my blogging for a few months!  However, now that I’m settled into my new home I’m anxious to share some of the materials I’ve been working on for my new students.

Creating units for students that I’ve never met, in a school with a different curriculum and culture than the one I left has been a bit of a challenge.  Although I don’t know much about the proficiency level or personal interests of my new students, I can’t wait until August to begin preparing instructional materials for my new kiddos.

Besides, reading Chapter 1 of The Keys to Planning for Learning for #langbook has me thinking about all of the ways I can improve my planning and I’m excited to start implementing some of the ideas that are reinforced in this book.

I decided to start with my French 3 curriculum, since I will have three different French classes this year–half of my school day.  In addition to reading The Keys to Planning for Learning, I completed the self-assessment survey provided by the TELL Project before developing this unit.  As a result of this self-assessment, I realized I needed to be more intentional in developing daily objectives for my lessons. Although I had previously created Can Do Statements for each unit, I hadn’t provided my students with a clear objective for each lesson.  I have therefore included daily performance objectives in addition to the Essential Questions and Can Do Statements for this unit.  

Because the first theme in my new French 3 curriculum, “Nourriture” (“Food”), is so broad, I have broken it down into three topics–breakfast, school lunch, and Francophone specialties. This Google Slides presentation contains the unit plan as well as links to the materials I’ve created/borrowed for each of the 19 lessons in the unit. I am hoping that this format will improve transitions, encourage the students to work more independently and allow absent students to complete work from home. It will also facilitate sharing this work, as I can continue to make edits/correct errors without having to re-upload Word documents to this blog. While I’ve previously shared some of these materials, many others are new, including several Edpuzzle video quizzes that will serve as formative assessments in the 1:1 learning environment of my new school.  

While I have not included assessments in the presentation, you can click here for the breakfast IPA and here for the school lunch IPA. As the agenda shows, the students will prepare a presentation, rather than a full IPA as a summative assessment on the Francophone specialty topic.


As always, I welcome feedback on these materials!




4 Suggestions for Assessing Interpersonal Communication with Novice Learners

As I’ve evolved in my teaching practice, I’ve made significant changes to how I assess oral communication. Here are a few suggestions that have helped me improve my assessment of my Novice students’ interpersonal communication, resulting in increased proficiency among these early language learners.

Suggestion #1: Just do it! It seems that many of my colleagues are hesitant to assess their Novice students’ interpersonal communication. It is their belief that because these students are entirely dependent on memorized language, no true interpersonal communication can occur.  Fortunately, we’ve agreed to disagree on this point! In my opinion, a Novice speaker’s reliance on practiced or memorized language does not preclude her from true communication on an unrehearsed task.  While the topics that these learners discuss will be limited to those which have been practiced, they will still be able to demonstrate interpersonal communication when given an appropriate communicative task.  In fact, the NCSSFL-ACTFL Can-Do Statements list the following for Novice Mid Interpersonal Communication:

  • I can greet and leave people in a polite way.
  • I can introduce myself and others.
  • I can answer a variety of simple questions.
  • I can make some simple statements in a conversation.
  • I can ask some simple questions.
  • I can communicate basic information about myself and people I know.
  • I can communicate basic information about my everyday life.

Clearly we can expect students to be able to demonstrate an ability to communicate about such basic topics as likes/dislikes, leisure activities, family members, school subjects and supplies, eating habits, etc. While appropriate questions, answers, and rejoinders may be practiced in advance with Novice learners, we can ensure that they are demonstrating actual communication in this mode by creating prompts that prevent memorization of a script.  For example, by pairing a student with a classmate with whom he hasn’t yet practiced, we ensure that he is unable to memorize the exact questions he will ask and the answers or rejoinders that he will give.  Consider a unit on likes and dislikes, often one of the first topics in a Level 1 curriculum. We might give an assessment prompt such as, “You are choosing a roommate for choir camp and you want to make sure you end up with someone who likes the same things you do. Discuss your likes and dislikes in order to find out what you have in common.”  While these students will have practiced expressing their preferences (“I like…”), asking questions (“Do you like…?”), and replying to a partner (“Me, too.” “Not me.” “Me neither.”), they will not know in advance which questions they will be asked or which responses their partner will provide.  As a result, each student must be prepared to ask a variety of questions in order to avoid repeating those asked by his partner. Likewise, working with a new partner will require a student to comprehend his interlocutor’s response (rather than simply recalling a script) in order to choose the appropriate rejoinder. (“Me neither” is not an appropriate response to a partner who has stated that she likes something, for example.) Furthermore, even Novice learners can make some adjustments in order to clarify meaning for an interlocutor who has demonstrated a lack of comprehension.  
Requests for repetition are often all that’s needed in order to understand a message, whether because the original speaker is able to correct an error that impeded comprehension or because the repetition enables the interlocutor to establish additional meaning. Clearly, even a task as simple as this one does require the negotiation of meaning which typifies interpersonal communication.

Suggestion #2:  Stay out of it! I rely almost entirely on student-to-student interaction for my interpersonal assessments, even for Novice learners, for the following reasons:

  1. In my experience, allowing students to talk to each other greatly increases the quality of the interaction. I have followed the suggestion of Colleen Lee (@CoLeeSensei), who said in a #langchat discussion, “I teach my [students] that your partner not understanding you is YOUR responsibility to clear up!” Nothing is more magical than hearing a level 1 student encourage a classmate by suggesting possible language chunks that would provide the necessary clarification to allow communication to occur. In an early unit this year, for example, one student negotiated meaning by asking C’est ta mère ou ta sœur? (“Is that your mother or your sister?”) when her partner used the incorrect vocabulary word when describing his family photos. This clarification gave valuable feedback to the speaker who had made the error, as well as allowed his interlocutor to stretch beyond the rehearsed statements and questions she had anticipated using during this assessment. As a result of this negotiation, both students are likely to make progress toward proficiency that wouldn’t have been likely had the conversation occurred between a teacher and a student.
  2. I have found that being able to talk to a peer, rather than the teacher, greatly reduces the students’ affective filter. A conversation between a teacher and student, regardless of the prompt, is a conversation between an expert/evaluator and a student, which creates a certain level of anxiety in many learners. When a student’s focus is on communicating with a peer, however, she is often able to disregard the presence of the teacher (who is most likely busily taking notes in order to provide feedback to the speakers). As a result of this decrease in anxiety, the quality of the communication is considerably greater than it would have been if the student was speaking to the teacher.
  3. In my opinion, the authenticity of the communication is significantly reduced when one speaker’s primary motivation is assessment, rather than comprehension. In a teacher-student interpersonal assessment, the student’s goal is most likely to avoid errors, and the teacher’s is to note them as part of the feedback process. As a result, the communicative content of the conversation often loses its significance.
  4. Lastly, assessing pairs of students saves valuable class time. While I can generally assess all 30 of my students in one class period when listening to two speakers at a time, I would not be able to do so if I were assessing each one individually.

Suggestion #3: Use a great rubric. I love the one from the Ohio Department of Education because it includes an interculturality component, as well as great descriptors related to the quality of the interaction.  The wording in the comprehensibility section makes it clear that some errors are to be expected, even for those students rated as Strong. Knowing that the content and quality of the interaction are as important as accuracy encourages students to take more risks during interpersonal tasks. This risk-taking allows the learners to demonstrate greater proficiency than they would if their only goal was to avoid grammatical errors.

Suggestion #4: Don’t forget to incorporate culture. As I discussed in an earlier post, I am experimenting with using role plays in order to incorporate more culture into my novice interpersonal communication assessments.  My previous prompts, in which I asked students to discuss their own personal preferences and experiences, often failed to produce adequate evidence of interculturality.  On the other hand, I was pleased with the results I had during a recent holiday unit when I assigned a role to each member of the conversation pair. In this assessment, I asked one student to play the role of someone who had traveled to France for the holidays and the other to play the role of someone who had traveled to Canada.  When these students discussed the pictures they had “taken” (a Google Slides presentation I prepared), they were able to demonstrate their cultural competence in a comprehensible way, in spite of their Novice proficiency level.

I’ll look forward to hearing what has worked for you when assessing interpersonal communication with Novice students!


Using Cartoons to Assess Interpretive Listening with Novice Learners


This week’s #langchat discussion about interpretive listening revealed that we language teachers are very diverse in the way we approach this skill, especially with novice learners. Although I reflected at length on the topic of assessing listening in an earlier post, I’d like to specifically address a few of the questions that came up during Thursday night’s discussion.

Question #1: What resources are appropriate for novice learners? While some teachers are hesitant to use authentic resources with early novices, I have found that first-semester French 1 students can successfully interpret carefully selected authentic materials when given level-appropriate tasks.  My go-to resources for these students are cartoon videos, for the following reasons:

  1. These videos were made for novice language learners—young children in the target culture! As a result, the vocabulary and sentence structures are relatively simple and the linguistic input is supported by strong visual cues. This is exactly what our novice learners need.
  2. The wide selection of these videos ensures that there are several choices available for any theme we’ve included in our novice curriculum. My favorites for my Level 1 and 2 students are Trotro, Petit Ours Brun and T’choupi et Doudou, because of the broad range of topics covered and the comprehensibility. I also occasionally use Peppa Pig with my level 2 students. Although the series was originally recorded in (British) English, the French translation was clearly intended for French-speaking children, so I do consider these to be authentic resources.  However, the target culture would not, of course, be represented in these videos.
  3. Cartoons are very engaging to my students. They look forward to their turn at the computer and a few students have even mentioned that they have watched additional episodes of the series at home, “just for fun.”
  4. As authentic resources, these cartoon videos often integrate cultural products, practices and perspectives of the target culture. When Petit Ours Brun puts his shoes under the Christmas tree, his grandfather comments on the delicious turkey, and he wakes up to presents on Christmas morning, my students learn relevant cultural practices regarding Christmas celebrations in France.

Question #2: What types of tasks are appropriate for novice learners? I realized as I participated in Thursday night’s #langchat that I have interpreted ACTFL’s descriptors regarding interpretive listening differently than many of my colleagues. The Novice Mid (my goal for level 1) NCSSFL-ACTFL Can-Do Benchmark for interpretive listening reads, “I can recognize some familiar words and phrases when I hear them spoken.”  If I understood my colleagues’ responses correctly, many of us may be assessing listening by having students list the words and phrases that they hear.  Because it isn’t clear to me how this type of task would demonstrate interpretation/comprehension, I ask students to answer questions to show comprehension of the video, but phrase these questions in a way that allows the students to use previously-learned words/phrases (along with visual context clues) to respond.  This year I am using a multiple-choice format for my formative listening assessments using our district’s recently-adopted Canvas learning management system.  Although I don’t feel that multiple choice is appropriate for many language tasks, this platform has the advantage of providing immediate feedback to my students.  In addition, since creating and assessing these quizzes requires a minimal time commitment on my part, I am able to provide more opportunities for listening than I was with other task types.  Lastly, this format provides students with additional context clues.  Their listening is more purposeful, as they are listening for a specific response, as well as to eliminate distractors. While I typically use open-ended question types on my IPAs, these multiple-choice quizzes, which the students complete individually at a computer, provide the majority of my formative listening assessments.

In order to save time, I create these quizzes directly in Canvas, which unfortunately makes them very difficult to share.  For the purposes of this discussion, I’ve uploaded a Word document of screenshots from a quiz I made this morning for the video, Trotro et les cadeaux de Noël. As this document shows, the questions that I’ve created enable these Novice Low-Mid students to demonstrate their ability to interpret this text using only previously-learned words and phrases and visual clues. While most of the items assess literal comprehension, I’ve included a few questions that require the students to make inferences and guess the meanings of new words using context clues. Here’s a quick explanation of my thought process for each question.

#1: While each of these questions would be appropriate to the context, my students will probably understand “pour moi” when they hear it.  They will also be able to eliminate the 2nd choice, because they know the word for Santa.  Although I’ve used the other question words in class, the students are not using them yet.  I included them in the distractors to encourage the students to start thinking about how questions are asked.

#2: This question is a “gimme.”  The students know the word for book and have visual clues as further support.  I created the question to improve the students’ confidence, enable all students to have some “correct” answers, and to provide more context for further questions.  As you can see, I write LOTS of questions, because I find the questions themselves provide important context and help the students follow along with the video.

#3: “Chouette” is a new word for these students, but it appears in a lot of children’s literature/videos and I think they’ll enjoy using it.  The context should make the meaning of this word clear.

#4/#5: The students have learned the word “jeux-video,” so I think they’ll get “jeu.”  Also, because Trotro uses “jouer,” I think they’ll understand it’s something to play with rather than listen to.

#6/#7: Students can answer by recognizing the previously-learned words “gros” and “belle.”

#8: Although this question does not assess listening comprehension (the word appears in written form), it does provide a contextualized way to introduce a new vocabulary word.

#9: The students can listen for the word “content” as well as eliminate the distractors based on previously-learned words.

#10: The students have heard “maintenant” repeatedly, but it hasn’t been formally introduced.  If they don’t recognize it, they should still be able to eliminate the other choices.

#11: Although the students will not understand the entire sentence in which it appears, they should be able to answer this question by identifying the word “cadeaux.”

#12: I’m curious what my students will do with this inference-based question.  They should recognize the phrase, “Moi, aussi” which should enable them to infer that Boubou got the same gift.

#13: The students should recognize the word “jouer” as well as be able to eliminate the distractors based on previously-learned vocabulary.

#14: The students should be able to use the visual context to guess the meaning of this new vocabulary.

#15: The phrase “c’est moi” should enable the students to choose the correct response for this one. As with several other items, I’ve included the transcription of the entire sentence to introduce new vocabulary—the verb “gagner.”

#16: Although my students won’t be able to use the linguistic content to answer this question, I’ve included it to encourage inference based on visual context clues.

#17: I’ll be curious how they do with this one.  “Bateau” is an unknown word, and although they’ve seen “mer,” I’m not sure they’ll pick up on it.  Some might pick out “pirate,” but I wonder how many will be able to answer this one correctly.

#18: The students have heard “rigolo” and this word even appears in Trotro’s theme song.  In addition, they should be able to eliminate the distractors based on previously-learned vocabulary.

While there’s nothing especially innovative about this assessment format, after completing many similar tasks during their first semester of language study, most of my level 1 students are quite accurate on this type of formative assessment.

Question #3: How should interpretive listening be assessed? I do want to make a point about grading these formative assessments.  Although I do my best to create questions that are mostly at the students’ current proficiency level, with a few items thrown in to encourage “stretch,” I rely heavily on my students’ results to determine how close I came to hitting this target.  Therefore, I do not decide how to grade these assessments until I have data on how the class scored as a whole.  In other words, this particular formative assessment will not necessarily be worth 18 points.  If, for example, the highest score is 16, I might make this the maximum score. For teachers who do not record a score on formative assessments, this isn’t an issue, of course.  I only suggest that we expect and allow for student errors when assessing interpretive listening (even with objective evaluations), just as we do when assessing the other modes.
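For readers who like to see the arithmetic spelled out, the rescaling amounts to dividing by the top score the class actually earned rather than the nominal point total. This is a purely illustrative sketch; the function and its name are my own, not part of Canvas or any gradebook software:

```python
def rescale_scores(raw_scores, nominal_max):
    """Rescale raw quiz scores so that the highest score actually earned
    in the class becomes the effective maximum (the "top score is 16 out
    of a nominal 18" idea described above)."""
    effective_max = min(max(raw_scores), nominal_max)
    return [round(100 * s / effective_max, 1) for s in raw_scores]

# Example: an 18-item quiz on which the best student earned 16
print(rescale_scores([16, 14, 12, 8], 18))  # [100.0, 87.5, 75.0, 50.0]
```

The effect is that student percentages reflect how the class performed against a realistic ceiling instead of penalizing everyone for the “stretch” items.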

I’d love to hear from any of you who are willing to share your experiences and ideas about assessing listening with novice learners!


5 Tips for Grading IPAs

The first grading period ended in my school this week, so there was lots of talk in my department about how time-consuming it is to grade IPA’s.  While I am enough of a teacher nerd to actually enjoy creating IPA’s, I cannot say the same for grading them!  Here are a few suggestions that have helped me streamline the process and cut down the time I spend on this task.

  1. Assign a rough draft for the Presentational Writing. I often incorporate a series of learning stations before an IPA, and one of these stations consists of writing a rough draft for the IPA. Since I have only 8 students at each station per day, the process of providing feedback is less overwhelming. The students benefit from the feedback on this formative assessment and usually do much better on the IPA as a result.
  2. Use rubrics. I began using the Ohio Department of Education rubrics this year and I really like them. Since Ohio has not yet created an Interpretive Rubric, I use the ACTFL rubric, which I’ve modified to meet my needs.  (See this post for a detailed explanation.) When grading the reading and writing sections of an IPA, I lay a rubric next to the student’s paper and check the corresponding box, making very few marks on the student’s paper. Since I will go over the interpretive sections with the class, I don’t find it necessary to mark each response on each student’s paper.  Likewise, having given specific feedback on the rough drafts, there is no need to do so on this final copy, which I will keep in my files after returning it temporarily for feedback purposes.
  3. Avoid math. After I have checked the appropriate box in each section of the rubric, I determine a score for that section of the IPA. (My gradebook is divided according to language skills—reading, writing, listening, and speaking, so each IPA task gets its own score.) I use a holistic system, rather than mathematical calculations to determine an overall score for each task. If all of the checks are in the “Good” column, the student earns a 9/10.  If there are a few checks in the “Strong” column (and the rest are Good), the student earns a 10/10.  If the checks are distributed between the Good and the Developing column, the student earns an 8.  If the checks are all in the Developing column, the student earns a 7.  If there are several checks in the Emerging column, the student earns a 6.  If a student were unable to meet the criteria for Emerging, I would assign a score of 5/10, the lowest score I record.
  4. Grade the Interpersonal Speaking “live.” I know that many teachers have their students record their conversations and then listen to them later. If this works for you, you have my admiration. I know myself far too well—I would procrastinate forever if I had 30 conversations to listen to when I got home at night!  It works much better for me to call up two randomly-chosen students to my desk while the rest of the class is working on the presentational writing.  I can usually get through most of the class in one period, in part because I also place a time limit on the conversation—usually about 3 minutes for my novice students and 4-5 for my intermediates. I find that I can adequately assess their performance in that amount of time, and the students are relieved to know that there is a finite period of time during which they will be expected to speak.  I mark the rubric as they’re speaking, provide a few examples, and then write a score as the next pair is on their way to my desk.
  5. Use technology for Interpretive Listening. Each of my IPA’s includes both an Interpretive Reading and an Interpretive Listening. Because I haven’t found the ACTFL Interpretive Template to work well with listening assessments (see this post), I am currently using basic comprehension, guessing meaning from context, and sometimes main idea and inference questions to assess listening.  Although I’ve used a short answer format for these items in the past, I am starting to experiment with creating multiple choice “quizzes” on Canvas (our learning management system).  I know that other teachers have had success creating assessment items using Zaption and other programs.  I’m still reflecting on the use of objective questions to assess listening, but these programs do offer a way for teachers to provide more timely feedback and for students to benefit from additional context to guide their listening.
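The “avoid math” conversion in tip 3 can be written out as a short decision list. The sketch below is only an illustration of the pattern I described; the function name, the threshold of two checks for “several” Emerging marks, and the handling of mixed edge cases are my own assumptions, not a formula I actually run:

```python
def holistic_score(checks):
    """Map a list of rubric checks (one column name per rubric row) to a
    10-point score, following the holistic pattern in tip 3 above.
    Expected column names: "Strong", "Good", "Developing", "Emerging".
    """
    n = {col: checks.count(col) for col in
         ("Strong", "Good", "Developing", "Emerging")}
    if sum(n.values()) < len(checks):
        return 5        # could not meet even the Emerging criteria
    if n["Emerging"] >= 2:
        return 6        # several checks in the Emerging column
    if n["Developing"] == len(checks):
        return 7        # all checks in Developing
    if n["Developing"] or n["Emerging"]:
        return 8        # distributed between Good and Developing
    if n["Strong"]:
        return 10       # a few Strong checks, the rest Good
    return 9            # all checks in the Good column
```

The point of the holistic approach is that no averaging or point-counting happens at all—each score band corresponds to an at-a-glance pattern on the rubric.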

If you have any tips for grading IPA’s,  please share!

Photo Credits

  • Comstock Images/Comstock/Getty Images

Assessing Proficiency: A SLO Pre-Assessment for French 2 Students

In Ohio, as in an increasing number of states, teachers are now evaluated (in part) on the extent to which their students meet the Student Learning Objectives that have been set for them.  Fortunately, both the Ohio Foreign Language Association and the Ohio Department of Education have encouraged us to develop SLO’s based on student growth in proficiency.  Therefore, within the first couple of weeks of school, I will be giving this pre-assessment (French 2 SLO Pre-Assessment; SLO Article p. 1; SLO Article p. 2) to my French 2 students.  Rather than assessing their work using performance rubrics, as I do for the unit IPA’s, I will use these proficiency rubrics to assess this IPA. I will then give a post-assessment (an IPA that is unrelated to a recent unit of study) to assess student growth.

How do you measure growth in your students?

Bienvenue Partie 2: Designing IPA’s for Novice Low Learners

In conversations about Integrated Performance Assessments, my fellow teachers often share their concerns about using authentic texts with beginners. There seems to be a widespread belief that true beginners cannot derive meaning from texts created by native speakers for native speakers. I hope that these assessments, which will be implemented during the unit I shared in yesterday’s post, will demonstrate that even Novice Low learners can read and listen to authentic texts when the tasks are designed to correspond to their proficiency level.

As I explained in yesterday’s post, I created two separate IPA’s for this unit.  As often happens in real-life school settings, instructional decision-making is influenced by many factors.  Because this unit will not be completed before interim progress report grades are due, I prepared a short IPA to be administered after about three weeks of instruction.  This assessment will provide information to my students and their families regarding their ability to use their brand-new language skills in an authentic context.

IPA #1 (Revised 9/14/2015)

As you can see, I did not follow the exact order (Interpretive-Interpersonal-Presentational) that is recommended in designing IPA’s.  In this case I used an alternative format to better fit the context of the assessment, which was a visit to a Francophone school.  Therefore, in this IPA the students will first listen to an authentic video about holidays and then read an article about France from an authentic children’s magazine (Les Pays…08082015).  Next, they will respond to a note from a student in the class.  Lastly, they will answer the school secretary’s questions.  Although all of my previous IPA’s have incorporated student-to-student interaction for the interpersonal task, I will play the role of the school secretary in this instance, as the Novice Low ACTFL Can-Do’s reflect the students’ ability to introduce themselves at this level, but not to interview others. This is the “secretary’s” script:


Comment ça va?

Tu t’appelles comment?

Comment ça s’écrit ?

Tu as quel âge ?

Quelle est la date de ton anniversaire?

Merci, Bonne journée.

Au revoir.

IPA #2 (Note: the video used for the listening assessment is no longer available, but a search on “Mes fournitures scolaires” on Youtube might provide a similar video.)

In this summative assessment for the unit, I continued the context by explaining that the students were now preparing for their first day of school in their temporary home in Morocco.  Before the first day they will 1) read the school’s list of required supplies (Interpretive Reading), 2) listen to a video in which a student presents her school supplies (Interpretive Listening), 3) discuss their school supplies with a neighbor (Interpersonal Communication), and 4) make a list of school supplies they need to buy (Presentational Writing).

French 1 Unit 1 Formatives

As shown in the tentative agenda I included in yesterday’s post, I will administer a quick formative assessment after each lesson.  These quizzes are designed to assess the extent to which the students are able to identify new vocabulary words.  Any student who is not successful on any of these quizzes will be given an opportunity to receive additional instruction and retake the assessment. As with the first IPA, the red text is teacher script and will not appear in the student copy.


Grading: A necessary evil?

If it were up to me, I would provide feedback, but not numerical or letter grades, to my students.  In my experience, assigning scores to assignments, assessments, and overall achievement often has a negative effect on the learning process.  My more ambitious students are so focused on their scores on various assessments that they tend to disregard the feedback provided to help them increase their proficiency. The less motivated students sometimes regard a low score as an excuse to stop trying, rather than directing their attention to constructive feedback that would help them improve on future performances. Furthermore, parents and other stakeholders are inclined to request opportunities for their students to “earn more points,” rather than suggestions for how these students can improve their proficiency.

As much as I would like to completely eliminate the process of assigning grades to my students, I know this is not a realistic expectation given my current teaching situation.  In my school, as in most large public high schools in the country, grades serve many purposes for the students and stakeholders in their education.  Here are a few that immediately come to mind:

  • Some parents use grades to determine the extent to which they need to become more involved in their child’s schoolwork, limit extra-curricular activities, take disciplinary measures, etc.
  • Grades provide input to guidance counselors when making scheduling decisions.
  • Administrators consider grades when placing students in various educational programs.
  • Coaches make decisions about what types of intervention to provide based on student athletes’ grades.
  • Mental health professionals consider students’ grades when diagnosing certain learning differences or mental health issues.
  • Colleges use students’ grades to make decisions about whom to accept and to whom to award scholarships.
  • Students make decisions about work habits and even whether to remain enrolled in a course based on their grades.

For these reasons, I am required to keep an (electronic) gradebook in which I record numerical scores for various assignments and assessments.  These scores are then used to determine a numerical average, which is then converted to a letter grade based on the district’s grading scale.
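My district’s actual grading scale isn’t reproduced here, so the sketch below uses a common 90/80/70/60 scale purely as an illustration of the average-to-letter conversion step; the cutoffs and the function itself are assumptions, not my district’s policy:

```python
def letter_grade(average):
    """Convert a numerical quarter average to a letter grade.
    NOTE: the 90/80/70/60 cutoffs are a common illustrative scale,
    not the actual district grading scale."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if average >= cutoff:
            return letter
    return "F"

print(letter_grade(84.6))  # B
```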

Although I cannot totally eliminate the grading process, I do have a fair amount of autonomy in determining how these grades are tabulated.  In my current teaching position, I am able to make the following decisions regarding the grading process:

  • The formula used to convert individual scores into an overall grade
  • The types of assignments/assessments that are graded
  • The methods I use to assign a numerical score to these assignments/assessments

When making choices about these aspects of the grading process, I take many factors into account.  First and foremost, it is of utmost importance that my students’ grades reflect what they can do with language (and therefore their proficiency), rather than their compliance, behavior, effort, etc.  Secondly, it is important that the scores provide targeted feedback on each student’s strengths and areas for improvement. Lastly, I want my grading system to provide motivation for those students who are grade-driven, yet not be overly punitive for those students who are less motivated by grades. While I continue to tweak my grading system as my understanding of proficiency evolves, this is the grading system I will implement this year.

Formulating a Quarter Average

In order to ensure that my students’ overall grades reflect the extent to which they have met the proficiency goals I have set for them, 80% of each student’s quarter grade is derived from his/her scores on the two or three IPA’s that I administer each quarter. Rather than recording one score for each IPA, however, I assign a separate score for each language skill that is assessed on the IPA.  Therefore, each student will earn a Reading score for the interpretive reading task on the IPA, a Listening score for the interpretive listening task, a Speaking score for the interpersonal communication or presentational speaking task, and a Writing score for the presentational writing task.  Each of these skill categories is worth 20% of the overall grade.  The advantage of recording these scores in separate categories, rather than as a single score, is that I can immediately identify a student’s strengths and weaknesses and provide individualized coaching to help students improve.

While some educators use the communicative modes, rather than language skill areas, as their grading categories, my personal experience does not support this configuration.  I have found little transfer, for example, between interpretive listening and interpretive reading skills.  Likewise, my students with strong presentational speaking skills do not necessarily have the accuracy required to be strong writers.  I do find, however, that students are fairly consistent across modes in terms of language skills.  For instance, a student who can communicate effectively in a conversation can usually transfer these same skills to an oral presentation.

In addition to these language-skill categories, I have a fifth section which includes all other assignments/assessments.  Grades on classwork, formative assessments, quizzes, etc. are recorded as Miscellaneous scores. While many teachers don’t record scores on formative assessments, I have found that many of my students are more motivated to complete classwork and to prepare for formative assessments if their scores on these evaluations will appear in the gradebook. Due to the large number of scores in this category, each individual score has only minor mathematical significance.  As a result, a poor score on any one of these assignments will have very little effect on a student’s overall grade, ensuring that the student’s quarter grade is derived primarily from his/her summative IPA’s.
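The weighting I’ve described amounts to a simple weighted average of the five category averages. As a sketch only—the dictionary and function below are hypothetical, not an export of my actual gradebook:

```python
# Category weights described above: four language-skill categories
# (80% total, from IPA tasks) plus Miscellaneous (20%).
WEIGHTS = {"Reading": 0.20, "Listening": 0.20, "Speaking": 0.20,
           "Writing": 0.20, "Miscellaneous": 0.20}

def quarter_average(category_averages):
    """Combine per-category percentage averages into a quarter grade."""
    return round(sum(w * category_averages[c]
                     for c, w in WEIGHTS.items()), 1)

print(quarter_average({"Reading": 90, "Listening": 85, "Speaking": 95,
                       "Writing": 80, "Miscellaneous": 100}))  # 90.0
```

Because the Miscellaneous category contains many small scores, any single formative grade inside it moves the final average very little, which is exactly the effect described above.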

Assigning Scores to IPA’s

This year I will assess my IPA’s using the Ohio Department of Education’s Presentational Speaking, Presentational Writing, and Interpersonal Communication Scoring Guides and the ACTFL IPA rubric for Interpretive Reading (with the modifications discussed in this earlier post).  As I assess the IPA’s, I will check the appropriate box in each section of these rubrics in order to provide comprehensive feedback to my students.  However, I will not write a numerical score on the rubric, in order to ensure that the students remain focused on their learning rather than their grade. Since I still need a numerical score for my gradebook, I’ll use these formulas to convert the rubric evaluations into scores for record-keeping purposes.

Interpretive Listening: Because I have not found the ACTFL template to be an effective method of assessing interpretive listening skills (see this post), I am currently using a variety of comprehension questions to assess listening.  My method for determining a grade based on student responses to these questions is, however, a work in progress.  Although I try to create questions that can be answered using previously-learned vocabulary and context clues, my students’ performances have demonstrated that I am not always realistic in my expectations.  It is clearly not reasonable to expect Novice students to answer all questions about an authentic video when “I can understand basic information in ads, announcements, and other simple recordings.” is an Intermediate Mid NCSSFL-ACTFL Can-Do statement. Therefore, I used data from my students’ responses on last year’s IPA’s (all of which were new) to inform my calculations. I then create a table such as this one.  Because this process is norm-referenced rather than criterion-referenced, I am not entirely satisfied with it and will continue to reflect on how best to assess my students’ interpretive listening.

Assigning Scores to Formative Assessments

While the primary purpose of my formative assessments is to provide feedback, I also assign scores to some of these assignments.  Doing so provides additional motivation for some students and encourages absent students to make up their missed work.  On most days, my students will complete at least one of the following activities, and I use these rubrics to formulate a score when one is assessed formatively.

  1. Presentational Speaking – I sometimes choose 2-3 students to present on a topic that was assigned as homework (Novice) or to present what they have learned from a reading or conversation (Intermediate).
  2. Interpersonal Speaking – I circulate among my students as they complete the interpersonal speaking activities during the unit. While I cannot spend enough time with each pair/group to adequately assess them all, I do choose 3-4 groups to assess during each interpersonal speaking activity.
  3. Presentational Writing – My students complete several presentational writing assignments throughout the unit that are designed to help them practice the skills they will need to be successful on the IPA. While I cannot assess all of these assignments, I will provide feedback (or use peer feedback) as often as possible. In addition, by randomly selecting several papers to score on each assignment, I can ensure that all students will have at least one writing formative assessment score for each unit.
  4. Interpretive Reading/Listening – In many cases, I provide whole class feedback by going over the correct responses to interpretive activities. However, I do sometimes collect student work in order to evaluate and provide feedback on individual performance. Depending on how much time I have available, I might correct all or parts of an interpretive task for feedback purposes and then assign a score using the interpretive formative assessment rubric.

While I will continue to evaluate my grading practices, I hope that this system will allow me to assess my students’ progress on the goals I have established and to provide the feedback they need to make continued progress along the path to proficiency.