
Bienvenue Partie 2: Designing IPA’s for Novice Low Learners

In conversations about Integrated Performance Assessments, my fellow teachers often share their concerns about using authentic texts with beginners. There seems to be a widespread belief that true beginners cannot derive meaning from texts created by native speakers for native speakers. I hope that these assessments, which will be implemented during the unit I shared in yesterday’s post, will demonstrate that even Novice Low learners can read and listen to authentic texts when the tasks are designed to correspond to their proficiency level.

As I explained in yesterday’s post, I created two separate IPA’s for this unit. As often happens in real-life school settings, instructional decision-making is influenced by many factors. Because this unit will not be completed before the interim progress report grades are due, I prepared a short IPA to be administered after about three weeks of instruction. This assessment will give my students and their families information about the students’ ability to use their brand-new language skills in an authentic context.

IPA #1 (Revised 9/14/2015)

As you can see, I did not follow the exact order (Interpretive-Interpersonal-Presentational) that is recommended in designing IPA’s. In this case I used an alternative format to better fit the context of the assessment, which was a visit to a Francophone school. Therefore, in this IPA the students will first listen to an authentic video about holidays and then read an article about France from an authentic children’s magazine (Les Pays…08082015). Next, they will respond to a note from a student in the class. Lastly, they will answer the school secretary’s questions. Although all of my previous IPA’s have incorporated student-to-student interaction for the interpersonal task, I will play the role of the school secretary in this instance, as the Novice Low ACTFL Can-Do’s reflect the students’ ability to introduce themselves at this level, but not to interview others. This is the “secretary’s” script:

Bonjour.

Comment ça va?

Tu t’appelles comment?

Comment ça s’écrit ?

Tu as quel âge ?

Quelle est la date de ton anniversaire?

Merci, Bonne journée.

Au revoir.

IPA #2 (Note: the video used for the listening assessment is no longer available, but a search on “Mes fournitures scolaires” on YouTube might provide a similar video.)

In this summative assessment for the unit, I continued the context by explaining that the students were now preparing for their first day of school in their temporary home in Morocco. Before the first day they will 1) read the school’s list of required supplies (Interpretive Reading), 2) listen to a video in which a student presents her school supplies (Interpretive Listening), 3) discuss their school supplies with a neighbor (Interpersonal Communication), and 4) make a list of school supplies they need to buy (Presentational Writing).

French 1 Unit 1 Formatives

As shown in the tentative agenda I included in yesterday’s post, I will administer a quick formative assessment after each lesson. These quizzes are designed to assess the extent to which the students are able to identify new vocabulary words. Any student who is not successful on any of these quizzes will be given an opportunity to receive additional instruction and retake the assessment. As with the first IPA, the red text is teacher script and will not appear in the student copy.

Image Credit: http://claire-mangin.eklablog.com/

Grading: A necessary evil?

If it were up to me, I would provide feedback, but not numerical or letter grades, to my students. In my experience, assigning scores to assignments, assessments, and overall achievement often has a negative effect on the learning process. My more ambitious students are so focused on their scores for various assessments that they tend to disregard the feedback provided to help them increase their proficiency. The less motivated students sometimes regard a low score as an excuse to stop trying, rather than directing their attention to constructive feedback that would help them improve on future performances. Furthermore, parents and other stakeholders are inclined to request opportunities for their students to “earn more points,” rather than suggestions for how these students can improve their proficiency.

As much as I would like to completely eliminate the process of assigning grades to my students, I know this is not a realistic expectation given my current teaching situation.  In my school, as in most large public high schools in the country, grades serve many purposes for the students and stakeholders in their education.  Here are a few that immediately come to mind:

  • Some parents use grades to determine the extent to which they need to become more involved in their child’s schoolwork, limit extra-curricular activities, take disciplinary measures, etc.
  • Grades provide input to guidance counselors when making scheduling decisions.
  • Administrators consider grades when placing students in various educational programs.
  • Coaches make decisions about what types of intervention to provide based on student athletes’ grades.
  • Mental health professionals consider students’ grades when diagnosing certain learning differences or mental health issues.
  • Colleges use students’ grades when deciding whom to admit and to whom to award scholarships.
  • Students make decisions about work habits and even whether to remain enrolled in a course based on their grades.

For these reasons, I am required to keep an (electronic) gradebook in which I record numerical scores for various assignments and assessments. These scores are used to determine a numerical average, which is then converted to a letter grade based on the district’s grading scale.

Although I cannot totally eliminate the grading process, I do have a fair amount of autonomy in determining how these grades are tabulated.  In my current teaching position, I am able to make the following decisions regarding the grading process:

  • The formula used to convert individual scores into an overall grade
  • The types of assignments/assessments that are graded
  • The methods I use to assign a numerical score to these assignments/assessments

When making choices about these aspects of the grading process, I take many factors into account. First and foremost, it is of utmost importance that my students’ grades reflect what they can do with language (and therefore their proficiency), rather than their compliance, behavior, effort, etc. Secondly, it is important that the scores provide targeted feedback on each student’s strengths and areas for improvement. Lastly, I want my grading system to provide motivation for those students who are grade-driven, yet not be overly punitive toward those students who are less motivated by grades. While I continue to tweak my grading system as my understanding of proficiency evolves, this is the grading system I will implement this year.

Formulating a Quarter Average: In order to ensure that my students’ overall grades reflect the extent to which they have met the proficiency goals I have set for them, 80% of each student’s quarter grade is derived from his/her scores on the two or three IPA’s that I administer each quarter. Rather than recording one score for each IPA, however, I assign a separate score for each language skill that is assessed on the IPA. Therefore, each student will earn a Reading score for the interpretive reading task on the IPA, a Listening score for the interpretive listening task, a Speaking score for the interpersonal communication or presentational speaking task, and a Writing score for the presentational writing task. Each of these skill categories is worth 20% of the overall grade. The advantage of recording these scores in separate categories, rather than as a single score, is that I can immediately identify a student’s strengths and weaknesses and provide individualized coaching to help students improve.

While some educators use the communicative modes, rather than language skill areas, as their grading categories, my personal experience does not support this configuration. I have found little transfer, for example, between interpretive listening and interpretive reading skills. Likewise, my students with strong presentational speaking skills do not necessarily have the accuracy required to be strong writers. I do find, however, that students are fairly consistent across modes in terms of language skills. For instance, a student who can communicate effectively in a conversation can usually transfer these same skills to an oral presentation.

In addition to these language skill categories, I have a fifth category, which includes all other assignments/assessments. Grades on classwork/formative assessments, quizzes, etc. are recorded as Miscellaneous scores. While many teachers don’t record scores on formative assessments, I have found that many of my students are more motivated to complete classwork and to prepare for formative assessments if their scores on these evaluations will appear in the gradebook. Due to the large number of scores in this category, each individual score has only minor mathematical significance. As a result, a poor score on any of these assignments will have very little effect on a student’s overall grade, ensuring that the student’s quarter grade is primarily derived from his/her summative IPA’s.
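For readers who like to see the arithmetic spelled out, here is a minimal sketch of how such a quarter average could be tabulated, assuming equal 20% weights for the five categories described above. The letter-grade cutoffs are a hypothetical 90/80/70/60 scale standing in for a district scale, not my district’s actual one.

```python
# A minimal sketch of the quarter-average tabulation described above.
# The 20% category weights come from the text; the letter scale below
# is a hypothetical 90/80/70/60 scale, not an actual district scale.

WEIGHTS = {
    "Reading": 0.20,
    "Listening": 0.20,
    "Speaking": 0.20,
    "Writing": 0.20,
    "Miscellaneous": 0.20,  # classwork, formative assessments, quizzes, etc.
}

def category_average(scores):
    """Average all scores recorded in one gradebook category."""
    return sum(scores) / len(scores) if scores else 0.0

def quarter_average(gradebook):
    """Weight each category average and sum (scores on a 0-100 scale)."""
    return sum(WEIGHTS[category] * category_average(scores)
               for category, scores in gradebook.items())

def letter_grade(average):
    """Convert a numerical average to a letter on the hypothetical scale."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if average >= cutoff:
            return letter
    return "F"

# One Reading/Listening/Speaking/Writing score per IPA, plus many small
# Miscellaneous scores, each of which carries little mathematical weight.
gradebook = {
    "Reading": [85, 92],
    "Listening": [78, 80],
    "Speaking": [88, 90],
    "Writing": [82, 85],
    "Miscellaneous": [95, 70, 100, 88, 90],
}

average = quarter_average(gradebook)
print(f"{average:.1f} -> {letter_grade(average)}")  # 85.7 -> B
```

Notice that the single low Miscellaneous score (70) barely moves the final average, which is exactly the effect described above.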

Assigning Scores to IPA’s: This year I will assess my IPA’s using the Ohio Department of Education’s Presentational Speaking, Presentational Writing and Interpersonal Communication Scoring Guides and the ACTFL IPA rubric for Interpretive Reading (with the modifications discussed in this earlier post). As I assess the IPA’s, I will check the appropriate box in each section of these rubrics in order to provide comprehensive feedback to my students. However, I will not provide a numerical score, in order to ensure that the students remain focused on their learning, rather than their grade. As I will need a numerical score for my gradebook, I’ll use these formulas to convert the rubric evaluations into scores for record-keeping purposes.

Interpretive Listening: Because I have not found the ACTFL template to be an effective method of assessing interpretive listening skills (see this post), I am currently using a variety of comprehension questions to assess listening. My method for determining a grade based on student responses to these questions is, however, a work in progress. Although I try to create questions that could be answered using previously-learned vocabulary and context clues, my students’ performances have demonstrated that I am not always realistic in my expectations. It is clearly not reasonable to expect Novice students to answer all questions about an authentic video when “I can understand basic information in ads, announcements, and other simple recordings” is an Intermediate Mid NCSSFL-ACTFL Can-Do statement. Therefore, I used data from my students’ responses on IPA’s (all of which were new last year) to inform my calculations, and then created a table such as this one. Because this process is norm-referenced rather than criterion-referenced, I am not entirely satisfied with it and will continue to reflect on how best to assess my students on interpretive listening.
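For anyone curious what such a norm-referenced conversion might look like mechanically, here is a minimal sketch. The raw scores and the percentile bands are invented for illustration; they are not my actual student data or the table referenced above.

```python
# A minimal sketch of a norm-referenced conversion for listening scores.
# The raw data and percentile bands below are invented for illustration,
# not actual student data or the table referenced above.

import statistics

# Hypothetical raw scores (comprehension questions answered correctly)
# from last year's administrations of the same IPA.
last_year_raw = [3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 8, 9, 9, 10]

def percentile_cutoffs(data):
    """Anchor grade cut points to the quartiles of past performance."""
    return statistics.quantiles(data, n=4)  # [Q1, median, Q3]

def listening_grade(raw, cutoffs):
    """Map a raw score to a gradebook score relative to the norm group."""
    q1, median, q3 = cutoffs
    if raw >= q3:
        return 100
    if raw >= median:
        return 90
    if raw >= q1:
        return 80
    return 70

cuts = percentile_cutoffs(last_year_raw)
print(listening_grade(7, cuts))  # -> 90: above the median of the norm group
```

The weakness of this approach is visible in the code itself: the cut points describe how students performed relative to one another, not whether they met a proficiency criterion, which is why I remain dissatisfied with it.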

Assigning Scores to Formative Assessments: While the primary purpose of my formative assessments is to provide feedback, I also assign scores to some of these assignments. Doing so provides additional motivation to some students and encourages absent students to make up their missed work. On most days, my students will complete at least one of the following, which may be scored as a formative assessment. I use these rubrics to formulate a score on the following types of formative assessments.

  1. Presentational Speaking – I sometimes choose 2-3 students to present on a topic that was assigned as homework (Novice) or to present what they have learned from a reading or conversation (Intermediate).
  2. Interpersonal Speaking – I circulate among my students as they are completing the interpersonal speaking activities during the unit. While I cannot spend enough time with each pair/group to adequately assess them, I do choose 3-4 groups to assess during each interpersonal speaking activity.
  3. Presentational Writing – My students complete several presentational writing assignments throughout the unit that are designed to help them practice the skills they will need to be successful on the IPA. While I cannot assess all of these assignments, I will provide feedback (or use peer feedback) as often as possible. In addition, by randomly selecting several papers to score on each assignment, I can ensure that all students will have at least one writing formative assessment score for each unit.
  4. Interpretive Reading/Listening – In many cases, I provide whole class feedback by going over the correct responses to interpretive activities. However, I do sometimes collect student work in order to evaluate and provide feedback on individual performance. Depending on how much time I have available, I might correct all or parts of an interpretive task for feedback purposes and then assign a score using the interpretive formative assessment rubric.

While I will continue to evaluate my grading practices, I hope that this system will allow me to assess my students’ progress on the goals I have established and to provide the feedback they need to make continued progress along the path to proficiency.

 

Musings on Assessing Interpretive Listening

A couple of weeks ago I shared my thoughts about assessing interpretive reading. In that post, I gave my opinion that ACTFL’s IPA Template was a generally effective way to design an assessment of reading comprehension and that, with a couple of modifications, its rubric was well-aligned with the tasks on the template. I have reservations, however, about the use of the ACTFL IPA template to assess listening. Here are a few of my thoughts about assessing listening; please share yours!

Assessing Interpretive Listening is Important

By defining both listening and reading comprehension as Interpretive Communication, ACTFL has given us an out when writing IPA’s.  We can choose to include either one, but are not required to include both.   My guess is that when given the choice, most of us are choosing authentic written rather than recorded texts for the interpretive portion of our IPA’s.  There are several good reasons why this may be the case.

  1. Authentic written texts are usually relatively easy to find. A quick Google search of our topic + Infographie will often produce a content-filled, culturally-rich text with the visual support that Novice learners need. Picture books, ads, social media posts, etc. provide additional resources. For our Intermediate students, our options are even greater, as their proficiency allows them to read a wider variety of short texts, an unlimited supply of which is quickly located online.
  2. Written texts can be easily evaluated regarding their appropriateness for our interpretive assessment tasks. A quick skim will reveal whether a text being considered contains the targeted language and structures, culturally-relevant content, and appropriate visual support that we are looking for.
  3. Assessments of interpretive reading are easy to administer. We need only a Xerox machine to provide a copy of the text to each student, who can then complete the interpretive task at her own pace. When a student is absent, we can simply hand him a copy of the text and interpretive activity and he can complete the task in a corner of the room or any other area of the building where make-ups are administered.

Curating oral texts and assessing their interpretation, however, is considerably more time-consuming. While we have millions of videos available to us on YouTube (my personal go-to for authentic listening texts), videos cannot be skimmed like written texts. We actually have to listen to the videos that our searches produce in order to evaluate whether they are appropriate to the proficiency of the students for whom they are intended. In some cases, we have to listen to dozens of videos before finding that gem that contains the appropriate vocabulary, cultural content and visual support that our learners need. When it comes to administering these assessments, we often face additional challenges. In my school, YouTube is blocked on student accounts. Therefore, I have to log into 30 computers in a lab (which is seldom available) or my department’s class set of iPads (sometimes available) for all of my students to individually complete a listening assessment at the same time. While many of us play and project our videos to the class as a whole, I think this places an undue burden on our Novice students who “require repetition, rephrasing, and/or a slowed rate of speech for comprehension” (ACTFL Proficiency Guidelines, 2012). A student who has her own device can pause and rewind when needed, as well as slow the rate of speech when appropriate technology is available.

In spite of these challenges to evaluating listening comprehension, I think we have a responsibility to assess our students’ ability to interpret oral texts. As Dave Burgess said at a conference I recently attended, “It’s not supposed to be easy, it’s supposed to be worth it.” Assessing interpretive listening skills IS worth it. As the adage says, “we teach what we test.” If we are not evaluating listening, we are not teaching our students what they need to comprehend and participate in verbal exchanges with members of the target culture. While technology may allow us to translate a written text in nanoseconds, no app can allow us to understand an unexpected public announcement or participate fully in a natural conversation with a native speaker. In my opinion, our assessment practices are not complete if we are not assessing listening comprehension to the same extent as reading comprehension. As a matter of fact, I include separate categories for each of these skills in my electronic gradebook. While others may separate grades according to modes of communication, I’m not sure this system provides as much information regarding student progress toward proficiency. Although both reading and listening may require interpretation of a text, they are clearly vastly different skills. Students who are good readers are not necessarily good listeners, and vice versa. In its Proficiency Guidelines, ACTFL clearly differentiates these two skills; don’t we need to do the same when evaluating our students using an IPA?

Designing Valid Interpretive Listening Assessments is Difficult

In my opinion, ACTFL has provided us with very little direction in assessing interpretive listening. While we are advised to use the same IPA Interpretive template, I find that many of these tasks do not effectively assess listening comprehension. Consider the following:

Key Words. While students can quickly skim a written text to find key words, the same is not true of recorded texts. Finding isolated key words requires listening to the video multiple times and attempting to isolate a single word in a sentence. I find this task needlessly time-consuming, as I will be assessing literal comprehension in other tasks. Furthermore, this task puts some students, especially those with certain learning disabilities, at a significant disadvantage. Many of these students have excellent listening comprehension, but are not able to accurately transfer what they understand aurally into written form.

Main Idea. Although this task seems fairly straightforward, I question its validity in assessing comprehension for Novice and Intermediate learners. According to the ACTFL Proficiency Guidelines, Novice-level listeners are “largely dependent on factors other than the message itself” and Intermediate listeners “require a controlled listening environment where they hear what they may expect to hear.” This means that all of my students will be highly dependent on the visual content of the videos I select to ascertain meaning. Therefore, any main idea they provide will most likely be derived from what the students see rather than what they hear. A possible solution might be for the teacher to provide a series of plausible main ideas (all of which could be extrapolated from the visual information) and have the students choose the best one. However, this task would certainly be unrealistic for our Novice learners, who are listening at word level.

Supporting Details. I think this task on the IPA template is the most effective in providing feedback regarding our students’ ability to understand a recorded text. By providing a set of details which may be mentioned, we provide additional context to help our students understand the text, and by requiring them to fill in information we are assessing their literal comprehension of what they hear. In addition, this type of task can easily be adjusted to correspond to the proficiency level of the students. Providing information to support the detail “Caillou’s age,” for example, is a realistic expectation for a novice listener who is watching a cartoon.

Organizational Features. While I see little value in this task for interpretive reading, I see even less for listening. As previously mentioned, even Intermediate listeners need to be assessed using straightforward texts so that they can anticipate the information they will hear. Having my students describe the organization of a recorded text would not provide additional information about their comprehension.

Guessing meaning from context. As much as I value this task on reading assessments, I do not find it to be a valid means of assessing aural comprehension. The task requires the teacher to provide a sentence from the text; the student then guesses the meaning of an underlined word. As soon as I provide my students with a written sentence, the task becomes an assessment of their ability to read the sentence, rather than understand it aurally.

Inferences. As with the main idea, I think Novice and Intermediate listeners will be overly dependent on visual cues to provide inferences. While I believe students should be taught to use context to help provide meaning, I prefer to assess what they are actually able to interpret aurally. ACTFL does suggest providing multiple-choice inferences in the IPA template, but again the teacher would have to provide choices whose plausibility could not be derived from visual information in order to isolate listening comprehension.

Author’s Perspective. While I regularly include author’s perspective items on my assessments for my AP students, I feel this is an unrealistic task for Novice and Intermediate Low listeners.  Students who are able to understand words, phrases, and sentence-length utterances will most likely be unable to identify an author’s perspective using only the verbal content of a video.

Cultural Connections. Authentic videos are one of the best tools we have for providing cultural content to our students.  The content provided by the images is more meaningful, memorable, and complete than any verbal information could be.  However, once again it is difficult to isolate the verbal content from the visual, creating a lack of validity for assessment purposes.

Conclusion

For now, I’m planning on using supporting detail or simple comprehension questions when formally assessing my students’ interpretive listening skills in order to ensure that I am testing what I intend to test. When practicing these interpretive skills, however, I plan on including some of the other tasks from the IPA template in order to fully exploit the wealth of information that is included in authentic videos.  I’m looking forward to hearing from you about how you assess your students on interpretive listening!

Checking for Comprehension: Providing feedback on interpretive tasks

A couple of weeks ago I shared the checklists I created to streamline the feedback process with the new Ohio World Language Scoring Guides. These checklists were designed to quickly inform students of their strengths and areas for improvement on Presentational Speaking/Writing and Interpersonal assessments. Although Ohio did not create its own Interpretive scoring guide, I decided to make a quick checklist to accompany the ACTFL interpretive rubric so that I would have a complete set of these checklists to guide my feedback process on both formative and summative assessments. Here’s a copy of the checklist: interpretive feedback.

In order to maintain consistency with the other checklists, I wrote the expectations in the middle column. Most of the wording I used here came from the “Strong Comprehension” column on the ACTFL rubric, although I made a few slight changes based on my reflections in this previous post. In the column on the right, I have listed some suggestions that I will check for students who don’t meet the descriptors for Strong Comprehension. On the left are comments designed to let the students know what their specific strengths were on the task. As I intend for this feedback checklist to be used in conjunction with the ACTFL rubric, I have also included a section at the bottom where I will check which level of comprehension the student demonstrated on the interpretive task being assessed.

Because I rely heavily on authentic resources and corresponding interpretive tasks in designing my units, it was very important for me to be able to provide timely feedback on these assignments/formative assessments.  It is my hope that these checklists will help me quickly give specific feedback that will enable the students to demonstrate greater comprehension on their summative assessments (IPA) for each unit.

Musings on Assessing Interpretive Reading

Although I still have four days left in my current school year, my thoughts are already turning to the changes I plan to make to improve my teaching for next year. This great article in a current teen magazine (matin p. 1, matin p. 2, matin p. 3, matin p. 4) prompted me to write this interpretive task for my first French 2 IPA in the fall, as well as to reflect on what has and hasn’t worked in assessing my students’ proficiency in the interpretive mode. I have learned a lot after writing 30+ IPA’s this year!

As my readers will have noticed, I create all of my interpretive assessments based on the template provided by ACTFL.  For the most part, I love using this template for assessing reading comprehension.  The tasks on the template encourage my students to use both top-down and bottom-up processes to comprehend what they are reading and in most cases the descriptors in the rubric enable the teacher to pinpoint the students’ level of comprehension based on their responses in each section.  I do, however, have a few suggestions about using this template in the classroom and modifying the rubric in a way that will streamline the assessment process and increase student success. Below, I’ve addressed each section of the template, as well as the corresponding section of the ACTFL rubric.

Key Word Recognition: In this section, the students are given a list of English words or phrases and asked to find the corresponding French words/phrases in the text. Because I don’t give many vocabulary quizzes, this section helps me identify whether the students have retained the vocabulary they have used in the unit. In the lower levels I also add cognates to this section, so that the students will become accustomed to identifying these words in the text. I also include some previously-learned words here, to assess the students’ ability to access this vocabulary. This section of the ACTFL rubric works well, as it objectively assesses how many of these words the students are able to identify. I have found it helpful to identify in advance what range of correct answers will be considered all, the majority, half and few, as these are the terms used to define the various levels of comprehension on the rubric. In the IPA that I’ve included here, there are 14 key words, so I’ve set the following range for each level: Accomplished (13-14 correct responses), Strong Comprehension (9-12 correct responses), Minimal Comprehension (6-8 correct responses) and Limited Comprehension (5 or fewer correct responses). Establishing these numerical references helps streamline the process of evaluating the students on this section of the interpretive task and ensures that I am assessing the students as objectively as possible.
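Since these cut-offs are just a mapping from counts to rubric levels, they are easy to capture in a quick script. Here is a sketch; the thresholds are the ones given above for a 14-item task, while the function wrapper itself is mine.

```python
# The score ranges described above for a 14-key-word task.
# The thresholds come from the text; the function is just a sketch.

def key_word_level(correct_responses):
    """Map a count of correctly identified key words (out of 14) to a level."""
    if correct_responses >= 13:   # "all" (13-14)
        return "Accomplished"
    if correct_responses >= 9:    # "the majority" (9-12)
        return "Strong Comprehension"
    if correct_responses >= 6:    # "half" (6-8)
        return "Minimal Comprehension"
    return "Limited Comprehension"  # "few" (5 or fewer)

print(key_word_level(11))  # -> Strong Comprehension
```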

Main Idea: While this is a very important task, I have found that it is rather difficult to assess individual student responses, due to the variety of ways that the students interpret the directions in this section. In the sample IPA, I would consider the main idea of the text to be “to present the responses of a group of teenagers who were questioned about what gets them up in the morning.” However, upon examining the rubric, it is clear that a more detailed main idea is required. According to the rubric, to demonstrate an Accomplished level of comprehension, a student must identify “the complete main idea of the text;” a student who “misses some elements” will be evaluated as having Strong Comprehension, and if she identifies “some part of the main idea,” she falls to the Minimal Comprehension category. Clearly, a strong response to the main idea task must include more than a simple statement. In this example, a better main idea might be “to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.” My students clearly need more direction in identifying a main idea in order to demonstrate Accomplished Comprehension in this section. I think a simple change to the directions might improve their performance here. Here’s my suggestion:

  • Main Idea(s). “Using information from the article, provide the main idea(s) of the article in English” (ACTFL wording) and provide a few details from the text to support the main idea.

An issue that I’ve had in assessing the students’ main ideas is that the descriptors require the teacher to have a clear, specific main idea in mind in order to assess how many “key parts” of the main idea the students have identified. In my practice I have found that interpreting a text’s main idea is quite subjective; students often identify an accurate main idea that may differ considerably from the one I had envisioned. Therefore, I would suggest the following descriptors for this task. The information in parentheses suggests what a main idea might look like for the sample text.

  • Accomplished: Identifies the main idea of text and provides a few pertinent details/examples to support this main idea. (“The main idea is to present a group of students’ responses when interviewed about how they wake up, why they get up at a certain time, and how their morning habits reflect their goals for the future.”)
  • Strong: Identifies the main idea and provides at least one pertinent detail/example to support the main idea. (“The main idea is that a group of kids are telling when they get up in the morning and why.”)
  • Minimal: Provides a relevant main idea but does not support it with any pertinent details or examples. (“It’s about why these kids have to get up in the morning.”)
  • Limited: May provide details from the text but is unable to determine a main idea. (“It’s about what these kids like to do.” or “It’s about what these kids want to be when they grow up.”)

Supporting Details. In my opinion, this section is the meat of an interpretive assessment. This is where I actually find out how much the students understood about the text. As a result, I usually include more items here than ACTFL’s suggested five, with three distractors. While I like the general format of this task on the template, I quickly discovered when implementing it that I needed to make some slight changes. Namely, I had to eliminate the directive that the students identify where each supporting detail was located in the text and write the letter of the detail next to the corresponding information. In the real world, when I am grading 50+ IPA’s at a time, checking this task was entirely too cumbersome. I photocopy the texts separately from the assessment, so that the students are not required to constantly flip through a packet to complete the IPA. Therefore, if I were to assess this task, I would have to lay each student’s two packets next to each other and refer back and forth between the assessment and the text to locate each letter. I would then have to evaluate whether each letter was indeed placed close enough to the corresponding detail to indicate true comprehension. I found this to be an extremely time-consuming, as well as subjective, task that did not provide the information I needed to determine how well the student comprehended the details in the text. As a result, I quickly eliminated this requirement.

I have, however, retained the requirement that students check each detail to indicate whether it was included in the text. This allows the student who “knows it’s right there” but “doesn’t know what it says” to demonstrate his limited comprehension. The most important aspect of this section, however, is that the students provide information to support the details they have checked. Because this is the only section of the template that actually requires the student to demonstrate literal, sentence-level comprehension of the text, I think it’s important to evaluate it very carefully.

In my opinion, the descriptors in the ACTFL rubric do not allow the teacher to adequately assess this section. Consider this description for Minimal Comprehension: “Identifies some supporting details in the text and may provide limited information from the text to explain these details. Or identifies the majority of supporting details but is unable to provide information from the text to explain these details.” This descriptor creates a false dichotomy between a student’s ability to identify the existence/location of relevant information and his ability to actually comprehend the text. According to this rubric, a student who is unable to provide any actual information from the text would be considered as meeting expectations. In a real-life example, if a language learner knows that the driver’s manual tells which side of the street to drive on, but does not know whether she is to drive on the left or the right, I would not say she has met expectations for minimal comprehension of the manual. Rather than reinforce this dichotomy, I would prefer to delineate the levels of comprehension as:

  • Accomplished: Identifies all supporting details in the text and accurately provides information from the text to explain these details. (same as ACTFL)
  • Strong: Identifies most of the supporting details and provides pertinent information from the text to explain these details.
  • Minimal: Identifies most of the supporting details and provides pertinent information from the text to explain many of them.
  • Limited: Identifies most of the supporting details and provides pertinent information from the text to explain some of them.

As you can see, I expect the student to identify all or most of the supporting details at all levels of comprehension.  Since my students are identifying details by checking a blank, and 70-80% of the blanks will be checked (20%-30% are distractors), a student could randomly check all of the blanks and meet the descriptor for “identifying most of the details.”  Therefore, this part of the descriptor is less relevant than the amount of pertinent information from the text that is provided to explain the details.

Organizational Features: As I mentioned in a previous post about designing IPA’s, I understand the role that an understanding of a text’s organization plays in a student’s comprehension. However, in practice I often omit this section. The organization of the texts that I use is so evident that asking the students to identify it does not provide significant information about their comprehension. If, however, I were to use a text that presented an unexpected organizational structure, I think this task would become relevant, and I would include it and use the ACTFL rubric to assess the students.

Guessing Meaning from Context.  I love this section!  I think that a student’s responses here could tell me more about their comprehension than any other section of the interpretive task.  However, in practice this is not always the case.  In general, my students tend to perform below my expectations in this section. It may be that I am selecting passages that are above their level of comprehension or it may be that my students don’t take the time to locate the actual sentence in the article.  As a result, the less motivated students simply fill in a cognate, rather than a word that might make sense in the sentence.  Regardless, I think the ACTFL rubric works well here.  I do, however, usually include about five items here, rather than the suggested three.  This allows my students a greater possibility of success as they can score a “Minimal Comprehension” for inferring a plausible meaning for at least three (“most”) items.

Inference. I have found that this section also provides important information about my students’ overall comprehension of the text. The greatest challenge is encouraging them to include enough support from the text for their inferences. A slight change I would suggest would be to reword the rubric, which currently assesses students according to how many correct or plausible inferences they make. Since the template suggests only two questions, it seems illogical that a student who makes a “few plausible inferences” would be assessed as having “Minimal Comprehension.” In actual practice, I have assessed students more on how well they support their inferences than on the number of inferences they have made. If I were designing a rubric for this section, I would suggest the following descriptors:

  • Accomplished Comprehension: “Infers and interprets the text’s meaning in a highly plausible manner” (ACTFL wording) and supports these inferences with detailed, pertinent information from the text.
  • Strong Comprehension: “Infers and interprets the text’s meaning in a partially complete and/or partially plausible manner” (ACTFL wording) and adequately supports these inferences with pertinent information from the text.
  • Minimal Comprehension: Makes a plausible inference regarding the text’s meaning but provides inadequate  information from the text to support this inference.
  • Limited Comprehension: Makes a plausible inference regarding the text’s meaning but does not support the inference with pertinent information from the text.

Author’s Perspective. Although not all text types lend themselves to this task, I include it whenever possible.  I do, however, deviate somewhat from the suggested perspectives provided by ACTFL.  Rather than general perspectives, such as scientific, moral, factual, etc., I have the students choose between three rather specific possible perspectives.  As with identifying inferences, I believe the most important aspect of the students’ responses on this task is the textual evidence that the students provide to support their choice.  In my opinion, the ACTFL rubric for this section provides good descriptors for determining a student’s comprehension based on the textual support they provide.

Comparing Cultural Perspectives. Although I find these items difficult to write, I think this section is imperative. One of the most important reasons for using authentic resources is the cultural information that they contain. This task allows us to direct the students’ attention to the cultural information provided by the text, as well as to assess how well they are able to interpret this information to acquire new understandings of the target culture. The first challenge in writing these items is that the teacher must phrase the question in a way that enables the student to connect the cultural product/practice to a cultural perspective. This is especially difficult for novice learners, who may have very little background knowledge about the target culture(s). Because identifying a cultural perspective is such a sophisticated task, I think it’s important to provide a fair amount of guidance in these items. While the ACTFL template provides some sample questions, it’s important to realize that some of these questions do not allow the students to adequately identify a cultural perspective. In addition, many of the suggested questions assume that the students share a common culture and background knowledge. I have made many mistaken assumptions when asking my students to compare a French practice to an American one. Many of my students have not traveled outside of our community, have not had many cultural experiences, and lack basic knowledge about American history. Therefore, they do not have the background knowledge about U.S. culture to make adequate comparisons. Furthermore, a significant percentage of my students were born outside of the U.S., so any question requiring them to demonstrate knowledge of American culture is unfair. In the future, when writing a comparison question I will invite my students to compare the French practice to “a practice in a culture they are familiar with.”

My concern with the ACTFL rubric for this section is that a student is assessed mostly on his ability to connect the product or practice to a perspective. While I think this high-level thinking skill is important, I have not found it to be closely related to the students’ comprehension of the text. I have students who may wholly comprehend the text, but lack the cognitive sophistication to use what they read to make a connection to perspectives, placing them in the Limited Comprehension category. In addition, many valuable texts simply don’t include the types of information needed to make these connections. Perhaps the following descriptors might be more realistic?

  • Accomplished Comprehension: Accurately identifies cultural products or practices from the text and uses them to infer a plausible cultural perspective.
  • Strong Comprehension: Accurately identifies at least one product or practice from the text and uses it to infer a plausible cultural perspective.
  • Minimal Comprehension: Accurately identifies at least one product or practice from the text but does not infer a plausible cultural perspective.
  • Limited Comprehension: May identify a superficial product or practice that is not directly related to the text but is unable to infer a plausible cultural perspective.

Please let me know if you’re using the ACTFL template and rubrics to assess Interpretive Communication and how it’s working for you.  I have lots to learn!

 

It’s all about the feedback: Checklists to accompany Ohio’s Scoring Guidelines for World Languages

The arrival of the new Ohio Scoring Guides for World Languages, as well as an excellent post by Amy Lenord, served as an important reminder that I need to improve the feedback that I give my students. Although I have used a checklist for feedback in the past, I haven’t been completely consistent in using it as of late. Furthermore, my previous checklist was not aligned to these new scoring guidelines. It was definitely time to do some updating!

Fortunately, the Ohio Scoring Guides for Performance Evaluation provide a great framework for meaningful feedback.  Each of the rubrics includes an additional page that lists the criteria, as well as blank spaces for self or teacher feedback. Unfortunately, I know that my written comments do not always meet the students’ needs, especially on speaking assessments. The notes that I do jot down while listening to their production are most likely incomprehensible to my students.  My hurried handwriting is illegible, and it is difficult for my students to see the connection between my comments and their success on the performance. In order to address these issues, I prepared a series of checklists that I will incorporate when providing feedback using these rubrics.  For each set of criteria on the ODE rubrics I have added specific comments to target the student’s strengths, as well as a list of comments to identify suggestions for improvement.  By providing these specific comments, I hope to provide legible, focused feedback to my students on both formative and summative performance tasks.  In addition, I envision having the students do their own goal-setting by highlighting specific areas of the rubric that they would like to focus their attention on.

When developing my comments, I considered both the criteria and the comments that I find myself using over and over again. As a French teacher, I specifically addressed common errors made by English speakers, especially in terms of pronunciation and common structures. In addition, I have included an “Other” line for strengths/errors that are not specifically addressed on the checklist. It was important to me that my checklist fit on one sheet of paper for ease of use, so I tried to include only the errors my students make most often.

It is my hope that these checklists will help my students identify both their strengths and areas for improvement and streamline their progress toward higher levels of proficiency. Here are the checklists; let me know what you think! Feedback checklists

Unpacking the new Ohio Scoring Guides for World Languages


Although it may seem unfathomable to some of the younger teachers out there, I still remember the first time I saw the word “rubric” in the title of a session at a foreign language conference years ago. At the time, I had no idea what a rubric was or how it related to assessing language learners. Needless to say, that session forever changed the way that I evaluated student learning in my classroom.  I was so excited about this new way of assessing students that I started creating rubrics for everything.  At first I preferred analytic rubrics—assigning separate scores to each aspect of a written or oral product just seemed more objective.  However, I eventually realized that the quality of a performance could not necessarily be calculated by adding up the separate parts of a whole, so I switched to a holistic rubric that I tweaked periodically over the years.  I have realized this year, however, that I needed to do some major revising to reflect my current proficiency-based methodology. The descriptors I was using didn’t adequately reflect the elements of proficiency as described by ACTFL. Since my own performance is now being evaluated according to my students’ proficiency, it is important that I am methodical in providing feedback to my students that is clearly related to their progress in proficiency.  Fortunately for me, the state of Ohio has recently published a series of scoring guidelines that will help me do just that!

You can find the rubrics in their entirety here and my comments below.

http://education.ohio.gov/Topics/Ohio-s-New-Learning-Standards/Foreign-Language/World-Languages-Model-Curriculum/World-Languages-Model-Curriculum-Framework/Instructional-Strategies/Scoring-Guidelines-for-World-Languages

 

  1. Performance Evaluation. These are the rubrics designed to be used with end-of-unit assessments. There are three separate rubrics—Presentational Speaking, Presentational Writing, and Interpersonal Communication. I think that these scoring guidelines will be an invaluable asset in my assessment practices for the following reasons:
  • The heading of the rubric provides a means for the teacher to indicate the targeted performance level of the assessment. As a Level 1-5 teacher, I find it helpful to have one set of guidelines to use with all students, rather than a series of level-specific rubrics. The wording in the descriptors allows the teacher to adjust for the unit content and proficiency level with phrases such as “appropriate vocabulary,” “practiced structures,” “communicative goal,” and “targeted level.” The Interpersonal Communication rubric even includes specific descriptors for both Novice and Intermediate interaction.
  • Each rubric includes a page designed for student self-assessment and/or teacher feedback on each section of the rubric. The overall descriptors are given for each criterion, along with separate columns for strengths and areas of improvement. I think this format will allow me to provide specific, targeted feedback to my students. They will know exactly what they need to do in order to progress in their performance. As a result, I anticipate using this page alone to provide feedback on formative assessments.
  • The wording in these rubrics is well-suited to Integrated Performance Assessments. All three guidelines include a descriptor about whether the student’s response was supported with an authentic resource (or detail).
  • These rubrics convey the vital role that cultural content must play in all performances with a criterion devoted entirely to “Cultural Competence.” The presence of this descriptor will serve as an important reminder to the teacher, who must include a cultural component when developing assessments, and to the student, who must demonstrate that this knowledge has been attained.
  2. Proficiency Evaluation. These are the rubrics designed to assess the students’ overall proficiency level in Presentational Speaking, Presentational Writing and Interpersonal Communication. Therefore, a separate rubric is included for each proficiency level that is targeted in a secondary program (Novice Mid through Intermediate Mid). The design of these rubrics will enable me to clearly identify my students’ proficiency for the following reasons:
  • Each rubric is aligned to the ACTFL descriptors for the targeted proficiency level. I will no longer have to page through the ACTFL documents to find the descriptors that I need for each level.
  • Each rubric also contains Interculturality descriptors, based on the NCSSFL Interculturality Can-Do statements.
  • Each rubric contains descriptors for three sub-levels of the targeted proficiency level. This is vital for those of us who are required to measure growth over less than a year’s time.  In my district, for example, our proficiency post-test must be given in March, before many students are able to demonstrate progress to the next full proficiency level.
  • Although my current understanding is that proficiency can only be measured by unrehearsed tasks that are not related to a recent unit of study, teachers who use proficiency-based grading might use these rubrics throughout the academic year.

Because Ohio has deferred to the ACTFL rubrics for assessing Interpretive Reading and Listening, I look forward to addressing those guidelines in a future post. In the meantime, I’d love to hear others’ opinions of these new rubrics.


A Novice High IPA on “Les Loisirs”


This week my students will be completing their IPA on Les Loisirs. I’ve been really pleased with their work throughout this unit, and I’m looking forward to seeing their results on this IPA. While I began my journey into proficiency-based/non-textbook/non-explicit-grammar teaching with significant trepidation, I am thrilled with the results of my new methodologies. These students are now writing comprehensible, connected paragraphs about how they spend their free time and using a variety of present-time verbs with some accuracy. They are able to discuss these activities with their peers, and they can understand some details given by native speakers on these topics. While their writing and speech are not error-free, I did not produce perfect speakers and writers when I taught using more traditional methods either. What I know for sure is that this year was the most satisfying of my 26-year career. My students, many of whom have diagnosed learning and behavioral disabilities, are experiencing academic success and feeling proud of their achievements. I couldn’t be happier for them!

So, here it is, my penultimate French I IPA:  loisirs_ipa

For the interpretive reading task, they will read an infographic about French opinions of an ideal weekend and complete interpretive tasks based on the ACTFL template.  I have designed this assessment based on the ACTFL Can-Do “I can sometimes understand short, simple descriptions with the help of pictures or graphs.” My students have been reading increasingly complex infographics all year, and I know that they will be able to accomplish this task without much difficulty.

For the interpretive listening task, they will listen to two different news reports about leisure activities that are of interest to these students. The first is about technology-related leisure activities, and the second about sports and exercise. These resources will be significantly more difficult than previous videos, many of which have been cartoons, but I chose them because of their relevance to the topics we covered in class. The fact that many of the requested details are numbers, a notoriously difficult linguistic concept, will further challenge these students. Because this task is closer to what would be expected of an Intermediate Low-Mid learner, I will score it accordingly.

For the interpersonal task, the students will discuss their leisure activities with a partner. While I have not always written an interpersonal task that is clearly dependent on the interpretive one, it is my goal to do so as I evolve in my understanding of evaluating students’ language performance and proficiency. I have therefore included a requirement that they discuss how their leisure activities compare to those listed in the infographic. This task will address the Novice High Can-Do “I can exchange information using texts, graphs, or pictures.”

For the presentational writing task, the students will write an e-mail to a hypothetical exchange student about their leisure activities, therefore addressing the Novice High Can-Do “I can write information about my daily life in a letter, blog, discussion board, or email message.”  After receiving feedback on similar messages that they wrote throughout the unit, I think the students will be prepared for this task.

While my district and state have established the expectation that students will reach the Novice Mid level of proficiency by the end of French 1, it is my opinion that this Novice High assessment is appropriate for these learners. Because each task is based on the theme we have been studying, I have higher expectations for this performance-based assessment than I would for an unrehearsed assessment of overall proficiency.

Oops!

My apologies to anyone who tried to download the IPA from this post: http://wp.me/p4SSyG-56 . I inadvertently linked the wrong document for the sample IPA. I’ve now corrected my mistake in the original post. Please accept my sincere apologies and chalk the error up to my being an overworked, sleep-deprived teacher and not a completely incompetent one. In the future, I’d be eternally grateful to any of you who are willing to take the time to make me aware of any similar, glaring errors!

7 Steps to Creating an Integrated Performance Assessment (IPA)

Although I am embarrassed to admit it, I had never heard of an IPA (or IP-Yay, as one of my classes calls them) until May 2013, when I attended a presentation by my state’s foreign language association. Because our state was including a student progress measure on our teacher evaluation system for the first time, our association took an active role in encouraging us to use a measurement of student proficiency, rather than an assessment of vocabulary/grammatical accuracy, to document our students’ learning. Although I had been somewhat familiar with various proficiency descriptors, I had never designed my instruction to increase and assess student proficiency until attending this important session. However, as the result of this presentation and two others that I subsequently attended, I made dramatic changes to my instructional and assessment strategies in order to encourage student proficiency, rather than simply grammar and vocabulary accuracy. While I read what I found online about IPA’s, I did not have a good overall understanding of the specifics of this type of assessment until I finally stumbled across the manual found here: http://www.actfl.org/publications/guidelines-and-manuals/implementing-integrated-performance-assessment . While I make some modifications based on my own teaching situation and students’ needs, I have relied heavily on this manual when developing my own assessments. However, I am no expert in this process and have not received any specific ACTFL training. The ideas expressed here reflect only my current understanding of proficiency-based assessment. What I can say for sure is that the transition from traditional tests to IPA’s has improved student learning in my classes more than any other change I’ve made during my 25-year teaching career.

Based on my own experience, here are the steps that I suggest when designing an IPA:

1. Decide what you want to assess. Based on the principle of backwards design, writing the summative assessment is one of the very first steps in developing a unit plan. Therefore, I write my IPA before developing any of my lesson plans for the unit. I recently heard a speaker say, "You teach what you test." While this seemed counter-intuitive to me at first, as I thought about it more it made perfect sense. When we choose how to assess our students, we are demonstrating what we want them to learn. If it's not on the "test," we probably didn't think it was that important. In a proficiency-based classroom, that means that we assess our students' ability to communicate across the modes of communication. So, the first step in designing an IPA is to choose a benchmark for each mode based on the ACTFL Can-Do Statements (http://www.actfl.org/global_statements); see the planning sketch just below. Note: The bold print statement is the benchmark, and the statements which follow (preceded by a box) are examples. Sometimes I choose one of the examples, but other times I create my own based on the theme I have chosen for my unit.
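One way to keep this step concrete is to record the chosen benchmarks in a simple, reusable structure before writing any tasks. The sketch below is purely illustrative Python, not anything prescribed by ACTFL: the four modes mirror the structure of the IPA's described in this post, and the statements shown are example wording to be replaced with the actual benchmarks you choose.

    # A hypothetical IPA "blueprint": one Can-Do benchmark per mode of communication.
    # Replace the example statements with the benchmarks chosen for your unit.
    ipa_blueprint = {
        "Interpretive Reading": "I can understand familiar words and phrases in short, simple texts.",
        "Interpretive Listening": "I can recognize familiar words and phrases in a short video.",
        "Interpersonal Communication": "I can exchange information using texts, graphs, or pictures.",
        "Presentational Writing": "I can write information about my daily life in a letter, blog, discussion board, or email message.",
    }

    # Print the skeleton so it can be pasted into a unit plan.
    for mode, benchmark in ipa_blueprint.items():
        print(f"{mode}: {benchmark}")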


2. Choose an authentic written text. Based on the interpretive reading benchmark I have chosen, I find an authentic, culturally rich text that will enable the students to demonstrate that they have achieved the benchmark. The type of text that will be appropriate depends on the proficiency level of the students. Novice learners are highly dependent on visuals when interpreting, so I use infographics, picture books, brochures, catalogues, or comic strips for these students. Note: Try using Google Images, rather than just Google, when searching for texts for these students. Here are some other resources I use for Novice learners:

http://www.iletaitunehistoire.com/genres/documentaires

https://www.envolee.com/fr/catalogue/1/8-du-plaisir-a-lire (Although there is a fee for downloading these texts, I have found this to be money well spent.)

http://www.scholastic.ca/education/envol_en_litteratie/downloadablebooks.html (Also worth the fee)


While Intermediate Low to Mid learners are able to interpret lengthier texts, they are often unable to understand authentic texts written for their developmental age (in language programs that begin in high school).   These students usually enjoy tapping into their “inner child” by reading magazine articles and online material written for younger students. I find it helpful to use a children’s search engine when researching materials for these students.  Here are a few that I’ve used:


http://www.takatrouver.net/takamag/index.php

http://www.coolgo.fr/

http://www.kidadoweb.com/

http://www.lespagesjuniors.com/

http://www.webjunior.net/


When possible, I use articles from authentic children’s magazines for interpretive tasks. These materials are often more visually rich than online materials. I have found several relevant articles for French 2 students in Astrapi, while I prefer Okapi for my French 3 students. My level 4/5 students are able to interpret the texts written for teenagers in Phosphore.


3. Write the interpretive reading task. Once I have chosen the authentic text, I write the actual assessment based on the template found here: http://www.actfl.org/publications/guidelines-and-manuals/implementing-integrated-performance-assessment. While I think the template is self-explanatory, I thought I'd share a few tips for each section.

#1: Key Word Recognition. I write this section by typing the English translation of about 10 words/phrases from the text that I think the students will already know, based on prior instruction or the activities in the unit being assessed. Note: Sometimes I "cheat" here to introduce a new vocabulary word that the students will need in order to interpret the text. I do so by writing the entire phrase in which the word appears. When searching for the phrase in the text, the students will be able to use the context to determine the meaning of the targeted vocabulary item.

#2: Main Idea.  Most of my students don’t write the main idea until they have completed the Supporting Details section. It would probably make sense to move this section accordingly when typing an IPA.  When assigning a literary text, I change the wording here so that the students are writing a 2-3 sentence summary, rather than a main idea.

#3: Supporting Details. This is the core section of the task, where students will really have a chance to show you what they know. Remember that these are statements, not questions. The students are not providing answers; they're writing details to show that they understood more than the main idea of the text. I try to give them a chance to really show off here by writing very general statements, so that they can provide as many details as possible. Although the template suggests writing 8 statements (3 of which will be distractors), I often write more, based on the length of the text (a small sketch of how statements and distractors can be shuffled together appears after this list). I have noticed that I often learn more about my students' comprehension from their errors on the distractors than from the details they provide for the "correct" statements. Because most of my texts are informational in nature, the students can use their background knowledge to provide logical answers without actually understanding the text. However, when they provide a detail for a distractor, I know that they are using what they know rather than what they've read. Note: I generally omit the requirement that the students "write the letter of the detail next to where it appears in the text." I found this step difficult to assess, and it didn't supply me with any additional information about what the students understood.

#4: Organizational Features.  In the interest of full disclosure, I have to admit that I sometimes omit this section.  While I recognize that identifying the organizational structure is an important top-down process, I do not feel that assessing this skill provides much additional information about the students’ comprehension of a text.  For novice learners who are interpreting infographics or other simple texts, the organization is so obvious that the students often see this step as needless busywork.

#5: Guessing Meaning from Text. My students consistently score the lowest on this section of the interpretive task. As suggested in the ACTFL manual, I type the entire sentence in which the word is located, as well as a description of where in the text the sentence is located. The students often respond with an English word that is similar in spelling, but not in meaning, to the French word. This indicates to me that they are not using the context clues to infer the meaning of the word. Because of the difficulty of this task, I usually include more than three items in this section so that students have a greater opportunity for success. In addition, I'll continue to encourage them to rely more on context for these items.

#6: Inferences. The ACTFL template gives great examples of appropriate inference tasks.  Make sure to note that for novice learners it’s appropriate to give a choice of possible inferences and have the students choose which one is supported by the text and provide justification.  When I began using this template, I was pleasantly surprised by how much this section demonstrates the depth of the students’ comprehension.

#7: Author’s Perspective. While I think this task is very important for my upper level students, it is often not applicable for the types of texts that Novice learners read.  Although I suppose we could say that the perspective of the “author” of a menu is that the food is delicious, I don’t think requiring the students to explain this is a good use of their time or mine.

#8: Comparing Cultural Perspectives. I again rely heavily on the suggestions given in the ACTFL template for this task. I have found, however, that the students need more specific directions for this section. For example, I have had to explain that "It would be written in English" is not an acceptable response to the question regarding how the article would be different if it were written for an American audience. I have also been quite liberal about the type of answer that might demonstrate a perspective, rather than just a product or practice. While this distinction is an important one, Novice learners are sometimes not able to even identify the relationship between practices and perspectives in their own culture, let alone in a culture about which they know very little.

#9: Personal Reaction. I generally omit this section from the interpretive task.  Because the instructions state that the response should be in the target language, I consider it a presentational rather than an interpretive task.  I do consider this prompt, however, when I design the presentational task for the unit.
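Here is the sketch promised in the Supporting Details tip above: one way, in a few lines of Python, to shuffle supported statements and distractors together and letter them for the student copy. The statements themselves are invented examples for a hypothetical school-supplies text, not items from an actual IPA.

    import random

    # Hypothetical statements for an interpretive reading task.
    # "Supported" statements can be backed up with details from the text;
    # distractors cannot.
    supported = [
        "The article names several school supplies students must bring.",
        "The list mentions a specific number of notebooks.",
        "Some supplies are only needed for one subject.",
        "Parents are asked to label certain items.",
        "One item on the list is optional.",
    ]
    distractors = [
        "The article gives the price of each item.",
        "Students are told which store to shop at.",
        "The list includes a school uniform.",
    ]

    statements = supported + distractors
    random.shuffle(statements)  # mix the distractors in with the supported statements

    # Letter the statements (a, b, c, ...) for the student handout.
    for i, statement in enumerate(statements):
        print(f"{chr(ord('a') + i)}. {statement}")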


Although writing the interpretive task is the most time-consuming part of designing an IPA, I'm only a little bit embarrassed to say that I think it's kind of fun. I like the creativity involved in writing the tasks and always look forward to seeing what the students are able to achieve.


4. Choose an authentic recorded text. In my initial research on IPA's, I was surprised to find that most authors advocated using either a reading OR a listening interpretive task, rather than both. Even the ACTFL IPA manual has very little to say about listening comprehension. In my opinion, we do a great disservice to our students if we do not adequately address the importance of interpreting oral texts. As a matter of fact, I believe that of all communicative tasks, this is the one for which the students need us the most. They may be able to use Google Translate to interpret written texts or produce comprehensible written messages, but they will not be able to understand what they hear without being exposed to a wealth of authentic recorded texts in an instructional setting. For these reasons, I nearly always include both an interpretive reading and an interpretive listening task on my IPA's. When selecting an appropriate "text," I rely almost entirely on YouTube videos. Because Novice learners need a lot of visual support, I often use cartoons with them. Trotro l'Ane, Petit Ours Brun, Caillou and Peppa Pig are a few cartoon characters whose videos I've used successfully with Novice learners. More proficient students are able to interpret amateur videos made by French teenagers on a variety of topics, or authentic instructional videos. My French 4/5 students can generally interpret news videos related to the topic of study.


5. Write the interpretive listening task. Unfortunately, the ACTFL IPA manual provides very little direction when it comes to interpreting an oral text. While teachers are encouraged to use the same interpretive template to assess both reading and listening, this would not work very well given the constraints of my teaching situation. Because I have only 8 computers in my classroom, students circulate between the reading and listening sections of the IPA. Therefore, I need to limit the amount of time that an individual student spends at the computer. The nature of the tasks on the interpretive template would require the students to listen to the videos in their entirety several times, which would be extremely time-consuming without necessarily demonstrating deeper understanding. In addition, many of the sections on this template would not adequately address listening comprehension. It goes without saying that any word or phrase that is written in the target language on the IPA becomes a reading rather than a listening assessment. While I will continue to evolve in my understanding of listening comprehension, I am currently relying heavily on English comprehension questions for the videos I include in my IPA's. Although I write the questions in the order in which the students will hear the relevant content (to save time), I try to vary the types of questions in order to assess the various levels of comprehension that are included in the ACTFL interpretive template (see the sketch below). In addition to informational questions about what is happening in the video (supporting details), I also ask the students to provide a main idea or summary after watching the video. When possible, I will also include a few items which require the students to infer the meaning of a new word, based on the context of the sentence (with the understanding that the inclusion of the written sentence will negate the role of listening). When appropriate, I might also include an inference or cultural context question. Note: Because some of the cartoon videos might not come from the target culture, they may not provide a cultural context. However, I consider them to be authentic in that the French translation was produced for a target-culture audience.
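For illustration only, here is a small Python sketch of how question types might be varied and kept in viewing order. The questions are hypothetical examples for an imagined school-supplies video, and the tags simply mirror the comprehension levels mentioned in this step.

    # Hypothetical English comprehension questions for a listening task, tagged
    # with the level of comprehension each item is meant to assess. Items appear
    # in the order in which students will hear the relevant content; the main
    # idea comes last because students answer it after watching.
    questions = [
        ("supporting detail", "What is the first school supply the student shows?"),
        ("supporting detail", "How many notebooks does she say she has?"),
        ("word inference", "Based on the sentence provided, what do you think 'trousse' means?"),
        ("inference", "Why do you think she shows her favorite item last?"),
        ("main idea", "After watching, summarize in one sentence what this video is about."),
    ]

    # Number the questions for the student copy.
    for number, (level, question) in enumerate(questions, start=1):
        print(f"{number}. [{level}] {question}")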


6. Write the interpersonal task. Based on my observations, I think this may be the area in which we have the greatest opportunity for growth. Many of us are labeling oral performances as interpersonal when there is little or no negotiation of meaning. It seems to me that if a student knows in advance what s/he or the other speaker will say, it is not an interpersonal task. In addition, if a speaker is required only to give answers, rather than also to ask questions, the task cannot be considered interpersonal. While Novice Low to Novice Mid learners can only communicate using memorized phrases, this does not mean that they should be expected to reproduce memorized dialogues. For these learners, I often rely on information-gap activities to provide contextual support while at the same time allowing for unscripted language use. For example, students might be given a series of pictures and asked to discuss them in order to ascertain whether each one is the same or different. This type of activity allows for a continuum of proficiency (some will use single words, while others will use simple sentences) and requires each student to both ask and answer questions. As students become more proficient, they can manage more open-ended tasks, such as discussing familiar subjects: preferences, activities, family, eating habits, etc. While the actual task will be highly dependent on the theme of the unit, the benchmarks in the ACTFL manual provide many good examples. Because my upper level classes are organized around a novel or film, rather than a broader theme, I often assign role plays for interpersonal speaking tasks. Rather than asking the students to replay a scene from the film/book, I create a hypothetical situation (one which could have, but did not, happen) and ask them to role play it. Although I might allow them advance practice in class, they will not be assigned a role or a partner until I call on them to be assessed (a quick way to randomize this is sketched below). Note: With more open-ended tasks, I think it's important to give the students a minimum time limit. While students might prepare some of their statements in advance, based on the topic, I think it's the "stretch" that takes place when they run out of things to say that increases proficiency in this skill.
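For anyone who wants to draw roles and partners at random at the moment of assessment, a sketch like the following is one simple way to do it. This is Python with a made-up roster, not part of any ACTFL procedure; substitute your real class list.

    import random

    # Hypothetical class roster; replace with the real one.
    roster = ["Ana", "Ben", "Chloe", "Dev", "Emma", "Felix", "Grace", "Hugo"]

    random.shuffle(roster)

    # Pair students off; the first in each pair plays Role A, the second Role B.
    # With an odd number of students, the last one joins the final pair as Role C.
    pairs = [roster[i:i + 2] for i in range(0, len(roster) - 1, 2)]
    if len(roster) % 2 == 1:
        pairs[-1].append(roster[-1])

    for pair in pairs:
        roles = ", ".join(f"{name} (Role {chr(ord('A') + j)})" for j, name in enumerate(pair))
        print(roles)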


7. Write the presentational task. Most of my IPA's include a written presentational task, but not a spoken one. Because I assess the students' interpersonal skills, it would often be repetitive to also assess their presentational speaking, as they would use much of the same vocabulary and many of the same structures. In addition, multiple class periods are required to listen to 30 students present on a particular topic, which I have determined is not the best use of my limited instructional time. When developing written presentational tasks, I again rely heavily on the wording used in the IPA manual. Thus, Novice Low-Mid students write lists, and Novice High-Intermediate Low students write hypothetical blog posts, e-mails, etc. in which they express their personal experiences as they relate to the theme. Ideally, this task will be dependent on the interpretive task. For example, when my French 1 students read an authentic post by a family looking for an au pair, they wrote a similar post for their own family. Likewise, after my French 2 students read a magazine article about a Canadian student's typical day, they wrote a magazine article about their own typical day. Because many of my upper level students will be taking the AP test, I rely heavily on prompts requiring the persuasive speech that is expected on this exam. As with the role plays, I often ask them to write a letter from one character in a film/novel to another, persuading him/her to perform some action. Given the nature of each presentational context, in most cases I assign a rough draft of the task as a formative assessment. I then provide feedback on this draft before having the students produce a final draft on the IPA.


Whew, I think it took longer to write about it than it does to actually write an IPA! If you'd like an additional example (or if you're using the French 2 unit I described in my previous post), here's the IPA developed for my current French 2 unit: School Unit IPA

As usual, I’d love to hear how IPA’s are working for you as well as any suggestions that you have!