MLibrary Instructor College

An online forum for University of Michigan Library instructors

Using the New MLibrary Evaluation Forms

Jeffrey Cordell is Instructional Pedagogy Librarian in the Undergraduate Library.  As part of the Assessment Working Group, he has helped to roll out the new evaluation forms being used to assess teaching across the library.

Using Assessment to Build Instruction Strategy

Over the past two years, the library has designed a set of evaluation forms for instruction sessions; over the course of fall semester, we generated reports on those evaluations for each instructor.  At a workshop in late February, individual reports were given out and discussed.  I want to give a sense of what we talked about in that meeting and to share some thoughts about evaluations and their uses for teaching.

[Photo: a Scantron form, by Flickr user COCOEN]

If you weren’t able to attend the workshop and would like a copy of your report, contact Jen Green at greenjen@umich.edu and ask for one.  As you look over your report, it’s important to remember that while it can tell you much about your teaching, it is not a core sample of who you are as a teacher.  Anytime we represent the experience of teaching and learning numerically, we engage in a kind of fiction or wishful thinking that says that we can capture experience quantitatively—that what happens in the classroom can be anything like adequately expressed through numbers.  It would be nice to think so, because it would mean that the experience of one classroom could be replicated exactly in another, as if all instructors with a 4.5 were doing exactly the same thing.  Rather, the kinds of numbers we see on course evaluation forms are rough approximations of the experience of being a student and inevitably do little to convey the rich complexity of what happens in the classroom, where each instructor works from his or her distinct, incommensurable qualities as a teacher.

Because they are such crude tools, course evaluations can sometimes seem to measure only the entertainment value of a class. However, I do think they measure something useful, and that something is how students feel about their experience in the classroom. At first glance, “feelings” may seem too subjective, even irrelevant to the process of finding information, to bother measuring. After all, learning is not about feelings; it’s about acquiring information. But learning is always bound up with one’s emotions, and, as Plato told us long ago, learning at its best matches the thrill of falling in love, and is, indeed, indistinguishable from it. When we truly grasp an idea, we feel it lodge in ourselves, and that always carries with it change, change in our perceptions and, therefore, in who we are. More pragmatically, there is a sense in which learning becomes more readily available to our consciousness when we’re aware that we are learning. So the value of course evaluations comes in part from the measure, however tentative and approximate, that they give of students’ perception of their learning in a class. While course evaluations cannot do the work of systematic assessment of skills (for that, we need tests), they can suggest to us, however crudely, how students perceive their experience in the classroom and whether they find that experience valuable.

The reports that the library has generated also help us, as instructors, to get a fuller sense of how our teaching fits into the larger mission of the library. The reports present your numbers against the mean for all the responses to a given evaluation form (that is, when you look at your report, you can see whether your numbers are higher than, lower than, or similar to the library mean for that form, be it “intro,” “advanced,” etc.). That, in turn, can give you a sense of what students seem to appreciate in your classes and where you might want to focus your attention as you revise your teaching techniques and in-class exercises. Often, it’s not even so much a matter of changing how you teach as it is explicitly underscoring for students what it is they’re learning. An interactive in-class exercise may not be perceived as interactive by your students until you say that it is. Additionally, the evaluation forms give a sense of what we, as instructors at the library, value and hope to achieve. On that level, these reports are part of an ongoing conversation that may take in questions such as: What do we consider successful numbers at the library? How do we want to use them (for example, as part of teaching portfolios)? What kinds of institutional support would we like to have available to instructors who find things they want to change in their teaching based on these reports? And so forth.
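
Purely as an illustration of the arithmetic such a report implies (and not a description of the library’s actual reporting tools), here is a minimal sketch in Python that compares one instructor’s average score on a given form type with the library-wide mean for that form. The file name and column names ("evaluations.csv", "form_type", "instructor", "score") are assumptions made up for the example.

    # Hypothetical illustration only: compare one instructor's average score
    # on a given evaluation form type with the library-wide mean for that form.
    # The CSV layout (columns "form_type", "instructor", "score") is assumed.
    import csv
    from statistics import mean

    def compare_to_library_mean(path, form_type, instructor):
        all_scores, my_scores = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["form_type"] != form_type:
                    continue
                score = float(row["score"])
                all_scores.append(score)          # every response to this form
                if row["instructor"] == instructor:
                    my_scores.append(score)       # just this instructor's responses
        return mean(my_scores), mean(all_scores)

    mine, overall = compare_to_library_mean("evaluations.csv", "intro", "Your Name")
    print(f"Your mean: {mine:.2f}; library mean for 'intro' sessions: {overall:.2f}")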

I have had many years of experience reading and using course evaluations, but, being new to the world of the library, I am curious to hear your opinions of the new evaluations and reports, and your ideas about how you are going to use the information they give us.

Written by Instructor College

April 25, 2011 at 3:23 pm

Assessment Conversation with Larry Gruppen

On October 29th, the Instructor College hosted a conversation on assessment with Dr. Larry Gruppen.  A summary of the conversation is below.

Formative Assessment

Can you assess students in library sessions when they all have different levels of ability, the session is so short, and you don’t know the students?

As students come into class, chat with them – it helps to build rapport and you can start to get feedback before class even starts. Ask if they’ve had a library session before and what would be useful to them. The entire class can also be asked this question before the formal instruction begins.

Throughout the class you can use formative assessment in a variety of ways. It can be as simple as a visual check of comprehension: “Yeah, they got it, it’s clicking.” You can scan the class to check facial expressions, attentiveness, whether or not they are following instructions, and what they are actually doing at their workstations (a second instructor is really helpful here). The time you invest in any assessment is only worth it if it pays off, so invest in getting to know students a little bit if it will pay off for you in terms of getting honest feedback.

Classroom Assessment Techniques (CATs)

The CATs on the handout are very good, but there are many more. They can be informal, low-cost, and easy to apply. Some of the CATs are a bit artificial; for example, a concept map may not fit with your teaching methods, whereas a background probe might be worth the investment of time. CATs offer structured ways to assess understanding, and since we teach in a distracting environment, feedback during the session itself is valuable.

Methods we can try to apply in our classes:

  • Build assessment into your instructional strategy by giving students a problem to work on. There’s never enough time in an instruction session, and this can’t be done easily in 30 minutes, but it does emphasize that you have a goal for the class.
  • Assigning students tasks can provide feedback directly, rather than leaving you to infer it “from their level of consciousness” (i.e., how awake they are!).
  • Make sure the goal of your instruction drives what you teach.
  • How do you get past the different levels of knowledge that may exist in a given class? It’s a problem for anyone who just lectures, because they get no feedback. Some approaches:
    • Ask students to present to their peers. Having students present in class can be efficient because peers tend to give honest feedback and to address issues they have run into themselves, which their classmates may share. The tradeoff is that you give up some of your own expertise in instruction in exchange for the efficient use of time. You want to see the product of their application of what you’ve taught them.
    • Pair students who you’ve learned already know some of your content with the newbies and have them teach or mentor the new learners. Students are often more comfortable asking questions about something they don’t know in a peer-to-peer situation.
    • Alternatively, to be more proactive and deliberate, get the advanced learners started on something more challenging while you train the newbies.
    • Asking students to evaluate their peers doesn’t work, though; it’s tough for them to do this honestly.
  • Building rapport fosters trust, which leads students to provide honest feedback on their learning. If you’ve built rapport, turn the CATs into questions and talk with the students.
  • If it’s important to you to get information from students who might not participate in a class discussion, use writing as a way to hear from the quieter ones. To get a more broadly based understanding of issues (e.g., the “muddiest point”), you might still have everyone write something down.
  • Finally, ask yourself who cares about this information. If the administration cares, try to create a structured and rigorous assessment (which might be expensive). If you care, you have to figure out how much time and effort you want to put into it.
  • Concept maps take commitment and time, but they can help crystallize ideas and how things come together, and they help you see how students are thinking about things. Concept mapping works best if you give students the terms they have to organize rather than having them start with the concepts.
  • Some of these techniques might not help the class you are currently teaching, but they may help the next one.

Wrap Up

Designing good assessment questions is an almost universal problem. It’s easy to assess surface knowledge (the whats and the wheres), but can you give students a problem they can solve on their own? Don’t just assess how they feel about the class; find out whether they can do something now that they couldn’t do before. It is possible for learners to be present, and even to listen and follow along in a workshop, without evaluating their own understanding. Knowledge is easy to assess, but it is not necessarily what you want to assess, which, in the case of library instruction, is students’ application of tools and skills.

We’re relying on students taking what they’ve learned and applying it later; without a purpose for using the tool, they won’t use it.

Written by Instructor College

November 3, 2010 at 3:02 pm