Archive for the ‘Brown Bags’ Category
Jeffrey Cordell is Instructional Pedagogy Librarian in the Undergraduate Library. As part of the Assessment Working Group, he has helped to roll out the new evaluation forms being used to assess teaching across the library.
Using Assessment to Build Instruction Strategy
Over the past two years, the library has designed a set of evaluation forms for instruction sessions; over the course of fall semester, we generated reports on those evaluations for each instructor. At a workshop in late February, individual reports were given out and discussed. I want to give a sense of what we talked about in that meeting and to share some thoughts about evaluations and their uses for teaching.
If you weren’t able to attend the workshop and would like a copy of your report, contact Jen Green at firstname.lastname@example.org and ask for one. As you look over your report, it’s important to remember that while it can tell you much about your teaching, it is not a core sample of who you are as a teacher. Anytime we represent the experience of teaching and learning numerically, we engage in a kind of fiction or wishful thinking: the belief that we can capture experience quantitatively, that what happens in the classroom can be adequately expressed through numbers. It would be nice to think so, because it would mean that the experience of one classroom could be replicated exactly in another, as if all instructors with a 4.5 were doing exactly the same thing. Rather, the kinds of numbers we see on course evaluation forms are rough approximations of the experience of being a student and inevitably do little to convey the rich complexity of what happens in the classroom, where each instructor works from his or her distinct, incommensurable qualities as a teacher.
Because they are such crude tools, course evaluations can sometimes seem to measure only the entertainment value of a class. However, I do think they measure something useful, and that something is how students feel about their experience in the classroom. At first glance, “feelings” may seem too subjective, even irrelevant to the process of finding information, to bother measuring. After all, learning is not about feelings; it’s about acquiring information. But learning is always bound up with one’s emotions, and, as Plato told us long ago, learning at its best matches the thrill of falling in love, and is, indeed, indistinguishable from it. When we truly grasp an idea, we feel it lodge in ourselves, and that always carries with it change, change in our perceptions and, therefore, in who we are. More pragmatically, there is a sense in which learning becomes more readily available to our consciousness when we’re aware that we are learning. So the usefulness of course evaluations comes in part from their measure, however tentative and approximate, of students’ perception of their learning in a class. While course evaluations cannot do the work of systematic assessment of skills (for that, we need tests), they can suggest to us, however crudely, how students perceive their experience in the classroom and whether they find that experience valuable.
The reports that the library has generated also help us, as instructors, to get a fuller sense of how our teaching fits into the larger mission of the library. The reports present your numbers against the mean for all the responses from a given evaluation form (that is, when you look at your report, you can see whether your numbers are higher than, lower than, or similar to the library mean for that report, be it “intro,” “advanced,” etc.). That, in turn, can give you a sense of what, in your classes, students seem to appreciate and where you might want to focus your attention as you revise your teaching techniques and in-class exercises. Often, it’s not even so much a matter of changing how you teach as it is explicitly underscoring for students what it is they’re learning. An interactive in-class exercise may not be perceived as interactive by your students until you say that it is. Additionally, the evaluation forms give a sense of what we, as instructors at the library, value and hope to achieve. On that level, these reports are part of an ongoing conversation that may take up questions such as: What do we consider successful numbers at the library? How do we want to use them (for example, they could be used as part of teaching portfolios)? What kinds of institutional support would we like to have available to instructors who find things they want to change in their teaching based on these reports? And so forth.
I have had many years’ experience reading and using course evaluations, but, being new to the world of the library, I am curious to hear your opinions of the new evaluations and reports, and your ideas about how you are going to use the information they give us.
On April 11, Karen Reiman-Sendi led an Instructor College-sponsored brown bag discussion of using web-based guides in instruction, touching on the research literature surrounding the use of guides and our local data about patron use of guides. The slide presentation from the event is available. Attendees collaboratively developed the following best practice statements:
Best Practices for Course-Related and Workshop-Related Guides (April 2011)
- Use a course-related/workshop-related guide:
  - to solve a specific problem/to accomplish a task or assignment
  - to provide easy accessibility to needed information
  - as an opportunity to work with a faculty member
- Think about logical organization of material to meet needs of your audience and the identified educational outcomes
- Avoid too much text and extraneous information, and keep a balance between content and white space
- Include some visual interest appropriate to the purpose of the guide (but don’t rely on color to do this)
- Use screencasts or screenshots appropriate to the task at hand, to illustrate points, strategies, concepts, etc. (See MLibrary Instructional Videos guide)
- Avoid using too many tabs (“pages”) but do use tabs to help define “modules” or sections of the guide
- Help students choose the appropriate resources, information, strategies, etc. by using smaller box content, by using headings within boxes, by providing some “navigation” within and between your tabs (“pages”)
- Provide not just a list of appropriate resources for a specific assignment, but include strategies for understanding the assignment/for completing the assignment, based on your educational goals/outcomes for the guide
- Keep in mind that guides will be viewed on mobile devices and by individuals with visual challenges (accessibility)
- Take care in using drop-down menus in tabs (“pages”) because these areas may be difficult to see or to navigate
- Wherever possible collaborate with colleagues on guide creation to eliminate unnecessary duplication
- Provide uploaded files of your instructional slides and handouts, as appropriate
- Keep the guide up to date and/or take it “offline” when no longer needed
- When providing a list of resources, include short annotations to help students choose
- Provide links to related guides where appropriate
- Provide a link to the course/workshop/session evaluation
- Include your profile box on the first tab (“page”) of guide as well as the Ask a Librarian contact box
- Background of guide must be white
- Title of course guide should be the course catalog label, e.g. AMCULT 209: History of American Popular Music
- Guide description should include the purpose of the guide, and ideally the name of the instructor and academic term, e.g. Key information resources and services for completing the honors thesis. Prof. John Doe. Winter 2011.
- Instructor profile must appear on “home” tab/page in right column. The profile box can be labeled “Library Contact” or “Library Instructor” or “Workshop Instructor” etc.
- “Library Help” box appears below the instructor profile box and contact information
- Tabs/pages will vary from course to course but might include “Introduction,” “Your Assignment,” “Finding Articles,” “Finding Data,” “How to [do something],” “Citation Styles,” etc.
- When published, include the “course_guide” or “technology_guide” tag
- Set guide to “private” at the end of the current term if the content will not be taught in the following term
This was an open discussion by library instructors about teaching ArticlesPlus. The description from the announcement is below, followed by notes from the discussion.
The Instructor College Steering Committee invites you to a brown bag discussion about this semester’s teaching of the new ArticlesPlus search feature. Come hear what’s working — or not working — for your colleagues, and share your own insights, successes, or concerns. Even if you haven’t taught ArticlesPlus yet, join us for this discussion to get ideas about how it might work in your situation.
While most people have pointed out ArticlesPlus in their instructional sessions, as someone said, “teaching seems like an overstatement…. There’s not much to teach because it’s so easy.” Several people felt that it’s a good place for students to start, particularly undergraduates, and that it’s very good for interdisciplinary research. However, be sure to stress to students that it doesn’t cover everything.
When demonstrating ArticlesPlus, instructors often compare it to Google Scholar, ProQuest, or some other specific database. This allows the instructor to highlight certain features of individual databases and/or of ArticlesPlus. People also mentioned that they don’t always bring up ArticlesPlus – it just depends on the instructional session.
Several instructors also mentioned that they use the Advanced screen, either putting in a basic search and then using the facets to limit, or using specific fields and getting a tighter results page from the start.
Some people start with ArticlesPlus then move to individual databases, and some do the opposite.
If students are excited about ArticlesPlus, they might be more likely to return; ease of use is important. There are worse things than students being excited about something that isn’t 100% perfect. Most of the feedback the web team gets has been positive, and there is a high satisfaction level overall.
People liked that ArticlesPlus has a built-in database recommender, but the downside is that its recommendations are uneven and we can’t control them.
The biggest complaint is that there’s no way to modify the search, and that the initial search disappears. A fix is in development, and should be moved over to the production server soon.
Sometimes the sheer number of results is overwhelming for students, so teaching them about limiting by facet is important. At the same time, others mentioned that it can be hard to home in on what you’re trying to focus on – this is a good time to move to a more specific database.
The links to specific articles through MGetIt don’t always work. Judy asked that people be sure to report it to EAU Support when a link doesn’t work, because they can tweak the priority list.
Someone said it seemed to be a bit slower than the “out of the box” version like they have at Dartmouth.
People like the facets and that the interface looks similar to Mirlyn, but they sometimes struggle with what is available there. For example, some of the subjects are too broad for interdisciplinary research; unfortunately, the publishers supply these keywords and we don’t have any control over them. Again, if using these subjects starts to get frustrating for students, it’s a good time to point out individual databases.
Should it be marketed as a “start here” place? If so, we need to be able to get them easily and clearly to other options such as research guides, specific databases and so on.
ArticlesPlus is great for tracking down items from incomplete citations, and Ask a Librarian staff (as well as others) are using it more and more to help people find what they’re looking for.
ArticlesPlus gives people something they can succeed at, therefore creating a positive experience and giving them confidence. Specific databases are sometimes more confusing, so again, ArticlesPlus is a good starting point as they are learning to do research.
Judy asked if there had been any problems specifically with newspapers – none were reported.
Have there been any issues with recent issues not being available when you know they should be there? Someone did note that they found a gap in coverage for something in particular. People were encouraged to report these specifics to EAU Support so they can be tracked/fixed.
Someone asked if Summon will be included in Ulrich’s so you can see where a particular item is indexed. Not likely, says the web team, because the indexes would be institution specific, and the information is constantly changing. However, they can bring it up with ProQuest.
Other Comments (not specific to ArticlesPlus)
In the context of a study about whether students learn from screencasts, Amanda Peters, Angie Oehrli, and Julie Piacentine have found that students find the result page (from the main web page search box) confusing, and that not all students understand what a “database” is. There was a suggestion for more standardized descriptions of the databases to make it more clear to students what they will find.
On October 29th, the Instructor College hosted a conversation on assessment with Dr. Larry Gruppen. A summary of the conversation is below.
Can you assess students in library sessions when they all have different levels of ability, the session is so short, and we don’t know the students?
As students come into class, chat with them – it helps to build rapport and you can start to get feedback before class even starts. Ask if they’ve had a library session before and what would be useful to them. The entire class can also be asked this question before the formal instruction begins.
Throughout the class you can use formative assessment in a variety of ways. It can be as simple as a visual check of comprehension: “Yeah, they got it, it’s clicking.” You can scan the class to check facial expressions, note attentiveness, whether or not they are following instructions, and what they are actually doing at their workstations (a second instructor is really helpful in this case). The time you invest in any assessment is only worth it if it pays off. Invest in getting to know students a little bit if it will pay off for you in terms of getting honest feedback.
Classroom Assessment Techniques (CATs)
The CATs on the handout are very good, but there are many more. They can be informal, low-cost and easy to apply. Some of the CATs are a bit artificial; for example, a concept map may not fit with your teaching methods whereas a background probe might be worth the investment of time. CATs offer structured ways to assess understanding. Since we teach in a distracting environment, feedback in the session is good.
Methods we can try to apply in our classes:
- Build assessment into the instructional strategy by giving students a problem to work on. Because there’s never enough time in an instruction session, assessment has to be part of the teaching itself. This isn’t easy to do in 30 minutes, but it does emphasize that you have a goal for the class.
- Assigning them tasks can provide feedback rather than trying to infer it “from their level of consciousness” (i.e. how awake they are!).
- Make sure the goal of your instruction drives what you teach.
- How do you get past the different levels of knowledge that may exist in a given class? It’s a problem for anyone who just lectures, because lecturing provides no feedback.
- Ask students to present to their peers. This can be efficient: peers tend to provide honest feedback and to address issues they have struggled with themselves, which their classmates likely share. The tradeoff is that you lose some of your own instructional expertise during that time. The goal is to see the product of their application of what you’ve taught them.
- Pair students who you’ve learned already know some of your content with the newbies, and have them teach or mentor the new learners. Peers are often more comfortable asking questions about something they don’t know in a peer-to-peer situation.
- Alternatively, to be more proactive and deliberate, get the advanced learners started on something else more challenging while you train the newbies.
- But asking students to evaluate their peers doesn’t work well; it’s tough for them to do this honestly.
- Building a rapport fosters trust, which leads to students providing honest feedback on their learning.
- If you’ve built rapport, make the CATs into questions and talk with the students.
- If it’s important to you to get information from those who might not participate in a class discussion, use writing as a way to hear from the quieter students.
- But to get a more broadly-based understanding of issues (e.g. “muddiest point”) you might still have everyone write something down.
- Finally, you have to ask yourself who cares about this information. If the administration cares, then try to create a structured and rigorous assessment (which might be expensive). If you care, you have to figure out how much time and effort you want to put into it.
- Concept mapping takes commitment and time, but it can help crystallize ideas and how things come together; it helps you see how students are thinking about things. Concept mapping works best if you give them terms they have to organize rather than asking them to generate the concepts themselves.
- Some of the techniques might not help the class you are currently teaching, but may help the next one.
Designing good assessment questions is an almost universal problem. It’s easy to assess surface knowledge (the whats and the wheres), but can you give students a problem they can solve on their own? Don’t just assess how they feel about the class; find out whether they can do something now that they couldn’t do before. It is possible for learners to be present and even listen and follow along in a workshop without ever evaluating their own understanding. Knowledge is easy to assess but not necessarily what you want to assess, which, in the case of library instruction, is students’ application of tools and skills.
We’re relying on students to take what they’ve learned and apply it later; without a purpose for using the tool, they won’t use it.