Blended Assessments of Learning
Edited by Linda Futch and Baiyun Chen.
BlendKit Reader Second Edition Review Team included Linda Futch, Wendy Clark, Loretta Driskel, Wilma Hodges, Cub Kahn, Apostolos Koutropoulos, Denise Landrum-Geyer, and John Okewole. If the second edition is helpful, thank the review team. If not, blame the editor.
BlendKit Reader First Edition edited by Kelvin Thompson, Ed.D.
Portions of the following chapter are adapted from “Design of Blended Learning in K-12” in Blended Learning in K-12 under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license. In addition, portions of the following chapter are adapted from “Assessment and Evaluation” by Dan O’Reilly and Kevin Kelly and “Evaluating and Improving Your Online Teaching Effectiveness” by Kevin Kelly in the Commonwealth of Learning’s Education for a Digital World under the terms of a Creative Commons Attribution-ShareAlike 3.0 International license.
Questions to Ponder
- How much of the final course grade do you typically allot to testing? How many tests/exams do you usually require? How can you avoid creating a “high stakes” environment that may inadvertently set students up for failure/cheating?
- What expectations do you have for online assessments? How do these expectations compare to those you have for face-to-face assessments? Are you harboring any biases?
- What trade-offs do you see between the affordances of auto-scored online quizzes and project-based assessments? How will you strike the right balance in your blended learning course?
- How will you implement formal and informal assessments of learning into your blended learning course? Will these all take place face-to-face, online, or in a combination?
A blended learning class is like any other – when teaching takes place, it is imperative that assessment is provided to check the depth of students’ learning. Looking back at the learning objectives and design documents (e.g., Course Blueprint) can help answer assessment questions. For instance, Riley et al. (2014) suggest that faculty ask themselves: “How well does your course make connections between learning objectives, course activities, and selection of site tools to accomplish the assignments? How well do face-to-face and out of class time learning activities complement each other?” (p. 164).
The most crucial step in each unit of instruction is preparing students to transfer their learning to new contexts. If learning is not transferred from the place of learning to practical application, there can be no positive return on the time invested in creating, implementing, and evaluating instruction. Students are smarter than we might think: if a lesson doesn’t apply to something tangible or can’t be used in real life, expect them to ask, “When are we ever going to use this stuff?” Make your objectives clear to students. The learning standards must be addressed, yes, but also find a real-life application to deepen your students’ understanding of the material. Otherwise, much of your time, and your students’ time, will have been wasted.

A second look to ensure that students have indeed learned the objectives might trigger revisions, allowing for more (or better) class activities and instructor feedback; this should happen before any evaluation strategy. Technology can simplify the task of supporting learning transfer. A lesson taught through online instruction, or with technology as its main tool, often provides a built-in application: students see more clearly how the concepts are used in real-life situations, and because the lesson was applied practically, they retain the information and skills much longer.
Despite the importance of real life application of knowledge and skills, perhaps the most common type of assessment is still the traditional multiple choice exam. Placing such tests (or non-graded self-assessment versions) online is one of the most popular approaches to blended assessment of learning. (As a result we will devote a portion of this chapter to considering issues associated with implementing traditional tests in blended courses.) Instructors designing such assessments might ask themselves more detailed questions such as: “Is my test content-valid, based upon the methods of content presentation?” “Should my test include a short review time via a traditional classroom setting, or would an online review better prepare my students for assessment?” “Should the test be performed online or in the presence of the instructor?”
Online tests make for easy and quick grading by the instructor or teaching assistant, but test security may be diminished depending on the software and implementation methods used. Tests taken exclusively in the classroom using paper and pencil, however, negate the affordances of technology. Faculty who evaluate their students’ performance using a mixture of tests – some online, some offline – have experienced more fruitful outcomes. Supplying examples to read as text, online or offline, proves helpful. Presenting video explanations or examples online, where students can replay a snippet of the content, gives enough exposure to solidify an idea or concept. Any tool available to students should be considered as a way to improve learning. For instance, Riley et al. (2014) observe that when implementing online tests, “‘scaffolding’ – the integration of [online] self-assessments and review modules – is integral to a more in-depth understanding of the material” (p. 170).
Caution must be practiced when using online tests in a blended course. If this method was never practiced during the preceding unit of instruction, the student finds herself at a bit of a disadvantage when being tested. Instead of devoting proper time to the non-technical concepts taught, the student might be fighting her way through the technical tool she must use to perform the task at hand. Conversely, Walker et al. (2014) found that non-credit, online practice exams can actually benefit student performance on in-class, graded exams.
The online environment does provide blended learning instructors with opportunities to implement a variety of learning assessments using new and innovative tools. The following section reviews two broad categories of assessments, formal and informal, that can help shape how you assess your students in blended courses. (Many of the ideas presented here are applicable in the face-to-face and online portions of blended courses, but we’ll frame each assessment description for the online context specifically given that most readers will likely have more familiarity with the face-to-face environment.)
- Formal assessments provide a systematic way to measure students’ progress. They also contribute to the final grade, which indicates a student’s mastery of the subject (e.g., midterms and final exams).
- Informal assessments generally give faculty members the ability to gauge their students’ comprehension of course material without assigning grades. Furthermore, they can be used to let students practice the material prior to a formal assessment (e.g., self-tests).
Note: The subcategories in this section are adapted from those used by Wiggins and McTighe (1998) in Understanding by Design.
Multiple choice and short answer tests (or quizzes) are useful for assessing students’ abilities to recognize and recall content. They are also fairly easy to grade, and when faced with a large class, you can automate grading depending on the question type. However, these online tools also arguably provide students with “more ways to be academically dishonest” (Watson and Sottile, 2010).
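For illustration, the automatic scoring such quiz tools perform amounts to comparing each response against an answer key. Here is a minimal sketch in Python; the answer key and response format are hypothetical, and a real LMS handles all of this internally:

```python
# Minimal sketch of auto-scoring a multiple-choice quiz.
# ANSWER_KEY and the response dictionaries are hypothetical examples;
# an LMS stores these internally and supports more item types.

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

def score_quiz(responses: dict) -> float:
    """Return the fraction of answer-key questions answered correctly."""
    correct = sum(
        1 for question, answer in ANSWER_KEY.items()
        if responses.get(question) == answer
    )
    return correct / len(ANSWER_KEY)

result = score_quiz({"q1": "b", "q2": "d", "q3": "c"})  # 2 of 3 correct
```

This is, of course, exactly why such items are fast to grade at scale – and why they are limited to what can be matched against a key.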
In the design of effective assessments of learning, Hoffman and Lowe (2011, January) note that the “focus must be on student learning, not student control.” Particularly when dealing with online assessment (e.g., the ubiquitous auto-scored multiple choice quiz tools within learning management systems), it is tempting to design a testing environment in which all variables are controlled and student responses do nothing but reveal students’ mastery of course objectives. However, as Dietz-Uhler and Hurn (2011) note, “the evidence, although scant, suggests that academic dishonesty occurs frequently and equally in online and face-to-face courses” (p. 75). It is counterproductive to adopt an adversarial stance as we attempt to fence in students to prevent them from cheating (in any modality). Nevertheless, there are steps we can take to make online testing more effective. Many of these are applicable to face-to-face environments as well.
Creating Effective Online Tests
Hoffman and Lowe (2011, January) identify a number of techniques for creating effective online assessments. These are grouped into online assessment tool features and assessment design strategies.
Online Assessment Tool Features
Online quizzing tools typically provide several affordances that instructors can combine:

- Randomization of test items. Depending upon how the instructor uses the tool, this may range from merely shuffling the order in which the same set of items appears to each student, all the way to sophisticated alternative test versions in which items in various content categories and at different levels of difficulty are dynamically generated for each student (i.e., each student receives a different but equivalent test).
- Availability windows. The instructor may make the test available only within a certain window of opportunity (e.g., an entire week or just one evening).
- Time limits. Limits can also be placed on the period between the opening of the quiz and its submission (e.g., a few minutes to multiple hours). Relatedly, the instructor can allow students to see the entire test at once or only one test item at a time.
- Rules for assessment completion. Students may be required to complete the quiz in one sitting once it is launched, or they may have the option to start the quiz, log out, and come back later (within whatever time restrictions have been established).
- Proctoring. Online assessment tools also support proctoring if the instructor (or institution) chooses to undertake the logistical arrangements involved in vetting proctors. An approved individual receives a password to unlock the quiz and remains present while the student takes the test. The proctor may be asked to verify the student’s identity and/or ensure compliance with certain test-taking protocols (e.g., open/closed book). Commercial tools for remote proctoring have appeared on the market in recent years; their functionality ranges from taking pictures of students during an exam to a remote individual watching students via live video feed.
The conundrum here is whether the institution or student pays the fees to proctor an exam. [See a review of online proctoring tools provided by Faculty eCommons].
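The dynamically generated equivalent versions described above can be sketched in code. The following Python example is illustrative only – the item bank, its categories, and the item labels are all hypothetical. The key idea is that every version draws one item from each (category, difficulty) pool, so versions differ per student yet cover the same ground:

```python
# Sketch of generating equivalent randomized test versions.
# ITEM_BANK is a hypothetical pool of item IDs keyed by
# (content category, difficulty); an LMS would manage this for you.
import random

ITEM_BANK = {
    ("photosynthesis", "easy"): ["P-E1", "P-E2", "P-E3"],
    ("photosynthesis", "hard"): ["P-H1", "P-H2"],
    ("respiration", "easy"): ["R-E1", "R-E2", "R-E3"],
    ("respiration", "hard"): ["R-H1", "R-H2"],
}

def build_test(seed: int) -> list:
    """Draw one item per (category, difficulty) pool, then shuffle the order.

    Every version covers the same categories at the same difficulty
    levels, so versions are equivalent even though each student may
    see different items in a different order.
    """
    rng = random.Random(seed)  # seed per student for reproducibility
    items = [rng.choice(pool) for pool in ITEM_BANK.values()]
    rng.shuffle(items)
    return items

version = build_test(seed=42)
```

Seeding the generator per student means a version can be regenerated later (e.g., to review a disputed item) without storing every version.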
Assessment Design Strategies
Apart from the affordances of the online testing tools, online auto-scored assessments may also benefit from well-designed multiple-choice items with an emphasis on application and higher-level thinking. While many online quizzes (especially many of those available as supplemental instructor resources) focus on low-level factual recall, multiple-choice items may be written at the higher application, analysis, synthesis, or evaluation levels. Such items often involve some sort of scenario aimed at promoting learning transfer from one context to another. Additional strategies might require students to view a chart/graph and select the most accurate interpretation from among several alternatives or even to collaborate with classmates in selecting the best justification statement for why a given answer is correct prior to individually submitting their quizzes.
For detailed information on the kinds of assessment design strategies summarized above along with numerous supporting resources, you may wish to visit Hoffman and Lowe’s (2011, January) web page at https://online.ucf.edu/faculty-seminar01/. In particular, if you would like a refresher on writing effective multiple choice items at various cognitive levels, you may wish to review the following PDF documents:
- Using Bloom’s Taxonomy to Create Multiple-Choice Questions
- Effective Assessment Examples
- Question Improvement Suggestions
Many of the above techniques for creating more effective assessments are relevant for online quizzes, traditional face-to-face exams, and online testing implemented in a face-to-face environment. There is a range of automated assessment options in a blended learning course.
Assessments that require a subjective analysis are often more difficult and time-consuming to grade; however, this type of assessment is appropriate for gauging how well students are able to apply the concepts learned in class. Within most learning management systems (LMS) there are a variety of tools to facilitate these types of assessments. Such platforms typically include the following tools at a minimum:
- Discussion area – often used for generating student-to-student interaction based on an instructor-specified critical thinking challenge.
- Assessment tool – can be used to construct essay-type questions (which must be manually scored).
- Assignment tool – can be used to submit papers, essays, or other types of assignments.
Authentic student assessment strategies for the online environment
Often when we talk of assessment in an online environment, we think of automated quizzes and grade books. While useful in many circumstances, automated quizzes do not always accurately reflect students’ abilities, especially when you ask students to work at a higher level of difficulty in the cognitive learning domain, to demonstrate a physical skill in the psychomotor learning domain, or to evaluate attitudes in the affective learning domain (see description of learning domains and degrees of difficulty at http://www.nwlink.com/~donclark/hrd/bloom.html). Authentic assessment—assessing student abilities to apply knowledge, skills, and attitudes to real world problems—is not only possible in an online environment; it is increasingly popular.
When you consider what types of online assessment strategies to choose, the list will be very similar to the print-based strategies that you know and already use. However, there are a few additional assessment strategies that the online environment makes possible. The list below is not comprehensive by any means, nor does it show which tools could be used to facilitate the different types of assessment strategies. Some of these activities may require students to have access to equipment or software applications to complete them.
Table 14.1. Assessment strategies and disciplines that may commonly use them
| Type of assessment strategy | Disciplines that might use each assessment strategy |
| --- | --- |
| lab manual | physical sciences |
| computer code | computer science |
| technical writing | technical and professional writing |
| reflection | teacher education, health education, social work |
| observation log | teacher education, nursing, laboratory sciences |
| image gallery | art, industrial design |
| web page or website | multiple |
| presentation | business, public administration |
| video | theatre arts (monologue), marketing |
Notice that some assessment strategies require participation by someone other than the student. For example, a K–12 master teacher would submit an observation log for a credential student performing his or her student teaching. Similarly, a health clinic supervisor would submit an observation log for a nursing student related to his or her abilities to draw blood for testing. A theatre arts student may need someone to record his or her monologue.
Some assessment strategies allow students to get creative. It is important to make sure that students have access to, or ability to use the technologies required to complete the tasks, but once you do that, you could ask students to create a video advertisement that demonstrates the application of marketing principles, an audio recording that demonstrates mastery of inflection and tone when speaking Mandarin Chinese, or a PowerPoint slide show with audio clips that demonstrates competency with teacher education standards. The age-old practice of storytelling has been “remastered” as digital storytelling through blogs, wikis, podcasts, and more. Students are taking advantage of these new media formats to illustrate that they have met certain requirements. In some cases, each product becomes an “asset” or “artifact” in a larger electronic portfolio that contains items for a single class, an entire program or department, or all curricular and co-curricular work that a student does.
Regardless of what products students provide to show their abilities, you need a way to evaluate their work.
After determining how students will show that they can meet the learning objectives, it is time to choose an evaluation method. You can use a number of tools, ranging from a simple checklist of criteria to a rubric that contains the same criteria along with a range of performance levels describing the degrees to which students meet the criteria.
You can use qualitative or quantitative degrees to evaluate criteria (see Table 14.2 for an example of each). Share the checklist or rubric with students before they begin the assignment, so they know what will be expected of them. In some cases, instructors create the entire rubric, or portions of it, with the students.
Table 14.2. Portion of a student presentation assessment rubric
| Criterion | Degree 4 (highest) | Degree 3 | Degree 2 | Degree 1 (lowest) |
| --- | --- | --- | --- | --- |
| Student supports main presentation points with stories or examples. | Student effectively used stories and/or examples to illustrate key points. | Presenter used stories and/or examples somewhat effectively to illustrate some key points. | Presenter used some unrelated stories and/or examples that distracted from key points. | Presenter did not use stories or examples to illustrate key points. |
| Covers project completely, including: 1) Needs Assessment Objectives, 2) Extant Data Analysis, 3) Data Collection Methods, 4) Brief Summary of Data, 5) Collected Data Analysis, 6) Recommendations | Presentation covered all 6 of the areas to the left. | Presentation covered 4 or 5 of the areas to the left. | Presentation covered 2 or 3 of the areas to the left. | Presentation covered 1 or 0 of the areas to the left. |
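To show how a rubric translates ratings into a grade, here is a small sketch. The criterion names, weights, and 0–3 performance scale are hypothetical choices for illustration, not part of the chapter’s rubric:

```python
# Sketch of converting rubric ratings into a weighted percentage score.
# RUBRIC maps each (hypothetical) criterion to a weight; ratings use a
# 0-3 scale, where 3 is the highest performance level.

RUBRIC = {
    "uses stories/examples to illustrate key points": 1.0,
    "covers all six project areas": 2.0,  # weighted more heavily
}

MAX_LEVEL = 3  # highest rating on the performance scale

def rubric_score(ratings: dict) -> float:
    """Return a 0-100 score from per-criterion ratings."""
    earned = sum(RUBRIC[criterion] * level
                 for criterion, level in ratings.items())
    possible = sum(weight * MAX_LEVEL for weight in RUBRIC.values())
    return 100 * earned / possible

score = rubric_score({
    "uses stories/examples to illustrate key points": 3,
    "covers all six project areas": 2,
})
```

Weighting lets you signal which criteria matter most; sharing the weights along with the rubric tells students where to spend their effort.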
Preparing an Assignment for Assessment
The first step to assessing online work is to prepare each assignment. Since students may not have you around to ask questions, you need to anticipate the types of information that students need. There are some standard items to include in your instructions for all types of online assignments:
- Name of the assignment (This should be the same name as listed in the syllabus).
- Learning objective(s) to which this assignment relates.
- When the assignment is due.
- Any resources that you recommend using to complete the assignment.
- Expectations (length, level of effort, number of citations required, etc.).
- Level of group participation (individual assignments, group or team projects, and entire class projects).
- Process (how students turn in the assignment, if they provide peer review, how peers give feedback, how you give feedback).
- Grading criteria (include rubric if you are using one).
By including these items, you give students a better idea of what you want them to do.
Informal assessments are an integral part of any quality course. Many blended learning faculty incorporate these types of assessments into their courses to increase their presence in the online environment and to keep track of their students’ learning using tools within the learning management system (LMS) or publicly available alternatives if necessary. Approaches to informal assessment vary. For instance, some LMSs (or free online tools) allow faculty to create practice exams/self-tests for students to complete. While unscored, these informal assessments often provide data for the instructor to review as one indicator of student learning. As one example, in the context of an introductory biology course, Walker et al. (2014) studied the comparative effect of non-credit, online practice exams on students’ performance on in-class graded exams. This rigorous study found that “students who took these practice exams achieved significantly higher scores on the corresponding for-credit exams…” (p. 154). Since the study controlled for potential intervening variables, “the results show that the benefit of practice exams is not simply an artifact of student self-selection” (p. 154). In a different discipline, Riley et al. (2014) found that online self-assessment quizzes as part of online modules in a blended writing course were “crucial for the students’ subsequent execution of the class’ main essay assignments” (p. 168).
As an additional approach to informal assessment, summative and formative evaluations can be conducted by collecting anonymous input from students during and after the course using either a survey tool within the LMS or one of the many free web-based survey tools. Following is a more substantive description of a few other approaches to informal assessment in the online environment.
The one-sentence summary is another classroom assessment technique that I adapt to the online environment. Designed to elicit higher level thinking, a one-sentence summary demonstrates whether or not students are able to synthesize a process or concept. Students answer seven questions separately: “Who? Does What? To Whom (or What)? When? Where? How? And Why?” Then they put those answers together into one sentence. Angelo and Cross (1993) also describe this exercise in their book about classroom assessment techniques. Examples I have seen include assigning nursing students to write a one-sentence summary of a mock patient’s case, as nurses are often required to give a quick synopsis about each patient, and asking engineering students to write a summary about fluid dynamics in a given situation.
It is fairly easy to use this technique online. You can set up a discussion forum to collect the student entries. The online environment also makes it fairly easy to engage students in a peer review process and to provide timely feedback.
When looking at the results of the students’ summaries, you can identify areas where large numbers of students did not demonstrate an understanding of the topic or concept. The most common problem area for students revolves around the question “Why?” Figure 24.4 is an example of a one-sentence summary submitted via discussion thread. The instructor’s reply gives suggestions for improvement and shows the student how the instructor interpreted the sentence components.
My Sentence
by Student B—Friday, 2 September, 12:35 PM
In order to adequately address teaching effectiveness an instructor needs to use an effective tool to measure specific activities or deficiencies in student performance by using techniques including but not limited to: surveys, analysis of performance, and questionnaires.
Re: My Sentence
by Instructor—Sunday, 4 September, 08:31 PM
This is a good start. WHEN does it happen? Keep in mind that the process does not end with using a data collection tool. There is analysis of the process before the course begins, and after collecting the data. Also, WHERE does it happen? Is this online, in the classroom, or both?
In order to adequately address teaching effectiveness [7 WHY] an instructor [1 WHO] needs to use an effective tool to measure specific activities or deficiencies [2 DOES WHAT] in student performance [3 TO WHOM] by using techniques including but not limited to: surveys, analysis of performance, and questionnaires [6 HOW]
Figure 24.4 Example one-sentence summary student submission with instructor’s reply
Student-generated test questions
Ask students to create three to five test questions each. Tell them that you will use a certain number of those questions on the actual test. By doing this, you get the benefit of seeing the course content that the students think is important compared to the content that you think they should focus on. You can make revisions to your presentations to address areas that students did not cover in their questions. If there are enough good student questions, you can also use some for test review exercises.
In this chapter we have considered how formal and informal learning assessments might be implemented in blended courses. We have focused on the online environment of blended courses, but many of the assessment principles are applicable to the face-to-face context as well. As we turn our attention in the next chapter to the experiences with content and activities we design for students in blended courses, we need to remember that the content and activities are merely a means to an end (i.e., assessment of students’ learning) rather than the ends themselves.
Angelo, T. A. & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.
Dietz-Uhler, B. and Hurn, J. (2011). Academic dishonesty in online courses. In Smith, P. (Ed.) Proceedings of the 2011 ASCUE Summer Conference. Myrtle Beach, SC. Retrieved from http://www.ascue.org/files/proceedings/2011-final.pdf
Hoffman, B. and Lowe, D. (2011, January). Effective online assessment: Scalable success strategies. In Faculty Seminars in Online Teaching. Seminar series conducted at the University of Central Florida, Orlando, FL. Retrieved from https://online.ucf.edu/faculty-seminar01/
Riley, J.E., Gardner, C., Cosgrove, S., Olitsky, N., O’Neil, C., and Du, C. (2014). Implementation of blended learning for the improvement of student learning, In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.
Walker, J.D., Brooks, D.C., Hammond, K., Fall, B.A., Peifer, R.W., Schnell, R., and Schottel, J.L. (2014). Practice makes perfect? Assessing the effectiveness of online practice exams in blended learning biology classes, In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.
Watson, G. and Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 13(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring131/watson131.html
Wiggins, G., and McTighe, J. (1998). Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.