Quality Assurance in Blended Learning
Edited by Kelvin Thompson, Ed.D.
Portions of the following chapter are adapted from “What is Online Course Quality?” by Kelvin Thompson under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license and “Design of Blended Learning in K-12” in Blended Learning in K-12 under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license. Portions of the following chapter labeled as the property of the Commonwealth of Learning are used in compliance with the Commonwealth of Learning’s legal notice and may not be re-mixed apart from compliance with their repackaging guidelines.
Questions to Ponder
- How will you know whether your blended learning course is sound prior to teaching it? How will you know whether your teaching of the course was effective once it has concluded?
- With which of your trusted colleagues might you discuss effective teaching of blended learning courses? Is there someone you might ask to review your course materials prior to teaching your blended course? How will you make it easy for this colleague to provide helpful feedback?
- How are “quality” and “success” in blended learning operationally defined by those whose opinions matter to you? Has your institution adopted standards to guide formal/informal evaluation?
- Which articulations of quality from existing course standards and course review forms might prove helpful to you and your colleagues as you prepare to teach blended learning courses?
Blended Course Quality
A definitive statement of what constitutes the best combination of online and face-to-face learning experiences is impossible. No such statement exists for the best combination of traditional practices, much less for the newer world of blended learning. Singh and Reed (2001) state, “Little formal research exists on how to construct the most effective blended program designs” (p. 6). However, observers have begun to collate principles that, at least anecdotally, lead to greater success.
It is not uncommon to speak, in generic terms, of “good” or “bad” blended learning courses, without specifying the attributes that contribute to these designations. Neophytes may do this because they have no basis for a more differentiated description, while those intimately acquainted with blended courses may use such labels as a shorthand reference. However, as seasoned blended course instructors/designers know, there are countless nuances that distinguish one course from another (and, for that matter, that distinguish one semester’s offering of a course from another semester’s offering of the same course). Until such time as patterns within these characteristics are identified and associated with positive or negative outcomes, though, it is difficult to justify labeling a blended course with such simplistic descriptors.

Nevertheless, administrators and faculty feel pressured from time to time to compare one course to another or one instructor to another in their attempts to ensure that blended courses produce various desirable outcomes (e.g., sufficient enrollment, adequate retention, academic rigor, student success, student satisfaction) at rates comparable to face-to-face courses (as if meeting face-to-face is, itself, a mark of excellence) or to the level of satisfaction of an accrediting agency. (Students might be motivated to make such comparisons between course modalities as well, but, undoubtedly, the qualities in which some students are interested will vary markedly from the interests of faculty and administrators.) Thus, there is likely always to be some degree of comparison since it seems that there is always someone concerned with whether this course is “good enough,” and it is certainly appropriate to ensure that baseline acceptability is met across specific domains. If this were not enough, some individual faculty, motivated by their own enlightened self-interest, look for guidance in determining what improvements might be made to their courses.
In either case, however, the question is whether we have justification for the judgments we make about such courses.
Accrediting bodies (e.g., Southern Association of Colleges and Schools, Western Association of Schools and Colleges, Northwest Commission on Colleges and Universities, etc.) and education compact organizations (e.g., Southern Regional Education Board, Western Interstate Commission for Higher Education, Midwestern Higher Education Compact, etc.) have articulated broad requirements or statements of good practice for academic programs in higher education (including online courses if not blended courses). Such statements typically define levels of minimum acceptability for particular dimensions (e.g., curriculum and instruction, institutional context and mission, evaluation and assessment, etc.) of institutional offerings. While some statements have direct implications for what happens within courses, these guidelines are necessarily broad in order to facilitate compliance at the institutional level. Articulating analogous quality standards at the course level is difficult for at least three reasons. First, there is no one authoritative body that can (or is willing to) address minimum levels of acceptability for blended learning in all its manifestations within the diversity of approaches found in even one state’s higher education institutions. Thus, there are no universal standards for blended course quality. Second, if such standards did exist, it is difficult to create an evaluative tool which could be used consistently across all courses, programs, and institutions. Third, if such a tool were available, it is actually quite time-consuming to evaluate an individual course. It is difficult to imagine an organization willing to commit to such an undertaking for all higher education institutions within its jurisdiction.
Online Course Standards
In recent years, course-level standards have begun to emerge for online courses even if comparable standards have not been articulated to the same degree for face-to-face courses or blended learning courses. (Perhaps it is because of the ubiquity of face-to-face courses and the lack of a consistent definition of a blended course that these modalities have not received the level of attention given to online courses. However, some online course standards do identify areas of relevance for blended learning courses.) Nevertheless, online course standards do provide the closest analogue to articulations of quality for blended learning courses. An overview of online course standards follows.
Specific standards of online course quality have emerged not from traditional authoritative bodies but from for-profit companies (e.g., Blackboard’s Exemplary Course Program), groups of institutions (e.g., Quality Matters), or, more typically, from individual institutions. Most of these groups embed their standards in a review form (i.e., a checklist or rubric) and include a summative, ordinal rating. The advantage of such review forms is that, ostensibly, they are quite easy to implement for faculty, designers, and administrators for whom time is already in short supply. After reviewing an online course with a review form, one is usually left with a “punch list” of items on which to focus one’s attention, making evident how a course may be improved before it is taught next. Table 1 provides a selection of online course standards/review forms.
Table 1. Selected examples of online course standards
| Standards/Review Form | URL |
| --- | --- |
| Blackboard’s Exemplary Course Program | http://www.blackboard.com/Platforms/Learn/Resources/Community-Programs/Meet-Your-Peers/Exemplary-Courses.aspx |
| Online Course Evaluation Project | http://www.montereyinstitute.org/ocep |
| CSU Chico’s Rubric for Online Instruction | http://www.csuchico.edu/celt/roi |
| Michigan Virtual University’s Standards for Quality Online Courses | http://standards.mivu.org/standards (best viewed in Internet Explorer) |
| Texas Virtual School Network’s Scoring Rubric for Online Courses | http://www.txvsn.org/AboutTxVSN/CourseReview/ReviewProcess/iNACOLStandards.aspx |
| Mountain Empire Community College’s Online Course Quality Review Form | http://www.me.vccs.edu/forms/peer-review.pdf |
| Florida Gulf Coast University’s Principles of Online Design | http://www.fgcu.edu/onlinedesign |
Limitations of Online Course Standards
Sets of standards such as those described above do have their limitations vis-à-vis online course quality. These limitations have to do with the prescriptiveness, credibility, scope, and atomism of such standards groupings. Each of these will be addressed in turn.
It is the nature of standards to prescribe how things should be. However, it is challenging to formulate prescriptive statements in such a manner as to fit all contexts which give rise to online courses. For instance, the statement, “evaluating and validating Web-based information in completing assignments” certainly applies to many online courses, but if a course does not feature assignments that require students to consult Web-based resources, this standard is obviously irrelevant. Also, in prescribing what should be, there is a tendency to focus on minimum acceptability to the exclusion of excellence or innovation. Review instruments which incorporate actual rubrics (e.g., CSU Chico’s Rubric for Online Instruction) mitigate this limitation by presenting upper-end requirements as a counterpoint to the “bare minimums,” but one has to question whether it is likely that the usefully finite number of categories in such rubrics will account for all manner of innovations.
The provenance of standards affects their credibility. For instance, most online course standards are written by small groups of individuals with some personal experience with online teaching and learning. Although there is nothing wrong with a group’s expertise serving as the basis for such standards, it is not uncommon for online course standards to be accepted uncritically, with no recognition that they arose from a particular context with its own idiosyncratic needs. Interestingly, there are numerous instances in which standards from one review instrument have been copied-and-pasted into new review instruments as if the standards are axiomatic. There are rarely any explicit connections made between standards and theory-based or research-based frameworks. If online course standards are to have enduring significance in addressing quality, they must be credible.
Nearly all sets of online course standards bear the imprint of an overt instructional design emphasis (e.g., instructional objectives, constructivist influence, technology-dominated, etc.). While, of course, it is reasonable for this field to leave its mark on what is deemed acceptable in online courses, such an emphasis typically leads to a focus on the designed environment of the course to the exclusion of the experience of instructors and students in the teaching/learning process. The problems this causes can perhaps more easily be seen if we look for an analogous set of relationships within a different setting. For instance, one can design and construct a building, a house, or a classroom. But such constructions are intended to support the lives of those who interact, who live, within their walls. While a tour of an unoccupied kindergarten classroom and an inventory of its resources might provide some indication of the nature of the teaching and learning that occur there, it is the lived experiences of the students and teachers, their actual interactions, in which teaching and learning are made manifest. Limiting the scope of online course quality to considerations of the designed environment results in a significant blind spot. This should be avoided.
The final limitation of online course standards to be presented here is the necessity for such standards to be atomistic. That is, online courses are viewed only as an aggregation of disparate parts, reducible to simple “should” statements. As discussed above, the activity of reviewing courses in any kind of collective way necessitates having a scalable process. This includes using a review instrument that is relatively quick to complete. However, it must be observed that, by their nature, atomistic approaches lend themselves to quantification, sums, and scores. Holistic approaches, by contrast, result in one integrated, as-complete-as-possible picture which is more difficult to quantify (i.e., nominal classification). Thus, a simple course review instrument is unlikely to reveal the complexities of an online course instructional experience, but, with the above caveats in mind, such an instrument is likely to reveal whether some agreed-upon minimum acceptability has been achieved. (By contrast, see the Online Course Criticism Model (Thompson, 2005) for a holistic, non-standards-based, robust approach to evaluating online courses. Further, the Online Course Evaluation Project provides a rare balance between most checklist-based reviews and the intensity of the criticism model.)
Apart from institutional efforts to foster quality in online and blended courses, perhaps the best use of quality standards is by individual instructors in self-assessment and informal peer-reviews of teaching effectiveness. A consideration of teaching effectiveness appears below.
The following section is excerpted from “Evaluating and Improving Your Online Teaching Effectiveness” by Kevin Kelly in the Commonwealth of Learning’s Education for a Digital World in compliance with the Commonwealth of Learning’s legal notice and may not be re-mixed apart from compliance with their repackaging guidelines.
Teaching effectiveness describes instructors’ ability to affect student success. It is usually defined according to several factors, such as how well instructors organize courses, how well they know the course material, how clearly they communicate with students, how frequently they provide timely feedback, and other criteria. In the classroom, effectiveness sometimes depends on the instructor’s enthusiasm or disposition. During fully online and blended learning courses, students often need more structure and support to succeed because their course activities usually require them to take greater responsibility for their own learning success. Therefore, many of the criteria take on even more importance when evaluating online teaching effectiveness.
Online teaching is often held to higher standards than classroom teaching, and sometimes these standards have nothing to do with the teacher’s ability. For example, a technological breakdown can have a negative impact on students’ evaluation of an instructor’s work, though the instructor is rarely responsible for the technical failure.
To succeed, you should find some allies to help. If you are new to online teaching and learning, let your students know. They will usually give you a lot of leeway. Some of the students may offer to help you set up or facilitate technology-based activities or at least respond positively to your requests for technological help. Overall, you will find it well worth the effort to evaluate and improve your online teaching effectiveness.
There are many ways to evaluate teaching effectiveness in either the physical or virtual environments. Getting pointers and advice before the term begins can save you from making revisions later. Formative feedback, collected during an ongoing course, improves that specific course. Summative feedback, collected after a course ends, improves the next iterations. Feedback that applies to the instructor’s process can also improve other courses.
Ask a peer to let you review an online course to see what you like or do not like about how it is constructed, how the instructor(s) provide feedback, how students are assessed, and so on. If you are inheriting an online course from someone else, try to get feedback about what has already been done. Before your course begins, ask a peer to assess how appropriate the learning objectives are for the topics, as you might do for a face-to-face course.
Depending on your school district or campus, seek additional people who might provide comprehensive feedback in a faculty development centre or an academic technology unit. You might also try to find a fellow teacher who has supplemented face-to-face instruction, taught a hybrid course, or taught a fully online course. Even if this person works in a different department or unit, it is helpful to share your online teaching experiences with someone who has gone through the process.
If this is your first time teaching an online course, or using online components for your face-to-face or hybrid course, you do not have to use every online tool or strategy. Instead, choose one or two strategies based on your learning objectives.
Writing personal teaching goals is one more practice you can try as you prepare the online environment and the materials and activities to go in it. Creating an online teaching journal allows you to track your thoughts and actions over time. Including personal teaching goals among the first entries will get you off to a good beginning.
You can conduct formative feedback for a number of reasons: to check how things are going at a certain point; to evaluate the effectiveness of a specific assignment or resource; or to gauge student attitudes. The frequency with which instructors obtain feedback can range from once per session to once in the middle of the term. Direct methods to collect formative feedback include, but are not limited to, the following: peer review and self-evaluation, online suggestion box, one-minute threads, polling, and focus groups.
As important as student feedback can be, student evaluations by themselves are not sufficient. Solicit peer review of specific resources, activities, or assessment strategies, your course structure, your communication strategies, or anything else about which you might have concerns. If you cannot find anyone in your school, department, or college who is also teaching online, you can ask school or district administrators, academic technology staff members, or faculty development centre staff members to identify prospective peer mentors for this type of feedback. In some cases, the staff members themselves may be able to help you as well.
Another strategy is to create benchmarks for yourself and take time each week to see how you are doing. For example, if you set a goal to answer a certain number of discussion threads in a particular forum, keep track of how many replies you submit, and make adjustments. If you want to return all students’ written assignments in a certain amount of time, note how many you were able to complete within your self-imposed deadline. This will help you create more realistic expectations for yourself for future assignments.
Conduct summative feedback for a number of reasons: to check how things went, to evaluate the effectiveness of a specific assignment or resource, or to gauge student attitudes about the course as a whole. The summative feedback will be a useful set of data for course redesign. While the current students will not benefit from any changes you make, future students will have a better experience.
Similar to the formative feedback surveys, you can use a closing survey to find out how students feel about specific aspects of your online teaching or their overall experience. There are numerous survey tools available. Some are stand-alone, online survey tools and some are integrated into learning management systems.
Most importantly, do try again. Regardless of how you feel about your first attempt at online teaching, it will get better each time you try. Online course offerings provide students with more flexibility. Hybrid, or blended learning, courses can combine the best of both worlds. Online environments that supplement fully face-to-face instruction can help students to stay on task, to plan ahead, to access resources at any time of day, and more. In all three types of online learning, the pros outweigh the cons. Most students will appreciate your efforts, which is a good thing to remember if you ever question why you are teaching online in the first place.
Singh, H., & Reed, C. (2001). A white paper: Achieving success with blended learning. Centra Software. Retrieved June 26, 2011 from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.821&rep=rep1&type=pdf
Thompson, K. (2005). Constructing educational criticism of online courses: A model for implementation by practitioners. Unpublished doctoral dissertation, University of Central Florida, Orlando, FL. Retrieved July 7, 2011 from http://purl.fcla.edu/fcla/etd/CFE0000657