
Quality Assurance in Blended Learning

Third Edition
BlendKit Reader Third Edition edited by Linda Futch, Baiyun Chen, and Sue Bauer. The Review Team included Cub Kahn, Apostolos Koutropoulos, and Elizabeth Robinson.

The BlendKit Reader Second Edition Review Team included Linda Futch, Wendy Clark, Loretta Driskel, Wilma Hodges, Cub Kahn, Apostolos Koutropoulos, Denise Landrum-Geyer, and John Okewole. If the second edition is helpful, thank the review team. If not, blame the editor.

Originally edited by Kelvin Thompson, Ed.D.

Portions of the following chapter are adapted from “What is Online Course Quality?” by Kelvin Thompson under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license and “Design of Blended Learning in K-12” in Blended Learning in K-12 under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license. In addition, portions of the following chapter are adapted from “Evaluating and Improving Your Online Teaching Effectiveness” by Kevin Kelly in the Commonwealth of Learning’s Education for a Digital World under the terms of a Creative Commons Attribution-ShareAlike 3.0 Unported license.

Questions to Ponder

  • How will you know whether your blended learning course is sound prior to teaching it? How will you know whether your teaching of the course was effective once it has concluded?
  • With which of your trusted colleagues might you discuss effective teaching of blended learning courses? Is there someone you might ask to review your course materials prior to teaching your blended course? How will you make it easy for this colleague to provide helpful feedback?
  • How are “quality” and “success” in blended learning operationally defined by those whose opinions matter to you? Has your institution adopted standards to guide formal/informal evaluation?
  • Which articulations of quality from existing course standards and course review forms might prove helpful to you and your colleagues as you prepare to teach blended learning courses?

Blended Course Quality

It is fitting that we consider course quality as the culminating chapter in the BlendKit Reader.

It is not uncommon to speak, in generic terms, of “good” or “bad” blended learning courses, without specifying the attributes that contribute to these designations. Neophytes may do this because they have no basis for a more differentiated description, while those intimately acquainted with blended courses may use such labels as a shorthand reference. However, as seasoned blended course instructors/designers know, there are countless nuances that distinguish one course from another (and, for that matter, that distinguish one semester’s offering of a course from another semester’s offering of the same course). Until patterns within these characteristics are identified and associated with positive or negative outcomes, though, it is difficult to justify labeling a blended course with such simplistic descriptors. Nevertheless, administrators and faculty feel pressured from time to time to compare one course to another or one instructor to another in their attempts to ensure that blended courses produce various desirable outcomes (e.g., demonstrated mastery of learning objectives, sufficient enrollment, adequate retention, academic rigor, student success, student satisfaction) at rates comparable to face-to-face courses (as if meeting face-to-face is, itself, a mark of excellence) or to the satisfaction of an accrediting agency. (Students might be motivated to make such comparisons between course modalities as well, but, undoubtedly, the qualities in which some students are interested will vary markedly from the interests of faculty and administrators.) Thus, some degree of comparison is inevitable, since there is always someone concerned with whether a given course is “good enough,” and it is certainly appropriate to ensure that baseline acceptability is met across specific domains. If this were not enough, some individual faculty, motivated by their own enlightened self-interest, look for guidance in determining what improvements might be made to their courses. In either case, however, the question is whether we have justification for the judgments we make about blended courses.

A definitive statement of what constitutes the best combination of online and face-to-face learning experiences is impossible. No such statement exists for the best combination of traditional practices, much less for the newer world of blended learning. In the early years of online and blended courses, Singh and Reed (2001) noted, “Little formal research exists on how to construct the most effective blended program designs” (p. 6). Since the publication of this statement, two volumes of Blended Learning: Research Perspectives have been published (Picciano and Dziuban, 2007; Picciano, Dziuban, and Graham, 2014). In the latter volume, a number of quality differentiators are identified (e.g., rigorous learning assessment, Riley et al., 2014; responsiveness to learner characteristics, Skibba, 2014 and Dziuban, Hartman, and Mehaffy, 2014; student engagement, Vaughan et al., 2014 and Dringus and Seagull, 2014), with the authors of the volume’s final chapter summing up that “[c]onclusively, the data show that high quality faculty development is the cornerstone of effective blended programs” (Dziuban, Hartman, and Mehaffy, 2014, p. 326). [Editor’s Note: One might argue that faculty in meaningful dialogue with other faculty about the teaching/learning process is the most effective form of faculty development, with everything else being merely layers of facilitation.] Yet in a coda touching upon unanswered questions, these authors ask, “How will we address the quality issue?” (p. 327). Ensuring blended course quality is undeniably a challenging issue. In this chapter we will search for hallmarks of quality in blended learning and examine processes for determining whether such indicators are present. Both are important for designers and instructors of blended courses.

“An accreditation [accrediting] body is an organisation delegated to make decisions, on behalf of the higher education sector, about the status, legitimacy or appropriateness of an institution, or programme” (www.qualityresearchinternational.com/glossary/accreditationbody.htm). Accrediting bodies (e.g., Southern Association of Colleges and Schools Commission on Colleges, Western Association of Schools and Colleges, Northwest Commission on Colleges and Universities, New England Association of Schools and Colleges) and education compact organizations (e.g., Southern Regional Education Board, Western Interstate Commission for Higher Education, Midwestern Higher Education Compact) have articulated broad requirements or statements of good practice for academic programs in higher education (including online courses, if not blended courses). Such statements typically define levels of minimum acceptability for particular dimensions of institutional offerings (e.g., curriculum and instruction, institutional context and mission, evaluation and assessment). While some statements have direct implications for what happens within courses, these guidelines are necessarily broad in order to facilitate compliance at the institutional level. Articulating analogous quality standards at the course level is difficult for at least three reasons. First, there is no one authoritative body that can (or is willing to) address minimum levels of acceptability for blended learning in all its manifestations within the diversity of approaches found in even one state’s higher education institutions. Thus, there are no universal standards for blended course quality. Second, even if such standards did exist, it would be difficult to create an evaluative tool that could be used consistently across all courses, programs, and institutions. Third, even if such a tool were available, evaluating an individual course is quite time-consuming. It is difficult to imagine an organization willing to commit to such an undertaking for all higher education institutions within its jurisdiction.

Blended and Online Course Standards

As online learning has developed during the past two decades, course-level standards have begun to emerge for online courses. However, when the first edition of the BlendKit Reader was released in 2011, one would have been hard pressed to find similar publicly accessible standards or course evaluation rubrics with an explicit focus on blended (or “hybrid”) courses. Since that time, a few notable exceptions have emerged.

Perhaps it is because of the ubiquity of face-to-face courses and the lack of a consistent definition of blended courses that these modalities have not received the level of attention given to online courses. However, some online course standards do identify areas of relevance for blended learning courses. (In fact, one might note that a few of the most popular online course standards have been re-framed as benefitting blended courses as well.) In the absence of standards focused exclusively on blended courses, online course standards do provide the closest analogue to articulations of quality for blended learning courses. An overview of online course standards follows.

Specific standards of online course quality have emerged not from traditional authoritative bodies but from for-profit companies (e.g., Blackboard’s Exemplary Course Program), groups of institutions (e.g., Quality Matters), or, more typically, individual institutions. Most of these groups embed their standards in a review form (i.e., a checklist or rubric) and include a summative, ordinal rating. The advantage of such review forms is that, ostensibly, they are quite easy to implement for faculty, designers, and administrators, for whom time might already be in short supply. After reviewing an online course with a review form, one is usually left with a “punch list” of items on which to focus one’s attention, making evident how a course may be improved before it is next taught. Table 1 provides a selection of online course standards/review forms.

Table 1. Selected examples of online course standards

Title URL
Quality Matters http://www.qmprogram.org/rubric
Blackboard’s Exemplary Course Program http://www.blackboard.com/Community/Catalyst-Awards/Exemplary-Course-Program.aspx
Monterey Institute’s Online Course Evaluation Project http://www.montereyinstitute.org/pdf/OCEP%20Evaluation%20Categories.pdf
California State University’s Quality Assurance (QOLT) for blended and online courses http://courseredesign.csuprojects.org/wp/qualityassurance/
Michigan Virtual University’s Guidelines and Model Review Process for Online Courses http://media.mivu.org/institute/pdf/guidelines_model_2013.pdf
iNACOL National Standards for Quality Online Teaching http://www.inacol.org/resource/inacol-national-standards-for-quality-online-teaching-v2/
Illinois Online Network (ION) Quality Online Course Initiative Rubrics https://www.uis.edu/ion/resources/qoci/ 
University of Southern Mississippi’s Online Course Development Guide and Rubric http://ablendedmaricopa.pbworks.com/f/LEC_Online_course+rubric.pdf
Florida Gulf Coast University’s Principles of Online Design http://www.fgcu.edu/onlinedesign
Open SUNY Course Quality Review (OSCQR) Model https://oscqr.suny.edu/

Limitations of Blended and Online Course Standards

Sets of standards such as those described above do have their limitations vis-à-vis course quality. These limitations have to do with the prescriptiveness, credibility, scope, and atomism of such standards groupings. Each of these will be addressed in turn.

It is the nature of standards to prescribe how things should be. However, it is challenging to formulate prescriptive statements in such a manner as to fit all contexts which give rise to blended or online courses. For instance, the standard “evaluating and validating Web-based information in completing assignments” certainly applies to many courses, but if a course does not feature assignments that require students to consult Web-based resources, this standard is obviously irrelevant. Also, in prescribing what should be, there is a tendency to focus on minimum acceptability to the exclusion of excellence or innovation. Review instruments which incorporate actual rubrics (e.g., California State University’s QOLT Quality Assurance for blended and online courses) mitigate this limitation by presenting upper-end requirements as a counterpoint to the “bare minimums,” but it is unlikely that such rubrics will account for all manner of innovations.

The provenance of standards affects their credibility. For instance, most blended and online course standards are written by small groups of individuals with some personal experience with blended/online teaching and learning. Although there is nothing wrong with a group’s expertise serving as the basis for such standards, we need to recognize that they arose from a particular context with its own idiosyncratic needs. Interestingly, there are numerous instances in which standards from one review instrument have been copied and pasted into new review instruments as if the standards were axiomatic. Rarely are explicit connections made between standards and theory-based or research-based frameworks. If course standards are to have enduring significance in addressing quality, they must be credible.

Nearly all sets of blended/online course standards bear the imprint of an overt instructional design emphasis (e.g., instructional objectives, constructivist influence, technology-dominated). While, of course, it is reasonable for this field to leave its mark on what is deemed acceptable in blended and online courses, such an emphasis typically leads to a focus on the designed (online) environment of the course to the exclusion of the experience of instructors and students in the teaching/learning process (whether online or face-to-face). The problems this causes can perhaps more easily be seen if we look for an analogous set of relationships within a different setting. For instance, one can design and construct a building, a house, or a classroom. But such constructions are intended to support the lives of those who interact, who live, within their walls. While a tour of an unoccupied kindergarten classroom and an inventory of its resources might provide some indication of the nature of the teaching and learning that occur there, it is the lived experiences of the students and teachers, their actual interactions, in which teaching and learning are made manifest. Limiting the scope of blended or online course quality to considerations of the designed environment results in a significant blind spot. This should be avoided.

The final limitation of blended/online course standards presented here is the necessity for such standards to be atomistic. That is, courses are viewed only as an aggregation of disparate parts, reducible to simple “should” statements. As discussed above, the activity of reviewing courses in any kind of collective way necessitates having a scalable process. This includes using a review instrument that is relatively quick to complete. However, it must be observed that, by their nature, atomistic approaches lend themselves to quantification, sums, and scores. Holistic approaches, by contrast, result in one integrated, as-complete-as-possible picture, which is more difficult to quantify (i.e., nominal classification). Thus, it is unlikely that a simple course review instrument will reveal the complexities of a blended (or online) course instructional experience, but, with the above caveats in mind, such an instrument is likely to reveal whether some agreed-upon minimum acceptability has been achieved. By contrast, see the Online Course Criticism Model (Thompson, 2005) for a holistic, non-standards-based, robust approach to evaluating online courses. Further, Monterey Institute’s Online Course Evaluation Project strikes a rare balance between typical checklist-based reviews and the intensity of the criticism model.

Apart from institutional efforts to foster quality in online and blended courses, perhaps the best use of quality standards is by individual instructors in self-assessment and informal peer-reviews of teaching effectiveness. Most of the rubrics and standards lists related to online and blended courses that are linked above emphasize design documents and the designed environment.  However, it is in the lived experience of teaching a course (regardless of modality) that much can go wrong (or right). Perhaps it is in this area that collegial advice might be most valued. A consideration of teaching effectiveness appears below.

Teaching Effectiveness

Teaching effectiveness describes instructors’ ability to affect student success. It is usually defined according to several factors, such as how well instructors organize courses, how well they know the course material, how clearly they communicate with students, how frequently they provide timely feedback, and other criteria. In the classroom, effectiveness sometimes depends on the instructor’s enthusiasm or disposition. During fully online and blended learning courses, students often need more structure and support to succeed because their course activities usually require them to take greater responsibility for their own learning success. Therefore, many of the criteria take on even more importance when evaluating online teaching effectiveness.

Online teaching is often held to higher standards than classroom teaching, and sometimes these standards have nothing to do with the teacher’s ability. For example, a technological breakdown can have a negative impact on students’ evaluation of an instructor’s work, though the instructor is rarely responsible for the technical failure.

Some Practical Advice to Help You Succeed!

To succeed, you should find some allies to help. If you are new to online or blended teaching and learning, let your students know. They will usually give you a lot of leeway. Some of the students may offer to help you set up or facilitate technology-based activities or at least respond positively to your requests for technological help. Overall, you will find it well worth the effort to evaluate and improve your online teaching effectiveness.

There are many ways to evaluate teaching effectiveness in either physical or virtual environments. Getting pointers and advice before the term begins can save you from making revisions later. Formative feedback, collected during an ongoing course, improves that specific course. Summative feedback, collected after a course ends, improves subsequent iterations. Feedback that applies to the instructor’s process can also improve other courses.

Ask a peer to let you review a blended course to see what you like or do not like about how it is constructed, how the instructor(s) provide feedback, how students are assessed, and so on. If you are inheriting an online course (or the online portion of a blended course) from someone else, try to get feedback about what has already been done. Before your course begins, ask a peer how appropriate the learning objectives are for the topics, just as you might for a face-to-face course.

Depending on your school district or campus, seek out additional people who might provide comprehensive feedback, such as staff in a faculty development center or an academic technology unit. You might also try to find a fellow teacher who has supplemented face-to-face instruction, taught a blended course, or taught a fully online course. Even if this person works in a different department or unit, it is helpful to share your blended teaching experiences with someone who has gone through a similar process.

If this is your first time teaching an online course, or using online components for your face-to-face or blended course, you do not have to use every online tool or strategy. Instead, choose one or two strategies based on your learning objectives.

Writing personal teaching goals is one more practice you can try as you prepare the online environment and the materials and activities to go in it. Creating an online teaching journal allows you to track your thoughts and actions over time. Including personal teaching goals among the first entries will get you off to a good beginning.

You can collect formative feedback for a number of reasons: to check how things are going at a certain point; to evaluate the effectiveness of a specific assignment or resource; or to gauge student attitudes. Also, you can roll in a weekly reflection to encourage students to become metacognitively aware of their learning process. The frequency with which instructors obtain feedback can range from once per session to once in the middle of the term. Direct methods to collect formative feedback include, but are not limited to, the following: peer review and self-evaluation, online suggestion box, one-minute written reflection, polling, and focus groups.

As important as student feedback can be, student evaluations by themselves are not sufficient. Solicit peer review of specific resources, activities, or assessment strategies, your course structure, your communication strategies, or anything else about which you might have concerns. If you cannot find anyone in your school, department, or college who is also teaching online, you can ask school or district administrators, academic technology staff members, or faculty development center staff members to identify prospective peer mentors for this type of feedback. In some cases, the staff members themselves may be able to help you as well.

Another strategy is to create benchmarks for yourself and take time each week to see how you are doing. One method is to create calendar appointments with your goals or what you want to accomplish when you log into your course. For example, if you set a goal to answer a certain number of discussion threads in a particular forum, keep track of how many replies you submit, and make adjustments. If you want to return all students’ written assignments in a certain amount of time, note how many you were able to complete within your self-imposed deadline. This will help you create more realistic expectations for yourself for future assignments.

Collect summative feedback for a number of reasons: to check how things went, to evaluate the effectiveness of a specific assignment or resource, or to gauge student attitudes about the course as a whole. The summative feedback will be a useful set of data for course redesign. While the current students will not benefit from any changes you make, future students will have a better experience.

Online Survey

Similar to the formative feedback surveys, you can use a closing survey to find out how students feel about specific aspects of your online teaching or their overall experience. Numerous survey tools are available; some are stand-alone online survey tools, and some are integrated into learning management systems.

Most importantly, do try again. Regardless of how you feel about your first attempt at blended teaching, it will get better each time you try. Blended learning, or hybrid, courses can combine the best of both worlds. Online environments that supplement fully face-to-face instruction can help students to stay on task, to plan ahead, to access resources at any time of day, and more. In all three types of online learning (web-enhanced face-to-face, blended, and fully online), the pros outweigh the cons. Most students will appreciate your efforts, which is a good thing to remember if you ever question why you are teaching in the first place.

Conclusion

In this chapter we have considered the complex issue of determining what constitutes quality in blended learning courses and programs with the goal of identifying principles and practices that designers of blended courses might enact as they create environments and experiences (Thompson, 2005) most likely to result in student success and satisfaction.  The component themes addressed in the preceding chapters (i.e., blended learning definitions/design, interaction, assessment, and content/assignment modules) are undeniably contributing factors to quality blended learning courses and programs. Yet, we must conclude that there is a work-in-progress aspect to conceptualizing quality in blended learning. Blended courses/programs are still relatively new, and research and innovation will most certainly result in new understandings of how to best design blended courses.  Those of us involved with blended design will need to adopt the attitude of learners, examining our practices and seeking continually to improve based upon the most current information available. Perhaps this is best done in dialogue with trusted colleagues. Future editions of the BlendKit Reader will continue to address new findings as they emerge.

References

Dringus, L.P. and Seagull, A.B. (2014). A five-year study of sustaining blended learning initiatives to enhance academic engagement in computer and information sciences campus courses. In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.

Dziuban, C.D., Hartman, J.L., and Mehaffy, G.L. (2014). Blending it all together. In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.

Picciano, A. and Dziuban, C. (2007). Blended learning: Research perspectives. Needham, MA: Sloan Consortium of Colleges and Universities.

Picciano, A., Dziuban, C. and Graham, C. (2014). Blended learning: Research perspectives, volume 2. NY: Routledge.

Riley, J.E., Gardner, C., Cosgrove, S., Olitsky, N., O’Neil, C., and Du, C. (2014). Implementation of blended learning for the improvement of student learning. In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.

Singh, H. & Reed, C. (2001). A white paper: achieving success with blended learning. Centra Software. Retrieved June 26, 2011 from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.821&rep=rep1&type=pdf

Skibba, K. (2014). Choice does matter: Faculty lessons learned teaching adults in a blended program. In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.

Thompson, K. (2005). Constructing educational criticism of online courses: A model for implementation by practitioners. Unpublished doctoral dissertation. University of Central Florida: Orlando, FL. Accessed July 7, 2011 from http://purl.fcla.edu/fcla/etd/CFE0000657

Vaughan, N., LeBlanc, A., Zimmer, J., Naested, I., Nickel, J., Sikora, S., Sterenberg, G., and O’Connor, K. (2014). To be or not to be: Student and faculty perceptions of engagement in a blended bachelor of education program. In A. Picciano, C. Dziuban, and C. Graham (Eds.), Blended learning: Research perspectives, volume 2. NY: Routledge.