"We soon realized that our plan to adopt a single rubric not tied to any specific discipline, pedagogy, or course, offered benefits beyond quality improvements to individual online courses. Having a single rubric defused the potential for feelings of persecution: individual courses would not be targeted. Also, training of faculty and department chairs to use the rubric would be simplified. Chief among the benefits, however, would be the opportunity to re-examine the way we assess quality in ALL courses."

Tomorrow's Professor Msg. #929: The Online Course Assessment Gap

 

Folks:

The posting below looks at some key elements in assessing online courses. It is by Michael L. Rodgers of Southeast Missouri State University in Cape Girardeau, Missouri, and is #44 in a series of selected excerpts from the NT&LF newsletter reproduced here as part of our "Shared Mission Partnership." NT&LF has a wealth of information on all aspects of teaching and learning. If you are not already a subscriber, you can check it out at [http://www.ntlf.com/]. The online edition of the Forum, like the printed version, offers subscribers insight from colleagues eager to share new ways of helping students reach the highest levels of learning. National Teaching and Learning Forum Newsletter, Volume 17, Number 6, October 2008. © Copyright 1996-2008. Published by James Rhem & Associates, Inc. All rights reserved worldwide. Reprinted with permission.

Regards,

Rick Reis
reis@stanford.edu
UP NEXT: On the Future of Engagement

                                        Tomorrow's Teaching and Learning

          ----------------------------------- 1,919 words ------------------------------------

                                     The Online Course Assessment Gap

    A fragment of a conversation during a Deans' retreat ...

    Jonathan: How is your online program doing? You've had it - what - two years now?

    Elyse: Oh, that. OK, I think. We're on track to graduate our first students from the program next
    year. Enrollments are strong. Finding people to teach all the courses has been challenging, but
    we've made it work. I had to reduce the number of face-to-face offerings in order to teach the
    online sections, and that led to my biggest problem with the program: I'm getting complaints from
    students who have been forced to take some of their foundational courses online, because the
    face-to-face sections weren't being taught often enough to fit their schedules.

    Jonathan: You mean they don't want to take courses online?

    Elyse: Well, many of the complaints are based on claims that the online and face-to-face versions
    of some of the required courses are so dissimilar that they are really different courses
    altogether - not a good situation if the courses are foundational.

    Jonathan: The content doesn't match up?

    Elyse: I don't think that's it, exactly. I've looked at some syllabi: they're nearly identical.
    Apparently, the students' ideas about online courses are different from ours. . . .

                                                              Support Was Job 1

Ten years ago, when my university began to develop online courses, our challenges centered on support: Could we provide the infrastructure and skills an instructor would need to deliver a complete course entirely on the Internet? As we began to define and address our needs, we joined the swelling ranks of institutions making decisions about hardware, software, and training to support online courses: network bandwidth, reliability, and security improved greatly; course management software was installed; faculty development programs were implemented to give instructors the skills and confidence necessary to teach online. We worked to put in place a system robust enough to support as many online courses as student demand and institutional needs required. The effort must have sufficed: we now offer over 150 online courses each semester, and several degree programs are available entirely online. Our experience cannot be unique: universities across the nation now advertise their online offerings in our service region, competing with us for students, their dollars, and their loyalties.

                                                    Maturing the Online Course

Our goal was always to produce online courses that were "equivalent" to face-to-face courses bearing the same catalog number. Indeed, we make no distinction between face-to-face and online in our student transcripts. Nevertheless, we made a strategic decision early on to focus on the number of courses developed, so that our development effort could compete for resources and maintain momentum on campus. However, as technologies and faculty skills improved, we turned our attention to online course quality as key to realizing our goal of "equivalence." Certainly we needed a way to assess individual course quality. We settled on a single, generalized assessment rubric that could be applied to each online course. The purpose of the assessment was not to measure learner outcomes: exams, papers, and projects would continue to perform that role, just as standardized exams, portfolios, and other outcomes measures would continue to assess the performance of entire programs. Rather, we sought an assessment that would reveal how well each online course served as a learning environment: Was the course designed to present content in a meaningful, readily accessible way? Did the course support student engagement with content, the instructor, and, where appropriate, other students? Was the pedagogy sound? Did students have ready access to support services (tech support, library, academic support services)?

We soon realized that our plan to adopt a single rubric not tied to any specific discipline, pedagogy, or course offered benefits beyond quality improvements to individual online courses. Having a single rubric defused the potential for feelings of persecution: individual courses would not be targeted. Also, training of faculty and department chairs to use the rubric would be simplified. Chief among the benefits, however, would be the opportunity to re-examine the way we assess quality in ALL courses. Suspicion of online courses, born out of their unfamiliarity, was one reason for the willingness of faculty and department chairs to do a comprehensive review: previously, only courses taught by probationary faculty were regularly reviewed, and then primarily for performance, not pedagogy or design. This practice supported the mistaken belief among some that the quality of one's course was no longer a major concern of the institution after tenure was granted. But the online course rubric would be applied to courses because they are online, not because they are taught by probationary faculty. Thus, for the first time, a mechanism for formalized review of courses taught by tenured and senior faculty would appear on campus.

                                       Is There a Rubric in the House?

Research into existing online course assessment models led us to the Quality Matters peer review process.1 Quality Matters (QM) was attractive for its rubric's thorough coverage of course design, a faculty-centered approach to assessment, and the use of review teams trained to consider the course from a student perspective. Indeed, we were so impressed with QM that my institution paid the fee for several members of our team to take the training. However, the QM rubric intentionally addressed course design only, leaving out several contributors to quality that were important to us, including "course delivery (i.e. teaching, faculty performance), course content, . . . [and] student engagement and readiness."2 We built a broader rubric, which included instructor performance and offered examples of discipline-independent best practices for the 26 items that eventually made up our draft. Consistent with campus convention for documents that guide teaching practices, our rubric was rooted in the "Seven Principles for Good Practice in Undergraduate Education," as applied to a technology-enhanced learning environment.3 The 26 items were organized into four broad categories:

i.  Course overview, introduction, and learning objectives (competencies), which generally serve to make course organization and learning objectives clear to students;

ii.  Assessment and measurement, which measure student progress toward stated learning objectives;

iii.  Resources, materials, and learner support, which explore how well the instructional materials selected for the course support the learning objectives; and

iv.  Course technology and security, which gauge how effectively the technology used in the course supports instruction and promotes student interactivity.

                                          Something Important Was Missing

Our draft was reviewed by Chairs and Deans in Spring 2008. In May, we used a session in our regular technology training Institute to introduce the rubric to faculty.4 The process was collegial; the rubric met with widespread approval. But students were never consulted. As with the QM rubric, we tried to fashion ours to apply from a student's point of view, but our rubric gave only a superficial sense of student preferences. Surely students would complain if course navigation was unclear, student expectations were left unstated, feedback from the instructor (including grades) was absent, or the course required students to seek out their own content without instructor guidance. Other rubric items, such as copyright compliance, attention to online security, a listing of technical requirements, and attention to accessibility issues, might become important to students only if they got in trouble over copyright, had their security breached, or found that something didn't work right. Still other items, such as measurable and appropriate learning objectives, website design, and relevance,5 might never be visible to students, beyond an implicit sense of how smoothly the course functions. All of these items are important to online course quality, but would any of them induce admiration in our students? Would our students register joy? Would skillful execution of the rubric items be enough to quell the student complaints that Elyse described to Jonathan?

We had raised some valuable, even probing questions designed to assess the quality of our online courses, indeed of any course. But even though our rubric was designed to consider the course from a student perspective, we sensed that we had missed something. Although our team knew that some students felt dissatisfied with courses taught online in comparison with the same courses taught face to face, our rubric contained no acknowledgement that students' ideas about online courses might be different from ours. For us, the "student perspective" was limited to matters of basic information and website organization: Did courses contain the links that we believed students would need? Were websites intuitive in their designs, so that students would use them without much coaching? So what was the difference, if it wasn't in course content, learning objectives, navigability, or even pedagogy?

Obviously, the mode of delivery (the Web) was different, but weren't today's students actually more comfortable with that technology than many faculty? That shouldn't be a barrier if handled smoothly from a technological perspective . . . unless there was a dimension of expectation accompanying the new technology that our course designs weren't grasping and thus not satisfying.

Could it be that these elusive differences in expectations of the learning environment lay at the heart of student dissatisfaction? Students have a solid set of expectations of face-to-face courses, built up over generations in our culture. Indeed, it has often been said that one of the biggest impediments to pedagogical reform lies not in faculty unwillingness to teach in new ways, but in students' unwillingness to be taught in new ways. Even if student expectations of the face-to-face learning environment aren't as flattering as we might like (many don't expect intellectual excitement, interactivity, or the joy of learning), they are firmly set. So what was missing in our online versions of courses that left students feeling dissatisfied? As we thought about it more, we came to suspect that the dissatisfaction lay more in the expectations students have of an online experience than in the expectations they have of a course experience, and that led us to ponder the iPhone and its success.

            Take a Lesson from the iPhone

Managing students' encounters with content is the mission of any online course. But should competent management be the limit of our aspirations? The enormous success of the Apple iPhone in a market full of less expensive alternatives suggests that people seek decidedly pleasing ways to engage with content. The iPhone's multi-touch user interface allows users to interact almost physically with information: for example, images on screen can be resized by pinching or spreading two fingers.6 It is difficult to imagine such an activity becoming onerous, or even routine. Like the computer mouse of an earlier generation, the multi-touch interface facilitates interaction with information in a much more natural and intuitive way than its predecessors, and the market proves that users prefer it to the alternatives. But surely even systems that lack the innovative multi-touch technology can offer ways to engage with content that surprise the user and bring pleasure to the work. Could an online course be designed in such a way that students would, of their own volition, seek out a deeper interaction with the content than was necessary to meet the course objectives?

With more than 10% of all credit hours at my institution generated by online courses, having an eye for quality in online course design and delivery is now very important to our continuing mission. Rubrics for assessment can and should be developed and used. But any rubric that we generate should, in addition to all the other things, assess how well course design and practices bring satisfaction and even pleasure to students who take the course. How better to engage students with the content that they need?

Notes
1 <http://qualitymatters.org>
2 <http://pgcconline.blackboard.com/webapps/blackboard/content/listContent.jsp?course_id=_17872_1>
3 <http://www.tltgroup.org/programs/seven.html>
4 <http://cstl.semo.edu/institute/2008Summer/Rubric/Rubric.htm>
5 "Relevance" is the quality that enables students to "easily determine the purpose of all materials, technologies and methods used in the class and know which materials are required and which are recommended resources."
6 <http://electronics.howstuffworks.com/iphone1.htm>

----------------------------------------------------------------------------------------------------
TOMORROW'S PROFESSOR MAILING LIST
Is sponsored by the STANFORD CENTER FOR TEACHING AND LEARNING
----------------------------------------------------------------------------------------------------