
Welcome to Quality Measurement in Online Learning Wiki

This is a workspace to begin the conversation we will have in person on April 6th in Seattle, WA.  Click here for the draft meeting agenda.  Please start the conversation by responding to the following questions:


Where are we today?

What are the opportunities and challenges to quality measurement in online learning?  How do measurement rubrics and tools fit into the broader landscape of online learning quality improvement opportunities?  What tools are emerging?  What is working?  What are the challenges?

  • In terms of opportunities, I think there is definitely growing public interest in and demand for online learning (in both K-12 and higher education), and this is reflected in organizations like the National Survey of Student Engagement (NSSE) developing new question banks in an attempt to measure the "quality" of the online learning experience from the perspective of the student and instructor.
  • From my perspective, one of the challenges is defining what a "quality" online learning experience is, as it can mean different things for different audiences (e.g., students, parents, faculty, administrators).
  • In terms of emerging tools, Karen Swan, Phil Ice, and others have developed and validated a Community of Inquiry survey instrument (http://communitiesofinquiry.com/methodology), which I think can help design, facilitate, and direct a "quality" online learning experience.
  • Quality Matters has drawn significant interest across numerous institutions and has produced promising results in terms of assuring high-quality programs.
  • Transparency by Design has garnered significant interest and provides a roadmap for making meaningful comparisons across institutions.
  • WCET's Edutools project and Sloan-C's Effective Practice wiki are examples of knowledge sharing around technology and practices, respectively, that establish precedents for a clearinghouse model that could be elaborated on.
  • While somewhat dated, CSU-Chico's rubric for online courses ("What does a high quality online course look like?") is still relevant - see:  http://www.csuchico.edu/celt/roi/
  • For many years, the WebCT Exemplary Course Project identified high-quality courses.  Blackboard continues with its Exemplary Course Program (see: http://www.blackboard.com/Communities/Exemplary-Courses.aspx); Sakai has the annual Teaching with Sakai Innovation Awards (see: http://openedpractices.org/twsia).
  • With the exception of the Transparency by Design initiative, most of our quality assurance efforts have very limited impact on the public, those whom we seek to serve, and those who develop public policy. Most of our efforts are "inside baseball" (well, it's opening day and the real new year starts today), and the approaches we utilize, and there are several excellent ones noted herein, tend to have little impact on the public. Somehow we need to establish more effective approaches to enlighten the public about the quality of online programming. Most in the public don't understand how the accreditation process works, but they understand it has some value...we need to establish an approach that gets us to this same point. Even Transparency by Design, which is still in its infancy, has a limited reach and membership and is primarily a vehicle for proprietary institutions. Where are the public and non-profit institutions?  How do we engage/enroll them?
  • I think it is important to also consider the Sloan-C pillars (of quality) -- access, institutional commitment, learning effectiveness, student satisfaction, and faculty satisfaction -- not only for their early insistence on quality and the collection of effective practices, but for their breadth: for going beyond the level of the course to recognize the importance of the larger context (programs, institutions, institutional culture) and the importance of access to all.  I hope we will retain something of this breadth because I think it is critical to understanding quality online education.
  • My sense is that the field is fairly well represented with rubrics and quality standards, enough for any one institution to find a tool that meets its needs. Additional attention can be directed towards refining and standardizing these instruments, BUT not to the degree that they begin to constrain or restrict further evolution and innovation of these learning systems. We run the risk of trying to solidify an emerging field before it has even jelled. I believe that general guidelines for quality standards are fine, but over-regulation by any one organization or institution can be dangerous. I know that this may sound like heresy, but I believe we need to practice caution in attempting to prematurely define the operating parameters of such a rapidly emerging and evolving field. LCR1




Where do we need to go?

What combination of tools is required to drive consistent, high-quality student experiences?  Where are the gaps to be filled?  How do we accommodate new Web 2.0 pedagogical approaches?  How do we design solutions for widespread adoption and sustainability?

  • In terms of combining tools, a colleague and I have combined questions from the Classroom Survey of Student Engagement (CLASSE) and EDUCAUSE's Study of Undergraduate Students and Information Technology in order to create a survey that evaluates student experiences in first-year blended learning courses at our institution (Mount Royal University).  Here is a link to our survey instrument and a paper that provides an overview of our preliminary study results.
  • The variety of instruments that currently exists provides excellent frameworks for assessing quality from a number of perspectives; however, more work (such as CLASSE) needs to be undertaken to develop an understanding of the relationships between these various tools. The most desirable result of such exploration would be the development and validation of a unified approach to assessing quality.
  • A centralized clearinghouse for the dissemination of best practices, irrespective of an individual's organizational affiliation, needs to be developed and promoted.
  • A centralized clearinghouse for impartial evaluation of emerging technologies (possibly an expansion of the Edutools model) is needed. Gaining buy-in from corporate entities to make beta testing available to evaluators would be desirable.
  • For all of the above initiatives, development of a "counterweight" body might be desirable to help expand the potential reach of emerging initiatives beyond the US and Canada.
  • Another under-defined dimension is instructor performance standards or guidelines as they pertain to online instruction. As with the instructional guidelines, I believe the online instructor would benefit from an articulation of performance expectations/suggestions, perhaps in the form of best practices, in order to assure a quality online learning experience.
  • Lastly, metrics defining quality are nebulous and fleeting, as is the entire field of online education. Addressing quality as the student's achievement of the learning outcomes may be the most productive and fruitful approach, regardless of design or even methodology. Over-prescribed dictates and mandates require constant watering and feeding, as well as enforcement mechanisms. LCR1



Input from Burks Oakley at the University of Illinois at Springfield

