Welcome to Quality Measurement in Online Learning Wiki

This is a workspace to begin the conversation we will have in person on April 6th in Seattle, WA.  Click here for the draft meeting agenda.  Please start the conversation by responding to the following questions:

 

Where are we today?

What are the opportunities and challenges to quality measurement in online learning?  How do measurement rubrics and tools fit into the broader landscape of online learning quality improvement opportunities?  What tools are emerging?  What is working?  What are the challenges?

  • In terms of opportunities - I think there is definitely growing public interest in and demand for online learning (in both K-12 and higher education), and this is reflected in organizations like the National Survey of Student Engagement (NSSE) developing new question banks in an attempt to measure the "quality" of the online learning experience from the perspective of the student and instructor.
  • From my perspective, one of the challenges is defining what a quality online learning experience "is" - it can mean different things to different audiences (e.g., students, parents, faculty, administrators).
  • In terms of emerging tools - Karen Swan, Phil Ice, and others have developed and validated a Community of Inquiry survey instrument (http://communitiesofinquiry.com/methodology), which I think can help design, facilitate, and direct a "quality" online learning experience
  • Quality Matters has drawn significant interest across numerous institutions and has produced promising results in terms of assuring high-quality programs.  Quality Matters is a participatory, replicable process focused on instructional design issues, which allows its use regardless of academic discipline, so it has been welcomed as a practical way for faculty, designers, program managers, and faculty development folks to work collaboratively on one aspect of quality in online learning.  Additionally, it has opened discussion on campuses of the different tasks and skills needed to design, deliver/teach, and support online learning.  After last evening's wonderful gathering, I thought I might add just a bit more on what's happening at QM - some as background, some as immediate future activities.
    • Quality Matters (QM) is an established inter-institutional quality assurance system for the improvement of online learning.  Quality assurance in online education is furthered by a rigorous, faculty-centered review process that uses the Quality Matters Rubric™ to determine when the design of an online course meets an established threshold of quality.  The Quality Matters Rubric™ comprises 40 specific standards distributed across eight broad standards and is informed by nationally recognized best practices and the existing distance education research literature.  QM is now actively engaged in promoting original research on the effectiveness of the rubric and the process.  (A minimal sketch of this kind of threshold-based review appears after this list.)
    • QM, a program of MarylandOnline (MOL), is not just a rubric of standards but an established, replicable process of review and improvement of courses.  By focusing on course design, QM is part of an institution's efforts to systematically address quality in online learning.  While QM is the best known of MarylandOnline's programs, current MOL projects being piloted focus on effective online teaching, including a faculty mentoring program, an adjunct faculty competency program, and a teaching competency program.  Additionally, a national study of issues impacting online course retention is underway.  QM will explore the potential of conducting program audits for QM-subscribing institutions that take into account factors - from institutional leadership to infrastructure support and everything in between - that have a bearing on a program's ability to serve distance learning students.
    • Originally designed for higher education, Quality Matters will launch a Grades 6-12 edition of the rubric and review process this summer.  That rubric was developed in partnership with Florida Virtual Schools and has been field-tested with virtual schools in four states.  Efforts are underway to develop an edition of the rubric and process for reviewing self-paced professional development courses.
  • Transparency by Design has garnered significant interest and provides a roadmap for making meaningful comparisons across institutions.
  • WCET's Edutools project and Sloan-C's Effective Practice wiki are examples of knowledge sharing around technology and practices, respectively, that establish precedents for a clearinghouse model that could be elaborated on.
  • While somewhat dated, CSU-Chico's rubric for online courses ("What does a high quality online course look like?") still is relevant - see:  http://www.csuchico.edu/celt/roi/
  • For many years, the WebCT Exemplary Course Project identified high-quality courses.  Blackboard continues with its Exemplary Course Program - see: http://www.blackboard.com/Communities/Exemplary-Courses.aspx.  Sakai has the annual Teaching with Sakai Innovation Awards (see: http://openedpractices.org/twsia).

  • With the exception of the Transparency by Design initiative, most of our quality assurance efforts have a very limited impact on the public - those whom we seek to serve and those who develop public policy. Most of our efforts are "inside baseball" (well, it's opening day and the real new year starts today), and whatever approaches we utilize - and there are several excellent ones noted herein - tend to have little impact on the public. Somehow we need to establish more effective approaches to enlighten the public about the quality of online programming. Most in the public don't understand how the accreditation process works, but they understand it has some value...we need to establish an approach that gets us to this same point. Even Transparency by Design, which is still in its infancy, has a limited reach and membership and is primarily a vehicle for proprietary institutions. Where are the public and non-profit institutions?  How do we engage/enroll them?
  • I think it is important to consider also the Sloan-C pillars (of quality) -- access, institutional commitment, learning effectiveness, student satisfaction, and faculty satisfaction -- not only for their early insistence on quality and the collection of effective practices, but for their breadth: for going beyond the level of the course to recognize the importance of the larger context (programs, institutions, institutional culture) and the importance of access to all.  I hope we will retain something of this breadth because I think it is critical to understanding quality online education.
  • My sense is that the field is fairly well represented with rubrics and quality standards - enough for any one institution to find a tool that meets its needs. Additional attention can be directed toward refining and standardizing these instruments, BUT not to the degree that they begin to constrain or restrict further evolution and innovation of these learning systems. We run the risk of trying to solidify an emerging field before it has even jelled. I believe that general guidelines of quality standards are fine, but over-regulation by any one organization or institution can be dangerous. I know that this may sound like heresy, but I believe that we need to practice caution in attempting to prematurely define the operating parameters of such a rapidly emerging and evolving field. LCR1
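
As a concrete illustration of the threshold-based review described in the Quality Matters items above, here is a minimal sketch in Python. The point values, the essential/non-essential split, and the 85% cutoff are assumptions for illustration only - they are not QM's published scoring scheme - and the standard IDs are made up.

    from dataclasses import dataclass

    @dataclass
    class Standard:
        id: str          # rubric standard identifier, e.g. "1.1" (hypothetical)
        points: int      # weight the rubric assigns to this standard (assumed)
        essential: bool  # True if the standard must be met outright
        met: bool        # the review team's finding for this course

    def meets_threshold(standards, pct=0.85):
        """A course 'meets expectations' only if every essential standard is
        met AND the earned points reach pct of the total possible points.
        (The 85% figure here is an illustrative assumption.)"""
        if any(s.essential and not s.met for s in standards):
            return False
        total = sum(s.points for s in standards)
        earned = sum(s.points for s in standards if s.met)
        return earned >= pct * total

    # Toy review: both checks matter - the essentials are met, but 5 of 6
    # points (~83%) falls short of the assumed 85% threshold.
    review = [
        Standard("1.1", 3, True, True),
        Standard("2.4", 2, False, True),
        Standard("8.1", 1, False, False),
    ]
    print(meets_threshold(review))  # False

The design point is simply that a rubric of this kind is both a checklist (the essential standards) and a weighted score (the threshold), which is part of what makes the review replicable across disciplines.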


Where do we need to go?

What combination of tools is required to drive consistent, high-quality student experiences?  Where are the gaps to be filled?  How do we accommodate new Web 2.0 pedagogical approaches?  How do we design solutions for widespread adoption and sustainability?

  • In terms of combining tools, a colleague and I have combined questions from the Classroom Survey of Student Engagement (CLASSE) and EDUCAUSE's Study of Undergraduate Students and Information Technology in order to create a survey that evaluates student experiences in first-year blended learning courses at our institution (Mount Royal University).  Here is a link to our survey instrument and a paper that provides an overview of our preliminary study results.
  • The variety of instruments that currently exist provides excellent frameworks for assessing quality from a number of perspectives; however, more work (such as CLASSE) needs to be undertaken to develop an understanding of the relationship between these various tools. The most desirable result of such exploration would be the development and validation of a unified approach to assessing quality.
  • A centralized clearinghouse for the dissemination of best practices, irrespective of an individual's organizational affiliation, needs to be developed and promoted.
  • A centralized clearinghouse for impartial evaluation of emerging technologies (possibly an expansion of the Edutools model) is needed. Gaining buy-in from corporate entities to make beta testing available to evaluators would be desirable.
  • A centralized clearinghouse for the dissemination of research findings is also needed, perhaps organized around themes (e.g., virtual labs, social networking, large classes, online discussion, and so on) with corresponding literature reviews.
  • For all of the above initiatives, development of a "counterweight" body might be desirable to help expand the potential reach of emerging initiatives beyond the US and Canada.
  • Another under-defined dimension is instructor performance standards or guidelines as they pertain to online instruction. As with the instructional guidelines, I believe the online instructor would benefit from an articulation of performance expectations/suggestions, perhaps in the form of best practices, in order to assure a quality online learning experience.
  • Lastly, metrics defining quality are nebulous and fleeting, as is the entire field of online education. Addressing quality as the student's achievement of the learning outcomes may be the most productive and fruitful approach, regardless of design or even methodology. Over-prescribed dictates and mandates require constant watering and feeding, as well as enforcement mechanisms. LCR1
  • Standardized ways of measuring learning outcomes along the outcomes continuum (satisfaction, retention, success, achievement, proficiencies, performance) would go a long way, but some standardized measurement or description of the inputs (e.g., course design (QM?), learner characteristics, professional development, student support) and the processes (e.g., CoI?, pedagogy, interactions, assessment) leading to such outcomes will be necessary for replicability. (A rough sketch of such a record appears below.)
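
To make the preceding point concrete, here is a rough sketch of what a standardized inputs/processes/outcomes record for a single course offering might look like, in the same Python style as the earlier sketch. Every field name and scale below is an illustrative assumption, not an agreed-upon standard; the point is that replication requires capturing inputs and processes alongside the outcome continuum.

    from dataclasses import dataclass, field

    @dataclass
    class CourseOfferingRecord:
        # Inputs (all measures assumed for illustration)
        design_score: float                  # e.g., a QM-style design review score, 0.0-1.0
        learner_profile: dict = field(default_factory=dict)  # e.g., {"first_generation": 0.40}
        # Processes (assumed measures)
        coi_teaching: float = 0.0            # CoI teaching-presence subscale mean, 1-5
        coi_social: float = 0.0              # CoI social-presence subscale mean, 1-5
        coi_cognitive: float = 0.0           # CoI cognitive-presence subscale mean, 1-5
        # Outcomes (the continuum named above)
        satisfaction: float = 0.0            # end-of-course survey mean, 1-5
        retention: float = 0.0               # fraction of enrollees completing
        success: float = 0.0                 # fraction earning C or better

    # One offering, recorded with inputs and processes alongside outcomes:
    offering = CourseOfferingRecord(
        design_score=0.90,
        learner_profile={"first_generation": 0.40},
        coi_teaching=4.2, coi_social=3.9, coi_cognitive=4.0,
        satisfaction=4.1, retention=0.88, success=0.81,
    )

With records like this collected consistently, outcomes can be compared across courses and institutions while the inputs and processes that produced them stay visible, which is what replicability requires.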


Input from Burks Oakley at the University of Illinois at Springfield

 
