Riding the Storm: Improving Course Performance/Interaction through Analytics and Proactive Methods of Engagement

Today’s Friday Focus on e-Learning was a replay of a session from the 18th annual Sloan Consortium Conference on Online Learning, held in October 2012. The presenter was John Vivolo, Manager of Online Learning, Polytechnic Institute of New York University.

Most learning management systems (LMSs) can generate activity data that can be exported to spreadsheets and used to make alterations proactively, while the course is taking place.

Reactive – changes made to a course AFTER poor student performance on an assignment/exam. Student evaluations are an example of a reactive measure. Academic department evaluations are also typically reactive (done at the end of the semester).

Proactive – preventive measures set up prior to an assignment/exam. Examples: time-based activity, individual assignments/content, discussion boards.

Interaction: student-student, student-faculty, student activities.

1. Time-based activity – days of the week, times of day, days of the month. These data can inform improvements in content availability, assignment due dates, virtual office hours, etc. When are students going into the course and completing activities?

What is your goal?

  • To accommodate student schedules? To reach the class at peak activity? Solution: Make content available at the peak of activity or right beforehand.
  • To create an equal distribution of activity through the week? Solution: Make content available at the lowest point of activity. Global solution: Create an “Interaction policy” (proactive). This policy would refer to student & faculty interactions, as well as when students are expected to go in to get various items from the course (or complete specific activities).
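The time-based analysis above can be sketched from an LMS activity export. This is a minimal illustration, assuming a simple list of ISO timestamps per student action (real LMS exports vary in format and column names):

```python
from collections import Counter
from datetime import datetime

# Hypothetical LMS activity export: one ISO timestamp per student action.
events = [
    "2012-10-01T20:15:00",  # a Monday evening
    "2012-10-01T21:40:00",
    "2012-10-03T19:05:00",  # Wednesday
    "2012-10-06T10:30:00",  # Saturday morning
]

def activity_by_day(timestamps):
    """Count course activity per weekday to locate peaks and lulls."""
    return Counter(datetime.fromisoformat(t).strftime("%A") for t in timestamps)

counts = activity_by_day(events)
peak_day = max(counts, key=counts.get)  # release content at/just before this
low_day = min(counts, key=counts.get)   # or here, to spread activity out
```

The same grouping idea extends to hour of day or day of month, matching whichever goal (peak reach vs. even distribution) you chose above.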

2. Individual Assignments/Content: Hits – track how often students view a content item. Goal: Avoid a wait and see approach.

How can you use the number of times students clicked on a podcast? If clicks far exceed the number of students in the class, it could indicate that students are having trouble understanding the content.

If they’re clicking on content a LOT, then before moving on (or before the next exam) provide:

  • open a discussion board Q&A
  • create a review sheet
  • host a review webinar
  • create a non-graded quiz

If one student seems to be accessing the content over and above others, reach out to that specific student to see if s/he has questions.
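One way to operationalize the hit-count heuristic above is a simple flagging rule: content viewed far more often than enrollment alone would explain gets attention before the next exam. The data, item names, and the factor-of-3 threshold below are illustrative assumptions, not from the session:

```python
# Hypothetical hit counts per content item.
enrollment = 25
hits = {"podcast_week3": 140, "syllabus": 30, "lecture1_slides": 55}

def flag_heavy_content(hits, enrollment, factor=3):
    """Flag items viewed far more often than class size suggests --
    a possible sign students are struggling with that content."""
    return [item for item, n in hits.items() if n > factor * enrollment]

flagged = flag_heavy_content(hits, enrollment)  # -> ["podcast_week3"]
```

The same comparison run per student, rather than per item, identifies the individual learner worth reaching out to.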

3. Discussion boards – the most commonly used interactive tool. Purpose: simulate an in-class discussion, but asynchronously.

Common discussion board goal – create a discussion that remains a fluid conversation over the week.

Can create a discussion board interaction policy – respond to the initial question, post at least once before day X, allow more than one topic or thread, etc.

Alerts can be set in Blackboard:

  • attendance alerts – students/faculty haven’t accessed the course in X amount of time
  • assignment/content alerts – student does not access content or assignment
  • due date alerts – student has not submitted assignment (before/after) due date
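Blackboard provides these alerts through its own tools; as a generic illustration of the due-date alert logic (hypothetical data, not Blackboard's API):

```python
from datetime import date

# Illustrative stand-ins for LMS records.
due_dates = {"essay1": date(2013, 2, 15)}
submissions = {"essay1": {"alice"}}   # students who have submitted
roster = {"alice", "bob"}

def due_date_alerts(assignment, today):
    """List students who have not submitted an assignment on/after its due date."""
    if today < due_dates[assignment]:
        return set()
    return roster - submissions[assignment]

missing = due_date_alerts("essay1", date(2013, 2, 16))  # -> {"bob"}
```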

Managers/Directors

  • can use analytics for online faculty oversight
  • data can be collected for faculty – are they interacting; what are they interacting with, and how

Q&A at end of session:

  1. Do faculty feed the analytics back to the students – “this is what I’m seeing” – Presenter recommends NOT to feed the analytics back to the students. He believes students will feel as if they are being watched.
  2. Who runs the analytics reports? The Manager of Online Learning runs them, distributes to faculty, and then faculty do what they will with the information.
  3. Is the Interaction Policy something that is set centrally (overall), by college or department or faculty? It’s variable.

How do YOU use analytics to assess performance and/or adjust your courses?

Next Generation Learning: What is it? And will it work?

Today’s Friday Focus on e-Learning is a replay of an EDUCAUSE Learning Initiative (ELI) session from February 5, 2013. The presenter is Dr. Barbara Means, an educational psychologist and Director of the Center for Technology in Learning at SRI.

What is NGL?

  • NGL better prepares students for a world that values and rewards deeper learning, collaboration, skilled communication, self-management, the ability to work across disciplines, and innovation practices.
  • NGL meets each student where s/he is and provides content, pedagogy, & access opportunities to meet individual needs.
  • NGL capitalizes on the affordances of technology for learning.
  • NGL collects detailed data about the process of learning that can be used to diagnose student needs, and provide feedback to the instructional developer.

Challenge areas:

  1. Deeper learning – richly interactive technologies that increase student engagement and learning of conceptual content and 21st century skills. Example: U of Wisconsin-Milwaukee’s U-Pace – self-paced intro psych course; mastery based, shorter modules & end of module quizzes; timely & tailored feedback.
  2. Blended learning – combinations of online and teacher-led instruction to improve learning, increase completion, and lower costs. Example: Cal State U Northridge – redesigned gateway math course as hybrid alternative to conventional college algebra.
  3. Open Core Courseware – high-quality, modular, openly licensed courseware for developmental, gateway, & high-enrollment core courses. Example: Cerritos College’s Project Kaleidoscope – 12 different OER courses implemented on 9 campuses.
  4. Learning Analytics – software for collection, analysis, & real-time use of learning data by students, instructors, & advisors to improve student success. Example: Marist College’s Open Academic Analytics Initiative.

What was learned:

  • Most Wave 1 innovations didn’t really have evidence of effectiveness before the grants began.
  • Many technology components weren’t completely developed before the grants started.
  • The most commonly reported difficulties were technology problems, followed by student resistance; students often weren’t comfortable being in charge of their own learning.

Broader implications of the data:

  • There are campus impediments to a fast start.
  • Faculty who volunteer to try out new learning technologies typically respond more positively to innovations than those who do not.
  • There are few online and blended learning initiatives set up to collect rigorous evidence of the innovation’s impact on students.

Barriers to collecting rigorous evidence:

  • Campus policies or IRBs may prohibit assigning students to courses with significant online components at random.
  • Some campus research offices weren’t willing to release student-level data.
  • Different instructors often don’t want to administer the same assessment.
  • Valid, reliable assessments weren’t readily available for many of the projects’ learning objectives.

U-Pace project Outcomes:

  • compared 230 students in U-Pace psychology to 334 students in conventional psychology course
  • positive effects on % of students earning an A or B (ES = +.96) and course completion (ES = +.35)

Cal State Univ Northridge outcomes:

  • compared 4,479 who took the hybrid course to 1,825 students from past courses
  • again, large positive effect sizes
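The effect sizes (ES) quoted above summarize group differences in standardized units. For an outcome like "% of students earning an A or B," one common formula is Cohen's h for a difference between two proportions; the report's exact method may differ, and the numbers below are illustrative, not the studies' actual data:

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: standardized effect size for a difference
    between two proportions (arcsine transformation)."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Illustrative only: 70% earning A/B in the redesigned course vs. 45% in
# the conventional one yields a large positive effect.
h = cohens_h(0.70, 0.45)
```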

The presenter went on to describe MOOCs and how the features of those course delivery models fit with or vary across different platforms.

Want to hear all of this session yourself while you’re at your own computer? Let Cindy Russell know and you can obtain the login to watch it at your place and time.

 

Evaluating Class Size in Online Education

The American Association of Colleges of Nursing (AACN) sponsored this webinar, facilitated by Dr. Susan Taft of the Kent State University College of Nursing in Kent, OH. Taft co-authored “A Framework for Evaluating Class Size in Online Education” that was published in The Quarterly Review of Distance Education, 12(3), 2011, 181-197.

The bad news … there is NO one size fits all for determining optimal class sizes for online courses. The good news … there ARE guidelines for determining optimal class size.

Optimal class size is defined as healthy revenue generation PLUS desirable student learning outcomes.

Variables (factors) associated with workload in teaching online courses include:

  • faculty experience with distance education
  • the level at which the course is offered – graduate or undergraduate
  • content to be covered and course design
  • size of the class
  • online platform used, and presence or absence of technology support and/or teaching assistants
  • the mode of instruction (e.g. whether strictly web-based or combined with other modes of instruction)

Taft went on to review three educational frameworks that provide guidelines:

  1. The Objectivist-Constructivist Continuum
  2. Bloom’s Taxonomy
  3. The Community of Inquiry Model

Class sizes on the objectivist-constructivist dimension:

  • objectivist – largely one-way communication – can be large class size
  • mix of objectivist-constructivist – medium teaching intensity – medium size 20+
  • constructivist – interactive with higher teaching intensity – < 20 students

Class sizes and Bloom’s Taxonomy dimensions:

  • upper levels of the taxonomy – analysis, synthesis, evaluation – small class size, < 15 students
  • middle of the taxonomy – application – medium teaching intensity – 16–40 students
  • lower levels of the taxonomy – knowledge, comprehension – lower teaching intensity – 30+ students

The Community of Inquiry Model is the most complex of the three. Three types of presence are recommended for online courses:

  1. Teaching presence (faculty)
    1. course design & organization
    2. facilitating discourse – this may or may not be used
    3. direct instruction – may be fully or partially used
  2. Cognitive presence (students) – may or may not be fully required
  3. Social presence (faculty & students) – faculty being a “real person” in the online environment; may or may not be present
  • With the Community of Inquiry model, partial teaching presence that is associated with lower teaching intensity can have a class size of 25+.
  • With full teaching presence, cognitive presence, and social presence, there is higher teaching intensity and smaller class sizes of < 20 students.

Using the objectivist-constructivist continuum plus Bloom’s Taxonomy leads to a more objective and quicker determination of class size. When the Community of Inquiry model is also considered, the complexity of judging appropriate class size increases.

Examples of class size determinations considering combinations of all 3 frameworks:

  • Use of objectivist teaching methods, lower levels of Bloom’s Taxonomy, limited implementation of COI – class size can be large, > 30 students
  • Constructivist methods, higher levels of Bloom, and full use of COI model – class size should be small < 15 students
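The two examples above suggest a rough mapping from the three frameworks to a class-size recommendation. The sketch below encodes the presenter's cutoffs, but the additive scoring scheme is a simplifying assumption of mine, not part of the frameworks:

```python
# Rough decision sketch combining the three frameworks. Cutoffs follow the
# presenter's guidelines; the additive "intensity" score is an assumption.
def suggested_class_size(pedagogy, bloom_level, coi_use):
    """pedagogy: 'objectivist' | 'mixed' | 'constructivist'
    bloom_level: 'lower' | 'middle' | 'upper'
    coi_use: 'limited' | 'partial' | 'full' (Community of Inquiry)"""
    intensity = (
        {"objectivist": 0, "mixed": 1, "constructivist": 2}[pedagogy]
        + {"lower": 0, "middle": 1, "upper": 2}[bloom_level]
        + {"limited": 0, "partial": 1, "full": 2}[coi_use]
    )
    if intensity <= 1:
        return "large (> 30 students)"
    if intensity <= 4:
        return "medium (16-40 students)"
    return "small (< 15 students)"

suggested_class_size("objectivist", "lower", "limited")  # large
suggested_class_size("constructivist", "upper", "full")  # small
```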

For each course, determine how much the faculty member needs to be present and central to help students learn. Much of this determination (80%) can be discerned from the syllabus (as long as it’s a good syllabus). For the remaining 20%, review the online workload – are faculty facilitating substantive discussions among the students? Are faculty grading online discussions?

A truism – most faculty see their own course as the highest intensity, requiring the highest workload. In most cases, this is not true, so the administrator needs to review across all faculty. Develop guidelines specifying how many students are allowed into the courses offered in a given semester, and develop guidelines for the different levels in the program. For example, RN-BSN courses should have 20–40 students in each section so that faculty can grade papers and give effective feedback to students.

Synchronous teaching can add an additional level of teaching intensity – faculty need command of the tools to make them work, and synchronous sessions tend to generate questions and issues that faculty must follow up on. Synchronous teaching should therefore be factored into faculty workload.

Discussion of the Quality Matters Program – it is great for structure of a course, but it doesn’t address process and outcomes. The Sloan Consortium quality scorecard is a better model for online work, according to the presenter.

In addition to the presenter’s article (referred to above) some accessible online resources related to determining optimal class size in online education are:

REMINDER: UTHSC is an institutional member of the Sloan Consortium, which enables faculty and staff to obtain important and relevant materials related to online education. Contact Cindy Russell for details.

For follow-up material or discussions related to this or other topics, contact Cindy Russell at crussell@uthsc.edu or 901-448-6158.

Meta-analysis finds students in online learning conditions perform modestly better than students receiving F2F instruction

At UTHSC we have several programs and courses that are offered either fully online or offered in a hybrid format. Note: Hybrid = a blend of in-class and online activities.

A key question that repeatedly arises is whether the fully online and/or hybrid courses are “as good as” general face-to-face courses. This translates to a question of effectiveness of instruction, with the need to compare various forms of learning.

For those of you with questions, have a look at the report entitled “Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies,” published in September 2010 by the U.S. Department of Education’s Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service.

The bottom line of the report:

The meta-analysis found that, on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction.

Four research questions guided the research:

  1. How does the effectiveness of online learning compare with that of face-to-face instruction?
  2. Does supplementing face-to-face instruction with online instruction enhance learning?
  3. What practices are associated with more effective online learning?
  4. What conditions influence the effectiveness of online learning?

What did they do to get to the bottom line? Researchers systematically searched the research literature from 1996 through July 2008 to identify relevant and usable studies for a meta-analysis. From the more than 1,000 empirical studies of online learning identified, researchers found 45 usable studies and a total of 50 independent effects that could be subjected to meta-analysis.
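To illustrate mechanically what "subjected to meta-analysis" means: a fixed-effect meta-analysis pools each study's effect size, weighted by the inverse of its sampling variance, so precise studies count for more. The numbers below are made up for illustration, not drawn from the report's 50 effects:

```python
# Minimal fixed-effect meta-analysis sketch: the pooled effect is the
# inverse-variance-weighted mean of the individual study effects.
def pooled_effect(effects, variances):
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

effects = [0.35, 0.10, 0.25]      # illustrative study effect sizes
variances = [0.02, 0.05, 0.04]    # sampling variance of each effect
pooled = pooled_effect(effects, variances)
```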

Their key findings included:

  • Students in online conditions performed modestly better, on average, than those learning the same material through traditional face-to-face instruction
  • Instruction combining online and face-to-face elements had a larger advantage relative to purely face-to-face instruction than did purely online instruction
  • Effect sizes were larger for studies in which the online instruction was collaborative or instructor-directed than in those studies where online learners worked independently
  • Most of the variations in the way in which different studies implemented online learning did not affect student learning outcomes significantly
  • The effectiveness of online learning approaches appears quite broad across different content and learner types
  • Effect sizes were larger for studies in which the online and face-to-face conditions varied in terms of curriculum materials and aspects of instructional approach in addition to the medium of instruction

When the researchers conducted a narrative review of experimental and quasi-experimental studies that contrasted different online learning practices, the majority of studies suggested:

  • Blended and purely online learning conditions implemented within a single study generally result in similar student learning outcomes
  • Elements such as video or online quizzes do not appear to influence the amount that students learn in online classes
  • Online learning can be enhanced by giving learners control of their interactions with media and prompting learner reflection
  • Providing guidance for learning for groups of students appears less successful than does using such mechanisms with individual learners

The researchers offered caveats to their findings that included:

  • Despite what appears to be strong support for blended learning applications, the studies in this meta-analysis do not demonstrate that online learning is superior as a medium.
  • Although the types of research designs used by the studies in the meta-analysis were strong (i.e., experimental or controlled quasi-experimental), many of the studies suffered from weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and, in many cases, potential bias stemming from the authors’ dual roles as experimenters and instructors.
  • Although this meta-analysis did not find a significant effect by learner type, when learners’ age groups are considered separately, the mean effect size is significantly positive for undergraduate and other older learners but not for K–12 students.

What have your experiences been as instructor or student in online/hybrid courses? When you access a copy of the report, do the findings ring true to you?

Image attribution: “7.365_todd_takes_a_class” by Todd Morris (http://www.flickr.com/photos/alohateam/4253713645/), some rights reserved under CC BY-NC-ND 2.0 (http://creativecommons.org/licenses/by-nc-nd/2.0/deed.en). Copied by C. Russell, 2012-01-05.