… and perhaps what doesn’t seem to be working.
Recently, thanks to a tweet from Jenni Case, I came across Michael Schneider and Franzis Preckel’s analysis of 105 variables that influence student learning performance in higher education.
Teaching staff are being urged to adopt new and supposedly better teaching methods than traditional lectures. With more than 60 different methods ranging from problem-based learning to flipped classrooms, it can be hard for even an experienced university teacher to know where to begin. And then there are dozens of student factors that also influence learning performance. Knowing which characteristics of students, teachers and instruction methods influence learning outcomes, and by how much, will be immensely helpful, which is why this is such a useful paper.
The results should be compulsory reading for everyone involved in university teaching.
The original article is a demanding read, so I have summarised it here. Dig deeper into the original if you need to.
It is not just their work, of course. Schneider and Preckel have brought together results from thousands of empirical trials over two decades of research.
This research provides higher education with a resource comparable to the Visible Learning initiative for primary and secondary educators. Visible Learning drew on about 800 meta-analyses of 53,000 original empirical studies with an estimated 236 million participating students. Reliable data from higher education is scarcer: Schneider and Preckel selected 38 meta-analyses published since 1980, covering 105 variables, with an estimated 1.92 million participating students.
It’s hard to overstate the significance of this advance. We now have comprehensive data identifying the strongest influences on student achievement, ranked by a consistent effect size (Cohen’s d) ranging from +1.91 to -0.52.
An effect size of 1 indicates an improvement of 1 standard deviation. In terms of percentage grades, a typical standard deviation might be 8%, so a variable with this effect size has the potential to improve student performance by an extra 8% in the course grade. (https://en.wikipedia.org/wiki/Effect_size provides a more detailed explanation.) Education researchers consider anything that produces an effect size greater than 0.3 to be worth considering, and there are plenty to choose from.
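For readers who want the arithmetic spelled out, here is a minimal sketch in Python of how Cohen’s d is defined and how it converts into grade points. The 8-point standard deviation is just the illustrative figure used above, not a universal value, and the function names are my own.

```python
def cohens_d(mean_treated, mean_control, pooled_sd):
    """Cohen's d: the standardised difference between two group means."""
    return (mean_treated - mean_control) / pooled_sd


def expected_grade_gain(d, sd_in_grade_points):
    """Translate an effect size into an expected gain in grade points."""
    return d * sd_in_grade_points


# A class averaging 75% versus a control averaging 67%, with a pooled
# standard deviation of 8 percentage points, gives d = 1.0.
print(cohens_d(75, 67, 8))            # → 1.0

# With an 8-point standard deviation, d = 1.0 means roughly 8 extra
# grade points, and the d = 0.3 threshold means roughly 2.4 points.
print(expected_grade_gain(1.0, 8))    # → 8.0
print(round(expected_grade_gain(0.3, 8), 1))  # → 2.4
```

This is why even the “small” effects in the 0.3–0.5 range are worth a teacher’s attention: they translate into several percentage points on a typical course grade.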
Not all analyses provided consistent results. Some reported significant variation in effect sizes, and in some cases likely reasons were identified.
This study has also set clear guidelines for future research that could reduce the number of future studies rejected for methodological reasons. Of the 124 peer-reviewed meta-analyses found by systematic searching, 86 were rejected for this study. Ten were excluded because later and larger studies were available; the largest group (41) was rejected because they covered only a single discipline in a single country.
The detailed results appear in Table 1, which takes up 14 pages of the manuscript.
The strongest effect on student learning comes from student peer-assessment: peers grade a student’s achievement in addition to the teacher-given grade (1.91, p 24). Student self-assessment (0.85) has the 8th largest effect, and the analyses all indicated high correlations between student-, peer- and teacher-assigned grades. As one might expect, other strong influences include the teacher’s sensitivity and concern for class progress (0.63), the quality and fairness of exams (0.54) and high-quality feedback (0.47).
The strongest group of instruction effects is based on social interactions between students, and between students and teachers (p 23). Encouraging questions and discussions in class (0.77), and asking open-ended questions that get students to explain, elaborate or evaluate (0.73), are the most effective in this group. Small group learning is better than individual or whole-group learning (0.51), especially when each learner has responsibilities within their group and collaboration is essential. With careful task design, small groups can work just as well within large lecture classes as in dedicated small-group classes. The other factors in this group are “teacher availability and helpfulness” (0.77) and “teacher friendliness, concern and respect for students” (0.47). Overall, as in schools and informal learning, social interactions are strongly associated with effective learning because they require active engagement by students, verbalization of one’s own knowledge, and comparison and evaluation of arguments.
Another category with strong positive effects is “stimulating meaningful learning” (p 24). It includes thoughtful course preparation and organization (1.39) and clearly explained learning goals and objectives (0.75). It also helps to show students how the content relates to their lives and careers beyond school (0.65). Interestingly, project-based learning, in which students work on authentic tasks under the supervision of a teacher, is more effective for learning skills (0.46), but less effective than lectures for knowledge acquisition (-0.22). A project-based curriculum with appropriate lectures is more effective than a single project-based course. Discovery learning is far less effective than conventional lectures (-0.38). Conceptually oriented tasks that elicit students’ understanding and misconceptions work well (0.47).
As expected, clear, understandable content exposition (1.35), teacher stimulation of students’ interest (0.75), speaking clearly (0.75) and enthusiasm (0.56) are all strong presentation factors (p 25). Presenting with slides (0.26) and orally explaining diagrams (0.38), especially dynamic visualizations (0.87), also help, as long as all distracting detail is removed and slides show only keywords rather than full sentences. Though the effects might be smaller, these might offer easier performance gains for many teachers. Note-taking has a small positive effect (0.14) for material not shown on slides.
Information technology (p 25) only shows strong effects when used to implement other effective strategies such as student peer-assessment. Online learning is almost indistinguishable from conventional classroom learning (0.05). However, blended learning, a mix of classroom and online learning, is more effective (0.35). Small group learning is much more effective face to face (0.51) than through group chats (0.16). Resource-intensive techniques such as computer simulations and games provide only modest effects, except for individual games (0.72). Intelligent tutoring systems perform poorly compared with human tutors (-0.25), but can be helpful if no other tutoring is provided.
Students’ self-efficacy, their belief that they can succeed, is the strongest student influence (1.81, p 27). Surprisingly, general intelligence is a comparatively weak factor (0.47) compared with school achievement (0.91) and admission tests (0.79, p 26). Three learning strategies show strong influences: class attendance (0.98), persistence and effort (0.75) and adopting task-dependent learning strategies (0.65). Deep learning strategies have no significant effect, but shallow learning has a negative effect (-0.38). As expected, procrastination (-0.52) and academic self-handicapping (-0.37) show negative effects. No personality variables make much difference except for conscientiousness (0.47), test anxiety (-0.43) and emotional intelligence (0.35).
Schneider and Preckel conclude that we now have broad evidence on what makes higher education effective. Teachers make a difference: there are attributes and classroom strategies that teachers can use to make a large and positive effect on student learning. However, the ways teaching methods are implemented at the micro level also have a large effect. For example, presentation slides work much better with just a few keywords than with long sentences.
Teacher training works well when teachers receive feedback both on video and from colleagues. Ensuring that teachers have allocated time to prepare their courses also helps.
Relatively easy improvements include:
- encouraging frequent class attendance,
- stimulating questions and discussion,
- providing clear learning goals and course objectives,
- asking students open-ended questions,
- relating the content to the students’ lives and futures,
- providing detailed task-focused and improvement-oriented feedback,
- being friendly and respecting the students,
- complementing spoken words with visualizations or written words (e.g., handouts or presentation slides), and
- letting the students construct and discuss concept maps of ideas covered in a course.
Taken together, the greatest influences on student achievement come from social interactions.
Assessment is at least as important as presentation and deserves as much attention in course design.
Combining teacher-centric and student-centric classroom activities works better than either on their own.
Education technology is most effective when it complements and supports classroom social interactions. There is no empirical evidence to support the notion that technology can revolutionize education, and technology cannot replace teachers.
To me, an interesting omission was cooperative learning, an instructional technique in which students work together in small groups on a structured learning task, with positive interdependence and individual accountability as important factors. Small group learning (i.e., without requiring interdependence and accountability) was included (0.51). A recent meta-analysis provided data with these additional requirements (0.54).
Recent technologies such as MOOCs, clickers, social media and others were not available long enough before the 2014 cut-off date for sufficient systematic research to have been accumulated.
One popular technology-enabled instructional technique, the flipped classroom, has been evaluated (0.36) in a more recent meta-analysis. The positive effect was only evident if teacher contact time remained the same and quizzes were added, and there was no significant effect on student perceptions.
Implications for higher education researchers
There are still opportunities to evaluate different instructional methods; however, the focus is now on combinations and variations. While it is evident that combining effective methods leads to further improvements, it is not clear whether the effect sizes combine linearly, or whether there are diminishing returns as more improvements are combined. It is essential that researchers collaborate to evaluate methods with large samples of students, and preferably use control groups, to obtain as much statistical power as possible.
What’s not working so well?
Beyond teaching effectiveness lie troubling findings on graduate employability, above all the finding that university performance is almost unrelated to career outcomes. To me, these questions should now be a stronger focus for education researchers. We know how to teach. We know less about what to teach and why, and how to align the hidden curriculum, which today rewards independence, with the needs of societies and workplaces where interdependence is critical.
My own experiences
Looking back on my own teaching career, I would love to have had the benefit of this research. After 40 years, I had reached similar findings by trial and error. The last course I designed incorporated student peer-assessment (1.91) and student self-assessment (0.85). I tried to be thoughtful about course design, and developed comprehensive lesson plans with expected learning outcomes and carefully designed assessments (1.39). I had learned to explain ideas clearly (1.35), and developed strategies to encourage near-90% class attendance (0.98). I think I encouraged questions and discussion. I know I was not always approachable or available out of class hours. I did teach students listening skills, and used additive rubrics for self-assessment. Above all, I encouraged as much in-class social engagement as possible and arranged for students to work in small groups with industrially experienced tutors.
I hope that new generations of university teachers can learn from this research and adopt effective teaching methods rather than learning from trial and error like I did.
1. Schneider, M. and F. Preckel, Variables Associated with Achievement in Higher Education: A Systematic Review of Meta-analyses. Psychological Bulletin, 2017. 143(6): p. 36.
2. Hattie, J., Visible Learning for Teachers: Maximizing Impact on Learning. 2012, New York: Routledge. 270.
3. Kyndt, E., et al., A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educational Research Review, 2013. 10: p. 133-149.
4. van Alten, D.C., et al., Effects of flipping the classroom on learning outcomes and satisfaction: A meta-analysis. Educational Research Review, 2019. 28(100281): p. 18.
5. Trevelyan, J.P., Transitioning to Engineering Practice. European Journal of Engineering Education, 2019. 44(6): p. 821-837.
6. Trevelyan, J.P. Incremental Self-Assessment Rubrics for Capstone Design Courses. in American Society for Engineering Education Annual Conference. 2015. Seattle, WA, USA: ASEE.
Photo Credit: NeONBRANDING via unsplash.com