June 28, 2016

Systems that learn: creating an education evidence ecosystem

To get Australia’s children learning as well as any in the world, we need rigorous evidence of what works and why – accessible to every teacher, principal and policy-maker.

The ABC’s recent documentary Revolution School has given an insight into the fantastic work of teachers and school leaders at Melbourne’s Kambrya College. The show chronicles the school’s remarkable turnaround: between 2008 and 2015, Kambrya rose from the bottom 10% of Victorian schools to the top 20%, based on its students’ academic results.

… teachers are learners, and because they learn, their students learn better.

It’s easy to see that the turnaround rests on the dedicated efforts of the school’s teachers, who go above and beyond their duties to find what works for each student. Beyond their passion and commitment, the teachers at Kambrya had a great advantage: engagement with global education experts from Melbourne University’s Graduate School of Education. These experts[1] helped the teachers engage with the great body of educational research to figure out what would work best for their students. The experts also helped teachers focus on the impact of their teaching, providing data about their lessons and students and giving direct feedback about how to improve. At Kambrya, teachers are learners, and because they learn, their students learn better.

Teachers, school leaders and system leaders need to be learning too, not just the children.

There’s a lesson in Kambrya’s experience for all Australian education systems. It’s not only students who need to be learning. Teachers, school leaders and system leaders should all be learning, too. These educators need to learn about the impact of their teaching, their leadership and their policies on kids’ learning. Unfortunately, right now they can’t do that as well as they might. Not all schools have access to the experts that Kambrya does, and so they can’t draw on the best educational research or assess their local impact well enough. That’s one reason Australia’s national education results have slipped over the past 15 years.

To get back on track, Australia needs a robust evidence ecosystem in education. This ecosystem would create a set of feedback loops so that everyone involved in education could learn about what works and why – to help all Australian children, just like Kambrya’s staff have done. Fortunately, emerging conditions mean there’s every opportunity for us to create that ecosystem now.

Emerging conditions

There is a growing recognition among Australian educators and policy-makers that key stakeholders need to look to evidence as we make decisions about what to do (and what not to do). Non-partisan organisations like the Mitchell Institute, the Grattan Institute, Learning First, and the Australian Council for Educational Research (ACER) increasingly focus public attention on Australian and international evidence about the state of Australian education and what might improve it.

Teachers and school leaders are increasingly engaging with evidence, too.

Reference to evidence is built into national, state and territory education frameworks such as the Australian Institute for Teaching and School Leadership’s Professional Standards for Teachers, New South Wales’s School Excellence Framework, and Victoria’s Framework for Improving Student Outcomes, which are themselves built on international evidence. State departments of education are investing in evidence. To name just a few examples, Victoria has brought the Teaching and Learning Toolkit to Australia, Queensland’s Evidence Hub has defined standards of evidence[2] to support educators in assessing education research, and New South Wales has established the Centre for Education Statistics and Evaluation[3].

Teachers and school leaders are increasingly engaging with evidence, too. There is a robust debate in the Australian education Twitter-verse about research evidence and its day-to-day usefulness in schools, and there are two new annual Australian conferences for educators interested in evidence: the Australian version of researchED, which began in the UK, and ACER’s Excellence in Professional Practice Conference. With Revolution School, the focus on education evidence and impact has even made it into the mainstream media.

Amidst this activity across the complexity of Australia’s education systems, the Commonwealth Government has recognised that national coordination of education evidence could be beneficial. As a result, the Productivity Commission is undertaking an inquiry into the education evidence base.

All of this activity, from policy to practice, speaks to our collective desire to improve the educational outcomes of Australia’s children. We want to learn how best to help our children, and we want to act on what we learn. Despite this, there are still missing pieces in our evidence ecosystem; filling them would go a long way toward getting great outcomes for our children.

Missing pieces in Australia’s education evidence ecosystem

Rigorous evidence of ‘what works and why’

Some of the world’s leading educational researchers and research organisations call Australia home. There is a great tradition of qualitative and descriptive research that uses methodologies such as ethnography to understand teaching and learning in specific contexts.

Meanwhile, recent reports from the Mitchell Institute, the Grattan Institute and ACER analyse existing national and international data sets, such as the National Assessment Program – Literacy and Numeracy (NAPLAN) and the Programme for International Student Assessment (PISA), to give deep insights into large-scale national trends.

Teachers, principals, and system leaders thus have to make decisions about which approaches to adopt without rigorous evidence of their impact.

Alongside these strengths, there is a weakness. There is very little rigorous evaluation of the hundreds of programs offered to Australian schools and education systems, or of the innovations they develop themselves. Teachers, principals, and system leaders thus have to make decisions about which approaches to adopt without rigorous evidence of their impact. At a systemic level, this means we are not learning what works in our schools and why. We risk delivering programs that appear to work but on closer examination do not. (See below for an example.)

Randomised controlled trials (RCTs): the ethical issues

RCTs in education are often criticised on ethical grounds, because the children in the control group do not get access to a program that is presumed to benefit them. This criticism misses two crucial points.

First, the program may not actually be beneficial.

Take the case of RCTs evaluating ‘scared straight’ programs that aim to reduce crime. Groups of otherwise-similar youth at risk of criminal activity are randomly assigned either to visit prisons and interact with inmates, or not. In the program’s logic, the youth who visit prisons would be less likely to commit crimes because, having seen the harsh conditions of prison, they would want to avoid ending up there. However, numerous trials[4] have found the opposite: the youth who participated in scared straight were more likely to commit crimes. The program actually harmed them.

Second, trials can be designed such that the control group does not miss out on the program, but gets it after a delay if it is found to be effective.

The promise of randomised controlled trials

With these points in mind, we need to commission more RCTs of Australian educational programs and publish the results, regardless of outcome. RCTs are not a panacea and do not address all important research questions. Nevertheless, they fill an important gap in the Australian education research landscape. When designed and run well, they provide the most reliable way of concluding that a program has an impact. Without RCTs, we cannot know whether a program, no matter how good it may seem, is actually the factor that causes the benefit.

We also need to know why it works, for whom, and under what conditions.

But we need to design RCTs well, so they don’t just tell us ‘what works’. We also need to know why a program works, for whom, and under what conditions. Careful analysis of secondary factors such as school, teacher and student characteristics within RCTs can go some way toward this. But linked qualitative studies should also be commissioned, to garner rich information about how the programs were implemented. When RCTs are conducted in this way, they provide educators and system leaders with valuable evidence of what works and why. The results must be published no matter what they show: it is as important to know what doesn’t work as what does.
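To make the causal logic concrete, here is a minimal sketch of an RCT estimate in Python. Everything in it – the number of students, the outcome model and the assumed three-point program effect – is invented for illustration and does not come from any real trial. The point is simply that random assignment makes the two groups comparable, so the difference in their average outcomes is a fair estimate of a program’s impact.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Hypothetical setup: 200 students, half randomly assigned to the
# program ('treatment'), half to business-as-usual ('control').
students = list(range(200))
random.shuffle(students)
treatment, control = students[:100], students[100:]

def test_score(treated: bool) -> float:
    """Invented outcome model: a baseline score plus random noise,
    with an assumed 3-point boost if the student got the program."""
    return 60 + random.gauss(0, 10) + (3 if treated else 0)

treated_scores = [test_score(True) for _ in treatment]
control_scores = [test_score(False) for _ in control]

# Because assignment was random, the groups are alike on average, so
# the difference in mean scores estimates the program's causal impact.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated impact: {effect:.1f} score points")
```

A real evaluation would add the subgroup analyses and linked qualitative strands described above, along with uncertainty estimates, but the core identifying comparison is this simple.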

This is the model of RCTs that SVA’s new social enterprise Evidence for Learning (E4L) has adopted when commissioning evaluations. E4L has announced the first two programs in the Learning Impact Fund and is working with the program developers and researchers from its independent panel of evaluators to design rigorous trials. When the trials are complete, E4L will publish the results, with a plain-English summary, on its website, so that educators and system leaders can learn from the trials.

Measuring all that matters

In Australian public discussion about education, ‘student outcomes’ typically refers to narrow but important measures of a limited range of academic skills: literacy and numeracy, occasionally extending to science.

This is largely a result of the large-scale testing undertaken. Every Australian student takes the NAPLAN tests in Years 3, 5, 7 and 9, and nationally representative samples of students take international tests like PISA (for 15-year-olds), the Trends in International Mathematics and Science Study (TIMSS) at Years 4 and 8, and the Progress in International Reading Literacy Study (PIRLS) at Year 4.

The results from these tests provide useful information at a system level to understand how we’re tracking nationally, both over time and in comparison to other nations. They help us learn about how our children are doing in some academic subjects.

But this is too limited. No parent or teacher wants their child to be fantastic at reading and sums but utterly incapable in all other aspects of life.

As a result, our national conversation will remain focused on narrower measures, sometimes to the detriment of great learning…

In 2008, the education ministers of all Australian governments signed the Melbourne Declaration on Educational Goals for Young Australians, aspiring that ‘All young Australians become successful learners, confident and creative individuals, and active and informed citizens.’ The Melbourne Declaration prompted the development of significant new educational infrastructure, including the national Australian Curriculum[5], which describes ‘general capabilities’[6] that students should develop through their schooling. These take in traditional capabilities like literacy and numeracy, but also information and communications technology capability, intercultural understanding, ethical understanding, personal and social capability, and critical and creative thinking.

The next PISA cycle will include a test of collaborative problem-solving.

Many schools, teachers and program developers have devised ways of knowing how their students are developing these capabilities. Unfortunately, we do not yet have nationally comparable ways of measuring or reporting on these wider capabilities. As a result, our national conversation will remain focused on narrower measures, sometimes to the detriment of great learning and holistic development. It’s time our national evidence ecosystem allowed for a robust conversation about other important educational outcomes.

Fortunately, international trends are moving in this direction. The next PISA cycle will include a test of collaborative problem-solving[7], in which students work in virtual teams to solve problems, with sophisticated algorithms scoring their interactions[8]. Recently, the OECD has indicated that future PISA rounds may also include tests of global competency and intercultural awareness[9]. These international measures will be helpful, but we should develop and deploy our own local measures, too, so we can learn more regularly how our children are developing.

Knowing how to get evidence into practice

We should do all of the great work above: produce rigorous, independent evidence of what works and why, and make sure that we measure all that matters. These steps would make for a stronger education evidence base. But a stronger evidence base on its own is not enough. We must get evidence into practice.

… evidence needs to be worked into authentic professional networks.

Melbourne University’s global experts were critical to this process at Kambrya College. They supported school leaders and teachers to understand and act on rigorous evidence, and to evaluate their students’ outcomes as a result. Unfortunately, direct engagement with experts is not an option for most schools. A key challenge for education systems is figuring out how to encourage the practical use of evidence by educators.

Fortunately, there is emerging research[10] on the most effective ways to do this, which leads to two key conclusions:

  • evidence needs to be worked into authentic professional networks, and
  • in order for evidence to be used to best effect, system leaders must invest in building the capability of teachers and school leaders to do so.

If education evidence is going to make a difference to student outcomes, as it did at Kambrya, then it must work in educators’ professional networks. Within schools, using evidence involves teachers and school leaders interrogating it, discussing how it might apply in their context, and deciding what to do. In making key decisions, teachers and school leaders look to their networks of peers, and want to check any evidence they find against their own and their peers’ experience. Recognising this reality, the ecosystem must generate evidence in forms that these networks can easily take up, communicated in ways that speak to them by actors they trust.

Even when evidence penetrates educators’ social networks, it does not always lead teachers and school leaders to change their practice. When educators engage only superficially with evidence, the result is at best temporary or conditional changes in practice. Educators may hold beliefs, or exhibit behaviours, that act as barriers to understanding and acting on the evidence. The table below sets out six potential barriers to adopting a new, evidence-informed practice[11].

Belief barriers

  • Context mismatch: “I believe that my context is completely different and that the practice wouldn’t have the same impact in my classroom.”
  • Change fatigue: “I believe the practice won’t have more impact than current practice and I’m sick of all the new policies, paperwork and people.”

Behavioural barriers

  • Cruise control: “I want to use the practice, but I don’t because it’s easier to rely on my ingrained habits and routines.”
  • Complexity of tasks: “I want to use the practice, but I don’t because it’s far too difficult to implement.”
  • Capacity constraints: “I want to use the practice, but I never have the time or bandwidth to try.”
  • Capitulation: “I try to use the practice, but I give up quickly because I don’t know if I’m doing it right or if it’s having any impact.”

To change student outcomes, educators must move beyond this superficial engagement and surmount these barriers. For what this might look like within a school, see SVA Quarterly’s Spreading what works in education. But it is system leaders who create the conditions that make this work within schools possible.

To help school leaders, system leaders should:

  1. regularly communicate the importance of acting on evidence
  2. create roles that support teachers and principals to act on evidence
  3. invest time and money in professional learning (or allow principals to do so), and importantly
  4. model the process of using evidence to inform their decisions.

If system leaders don’t act in these ways, it is easy for educators to remain blocked by their barriers to change.

For each approach, systems should measure how much educators’ attitudes and practices change…

As system leaders create these conditions, the evidence ecosystem must generate good evidence about the impact of their efforts. Just as they measure student outcomes, systems also need to gather data about the most effective approaches to evidence dissemination and engagement for educators. For each approach, systems should measure how much educators’ attitudes and practices change, and how much impact this has on student results. We also need to understand the capability of educators, individually and collectively, to engage with evidence and effectively implement evidence-based approaches.

Here, systems should have national measures of:

  • attitudes toward and opinions about using evidence
  • skills and capabilities to implement evidence-based practices for improvement in students’ outcomes, and
  • ability to capture data and evaluate impact at the local level.

This would provide useful information about how well educators can use what we learn. We could use this information to make wise choices about where to invest professional development time and effort for the best impact on student outcomes.

A thriving ecosystem for thriving kids

Australia rightly has high ambitions for its young people. We want them to fulfil their potential and contribute to the continued advancement of Australia and the world. To achieve these ambitions, we need a world-class education system. And for this, it is essential to grow a thriving and dynamic evidence ecosystem.

The ecosystem must produce rigorous, trustworthy evidence about what works and why. The ecosystem must let us measure all that matters: those broad skills that are critically important for young people’s success in the emerging global reality. The ecosystem must also ensure that evidence meets educators where they are, must support them to analyse, synthesise and apply it, and must measure how much those efforts improve kids’ outcomes. This is what is so revolutionary about the Melbourne University academics working with Kambrya College. If the evidence ecosystem can make that support available to all schools, even those without a direct connection to a university, we can create systems that learn. And if we have systems that learn, we will have children who learn as well as any in the world.


Endnotes

[1] Professors John Hattie and Lea Waters and Associate Professor Janet Clinton

[2] Standards of evidence, Queensland Government

[3] Centre for Education Statistics and Evaluation, NSW Education

[4] Scared Straight and Other Juvenile Awareness Programs for Preventing Juvenile Delinquency: A Systematic Review

[5] Developed by the Australian Curriculum, Assessment and Reporting Authority (ACARA) in consultation with educators and the public

[6] General capabilities, Australian Curriculum

[7] Draft collaborative problem solving framework

[8] This test is based on work developed by Professor Patrick Griffin at the Melbourne Graduate School of Education.

[9] Pisa tests to include ‘global skills’ and cultural awareness, BBC

[10] See, for example, Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill, K.S. Finnigan and A.J. Daly (eds.), Springer International Publishing, Heidelberg, 2014.

[11] Developed by Suzie Riddell, Simon Breakspear and John Bush (author) for the 2014 SVA Education Dialogue.
