Explainer: Constructive Alignment

Constructive Alignment is probably the central model of curriculum design in Higher Education. This post is a brief introduction to the theory for people who are new to it, with specific examples of how I’ve used it in context. It ends with some criticisms of the theory and a few suggestions for further reading.

Constructive Alignment: Backwards Thinking

Constructive alignment describes how educators can usefully think backwards from outcomes to interventions.

Starting from the intended outcomes of the education, teachers can design assessments which demand that students evidence those outcomes, and then teaching which helps students become people who can pass those assessments. Aligning outcomes with teaching via assessment is “constructively aligned” practice.

This is all a bit abstract! Perhaps sharing some examples from my practice will help make the ideas more concrete. I’ll work from cases where I have very little control over the outcomes to ones where I have substantial control over them.

In each case, what I am trying to show is how I have aligned the teaching with both the content and the skills demanded of graduates.

Example 1: A Class

I run a class on comparing elements in the Periodic Table. The intention and assessment are beyond my control: the intention is for students to build a synoptic understanding of Inorganic Chemistry, and the assessment is a 25-mark essay, written in 45 minutes, answering a question of the form “Compare and contrast the chemistry of boron and gallium.”

Constructive alignment allows me to narrow my focus to the assessment task: how do I get students to write a high-scoring essay in the exam room? 

Before the class I give students exemplar essays which I have written myself in 30 minutes. Students give each of these essays a mark out of 25 and then write one essay of their own from a list of possible comparisons, which I mark and return before the class.

I’m trying to do two things with this teaching. First, I want to help students understand the tacit expectations of the assessment task (“essay” can mean “full bullet points”, you should have an introduction, your conclusion might look something like this for a p-block comparison). Second, I want to exemplify the type of content which is appropriate (use of simple examples rather than esoteric ones, a clear appeal through subheadings to structuring principles like oxidation states and group trends, how to develop a comparison rather than two independent “shopping lists” of facts about elements).

Example 2: A Lecture Course

I used to give a lecture course on Transition Metals. Amongst other things, I really wanted students to be able to discuss high-spin and low-spin complexes fluently. What assessment gets students doing that? I made sure to ask an exam question on this idea each year (e.g. asking students to explain the magnetic moments of two octahedral Fe(II) complexes and interpret the difference by appeal to the field splitting). I then made sure to use part of a lecture to discuss the balance of pairing and promotion energies, giving clear examples of where the spin state varied because of the field splitting of the metal d-orbitals. This aligns my content (didactic) teaching with the assessment.
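To illustrate the kind of reasoning that exam question rewards, here is a minimal worked sketch using the spin-only approximation, assuming an octahedral d⁶ Fe(II) centre (the particular complexes in any given question would of course vary):

\[
\mu_{\text{so}} = \sqrt{n(n+2)}\,\mu_{\mathrm{B}}, \qquad n = \text{number of unpaired electrons}
\]
\[
\text{High spin (weak field), } t_{2g}^{4}e_{g}^{2}: \; n = 4 \;\Rightarrow\; \mu_{\text{so}} = \sqrt{4 \times 6} \approx 4.9\,\mu_{\mathrm{B}}
\]
\[
\text{Low spin (strong field), } t_{2g}^{6}: \; n = 0 \;\Rightarrow\; \mu_{\text{so}} = 0\,\mu_{\mathrm{B}} \text{ (diamagnetic)}
\]

The interpretive step students then need to supply is that the low-spin configuration arises when the octahedral field splitting exceeds the pairing energy, and the high-spin configuration when it does not.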

I also placed this style of problem in parallel workshop worksheets - arguably workshops align better than lectures because they involve students doing the relevant task rather than just being told about it. Not only is the workshop content aligned with the assessment, but so are the exam-authentic problem-solving skills.

I took a similar approach to other intended outcomes (e.g. field splitting diagrams in different geometries, factors affecting delta, explaining the colours of complexes, field stabilisation energies, the Irving-Williams series, Frost diagrams of d-block metals). The course had a lot of learning outcomes! I think most chemistry courses do.

It’s often much easier to carry out constructive alignment when you have more autonomy over the components of an intervention. Lecture courses are often a setting in which an individual has quite a lot of agency over all elements of the learning-assessment-outcomes sequence.

Example 3: A Programme

The OxICFM doctoral degree is designed to produce graduates who are able to articulate and solve challenging problems in chemical manufacturing. What kinds of assessment align with this outcome?

In the taught portion of the degree, we designed assessment formats which demanded that students communicate effectively (e.g. presentations, reports, infographics). Format decisions like this are content-neutral, but address some elements of ‘articulate’ as a course outcome. The content was built around challenging problems across the molecular, nanoscale, and extended solid lengthscales. Academic and industrial chemists gave our students real problems (often still-unsolved) and teaching interventions were built around small-group professional engagement with scientists and the primary literature.

Programme-level outcomes are generally much more vague than those of a lecture course, and are therefore often harder to evaluate. This doesn’t stop you from using constructive alignment to design a programme, but alignment at this level can often look quite unspecific. It is important to the course outcomes that students can give a strong spoken presentation on unfamiliar primary literature at short notice, but it doesn’t really matter which specific modules get them to do this.

I see this as the mirror image of my experience running a class on comparing elements (Example 1): at programme level I have agency in describing the big picture, but no strong role in designing the teaching or the specific assessment tasks.

Some Objections to Constructive Alignment

Awkwardness

It’s worth reflecting on how awkward it can be to think like this! Articulating professional standards is incredibly difficult, and also typically contentious (though in Chemistry the QAA benchmark statement and RSC accreditation documentation are important points of reference). It’s often hard to express what we want learners to accomplish, and people disagree about what we should be aiming to do. The act of articulation also tends to simplify, and people worry that what sits around the edges can suffer as a result. We’re used to thinking forwards from teaching, and much less familiar with thinking backwards from outcomes.

I have come to believe that this awkwardness is actually a merit of constructive alignment, because it forces you to have the important conversations. There will always be disagreement about the aims of a Chemistry degree, for example, but it’s surely healthier to discuss it than to ignore it. Becoming a little clearer about what you’re trying to accomplish when you teach is never a bad thing.

Unintended Learning Outcomes

Unintended learning can, in principle, be an important part of a curriculum. A rigid application of Constructive Alignment seems to squeeze out the space for spontaneous and unexpected education.

I think this is a fair criticism, but one easily addressed by framing Constructive Alignment as a principle of educational design rather than any kind of overwhelmingly correct approach. It’s a rule of thumb.

(There are also ways of aligning teaching with an incidental learning intention; ‘Rhizomatic’ education is probably the most famous model used for designing this kind of course.)

Threat of losing things

There are practical reasons to find constructive alignment threatening. If you have become good at giving your lecture course, it is much more comfortable to think about education from teaching to assessment to outcomes (“I cover this topic in my lectures, so it’s fine to ask about it in the exam” is a forwards logic rather than a constructively aligned one). Thinking backwards from outcomes opens up more radical questions. Is this topic something which graduates should know about? What kinds of assessment get students to use the knowledge in useful ways? Do we need a lecture on this topic, or would a class (or a lab) be better?

I see why this concern is intuitively important: meaningful constructive alignment must leave the door open to pedagogic upheaval, and you have to be willing to let go of lectures if they’re not the right tool. At the same time, it seems unlikely to me that topic-based lecture courses are going to disappear: they are efficient ways of communicating professional knowledge at scale, and a good lecture can be a fantastic education in a subject. There are really good reasons to use lectures, and having that conversation out loud seems positive to me. If we think there is value in what we’re doing, we should feel confident defending it openly.

Empty Accountability

A final criticism of Constructive Alignment is that, used badly, it can advance an agenda of empty accountability. For example, if a module’s paperwork must list three Learning Outcomes, these are often written to satisfy the Quality Assurance team rather than to support the professional reflection of the educator. Copy-pasted Learning Outcomes or monstrously vague language (“problem solving in scientific contexts”) can be rational responses if Constructive Alignment is presented as an imposition rather than an opportunity.

I think this is a really important criticism, and I acknowledge that pursuing Constructive Alignment as an institution- or sector-level policy can make this kind of behaviour common. I think it is sometimes better to have no policy of Constructive Alignment than a poorly implemented one which invites engagement in bad faith. At the same time, an educator who wants to use the tick-box constructive alignment exercise in good faith will probably find chances to do something positive.

Conclusion

I like constructive alignment! I’m a fan! I think it is a helpful way to make a lot of implicit things explicit, and once you get into the habit of thinking about alignment you start to unlock really powerful ways of seeing how higher education fits together.

It’s a versatile way to analyse the curriculum because it points at generic things (intention, assessment, intervention) in a way which honours the disciplinary dimensions of higher education. What it most helped me to do was understand how integral assessment is to education: it’s what mediates between the intentions of educators and the behaviours of learners.

Where to Read More

I found the discussion in Biggs & Tang useful as a starting point in my first years of lecturing.

This J Chem Ed paper has a really thoughtful description of aligning an entire degree programme. It seems to me that this scale of thinking is particularly important in Chemistry because the discipline fragments into subdisciplines (IOPA) so readily. The paper also invites me to think about negative intended outcomes (e.g. considering what kinds of assessment and learning align with principles like “don’t overwhelm students” or “don’t disadvantage certain demographic groups”).

“Reclaiming Constructive Alignment” is a great paper on the policy context of Constructive Alignment, and usefully maps out some of the key history of its popularity in Quality Assurance processes. Most of the criticisms I’ve given above come from this paper. It’s comfortably one of the best papers I’ve read about HE curriculum policy - I would strongly recommend it if you’re interested.