The touch paper has been lit. Black Lives Matter have taken to the streets. The revolution has started – statues have been pulled down and TV programmes have been removed from streaming services.
Apologies are also coming in thick and fast – a tearful Keith Lemon actor, a "sincerely sorry" Ant and Dec. Many white people are now joining the cause, stating their views on social media and beyond.
But racism is about action in everyday life, not just words or hashtags at a time of uprising. We can be careful about what we say – language is conscious and controllable. But it is perfectly possible to hold deep-seated racist views, sometimes subconsciously, and simultaneously announce you are definitely not racist.
Some 10 years ago, I started looking into the vexed question of the under-representation of people from Black, Asian and minority ethnic (BAME) backgrounds in academic and senior posts in universities. Universities were publicly wringing their hands about this issue. It was emotionally charged, with accusations of racial prejudice on the one hand, and the idea that racism is all in the past, with people just trying to gain advantage by crying prejudice, on the other.
But what if most of us, at a conscious level, are no longer prone to open racial prejudice? What if at some deeper level, there is some independent system which is more susceptible to racial bias? This was the question we explored using the now well-known implicit association test.
The basis for identifying bias in such tests is how quickly people associate white or black faces and names with concepts like "good" or "bad". Research has shown that white people are quicker at associating white faces or names with the concept "good" than they are for black faces or names.
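To make the logic concrete, here is a minimal sketch of how a reaction-time bias score of this kind can be computed. This is a simplified illustration of the standard IAT "D-score" idea (difference in mean reaction times between incongruent and congruent pairings, scaled by the pooled variability), not the actual scoring procedure used in the studies described; the reaction times are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT-style D-score.

    Takes reaction times (in milliseconds) from a block where the
    pairing is stereotype-congruent and a block where it is
    incongruent. Returns the difference in mean reaction times
    divided by the pooled standard deviation. A positive score
    means slower responses in the incongruent block - the pattern
    interpreted as an implicit bias.
    """
    diff = mean(incongruent_rts) - mean(congruent_rts)
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return diff / pooled_sd

# Hypothetical reaction times in milliseconds
congruent = [620, 650, 605, 640, 615, 630]    # e.g. white face paired with "good"
incongruent = [720, 760, 700, 745, 710, 735]  # e.g. black face paired with "good"

print(round(iat_d_score(congruent, incongruent), 2))
```

In practice, real scoring also discards outlier trials and penalises errors, but the core measure is the same: a systematic asymmetry in response speed between the two pairings.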
We tried to improve on the well-known Harvard test, where all the faces are unfriendly, by making all faces nice and smiley. Surely, there would be no implicit racial bias here.
Not so. We found a medium to strong implicit pro-white bias in white participants. This was regardless of what attitudes to race they reported that they had.
We also studied the shortlisting process for academic jobs experimentally. We presented participants with the CVs of four job candidates – two white, two BAME – for various positions, with the content of the CVs kept identical but rotated across candidates. We also used a remote eye tracker to see which parts of the CVs they looked at on a computer screen.
We found that white experimental participants were ten times more likely to shortlist two white candidates for a lectureship post than two BAME candidates with exactly the same CV. We also found that white participants spent more time looking at good information on the CVs of white candidates and bad information on the CVs of BAME candidates.
Combating implicit bias
In other words, our "rational" decisions about the suitability of candidates are based on biased patterns of fixation. This is the reality of prejudice in action, working away below the level of consciousness. The practical implications are clear.
We should never use "first thoughts" or "gut instincts" as a basis for shortlisting, and never conduct any shortlisting meetings under strict time pressure. The more time pressure, the more powerful the effects of these implicit processes will be.
A helpful tool may be "implementation intentions" – which are conscious plans to override unconscious instincts. This may be in the form of reminders such as: "If I see the application of a candidate from a BAME background then, if I am white, I should be careful to scrutinise the best sections of the application once again before I make my final decision."
It sounds clunky and unnatural, but it can work, blocking the effects of parts of the brain that want to jump to an immediate conclusion.
Recent task force recommendations have spelt out other ways of combating implicit bias – including committing to a culture shift, introducing bias literacy, encouraging mentoring and empowering individuals to recognise and overcome their own implicit biases.
But the implicit association test itself is not without its critics. A new study argues that we should focus not on the test, but on the actual psychological mechanisms that can lead to implicit bias in actual discriminatory behaviour.
For example, with multiple sources of information, there may be biased weighting of certain information over others, such as emphasis on experience versus education in assessing job applications, where this weighting may vary depending on the race of the candidate. We must also tackle biased interpretation – such as the perception of an object as a weapon when in the hands of a member of a particular racial group.
My colleagues and I have also argued that the implicit association test is not even genuinely implicit, because it hinges on explicit categorisation by race. Participants have to explicitly assign the facial images they see into the categories "black" or "white", "bad" or "good" and so on.
For this reason, we have just developed a new race implicit association test probing multiple attributes at once. Participants are asked to categorise images of black and white male and female individuals on the basis of either race (as before) or gender (also associating it with good or bad).
This means that we can look at people's racial biases when they believe they are sorting faces by gender. Again, reaction times are used to measure the associative connections.
We have found that there is still a race bias even in these tests, but the effect is reduced in size. This new test may have important diagnostic potential for the future.
We need a revolution in action – not just in rhetoric. New critical thinking about implicit processes could be a powerful tool for identifying the hidden barriers to equality of opportunity. It may even be the quiet harbinger of the real revolution still to come.
Geoff Beattie, Professor of Psychology, Edge Hill University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.