On Nigeria, Systems Thinking and Corruption

The problem with the way our leaders talk, and what it says about how they—and we—think

I recently passed Ikorodu Road and realised the under-bridge traffic of road-crossers was gone, because a wire fence now divides the road.

I thought it was simply brilliant.

No more of the meaningless campaigns by government to convince passersby to stop crossing expressways. No more of the waste of human resources in the form of KAI (Kick Against Indiscipline) officers to “catch” those who persist. And no more of the lambasting of Nigerians for not loving their lives.

A simple wire fence, and problem solved.

That’s the difference between thinking systemically and the knee-jerk variety of response common in Nigeria.

I’ve long been interested in systemic thinking because I’ve always found patterns intriguing, first when I discovered science in secondary school, and then when I was trained to find them as a medical doctor (pattern recognition is maybe half of a doctor’s job, but that’s another story). But it was Peter Drucker’s The Effective Executive that first helped me capture the concept in words.

It was a passage about decision-making (as much a doctor’s main job as it is a manager’s), where he explained that your first job in addressing any problem is to determine whether it is a single random event or part of a larger pattern. This single exercise was invaluable: it gave me a way to think through problems that I could use in real life.

Once Drucker laid it out, it wasn’t hard to recognise the conceptual equivalents in scientific measurement: systematic and random errors.

Systematic errors are what you might call “reliable” errors: you can reliably expect them to occur every (or almost every) single time they have a reason to occur. A simple example would be a weighing scale that is off by, say, 1 kg. You can expect it to always be off by 1 kg. In fact, systematic errors are so reliable that, when they can be properly accounted for, you can actually plan around them.

Random errors are the beasts, though. Every scientist’s worst nightmare. They are the errors that show up without reason and cannot be planned for. A random error would be a sudden gust of wind when you were trying to take a measurement. You can’t predict them, you can’t plan for them, you can only hope they don’t happen. Worse, you can’t guarantee you’ll even recognise them, or undo their effects.

Like I said, random errors are the real beasts.
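The distinction maps directly onto how scientists handle the two kinds of error. Here’s a toy sketch (the 70 kg weight, the 1 kg bias and the gust are all invented for illustration) of why a systematic error can be planned around while a random one can’t:

```python
import random

TRUE_WEIGHT = 70.0  # kg: the "real" value we are trying to measure
BIAS = 1.0          # systematic error: this scale always reads 1 kg high

def read_scale(gust=0.0):
    """One reading: true value + fixed bias + unpredictable noise."""
    noise = random.gauss(0, 0.5) + gust  # random error
    return TRUE_WEIGHT + BIAS + noise

# The systematic part is reliable, so we can plan around it:
readings = [read_scale() for _ in range(10_000)]
average = sum(readings) / len(readings)
corrected = average - BIAS  # subtract the known, dependable bias

print(f"raw average:       {average:.2f} kg")    # ≈ 71.0 — the bias persists
print(f"bias-corrected:    {corrected:.2f} kg")  # ≈ 70.0 — close to the truth
# A single gusty reading, by contrast, can't be corrected after the fact,
# because we never know when (or how hard) the gust struck:
print(f"one gusty reading: {read_scale(gust=3.0):.2f} kg")
```

The known bias is simply subtracted away; the gust leaves no reliable signature to subtract.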

(Both kinds have their social corollaries, of course. You could think of systematic errors as character flaws, pretty much: built into who we are. Random errors, meanwhile, would be what we think of as “mistakes.” And we all know how upsetting it is when someone we think has a systematic error claims it’s a random one. Or how terrible it is when we mistake a character flaw for a mere mistake. Topic for another day.)

Anyway, you see how this all connects with Peter Drucker’s insight. What he was basically saying was, when something goes wrong, the first thing to ask before even attempting to fix it is…

“Is this a systematic error or a random error?”

You see that question? Look around Nigeria, and you can almost pin the persistence of all our problems to a failure to ask it. We don’t think systemically.

To some extent this is normal human behaviour, observable everywhere in the world. Our default is to take each problem as though it stands on its own. The difference in other climes is not that regular folks think systemically, but that systemic thinking is built into structures.

The worst of it, however, is that this failure is most evident in the biggest thorn in our national flesh: corruption.

Ayo Sogunro has written about this in his brilliant essay [1] on greed corruption versus need corruption (which every Nigerian leader should totally read), where he highlights our focus on the greedy form of corruption (stealing money) while we largely ignore the need-driven corruption that plagues our nation: corruption so pervasive that overcoming it requires nothing short of heroic conviction. And heroes are, by definition, exceptional.

What Sogunro was saying, basically, is that corruption is a systematic error and we should stop treating it like it’s random.

Now I know some don’t like to hear this, because they think it takes away personal responsibility, but that itself is a failure to appreciate how much what we call “personal” is social. (If you doubt the human propensity to social influence, simply Google “Milgram’s obedience experiments” [2] and the “Stanford prison experiment” [3]. Feel free to use the exact terms in quotes.)

Personal choice is real, but let us not imagine that it is isolated.

Anyway here’s my point: if corruption is a systematic error (as I believe, and as I understand Sogunro to be saying), a simple way to test it would be to show that it is indeed predictable, a reliable error, and not a random one. And I believe I can do just that.

My submission is simply this:

Corruption arises from a system failure at a user interface.

In other words, where a system fails to produce the expected results for a specific user, that user will likely bypass the system to get those results for himself or herself, even if it’s at the expense of the value of the system to everyone else.

Call it selfish behaviour, but you cannot declare it irrational. In fact, setting ethics aside, it’s the most rational thing to do: if a system isn’t producing the promised results, why on earth shouldn’t I bypass it for my own good? Especially if I have the means to?

Now imagine a setting where you can never quite expect the system to work for you, the individual, and suddenly we can explain pretty much all the anomalies we see in Nigeria.

  • Why do we go against one-way traffic? Because we don’t expect the road system to work.
  • Why do we jump queues? Because we don’t expect queues (which are a micro-system in themselves) to work.
  • Why do we have massive corruption at the ports? Because we don’t expect the importing system to work.
  • Why do we bribe policemen? Because we don’t expect the justice system to work! (It’s also why jungle justice is so common by the way.)
  • Even people stealing money (Sogunro’s “greed corruption”) may be at least partly attributable to a distrust of a proper compensatory system.

And let’s face it, those expectations aren’t misplaced. Those systems really don’t work.

If all this is true, making these systems work should generally improve popular behaviour, right?

Turns out it does.

Like the fencing off of the express, which I mentioned at the start. But let’s consider another example: BRT queues. As systems go they are far from perfect, but they work better than most queues in Nigeria do, which is even more remarkable because BRT queues are longer than most. But they work. Buses come mostly regularly, and when they show up they take up to 50 people (standing included), so even 200 people on the queue can be cleared in just four buses. (In the queue at my voting centre for the 2015 elections, too, people waited: the queue worked.)

When waiting won’t leave us hanging, it turns out we can wait.

One more thing.

Whenever I have these conversations about systems, someone is bound to ask, “But who makes up the systems?”, the implicit accusation being that it’s still up to individual people to change their ways.

Let me make this clear: individual people can change systems, but that’s a very long road, and it still often involves getting the buy-in of leaders at some point. It’s leaders who have the primary job of fixing systems, or building them. (And non-leaders have the job of holding leaders accountable for doing this.) And it’s irresponsible of leaders in any system to say things like, “You people should behave better,” or “You people shouldn’t act badly.” It’s worse that we let them get away with that kind of talk.

And what that kind of talk shows is a failure on the speaker’s part to appreciate a simple reality:

Systems determine minimum acceptable behaviour.

Wherever we are, whatever community or culture we’re part of, whether it’s as small as a company or a family, or as large as a state or country, it’s a system. And the way it is structured, the things it rewards, will determine what kind of behaviour is the minimum required to succeed, or even get by.

A friend recently changed jobs from a place where people were polite to another where swearing and cursing were the norm. The people in the first place weren’t necessarily better, nor were those in the second necessarily worse. They just belonged to different systems that tolerated different kinds of behaviour.

The reality is most of us tend to rise or fall to whatever levels of behaviour are acceptable wherever we are. (Think how people act at reunions.) Of course, we’re capable of fighting this, but it requires effort, and even then, who isn’t familiar with finding that one’s “high standards” in one setting turn out to be quite ordinary in another?

Whatever behaviour any system tolerates or engenders, you can’t reasonably expect the general population to act above it. You simply can’t expect the exception to be the norm, unless you’re prepared to do the work of actually making the exceptional normal. And you (especially if you lead within it) don’t do that by lecturing at people like you’re their daddy.

You do it by making the desired behaviour easy, obvious or even inevitable.

Like, you know, fencing off the express so people simply use the bridges you’ve ensured are in place.

This thing doesn’t have to be rocket science.

References (and recommended reading)

(Speaking of which, why doesn’t Medium let you reference like Markdown does? And for that matter, why doesn’t it support Markdown? Oh, well.)

  1. Ayo Sogunro on “need corruption” and “greed corruption”: https://ayosogunro.com/2014/06/11/why-i-am-corrupt-in-defense-of-nigerians-by-ayo-sogunro/
  2. The Wikipedia article on the Milgram experiments on obedience: https://en.wikipedia.org/wiki/Milgram_experiment (Read more at the references.)
  3. The Wikipedia article on the Stanford prison study: https://en.wikipedia.org/wiki/Stanford_prison_experiment (Again, feel free to read more at the references.)

If you like this article, take a moment to show it some ❤︎ and then share it! :)

Published by Doc Ayomide

I’m a medical doctor with specialty training in psychiatry, and I love thinking and writing about what it means to be human.
