Leadership and maintaining discipline

I’ve covered before the fact that a title of “Leader” doesn’t actually make you a leader. Simply being in charge doesn’t bestow leadership, which is active, example-setting, and interactive. There’s a reason the phrase “lead by example” exists, and countless tales of commanders who led their men into battle earning more respect than faceless commanders elsewhere.

Yet still one of the most pervasive ideals of management roles is “maintaining discipline”. That sounds reasonable at a glance, right?

 Well, let’s look at what it really means:

[Image: dictionary definitions of “discipline”]

So, realistically we’re talking about conditioning, control, enforcement, self-control, or punishment. Only one of these speaks to me of a skilled worker effectively getting their job done; see if you can spot it. The rest speak only of a sense of power.

This might make sense in a military setting, but in business – a socially complex, multi-industry environment relying on innovation and progress – it makes a lot less sense.

What I find interesting is that when you look back, the idea of maintaining discipline is a holdover from the earlier days of Taylorism where it meant ensuring people in a factory production setting essentially acted like components in a machine, and “discipline” meant removing as much humanity as possible to enforce efficiency.

With the changes of the modern world and market, as well as today’s advanced complexity and role requirements, this is distinctly anachronistic; if we’d had the capabilities then that we do now, we’d have automated all of that from the get-go, and I think modern working life would look very different.


You can’t break down something complex into smaller pieces, only something simple or complicated. Most business is complex.

But how do I maintain control of a workforce?

The idea of requiring discipline, as if a company is an army, makes a mockery of the mutually beneficial contract between company and skilled workforce, who are supposed to fit together to produce something of worth. Workers are adults; if you don’t trust them to do their job, work from home, be sensible – whatever it is – and have to micromanage them or police them to ensure they are not falling out of line, why have you even hired them? What culture does this suggest you have? How do you get things done efficiently? And what management style have you been conditioned to?

This is an issue I’ve seen with a lot of MBAs in the past, and I’ve had people who teach them at prestigious business schools (such as London Business School) agree on this point: the core, traditional business concepts are still taught despite having never been truly fit for purpose, and because it’s a qualification, it’s taken as the be-all and end-all of management science – despite having hardly changed since Henry Ford and Frederick Taylor created the core concepts! An MBA is a definite achievement, don’t get me wrong, but it’s not an argument-ending mic-drop. There are huge benefits to studying for one because you’re taught traditional management, but we also need to treat it with insight and curiosity to move forward and find better ways to do things.

A qualification is the start of true learning, not the end of it.

Just because something has always been done that way… it doesn’t automatically follow that it’s the best way to do it.

But some people need discipline!

If you impose strict restrictions and policies on a workforce that is not invested in the system, you invite gaming behaviour, cynicism, sycophancy, and a lack of engagement. At this point, yes, people are perhaps acting in a less adult fashion and require discipline to realign them with the company’s expectations – but that’s the whole point: this is not a healthy expectation in the first place.

The same ideal of “discipline” also sees the repression of the innovators in a company – the heretics, mavericks, outliers. Discipline becomes about fitting in, meeting metrics that are more important than the outcomes they purportedly measure, and – essentially – supporting a rigid hierarchy.

The very fact you have created an environment like this as a role-titled leader has two effects:

Firstly, you have now invited the very behaviour you tried to avoid, allowing those who game and manipulate politics to hide behind rules and policies and actively disrupt work for personal gain; in other words, you have encouraged a toxic culture and atmosphere. These people – who aren’t invested, don’t care about the company or their coworkers, and will do anything to get themselves ahead – do need discipline, but they are rarely the ones that get it.

If you worked on a basis of investment and mutual trust in the first place inside a healthy culture, they’d have far fewer places to hide and could be mitigated or removed much more quickly and cleanly – or not invited in in the first place!

Remember:

Culture is defined by the actions and inactions of leadership. If discipline is required company-wide, accountability for this is held only in one place.


Secondly, you’ve set up fertile soil for the Cycle of Woe:

[Diagram: the Cycle of Woe]

When the hierarchy in a company matters more than anything else, the system isn’t working. If you truly lead – without relying on the bureaucracy you’ve been trained to treat as the structure that maintains you – people will invest in you and the company, and you won’t need to “maintain discipline” beyond the very few actual troublemakers, each of whom needs to be dealt with in context. Not all troublemakers are troublemakers; sometimes they just need to do things differently, and can then deliver outstanding benefits.

And this attitude that hierarchy is everything is instilled from the very first interview, with many companies ghosting prospects, demanding to know what candidates will offer whilst wielding contracts that expect more hours to be worked than are contracted, and treating the process as if prospects are vying for a great honour – rather than looking at fit and human skills to move forward to mutual benefit.

Leadership relying on enforcing discipline simply isn’t Leading.

This is why I find the entire concept a barrier to business, to trust, and to human interaction. When a manager says “I need to see what you do with every minute of your day” even though you deliver consistent, excellent results and outcomes, what they’re really saying is “I feel the need to exert power over you”, and I can virtually guarantee they are bad at their own job and not thinking about benefiting the company if they’re spending their time micromanaging yours. When they say “I need to maintain discipline”, it’s worth asking why. Is this one problem person? Is it everyone? Is it really a problem, or just something requiring a paradigm or interaction shift?

If the answer is simply “because I’m in charge”, there is a major problem.

I once had a boss tell me I wasn’t allowed to do something I needed to do for my job, and effectively block my career for his own purposes (along with constant micromanagement, isolation, and offline talks with other management, as well as directly breaking my professional trust). When I inevitably had to do what he’d told me not to in order to actually do my job, and then raised the issue with his boss, he used it as a demonstration of how uncontrollable and untrustworthy I was. HR’s response – even though they found in my favour! – was “but you disobeyed a direct order”(!). To which my response was roughly:

“What is this, the army? Am I doing the job better than anyone else?”

“Yes.”

“Do you want this to continue?”

“Yes.”

“Then please stop ‘disciplining’ me and let me get on with doing that.”


Being a “boss” or “in charge” doesn’t make you a leader or give automatic respect. It’s also worth noting you can be a leader without it being in your role description; anyone who influences people positively within the company is a leader, whatever their actual job. Look at how they enable, invest, and encourage – without the power inherent in a title, or from the bureaucracy – and you can see how leadership works, and discipline is reduced to the only beneficial form: self-control. 

Don’t just take my word for it – there is a wealth of evidence, studies, and frameworks built over decades around this very real problem. People are complex, and not perfect; companies need to truly understand how to manage them. Realistic expectations must be set on either side.

There’s a lot more to this, and it integrates into a lot of areas, but for now:

I’d suggest it’s time we rethink our conditioned ideas of command and control, and maintaining discipline. 

How to be Positive 2: Positively Negative

In Part 1 I spoke about Positivity, what it is, and where it’s been going wrong. Now I want to explore more deeply, to further identify what is Toxic versus what is Genuine, and where we often lose sight of constructive positivity or negativity.

But before that, I want to clarify that there are two types of “negative” I refer to in this article (which should be obvious in context!):

  • The concept of negative as “not being positive”
  • Something that is actually damaging to us

Firstly, I want to look at why people might be negative – and to point out that it is almost impossible to be 100% positive or negative all the time, so we should probably stop blanket-accusing people of this. It’s a very inaccurate and unhelpful habit which can reinforce problems.

Negativity sucks

There is no denying that toxic negativity is vampiric. No wonder we try to avoid it! Sometimes people are negative to harass, to bully, to compete, to divert, to assert power or control; people can be negative through personality trait or experience. Some people are cynical to a degree that impacts getting things done. Sometimes people are negative as a result of being a jobsworth, or from a limited, rigid mindset that sees little growth. Negativity is also habit-forming, and there is a perverse pleasure to always picking the negative path – at least you won’t be disappointed, right? It’s very hard to be motivated when you think like this exclusively.

All of these are negative negatives, and we all know how draining they can be. But I want to expand on negatives that can actually be positives, ignored to our detriment, and also highlight how disturbing and damaging it is to invalidate valid negativity.

Some “negativity” is actually simple constructive criticism. So how much of what’s being labelled negative is toxic? Perhaps not as much as we think. When beneficial information to resolve genuine issues is automatically ignored because it isn’t positive, problems increase. An attitude of relentless “only provide solutions, not problems, be positive” no matter what is not always realistic or pragmatic.

Something else to bear in mind is that all of this can depend on whether people are trying to sell something. Negativity cuts through falsely positive bullshit and is often straight-speaking. Sales pitches, manipulations, and cons are almost universally positioned as positive. People say they value straight-shooters, but most of us don’t like anything that invalidates a positive message. This becomes a rabbit hole of whether positives are really negatives and vice versa, so ask yourself when using or confronted with either:

Are they genuine, constructive, meaningful and appropriate? If so, chances are they could be valid, and you should pay attention and not just dismiss them.

So now we’ve had a think about that, let’s look at several ways positives can actually be negative when out of context or balance.

Belief in yourself

It is an amazing realisation to believe in ourselves – to realise that we are capable of so much more than we limit ourselves to. Self-limiting beliefs are responsible for much of the dissatisfaction we may feel, or our apparent inability to achieve things.

But we also have to acknowledge true limitations, and the fact we do not control every single aspect of our lives.

It is as much of a lie to tell ourselves we are totally limit-free as it is to tell ourselves we are too limited.

The tendency of humans to not find balance and veer between the two means we form very destructive patterns and imbalances (more on how we form mental patterns and make decisions here in The Decisive Patterns of Business).

So how can unlimited self-belief be harmful?

Let me ask you a question: If you are told you can never fail if you believe hard enough in yourself – that you can do anything – and you believe that truly; strive, and do everything you can to achieve it; and for whatever reason (life decisions, chance events outside your control), you just don’t achieve what you have set yourself, no matter how hard you try…

Who will you blame?

Being sold this personal maxim constantly means that if we fail, we are likely to believe we are at fault, that we just didn’t believe hard enough. And although I say there is no failure, only feedback, and indeed speak about failure being necessary for learning, growth, and success, here it is often taken as abject failure; not a lesson, but a lessening.

And that’s fundamentally not right. Let me explain.

Self-belief cannot be rigidly applied to everything in life. For every incredible story, every driven hero of mine who has achieved incredible things against the odds – Arnold Schwarzenegger, for example – there are hundreds, thousands, even tens of thousands that had the same drive and determination, but didn’t get quite the same opportunities at the same time, whose contexts were just different.

For instance, by telling schoolchildren they literally cannot fail and removing failing grades from exams, we set unrealistic self-belief and expectations for the real world, where failure is an inevitable lesson.

Your personal drive and belief are incredibly powerful; never believe I am not supporting that. Have that goal; use that drive. Be inspired! Removing limiting self-beliefs allows you to achieve your full potential, but that is not the same as being able to literally do anything, and I think this is an important distinction.

For example – you only have to look at the diminishing returns among the fastest sprinters in the world, Usain Bolt and his ilk, to know that there is a literal human limit to what can be achieved. Bolt ran 100m in 9.58 seconds, achieving a peak speed of almost 28mph, after years of incredibly intensive training. Other men who are likewise ludicrously fast (many of whom have tested positive for performance-enhancing drugs, so are in a way superhuman) have come close; his record stands out even among those times, yet most of them are mere milliseconds apart. So the chance that you could remove your self-limits, train, and one day compete with, equal, or even beat Usain’s record may be vanishingly small, but it’s something you could still conceivably achieve if you start from the right context.

But you also have to be realistic. If you say you will, as a baseline human, beat an 8-second 100m world record, it is inhumanly unlikely. Add to that the fact that the people who get to this level have decades of training, have managed to avoid career-ending injury, have superior genetics for this event, and started from a context in life very different to many of ours – they were the best of the best, naturally in most cases, before they even began training – and you can see how it’s just not possible for almost anyone to say that only self-belief stands between them and Usain’s record. A whole range of factors, including serendipity, are involved.
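As a quick aside for the numerically minded, the arithmetic behind that claim is easy to check. Here’s a minimal sketch (the 9.58s record and ~28mph peak come from the example above; everything else is simple unit conversion, with figures rounded):

```python
# Rough arithmetic behind the sprinting example (a sketch, not sports science).
MPH_PER_METRE_PER_SECOND = 2.23694  # standard unit conversion factor

def average_speed_mph(distance_m: float, time_s: float) -> float:
    """Average speed over a sprint, in miles per hour."""
    return (distance_m / time_s) * MPH_PER_METRE_PER_SECOND

bolt_average = average_speed_mph(100, 9.58)         # ~23.3 mph
bolt_peak = 27.8                                    # approximate measured peak, from the text
hypothetical_average = average_speed_mph(100, 8.0)  # ~28.0 mph

print(f"Bolt's average over 100m: {bolt_average:.1f} mph (peak ~{bolt_peak} mph)")
print(f"An 8-second 100m requires an average of {hypothetical_average:.1f} mph")
# An 8-second run demands an *average* speed above the fastest human's *peak*,
# sustained from a standing start - which is why no amount of self-belief
# closes that particular gap.
```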


We must all acknowledge that sometimes, we simply can’t close that gap between dream and reality. Life is not a level playing field, and treating it as if it were is wrong. Perhaps chance, or genetics, or a situation stops you doing what you want to the degree you wish to. Not everyone is equally unconstrained by their choices. The people who achieve their perfect dream may be driven, excellent at seeing opportunity, or have the means to make a good start, but that doesn’t mean they would still achieve it in another context.

This is where serendipity and complexity align; opportunities and context may exist for one driven, talented person that simply don’t for another who is equally so. In Cynefin, you realise that finding new emergent paths to success can deliver even better, more achievable goals than the original perhaps unattainable one. I feel very uncomfortable when I see the focus on the people who have achieved something amazing portrayed as “this would be you if you only believed in yourself enough”. Very often, their story is incredible, inspiring, against all odds, and they are amazing people – but there is more to it than just human spirit. It is wrong to simply say that someone in a different context who doesn’t achieve it is always less driven, discerning, or capable. They are not automatically a failure.

Yet that is exactly what we tell people when all they hear is “believe enough, and you can do anything”. It may incorrectly suggest that those who didn’t achieve simply didn’t want it as much.

We need to be super careful of language here. These are true:

Believe in yourself enough, and you can reach your full potential.

We can achieve much more than we believe. We have not failed if we don’t achieve something perfectly.

But this is not:

Believe in yourself enough, and you can do anything.

How are we measuring success? Who do we punish when we can’t achieve it? Who controls this? Dreams and reality must match up at least a little to be achievable.

Speaking of what’s within our control:

Positivity and Control

I constantly say that humans polarise very easily. I often hear that there’s no point trying to do things as we have no control, or conversely that we have complete control over our destinies. We also often create false causal links – for example, that anything not intensely positive must be negative. The truth, as usual, lies in a fluctuating balance somewhere between the two.

 I think we need to accept two things:

  •  We can positively control much more than we often realise (Believe in yourself!)
  • Some things we simply can’t control, and that’s not necessarily negative (Don’t believe in yourself exclusively and unrealistically!)

Bruce Lee said that we have a choice; that being constructively positive is how we begin to make changes, and he is absolutely right. I do this in my own life, and it’s incredibly powerful. But we also can’t control everything in our lives, and the myth we can prevents us from growing and learning properly at best, and damages our mental health at worst. As he says, it’s how we begin.

In this video, Derren Brown makes some great points (highly recommended watch):

[Video: Derren Brown on happiness]

It reinforces my points below on why happiness, positivity, and optimism should not be conflated.

I particularly like his thoughts on why so many of us get it wrong. How much of what we acquire to be happy is actually only to impress other people and project positivity for their benefit? What is our aim, and personal measure, of what happiness means?

You could also define positivity more as:

Instead of wanting what we don’t have, shifting our desires so we want what we already have is truly positive.

He also references the Stoics, and the idea that:

…there are things in your life that you are in control of, and there are things in your life you aren’t in control of; and the only things you are really in control of are your thoughts and your actions.

Everything else is subject to outside influence. What other people do, think, how they act, what happens to them, what the world does to all of you, is outside your sphere of control. You may or may not influence it; but influence is not control, and in an age of “influencers” it’s important to remember this.

So in this context, positivity can simply mean a pragmatic decision that everything you cannot control is ok – not good, not bad, but just there – because you simply can’t control it. And you have to let that sense sink in; mere words are not enough for comprehension. Constructively change yourself positively, but don’t lose sight of reality.

Positivity, Optimism, Happiness, Fulfillment

It is extremely important to differentiate between these, more than ever now that we’re constantly bombarded with a conflation of them through social media and work.

We’ve looked at what Positivity is; Optimism, on the other hand, is more concerned with not being worried about the negatives in a situation – a mental attitude reflecting a belief or hope that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable, regardless of evidence. Optimism is usually a trait where you hope things will always work out well, whereas positivity is a choice. Optimism may then be responsible for blindness to realities or problems, because it’s often a refusal to accept they matter – or even exist.

Happiness, meanwhile, can come from enjoying short-term experience, or from long-term fulfillment/satisfaction (I’m defining happiness fairly simply here). In Derren’s video he mentions Daniel Kahneman speaking of the experiencing self and the remembering self: if you are given a choice between doing something really fun or doing something meaningful, which one would you say would make you happier?

Many people will often pick the fun activity because the experiencer will be catered to at the time. But the rememberer will look back at the meaningful activity instead, and the chances are you will keep more of a profound, deep sense of happiness from that; in other words, you are more likely to find real fulfillment.

Another way to consider it is short-term gain requires a constant re-buzz, whereas long-term satisfaction sustains you.

Ask yourself: are you being optimistic, genuinely positive, or toxically positive to achieve happiness – and which of these really fulfills you?

The conflation of these terms and our lack of awareness of these two selves show that most of us have a very poor understanding of what really fulfills us a lot of the time; and until we experience something traumatic enough to force a reframe outside our set mental patterns, we probably won’t gain a new perspective.

Trying to fulfill ourselves by “patching” or “hacking” with quick quotes and memes is nowhere near as useful as a genuine depth-of-character change. That short dose of inspiration doesn’t last, but a profound memory does.

But what about expectations around being positive?

Societal and business demands for Yes! Can-Do! and other immediate “positives”

This is something we do a lot. Dismissing everything non-positive as simply “negative” strips it of significance and is profoundly damaging, and these demands from establishments or other people can quickly become ingrained. They ignore reality, and tie straight into the short-term fulfillment and experiencing self mentioned above. They invalidate any concerns or emotions, and demand intrinsic optimism regardless of the consequent cost. Once you set this as a pattern, like any other habit humans create, it’s hard to break.

I made a video on the “can-do” attitude a few months ago:

[Video: the “can-do” attitude]

And again, in context, the concept of can-do and not being immediately negative is great – it gives clients confidence, it sets initial goals, and much more. But too often we see can-do as a substitute for able-to-do. It’s not enough to just say yes if you can’t achieve things. That’s not positive; it’s disingenuous (and from 22 years in DevOps, it’s something I have seen an awful lot in tech!).

A short story: I was once asked to write a technical presales proposal for a current customer; my first draft was very technical and not dressed up. I was told it was too negative, and they wouldn’t buy it – which is fine, you have to highlight benefits. So I rewrote it. It was returned again. I was to remove anything even remotely negative, meaning that any realistic cautions would be ignored. I reluctantly complied, objecting on the grounds that proposing this just to get a sale would mean an implementation standing an unacceptably high risk of falling over within 3 months. I was told we’d worry about it then and to just fulfill it now, and also to rewrite again and remove anything even neutral.

At this point, it seemed ridiculous – to sound positive enough to get a quick sale, a technical consultant was being asked to essentially write a marketing document which was false and high-risk to the solution and the long-term reputation of the company (also, since I would implement it as well, I stood a high chance of being blamed when it almost inevitably fell over). This is a great example of toxic business positivity. There was no balance, realism, or care; it was false-positive to achieve a short-term, selfish singular goal.

There is no point in saying yes to everything you’re asked in life, because you simply can’t deliver it all.

Yes-ing is also a problem internally for leadership, because it leads to sycophancy and an ungrounding from reality for leaders making decisions; this isn’t positive. It’s harmful. Auto-validation is an extremely bad thing – in business, in friendship, in life in general.

Before you automatically condemn something as “negative”, take a reality check and see if it is, in fact, constructive and realistic – and if it IS, you stand a good chance of being immersed in a toxically positive atmosphere that is detrimentally skewing decision-making.

Remember: you can approach a realistically negative situation in a positive manner!

It isn’t just actions and situations that positivity is demanded in, however: emotions are perhaps a far more important area where we make unrealistic demands. I’ll go deeper into the harmful side of memes when they suppress valid emotional negativity with examples in Part 3, but first I want to go deeper into why suppressing negative emotion is terrible for our mental health.

The denial of non-positive emotion

This is one of the most harmful possible outcomes of toxic positivity. When someone is fake-happy, positive-toxic, they’re actually invalidating themselves and others. The demands of toxic positivity can lead us to do four terribly harmful things:

  1. Minimising valid concerns and feelings, leading to saying our big problems “aren’t big at all” because all we have to do is “stop being negative”
  2. Comparing and contrasting issues, fostering a belief that all our emotions and circumstances can be ranked on a shared scale – that we all experience our troubles and feelings in exactly the same way regardless of context
  3. Negativity shaming, which denigrates and excludes people socially because they aren’t bubbly, happy and at ease all the time. This dismisses natural, valid emotions and forces faking positive vibes to the point you refuse to acknowledge anything less than excessive happiness, and also marginalises personalities, cultures, neurologies, and more
  4. Repressive behaviour control, where we deny our own feelings to fit in and not be outcast – important enough that we would rather risk our own mental health than be perceived as being negative. By using 2) to say “we don’t have it that bad”, we try to hide our emotions when we think the cause is too small

Of course, some issues are genuinely more minor than others, but when you look at them from a point of view of trauma rather than just negativity, it changes perspective somewhat. We all experience trauma differently.

Toxic positivity tells us it’s not okay to feel down, especially if the rest of your life is going great. This isn’t right. No one should feel like they have to hide their true emotions because society plagues us with this artificial idea of a happy, positive life, especially online.

This is especially hard-hitting when you consider how many people suffer from real depression, bipolar disorder, personality disorders, autistic spectrum disorders, or traumatic events. Invalidating neurology, chemical imbalance, or personal trauma is hugely damaging, and we do it to ourselves as much as to others with this constant air of “just be happy!”. This is terribly insidious – tendrils of it touch male mental health, suicide, female mental health, dysmorphia, dissatisfaction, burnout, and so much more.

In addition to that, we’re not even addressing the source of this negativity in a realistic fashion, but marginalising it in favour of just somehow becoming positive.

Genuine positivity is finding constructive ways to get the best out of a situation.

Work is a prominent example. Many of us know how damaging it is to feel trapped in a career we may not like (up to 85%). Simply demanding you feel happy or make a change doesn’t fix it – some people have no other job to go to for supporting their family, or may be too anxious or stressed for a host of reasons. So the answer may well be to change job – and for some an inspirational sudden change may well work, but not for everyone.

We then also look at corporate culture issues like praise addiction where we demand positive praise to the point where it doesn’t matter if we have earned it or not as long as we feel good and get a bonus, or we look at the demands in companies to accolade others to the point of it being almost policy, and the web becomes ever more entangled. Toxic positivity is everywhere.

Again, this isn’t defending toxic negativity – far from it. But rather than getting advice on mental health from the average life coach, it’s worth talking to psychologists and psychiatrists who have to deal with the mental health fallout and who actually know about this.

So should we be negative?

Everything needs context. We should be negative where appropriate in its many meanings, because we’re human. Sometimes that means being realistic. Sometimes it means being sad. Sometimes it means not invalidating the experiences or situations of others. Being human is about balancing and fluctuating between many states, including positivity and negativity.

If we didn’t have the negatives to deal with, we wouldn’t have a basis for comparison for being positive. Demanding we simply remove it all wholesale from our lives is therefore ridiculous.

 “Everything worthwhile in life is won through surmounting the associated negative experience. Any attempt to escape the negative, to avoid it or quash it or silence it, only backfires. The avoidance of suffering is a form of suffering. The avoidance of struggle is a struggle. The denial of failure is a failure. Hiding what is shameful is itself a form of shame.”

Mark Manson, The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life

All this means accepting that the negative is also part of us, that it can ground us, balance us, and that it can be constructive and appropriate – and thus actually positive in context!

In part 3, How to be Positive 3, we will summarise and take a look at examples, things we can do, and ways forward.

How to be Positive 1: Positivity

I’ve been meaning to post on this subject for some time, as I think it is extremely important to have a conversation about, both personally and professionally.

Now, before I start, I’ll be very clear – positive mental attitudes and mindsets are invaluable. Finding the positive outcomes and lessons in any possible situation is also invaluable. These enable us to move forward, be productive, and enable growth.

But as with everything I speak about, there must be both context and balance, and both are now often lacking in our daily drive to be positive.

Let’s explore what positivity is, how it can be both beneficial and surprisingly damaging, and what we can do to maintain that context and balance, and use it to help instead of harm.

As this is extremely important to understand, it’s in-depth over 3 parts – and a little contentious in places.

Positivity is Positive!

It is critical to clarify what these articles deal with. Positivity is not only one thing, although it’s often referred to as such, and here I look at the current societal focus on positivity as a set of concepts and a choice.

Just say yes! Don’t be negative. Can-do! Always focus on the good. Don’t let the negatives drag you down! Life throws things at you, you have to laugh and move on! Laugh, and the world laughs with you; cry, and you cry alone. Surround yourself with positive people! If you stay positive, good things and good people will be drawn to you. When life gives you lemons, make lemonade! Could be worse. Always someone worse off than you. Don’t limit yourself!

“If you are positive, you’ll see opportunities instead of obstacles.” – Confucius

There are a thousand things said in every culture about being positive, especially during times of hardship. We revere and tell inspirational stories about people who achieve this state, often quite rightly. Humans are curious in that when suffering problems we often don’t just get on with surviving, as many animals do, but actively look for ways to still fulfill ourselves where possible; to consciously push through hardship with a smile and find some joy.

Anyone who knows me personally or professionally will tell you I’m a positive person, but I strive to be genuinely positive.

Genuine Positivity is based in empathy and connection, in acceptance and opportunity. Finding ways forward whether the situation is good or bad, and being thankful for what you have; this is positive. Learning from hardship, sharing and laughing with others whatever is happening, accepting yourself mind and body; these are positive. Positivity helps us Dream Big even when we feel we exist small. It helps us find some peace and contentment whatever our situation. And it helps us achieve things we would otherwise consider impossible from self-limitation.

Positivity requires meaning.

True positivity is supportive, sharing, constructive, and beneficial to ourselves and others. It is deliberately applied in context to the person and situation, which helps align you with the universe at large.

Modern positivity is often considered to have derived in part from Stoicism; that is, seeing situations in the most positive light possible and looking for the good in them for the best ways forward. But positivity is also widely being mistaken for something a lot less beneficial, and I see it not just used and said across LinkedIn and other social media, but also demanded within companies and lives as if it’s a hidden policy (this is actually often a Dark Constraint as defined in Cynefin by Dave Snowden of Cognitive Edge – positivity is actually a very complex, dispositional area).

In fact, that’s one reason I think positivity as a concept is so popular; in Cynefin terms, positivity is a form of certainty, and it helps avoid the panic-inducing negativity of not knowing what to do, alongside potential physiological responses (feelgood hormones et al). In that respect, positivity provides direction and stimulus, which is good.

Unfortunately, I believe we are actually experiencing a subtle “perfect happiness” pandemic, much of it derived from the relatively new awareness of mental health and social media’s strong and constant influence, and our inability to balance their effects in our lives.

That sounds a little extreme – so let’s explore the idea.

Where can Positivity go wrong?

Positivity can become highly toxic in several ways, especially when generalised, and this can be very easy to mistake for genuine, beneficial positivity – especially online, where context is naturally diminished through snippets of narratives. It can be spread anywhere people post words or images without applying them to someone or something through empathy, but instead for attention, for likes, to be heard instead of to listen. It can be spread anywhere people demand happiness through association, via attitude, to aid “hustle”, to fulfill requirements, or just in general. It is a grey area – but it exists.

I can’t stress this strongly enough: I know people whose refusal to acknowledge negative emotions, or whose insistence on self-belief above pragmatism, has destroyed their lives and relationships, badly damaged their mental health, even led them to suicide. Toxic positivity is not an overreaction, nor is it a joke. It is dangerous and antithetical to society, individuals, and business.

I want to break down something we conflate all too often, here:

POSITIVITY does not automatically equal HAPPINESS, but we are often sold this concept. 

We have this subversive belief that we can force happiness through positivity; that we can use optimism to coast through any barrier; that simply by presenting the face of happiness, we can be fulfilled, or that the universe will align with us.

I said above that Genuine Positivity is based in empathy and connection, in acceptance and opportunity; Toxic Positivity is based in demand, selfishness, and lack of empathy or context. It isn’t supportive. It’s dismissive. It demands to be heard instead of listening and understanding. It says you must be happy, or at least appear happy, no matter what, especially for other people. It is invalidating. It is undermining. It is repressive. And it is incredibly damaging, especially because it’s become ingrained in society, business, and interactions.

It’s led to companies demanding that people and processes not be negative in any way. It’s led to people portraying perfect lives on social media, even as they suffer from mental health problems behind the influencing. It’s led to men “putting a brave face on things” to seem unemotional and strong, and to entertainers trying to cope with pressure using drugs because the show must go on. It’s led to quotes and memes being applied to everything, with only a brief dopamine release from gathering “likes”. It’s led to people feeling that they can’t seek support from others, so as not to commit the social faux-pas of “bringing them down”. It’s led to people focusing so much on finding the good that they don’t deal with – or even sometimes acknowledge – the bad at critical moments.

There are a number of things that define Toxic Positivity, which inhibits success as much as extreme negativity, and I want to look at these in more detail.


Identifying Toxic Positivity

There are a number of ways to identify whether something is genuinely or toxically positive.

“Toxic positivity is ‘pushing down’, denying, or minimizing negative or uncomfortable emotions (and actually, a person’s experience or reality)”

Rachel Eddins, M.Ed., LPC-S, CGP, licensed professional counselor

Toxic positivity is a genuine, widespread psychological issue, and it operates at a societal level. More than ever, people are seeking happiness, but you can’t gain that by repressing or ignoring the other parts of your life.

This dark side to positivity comes in many forms:

  • The promotion of belief in oneself being the sole factor to achieving a perfect goal
  • Conflating optimism (a trait), positivity (a choice), and happiness (a feeling of enjoyment/satisfaction/fulfillment)
  • Deliberate ignorance of long-term consequences in favour of short-term gain
  • Demands for Can-Do attitudes, hustle, “just say YES and fulfill later” in business
  • The removal of all emotive response that isn’t totally positive from yourself and others
  • The pursuit of perfection
  • A refusal to acknowledge reality (denialism)
  • An insistence on labelling all “non-positives” as “negative” (a form of emotional self-gaslighting)

I’m sure you can think of others. In context, any or all of these could be detrimental or beneficial. All too often, they are blanket applied.

But surely, I hear you cry, belief in yourself, setting a goal that is a dream, and working towards that is what we should do?

Yes – absolutely. Direction, removing limiting self-beliefs, and achieving our full potential are what we should strive for.

The power of self-belief and following your dreams is immense.

But you also have to be constantly mindful of reality and context; for every person who achieves their dream, someone equally hard working and focused doesn’t, because not everyone starts from the same line at the same time. Life can – and does – get in the way. Some people’s dreams are simply unattainable, and you can actually harm yourself by ignoring opportunities that are better and more attainable in pursuit of perfection. This is inattentional blindness to the nth degree; the treating of life – a complex, unordered situation – as ordered.

Achieving your full potential doesn’t automatically equal being able to do anything at all no matter how unrealistic!

The other danger is that these goals may be achievable, but at what cost? Burning out is not a cost worth paying – I should know, it’s happened to me twice. Achieving something positive even if it breaks you is still a negative. I’ve written about burnout elsewhere, but it’s linked to this, too.

Spreading and connecting Positivity

I’ve mentioned memes and quotes, so I also want to break down in more detail how these can be positive – and not so positive. Bear in mind, I’m not talking about humour; I’m talking about something specifically designed to promote “positivity”.

Many of us struggle for meaning, or have experienced hardship. We want to find or share comfort and support. And that is great. I have no problem with feelgood stuff; I love it. It makes me, well… feel good! Watching someone rescue an animal, watching a little girl dance with her disabled brother, reading a quote or personal story that touches my core and reminds me of the good or profound, that we can move forward and find our way; all of these and many more are good things that can bring some light to our day.

What is less beneficial is the casual posting of positive memes and quotes, especially ones that are essentially meaningless and vague. Many of these are really well-meaning, and designed to tap into the general idea of being positive, but a generic post can be at best an attempt to salve a deeper sense of anguish, and at worst a replacement for actually constructively dealing with problems. I’d rather have genuine support from a connection or friend in context than a generic, borderline toxic “you got this” or “it will get better”. I don’t like hearing “trust the process” unless it’s very specifically applied, either, because not everyone who trusts the process ends up achieving – this links to the unwavering self-belief I mention in part 2. Because of human nature and the dopamine hit they provide us, these posts often end up getting higher engagement than genuine, applicable and beneficial content, which isn’t always a good thing, either for us long-term or the algorithms within the social media platforms. It can end up saturating our attention.

[Image: a vague motivational meme]

…ok?

I genuinely get why we all love these, myself included, and I certainly think they have a place on, say, LinkedIn. It’s so easy to post a quick quote that has some meaning, maybe pop up a decent picture, and especially on LinkedIn people want motivation. But whilst it’s encouraging, it can also be habit-formingly lazy, and lead to carelessness as long as we post and get engagement.

The number of us who genuinely know what we’re doing is probably close to zero, especially in business, because life is complex and we’re all feeling our way. And although these casual posts can be part of the problem, they still aren’t nearly the worst part. The problem creeps further when people use a feelgood or inspirational meme or quote that has zero relevance to what they’re posting, just to gather likes, or spew buzzwords to sound positively profound when they are talking nonsense (and I have seen a number of people do this and get worshipped for it daily!). Posts designed as positive purely to manipulate algorithms and gather likes, or literal rubbish posts for the sake of it, are much more problematic; they use this deep need for positivity to disingenuously gain influence, engagement, and visibility.

I see so much quality original content on social media, so many genuine stories and meaningful posts, and I find it frustrating when much higher engagement results just from posting an empty, random quote that isn’t even verified. It happens when people are pushing the “influencer” idea rather than actually being a genuine thought leader, and it makes me uncomfortable because it strengthens this falsity that people, desperate to find more meaning, buy into wholesale.

If the only goal is to “influence” and be seen to do so, rather than to genuinely be positive and enhance people’s lives, that is toxic behaviour. Dave Snowden estimates that within ~9 months any system becomes subject to gaming behaviour; add dopamine hits and self-importance to that, then drop in some narcissism or attention-seeking, and it’s far worse. By far the most alarming is the advice from some major influencers – which can be very damaging and dangerous – being spread as positive just because they have influence and it has the right buzzwords or delivery to sound inspiring, not because of any substance or evidence.

This is a subjectively grey area because people often post with the very best intentions – but if you take a step back and really look around, it’s easy to see that this has become a movement that doesn’t always have substance behind it. People almost automatically applaud and spread anything that even sounds vaguely profound because we all seek profundity, certainty and meaning.

Next time you see or consider posting something like this, I’m not saying don’t – I love this stuff as much as the rest of us! But I’m suggesting that we perhaps consider the context, meaning, and whether it’s genuine or not. Is there thought and constructive positivity there? If in doubt, you can always check in with any number of excellent psychologists on here – they can tell you what is positive, or not! I mention a couple in the next parts.

Summing up the Positives

So, we need self belief, a positive mental attitude, and to find the best ways forward in any given situation; but we also need a pragmatic view, and to accept that even achievable goals can change (or become even better), that we can’t control everything in life, and that – moment to moment – we have a choice. We can be supportive, use profound meaning to inspire and give hope, and encourage others on their own paths. There are many things we can and should do to find fulfillment, but we must do them with meaning, empathy and support, in context to the situation. This is where I think positivity truly lies.

What we mustn’t do is apply an empty, inappropriate and meaningless veneer to situations and people, and repress anything that even hints of “being negative”, especially when it might be beneficial to be mindful of evidence in reality. Not only does this not achieve what we hope for, but it causes serious problems. In a time when we are more aware than ever of our mental health, it’s worth considering this:

Just as you can’t cure all physical problems by exercising, you can’t cure all mental health problems by trying to force happiness.

In part 2, How to be Positive 2: Positively Negative, I’ll delve deeper into some of the dark-side points above, and explore the two meanings of “negative” a little more.

Be positive – but make it genuine!

Comfort Zones, and where to find them

The chances are you’ve heard, read or used the expression “Comfort Zones” even if it isn’t part of your day-to-day work.

I find, however, that a lot of people talk about them as yet another buzzword, a platitude to trot out – even up to the point of telling people to dive into crippling fear – but they don’t often think about how Comfort Zones can best be used. Like anything else, they require context and nuance!

This will come as no surprise to anyone familiar with my work, but… much of this is all about balance.

So, let’s explore some models showing what we think they are, how they might apply, and why you might not be using your understanding of them to your best advantage for yourself… or others.

(My model below is a work-in-progress, and is subject to future change!)

What is a Comfort Zone?

The typical definition of a Comfort Zone is a behavioural state within which an individual operates in an anxiety-neutral condition.

When we are comfortable and experiencing no anxiety or challenging stimulus, humans tend to become extremely sedentary, both physically and mentally. Although we excel at change, if we see no reason to do so, we won’t. We value comfort and convenience above almost all else, in fact, and we are very good at lateral thinking and finding shortcuts to simplify and ease processes, which means we are reluctant to make changes to these systems once in place.

Routine, Pattern, Familiarity, Relaxation, static Repetition are all hallmarks of a Comfort Zone.

That said, and taking into account the rest of the article is dealing with representations of movement out of this Zone, Comfort Zones are absolutely a good thing. They provide safety, recovery, mental surety, and balance, and are much needed and natural parts of us. We should be balancing our time between comfort and growth; comfort is a physical and mental resting place.

It is not at all true that we must always be moving outside our comfort zones. We like comfort for a very good reason!

The misinterpretation of “Comfort Zones”

There is an oft-repeated school of thought that implies that this is a Comfort Zone, and one I see represented a lot on LinkedIn and other social media:

[Image: a popular “Comfort Zone” diagram]

This is not really accurate, nor is the assumption that you will automatically grow and learn simply by moving outside the comfort zone. All this does is open up the opportunity and the motivation to do so; work and risk are still required, and there is nuance depending on context.

People tend to speak about “moving outside your Comfort Zone” as a binary action; it isn’t.

Most of the motivational posts I see work on this basis, and that’s fine; a first step is a first step. But you have to take more steps after the first one to continue a journey. It’s fantastic to be inspired to take that step for change, but how many people are then discouraged by taking a risk and not seeing themselves grow or learn quickly or obviously? Human nature then makes it less likely we will do this again in the future.

We are also conditioned to want quick results but, like getting fit, these things require time and consistency. In my role I constantly see people expecting quick results and change simply from doing something they normally wouldn’t, and being disappointed when their life doesn’t radically shift.

As with most things humans get involved with, we love to over-simplify a concept that requires a little thought. Comfort Zones are complex, because the humans that form them are complex, and they are dispositional – you can guess their boundaries and what might happen, but they can change depending on circumstance, and you can’t predict what will happen.

Other interpretations

Comfort Zones are subjective in nature; they are intensely personal to us all. What is comfort for one is discomfort for another – outside a basic defining scope, of course (sofa vs torture rack tends to be a no-brainer!). Many of us have our own mental image of our own Comfort Zones.

Whilst there is hypothetically no “wrong” way to represent them, it is important that any representation is clear and in line with scientific, psychological, and sociological understanding, so that people can more accurately map it to their own context.

For example, I occasionally see models like this:

[Image: a “Comfort / Fear / Learning / Growth” zone model]

It’s a great motivator and outline for a number of steps, but in terms of how we work under normal circumstances, or in the normal order of things, it isn’t really accurate, which can lead to some confusion or differing expectations.

For example, I wouldn’t call the Fear Zone here Fear, but Demotivation, or perhaps Reluctance. Fear – true fear – is almost always greatly inhibitive to learning and growth. If you are panicking or in a heightened state of anxiety, you can’t learn, because your body has essentially shut everything down but fight-or-flight. Those are things you can’t really push “deeper” through; you have to control them, because the deeper you get the less control or higher thought processes you can maintain. Anxiety and panic typically get worse the more you push, not better!

There is always an element of pushing through initial risk, fear, uncertainty, complacency, and anxiety to catalyse change, but I see this less as a zone and more as a border between zones. These are the gatekeepers we must overcome to move into a zone where we can change, be challenged, and perform optimally.

Many of these models also give an apparently clear progression, direction, and almost waterfall-style expectation of how you can progress, and that isn’t how we work, especially when you realise these Zones are tied into emotion as well as cognition. If you look at basic psychology you will get an idea of how we work – we’ve known for centuries and more that thrusting someone untempered into a danger zone has a much higher attrition rate than safely teaching them over time. It’s make or break – and that may be beneficial in extreme circumstances, but it isn’t the best way for us all to learn and progress!

In terms of motivation and pushing, these models are fine, but I prefer a more realistic model that reflects how humans actually make decisions and work, based on our current knowledge.

So what’s a more accurate representation?

A typical modern model of a comfort zone will usually look something like this:

[Image: concentric zones expanding outwards from Comfort, through Optimal Performance, to Danger]

This is very basic, of course, but I find it’s accurate for most situations. There is no specific direction or set of things that may happen; it shows the progression that typically happens when you make changes to learn and grow, expanding outwards. For me here, learning and growth are so intertwined as to be synonymous.

The middle zone is labelled optimal performance instead of growth or learning, because it doesn’t only apply to those concepts. It suggests that a relatively small amount of stress motivates or catalyses us to do something with greater focus, which gives the opportunity to optimally grow and learn, but it’s not such a great leap that it shuts us down in utter panic.

As an example, you don’t learn and grow in swimming terms by throwing yourself alone into the deep end of a pool when you learn to swim; this typically only delivers terminal feedback where you drown, and if not you haven’t really learned much of use. Instead, you learn to swim in increments in shallower areas or with swimming aids, and preferably with an instructor, creating stress and risk but also psychological safety, and as you get better you push your boundaries. It’s important to clarify what constitutes “pushing” and “fear” here!

Of all things, humans fear uncertainty the most. It’s the most consistently stressful state for us to be in. But there’s a modicum of stress and uncertainty that gives us adrenaline and heightened perception, makes us ready and breaks complacency. It allows us to perform tasks we know, or learn tasks we don’t, at an optimum level of focus and control.

That’s quite different to such high stress and anxiety levels that our brains shut down and we’re operating purely on adrenaline and cortisol.

I’ve spent some time over the years looking at how learners operate and learn, as well as continually doing so myself, and I’ve also spent a lot of time looking at how humans make decisions and how our minds work and form patterns; it’s integral to a lot of what I do with agility, culture, learning, leadership training and more. These toe-dips into science, psychology, and sociology have helped me develop a more detailed model that integrates with what we know, not just how we learn.

With that in mind, there are three major things to bear in mind when you consider Comfort Zones:

Identities

Something we don’t think about much because it’s intrinsic to our ability to do many things is Identity.


All of us have multiple different identities, to which we link different modes of thinking and understanding. These are in turn linked to mental patterns and how and why we form them, as well as tribes we form – or are formed around us from meta-complex tribes (you can think of these as tribes-within-tribes at differing levels of complex systems, like a 3D Venn Diagram. Don’t think about it too hard for now!).

All of this makes identities quite a variable and often conflicting arena for us to navigate.

We switch between these identities, which are unique in combination to each person, quite seamlessly and without thinking about it; it’s almost as if our brains rewire on the fly to operate differently depending on circumstance. I’ve always been fascinated by how some of the greatest thinkers I know, who are methodical and quiet, can do another activity (watch a game of rugby, for example) and become intensely loud, tribal, and involved, as if they are a different person, and think nothing of the process. I love watching someone termed excitable and with attention deficits find their favourite hobby (such as painting!) and spend hours quietly working on it.

None of us are two-dimensional in aspect; we have myriad faces, and this is important to remember when we consider Comfort Zones and how we deal with them.

Systems of Support

The most widespread, automatic support structures we have are tribal. Humans create tribes without thinking, both in the real world (families, communities, countries, et al) and in the abstract (music, hobbies, philosophies and more). I’ll go into tribes and their negatives more another time, but here they serve multiple positive purposes, including humanising, binding and helping people invest and be collaborative to mutual benefit, even if that’s just moral or psychological support rather than physical survival. When you integrate into a tribe, you assume an identity for that tribe.

Not all identities are tribal. We may not share them with others – they may be intensely personal and thus segregated. But many identities are tribal, because we are social creatures who share knowledge and are comfortable with a sense of belonging.

No alt text provided for this image
How many different tribes and identities can you link to this? Where might they apply, and which ones do you belong to? Are any of them oppositional?

The Zone is not Alone

Given that we have multiple identities – and the tribes they may link to – it then makes sense to look at models which acknowledge that we have multiple Comfort Zones, and each has different boundaries and limits.

Think about it for a moment: do you know anyone who is quiet, shy, and retiring – except at one quite specific thing? Or perhaps you go to do something you’re very comfortable with, but the situation and environment suddenly make it uncomfortable?

A wonderful thing about identities and tribes is that they buffer us against uncertainty, because you have a degree of support and understanding; when many people come together like this, you all support and buffer each other too. You might take a risk on a night out with friends that you never would alone. The same goes at work: making a decision that affects a company result, you are protected to an extent by the bureaucratic structure and policies, as well as colleagues, in a way you aren’t in your personal life – where we are far more reluctant to chance lasting consequences.

All of these tribes and identities link into – but exist outside of – your Personal Comfort Zone, which is really your inner sanctum.

This is the one you can least afford to breach, and your willingness to risk and expose yourself here will by nature be far lower. It’s your last defence before the naked you, as it were, and we usually find the idea of changing who we are at our core anathema, because then we would potentially no longer be who we are. So we take fewer risks, change more reluctantly, and our Danger Zone is much larger, with the Optimal Growth Zone smaller than in other models; it’s much easier to overstep into panic, anxiety, and uncertainty.

Conversely, strong tribes that we identify with have different Comfort Zones. One more thing to consider is how we tend to collate smaller identities under larger ones – and how they sometimes affiliate to more than one tribe.

So, to expand upon the above example of Personal vs Work with a general example:

No alt text provided for this image

Notice that there is a definite line of danger between your personal and professional comfort zones – although skills and actions may pass across, there are things we will do in one we absolutely wouldn’t do in the other!

Our professional model may have a larger Comfort Zone because we do a lot of mundane, safe things day in, day out, and the Optimal Performance Zone may likewise be larger because giving it a go isn’t often as risky as in our personal lives (for example, whilst not desirable, losing a job is generally less destructive long-term than losing a personal Zone – consider what happens when someone’s confidence is destroyed, and the knock-on effects it has personally, professionally, and more).

The Danger Zone for our personal Zone is therefore likely to be proportionately larger than our professional one, because at work (depending on role and company!) we generally accept or hide that we make the odd mistake; the impact of a mistake in our personal lives can be much more shameful or impactful to us.

To dive into Cynefin-think for a moment – consider the boundaries between the Zones in a model as constraints, and consider how they may be more rigid, elastic, or permeable depending on which model you’re in; where there might be catastrophic failure; and how you can equate “psychological safety” to “Safe-to-Fail Probes” in Complexity and shallow Chaos.

The interesting thing here is that the different identity-linked models also feed into each other; you may take small amounts of confidence or lack of confidence from one to the other, depending on your mental framing and state, so for example proficiency over time at work or a sport can feed easily into personal life, and vice versa.

Finding New Comfort

I’m still thinking about the correct visual representation of a basic model, but imagine the Personal Comfort Zone in the centre, and other identity(/tribally)-linked Comfort Zones all around it, each Zone connecting to every other model’s Zone, and every Zone a different relative size, as if they were neurons in a network, and you start to realise how many – and how interconnected! – they can be.
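
For the programmatically minded, here is a minimal sketch of that network-of-Zones idea as a data structure. The zone names, sizes, and the confidence-transfer rule are all invented purely for illustration:

  # Interlinked, identity-based Comfort Zones sketched as a fully connected
  # graph. Names, radii, and the transfer rule are invented for illustration.
  zones = {
      "personal": 0.3,  # inner sanctum: smallest appetite for risk
      "work":     0.6,
      "sport":    0.5,
      "hobby":    0.7,
  }

  # Every zone connects to every other, like neurons in a small network.
  links = [(a, b) for a in zones for b in zones if a < b]

  def feed_confidence(target, amount=0.05):
      """Proficiency gained in one zone can expand another's boundary."""
      zones[target] = min(1.0, zones[target] + amount)

  feed_confidence("personal")  # e.g. growing proficiency at work feeds home life
  print(len(links), "connections between", len(zones), "zones:", zones)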

Hopefully this exploration into how much more (and how many more!) Comfort Zones are than our usual daily perception suggests has been useful, and has given some food for thought. Far from being a simple concept, Comfort Zones have many levels and contexts, and are actually very fluid and ever-changing – and at times, we all need to return to them.

Consider how it might all apply to yourself. The next time you think about “moving outside your Comfort Zone”, remember not to assume growth is automatic – we have to work at it! Think about which Comfort Zone model it might be, how far is too far, how to maintain the psychological safety for optimum learning or operation, and how, if it isn’t your personal model, it might be used to grow that, too.

You might be surprised how many you find you have.

Rise of the Machines Part I: Mind Machinery

Here’s a field that integrates heavily with a number of the areas I talk about, and I think it’s a good time to explore a few of them. Internet of Things, Internet of Us, Automation, AI: simultaneously incredibly exciting prospects with amazing potential, yet tiresome new buzzwords used to jump on the bandwagon (about 40% of European “AI Companies” don’t use AI at all).

I think it’s best to split out the fields of AI and Automation here as, although they are linked in some cases, they affect us in different ways. So first, let’s look at the rise of Machine Intelligence; watch out for Rise of the Machines Part II for a discussion of Automation. I’ll also abbreviate a lot of the terms as per the headings below.

Artificial Intelligence (AI) and Cognitive Computing (CC)

I first heard about true AI in business nearly 15 years ago as the next big thing, and I think we erroneously believed it was immediately poised to drastically change the market. It then apparently went quiet; AI had yet to burst into our consciousness the way we gleefully describe it now. Certainly in its nascent state, I don’t think it had the support and understanding required.

Fast forward to now, and most people know it’s been used for some time algorithmically for social media, or that a computer beat Garry Kasparov at chess – but it now goes far deeper than that. AI can be used to find new viewpoints, crunch huge amounts of data, position things to hack into group human behaviour, even manipulate our decisions.

In fact this is a huge field, and one I am learning more about all the time. You can probably split it initially into two main branches:

  • Artificial Intelligence, which looks to solve complex problems to produce a result
  • Cognitive Computing, which looks to emulate solving complex problems as a human would to produce a process

So, going back to Garry Kasparov: Deep Blue was definitely AI, because it performed what was essentially a brute-force computation to solve a complex task better than a human can, but it wasn’t Cognitive Computing, because it wasn’t mimicking how a human would play chess at all (and it seems there are multiple different types of “mimicking”).
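
To make that result-versus-process distinction concrete, here is a deliberately crude sketch. Both “solvers” are invented toys for illustration, nothing like real systems:

  # Toy contrast: "AI" cares about the best result, however it's reached;
  # "CC" emulates a human *process*, even if the result is worse.
  def brute_force_max(items):
      """AI-style: exhaustively check everything, return the optimum."""
      return max(items)

  def humanlike_max(items, attention_span=3):
      """CC-style: emulate a human skim - check a few items, satisfice."""
      best = items[0]
      for value in items[1:attention_span]:  # humans rarely check everything
          if value > best:
              best = value
      return best

  data = [3, 41, 7, 99, 12]
  print(brute_force_max(data))  # 99 - the result matters
  print(humanlike_max(data))    # 41 - the process matters; result may be worse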

Which of these you want will be contextual. Do we want a self-driving car emulating human decisions? Or do we want it to give the best possible result as quickly as possible? Food for thought; the answer may be “both”.

To further confuse things, some AI can also be CC – but both of these are already becoming buzzwords, a cargo cult (“do you do AI?”), so they’re also going to be defined by industry marketing in some cases.

The influence of AI on business structure, not just process

Back to an old favourite, the Knowledge Management Matrix!

No alt text provided for this image

Now, we know about Taylorism, Process Engineering, and Systems Thinking (at least, if you read any of my articles you do!) – explanation here.

But when we look at how humans ACTUALLY work, and thus most companies, we see much of it lies firmly in Social Complexity.

AI/Machine Learning (ML) currently sits fairly solidly in the Mathematical Complexity quadrant, with a touch of Systems Thinking; essentially, the use of mathematical models and algorithms to find optimal output. Where we go wrong again here is that we often use this to predict, when it can only really simulate.

What true Artificial Neural Network Intelligence (ANNI) may eventually offer is an interesting cross-over potential of mathematical complexity and systems thinking – and with cognitive computing leveraging these, the possibility of a non-human processing unit that at least partially understands human social complexity. We’re already looking at branches of AI for decision-making.

This could be very interesting – and potentially harmful, as humans are (demonstrably!) easily socially manipulated; but also because as a nonhuman, an AI capable of doing this would be out of context and therefore not bound by any human constraints. Even a CC-AI would be emulating a human, not being one, and I think it’s going to be some time before that becomes fairly accurate. It’s a very dispositional field. 

To extend the future exploration, it’s hard to tell if a resulting endpoint intelligence would be alien – or, because of our insistence on modelling human minds, an intelligence with something like sociopathy or psychopathy. Even with that much understanding of humanity, the NN might need something more to produce a feeling of empathy, and it’s hard to say how that would work for a nonhuman intelligence.

Speaking of Neural Networks…

Artificial Neural Network Intelligence (ANNI/ANN/NN)

This is much closer to the human brain in structure, and is a subset of AI. Neural Networks have been around since the 1940s, as scientists have long been fascinated with human brains.

No alt text provided for this image

We have an approximate storage of 2-3 GB in our heads, which is pretty poor – it’s not even a decent single-layer DVD! But because we store and recall data not just by creating neural connections between neurons, but by firing them in sequence, our actual storage is estimated at about 2 PB. That’s enormous – although it’s not immediately accessible (we just don’t work like that). It’s also something we use in myriad, intuitive, individual ways to arrive at decisions.
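
As a back-of-envelope check on how connectivity dwarfs raw storage – every figure below is a rough, commonly cited approximation, and the arithmetic is illustrative only:

  # Why combinatorial connectivity dwarfs raw storage: a rough estimate.
  # All figures are commonly cited approximations, not measurements.
  neurons          = 86e9    # ~86 billion neurons
  synapses_per     = 10_000  # 1,000-10,000 connections per neuron (very rough)
  bits_per_synapse = 4.7     # one published per-synapse information estimate

  capacity_bytes = neurons * synapses_per * bits_per_synapse / 8
  print(f"~{capacity_bytes / 1e15:.1f} PB")  # lands in the petabyte ballpark

Depending on which approximations you pick, you land somewhere in the petabyte ballpark that the ~2 PB estimate sits in.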

We knew computers worked totally differently – logically, via calculation – and the race has (until recently, with CC and qubits) been about how quickly we can do it sequentially. Even with CC, we just can’t emulate the intuitive leaps and individually distributed cognitive functions that arise from human social complexity – yet, if ever.

But now we’re edging into the fringes of these areas, and ANNI are being combined with incredible hardware, software, and new understanding to incrementally produce something much closer to sentient intelligence.

Machine Learning (ML)

Although my specialty is Human Learning, when you start talking about Machine Learning and AI there are some interesting similarities, as well as some drastic differences.

Machine Learning and AI aren’t quite the same. AI pertains to the field of artificial intelligence as a whole, and machine learning and Deep Learning are subsets of this. They rely on the use of algorithms for pattern-hunting and inference – although I need to spend more time understanding if the latter is sometimes more imputation (reasonable substitution) than actually inferring in many cases.

Machine learning can also be considered a specific, logical process which doesn’t carry the human intuition aspect that AI and neural networks are seen as able to edge into with Cognitive Computing. In this arena, an amusing example of Machine learning could be:

 Me: I’ll test this smart AI with some basic maths! What’s 2+2?

 Machine: Zero.

 Me: No, it’s 4.

 Machine: Yes, it’s 5.

 Me: No, it’s 4!

 Machine: It’s 4.

 Me: Great! What’s 2+3?

 Machine: 4.

Machine Learning tends to ask what everyone else is doing – but some AI may be able to decide what to do for itself by extrapolation. There is clearly a nuance.
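
You can reproduce the shape of the joke above with a deliberately naive “learner” that memorises corrections and parrots the last accepted answer at anything unfamiliar – purely illustrative, of course:

  # A deliberately naive "learner": it memorises corrections and parrots
  # the most recently rewarded answer when it sees anything unfamiliar.
  class NaiveLearner:
      def __init__(self):
          self.memory = {}         # question -> corrected answer
          self.last_accepted = 0   # fallback for unseen questions

      def answer(self, question):
          return self.memory.get(question, self.last_accepted)

      def correct(self, question, right_answer):
          self.memory[question] = right_answer
          self.last_accepted = right_answer

  bot = NaiveLearner()
  print(bot.answer("2+2"))  # 0 - no idea yet
  bot.correct("2+2", 4)
  print(bot.answer("2+2"))  # 4 - memorised, not understood
  print(bot.answer("2+3"))  # 4 - "asking what everyone else did", hence the joke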

Deep Learning (DL)

This is an expansion of basic machine learning. Input is everything, and where traditional processing cannot utilise all of it, methods like deep learning can, because they progressively use multiple layers to extract more and more information. Human-based systems are easily saturated, distracted, and fallible, and traditional IT is really still an offshoot of this methodology, using automation and tools to make the task easier; augmenting human effort.
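
Here is a minimal sketch of “multiple layers” in action: a tiny two-layer network learning XOR, a relationship no single linear layer can represent. The layer sizes, learning rate, and iteration count are arbitrary choices, and this toy usually converges for this seed – it’s a sketch, not production code:

  import numpy as np

  # A tiny two-layer network learning XOR. Layer 1 extracts intermediate
  # features; layer 2 combines them - "layers extracting more information".
  rng = np.random.default_rng(0)
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer
  W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output layer
  sigmoid = lambda z: 1 / (1 + np.exp(-z))

  for _ in range(20_000):
      h = sigmoid(X @ W1 + b1)    # layer 1: intermediate features
      out = sigmoid(h @ W2 + b2)  # layer 2: combines those features
      # backpropagate the squared error, layer by layer
      d_out = (out - y) * out * (1 - out)
      d_h = (d_out @ W2.T) * h * (1 - h)
      W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
      W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

  print(out.round(2).ravel())  # approaches [0, 1, 1, 0]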

Deep learning needs as much data as it can get, however – which is how Big Data, and algorithms that can take billions of users’ data, can change how we work.

I remember many years ago working a data protection deal with the largest radio telescope in the world at the time (I believe now a precursor to SKA SA) in South Africa. They needed to back up the data they pulled in, but I believe (and these figures are approximate, from memory) that they could only process 1% of the data that the array pulled in, and that’s what we were looking to protect and offsite.

This was around 2008. Imagine that sheer quantity of data with today’s storage and ANN/DL capabilities. Given the right patterns to look for, you don’t NEED human intervention any more. I strongly suspect that when SKA SA comes online around 2027, it will be using Deep Learning and AI, if not ANN, to parse, categorise, and archive the bulk of that data for searching.

Deep learning and AI have the capability to be game-changers for how we analyse the world and advance; if we combine that with quantum computing capabilities, we’re starting to work out where the genesis of the godlike Minds of Iain M. Banks’s novels could come from, if we’re lucky.

 Or Skynet’s People Processing Services if we’re not. Personally, I would prefer the former!

Big Data

You can’t talk about AI without talking about the latest buzzword – and AI’s required input. The term “big data” refers to data that is so large, fast, or complex that it’s difficult or impossible to process using traditional methods. The act of accessing and storing large amounts of information for analytics has been around a long time, but when a certain level of complexity and data saturation is reached, false positives become an issue. (Another danger here is Big Biased Data; having as much data as possible may help reduce this.)
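
A quick simulation makes that false-positive point tangible: compare enough unrelated random series and “strong” correlations appear from pure noise. The series count, length, and threshold below are arbitrary choices:

  import random

  # Compare many unrelated random series; count "strong" correlations
  # that appear by chance alone. All parameters are arbitrary choices.
  random.seed(42)

  def corr(a, b):
      n = len(a)
      ma, mb = sum(a) / n, sum(b) / n
      cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
      sa = sum((x - ma) ** 2 for x in a) ** 0.5
      sb = sum((y - mb) ** 2 for y in b) ** 0.5
      return cov / (sa * sb)

  series = [[random.random() for _ in range(20)] for _ in range(200)]
  spurious = sum(1 for i in range(len(series))
                   for j in range(i + 1, len(series))
                   if abs(corr(series[i], series[j])) > 0.5)
  print(spurious, "apparently 'strong' links found in pure noise")

Expect a few hundred “findings” out of nearly 20,000 comparisons, every one of them meaningless.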

AI variants thrive on this data, needing it to make decisions and produce better results – so now Big Data is on many lips too.

No alt text provided for this image

The supporting structures

AI requires connectivity, integration, and a variety of data; the ability to comprehend rather than merely operate on statistics, failure incidences, and calculation; automation, IoT, and potentially IoU.

These are much more prevalent today, and more importantly the human supporting structure is there; AI/CC is now proven for certain deliverables.

Without these, AI alone isn’t able to affect much. It requires input, and constraints to act against; indeed, I believe this is why the initial fanfare around AI some years ago was premature. It wasn’t in any way ready (AI has only just beaten professional human players at StarCraft II, which is totally different to calculating chess moves in advance and requires nuance), and more importantly the surrounding structures weren’t ready either – which I have long considered integral to its success.

Internet of Things (IoT)

One of the things any organism – artificial or otherwise – requires to learn and grow is feedback to stimulus; information. And in terms of AI, this is likely to amount to as much connectivity and data as possible to carry out its tasks.

IoT is therefore interesting because it offers the opportunity both to optimise our lives and to learn frightening amounts of data about us. This is already being massively misused by humans – examples being Facebook or Amazon, using AI and algorithms. Could this be worse with a full AI entity? That depends on what its purpose is, and whether humans have access to all the results (initially at least, the answer is probably “yes”).

What I find fascinating is the potential here for IoT to act somewhat like a Peripheral Nervous System to an ANN’s CNS (the Neural Network and its immediate supporting structures). Facebook does this in a rudimentary way with mobile devices; Siri, Cortana, and Google AI exist; Amazon also uses Alexa and analogues.

Special mention: Internet of Us (IoU)

And now we come to something really interesting. What happens when humans integrate into this? And I mean really integrate?

Jowan Osterlund has done some fascinating, groundbreaking work which I’ve referred to a number of times regarding the conscious biochipping of humans with full control over the composition, sharing, and functionality of the data involved.

This has amazing potential, including ID and medical emergency information, and giving full control to the owner means it can be highly personalised. And therein may lie a weakness for us as well as a strength, as far as AI is concerned.

There’s currently no way to track an inert chip like that via GPS or our contemporary navigation systems; however, AI integration could potentially chart the almost real-time progress of someone through payment systems, IoT integration for building security, even medical checkups where human agencies couldn’t and wouldn’t.

On the other hand, the potential for human and AI collaboration here is immense. Imagine going into Tesla for an afternoon with one of Jowan’s chips implanted in your hand, and coming out with it programmed to respond to the car as you approached (assuming no power source were required for the fob – which I believe currently it is). Your car would unlock because it’s you.

That’s fantastic, but also open to vast and dangerous potential misuse by humans, let alone AI. Cyborgs already exist; they just aren’t quite at the Neuromancer stage yet, and neither are the AIs (or “Black Ice Firewalls” – Gibson is recommended reading!).

No alt text provided for this image

Stories Vs Reality

I think there is definite value in reading Sci-Fi and looking at how people imagine AI, because we’ve already seen life imitating art as well as art imitating life – and there are so many narratives of AI, from the highly beneficial to the apocalyptic, that there is something of warning or hope across the board. This can help us take a balanced approach, perhaps – but it needs to be tempered by reality.

Our stories of AI gone awry – rooted in a deep fear of the usurpation of humanity, and its subsequent, rather violent destruction at our hands – and the other myths we surround ourselves with might not reflect well upon us should a learning ANNI come across them unprepared. We simply don’t know how, or even if, any of this data would be taken in.

The Dangers of AI

AI as a tool has a number of worrying possibilities. It is developing so fast that the danger that we ourselves will not adapt in time is real; additionally, we need to balance job losses with new roles around the new tech, which is exponentially faster and more disruptive than the physical and hybrid processes that came before. If massive numbers of people lose jobs and we don’t find a solution, that is cause for real concern.

Of course a tool can be used for good as well; but AI is a dynamic tool that can potentially learn to think and change. Some very smart people, including the late Professor Stephen Hawking, have been concerned about the dangers. There are some great examples here (https://futurism.com/artificial-intelligence-experts-fear), as well as a few worrying instances recently:

Tay was a bot aimed at people aged 15-24, designed to better understand their methods of communication and learn to respond. It initially held the language patterns of a 19-year-old US girl. Tay was deliberately subverted; it lasted only 16 hours before removal.

Some users on Twitter began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet… as a result, [Tay] began releasing racist and sexually-charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay’s misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior.

Within 16 hours of its release, and after Tay had tweeted more than 96,000 times, Microsoft suspended the Twitter account for adjustments, saying that it suffered from a “coordinated attack by a subset of people” that “exploited a vulnerability in Tay.”

(Source: Wikipedia)

Another AI has been designed to be deliberately psychopathic. Norman – MIT’s psychopathic AI – was designed to highlight that it isn’t necessarily algorithms at fault, but the bias of the data fed to them.

The same method can see very different things in an image, even sick things, if trained on [a negatively biased] data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.

Norman is a controlled study to highlight these dangers, and in fact there is a survey available to help Norman “learn” to fix itself – but imagine if the code was leaked, or elements of Norman were somehow used by other AI to learn. This is similar to a cerebral virus – once it escapes a lab, it’s very hard to contain, so let’s hope it doesn’t (I’m not going to speculate on the results of any MIT robots being subject to this!).

A third – and in my opinion the most damaging – example is Facebook’s algorithms twinned with their data harvesting and manipulation practices. Fed by the deeply troubling, human-led Cambridge Analytica data, for example, this not only expands on the above issues but adds the wholesale manipulation and misinformation of large portions of populations around the globe. The intent may be to make money for Facebook (and they still show little restraint where this is concerned, especially politically and with a bent towards specific leanings, which continues to alarm many), but the reality is that these algorithms display a great understanding of making humans do something en masse. This is now changing politics. Humans are wired to tap into fragmented narratives because of how our mental patterns evolved; we should have had checks in place on social media before implementing these measures. We don’t, and that’s likely deliberate where corporate profit is concerned. It’s alarming enough when humans direct this – what if an AI changes to the point that it’s taking this decision itself?

AI also ties directly into digital warfare and the removal of the human condition from life decisions – terrorism, drones, vital infrastructure disruption, and more, all directed by humans – and the errors could be as damaging as the aims. If we enable this, and AI decides to include more than our directives, it becomes doubly problematic.

Bear in mind that in many instances of machines not working as expected, especially computers, the root cause is almost always human error – mistake, misunderstanding, or lack of foresight. We are, and will be, complicit in mistakes made by and of AI as we move forward, so we must take care how we step. We can’t actually predict any of this, only simulate it, because an AI in the wild would be totally alien to us. It would not have any recognisable humanity. And therein lies a danger – or not. It could potentially see us as a threat, or be utterly indifferent to us; it might not understand us at all. Add in that many people find it amusing to deliberately warp the process without care or thought for consequences, and there is genuine cause for concern – and something that will skew things further.

However, even with all of this, these cases have still been directed or influenced by humans. Extrapolating further, it’s hard to project what AI-derived AI would be like. A lot of this depends on how it’s approached by us. It’s possible that sentient AI could be, in human terms, schizophrenic, isolated, sociopathic, psychotic, or any combination of these. It’s equally possible that these terms simply don’t apply to what stories love to describe as “cold, machine intelligence”. Or perhaps we’ll go full Futurama or Star Trek and install “emotion chips” to emulate full empathy. It’s hard to say, but I think it goes without saying that we need to step sensibly and cautiously, and not simply focus on profit and convenience.

My own concern isn’t so much what an innately innocent self-determining AI would do; it’s what an AI would do at the behest of the creatures that created it – and who often crave power without caring about others of their species. Instill those attributes into an AI, and we have some of the worst elements of humanity, along with an alien lack of compassion.

It’s a fascinating field of study and projection with a deep level of complexity, and we know only one thing for sure: whatever we do now will have unforeseen and unintended consequences. This is where Cynefin is really important; we need to make our AI development safe-to-fail, and not attempt “failsafes”.

(I’ll be writing an article on safe-to-fail vs failsafes another time).

No alt text provided for this image

Looking ahead…

Much of this is in the future; in terms of human replacement, AI currently sits behind automation, which has a good head start. For now, AI is set either to augment human thinking or to analyse it – not to completely replace it.

No alt text provided for this image

Even in some of the better future stories I’ve read, such as Tad Williams’s Otherland series, the AI capability still requires gifted human integration to be truly potent, and we’re probably going to be at that level for some time (albeit in a less spectacular fashion).

So ends this interesting exploration of AI and linked fields. Some of it no doubt sounds far-fetched, and I have obviously read my fair share of hard and soft sci-fi as well as real-life research and study – but the truth is we simply don’t know where we will end up; we can only simulate, at best. We must tread emergent pathways cautiously.

“The real worry isn’t an AI that passes the Turing Test – it’s an AI that self-decides to deliberately fail it!”

I hope this has been a useful exploration of the disruption of AI and its impact on the market – keep your eyes out for Rise of the Machines Part II, where we also delve into Automation.

The Rise and Fall of the Bureaucratic Empire – Part II

In Part I of The Rise and Fall of the Bureaucratic Empire I explored where Bureaucracy came from, how it’s been applied to corporations, and why it’s having issues – if you haven’t read it yet, it’s a good idea to look through it first!

Now in Part II, I want to explore some examples of why moving away from Bureaucracy can work, alternatives such as Entrepreneurship, and how we can begin moving over and scaling.

Remember, as ever – this is exploratory, but it is being done by consultants and self-awakening organisations, and it’s being done successfully.

The hindrance of Bureaucracy – and an answer

A great example of how bureaucracy can be problematic has been shown time and again with road rules; certainly in the UK we have very rigid, strictly set road laws for driving and parking, and adults driving 1.5+ ton death machines break them regularly, despite in most cases knowing better.

Research has shown that road laws and regulations make roads safer and more efficient – up to a point.

No alt text provided for this image

Research has also shown that too many of them make roads less safe and efficient. If you overconstrain, people fall back on reliance on the rules and operate in a less accountable fashion. I see far more dangerous speeding and aggressive, entitled driving in the strictly regulated UK than I do in, say, the Philippines (which can also be dangerous, don’t get me wrong!).

Greater car safety and performance than ever lulls us into driving closer to our limits, negating those advances (we drive closer, faster, with less attention); road rules for everything remove accountability, allowing us to simply follow them without consideration (and at the same time, game the system); and expectation, convenience, and a combination of luxury and status induce entitlement and an attitude where we only care about ourselves (where we deserve more than other road users).

It doesn’t help that warning signs or markings in the UK require a KSI (Killed/Serious Injury) before they are put up, rather than common sense; if there are warning signs outside a school, it means a child has likely been harmed by traffic there. My local council saw evidence that cars regularly sped down my old road at up to 50mph through houses with children as a shortcut, but could not put up preventative measures unless someone was badly hurt first.

This is the insanity of heavy bureaucracy – requiring real justification rather than pre-empting issues in a human fashion. But it gets worse: in some areas, speed cameras are quite openly used not for accident prevention but for revenue generation, meaning people game the system instead of focusing on the safety of others. A cyclist in the UK who wears a helmet is less likely to suffer head trauma in an accident – but more likely to be in a bad accident, because cars assume helmeted cyclists are safer and pass much closer and more dangerously than they do around a cyclist without a helmet. Mental patterns and accountability, again.

So how did we become over-constrained? Well, bureaucracy creeps. Perhaps one person crashed somewhere, so a rule was introduced to prevent it happening again. The rule then spreads, often on the back of other harms, and you reach the point where its existence perhaps causes more problems than its absence would – but by then the rules are an institution, rather than applied within context and reason.

What’s really interesting is when you remove most road laws and introduce uncertainty, forcing accountability again. Cameras used for costly fines don’t reduce speed overall; but removing road markings cuts speed by an average of 13%, because drivers are less certain and more careful:

Behind this demarking lies the concept of “shared space” and “naked streets”, developed in the 1990s by the late Dutch engineer, Hans Monderman. He held that traffic was safest when road users were “self-policing” and streets were cleared of controlling clutter. His innovations, now adopted in some 400 towns across Europe, have led to dramatic falls in accidents. Yet for some reason Monderman’s ideas remain starkly uninfluential in the world of “big” health and safety, especially in Britain.

Monderman’s principle is that freedom to assess risk for ourselves is what makes us safer. Rules, controls, signs, traffic lights all reduce our awareness of our surroundings and thus our sense of danger. On roads, he said: “When you don’t exactly know who has right of way, you tend to seek eye contact with other road users. You automatically reduce your speed … and take greater care.”

This has also been seen in towns where all road markings and rules were removed. Traffic self regulated to drive safely and efficiently because suddenly blame and certainty of risk were fuzzier. This isn’t conjecture; it’s been tested.

By lessening the bureaucracy and making all drivers invested in safety and driving accountability – in other words, making the roads more of an ecosystem – efficiency and safety rise: exactly what bureaucracy wanted to achieve, and worsened. You’ll still get the occasional accident, but it’s not everyone gaming the system (note: this may not apply to people parking discourteously, but correct punitive action and processes help there).

These are clear, consistent examples of how bureaucracy inhibits humanity in just one area. A certain amount of hierarchy may be important; but with too much, the system is exploited or ignored and ends up acting against itself, and we remove accountability and form reliance. When we treat adults wholesale like children, they will act as such. Invest them and treat them as adults… and they will act as such.

Against all we’ve been taught, less bureaucracy doesn’t automatically equal anarchy, and that’s an important thing to understand. We automatically create systems and order our worlds; that’s what humans do.

Ecosystems

Ecosystems are constantly adaptive, learning, and reactive, so why do humans form bureaucracies? Well, I think part of the answer is the comfort zone, and laziness. Once we set up a structure to do things for us, it’s comfortable and we can focus on other things, or don’t have to expend so much energy. But as discussed before, this often ends up problematic when the structure itself begins to take precedence.

Ecosystems require a little more effort – or rather, investment – because everything within an ecosystem affects everything else; but the overall effort is less, because all agents quickly align to deliver value, so there’s much less friction, whereas bureaucracy is… well. It becomes a grind, and we often forget the majority of us are the grain. And I believe we as individuals gain far more personal achievement, worth, and value of our own from working within an ecosystem.

If Bureaucracy is focused on power, authority, and control, an Ecosystem approach is focused more on delivery of the value within the system. That value is the product, but also the people that make the company, the ecosystem itself. I refer back to my friend and colleague Liz Keogh, a talented consultant who does a great talk on how Value Streams are made of People.

No alt text provided for this image

In Cynefin terms an Ecosystem is complex, and doesn’t try to order complexity. The structure is emergent (what works), not categorised (what is forced); an ecosystem develops according to feedback, not initial dictation.

In Part I, we saw this:

No alt text provided for this image

So, if we moved to an Ecosystem, how would this work?

We don’t NEED to be sets of fish in different tiers of down-linked aquariums, where a single fouled pipe can cause problems all the way down the line – and, as we’ve seen, this actually creates more inefficiencies.

But put us all in a lake, and we develop a functioning ecosystem; the big fish still take up more space and dictate the culture and balance, because ecosystems conform to apex predators within the system.

No alt text provided for this image

If all fish are invested in the operation of the fishbowl, the success of all fish benefits all fish… and the whale!

This is a critical concept to understand – you can move out of a structure so focused on power, authority, and control that ego, policy, power struggles, and dehumanisation (amongst a horde of other issues) make business far less efficient… and still retain decision-making ability, structure, and gravitas.

An ecosystem isn’t chaos. There’s still hierarchy, but it is reactive, fluid, contextual, invested, and now it has enabling constraints, not rigid constraints. Ecosystems also self-regulate to some degree, and that’s something that (hypothetically at least) adult humans working together can do. For an example, I refer you back to the “towns with road rules removed” above.

Entrepreneurship – the reactive structure

When we speak of Entrepreneurship in business, we are usually speaking of two types: Institutional, and now Millennial (there are also many other sub-cultural and social types). Both have a number of perceived qualities: innovation in new ideas and business processes, strong leadership, people-management skills, and team-building abilities are considered essential. These are obviously not limited to startups and small companies – but are perhaps more common there.

Institutional Entrepreneurs are defined as collective and collaborative. Edith Penrose said that “in modern organizations, human resources need to be combined to better capture and create business opportunities.” Paul DiMaggio furthered this view, saying “new institutions arise when organized actors with sufficient resources see in them an opportunity to realize interests that they value highly”.

An entrepreneur is willing to take risks in the name of an idea, even putting financial aspects on the line and spending time as well as capital on what may seem uncertain ventures – but often they will judge the risk to be less than other people might because they have vision and drive, and often operate outside the Comfort Zone in the Optimal Performance Zone. They are adaptive and reactive enough to often mitigate a lot of the risks should they arise.

Millennial Entrepreneurs change this further by adding more qualities: far greater acceptance and knowledge of new technology, new business models, and a strong grasp of the business applications of digital media. They have less of a work/life identity split. They also face greater challenges – fewer of their generation are self-employed, but higher expectations from employers, the current economy, higher education debt, and several other factors mean that those who do choose to be entrepreneurs are focused, driven, and very aware of the new marketplace.

I usually find an Entrepreneurial mindset more initially accessible to coaching and mentoring, because it tends to want to learn, grow, and achieve a vision, not stay comfortable. There’s also a greater ability to take risks, more flexibility to change what doesn’t work, and less ego when something isn’t known or is required.

No alt text provided for this image

To be a successful entrepreneur requires creativity, accurate decision-making, and conceptualisation. Innovation is easier and quicker; adaptation and exaptation are easily committed to. Reaction times are quicker. Networking and system building is a true skill. The company has fast, direct and clear communication lines. Information is accurate. People are often quite invested, and there’s little or no gaming, sycophantism or cynicism because these can’t be hidden easily when everyone is involved.

It’s very difficult to simply switch from a massively ingrained, traditional approach like Bureaucracy straight into an Ecosystem, but Entrepreneurship has elements of both that could be considered an interim (or indeed endpoint). It requires a different skillset to Bureaucracy, which is often satirised via boring, methodical, rubber-stamp-obsessed, faceless automatons. In contrast, an Entrepreneur is often seen as energetic, dynamic, interesting, world-changing, and eager.

There’s a reason personable, likeable people often build companies and remain entrepreneurs, even when they take this into corporate-sized companies; you get the odd sour apple who still thinks like a bureaucrat (usually through ego and a false equivalence of years = better entrepreneur), but by and large these are people people, and they have a vision they need to enact.

That is traditionally seen as a problem as a company grows and “matures”; people automatically believe the company must then transition to become more “serious” and gain “corporate culture”, as if it’s a requirement to be taken seriously. I’ve seen it happen over and over, and often think… why did you lose what you had!? But it is possible to retain the culture of entrepreneurship. I’ve seen entrepreneur-style leaders take over SMBs/SMEs, or even large parts of large corporations, and still keep this culture – and be very effective.

So is an entrepreneurial mindset the best balance between pure ecosystem and bureaucracy, with just enough of the latter to appeal to everyone? It’s got some hierarchy, but a lot of the benefits of an ecosystem, and it’s very focused on human culture and delivery of value.

It’s certainly worth considering, and far easier to transition into from rigid structures.

Transitioning and Scaling Entrepreneurialism and Ecosystems

This also means moving from Shareholder to Stakeholder, because rigid hierarchy typically prevents and even discourages internal stakeholdership, and marginalises valuable outlier stakeholders. As mentioned by the Roundtable referred to in Part I… for a long time, the only matter of consequence has been shareholder profit, at the expense of all else.

That has now changed, but flipping from corporate culture in one go is just not possible in a large company. Instead, you have (at least) two important things that need changing to begin with:

  • Culture – how things are perceived. This is set partially by interactions between people, but primarily by the actions and inactions of leadership. Some of these are informal, “unwritten rules”, or as Dave Snowden calls them, Dark Constraints. If you want to change Culture, this needs to be from the TOP DOWN, and is holistic. You change it all for everyone, or none of it.
  • Processes – how things are done. You can change units or departments one at a time, because they are essentially small ecosystems of their own – they have their own intra-company culture, methodologies, and requirements. Much of this starts with the people and the mindset, using enough policy for direction but not enough to stifle innovation, delivery, and investment. The focus should always remain on the results, not the methods (within reasonable bounds).
No alt text provided for this image

So it is possible to scale a change through even a large company, but it requires a very interesting motion where you change from the top and the bottom simultaneously – two waves that cross through each other, if you like!

Google

Obviously scaling an ecosystemic, entrepreneurial approach is possible – at least in part. Google is, again, a classic example. A huge company with hierarchies, it nevertheless ensures its employees are invested where possible, given their own time, and judged on outcome vs output (results vs hours, for example), and as far as I know it still holds regular meetings for employees to highlight where it is becoming too bureaucratic, so the issues can be resolved.

No alt text provided for this image

Part of this successful structure is the fact the founder was never indoctrinated into believing bureaucracy was required, or the only way. Larry Page’s life has apparently been hugely influenced by four factors, according to interviews: his grandfather’s history struggling in the early labour movement; his education; his admiration for his hero Nikola Tesla (one of my most admired as well, as it goes); and his own participation in the leadership institute at the Engineering School of the University of Michigan.

He has further said that the hardships of his grandfather’s story encouraged him to make Google a totally different kind of workplace – “one that, instead of crushing the dreams of workers, encouraged their pursuit”.

If you look at the Big 4(/5) in tech at the moment (Facebook, Apple, Google, Amazon, and somewhat Microsoft), as well as past giants like IBM, or even smaller multinationals like Computacenter (who are still big!), you will mostly see the huge trappings of Bureaucracy. This is interesting for Microsoft, given the founding of some lean/agile principles there – but remember, hierarchy usually wins over time! Apple still retains some of its entrepreneurship in certain ways, but it has lost reactivity and innovation; Google is the most entrepreneurial of them all.

So it IS possible to do, to a point! I think as size increases it is inevitable that you will need some strong hierarchical structures to support facilitation, delivery, and decision-making; but at the same time, if you can view units as ecosystems in themselves, and the company as an ecosphere – an organism, perhaps – made of systems composed of invested people, you gain a better understanding of how it can work together. All areas of the company affect all others; traditional company management tends to view company structure as more modular, and is definitely not always as grounded in reality.

And that’s another key – decision-makers need to ensure they have time enough to decide well, using disintermediation for accuracy and grounding themselves by listening to everyone within and without, outliers included. There is a balance point between leadership needing to focus on decisions and not having time for irrelevance, and considering your time too important to spend long on it. Much of this comes down to trust, accuracy, and grounding in reality – something entrepreneurs are far, far better at than bureaucrats in my experience.

As I always, always say: there isn’t a templated, simple answer. It’s going to be highly complex and contextual. Each company, bluntly, will have to find its own way if it’s going to work. But that doesn’t mean people can’t advise and help guide that understanding and discovery. Again, that’s what I’m there for – to help a company sustainably understand this for themselves.

In fact, this is why I work holistically using multiple, contextual frameworks; this is like trying to untangle a huge knot, and pulling one part invariably affects another.

No alt text provided for this image

I hope you’ve enjoyed this exploration of what could lie beyond bureaucracy. If you need to talk more, or discuss engagements, DM or call – that’s what I’m there for.

But one thing is evident: like it or not, in the current market, with the current workforce and consumer set made from a new generation, and with 4IR so prominent in the next leap forward… Bureaucracy as we know it isn’t just struggling, it’s a dead-end beginning to grind to a slow halt under its own friction in many industries. It’s time to explore new emergent avenues of better business.

Let’s get Involved.

The Rise and Fall of the Bureaucratic Empire – Part I

What is the bottom line for your organisation? The main objectives as a CEO, Director, or other Executive?

Profit? Short term results? Growth? Long term survivability? Shareholder value?

Or is it no longer only one or many of these – as reported recently from the Business Roundtable, a group of ~200 CEOs from US firms – but now a move towards a more holistic, human and value-delivery approach? I’ve seen a number of posts on this recently, and I’ll delve into it more in future, but I wanted to look at this in terms of the transition from a longstanding tradition – and one I’ve worked to facilitate for a while.

The Harvard Business Review writes that this report “explicitly counters the view held for decades that the sole focus of a corporation and its CEO is to maximize profits. Corporations are, according to the new statement, accountable to five constituencies, of which shareholders are only one. Customers, employees, suppliers, and communities are the others.”

That is an incredible statement – and one which is both very welcome and in line with today’s growing expectations. No longer is one group profiting at the expense of the other four; now they are all stakeholders, and that’s an important perspective. But this is directly opposed to the original ideals of bureaucracy, which focuses on efficiency and benefit in only one area – and it still needs some work.

In this article I will explore business bureaucracy’s rise, its challenges, and how it’s failing; in a second article, I’ll look at what lies beyond and how we can not only move forward, but benefit hugely doing so.

As ever – this has a lot to it, but is by no means completely comprehensive. It’s an ancient and complex subject centuries in the construction.

The Challenges Bureaucracy Faces

There is an onus on leadership more than ever before: market change has accelerated to the point most companies can’t keep up; a new generation is becoming the majority of both workforce and customer base; markets are super-saturated with companies clamouring for differentiation; sustainable innovation and disruption have taken a notable dive in recent years; and people are overwhelmingly dissatisfied with being a component and being taken advantage of. Traditional management models are failing both companies and employees. Individualism is being re-realised. People are demanding change and making their demands known across social media and business. A large number of organisations are stuck in the Cycle of Woe, some refusing to even admit there is a problem, and many entire industries are struggling (areas of tech, retail, banking, and more).

No alt text provided for this image

In The Decisive Patterns of Business I explore some reasons Leadership is facing so many challenges and what they can do to be mindful of mitigating them – so there are all the mental patterns, time limitations, and increases in complexity in business to factor in as well. I also suggest 3 ways you can immediately enhance leadership and a way you can make more accurate leadership decisions. All of these things (and much more) are intricately interlinked; this isn’t an easy puzzle to solve for business, nor is it one you can do from within the frame of reference.

All this is just the start, and I’m sure any Executives/Directors reading this can agree and/or add to many of these issues.

So – how do we fix it?

Many of my articles speak about the 4th Industrial Revolution (and we could be in the 6th depending on your definitions), the challenges faced, the implementation of fads, the adherence to older and ineffective models of management (process engineering, systems thinking) past where they are suitable, finding coherency in complex situations, and much more.

You can, for the purposes of this article, boil things down to this: we need to find ways to make organisations deliver better value, get better returns, and be leaner; act in a more (contextually) Agile fashion where appropriate; divine where they really are as a system in each situation and react appropriately – not just demand or rely on one buzzword framework, but use multiple frameworks in context; and rediscover the humanity of the people who are both our value and our assets. We need to move to being inspirational leaders, not instructional bosses, because the acceptance and effectiveness of the latter is fading. We need to realise the power of making everyone a stakeholder for the business to achieve its full success potential naturally – to reinvest in culture and success being mutually beneficial.

Traditionally, bureaucracy requires a rigid hierarchy adhered to as below, in descending order of size (or perceived importance) and a downward flow of strategic information/instruction. If a pipe is fouled or blocked, problems occur (and they block easily!).

No alt text provided for this image
(Note the outlier/maverick/heretic on the bottom right!)

Ok, give me one easy step that can enable all the above!

Those familiar with my work will know by now there is no “easy recipe” for success – it’s ALWAYS contextual. But that said, there is one thing we can do to begin exploring the facilitation of the above:

We can re-evaluate the bureaucratic approach and the strict hierarchies within it, and begin the move to a more entrepreneurial approach via ecology.

I’ve been told before that this is quite radical, and I guess it is, but that doesn’t make it less advantageous. The fear of change, and the focus on “doing things this way because they always have been” are both powerful suppressants to changing for the better.

This is meta-innovation, if you like – the ability to innovate abstract structures, not just to conceive a new product – and this type of innovation is arguably more critical to long-term survivability for a company.

Why does this solution make some C-suite executives uncomfortable?

The ecosystem approach is often perceived as a lessening of power, of decision-making, but of course that isn’t the case. Look at a company such as Google, which aims to reduce bureaucracy where it can and invests in its people’s investment; Google is considered powerful and fairly effective in terms of business.

I recently had a professional tell me that bureaucracy must exist because that’s the only way you access leadership as a consultant – but to me, that bespeaks an avoidance of a necessary paradigm shift, instead working within a closed loop that will continue to shrink. Hierarchies don’t need to be a totally rigid structure for people to function – in fact, the opposite has been proven true, and long-term, rigidity is problematic for stakeholders.

It’s also worth noting that hierarchy doesn’t require bureaucracy. Hierarchies can also self-regulate or define their own structures based on composition. For example, a professional team of people know their places, have individual investment, and deliver as optimally as possible; this doesn’t have to be directed to the nth degree. Ecosystems take this further, and automatically regulate themselves around the decision makers – or apex predators – within them. In both these examples, over-constraint affects the whole system negatively.

In Part II, I’ll give a great daily, ingrained example of how ecosystem is more effective than bureaucracy, and I’ll expand on the aquarium analogy, but for now let’s focus on what a bureaucracy really is.

Defining Bureaucracy

Bureaucracy as a concept is ancient, because at its core it is rigidly hierarchical. Wherever humans wished to control other humans via systems, it existed; religiously, politically, profitably, sometimes all three at once. Via policy and tradition, it was (and still is) established.

It isn’t just hierarchy; hierarchy is a complex and fluid structure within a system dependent on any number of contextual ideas. Bureaucracy is a further constraint via human management; “any system of administration conducted by trained professionals according to fixed rules” is more or less the current definition.

The German sociologist Max Weber argued that “bureaucracy constitutes the most efficient and rational way in which human activity can be organized and that systematic processes and organized hierarchies are necessary to maintain order, maximize efficiency, and eliminate favoritism.”

This was seen as a logical end result of administering a hierarchy, but it leaves out the question of what, or whom, the hierarchy is benefiting.

However, Weber apparently also (rightly) saw unfettered bureaucracy as “a threat to individual freedom, with the potential of trapping individuals in an impersonal ‘iron cage’ of rule-based, rational control” – something we’re now seeing as a widespread mindset after decades.

Humans naturally polarise and go to extremes, especially if comfort can be attained by doing so. We form mental patterns and follow traditions once established because it’s simply always worked like that. Changing those traditions then usually requires either chance or a vast upheaval.

The Rise of Bureaucracy in Business

What we now usually mean by the word Bureaucracy, in business at least, is Corporate Culture (and in modern times, politics and religion have both taken on many of the trappings of business, especially in a world driven by neoliberal ideals of profit). It goes back a couple of hundred years to the foundations of Taylorism, which has had enormous impact on the modern business mindset. I talk about Taylorism a lot in my work, so this matrix should be familiar:

Knowledge Management Matrix

Frederick Taylor is often called “The Father Of Management Science”. He was a mechanical engineer who essentially created the idea of Process Engineering – the reduction of workers to a dehumanised component level. The idea was that by removing the unreliable and perceived lazy human aspect, you made a process more efficient – something we can also achieve today with automation, but which was not available then.

Somi Arian wrote a great article on this recently here, so I don’t want to rehash it – go and check it out, it’s got great information (and she does some amazing work on AI and Millennial Culture, so check that out too).

Process Engineering still requires human skill and judgement, so it wasn’t totally dehumanised. What is interesting is what happened when Systems Thinking was created in 1956 by MIT Professor Jay Forrester. It was originally designed to improve the understanding of more complex systems by looking at how the agents within them interacted as a whole. A largely social construct at first, it quickly became an effective way to explain the aspects of business that Process Engineering couldn’t define, and to broaden management science in general. However, Systems Thinking removes all human judgement; it operates on prediction and outcome-based measures, very often (in business at least) relying heavily on a perfect goal and forecasting – neither of which necessarily reflects reality.

The other problem with predicting humans in business structures is that companies invariably then require those humans to follow these predictions. This, as you may be aware, is neither how predictions nor humans actually work!

A Delicate Balance

So now we have what makes up the majority of the Modern Management approach in the corporate culture of bureaucracy – a sliding scale between these two systems. One is rule-based, one is heuristic-based; one removes human variance and demands constant full output, one removes human judgement and metricises for prediction. Henry Mintzberg’s 10 schools of Strategy lie between these, 3 in the former and 7 in the latter (read The Red Pill of Management Science for more information on this!).

A problem here is that bureaucracy has mostly slipped into the worst parts of each over time: the dehumanisation, the over-constraint, the metricisation, the pure focus on outcome-based measures – all whilst preserving strict hierarchy and trying to marry it to a somewhat contradictory wish to care for employees and give them a voice. It’s akin to mixing oil and water once you add real people, who operate on systemic, social, and individual complexity.

Something has to give somewhere, and the hierarchy inevitably wins. Add to that the perceived threat of automation, and 200 years of conditioning people to believe they should be paid for hours, take pride only in their skills, and give their all to the company. Combine that with the drastic recent shift in generational mindset, market orthodoxy trophic cascades, the downward dive of innovation across industries, and the awakening of individuality, and we find the system isn’t fit for purpose any more – and it never really was; it just worked well enough at the time.

The world has moved on. The multiply-complex new age of business doesn’t work like a steel mill 200 years ago. It’s time we acknowledged this, and considered alternatives.

The Problems We’ve Been Ignoring

So many companies are struggling, mired in red tape and politics, that it’s obvious the system has become more important than the result, despite the desired outcome being profit. Even the Business Roundtable report, encouraging though it is, will have to address the fact that business has been set up for decades around a short-term profit mindset for the C-suite to fulfil – that has been the aim of bureaucratic corporate culture for so long now that it’s “tradition”.

Bureaucracy can have its place, just as ideals born from hierarchy and strict process such as Waterfall are still applicable in certain instances. But it’s been globally applied out of context and in general for decades. In the vastly more complex landscape today, it’s failing us all, both individually and organisationally.

Think of the recessions; the rise in suicides from work pressure reported globally over the last 100 years; the gaming behaviour, sycophantism, and cynicism that we now take for granted; the exponential increase in burnout; the sheer inefficiencies, where we expect not to get results because of the hierarchy. Even those thriving off bureaucracy use the word as an epithet for not getting anything done! How many startups fail to cross the chasm or grow past an initial point of entrepreneurship? How many companies emulate a phoenix – growing, purging, growing, purging? How many feel both trapped by, and accepting of, forever living in the Cycle of Woe?

No wonder the next natural steps are political maneuvering, empire building, selfish personal ambition, gaming the structure for personal gain, cynicism, sycophantism, and a type of Cobra Effect – where the new language and the appearance of following a certain message are used, but only as a veneer that allows continued operation of individual ambition, and micromanagement in which a boss spends so much time ensuring their authority is maintained that very often they are not adequately doing their own role. These go hand-in-hand with harassment, isolation, and divide-and-conquer island-creation, all to maintain often tenuous control just long enough to deliver short-term profit at the expense of long-term reputation and survivability.

The extreme ends up being a company that is locked rigid in over-constraint, zero humanity, checkboxing, and demotivation in an excessively toxic culture. Nothing gets done properly, but at least it doesn’t get done properly in triplicate and everyone’s arse is covered. I’ve both worked with and worked for companies like this; they exist.

This is not delivering the maximum value that a company can deliver. It is not efficient, or healthy for individuals. This is what I work to change.

Bureaucracy across Business is a Dead End

Or to be more blunt, it’s dying as a fix-all concept, and taking a lot of profit, individuals, and organisations with it in its death throes.

But it doesn’t have to. We’ve known for decades that strict adherence to hierarchies is damaging to people as well as inefficient. And what’s interesting to me now is that emerging powerhouses like India and other areas of Asia are transitioning from a heavily Process Engineering outlook to awareness of these newer methodologies far more rapidly than the West, where bureaucratic corporate culture largely originated. I’m seeing more interest in my skillset from executives there than I am in the West at the moment, and when it changes, I think it’s going to be a huge and sudden ripple effect. The hidebound West stands a good chance of being left behind in efficiency, ethics, and value delivery overall.

The emphasis is on delivering value, and it always has been; only the meaning has changed here. “Value” for decades meant shareholder profit. Now “value” means stakeholder satisfaction, not just in terms of profit, but also ethics, sustainability, work/life balance, end product quality, as well as other contextual elements.

Bureaucracy falls readily into the trap of general application, and is very hard to balance effectively. This is why it has finally begun to be phased out in many areas of business, allowing more reactive, delivery-focused, long-term, human, and sustainable structures to grow, drive business, and return us to true innovation. Agile, entrepreneurship, ecosystems, human individuality – all have their place in this transition.

The market no longer supports a sprawling bureaucracy. The trouble is, bureaucracies and their keepers are so slow to react that they don’t yet realise this, if indeed they even recognise the signs of trouble at all. That in itself is reason enough to explore different avenues of management and business; add the need of humans to be human again, and to be able to work on the environmental, ethical, and social concerns traditionally exploited by corporate culture, and the need for change becomes desperate.

We’re past weak signal detection by this point; we’ve reached change-or-die territory for many bureaucratically-structured companies, and most of them can’t even see it because they’re in it.

That’s why you hire people like me!

What is to come

So this article explored Bureaucracies a little. In the second part, we’ll look at why ecosystems are better, some examples and other structures, and how we can begin moving over.

If you’re a C-level or director looking to facilitate this change – DM me! We’re at a point where we need to have conversations around this.

What are your thoughts on bureaucracy? What have you seen or experienced? Where do you think we need to go?

The Trouble With Coaching In Wonderland

We’ve all been Alice – especially if you are, like me, a coach, advisor, or mentor.

We work very hard striving to develop methods, platforms, frameworks, theories, and practices that provably enhance the lives of businesses and individuals. Many of us live in and amongst them – but like any area of expertise, this can actually make us prone to mental patterns and inattentional blindness.

We give others very good advice, but we don’t always follow it… even when we know better! Sometimes, we work so hard we forget to apply it to ourselves. This can be because we’ve run out of processing space, are too focused, or just plain forget.

Analogue beings in a Digital Age

However much we love the idea of everything digital, the simple fact is that humans themselves are analogue. We aren’t on or off. We’re a sine wave of efficacy, and move through it day to day. The way we operate requires constant refinement. The way we learn is analogue, too (there are some articles forthcoming on elements of this).

Part of being human is the fact we are, well, human – and not perfect. It’s what can spark such innovation and repurposing. I am trained in working out, but will find that sometimes I don’t keep strict form, or don’t focus where I should – even though I pick up on it quickly in people I’m training. I eat bad food, or I fail where I should know better. I don’t always warm up my voice for events, despite being a singer and knowing better. I don’t always find mindfulness before events, even though I know better! I don’t always apply frameworks to what I’m doing where I should. I talk about looking for new opportunities and serendipity for learning every day, and I follow it… most of the time. Sometimes I can’t. Sometimes I don’t. I try to more often than not.

Inner focus is always much harder than focusing on others, because we’re always looking outwards from the inside. The difference is, experience can mitigate this quickly.

As a coach, an advisor, a mentor, it’s critical to practice what you preach. But it’s just as important to remember we don’t do everything perfectly. No one operates at 100%, 100% of the time. No one is a machine that downloads something and then does it perfectly ever after.

Tempting though this is, it means we focus on the end point, not the journey.

It’s really about incorporating knowledge and refining and using it more consistently over time. True progress is measured in evolution, not plateaus of static achievement.

Learning to apply it when it matters is more important than being perfect and applying it 100% of the time, because the second is not realistic. Don’t focus on unrealistic goals.

Be Kind

Be as kind to yourself as you are to others, especially if you are a coach, because you’re prone to overworking – voice, mind, body. This is part of self-care, and of acceptance. Of exposing vulnerabilities. Of using mistakes to become even better at what you offer others.

There is a huge shift now in businesses and people preferring to see us as human too, because when they see these vulnerabilities, it makes us more likeable and approachable, and builds better interpersonal connections. We are our brand now, in this modern, fast-shifting, latter-millennial generational market; we’re more than a service, we’re a package. That includes the imperfections which make us human, help us learn, and make us approachable.

Part of re-realising humanity in business, bringing the human back into HR, is not only celebrating and benefiting from individual strengths in a collaborative ecosystem, but recognising that we are, well… only human.

When that happens… don’t be Alice. Don’t be too hard on yourself.

Do you want to make more accurate Leadership decisions?

It’s surprisingly quick and easy:

Don’t just make a decision based on someone else’s summary.

This seems obvious, but in fact it’s what a large number of leaders and upper managers do – with less time to make more decisions, it is very common to require management underneath you to summarise meetings, events, or data so that you can make a quick decision without spending time you don’t have reviewing the minutiae. Everything from major company direction to internal cultural decisions is usually decided in this way.

Why is this a problem?

Because when somebody else summarises for you, they do two things:

  •  They remove data they don’t consider valuable, but that may well be needed. This is reductionism.
  •  They interpret the data for you, which can shift its context away from the truth.

Both of these can be a big problem. To put it in perspective: Management can also be called Intermediation, and it’s easy to forget that management has a chain effect. You might trust the manager who reports to you – but do you know and trust everyone all the way down to the data?

This is something I call Decision Resolution.

Intermediation

Not all intermediation is bad. Managers are there precisely because they need to manage aspects of the business or people on behalf of leadership, and interface with leadership on behalf of those aspects or people – the second part being something I have seen a number of managers unfortunately pay less attention to.

It’s also possible for management to accurately aid decision making, if they know their leadership particularly well or work very closely with them in specific instances, but I would say this is an exception rather than a rule, especially in larger companies – and the larger the company, the bigger this problem becomes, not least because summaries are very easy to dash off quickly and lazily, for example in bullet points. And if senior managers are very close to leadership, there is a chance they will be affected by the same inattentional blindness I mentioned in The Decisive Patterns of Business.

(Quick aside – how many times have you noted a summary in bullet points and later gone back to find you don’t remember all the context around each one? Don’t worry. We’ve all done it.)

The more managers there are between you and the raw data for a decision, the more likely it is that intermediation will summarise, reduce, and interpret that data – usually presenting it with little context.
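
To make the compounding effect concrete, here’s a minimal, hypothetical sketch in Python. Nothing in it models a real reporting chain – the data, the “keep the interesting half” rule, and the four layers are all invented for illustration:

```python
import random

# Invented raw data: 1,000 observations from "the ground".
random.seed(42)
raw = [random.gauss(100, 30) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

def summarise(data, keep=0.5):
    """One intermediary: pass up only the half they judge worth reporting."""
    return sorted(data)[int(len(data) * keep):]

layer = raw
for level in range(1, 5):  # four layers of management between data and leader
    layer = summarise(layer)
    print(f"after layer {level}: n={len(layer)}, reported mean={mean(layer):.1f}")

print(f"true mean of the raw data: {mean(raw):.1f}")
```

Each individual summary looks locally defensible, but by the top of the chain the “mean” being reported has drifted a long way from reality – reductionism and interpretation compounding exactly as described above.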

If – especially as a leader or senior manager – you want to make an accurate decision with long-term reliability, you need to do it based on slightly different data than the traditionally-presented set, with better decision resolution.

Granularisation

Instead, try making a decision based on a small chunk of RAW data. What does that mean? Well, here’s an example in visual form (it could be any actual decision):

Let’s say you are a leader, and you have to make a major decision based on the wheels in this picture:

No alt text provided for this image

It could be size, shape, colour, composition, how they should complement the car, perform, whatever. This is the big picture. You feel you don’t have time to inspect the whole car, because you have a mountain of other decisions for other areas, and all you care about are the wheels.

So you trust intermediaries to prepare the data for you to make an informed decision. This is where problems arise, because complex issues are involved: politics, competency, the number of levels of management, and so on.

This unfortunately includes agendas, where summarised data is manipulated to encourage a decision beneficial to the summariser. An example? I’ve seen senior sales management swear blind they need a feature for a deal, doing everything they can to summarise positives and position deals to persuade leadership it must be developed… only for it to emerge that the company has embarked on an urgent $750k R&D project for a couple of potential $50k deals – three-quarters of a million spent to chase perhaps $100k of revenue. I don’t have to point out how unhelpful that is for a leader.

So with our car visualisation, several layers of management deciding what is important and passing the information up to increasingly busy seniors might well be coloured by judgements like “the car colour isn’t important”, “the wheels are all this size, so we can worry about other aspects”, or even “if we make it generic enough, they might pick wheels that will work better on another car we prefer”… and so forth. Eventually, the picture may emerge to leadership a little more like this:

No alt text provided for this image

Right… so, the general shape is there. The wheels are in the right place. Nothing seems out of order, per se. But there’s no context. No colour. The decision resolution is so low at this point that the decision-maker can’t really tell – but since they have no basis for comparison and they’ve been given a basic set of data, they make what seems to be an informed decision.

If you were to make a potentially critical decision, which picture would you rather make it on?

Right, but we don’t feel we have time to scrutinise the first. So instead, what’s better than reducing and interpreting the data is this:

No alt text provided for this image

Ok, here’s the wheel. Original data. We can see it very clearly. It still lacks some context, but we can be more confident that a decision will be more accurate.

This is granularisation instead of reduction. All the data is there, but you’ve chopped it into small chunks.

Now this might not be enough for you to make a decision, but you can add more chunks until you have enough:

No alt text provided for this image

The more data there is, the better your decision will be. The critical point is that the data has not been changed – it’s just a smaller portion of the whole. This means that when you make a decision, it will be accurately based on real data, with real context – not on what somebody else decides is the correct data.

If you don’t feel you have enough raw data for the decision, add more until you do.

No alt text provided for this image
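
For those who like something runnable, here’s a minimal sketch of the same idea in Python. Everything in it is hypothetical – the deal records and field names are invented – but it shows the difference between a lossy summary and serving the untouched data in small chunks on demand:

```python
# Hypothetical raw records - in reality these could be meeting notes,
# deal data, metrics, or any other decision input.
deals = [
    {"customer": "A", "value": 50_000,  "needs_feature_x": True,  "stage": "early"},
    {"customer": "B", "value": 50_000,  "needs_feature_x": True,  "stage": "early"},
    {"customer": "C", "value": 900_000, "needs_feature_x": False, "stage": "late"},
]

# Reduction: an intermediary collapses everything into one lossy line.
summary = f"{sum(d['needs_feature_x'] for d in deals)} deals need feature X"

# Granularisation: the same raw data, served in small, unmodified chunks.
def granular(records, size=1):
    """Yield untouched slices of the raw data, smallest first."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

print(summary)                # context (deal size, stage) is gone
for chunk in granular(deals):
    print(chunk)              # full records, full context
    # ...stop requesting chunks once confident enough to decide.
```

The summary isn’t wrong, but it hides the fact that the two deals needing the feature are the small, early-stage ones – exactly the context the $750k example above turned on.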

Instead of expecting a chain of managers to summarise the points in a meeting for an overall decision, perhaps pick one person and ask them to specify only one point, in detail, with the rest ready if needed (and I speak elsewhere about the value of using narrative and example for this, not just parroting bullet points). For a decision, don’t invite too many chefs; ask one specific person to ready the data in a granular fashion. It’s a little more work than summarisation, but for the purpose of an accurate critical decision… shouldn’t it be?

Less intermediation means more accuracy. It has long been known that too much middle management can interfere with company running and value delivery – there is a point of diminishing returns with delegation of authority within a hierarchy, and that is an important modifier. Gaming behaviour, politics, jobsworthmanship, red tape, and many other symptoms of bureaucracy can result (I speak about these a lot as well!).

This also means leaders not falling into the understandable trap of saying there is “not enough time”. You still need some understanding and oversight of what you are making a decision about, of course, which means making appropriate time. If you don’t have that understanding, and are simply relying on whatever people tell you (outside a completely trusted relationship) to make decisions quickly… perhaps you shouldn’t be making that decision.

Obviously, there is a balance between accuracy and time spent on the decision, so this is about learning how to refine the decision-making process, not reducing it.

And a final consideration – listen to the subject matter as well, not only your own authority. The data is telling you what possibilities there ARE, instead of you – or someone below you – trying to force possibilities into the data.

Why is accuracy important?

I keep talking about the acceleration of business today. And it’s true – it’s faster than ever. Half the problems companies are facing exist because their old management structures, hierarchies, and engagement practices simply can’t keep up.

So speed matters! But accuracy matters just as much. Take too long to make the right decision and the opportunities pass by. Make decisions so quickly that they aren’t accurate to the situation, and they will still pass you by – or worse, damage you.

Measure twice, cut once – speed of decisions is not the only deciding factor in business! Accuracy and ability to change those decisions based on constant feedback must also exist.

Balancing this will help you choose wisely in the appropriate time.

The Professionals Part I: Defining Professionalism

I have been considering professionalism a lot recently. This can be a grey area that is both individual and role-based, and one that is supported or suppressed by culture, company culture, and industry. The definition has been relaxing and changing in recent years (which I think is for the better), but it still retains defining principles and individual requirements.

We don’t often think about what professionalism is, even though we use the word constantly, so I wanted to explore some thoughts on what it’s defined as being, how it’s seen, and how it’s changed. I’m also largely focusing on professionalism as a generic business term, though I’ll touch on other aspects.

(For the purposes of this article, I’m not going to quantify what amateur means, as that can be wildly variable!)

Defining a Professional

Loosely, this is the conduct, aims, or qualities that characterise or mark a profession or a professional person that are usually pursued for gain (as a career path or for personal profit).

Expanding on this in application, a professional is someone with both skillset and mindset, with knowledge, ability, and also attitude, which can be represented by different things to different people in different industries, some minor and some major. Which is which may depend on all three of the above, but I think there are some fundamental truths to professionalism.

That being said, this is logically going to be highly subjective based on context. However, we’ve become widely used to a certain stereotype for professional in business. When you hear that word, what do you think of?

Who do you picture?

I’ll wager many of you think of this:

No alt text provided for this image

But really, the general concept of professionalism has been business-codified. Based on the definition, professional really looks like this:

No alt text provided for this image

And much more besides.

There is no difference for me in role or industry – if you have the requisite minimum knowledge, skill, and attitude to perform the gainful task at hand, you are a professional in your field.

So is a “Professional” the same as “Professionalism”?

I think of it like this:

  • Being a professional is a vocation
  • Professionalism is based upon ongoing use and display of appropriate attitude, core competency, and knowledge

A professional may be unprofessional in certain circumstances, because we’re human. An actor may lose their temper faced with paparazzi; a doctor may act inappropriately with a patient; a newscaster may be overcome with laughter on air. I’m sure you can think of many other examples. That doesn’t mean they aren’t also professionals in their field.

This may be why we refer to people who are both professional and never fail to act professionally as consummate professionals; we’re acknowledging an ongoing dedication, not just a situational application, which other professionals may not achieve. Interestingly, we may also tend to view them as less human in certain ways, because some of their personality appears perhaps supplanted or augmented by always doing the professional thing – and we make a far bigger deal out of it if they react outside our expectations.

It’s also well worth noting that professionalism and unprofessionalism happen at all levels of business. I think this is a very important conversation, and one that historically has been assumed to be a given. Role doesn’t equal professionalism. It can equal apparent professionalism through association, which isn’t the same thing.

For example: Leadership isn’t an automatic qualification of professionalism any more than “unskilled” work is an automatic disqualification of professionalism. It’s all about the context (like everything else I speak of, context is everything!): are you appropriately professional to your role, base culture, and company?

There is a quote I use time and again by the late, esteemed Gerry Weinberg:

The name of the Thing is not the Thing. People often buy labels, not Products.

So I think we have two mainstream and overlapping viewpoints for defining a Professional:

  • Do they perform a role with all due skill, knowledge, and attention?
  • Do they look like they should be doing the job?

Out of the two, I categorically care about the first, and care very little about the second past a basic minimum! But in some businesses it’s still very prevalent that the second is at least as important, if not more so.

Thankfully, this latter part is changing. Offices are becoming more casual and acknowledging the presence and performance of individuals; culture is becoming less rigidly toxic. I don’t think anyone could accuse Gary Vaynerchuk of being unprofessional, for all he is often wearing jeans and t-shirt; likewise with many of the Silicon Valley entrepreneurs – Steve Jobs, Mark Zuckerberg, et al.

But why does it persist elsewhere, especially in the City and Finance? This goes back deeper into not changing structures because “that’s how it’s always been done” – traditional bureaucratic structures and hierarchical roles, where you dress like power to associate with (and share) power. I’m not an advocate of “dressing for the job I want” to get it – I’d rather demonstrate I excel at it however I’m dressed. (If I dressed for the job I wanted, I’d probably be wearing a Batsuit.)

No alt text provided for this image

What do you mean, “not meeting appropriate”?

Professionalism is, then, as tribal as everything else, and as prone to hollowing to become the Label not the Product. So let’s talk about the REAL defining traits, not the superficial ones. 

What did Professionalism once mean?

Some vocations have always rightly been seen as professional – doctors or soldiers, for example. But in the public eye and popular media, certain professions have become more “professional” than others.

Think back to the mid 20th century through to the late 70s, when professionalism as a term was used very specifically for sport, curiosities, or certain types of business. Professionalism was considered a rare thing, and men were very much at the fore of it; business was highly rigid and bureaucratic, polite, formal, and required looking the part. Although the strict starching and more formal attire descended from Victorian wedding dress has slowly faded, it leaves descendants in the forms of polished shoes, ties, three-piece suits, and “casual” variations thereof.

This morphed from the 80s to the early 00s, when the term became a little more generalised, and the number of people acknowledged as having professionalism exploded. It became increasingly synonymous with sharp businessmen – often emotionless, utterly driven, super-competitive in work, even inhuman if required; utterly focused, machine-like, on the goal (this is the association I have with “hustle”, which is why I don’t use the word!). A serious businessman was sharply dressed. A serious businesswoman was often seen as a ballbreaker. Dominance, aggression, doing the job at any cost. It often meant living to work (not working to live), giving extra time free to the company, never stepping outside bounds, “the deal” being a driving factor. This wasn’t true everywhere, but it was true enough to become parodied.

A lot of presentation became male-oriented and seen as “masculine-competitive” – cellphones, fast cars, sharp suits. Think of the scenes from Glengarry Glen Ross, American Psycho, Wolf of Wall Street:

No alt text provided for this image

Yes, they’re extremes – but for a good reason. For a time, this was certainly how Hollywood saw professionalism (along with assassins, like Léon, above). But this era also began to acknowledge – depending on industry, leadership, and decade – that anyone very good at what they did might be considered a bona fide professional, and this attitude grew until the mid-00s. With the advent of Silicon Valley giants run by younger generations and the huge number of Millennials redefining business, a lot of these standards have relaxed or changed even in the City, especially in tech – and leading industries do affect others, so the changes have spread.

So, looking at the slightly tongue-in-cheek examples above, we can see a vast change of values defining professionalism, from an almost formal politician-polite carrying out of corporate policy, through to profit at any cost, through to dynamic, young, new attitudes… through to today.

What are core professional traits to be mindful of?

Professionalism isn’t just about capability for me – it’s about mutual relationships. If you haven’t read my article on Chris’s Four Foundations of Sustainable Relationships (I really need to think of a snappier name), have a look.

So with those four foundations in mind, let’s look at some conduct, aims, and qualities I think are fundamental to today’s professionalism:

Respect. Focus. Ethics. Due process. Proper conduct. Empathy. Collaboration. Understanding. Flexibility. Compromise. Acknowledgement of reality. Demonstration of skill and knowledge relevant to the task. An ability to discuss and listen, and make considered decisions. A dedication to the best possible outcome, and completion of the job at hand. Reasonable punctuality. Investment in the project.

I’m sure you can think of more – add them in the comments below. But what does all this really mean?

To me, it’s using tangible ability and knowledge to deliver VALUE in a culturally- and industry-appropriate fashion. That’s what the attitude is there to support. I think it’s still very easy to fall into demanding a hollow construct of professionalism that looks good but doesn’t deliver; walk and talk are useless in lieu of results, and I know which I prefer. This is a well-known hallmark of more traditional management techniques in bureaucracies, and one reason that, after extensive study of them, I now work to change the fundamental belief system in business to a more human model, with a focus on value delivery and stakeholder investment.

Where we need to be careful is to STAY mindful of the core aspects, and not just assume them (Cobra Effect). Attitude can become like a mantra; a construct to emulate something demanded by business, not a quality to possess beneficially and mutually. We’ve all seen how a veneer of professionalism can supplant the actuality of it.

Professionalism needs to remain substance over style to retain meaning as we move forward.

Today’s professionalism

With the advent of platforms such as LinkedIn and instant communication, the attitude of the younger generation – which is becoming the primary force in both consumerism and the workforce – and the return of human aspects to business, spreading in part from the tech giants, some of the stiff formality in communication and appearance is melting. It’s possible to have friendly, relaxed conversations and work relationships which retain professionalism. Although I think the potential for miscommunication is somewhat higher – it’s perhaps a little easier to accidentally offend or misunderstand with more relaxed boundaries – I believe it is also more forgiving, and easier to re-communicate and engage. The way we text, email, write, speak, and meet has all drastically changed, but it is no less professional. In fact, companies are beginning to realise they need to understand this new professionalism, because they need to engage with totally different people now.

Given that we also now carry a personal identity and brand across interactions both business and otherwise, which is a hallmark of the Millennial generation, I think we have the possibility of forging better relationships and bonds than before with beneficial blurring of delineation between personal and business – keeping a professional distance, but utilising individual strengths to bolster this.

We have also acknowledged professional areas across wider swathes of industry than ever before; professional gamers, coaches, sportspeople, actors, salespeople, developers, musicians, executives, workers, photographers, even in incredibly specific areas such as art, curiosity, entertainment, YouTube, and countless more areas of Influence.

But still the definition applies: the conduct, aims, or qualities that characterise or mark a profession or a professional person that are usually pursued for gain (as a career path or for personal profit).

We are now in an era of more relaxed communication and presentation, of valuing the individual, and human sides of professional conduct – no longer strict machine-like interaction – but I believe the core aspects remain the same: ethics, working towards value delivery and using knowledge and ability to do this effectively.

Professionalism is now beginning to include equality, mindfulness, self-development, kindness, EQ and the affective side (not just the cognitive side), and a greater focus on compassion, collaboration, and fulfillment – an investment in what we do. This can only be a good thing, and we have the work of many, many great consultants and coaches I have connections to here, and thought leaders such as Brigette Hyacinth, Gary Vaynerchuk, Dave Snowden, and many others to thank for this. We’re ALL changing what professionalism means, and it’s become much more social, in line with growing realisation that business has always needed to be seen in terms of social complexity.

Individual hyper-competitiveness is now seen as potentially destructive and less professional than collaboration. Working purely for profit and ignoring human problems and suffering – the “it’s just business” attitude – is fading out, to be replaced by care and empathy. “I’m the boss” and boss-mentality as a whole are being replaced by humble leadership: people who invest in workers as stakeholders. The focus of professionalism is moving towards the value delivered, not the unthinking support of hierarchies, profit, and perception. That, to me, is welcome – and consummately professional.

So what’s next?

I hope this has been an interesting exploration so far, with some food for thought.

As usual, this is designed to get us all questioning traditional patterns and consider how they fit into the rapidly-accelerating changing face of business, especially with the growing focus on people and individualism; thinking from another perspective. I’m sure there will be many different viewpoints depending on industry, age, experience, culture and more – please comment below, I’d love to hear your take on a subject that is both objective and subjective!

I think it’s good to be mindful of how changes are happening – look out for Part II, where I discuss what defines UNprofessionalism.

FEARING CHANGE, & CHANGING FEAR

Note: This is a tweak of an old blog that collates several thoughts, some from other articles and videos I’ve posted recently on LinkedIn. It may be of value…

There is undeniably a resistance to changing management which is deeply entrenched within the business world, predicated on decades of “but we’ve always done it this way!”, even if we know that way doesn’t work on multiple levels. To survive, leadership are slowly realising that they need to be open to different views, focus, and structures – to change. And they need to do this faster.

When I speak of change here, I am speaking about evolution and the acceptance of beneficial change within human systems and business, not radical destructive change such as the Climate Emergency, nor personal enlightenment. Not all change is good – destructive change is to be avoided – but as a result, we shy away from beneficial change too, and that’s my focus here: the change within ourselves to adapt, not the order we attempt to enforce on our world through innovation and the like. I talk about human fear of change and comfort zones elsewhere, too, but much of this is applicable.

Whether we experience it individually or within human constructs (religion, organisations, families, clubs, etc), there is a Fear of Change ingrained in us in both business and personal life. Humans are comfort-creatures; we value stability and comfort in our lives, be it professionally or at home. So what happens when the ever-changing Universe rudely reminds us that everything is, ultimately, transient?

It is very human to deny that change is happening, that a system has become (or always was!) un-ordered. The reaction is often to then try to impose order (constraints), and often we do this to systems or situations that cannot by nature be ordered, making the problem worse.

Change represents the oft-acknowledged deepest fear of mankind: that of the unknown. Uncertainty. We know where we are now, and find comfort even in uncomfortable situations; true change will really change things, and this can induce anxiety, worry, discomfort, fear – not only of the consequences, but of the change itself.

If something isn’t working, a change is needed for it to begin working. Sometimes the fear of change is so great that we would rather it simply continue not to work, because at least then we know it isn’t working; in other words, we have some form of certainty. This, of course, isn’t helpful in the long term, for delivering value, or in urgent situations, and to accurately gauge this we also need to understand the benefits or risks of making the change.

But what if something is already working?

One response is: why change if something works? (Which can also mean: it sort of works well enough, maybe, and also I don’t want to spend money.)

Why indeed? But as with everything, this isn’t a black and white situation, much as we love to polarise. It may be barely working, or require workarounds to complete. It may be inefficient or cause rising/unnecessary costs, or added complication and hassle to daily life. If it works well enough, which is highly subjective, you have to ask if it is worth changing. If the benefits of change are outweighed by the risks or clear negatives, or it is poorly perceived or understood, it is probably not worth doing.

But if you take any organisation with working processes in place, the chances are high that people will usually say, “Yes, it works, sort of – but it could work much better” about many of them, and then specify where the inefficiencies impact their overall effectiveness and workload. (A problem I have often found is that, where an organisation does undertake to make changes – be it a new system, process, or team – it is usually a higher-level decision that often doesn’t fully provide training, positioning, and applicable usage to the people actually doing the job, and can be either too simplistic, over-complicated, or ill-applied – in other words, not appropriate to resolving the core issue. This is why listening to the people doing it matters!).

If this is the case, and benefits clearly outweigh risks… why not change it to make it work better?

The place to start with processes, change and the fear of that change is the same: you start with the people.

Why start there?

All processes, all base decisions, and all value delivered stems from the people within an organisation. People are interconnected individuals working within an organisational structure towards a common set of goals in a variety of ways; without those people – and their interconnections – the innovation, the products, the organisation itself would not exist.

Another way to say this is that people both create and are the value delivered by an organisation. Or, to put it in a more succinct fashion, Value Streams are made of People (Keogh).

So, recognising that the value of your organisation is the people is an important step, for a number of reasons. It is people who fear change, not the products or the infrastructure within an organisation; it is people who make an organisation work.

People fear the change wrought in any organisation because it disrupts processes and workarounds that may work imperfectly but still more or less work, and allow at least some value delivery. Worse, it may cause further inefficiency and unnecessary stress, or expose workarounds that are not strictly in line with company policy – but bureaucracy may have left them no other choice to achieve their business goals, which brings potential personal risk into play even in a clearly failing scenario. Gaming Behaviour is a clear sign your organisation is in trouble, and is probably too rigid to even see that, let alone adapt. It doesn’t work properly, and people have to go against it to achieve what it demands.

In this environment, change will not come from people concerned with being perceived as catalysts for disruption; change must come from leadership.

“It works well enough.” “Let sleeping dogs lie”. “Don’t rock the boat.” “Don’t stick your head above the rest.” “Don’t stick your neck out.” “Don’t be sales/delivery prevention.”

These are human expressions of not wanting to cause further potential problems, and they become progressively more fearful of being singled out for causing issues – even if the root aim is to resolve more fundamental issues within the organisation and provide better, smoother value streams. Politics, bureaucracy, interdependency, and tradition can all turn what looks on the surface to be a simple change into a highly complex situation, and possibly render a goal unattainable, even though it may be to the greater good of the organisation. In a perfect world, a flexible and reactive enough organisation – one that recognises itself as a complex system overall – shouldn’t need covert workarounds; experimentation should be built in.

A root of this fear lies in uncertainty. People require certainty to maintain stability, comfort, and (relatively!) low stress. Knowing a situation is good or bad is far preferable to not knowing if it is good or bad, or even what it is, so the natural inclination is to maintain the status quo and not be singled out, as long as this isn’t disruptive enough to become worse than the potential uncertainty (there is a fantastic example of the effects of uncertainty in a study involving rocks and snakes, used by Liz Keogh in her talks).

Why do organisations and leaders not recognise this?

Some do, of course, but not many seem to fully realise the causes behind it. One of the most important things to understand is that the landscape has shifted, and it is accelerating in modern business. Knowledge has become the primary global economy, with business being undertaken around the world, around the clock, and data being digitised and made available and changeable at exponentially greater quantities and speeds than ever before.

The management of this knowledge, and the methods used, have become key to an organisation’s productivity, innovation, and agility (Snowden, Stanbridge). Sprawling bureaucracies are giving way to entrepreneurial practices, and many companies are caught between the two, trying to apply often contradictory methodologies of both to their staff and their products.

At the same time, the latest and not yet widely-understood shift to virtual systems, the increasing use of AI and IoT, and knowledge digitisation have moved business to a realm we have no prior experience of or reference for. This causes fear and concern, because we are being forced to change at both a personal and industrial level at a speed that isn’t just uncomfortable but alarming. Organisations push back against this by acting as they always have, with the Cycle of Woe and traditional management – cutting costs, replacing management teams constantly, and so on – but the simple procedures that once seemed to work do not produce any benefits past the extreme short-term (I talk about this in much more detail in https://www.linkedin.com/pulse/red-pill-management-science-christopher-bramley/).

This is because we are now experiencing the Fourth Industrial Revolution (possibly the sixth), which is an entirely new landscape requiring new understanding and actions. Because organisations have neither, many of them currently “feel like they are in Hell” as a result of the Dark Triad (Kirk):

• Stress
• Fatigue
• Antagonism (“Arseholeness!”)

…and they occur both at an organisational and a personal level.

So how do Organisations and Leaders currently react?

The key reasons for these responses may come down to two things:

1) The still-existing, long-term investment in structures based in Taylorism (which dates back to the 19th century, yet is still a core of today’s management science), a root of Process Engineering. This can be interpreted as the belief (and action upon the belief) that an organisation is a machine, with people as cogs or components that will consistently deliver exactly the same output in quality and quantity – or, that an organisation conforms exactly to rules.

It contains 3 of Mintzberg’s 10 Strategy Schools:

No alt text provided for this image

2) The more recent but equal investment in Systems Thinking which is used to model and project towards perfect goals using outcome-based measures, removing human judgement and relying heavily on causality and prediction.

It contains the other 7 Schools:

No alt text provided for this image

The problem with these two widely-used adherences, now used as a combined modern modification of Taylorism, is that they are based on the assumption that business and organisations are ordered, causal and predictable as per the matrix below, whether they are applied in context or not:

Cynefin Knowledge Management Matrix (Cognitive Edge)

Business and Businesses are neither ordered nor predictable. Despite the realisation for decades that modern Taylorism is actually detrimental and only a slight shift from a perception of “machine” to “human” (Peters, Senge, Nonaka), businesses have only just started becoming aware of the importance of truly humanising business and redefining the core values. This is the crucial understanding of moving to the top right sector, Social Complexity.

In other words, a vast number of companies still try to force their organisation to fit the modified concepts of modern Taylorism because it is trusted and traditional, despite being proven ineffective, and act as if it will forever output the exact same quality and quantity in a forecastable fashion.

Why this approach simply doesn’t work

The very presence of humans who can vary output, focus, workloads and innovation both within and driving an organisation dependent on a number of factors that aren’t necessarily causal or logical – that is to say, complexity – means an organisation can’t be a rigidly ordered system. It is by nature complex, un-ordered, but the tools we mostly use to resolve issues are based on it being an ordered structure with simple rules. The understandable preference, based on certainty and comfort, is to seek simplistic identically-repeatable approaches (“recipes”) based on clear and idealistic outcomes (Snowden).

Ontologies in relation to basic Domains (Cynefin)

What’s interesting is that people will try to manage an organisation as ordered when it isn’t, yet adapt very quickly to managing home life which is similarly un-ordered, often within the same day! In other words, between two complex systems, leadership will flip from Modern Taylorism to Social Complexity. This is really interesting, and brings into focus the concept of our different identities, or aspects we transition between seamlessly to fit into different situations (more on those in other articles) – something older generations have more strongly than the newest.

It is also very easy to miss that many instances are multi-ontological. As a very simple example, if I run a lab when coaching, I deal with an obvious domain in much of the basic subject, but also complicated areas in advanced concepts; the technical systems I may use to train are largely complicated; and the addition of students themselves brings complexity, as the learners drive the class, and every class is different from any before as a result (it’s rare that a session descends into chaos, but it’s not unknown, and usually requires outside influence!). So I can end up dealing with all three ontologies in one course! Order, un-ordered complexity, and un-ordered chaos all require different management, but they can all be managed.
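
Since Cynefin pairs each domain with a documented decision model, that lab example can be written out as a toy dispatch table. This is a sketch only: the decision models per domain are Snowden’s, but my mapping of session aspects to domains is an illustrative assumption, not anything from the framework itself:

```python
from enum import Enum

# Cynefin's decision models per domain (Snowden / Cognitive Edge).
class Domain(Enum):
    OBVIOUS = "sense - categorise - respond (apply best practice)"
    COMPLICATED = "sense - analyse - respond (bring in expertise)"
    COMPLEX = "probe - sense - respond (run safe-to-fail experiments)"
    CHAOTIC = "act - sense - respond (stabilise first, assess after)"

# One coaching lab, several ontologies at once - my illustrative mapping.
lab_session = {
    "basic subject matter": Domain.OBVIOUS,
    "advanced concepts": Domain.COMPLICATED,
    "technical training systems": Domain.COMPLICATED,
    "learners driving the class": Domain.COMPLEX,
}

for aspect, domain in lab_session.items():
    print(f"{aspect}: {domain.value}")
```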

We have to think in terms of multi-complexity in a world that is multiply-complex; there is no simple answer, and another company’s answer is unlikely to be yours.

The visible effects

50+ years of business practice have left a huge number of organisations not fully comprehending that the shift of many markets from product to service, and Industry 4.0, requires organisational agility and change. Markets are seeing the stifling of innovation and a downward dive of productivity (Snowden).

This inevitably sparks the above frantic reaction (change of focus, sudden arbitrary swerves to “disrupt the market” without recognition of opportunity outside a narrowly focused goal, cost cutting, redundancies, management team swap-outs, further cash injections, etc) without looking at what is working, and more importantly understanding that this is not a one-fits-all recipe that can merely be transplanted inter-organisation for success (Snowden).

It is becoming clearer that collaboration, reactive approaches, and SME-level agility and innovation are where markets now grow in this new landscape of people being and delivering value via a knowledge economy – a beneficial realisation for organisations struggling “in Hell” to take as a first step into new understanding.

So what now?

 “…Where we go from there is a choice I leave up to you…”

The more I look at the current struggles to achieve the results of yesteryear, my own experiences of the last twenty-plus years, and the new evidence of Industry 4.0, the more I realise how accurate the above is. Interdependency is clearly now essential in a new, barely understood landscape of High Demand / Ambiguity / Complexity / Relentless Pace (Kirk). We haven’t been here before.

To find balance and prosperity, and deliver real value once more, collaboration, agility of approach and innovation are all required. We need to sense-make; we need to path-find, or forge our own new paths.

“Reacting by ‘re-acting’, or repeating our actions, merely causes problems to perpetuate. In a new landscape, a new reaction is required for change” (Kirk) – and re-acting is currently how many companies ARE acting. This is also one of the keys to Cynefin and managing complex situations; it is virtually impossible to close the gap between the current situation and a goal projected on causality when dealing with complexity – a system with only some constraints, where each aspect affects all others. Instead, you must see where you can make a change, monitor that change in real-time, and recognise the opportunities to amplify success and ignore failure as they arise via experimentation (Snowden).

Or: instead of trying to achieve an idealistic goal impossible from your current standpoint, make changes to the system that may throw up even better goals, watch for them instead of focusing exclusively on the old goal, and then grasp them when they arise. You must start from somewhere, but the key is to start – a certain step is the first one to conquering uncertainty.
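
As a thought experiment only, here is that loop reduced to a few lines of Python. The “system” is a toy stand-in and every number an assumption; the point is the shape of the loop – probe, sense, then amplify or dampen – not any real model:

```python
import random

def probe():
    """A small, cheap, reversible change - safe to fail by design."""
    return random.uniform(-1.0, 1.0)

def sense(change):
    """The complex system's noisy, non-deterministic response."""
    return change + random.gauss(0.0, 0.3)

health = 0.0                 # some observable measure, monitored in real time
for _ in range(20):
    change = probe()
    outcome = sense(change)
    if outcome > 0:          # amplify: adopt the change and build on it
        health += outcome
    # else: dampen - don't adopt it; the probe cost little and taught us plenty

print(f"system health after 20 probes: {health:.2f}")
```

Note what’s absent: there is no fixed target being chased. We simply adopt whatever the system responds well to – the difference between managing towards a forecast and managing an evolving disposition.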

“Organisations and people ALL matter, because they drive, innovate and ARE value; we matter because everyone else matters” (Kirk), and industry becomes, not forced into trying to be a destined-to-fail machine system, but a safe-to-fail ecosystem – holistic and interconnected, not only able to adapt to change, but driven by it.

I constantly quote her words here, because they are overwhelmingly true.

You know who is comfortable with change, both because they know it’s needed, and because they were born into this landscape as it developed?

Outliers, and the Millennial Generation. I mention them time and again because, like it or not, they are both the catalysts required, and the workforce and consumers who now need to be engaged in every sector.

The problems we still face

The issue in many organisations, and with many managers, is the still de-facto belief that correlation = causation, and that simple, universally-applicable recipes give idealistic outcomes. These beliefs have led to years of failure and problems, and are a driver of the industry “waves” of best-practice management fads that don’t work long-term but propagate because they are new, and because short- or medium-term results may have been seen at some other organisations (see: https://www.linkedin.com/pulse/secret-shortcuts-innovation-christopher-bramley/).

What works to fix or improve one organisation is not necessarily going to (in fact, is very unlikely to) work perfectly for another, or to work subject to simplification and/or general application. This is a core assumption still in use that conforms to the Process Engineering ideology. You cannot take something in a complex situation and reduce it to a repeatable generic recipe that works perfectly; it just… won’t. No two organisations are alike. Every instance should be approached, investigated, and worked on individually and holistically to see whether it should be managed as ordered or un-ordered (complex or chaotic). There is benefit in seeing what other organisations did to resolve similar problems, as long as it is understood that the approach and fit must be modified: the incorporation of aspects, rather than the dogmatic following of a whole.

Furthermore, the more people find approaches to be effective, the more they seek to codify the concepts – which is fine to a point, but can easily lead to them then structuring the approaches, modularising them, and then seeking to force them back into the ordered ontology (the Cynefin domains of Obviousness or Complication) as a simple, universally repeatable recipe, when many are ultimately agile and flexible tools to manage un-ordered systems (Complexity or Chaos). This is something that appears to be happening to the concept of Agile at the moment; it is becoming less agile itself as it is taken in by large organisations and constrained (see https://www.linkedin.com/pulse/never-mind-buzzwords-christopher-bramley/).

At the same time, there are constant clashes within organisations. Organisations want to be both fully ordered, with infinitely repeatable output, and flexible and innovative. The first of these is causal (repeatable cause and effect); the second is dispositional (you can say where you are and where you may end up, even simulate it, but not causally repeat or predict it). They are very different in nature. By its very nature and composition, an organisation cannot be a simple ordered system, and this is where the work within Cynefin on Social Complexity/Anthro-Complexity begins to make sense of these systems and the management of complexity and chaos.

There is also the requirement for a deeper comprehension of the fuzzy liminality of whether or not you should make a change, which differs in each situation: a risk/benefit exercise where we weigh up the benefits of making a change – deep and long-term as well as short-term – where the former are often ignored in favour of short-term profitability. Where the dangers of making a change are not defined or understood, or are clearly not beneficial, it is wise to consider carefully whether you should do so – and if so, what the correct manner of doing so is.

Finding a new way forward

One of the fundamental movements that resolves many of these issues will be a shift from Hierarchies, where organisations are ranked internally relative to status and authority with a focus on control (power), to Ecosystems, where organisations recognise the relationships of every person to each other and to the organisation, with a focus on delivery (value).

This is geared towards agility, adaptability, acknowledging change and being driven by it, and recognising that organisations are largely complex and cannot be distilled into simple recipes repeatable for idealistic outcomes. The market, the industries, the universe itself inflict change, as do the people within; order is impossible to maintain rigidly, so recognising how to manage un-ordered systems is required.

Before this can happen, organisations (and their management!) need to understand how much efficiency and value delivery they will gain from equally fundamental shifts in their traditional beliefs. It is understandable that organisations wish to impose order and tighten control to make sense of things, but Dave Snowden warns against the effects of “over-constraining a system that is not naturally constrainable” – you are asking for more inefficiency and problems, not fewer.

Many of the concepts touch on Agile, Lean, Cynefin, and other concepts and frameworks all at once. There is a reason I tell organisations they need to view this holistically, not just engage firms based on the latest buzzwords and processes.

Change is a fact of business, and of life, and can be feared for good reason; but that fear should not stop change where change is required or beneficial, nor drive us to resist change that cannot be stopped. Instead of fearing change, we can teach ourselves to turn fear into something more productive: an alertness to the opportunities that change will throw up. You have to let go of the old with at least one hand to be able to grasp the new.

You only learn when you are open to change, you move outside your comfort zone, and you accept failure as a lesson that builds success; that uncertainty is the point from which new understanding can grow. The more used to taking that first certain step into uncertainty you get, the less you fear the challenge, and the more you relish it. A good teacher can help place your feet on that path, and walk the first steps with you.

Don’t be afraid or too egotistical to reach out to me for help to understand how to change. We all need unbiased guidance with context from time to time.

If you do it correctly, you and your organisation will change for the better.

Why the American Dream isn’t what we think

Jamie Dimon, chairman and CEO of JPMorgan Chase & Co., is quoted as saying “The American Dream is alive but fraying” with regard to the most recent Business Roundtable report, which explicitly counters the view, held for decades, that the sole focus of a corporation and its CEO is to maximize profits.

But, tightly woven or fraying, it isn’t really what we think it is. So, why do I say that about “The American Dream”?

Well, here’s an interesting thought exercise:

The American Dream is really the Global Dream

The American Dream is based upon a number of ideals, and although it’s archetypally American, it actually isn’t a purely American dream at all. It’s founded upon the idea that capitalism, a foundation of modern global business, can align with hard work, passion, and freedom to allow a genuine opportunity for prosperity, success and upward social mobility for families, in a society that (hypothetically) has relatively few barriers to this.

I’m sure we can mostly agree that this is now something that is represented as a dream in some way across most countries around the world. Partially, this has come about because elements of America’s culture have spread for the last 100+ years, through media, film and business especially (for example, the ubiquity of McDonald’s is incredible – I have seen one on a beach on a small island in the Philippines, with people lining up!), alongside the fact that English, more specifically US English, has become the primary business language worldwide (as mentioned in other articles, language frames our thoughts, so when you think in American English you think in more American terms).

So it’s no longer confined to the East and West coasts, if it ever truly was.

The Largest Generation are a Global People

A growing number of people are having trouble with the concept of the American Dream, however, and they are invariably of the Millennial generation. They’re aware that it’s harder than ever before to achieve, and they know it’s really world-wide. It’s further out of reach for them than it was for previous generations – think of the difference between buying a house in the midst of the Baby-Boomer generation and the difficulty faced by Millennials today. It’s also rapidly coming to be seen as not what it’s represented to be, because in reality there are considerable barriers in much of the west (at the least) to widespread individual prosperity, success and upward familial social mobility.

Additionally, they have a strong sense of personal brand, identity, and belonging – they aren’t prepared to become a cog in a machine. They want to be stakeholders, part of a solution, and visibly recognised as such. They want to be acknowledged as individuals.

They are different; they are coming into spending and directorship power as they hit their 40s; they engage and think differently to previous generations. They hold a strong primary identity across work, personal and online presence, which is also a personal Brand, and they think less in terms of cultural identity and more in terms of individuality as a member of Humanity at large. I’ll post another article later on what it means to be Millennial and what the generations mean, but for now it’s enough to understand:

They have a Global Dream, not an American one, and they are demanding it.

Isn’t Capitalism part of the problem?

This is one of those “Yes… and No” answers. Capitalism per se isn’t necessarily a problem, but hyper-capitalism is – the basic description of Neoliberalism.

Neoliberalism doesn’t align with the above Roundtable shift because it essentially promotes the belief that nothing has value unless it makes money (I’ll delve into Neoliberalism more deeply elsewhere). This is oppositional to finding value in individuality, humanity, culture, work-life balance, integrity, mental and physical health, and all the other human things we have begun demanding again, because many of these things don’t bring financial profit; they are rewarding in other ways. Neoliberalism aligns very well with bureaucracy, in fact, because both have been used to remove human aspects from business in the name of efficiency (whether that efficiency has been achieved is widely open for debate – watch for other articles!).

A larger part of the problem is humanity’s proclivity towards polarising. We rarely tend to take a measured middle ground when we can leap in and pendulum across to one end of a scale!

In the grand scheme, capitalism isn’t necessarily mutually exclusive with socialism – a mixed economic system can allow for capital, private property, and economic freedom, while allowing governments some control to achieve social aims, permitting monopolies but regulating them, and a whole host of other balances.

In the same way, Socialism isn’t necessarily bad as a democratic ideal – anyone believing in a free health service, looking after veterans and the homeless, and similar is aligned with it. A true mixed economy would essentially allow us to pursue earning for ourselves without being too selfish, which is a great idea – precisely what we want to achieve in business right now, in many ways.

In practice this is hard to achieve, and it’s estimated to be less efficient than a free market. Certainly, those economies such as France’s that are considered partially mixed I believe to still be quite heavily weighted towards the capitalist side. But I think the point is that it’s something that can be explored, and the people best suited to (and with the most right to) this exploration are, now, the Millennial Generation with a Global view.

This is all highly complex, of course – it involves mindsets, markets, the environment, the current state of business, engagement, and a whole host of other areas (hence my widely varying work).

So really, the original American Dream has elements of a mixed economy – but the current practice is really very neoliberal, and in my opinion very unhealthy for both businesses and individuals long-term. I do not believe the (Global) Dream is being realised right now, but I think we’re looking in that direction.

This is obviously a widely-shared opinion, given not only the Roundtable report itself (which puts a stamp of formal authenticity on the movement), but the radical and sweeping shifts in business practice and focus, championed by people like Gary Vaynerchuk, Brigette Hyacinth, Bill and Melinda Gates, and increasing numbers of other leaders (the 180 signatories of the Roundtable included) too numerous to list.

I’m seeing it here on LinkedIn from a majority of you, too.

The Choice Faced By Us All

Currently the Global Dream is propped against a Global Nightmare. We’re making great strides in redefining business and the humanity within it; we’re moving towards value for stakeholders – the shareholders, customers, employees, suppliers and communities – over simple profit for shareholders. But Neoliberal attitudes still have a lot of weight, and many companies are reluctant to let go of bureaucratic control and that focus. It takes effort. We’re still getting used to the (long-overdue) realisation that the environment is now a critical factor in all this, as well. We need radical attitude changes, and we’re moving that way.

I think we’ve got a real chance at a bright, productive, kind future where we can still earn our own way and benefit individually as well as together, and I’m encouraged by the wealth of forward-thinking I’m seeing on platforms like LinkedIn. But many companies still need help; many executives still need encouragement to find a new path which will be even more profitable, because change is frightening at a personal and professional level, and uncertainty is the biggest source of our fear. Ego plays a part; refusal or inability to see the rapid market shifts and engagement changes plays another.

Some of us specialise in helping businesses find certainty in uncertainty. Reach out to us! But don’t just reach out to specialists like me – reach out to the greatest asset you have as well. Your employees, your customers – your outliers, your mavericks, the Millennial generation. They have new and innovative ways forward to help this dream be realised. Use all of us!

From my own work, I strongly suspect the Dream will not be realised by bureaucracies, but by ecosystems, for a number of reasons.

You can’t make changes in a complex system without affecting everything else in that system. That’s why I have such a holistic approach, and advocate that we all take one; it’s needed for true change. If we want all the things I see discussed every day here, for business and individual alike, we have to make large changes, together – and that really needs championing by the people best able to bring about this change.

Isn’t it time many of us stepped back and let the new generation that is going to live this future at least help guide it? If anyone is going to make this Global Dream happen, it’s them. 

The 4 Fundamental Foundations Of All Relationships

It doesn’t matter who we are – base a relationship upon these principles, and it has a good chance of being extremely mutually rewarding. Miss one, and it’s likely to fail.

Every relationship has multiple aspects, and they will differ depending on the type of relationship, the intensity, and more, but we are finding that integrity is more critical than ever.

There are four things, under which many of the others fall, without which a relationship simply cannot sustain itself, and which – for me – help define integrity. This is true for ALL relationships – it doesn’t matter if it’s business, marriage, friendship, or even family! Without all four foundations, a relationship will fail, sooner or later.

Learn to apply these in every aspect of your life and you’ll see a difference. All of them are vital, but the last is the most critical – and the most often missing – because the rest all depend upon it.

So what are they?

A Form of Love or Respect – a Basis

All relationships have to be based on something, to have a heart, and you cannot even have a simple business relationship or partnership if you don’t respect the other in the relationship – or if they don’t respect you. With closer relationships, there may also be fondness, deep regard, or love – but if you don’t even have basic respect for someone, why do you have a relationship with them? And how long do you think that will last?

Honesty

Being honest with yourself and others is vital for a relationship’s stability. This doesn’t necessarily mean telling everything without diplomacy, but it does mean not misleading others; being direct where it’s needed; having transparency where required; being deserving of trust. If you are not honest, trust will quickly evaporate and the relationship will fail. People don’t want to marry or do business with dishonest people. It’s an old cliché, but it’s actually true: honesty IS the best policy in relationship terms, because it signifies respect or love, and it is both justified by and inspires trust.

Trust

Hand in hand with honesty goes trust. Trust is vital – sustainable, beneficial relationships are mutual, with give and take. How can you do either if you cannot trust the other person? Trust is a concept that is both active and passive. People feel they have to earn trust, but also you should, unless given a specific reason not to, trust someone you respect and have formed a relationship with on a mutual basis of respect. This is a part of risk and reward, and justification for honesty. If you trust someone, respect someone, and you’re both honest, you will collaborate optimally in whatever you do.

So, what is the fourth? This most critical of all?

The basis of everything we do together?

Communication

Without communication, you cannot express respect, love, honesty, trust, teaching, learning, feedback, or a multitude of other things. And yet communication is where we most often fail: in business, in marriage, in friendship, in family. The importance of clear, mutual communication based on respect, trust and honesty cannot be overstated. It is the principal foundation upon which all relationships are constructed, and upon which the other foundations depend. It’s also the least clear, because individuality, context, assumption, mental patterns and traditions, and much more that I speak of in other articles and posts skew it, so we are very often communicating on different levels even when we want to be aligned.

No wonder mistakes are made so often! Communication is probably the most crucial to understand or fix before any others. It’s quite literally how you interface with all other humans, and will define how a relationship works. True communication is also based on discourse, not one-way instruction, which is easier – and modern management has fallen into a very tempting pattern of dictation and expectation as a result. For ideas on learning to listen and accept other communication, have a read of 3 Things You Can Do To Immediately Enhance Your Leadership.

So, these are (for want of a better title!) Chris’s Four Foundations of Sustainable Relationships. It doesn’t matter who we are – base a relationship upon these principles, and it has a good chance of being extremely mutually rewarding. Miss one, and it’s likely to fail.

I’d love to hear what people think – please comment below or give me a shout! I find many people add elements to this, although often they loosely fit under the 4 above.

3 Things You Can Do To Immediately Enhance Your Leadership

Here are three simple things anyone, especially in management or Leadership, can do to increase communication, context, and decision-making ability exponentially:

  1. Talk to your employees on a personal level. Listen to their ideas, and value them as people.
  2. Don’t just ask what they can do for you – ask what you can do for them!
  3. Get honest, constructive feedback, and act on it where appropriate.

These three things can help change how culture works in a company, and help the company start the journey towards being an ecosystem. 

What do they really amount to?

Making people in your company Stakeholders.

What do these things mean?

  1. Talking to people personally helps you care more about them, and helps them understand this. It avoids them becoming a mere component to you. If you also humanise yourself to them, they will care more about you as well, and you will find that translates to caring more about your business. This also steers away from the pitfalls of assumption – you assume less when you know someone better.
  2. Employees have come to expect that they give a lot and get little back in modern business. There’s little more disengaging and demotivating than this, and you won’t get their best! Having a genuine interest in their needs helps you understand their drivers better and thus manage better, and it lets them know they are getting value back from you. It truly promotes the realisation that you’re all in this together, and that collaboration is the best way forward. Value doesn’t just come from monetary reward; it comes from job satisfaction, achievement, and integration. Consider this when you create “awards”.
  3. Leadership is habit-forming because it’s so immersed in the company, and it’s easy to become used to hearing only positivity, especially in a toxic culture that promotes sycophantism. It’s important to get honest, grounded feedback from everyone, not just direct-line reports or peers, because it will help you understand what’s really going on within your company, and without, too. This is also a great opportunity to get left-field ideas and suggestions which might spark some interesting thoughts and directions. Feedback benefits those who give it, but it also hugely benefits leadership. Just be sure that you then act appropriately; ignoring valid feedback is often worse than not asking in the first place, and only weakens your leadership and the organisation long-term. Don’t fall prey to inattentional blindness or hubris!

Why is all this important?

With a better understanding of individuals and the culture within an organisation, you are acknowledging that each agent within the system that makes up your company has an effect on the whole. If you work for your employees, and they know it, if you invest them in the company’s success and make it their success as well as yours, they will strive to make the company succeed – naturally. 

Company Culture is crucial. It is made of the interactions between people, and is defined by the actions and inactions of Leaders. If the interactions are bad, and bad behaviour is supported by either action or inaction, the business is not as effective as it could be and the people aren’t as happy. Look at companies with toxic cultures, or that dehumanise and fully model decision-making – they are often mired in unhappiness, red tape, lack of innovation and progress, staff turnover and a hundred other problems. How is this beneficial?

Culture defines basic ethics, accountability, honesty, and whether people want to work there. If you have a good, open culture, you will get less gaming behaviour, less sycophantism, more accuracy for decisions, better grounding, and more investment – the ability to move forward as an organism aligned, not fighting itself.

Remember, people are becoming more aware that a company has to fit them, as well as their fitting the company. Ethics, accountability, humanity all matter, especially to the people beginning to make up not only the larger part of your workforce but your new customers as well. Creating an ecosystem makes people stakeholders on multiple levels.

Ecosystems are better for modern companies because they are in line with the changing values of business and individuals, but also because older bureaucracies and hierarchies are not capable of keeping up with, or innovating at, the speed of the modern market.

This doesn’t mean Leadership loses power. It means Leadership gains valuable insight.

If you demand their all plus extra work, and treat people as less important or even less than human, you won’t get their best; but if they have a better connection with you and are stakeholders in what you all do, they are more likely to give freely of their all and do extra – and you’ll get their best.

Explore the idea of working for your employees! Prove your worth, and they’ll prove theirs over and again.

If you want to talk more about leadership, decision-making and how to improve it, please reach out to me!

The Decisive Patterns of Business

I’ve always been fascinated with patterns. I’ve unknowingly (then knowingly) hunted them for most of my life, choosing to ignore some but embracing others, very often where most people don’t see them. I’ve always sought equations, patterns, an understanding for solving pieces of life instead of relying on the Dei Ex Machina of Survival of the Luckiest and Success without Learning. In this article I’ll delve into the concepts around patterns… and how they affect your business.

Humans naturally seek to create order, find patterns, and build systems. We do this in business, material, social, religious, emotive, physical, and cognitive terms; we seek, create, build, and find comfort in patterns, rituals, methods, which often consciously or subconsciously produce certainty and comfort, and which we use to interpret the world around us.

Patterns are everywhere. We see them all the time, but many of us don’t notice any but the most obvious (although some of us can’t help notice more, and that can be a little overloading at times). But there are many more levels than this – we fall into patterns, and operate on patterns; we think in patterns; our brains operate on primal patterns, too, as do our bodies. We each develop our own patterns at all levels, as well as sharing a common pool.

This isn’t only how we live, operate, and create; it’s also how we decide.

Decisions, Decisions

Humans are a fascinating mixture of instinct, emotion, thought, and rationale. We all contend to greater or lesser extents with confirmation bias, denialism, and the occasional bout of cognitive dissonance, and many of us get stuck on the primary curve of the Dunning-Kruger effect. Some of this is determined by individual brain structure (for example, activity within the amygdala), some by shared structure, some by formative experience.

In short, we are designed to fall into patterns of thinking controlled by impressions, emotions, and filtered, reasoned (and therefore probably inaccurate) past experiences, in lieu of fully rationalising using firm facts. This is great for tribalism and building subcultures; it is an impediment to accurate decision-making, however, and this can especially affect business.

It has been noted, interestingly, that one group of people can consistently make decisions based more on rationale than immediate patterns, and tends to see patterns innately: autistic people (Snowden). So it’s clear that neurodifference changes our processing and allows different capabilities and breakthroughs.

There is some wonderful work done by Dave Snowden of Cognitive Edge on Decision Making, based on work from Klein (1998) around mental patterns and how they affect decisions. In it, he goes very deeply into a lot of the subjects I discuss and consult on which I believe are fundamental – neuroplasticity, engagement, narratives, ethnography, upbringing, and much more.

Patterns for decisions link deeply into Cynefin, Dave’s naturalistic framework for understanding complexity. Narrative as defined here goes far deeper than some consultants who work with storytelling take it; storytelling is crucial, of course, but narrative means far more than simple storytelling. It’s context and data, quantification and qualification, discovery and transfer; it’s human cognition and memory, and evolutionary advantage; teacher and student in one linear, often fragmented data flow. All of these fall into patterns.

Decisive Manipulation

Buy now to relieve guilt later!

Shopping in a supermarket is a great example of pattern-based decisions. The layout puts fruit and veg at the front (which alleviates guilt about less healthy foods later), while the essentials are often at the far end, forcing us to walk through a veritable cornucopia of delights, most of which leer temptingly at us from the end plinths as “special offers”. More expensive products are stacked at eye level; popular combinations are stocked next to each other to persuade you to get both; tills are populated with small ‘essentials’ to trigger impulse buying; and produce is moved periodically, preventing us from going straight to what we want and ensuring we walk past deals we might suddenly think we want. Some stores even play slow music to make us subconsciously spend more time (and money) in the store.

We are literally hacked as we walk into the store, and it’s based on mental patterns – and the manipulation of them for profit.

How often do we really consider a new supermarket purchase based on optimal fit over convenience, attention grab, or mere close approximation?

It isn’t just supermarkets. Casinos are famed for pumping in oxygen, having no clocks, bringing constant free drinks, having no windows, and using the addictive patterns inherent in games and “chance” to attract and hold attention until you have spent every penny you had.

These are the patterns they want you to focus on…

How this affects Leadership (The Decision-Maker’s Quandary)

This is also extremely prominent in business decisions and why they are made, up to and including Director and C-level. Leadership especially falls into patterns, because they typically have a set amount of things to decide and carry out which are often similar and repetitive, in less time, with less involvement. This pulls focus away from other areas and decreases the resolution of the patterns behind them, in a kind of mental bokeh effect (the blurring behind a subject in photography) – they are roughly aware something might exist there, but it has zero focus. Because that focus is often set through intermediation by management, things become vastly more complex in terms of interdependency (culture, management style, hierarchy, motivation, and other even more human aspects can all flap “butterfly wings” and have effects on larger things without people realising). Decisions made upon summarised data – which has less context, and is reductionist and interpreted – are likely to produce less than desired results long-term.

Focus too hard on the decision, and everything else fades out (Mental Bokeh)

Years of experience and understanding can be detrimentally manipulated, because the longer we do something a certain way, the less able we are to change patterns or consider things objectively, and the pattern can come to mean more than the data it was designed to react to. When you combine that with a tendency to project possibly false positive outcomes to assuage doubt (this is a firm pattern in many organisations), and the fact that business is changing faster than companies can keep up with, it presents a serious problem (part of the patterns learned is often not to admit there is a problem, of course).

Where things are now changing interestingly, alongside the shift in speed of change and focus, is that a new generation of Millennials has arisen with many patterns different from the older generations’, and the older generations in charge of organisations are having real trouble figuring out how to engage them, both as employees and customers. Old ways don’t work so well anymore (the trick is, many never did – they were just good enough at the time).

Modern Business is a perfect example. It works the way it works because it always has, but this happened because it did just enough at the time and chance helped it become “the way of doing things” (more in the blog post Survival of the Luckiest), and therefore a tradition that must be adhered to. Long-term sustainability and equilibrium be damned, new ways are suspicious and “not proven”, and so we remain with base Taylorism-alikes and decision-making that we’ve used for 200 years or more, which is still taught in many business schools, in an accelerating business landscape that no longer supports such an approach.

So how can Leadership realise their patterns?

Gorillas in the Midst

Let’s look at modern pattern examples. There is a great one called The Invisible Gorilla (Simons, Chabris) where you focus on a video of a ball game, only to later discover that a man in a gorilla suit walked through the middle of it without being seen by most people.

Inattentional Blindness is a mental pattern that affects critical decision-making. Survivorship Bias is closely related as well. Professionals are often more likely to fall foul of patterns such as these because of training and practice, unless they’ve been trained to look for what isn’t there (or have oversight).

A further study on the former was done with radiologists, who were shown images in which a few frames had a relatively large gorilla superimposed on a section of lung. You can clearly see it, but most of the specialists trained to look for malignant lumps missed it, for two main reasons: it didn’t fit the specific pattern they were looking for (despite anything off-normal being important to note), and even when they did see it, they doubted their own memory or experience after the fact against the experience of others (Reporting Bias and herd behaviour/Groupthink).

Trafton Drew and Jeremy Wolfe

These are both also seen constantly in Leadership and Business, where focus on a projected goal is so intense that it’s rarely seen that there can be other, better goals, or that you might not even get to the perfect goal; or too much focus on what a successful company did do (The Secret Shortcuts to Innovation) which may hold no context for your organisation, and not what an unsuccessful one didn’t do.

Understanding the problem

So, let’s sum up some of the most common – and major – Leadership patterns which can stifle business success:

  • Assumption of Correlation = Causation
  • Simplification/Reductionism for decision-making
  • Effects such as the Cobra, Butterfly, and Hawthorne
  • Templating or Recipes for easy success or Innovation
  • Inattentional Blindness
  • Survivorship Bias
  • Reporting Bias (conditional mid-process, blame-laying post-failure, rationalising “genius” post-success)

I address many of these individually in other posts, so we’ll just focus on the pattern awareness here.

Leadership can counteract many of these via a paradigm shift into less hierarchical thinking and acceptance of complexity and outlier viewpoints, as well as engaging decent consultants to help frame this in context. This is one reason outside comprehension and coaching is so valuable – the further removed from contextual immersion, the easier it is to see the whole picture. You can learn to see your own patterns, but it takes a release of ego, comfort, and tradition to do so.

Remember: It’s hard to Navigate when you’re too close to the Sun.

Learning this requires understanding why we make these decisions, understanding our own company culture instead of trying to suppress or dress it up, and what Dave Snowden calls “becoming ethnographers to our own condition” to then understand how to sense-make and see new patterns emerge. Of these, some are likely to be best fit, because they emerge and are largely neutral – they were not constructed to fit our preferred patterns, as happens in categorisation.

Some patterns – primal and base patterns – never change, because they’re hardwired. Human patterns for decision-making are a first fit, not a best fit (Klein, 1998) – or in other words, satisficing, not optimising.

Humans still operate based on a pattern extrapolated from between 5% and 15% of what our senses take in, consolidated from three things:

  • Past Experience
  • Current Experience
  • Projected Future Experience

From these, we create a pattern based on a first fit. As a hypothetical example, back in pre-history this was a distinct advantage; we would fashion a first-fit mental pattern that made us extremely wary when we saw certain patterns which could result in us, for example, dying from predation. Where we then excelled as a species was in adapting our behaviour and exapting our tools, so that we not only moved in groups for defence and used language in our watch for predators, but divined how they hunted, realised that spears do more damage than our bare hands – and then went further, proactively hunting them to remove the threat rather than simply reacting as a prey species. These advances became parts of new patterns, and so we progressed.
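
As a toy illustration of first fit versus best fit, here is a minimal sketch – the scoring functions and the “good enough” threshold are invented for illustration, not drawn from Klein’s work. A first-fit decision stops at the first pattern that satisfices; a best-fit decision scans everything:

  from typing import Callable, Optional

  Pattern = Callable[[str], float]  # returns a match score between 0 and 1

  def first_fit(situation: str, patterns: list[Pattern],
                good_enough: float = 0.6) -> Optional[Pattern]:
      # Recognition-primed style: accept the first pattern that satisfices.
      for p in patterns:
          if p(situation) >= good_enough:
              return p  # stop immediately: fast, but possibly sub-optimal
      return None

  def best_fit(situation: str, patterns: list[Pattern]) -> Pattern:
      # Exhaustive comparison: slower, but finds the highest-scoring pattern.
      return max(patterns, key=lambda p: p(situation))

The speed of first_fit is exactly why evolution favoured it; the cost is that the answer depends on the order in which patterns were learned, not on how well they actually fit.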

Intricately bound into this was the formative use of Narrative, alongside a number of other things. We now realise that humans, human intelligence, and interpretation are highly dependent on narrative, as well as a built-in ability to collaborate based on how we engage and learn with each other. The rise of both hyper-complexity in human systems – where they themselves form a meta-complex landscape – and further and further abstraction of thought and learning means that we move ever further from the understanding of how and why we do things, but not from the way we still do them.

I generally see three main ways patterns are used:

  1. Seeing a pattern set directly as the only likely one in context to a specific situation, fitting perfectly;
  2. Seeing a number of patterns that can apply and finding the correct one for the situation;
  3. If one doesn’t exist, trying to create a new one, or forcing the fit of an existing one.

But there are other ways we can use patterns, too:

  1. Sensing patterns as they emerge instead of trying to fit what’s happening into your own preconceived patterns, and applying the most beneficial, even if it changes the projected outcome;
  2. If no patterns exist, catalyse via actions and then probe patterns as they emerge, discarding negatively impacting ones and applying the most beneficial along with accompanying constraints

The latter are where Complex Adaptive Systems Theory comes into play.

As with nearly everything I do, I end up turning to Cynefin to frame a lot of this, as it’s very deeply bound into my work on human learning, and it helps us understand patterns and why we fall into them. Cynefin as a framework also simultaneously recognises, utilises, and challenges patterns, seeking to help us shift paradigms to new understanding and the finding of coherency in, by, through, and around them (you can learn more about Cynefin in Never Mind the Buzzwords).

7-Domain Liminal Cynefin in three dimensions, and inter-domain movements

It’s not about never using the domains of obviousness and complication, or avoiding Chaos completely, but when to use them; or more appropriately, which situations fall into those domains, and which are complex or chaotic (generally, you know when you’re in the chaotic one). Patterns exist on multiple levels, from instinctive to the highest abstraction, and which domain you are within depends upon the pattern of events – and the context.

I think this is where an interesting misconception inherent to humans comes in around Cynefin. We innately seek to categorise and control situations (which in itself is a pattern), and even a framework such as Cynefin is subject to a certain amount of human attempts to control. Trying to force a situation into a different domain, or pretend that treating it like a different domain will make it become that domain, are unlikely to work. This is because we are still very much slaves to our preconceived patterns and it is comfortable to be able to utilise them.

Situations often dictate the domain – you haven’t chosen it; the first step is to divine which domain you are REALLY in, and then act appropriately. Just as we often mistake a complex situation for an ordered one, we can also learn about complexity and then ascribe it to every situation, un-ordered or otherwise. I see a huge number of blogs and consulting manifestos regarding a focus on complexity, and this is a necessary focus, because so much of what we understand is complexity being treated in an ordered fashion, especially human systems. But too much focus on this can be as counterproductive as ignoring it; not everything IS complex, and it’s the flexible and difficult ability to understand where complexity lies versus the other realms, and to act in context with multi-methodology approaches, that allows real transformation.

When you move into a more complex situation such as decision-making within an organisation, in a landscape demanding innovation and disruption, first fit doesn’t serve us quite as well. Pre-conceived notions and old-school management techniques blind us to opportunities and multiple paths forward, as well as innovation, and because patterns and narrative are shaped partially by our past experience, being conditioned by traditions and rigid methodologies and living within that structure day-in, day-out means our first fit patterns may be wildly inaccurate for a complex situation.

If you’re individually or organisationally lucky, you have learned to see this in part and try to decide based on more factors. Sometimes our first feel is good. Often it isn’t, but we have a prevailing cultural dependency on “gut” feel and first impression which can help or hinder in equal measure. That’s going to be down to the context of the situation, but I suspect it’s always worth re-evaluation for a second considered opinion if you can, just to be sure.

Let’s take an HR/Interview example here: the person who doesn’t “first fit” your expectations and traditional interview methods for a role could end up being the optimal fit – but if you stick to your comfortable patterns, you’ll miss the opportunity. It’s worth working on these things and questioning them, as the rituals around hiring are now often hollow and bring less value.

Probing further into Patterns

In my experience, the number of times a “good” feel or a first decision pays off versus the times it fails is not usually justifiable, but since we retroactively remember and justify decisions as logical rather than unknown or gut-feel assessments, we often don’t recall this (Reporting Bias, as above). This particular pattern makes it difficult to investigate why we made decisions.

Patterns and narrative tell us a lot about reality – they are not only useful for the transfer of knowledge, but for discovering context as well. Micro-narratives from multiple sources daily for three months can give us a better idea of a situation than an overarching company narrative for a year, for example, and this allows us to use quantitative data and support it with qualitative data for context, to search for a better-fit pattern. Where our first response might usually be to reinforce company policies and make cuts, or swap management teams in to “turn things around”, here we can see a possibility of understanding a root cause instead of trying to treat a symptom (one that is quite contagious for organisations in 2019), and of using what is there to understand and repair the situation. This almost always means better understanding of, and getting value out of, existing structures, teams, and employees, and is often accompanied by a paradigm shift.

It’s also worth noting that the inherent human use of Narrative is often subject to confirmation bias. Where narrative doesn’t get backed up by credible sources, it can become problematic, because we are designed to believe fragmented narrative, which is fundamental to the evolution of our intelligence. This is, according to research by Dave Snowden (and my own observations), why social media has become such an enabler of serious problems as well as serious advantages – it bypasses what we used to require, which was the contextual, consequential human interaction side, and has removed consequence and accountability from these fragments.

And finally, the manipulation of patterns is well documented in politics, psychopathy, and business alike – but always remember that those doing so also adhere to their own. We’re immersed in a sea of patterns at every level.

Fixing the problem

The first steps to making better business decisions, then, are to be aware of how patterns and narrative affect both us and the systems we create at a fundamental level, and of how we use shortcuts and impressions to make often incomplete or inaccurate decisions; to understand that we need to recognise the complexity within organisations; and to be able to admit that we can always learn more, in context, about achieving better value. Mental Kaizen, if you will.

This all links very strongly into finding our humanity again in business, and into having advisers who are not too much a part of the process, because they won’t be subject to the same constraints.

The other thing to understand is, outside the ordered areas of our lives (and even eventually within these), patterns do not remain inviolate, because life is subject to change in all areas; nothing is immutable. Like it or not, we must accept that comfort is never forever, and we must dowse new patterns as situations change, allowing them to emerge into understanding rather than trying to hold onto old, less contextual patterns and force new situations into their shape.

So, be mindful that Correlation ≠ Causation; use granular raw data for decision-making, not reduced data. Be wary of the Cobra, Butterfly, and Hawthorne effects (assumptive patterns). Know you can’t template management techniques or innovation between contexts. Use awareness of complexity to avoid Inattentional Blindness and Survivorship Bias. And be very careful of Reporting Bias.
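
On the “granular raw data, not reduced data” point, here is a minimal sketch – the teams and figures are invented purely for illustration – of how aggregation can actively reverse what the granular data says (Simpson’s paradox):

  # Hypothetical deal outcomes: (wins, attempts) per team and deal size.
  data = {
      ("A", "small"): (81, 87),
      ("A", "large"): (192, 263),
      ("B", "small"): (234, 270),
      ("B", "large"): (55, 80),
  }

  # Granular view: Team A outperforms Team B on BOTH deal sizes.
  for size in ("small", "large"):
      for team in ("A", "B"):
          w, n = data[(team, size)]
          print(f"{team}/{size}: {w / n:.0%}")  # A 93% vs B 87%; A 73% vs B 69%

  # Reduced view: aggregate the same numbers and the conclusion flips.
  for team in ("A", "B"):
      w = sum(v[0] for k, v in data.items() if k[0] == team)
      n = sum(v[1] for k, v in data.items() if k[0] == team)
      print(f"{team}/overall: {w / n:.0%}")  # A 78% vs B 83% -- B "looks" better

Team A wins a higher share of both small and large deals, yet the summary crowns Team B; decide from the reduced data alone and you reward the wrong pattern.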

Find context; get professional understanding from outside your situation; dowse complexity; question your patterns and rationalise your decisions to make them optimal. That’s what someone like me is there for: to help you understand these processes.

We’re creatures of comfort, and patterns comfort us; but we’re also creatures of change. Never stop finding new patterns, learn to leverage them and not be used by them, and you can shift between them and find far more benefit in your business – and in your life.

Survival of the Luckiest

In continuing my thoughts on innovation and disruption in organisations, I’ve come to agree with an interesting conclusion:

Darwin almost got it right.

It’s become an almost universal saying, at least in the English-speaking world: Survival of the Fittest. The idea that the most apt, the most finely-tuned to circumstances will prevail. This is something we apply to organisms, but it’s easy to forget that an organisation is, essentially, an organism, and we consider them as such knowingly or otherwise.

This idea is one I see strongly represented at a subconscious, conscious, and marketing level in business on an almost daily basis. It must be true – if your organisation is the best suited to a market and demographic, if your product or service is the best out there, you surely should be the most likely to not only survive, but dominate this market. Right?

What is Luck?

In a way, survival of the fittest does define how things work in part, but not as much as we want to believe. Recently we have begun to understand that it is less accurate than survival of the luckiest. It’s survival of the fittest in serendipity; the right place at the right time combined with the minimum requisite fitness for circumstances is what truly guarantees survival.

Luck is an occurrence of positive, negative, or improbable events of note. A more scientific interpretation is that “positive and negative events happen all the time in human lives, both due to random and non-random natural and artificial processes, and that even improbable events can happen by random chance. In this view, being ‘lucky’ or ‘unlucky’ is simply a descriptive label that points out an event’s positivity, negativity, or improbability.”[1] But what generally affects an organisation isn’t luck per se, because organisations are complex, they exist within a complex market, and each has its own further strata of complexity. Thus, they usually have contingencies and redundancies most individuals lack.
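
To make the “improbable events can happen by random chance” point concrete, here is a minimal simulation – the firm count, bet count, and odds are all invented for illustration. Give a population of equally “fit” firms a series of coin-flip market outcomes, and a handful will look like stars through chance alone:

  import random

  random.seed(1)

  # 1,000 equally "fit" firms, each facing ten independent 50/50 market bets.
  wins_per_firm = [sum(random.random() < 0.5 for _ in range(10))
                   for _ in range(1000)]

  stars = sum(1 for wins in wins_per_firm if wins >= 9)
  print(f"{stars} firms won 9+ of their 10 bets by pure chance")
  # Binomially, P(9 or more wins) is 11/1024, so roughly 10 of the 1,000
  # firms will look exceptionally "fit" with no skill difference at all.

Retrospective case studies of those “winners” would find plenty of patterns to praise – which is exactly the Survivorship and Reporting Bias trap described earlier.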

Serendipity is not quite the same; it is a lucky event associated with discovery that is positive in aspect. When we talk about luck in business, innovation, and the market, we are more often talking about serendipity – an unexpected, fortunate discovery that is beneficial to delivering value and making profit.

Serendipity and Sagacity

Sagacity is “acuteness of mental discernment and soundness of judgement”. Serendipity benefits from an approach where a business is ready to dowse and act upon positive circumstances, so sagacity is useful when we talk about managed serendipity – the art of detecting and amplifying likely opportunities – because this is more important than mere fitness for purpose for an organisation. Organisations can adapt, or more likely exapt (radically repurpose); humans are very good at doing this, and so are organisations. But unless they are in just the right place at just the right time, that fitness for purpose may be wasted.

You can write the best software product in the world, but if it doesn’t get seen, another will take its place. Survival of the “mostly fit but far luckier” occurs. I’ve seen this countless times in the tech industry, most recently in a company that supplied state-of-the-art drones which were better priced and more advanced than anything else, yet was having a wide range of issues, both internal and external, despite this stronger, faster, more price-friendly suitability. This is where understanding Cynefin’s methodology of probing in complexity, using multiple experiments to discover serendipitous circumstances, is useful, because it helps an organisation innovate, find coherent paths to market, and retain relevancy – throughout the system. It’s not just about the product – agility, management, culture, and internal structure all matter.

Adapting Vs Exapting

There is another key distinction to be aware of in seizing business opportunity. Adapting is the term used the most, because your business is either adapting to wring advantage from a market landscape, or it is a dominant Apex Predator that adapts the marketplace to itself. The second approach is never permanent, rare to attain, and a major source of complacency and competency-induced failure, so most companies do the former. But companies also exapt, often without realising they are doing it, or while conflating the two.

Adapting is the development of traits or features to meet a specific purpose. This takes time: Analysis, Reaction, Testing, Restructuring, Feedback, Infrastructure, and so on. This can be the difference between seizing opportunity and watching it pass by. Many startups are adapting to a current or projected future market, but they are small enough to have the reactivity and adaptivity to repurpose extremely quickly.

Exapting is the co-opting of existing traits or features developed for another purpose. In business terms this is far faster, and not only more effective but more widespread than you may think. A business repurposing a product, infrastructure, or goal is extremely responsive and likely to be fit for purpose in far less time – and more completely, because the supporting structures are already there and require only tweaks. Many post-startups exapt in some way, all the way from SMB to Global Enterprise.

Viagra, Mouthwash, Microwaves – all examples of commercial exaptation. Personal or biological examples are too numerous to count, as we are individually natural exapters and repurpose not only other things but our own bodies without thinking about it (a good example could be that of piano-playing; we certainly didn’t evolve fingers with the aim to manipulate pitches of music derived from counterbalanced keys with hammers on the end, but hands that are remarkably adaptive and strong from grasping branches, tools and weapons, and with sensitive fingers from opening seed pods and tactile sensing of our environment can learn to perform this task exquisitely).

The ability to react quickly and grasp opportunity is something we all need to learn better – it is an ongoing journey in my personal life as well as my professional one, and it is the same for any organisation, because no matter how good it may be at initially grasping that opportunity, time, size and complacency inevitably dull this sense, which leads to orthodoxy changes and sometimes catastrophic failure. I love the further example of superglue here, as it’s a thrice-serendipitous exaptation.

Superglue’s Roots

Dr. Harry Coover didn’t set out to make the strongest, fastest setting glue that he could; he set out, initially, to make clear plastic gun sights for Allied Troops in 1942, testing a variety of plastic materials. One of these, an acrylic resin – cyanoacrylate – was found to be unsuitable to make these sights, and instead bonded things together. It wasn’t formable, and was just too sticky. Clearly not fit for purpose, it was discarded (although he had the foresight to patent it).

In 1951, he was working for Eastman Kodak trying to develop a heat-resistant acrylate polymer for jet canopies, and a colleague of his used the original discarded formula as a test for this purpose. On applying it between two refractometer prisms, to their surprise they became solidly bonded. At this point of second serendipity, they realised the commercial use, and in 1958 it went on the market.

So, we have initial serendipity where the time and place were arguably wrong, and the exaptive possibilities were ignored due to focus on a measured outcome (clear sights). Then we have a rare example of another serendipitous moment for the same person with the same product, again with the attempt to reach a different goal, but this time with the product subsequently repurposed commercially. You could argue that the second time is a clear example of exaptation, or radical repurposing in the domain of complexity, but there is also a third repurposing here which is often misrepresented:

During the Vietnam War (not WWII, as often claimed), injured soldiers could easily bleed to death before making it to surgery. Emergency medics began closing or coating wounds with cyanoacrylate spray, which was found to be extremely effective at stemming bleeding and protecting against infection, although it could cause irritation. This meant soldiers were more likely to survive until they got to surgery, and many saved lives were attributed to it. It says something for the desperation of war – firmly within the chaotic domain at this point – that such a radical, untested repurposing was carried out, but it paid off. Today, Dermabond is a cyanoacrylate-based medical adhesive developed especially to carry out this task without the irritation and cell damage, regularly used in wound closure and medicine; it wouldn’t exist without the three levels of exaptation/serendipity before it.

So, from a failed experiment to make a gun sight, to a chance test nine years later, one of the most ubiquitous glues in the world arose to dominate the market and then began to concurrently save lives, before being adapted into countless close variations for different bonding purposes across a wide spectrum of industries.

(This is also an excellent example of both innovation, and market disruption through ubiquity, concepts I go into more detail in Innovation & Sowing the Seeds of Disruption.)

So, what’s the Recipe for Success?

As I mention in my post The Secret Shortcuts to Innovation… there isn’t one. Context is key to everything, and unique to companies and situations. You may be able to use another company’s recipe as a very loose basis for exploration, but that’s usually it.

The thing about serendipity and its close ties with innovation is that you can’t consciously repeat this innovative process. If I were to set out to create the new “best glue in the world”, you can almost guarantee that the WORST way I could attempt it is to try to invent a new gun sight! What works as innovation for one company will not work as innovation for another, because it’s already been done, and the context is not fitting. Serendipity is in constant flux, and how fit for purpose you are is only relevant when you find the right place and the right time.

Instead, focus on shallow, controlled dives into chaos to find what may spark that innovation, and learn to see the serendipity that exists, and amplify it. This will give you contextual paths forward towards possibly even better goals than you set for yourself initially.

So, as mentioned, it’s less about being the fittest, and more about being just fit enough at the right time to engage – and that requires understanding that, for most industries, 2019’s market has become an utterly new landscape, with a new demographic that has never been fully engaged before.

Sagacity is required, and Serendipity must be seized.

You can’t base predictions on chance. Some of us do seem to get more opportunities than others, and there is always an element of chance there, but you also have to be able to notice and take those opportunities, and you’d be a fool to rely on your “better luck” for anything substantial. It’s rare things simply fall into our laps, and this is doubly true in business – where you notice obvious, generic opportunity, you can bet another organisation probably has as well. If they are in the right place at the right time, it doesn’t matter how much better your product is, as long as theirs is good enough.

A lot of issues come down to managing this serendipity. Internal management, business decisions, culture, and the delivery of the value are all a base part of being fit enough, because it’s not just about the product but the infrastructure around it. It’s very common for companies to focus on the product and message rather than what lies behind it all, which is equally important.

So success comes from being fit enough in part, but mostly from managed serendipity, and this is not as easy to determine internally for many companies.

Understanding how to spark serendipity in innovation and take advantage of the opportunity is key, and this requires an understanding of complexity and the ability to probe-sense-respond – or, in some cases, of chaos and the ability to act-sense-respond. You then have the requirement of being able to change enough to take advantage of this, which is far faster done through exaptation than adaptation; already having that fitness is doubly fortuitous. This is where a consultant who is not part of your corporate infrastructure is very valuable, because it’s difficult to see all this from within. An outside viewpoint can be vital to seeing signs you can’t through culture and habit.

If you’re lucky enough to have a perfect opportunity drop into your lap… get it to market and don’t waste it, but don’t expect it will ever happen, and don’t base plans upon it.

Chances are it won’t.

 

 

Innovation & Sowing the Seeds of Disruption

It is inevitable that, when an industry sees companies struggling to lead, grow or even maintain homeostasis, organisations will shift focus to innovation or disruption. They have to justify continued financial support from investors or parent companies; they have to prove the vision of the CEO is in line with the board’s goals; they have to prove they are providing value to return profit.

We are in an unfamiliar market landscape, populated by a new, populous generation who don’t think, act or engage like the old. Everything is moving faster than ever, and entire sectors (such as retail) are struggling. I’ve spoken about the shift of the global economy and the failures of older management styles to keep up in earlier posts, and I’ve also spoken and posted about innovation a lot recently, as it becomes ever more of a focus while more organisations try to find their footing in the Cycle of Woe. So perhaps now is a good time to explore the current market approach of many companies in more detail and collate my thoughts overall. I’d also like to explore disruption, which I’ve spoken about and worked with before, and which I am seeing many more people discussing.

There is a reasonable rule of thumb here:

All Disruptors are Innovators, but not all Innovators are Disruptors.

I think this usually holds true, but not always; sometimes disruption isn’t innovation but the provision of something already needed, existing, and known – something simply not being provided (or that was provided incorrectly and failed). If that need is identified, or a company fails to find a key differentiator or novelty by which to dominate an often saturated market, the focus may shift to disruption in an attempt to change the market itself. Often a business conflates the two; sometimes they are both possible at once.

So, let’s explore innovation, disruption, market S-Curves, and more. This is (as usual) by no means conclusive!

 

A Hot Needle

For many years I worked with an executive named Johan in the IT sector, who at the time headed EMEA. When he came on board, he took the entire area from a dead sales stop to market traction and regained relevancy within ~6 months, which was a phenomenal result, but he didn’t do it by being steady and organic.

Instead, he quite forcefully pushed and demanded, both internally and externally; he made waves, acted quickly, innovated with pricing structures and products that were different and attention-catching, and disrupted people’s expectations and business alike. He didn’t always make friends, but he did make a big difference.

One of his more infamous moments was when he upset a few other players in the market (and I’m sure ruffled a few feathers internally) by publicly announcing that we were going to give the VAR channel “a poke with a hot needle” when the industry least expected it (and due to a lacklustre rebrand, had possibly mis-recognised the company as a potential newcomer).

It was a provocative comment, and it had the desired result – people sat up, took notice, argued, laughed, or queried, and the market saw a quite radical shift that allowed SME resellers a way into cloud against the larger players, both from the process focus and from the new innovations we offered them. It also conveniently spread the rebranded name of the company quickly and re-established the technology as relevant. It wasn’t a total orthodoxy change, but it was a new way of thinking – previously only large providers were offering this type of service – and it caused a great stir in the way the industry was viewed, at least short-term. The disruption process was driven by complementary innovation products. Because of Johan we did, briefly, become a hot needle jabbing at the market, and the market reacted.

 

 

In a complex/borderline chaotic situation, Johan acted, monitored the feedback, and introduced or re-positioned novel products and offerings. Not everything worked, but it didn’t have to. We picked up on an untapped gap in the EMEA market and enabled smaller companies to compete in the Cloud at the right time – innovation with products, and a relatively quick disruptive shift as a process – in only a few years. Quite an achievement.

It didn’t last, possibly because the company didn’t capitalise on the disruption or gain context from EMEA markets and customers. EMEA wasn’t their core focus, and they didn’t understand it very well; once the company had stabilised, grown, and moved on, it very quickly ossified again, lost a lot of impetus, and methodology became as important as – or more important than – results. The Cycle of Woe began again (without Johan, myself, and a number of other people, who had moved on to other things). Even a jab like that can be quickly forgotten.

Working with Johan was an interesting experience – we didn’t always agree, but we respected each other’s specialities, and I don’t think either of us would argue with the results we were individually getting. He certainly could be a hot needle (and I’m sure still is!).

 

Types of Innovation

Innovation is often automatically associated with a product. Whilst it can of course apply to services (Spotify is a hybrid example), a service is still essentially treated as a brand.

Many professionals categorise innovation into 3 areas:

 

• Incremental
• Definitive
• Breakthrough

 

There are purists who will insist that true innovation can only be “Breakthrough”, but innovation isn’t necessarily only world-shattering and huge, so I don’t agree with this. It is, however, what most people mean when they speak about innovation as a buzzword: a differentiator that is a breakthrough to success.

I have found that innovation as a concept also tends to be subconsciously considered in two other ways:

 

• Being Innovative
• Innovating

 

The first is often a goal in and of itself. I’ve worked with many companies desperate to differentiate, to become a market leader with any product; they want novelty, and work towards it without direction. It doesn’t matter what it is, just that they do it. It’s an outcome; a badge of worth.

The second is a pathway on a journey that is a coherent, contextual path forward, where they innovate with a product at the core of the business. It’s a part of the process of value delivery; whilst being an achievement, it is an enabler, part of the overall narrative.

I’ve posted plenty about Cynefin in this blog, but as a quick summary in terms of innovation, the main domains all have their place. Because of this I distinguish four types of innovation, not three:

The Cynefin Model showing Order/Unorder, with Disorder in the Centre. All rights reserved Cognitive Edge

 

Incremental Innovation happens in the nicely ordered, rigid, boring, safe space of Obvious, where all works as expected and predicted, and if innovation happens at all, it’s in small, by-the-numbers ways (often debatably innovative, often borderline nice-to-have).

Definitive Innovation happens in the governed, ordered, expertise-driven space of Complication, where everything still works as expected and predicted but the rules are looser, allowing for multiple causes and effects. It arrives via key differentiators – stand-out features that not every product has.

Breakthrough or Radical Innovation occurs in the unknown, unordered, dispositional and uncertain space of Complexity, where we can only say what is likely to happen, and there is no clear cause and effect. All we know is that any change will affect everything, and could be good or bad. It will likely be both serendipitous and unexpected, and is often game-changing both in terms of value delivery and direction. More organisations are here than you’d think.

Disruptive Innovation lies amidst the total unorder and crisis of Chaos where everything is, well, PFU. A very bad place to be – we don’t know what is likely to happen, what is happening, or what we can do, only that we MUST do something. We act or die, essentially. Innovation that occurs here is make or break – if it works it will not only help manage the crisis and differentiate the company, but is likely to be so novel it disrupts entire markets. There is a crossover with Radical innovation here, as chaos can be used as a guarded area to spark disruptive innovation in relative safety using safe-to-fail probes.

It is important to innovate in context, but the vast majority of companies are in Disorder, that red spot in the middle; that is, they believe they are in a specific domain when in fact they are not. This usually errs on the side of believing themselves to be in very ordered situations when they are in very complex ones, but that is not always the case; the key is that they aren’t where they think they are (or should be). Unless you are extremely lucky, making strategic decisions or trying for innovation here will likely be unproductive or damaging, based as it is on a lack of real-world context. Randomness generally lacks coherency; emergence doesn’t necessarily.

So, to summarise: Innovation is change/novelty of varying types, and it can be mild or extremely disruptive.

 

Types of Disruption

When we talk about market disruption, we may mean something that is not necessarily innovative, but may be so required in a market it gets widely adopted enough to become the new orthodoxy. This can happen in such a subtle fashion that it may not be realised until after the fact, unlike innovation, which relies on being extremely visible to spark things like The Hawthorne Effect, i.e. humanity’s interest and adoptive reactions to novelty.

Disruption can include:

New Market Disruption – Targeting a market where needs are not being met by existing dominant orthodoxies

Low-End Market Disruption – Targeting a market where not all features offered by existing dominant orthodoxies are valued, except by high-end customers

Innovative Disruption – A process where, in the short term, a new market is created and grown around a product or service, which in the long term finally displaces an existing market

Market disruption is often not a fast process, unless the gaps in the market are crying out for it. It often happens through general adoption over time – thus, a subtle ubiquity – rather than early adoption, “the next big thing”, and ambassadorial representation. It’s perhaps better thought of as a displacement rather than a spearhead. Kickstarter is a great example of tons of innovation which clearly doesn’t disrupt entire markets immediately, or even at all.

There is also another type of disruption – very often a result of innovation and market disruption – that isn’t often considered by organisations and needs to be understood: role disruption, the effect whereby changes and progression in the marketplace, new technologies, and paradigm shifts all contribute to previous roles being substantially changed or no longer required. This matters, especially to individuals in an ecosystem.

There will be new roles in this Brave New World, of course – automation, for example, doesn’t automatically equal the removal of humans; it often means a shift in their expertise and role, or an opportunity to learn new skills – but a lot of change is coming, and has come before. This is especially of concern to the current largest generation – no longer the Baby Boomers but the Millennials – because they face considerably less security, more uncertainty, and more difficulty than the previous generation.

Role disruption, and the concern over role disruption, can have a number of knock-on effects that need to be addressed by culture, learning, and realistically projected prospects rather than the age-old adherence to inaccurately modelled outcome-based measures and falsely-positive forecasts – some of this was covered in The Red Pill of Management Science.

So to summarise: disruption can occur at the market, company, or role level, and it is a process, not a product – its hallmark is ubiquity.

 

The conflation of Novelty and Displacement

One thing I find interesting is how many businesses seem to conflate the two concepts, as there is a lot of crossover between the two approaches, which muddies execution. Both of these things represent stages of paradigm shift, and although they often occur independently, they can occur together as well. For example:

When someone says “smartphone”, what phone do you think of?

Chances are it’s an iPhone. It was the first to define the form factor as a true multi-functional smartphone (in the current long-term form), and it hasn’t significantly changed since that incredible step forward (beyond incremental innovation).

Apple used an incredibly clever and aggressive marketing strategy to make this product unspeakably novel, desirable, functional, and elite. It worked. Everyone who was anyone wanted one. The drawbacks, of which many still exist today, simply didn’t matter.

What’s interesting is that the i stood for two things: individuality, and internet (given everyone then seemed to have one, I consequently also decided it stood for irony). A subtle message which worked; iPhones became the de facto communications device.

Apple managed the rare feat of both innovating AND utterly disrupting the market very quickly. This happened because the market was at a point of orthodoxy change, and it was the exact right time to become the new paradigm, shifting Microsoft’s Apex Predator dominance of software to a new orthodoxy of software and hardware combined in the form of an object of material desire, which not only enhanced functionality and ability for users, but also image and self-worth.

This ubiquity came from incredible brand awareness, and a melding of the OS and hardware into one product. There is only one “iPhone”, which merely manifests in different forms.

However, these drawbacks – high price, poor non-Apple integration, availability, lack of customisation, fragility (especially of the screen), constantly changing power adapters, slowdown over time, lack of a memory card, a sealed battery, the requirement of a costly Apple store visit for many minor issues, punitive action for jailbreaking – although initially ignored, became better understood, and slowly the market began to change. Dominance shifted; market presence shifted.

This was due to Android. The first cellphone running it was released a year after the iPhone (the HTC Dream G1), but the OS had actually preceded the iPhone’s – it just hadn’t yet been refined or named.

Android phone manufacturers didn’t particularly innovate, certainly at first; they simply allowed more people the chance to do more, for less, to belong, and they spread, quietly. A much wider range of devices filling the gaps in the market were developed; different combinations of hardware, price point, customisation, and the adoption of a universal charging standard, as well as memory expansions, battery changes, variety of materials, and so forth. People who couldn’t or wouldn’t buy an iPhone took up the diverse legions of Android phones instead, and discovered in many cases they were more capable, less restrictive, and mostly affordable, if not as smooth or elite. With the sheer variety of OS customisation and hardware options, Android phones became far more individualised than iPhones. A plethora of companies sprang up; where they couldn’t compete with the desirability – apart from companies such as Samsung – they competed by offering a piece of the smartphone pie.

Looking at the % of smartphone market share by OS, we see this:

 

As always happens with orthodoxies, once they become widely adopted they are no longer disruptive, or innovative in anything more than small increments. The slow disruption of Android has told over time; iPhones are still mostly the poster-phone for smartphones, but they are now so omnipresent in consumer consciousness that there is no more novelty. They hold perhaps 23% of the platform market – impressive for one company.

Meanwhile, Android-based phones in their extreme variety hold nearly 75%. They slowly but surely disrupted the market, and became the new norm.

If we then look at smartphone manufacturer market share, you might expect Apple to be at the top of the tree based on the above, but we find:

(Courtesy of https://businesstech.co.za/news/mobile/314372/smartphone-market-share-samsung-vs-apple-vs-huawei/)

 

So when you think “smartphone”, you may think iPhone first through conditioning – but now you also might instead think Samsung, which has gained a larger market share than the original innovative disruptor. And, with a huge presence in Asia and now the west as well as definitive innovations, Huawei is becoming a new name to recognise. The smartphone market is also due a huge upheaval, which is likely to come from foldable screens – which Huawei, again, seems to be at the forefront of.

It is important to remember that innovation doesn’t automatically equal disruption, and that they can be independent or happen together, but they WILL happen. The one constant in business is that change is inevitable – which is why rigid, dominant paradigms eventually fall foul of complacency.

 

Orthodoxies, Paradigms, and Apex Predators

So where does this change occur?

There are a number of places disruptive paradigm shifts and innovation can happen more easily, and a couple where it MUST happen or dire consequences will be faced. I won’t go too deep in this post, but there are several things to be aware of here: the market arranges itself around the Apex Predators, there is a lifecycle to all orthodoxies, and (as ever) context is crucial.

Market Lifecycles

Many people are familiar with the basic market lifecycle, and indeed this is typically used in strategy because of the assumption that you always start with a “green field”, or a standard approach.

But this does not take context into account, which is critical, especially for innovation and disruption to work. You very rarely start from a green field. A more realistic version is Moore’s “Crossing the Chasm” depiction:

This shows the chasm that must be crossed. Failing here means you never become relevant; crossing means you will see a drastic shift in focus.

It is important to note that many companies do not cross this chasm. Using Kickstarter as an example again, innovation there is wildly high and wonderful, but only 36% of projects reach successful funding. The actual percentage of companies that go on to become a force – or even a long-term blip – in the market is much lower than that. Very few end up disruptors.

84% of the top projects ship late; many of them run into resource problems, and some even liquidate soon after creating the successfully funded product. Creating a stable, profitable company afterwards requires other funding and skills (Angel, Venture, etc.) and a continuous value delivery stream. If the whole company is based on one innovative product, and people quickly lose their attraction to novelty, then that’s all you had – you’re dead in the water.

A key shift here is from “Sell to Make” to “Make to Sell”. Leadership often assume or require that initial early-adopter sales will continue linearly, and they don’t. A decline in sales usually leads to reduced funding, lost confidence, and not enough push to get across the chasm. Innovation is not a guarantee of success; market disruption is not a guarantee of success.

So, how do we introduce radically new, innovative products on the other side of the chasm?

S-Curves

One method is to add novelty to what people already know they want; the desire for novelty then crosses the chasm because it becomes, for a time, more important than the product. This sparks mass uptake and desire for the product which breaches the gap; this is definitive innovation from Complication.

Markets are constantly in flux with this behaviour; phone cameras are a good example. All phones have them, and they have gone from being ignored to being used constantly for everything from selfies to business receipts.

Huawei’s standout features on their P20 Pro flagship (at the time) were a triple-lens camera delivering incredible pictures that still blow many other phones – iPhone included – out of the water, an insane battery life, and an eye-catching two-tone twilight colour reminiscent of the old TVRs. A smartphone is a smartphone – but this differentiated them enough to lead to a huge surge in Huawei sales in the west (which continued until the recent widely-broadcast concerns about the technology and security, which have consequently led to a current decline). It didn’t hurt that they had fewer reports of batteries exploding than the two leaders, either. The novelty told: within the last two years, P20 Pros became more desirable than Samsungs or iPhones for many people.

So how do we achieve this symbiosis? How do we innovate and disrupt at the opportune point?

The best way is to divine a point where the dominant way to do things is becoming commodified or coming to an end, whether it’s known or not by most players, and find space for novelty or a regime change. This can be additional innovation alongside the orthodoxy, radical or disruptive emergent innovation at the right point, or you may be able to alter the course of the whole paradigm akin to switching a train onto alternative tracks via the subtle spread of process (i.e. disrupt the whole market).

One way to view this is to look at an extension and expansion of the Gartner Hype Cycle:

Credit goes to Dave Snowden of Cognitive Edge, who pioneered the linking of the curves, theories and applications.

 

I’ll discuss S-curves another time, but here we can see the narrative: the relevance of novelty and the hype cycle at the lower left, and the subsequent establishment of orthodoxy that creates Apex Predators within a market. This leads to the eventual beginning of commodification and complacency by the Apex Predators, who become too invested and too effective over time. Think of past extinctions caused by becoming a food-chain-dependent megapredator that is too specialised, and you’re not far off.
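For reference, the adoption S-curve itself is commonly sketched with the standard logistic function – a textbook form, not something specific to the linked model above:

$$S(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

Here L is the market’s saturation level, k the steepness of uptake, and t₀ the midpoint of mass adoption; commodification and complacency tend to set in as S(t) flattens towards L.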

There are two key decision-making points for understanding where change can integrate with the adoption curve: the pre-chasm point, where weak signals tell you there are opportunities to be explored, and the end-chasm point, where you have fallen in, unseeing, and must change to climb back out. If these are ignored, the fall or irrelevancy of an Apex Predator causes a trophic cascade (the radical reshaping of the entire ecosystem, which tends to be defined by its Apex Predators).

Meanwhile, the point of “crossing the chasm” and uptake either via disruption, novelty, or both – in context and at the right moment – leads to the rise of new Apex Predators and/or the effects of total market disruption.

There is a lot more to this than that of course, and this doesn’t explain how it fits into Cynefin or other frameworks.

It’s not enough to be innovative; you can have the best product on the market. It’s not even enough to be disruptive; you can infiltrate the market at the low end and spread your net. It’s also about weak signal detection and uptake moments. It’s certainly not about being currently dominant, as that means you are more likely to be blinded to threats.

A last thought here: people (especially in mid-tier management, in my experience) often choose moderate quality/profit using known contacts over potentially high quality/profit with new contacts, because it’s safe and hitherto guaranteed. Or, to put it more succinctly, they will often choose certainty over uncertainty, especially in the context of an uncertain, unknown, changing market situation.

 

Shake-ups

From time to time every market needs a shake-up, as does every company – preferably not through situations such as the Cycle of Woe. This is what Cynefin and safe-to-fail probes can be used for: to find a new path and avoid complacency before it becomes an issue.

Complacency induces failure, eventually, and this is a real problem, because that failure is often catastrophic. Dave Snowden likens this to falling from a cliff-edge, and when you understand how Cynefin allows you to make sense of scenarios, and how strategic and tactical complacency is widespread and usually unnoticed ESPECIALLY to Apex Predators, you realise this is a very apt analogy.

A firm that is complacent is at great risk of falling off that cliff-edge because someone else’s disruptive innovation has abruptly made it obsolete, after which it is very hard to re-establish coherency (many companies here end up in death/rebirth unless they can secure new funding or a benefactor). Bigger, dominant organisations are often more complacent by nature, and the bigger you are, the harder you really can fall.

The long fall from Obvious to Chaos through Complacency-Induced Failure

 

Old giants rarely die; they can become too big for mortality. IBM is still a powerhouse; Microsoft is still gigantic. But with the shift of market and paradigm, the Big 4 in tech today are considered to be Google, Apple, Facebook and Amazon. Occasionally Microsoft joins the club as a fifth member who has some form of tenure – for how long, we don’t know.

If you look at these main examples, which of them are still both innovating and disrupting?

Understanding how to innovate and/or disrupt in context and emergently is vital for companies of any size, arguably more so the bigger they are; being able to see when you can or must do so is equally critical. They must furthermore understand how to do it in this new emergent, uncertain market landscape we’ve never been in before, for an entirely new generation. It’s become much harder to do.

Get it wrong, and you’re strolling near that cliff edge… while you’re looking the other way.

 

The Red Pill of Management Science

Further Into the Matrix

Management Science hasn’t changed much in the mainstream for decades, and people have become exceptionally skilled at navigating a system and command structure that is not always fit for purpose, but has come to be used to try to resolve everything.

I felt it of value to take a further look into some thoughts on systems, organisations, company culture, and decisions via the knowledge management matrix.

Traditional and “modern” management science methods are mostly based on Mintzberg’s 10 Strategy Schools, with an expected hybrid outcome of consistent, transferable, repeatable and rigidly controlled performance, aligned to mission statements and values that are predictable and usually framed as single, perfect goals. When this is applied out of context, problems result.

 

The Cycle of Woe

Many organisations large and small are trapped in a loop of trying to remediate fallout from this approach to everything, whilst continuing to apply it. This produces a cycle which typically lasts 6-12 months, although it can be longer or shorter, and roughly follows this order:


This is not only woeful for the company, but also for the individuals creating its value streams, and it links into crisis management, weak signal detection, S-Curves and Complex Adaptive Systems, as well as a whole raft of other subjects.

 

Understanding what lies behind the Cycle

An organisation, and the people that make it up, are complex, as are many situations. Complexity is by nature unordered and therefore not linearly causal (unlike complication or obviousness, where if I do x, I will always get y). It has dispositional states, where you can estimate, even simulate what is likely or unlikely to happen, but you cannot predict with certainty – and that’s an important distinction: prediction is not the same as simulation.
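To make that distinction concrete, here is a minimal, purely illustrative sketch (toy numbers of my own, not any model from this post): simulating a dispositional system many times yields a range of likely outcomes, where a prediction would assert just one.

```python
# Prediction vs simulation in a dispositional system (illustrative only).
import random
import statistics

def quarter_outcome(rng: random.Random) -> float:
    """One plausible quarter: a base trend plus interacting, uncertain factors."""
    base = 100.0
    market_shift = rng.gauss(0, 8)          # unordered external influence
    internal_friction = rng.uniform(-5, 2)  # culture, politics, bottlenecks
    serendipity = 15 if rng.random() < 0.1 else 0  # rare lucky break
    return base + market_shift + internal_friction + serendipity

rng = random.Random(42)
runs = sorted(quarter_outcome(rng) for _ in range(10_000))

# A prediction asserts one number; a simulation shows the disposition --
# what is likely or unlikely -- without promising any single outcome.
print(f"one point 'prediction': {quarter_outcome(random.Random(1)):.1f}")
print(f"simulated median:       {statistics.median(runs):.1f}")
print(f"likely range (10-90%):  {runs[1_000]:.1f} .. {runs[9_000]:.1f}")
```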

In Fearing Change & Changing Fear, I talked about the matrix below – a core precept of Cynefin, created by Dave Snowden of Cognitive Edge:

 

                 (Rule-based)               (Heuristic-based)

(Un-ordered)     MATHEMATICAL COMPLEXITY    SOCIAL COMPLEXITY

(Ordered)        PROCESS ENGINEERING        SYSTEMS THINKING

Cynefin Knowledge Management Matrix (Cognitive Edge)

 

…where Order and Unorder are ontologies (definition of causality) and Rules and Heuristics are epistemologies (knowledge in terms of action).

This time, I’ve added colour to show the relationships between the elements:

Systems dynamics (Systems Thinking) and computational complexity (Mathematical Complexity) take a MODELLING approach which, in most of the popular forms of Systems Thinking, essentially removes human judgement through models and predictive process.

Scientific management (Process Engineering) and anthro-complexity (Social Complexity) take a FRAMEWORK approach, which looks at things from different perspectives and also respects human judgement.

It is important to note that I am by no means saying that Process Engineering and Systems Thinking have no place – Contextual Complexity is the idea that humans can operate in and move between all 4 quadrants of this model, either accidentally or deliberately. In some cases Process Engineering and Systems Thinking are the applicable approach, but when we move outside those quadrants and don’t realise it, their application actually damages success.

Instead, this is about understanding when they have their place, where you currently are in Cynefin’s domain model, and acting appropriately to the context you find yourself in. If you are amidst uncertainty and you cannot resolve conflicting issues within a feasible timeframe based on the evidence… you are probably in the complex unordered domain, and it’s understanding when you are and how to act that is crucial.

This is where things can become a serious problem and catalyse the Cycle above, because the two Ordered quadrants are prone to simplified “recipe” thinking, prediction based on perfect outcomes, and the unthinking application of order in unorder.

 

The worries of modern Management

Many organisations are now in a market/landscape they have no prior experience of or reference for, and this causes fear and concern, because we are being forced to change at both a personal and an industrial level. They push back against this by acting as they always have, via the Cycle of Woe, but the simple procedures that once worked no longer produce benefits beyond the very short term.

Does any of this come to mind with current or past companies you are aware of?

One of the key reasons for these responses may be the still-existing, long-term investment in structures based on Taylorism (which dates back to the 19th century, yet is still a core of today’s management science), a root of Process Engineering. This can be interpreted as the belief (and action upon the belief) that an organisation is a machine with people as cogs or components that will consistently deliver the exact same output in quality and quantity – or, that an organisation is both inherently ordered and conforms exactly to rules.

Despite the decades-old realisation that Taylorism is actually detrimental – because that just isn’t how people work – and a supposed eschewal of it in favour of a more Systems Thinking approach and a shift in perception from “machine” to “human” (Peters, Senge, Nonaka), businesses have not fully changed.

There has been an effort to balance the Mintzberg et al Process Engineering-centric Schools of Strategy:

 

• Design
• Planning
• Positioning

 

and the Systems Thinking-centric Schools:

 

• Entrepreneurial
• Cognitive
• Learning
• Cultural
• Environmental
• Configuration
• Power

 

but this is still an attempt to balance mechanical efficiency with a modelled semi-utopia, and the value of people – and thus the organisation’s own value streams – tends to get lost along the way. In my own experience of companies, I have found a leaning to the Process Engineering side with some nods towards Systems Thinking, in many cases taking the worst of each to form an organisation in the likeness of a machine: an optimum goal, fresh, dynamic values that aren’t as humanly achievable as they sound, pride in a pseudo-innovative approach, and an inability to sense or react correctly when situations are no longer as desired.

Organisations often use these modified concepts of Taylorism because they are trusted and traditional, despite having been shown ineffective for decades, and act as if they will forever output the exact same quality and quantity towards an outcome they are certain they can reach. When forced to drastically change, there is a tendency to jump onto a new orthodoxy or the bandwagon of the latest management fad that “worked wonders for company x”. Unless scientifically investigated or proven in context, be wary of “hacks” and “secret methods” – especially if novel, yet already in a new best-selling book!

This is representative of something called the Hawthorne Effect (Snowden), which you can read more about in my post The Secret Shortcuts to Innovation. It is a good example of the trend of applying novel, popular, simplified fads – which are not actually applicable to your organisation – to innovate and fix things, dropping you back into the Cycle of Woe, when your value usually already lies within; it just needs context and sense to emerge.

So… how to get the best value output? This will depend entirely upon your organisation’s unique situation and context.

 

You only get out what you put in… right?

Not necessarily. Whilst you see this quote around a lot, and in certain circumstances it’s true, it isn’t an immutable law, certainly not in business:

 

                 Simple Input               Complex Input

Complex Output   MATHEMATICAL COMPLEXITY    SOCIAL COMPLEXITY

Simple Output    PROCESS ENGINEERING        SYSTEMS THINKING

Cynefin Knowledge Management Output Matrix (Cognitive Edge)

 

Process Engineering is best considered as something automatable, rigid, controlled, with people as components in the process; a machine. This is a simple input/simple output scenario.

Systems Thinking is best considered as the determination of a desired (often semi-utopian) outcome, with a system set up around achieving that goal that is controlled, predicted and measured; an organism, if you like, or often more accurately the desirable model of an organism. This is a complex input/simple output desirability.

Mathematical Complexity is best considered as simple rules being modelled to demonstrate complex behavioural patterns from agents within a system; an algorithmic or simulation approach (remember, simulation ≠ prediction: the former is designed to see what could happen, the latter tries to guess what will happen). This is a simple input/complex output model – a toy sketch follows after these descriptions.

Anthro- or Social Complexity is best considered as trying to understand the dispositional state of the present (or what is likely to happen), then trying to guide the future state by modulation instead of driving (guiding emergence instead of forcing desire) and using vector measurement (feedback defining the parameters of the journey forward) to monitor for new, better opportunities, and basing all of this on all agents within the system; an ecology approach, flexible, innovative and reactive. This is a complex input/complex output emergence.
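As a purely illustrative toy (my own, not anything from Cognitive Edge), here is what the Mathematical Complexity quadrant’s simple-rules-to-complex-patterns idea looks like in a few lines: each agent follows one trivial local rule, yet clusters emerge that no rule ever mentioned.

```python
# Simple input, complex output: a tiny segregation-style agent model.
import random

rng = random.Random(0)
agents = [rng.choice("AB") for _ in range(40)]  # random mix on a ring: simple input

def unhappy(i: int) -> bool:
    """One simple rule: an agent wants at least one like-typed neighbour."""
    left, right = agents[i - 1], agents[(i + 1) % len(agents)]
    return agents[i] != left and agents[i] != right

for _ in range(500):  # unhappy agents relocate until everyone is content
    movers = [i for i in range(len(agents)) if unhappy(i)]
    if not movers:
        break
    i, j = rng.choice(movers), rng.randrange(len(agents))
    agents[i], agents[j] = agents[j], agents[i]

print("".join(agents))  # clusters emerge, though no rule mentions "clusters"
```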

The required output of organisational value has drastically changed. Once, a local artisan may have arisen to make shoes as a basic human requirement. It required simple or obvious components, basic materials and practices going back perhaps thousands of years, and a complicated element in the form of an expert craftsman (certainly once competitors arrived). Eventually this would grow – people always need new footwear – and become a company or trade. People had to take a set number of steps in a certain order to reproduce the preferred quality of shoe; gradually, they then expanded the quantity produced in line with growing demand.

Once sufficient complexity and saturation of market/product/company is reached, there is no longer any guarantee of staying within the realms of order and cause and effect, or balancing both quality and quantity using the old methods; you also can’t effectively innovate by modelling, or total controlled rigidity.

Today, companies have grown, globalised, diversified, propagated and moved far into abstract realms providing services as a priority, and the once-simple production of shoes by specialists is a product mass-produced cheaply, efficiently, in multiple materials and at a cost of ethics and craftsmanship; the commodification of the process itself, rather than the product, and a mantra of being innovative. Almost all business today is exponentially more complex, in a likewise exponentially more complex world where knowledge and services have become a primary global economy, 24/7. Companies are finding that you cannot operate as you once could, bureaucratically and hierarchically, because everything has changed, and they need to catch up – fast.

Entrepreneurial SMEs and the EMEA market approach are good at dealing with this. More traditional company structures aren’t, and that’s a problem for huge corporations as well as everyone else.

 

Adding to Value Production

It may also help to understand something further – in simplest terms, each of these is an attempt to augment how we approach the production of value:

 

                 Simple Input               Complex Input

Complex Output   Cognitive Replacement      Cognitive Augmentation

Simple Output    Physical Augmentation      Cognitive Replacement

Cynefin Knowledge Management Augmentation Matrix (Cognitive Edge)

 

As you can see, the two quadrants labelled “Cognitive Replacement” are attempts to model ideals or outcomes and replace both productivity and distributed, real-time cognition with their practices or results (almost to force utopia). Process Engineering, by contrast, produces value logically and restrictively (but is prone to bottlenecks) by adding or removing people, components or processes, while Anthro-complexity treats this as a parallelisation of human processing power, used to more effectively discover the best path in uncertainty and to maintain the constant feedback needed to do so. They all have their place depending on situational context.

 

Collaboration and Culture

Of the four quadrants, collaboration and innovation are most likely to happen in Social Complexity. It’s real-world applicable, reactive and monitored, and the output emerges in a vector-based fashion; in other words, it doesn’t try to define an outcome, unlike Process Engineering and Systems Thinking. A vector measures intensity and speed of travel from a point (usually the present), and allows you to modulate (guide with feedback) the progress until a viable path emerges.
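As a minimal, purely illustrative sketch of vector measurement (hypothetical numbers and names, not a tool from this post): rather than scoring distance from an idealised target, we track direction and speed of travel from where we are now, and use that as feedback.

```python
# Vector measurement: direction and speed of change, not distance to a target.
from dataclasses import dataclass

@dataclass
class Vector:
    direction: float  # +1 improving, -1 degrading (sign of the trend)
    speed: float      # how fast the signal is moving per period

def measure_vector(signal: list, window: int = 4) -> Vector:
    """Compare the recent window of readings with the window before it."""
    recent = sum(signal[-window:]) / window
    prior = sum(signal[-2 * window:-window]) / window
    delta = recent - prior
    return Vector(direction=1.0 if delta >= 0 else -1.0, speed=abs(delta))

pulse = [6.1, 6.0, 6.2, 6.4, 6.3, 6.6, 6.8, 7.0]  # hypothetical weekly pulse scores
v = measure_vector(pulse)
action = "amplify what changed" if v.direction > 0 else "dampen and re-probe"
print(f"direction={v.direction:+.0f}, speed={v.speed:.2f} -> {action}")
```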

Social Complexity also takes into account something critical to success: company culture and sub-cultures. Culture is created by and for the people within the system, but also by the actions and inactions of leadership. It can be beneficial – the glue that welds the company into a cohesive value delivery platform – or it can be incredibly toxic: losing vital agents, morale, and collaboration, producing gaming behaviour, cynicism, policies that impede roles, nonsense politics, and a focus only on immediate reward structures – in short, losing the ability to be effective in anything more than the short term. When we talk about real collaboration that is self-creating and self-sustaining, it is found here. Understand that over-competitiveness and over-constraint via rules/policy/demanded output can be contradictory to success (inducing cynicism and gaming behaviour merely to get the job done), and you understand why a holistic ecology needs good culture to operate.

Attempting to create a culture via Process Engineering, which relies heavily on human involvement, can fail because those humans are individuals, and complex – not components in a machine. It is a framework approach, but a heavily constrained one which doesn’t allow for individuality and feedback; although it does allow for human judgement, it expects mechanical efficiency and does not allow for a lack of order within the system. It still expects rules to be adhered to, even if they impede progress and value production.

At the same time, culture from Systems Thinking, whilst based on some good ideas, has the fundamental flaw of being a model – so whilst ostensibly Systems Thinking says “we accept this is a system of humans with individual traits”, and allows feedback, it removes human judgement in favour of prediction and order, and still expects firm adherence to that order whilst heavily measuring and metricising humans against a perfect vision.

This is a real problem with most popular Systems Thinking – it instils a habit of thinking about where you want to be in an ideal world and then trying to close that gap; in other words, using outcome-based measures, which may have no actual basis in reality. Depending on these for forecasting, culture, and organisational direction can be dangerous, as can attempting to control and apply policy to humans, who live in a real, complex world, not an ideal one, and act accordingly. Systems Thinking does not always apply well to HR, for instance, because measuring complex agents on outcomes which may be unrealistic or require gaming of the system to reach, or demanding people map to a model when they are all individuals, is a decidedly failure-prone way to try to make sense of knowledge, achieve job satisfaction or good morale, or deliver value.

Where both of these approaches often fall down is that they still assume circumstances and organisations are ordered, even when this is not the case. Forecasts, company messaging, and guaranteed output rely heavily on a firm goal that must be achieved whatever the cost (or reassessed only after a tipping point); all of these induce an initial tunnel vision that is then very hard to see outside of.

For me, one of the most unforgivable aspects of popular Systems Thinking is that positivity and adherence to the perfect desired outcome are valued far above realism and the achievable – think about how many times management is unhappy with a forecast because “it seems negative” – and the mavericks and heretics who suggest other approaches are often suppressed or punished.

I’d rather have a realistic prediction than a comfortable one – and those mavericks and heretics are the people most likely to spark innovation in an organisation.

 

Why Social Complexity is so effective in uncertainty

Social Complexity takes a different approach. It is an ecological framework, based on individuals working as collaborative agents within an unordered system, where real-world feedback is critical to assess and modulate goals which may change significantly. The flexible vagary of human input (including outliers) can be harnessed positively instead of suppressed, producing productive, innovative, beneficial output which may even improve on the original preferences. It accepts that predictions cannot be accurate, and uses looser constraints to let the system achieve the contextual coherency required to reach an appropriate goal, find new goals, or spark innovation. This is the closest we get to a cohesive ecosystem delivering the most effective output and value, self-monitoring and constantly feeding back and adjusting.

Contrary to popular management approaches, it says if you find yourself in an uncertain scenario, you must identify where you are NOW, and then see where you can make changes (via probes and contextual constraints), and then monitor vectors as close to real-time as possible as you go forward, allowing for all agents within the system. If you find a path of coherency where you dampen negatives and amplify positives, you may be able to then stabilise this emergent path until you have discovered or even created causalities, transitioning through a liminal domain into linear causality (Complication), whereupon you can breathe a little more easily and create governing constraints.
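A minimal, purely illustrative sketch of that loop (toy numbers of my own; real probes are human experiments, not random draws): run several cheap safe-to-fail probes in parallel, dampen what degrades, and amplify what improves.

```python
# Probe-sense-respond: dampen failing probes early, amplify promising ones.
import random

rng = random.Random(7)
probes = {f"probe-{n}": 0.0 for n in range(1, 6)}  # five cheap experiments

for week in range(8):
    for name in list(probes):  # copy keys; we may remove probes mid-loop
        probes[name] += rng.gauss(0.2, 1.0)  # noisy real-world feedback
        if probes[name] < -2.0:
            del probes[name]        # dampen: shut a failing probe down early
        elif probes[name] > 3.0:
            probes[name] += 0.5     # amplify: feed a promising probe resources

print({k: round(v, 1) for k, v in probes.items()})  # the surviving probes
```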

How you get here and where you get to may not be where popular Systems Thinking had you start out, trying to attain an idealistic outcome, because you have probed for new possibilities to close the gap between here and a REAL outcome – which could be even better than the original goal, and is likely to be more realistic.

In other words, the journey, which never ends and allows novelty, serendipity, and new paths to be discovered en route, is really more important than a goal where you are so set on the target that you don’t see alternatives – or the fact you might never actually be able to get there.

 

What does this mean for an Organisation?

Well, not that you have to drop everything and instantly decide to attack every situation as complex, remember. It’s more about understanding, internally to the company, the cultures, departments, and people that make it up as reactive pieces of a holistic, ecological whole, and learning how to divine what is a complex situation and how to make sense of it: Contextual Complexity and appropriate action. The simple fact is that you probably aren’t experiencing issues if you really are within the ordered quadrants you think you are. It’s when you think you’re still there and you’re not that problems rapidly arise.

As mentioned in a few posts on this blog, there are a number of reasons many organisations today (and many of those don’t understand they have issues yet) simply don’t seem to understand their markets, their employees, where they are going, or how or if they can get there.

There are several ways to approach these issues, and as companies become aware of them they inevitably get caught on buzzwords and “certified approaches”. One of the best, though, is simply engaging a multi-methodology consultant who – rather than coming in as an expert in one specific popular approach, doing disassociated work for the company, and leaving – advises, jiggles, and helps the organisation learn how to sense-make (instead of reflexively categorise) for itself, then change from within using a mixture of appropriate methodologies and frameworks. This helps create a sustainable, learning organisation one step closer to a collaborative ecology, and lets it focus on the value it delivers instead of the internal struggles it faced.

It’s all about dowsing for context and coherency – decoding where you are in the matrix and acting accordingly – and that’s what we’re here for: nudging, education, and paradigm shifts.

…I don’t recommend the blue pill. It leads to a Cycle of Woe.

Homogogy – the Neo-Paleo Teaching of choice

Before I get into the definitions of teaching in this blog (which are by no means conclusive), I feel it is relevant to address terminology, as it was a core reason behind the concept of Homogogy for me.

Dave Snowden (Cognitive Edge) is one of many who have outlined the importance of language in defining concepts, and it is something I agree with. Where a thought process or conceptual framework requires intelligent application and consideration, it also requires clear, concise, and precise language, which in turn defines how you frame your thoughts about things.

A good example of this is within industry – terms may be mutually used from one situation to another, but in business, buzzwords may be used, misused, or misappropriated. Terminology can become fuzzy or situational. As an example, many people will use the phrase “in theory” or “theoretically” to talk about an estimate or guess, when they are actually talking about a hypothesis – a supposition or proposed explanation based on limited (or no!) evidence. I’ve heard this a lot in IT and related verticals, especially in sales.

In science, however, a theory doesn’t mean something guessed at, but a substantiated explanation based on a set of facts which have been reliably confirmed via experimentation and observation – both of which are provable and repeatable by anyone. It also accepts that this is the best understanding of something at that point in time, and that this could change based on new data.

Humans are learning machines; we adapt and learn faster and on more levels than any other creature we know of; and yet, we manage to actively and aggressively damage that natural learning. We impose limits; we opt for profit over results; we force rather than inspire; and we muddy language around this process, often twisting terminology so it means the opposite to suit our whim.

So language is critical, and its correct application is just as important. Cynefin uses it precisely to help define concepts (e.g. Order, Un-order, Disorder); science uses it identically. Business, largely, does not. We must use the correct words, in the correct manner, if we are to comprehend.

I’ll also refer to Narrative a few times here, as it’s critical to human learning, but I’ll focus on it in another post.

 

Pedagogy & Andragogy

There are two primary accepted methodologies of teaching used, both named in the West from a similar Greek root:

Pedagogy (leading boys) is the concept that children must be lectured and moulded, taught what they need to know. Pedagogues are traditionally associated with the young, strictness and pedantry (children should be seen and not heard is a classic integration with this concept).

Andragogy (leading man) focuses more on adults needing to teach themselves, and discover new skills through play. Andragogues are seen as adult educators and enablers who focus on experiential learning.

They were codified during the 1960s and 1970s by Malcolm Knowles, a US Professor of Adult Education. He noted that the way adults were being taught was ineffective; lectures, learning by rote, exams, and other techniques we still associate with university (and school) learning simply weren’t achieving the results they should, relatively easy to monitor and perform though they were. Books such as The Adult Learner: A Neglected Species changed the way adults and industries thought about teaching adults, although the older methods are still surprisingly widely used to this day in both university and business.

This was very beneficial to adults, especially in industries looking for new and effective ways to engage, and was a definite springboard for engaging techniques such as Agile learning to develop. It was not beneficial to children, however. For all his progressive thinking about how adult humans learn, Knowles assumed that because children had been taught via pedagogy for hundreds of years, it must be the correct way (interestingly, he quite consciously refuted this assumption for adults), and so he unwittingly and drastically reinforced the old, rigid, totally incorrect methodology for children.

Neither of these acknowledges that humans learn in a neurologically similar fashion throughout their lives, nor are the terms technically correct in times when we rightfully acknowledge that women are an equal part of the human intellectual process in intellect and ability, if not in recognition and reward. The terminology, in my opinion, needs updating.

 

Why Pedagogy is wrong

Pedagogy is, essentially, Teacher-Centred Instruction. Immediately this misses the point of learning; the focus should be on the students, their retention, and their comprehension, not on an authority figure and their instruction. Even worse, this is usually not the teacher’s preferred method, but a professional demand from a disassociated governing body.

Anyone who has seen a pre-school child realises that children learn naturally and swiftly through interest, play, and repetition. So why, in many countries including the West, do we begin to inject discipline and demand younger and younger, removing all fun and interest, indoctrinating them into the stress of modern life?

This comes from centuries of instructing children, and in some ways is worse than it has ever been. We have moved from rigid silence and forced learning to the veneer of play – laid over disciplined, metricised, commodified, enforced learning, with fewer resources than ever.

It is not the same country to country. Scandinavia – especially Finland, for example – has incredible results teaching children, and they avoid pedagogy; instead, they allow teachers AND students the freedom to learn and experiment in the ways that work best for both. It seems incredible to me, then, that in the US and UK there is a clear connection between academisation, profit, agenda, and class, where actual results matter less than these other things; blame shifts to the students, for not trying hard enough, and to the teachers, for not teaching well enough despite the huge limitations placed upon them by the system (this applies very much within business organisations as well).

In the UK, children as young as 4 are being monitored for SATs – Standard Attainment Tests. These supposedly track the progress of children in black and white, for all to see, but they are frankly a ridiculous idea, and one many teachers balk at. They do not provide accurate understanding of children of all backgrounds and neurodifferences; they are not an accurate measure of a child’s level; and they induce massive stress levels which de-incentivise children and stifle learning. They also value learning-by-rote achievements over applicable comprehension, which for me is unforgivable.

Far too much pressure is put on schools to deliver certain levels of results or lose status or funding; far too much pressure is put on teachers to get results within strict limitations; and far too much pressure is put on children, who find their learning interrupted by the trauma and stress. Why do this, preparing children for a life of dictated, mindless toil and stress, when later we work to “reawaken” adults in industry and help them learn intuitively? Should we not be doing this from the beginning?

Well, yes, we should. It’s been proven in multiple studies that the best school systems in the world with the highest results (such as Finland, above) remove enforced homework, constant measurement and competition, and high pressure levels, and allow children to develop interest and learning themselves. It has also been proven, over decades of research into neural learning patterns and brain-friendly learning, that pedagogy is the diametric opposite of these. Exams and SATs should be a loose marker, a gauge; but they are taken as a grail: THE RESULTS.

Learning is a continuous path, not an end location.

I’ve passed exams on hardly any work, because I’ve always excelled at seat-of-my-pants reactivity; that does not equate to comprehension and application of a subject. Apply that to business – as I’ve seen happen after my own courses – and you are suddenly left with an organisation in trouble with a client, because it was more concerned about the student’s course qualifications (or a compliance tick-box) than their ability to know what they are doing. That equals lost revenue, lost reputation, lost trust, and the growth of a culture that only cares about the paper (anyone in IT will readily cite MCSEs as a victim of this cramming process).

Pedagogy is still widely used in business. Certification is all; classes are strict; classrooms are arranged in desks; and more. Typically there is an information overload, delivered in too short a timeframe to too many people at once, in a generic, boring, “company approved” manner which often spawns bad conceptualisation, an inability to apply or retain data, a hierarchy of go-tos, and a host of other problems. It is a terrible teaching methodology for humans, let alone children, and it comes hand-in-hand with the expectation that the certification is the goal: a qualification that has more value than the learning itself, both before and after.

As I said in my recent post Never Mind the Buzzwords, it is important to understand that a certification or qualification is the beginning of understanding and application, not the end. This should especially be borne in mind for younger humans.

Children learn in the same way as adults, but better and faster; they lack only the developed cognitive abilities for the abstract, and prior experience. That is why stories (narrative) are as crucial for children as play and experimentation; they allow children to relate concepts to their limited experience – with the understanding and expansion this brings – and provide the inspiration to test. Humans learn naturally, and better, by the use of narrative.

A much smaller percentage of people are readily capable of learning in this restrictive way, and the rest are judged for not managing it; but even those who are incentivised and capable of learning like this can improve how they do so.

In short: Pedagogy does not allow children, or anyone else, to learn like humans.

 

Why Andragogy is no longer right

Andragogy has been widely understood to mean “the teaching of adults”. Despite efforts by many professional teachers to redress the usage, it remains associated with adults – and agile learning methodologies especially – rather than being the default way we should teach everyone.

Aside from abstract processing and life experience, another difference between adults and children in teaching and learning is the ability of adults to know and/or be able to express if that teaching and learning is not working effectively (wrong course, irrelevance, poor teacher, and so on). They have developed a meta-understanding of the process and how abstract concepts integrate, whereas children tend towards pure learning without the awareness of the method. This may be why Knowles focused on the needs of adults in finding a new way to teach and learn; it is easy to look at several hundred years of teaching children and say, “Well, they haven’t raised these issues”.

The idea was formed that adults need games, engagement, and “space to learn”; that the teacher still has knowledge to pass but the adults use experiential learning. Of course, as soon as you really consider children you realise it’s no different for them.

Does any of this sound mad to anyone else? Children, who learn by play naturally, should be taught like automatons and be rigidly graded with SATs and every other possible metric; whilst adults, who have lost some of that neuroplasticity but can discipline themselves to learn in a number of more restrictive ways, are taught like “real people” and encouraged to play games?

 

Rather than unlearning what you have once learned, it’s better to learn
correctly from the start… and continue for the rest of your life!

 

In Training from the Back of the Room, Sharon Bowman paraphrases a list from Knowles, noting humans:

• Want/need to learn
• Learn in different ways
• Learn best in informal environments
• See themselves as self-directed and responsible
• Learn best with hands-on practice
• Bring their past experiences to learning
• Learn best when they can relate new information to what is already known
• Have their own ideas to contribute

There is an unwritten assumption in almost all teaching that the teacher is always the holder of knowledge, correct, and that they are “in charge” of a class, although Andragogy is far less rigid in this respect. Bowman’s book stresses the importance of removing the teacher as an impediment, which I have always strongly agreed with.

In short: Andragogy has come to mostly be accepted only as a(n agile) way of teaching adults and is often misapplied as a result.

 

Why Homogogy is what we now need/have always needed/used to have

The meanings of the above methodologies have been misappropriated over time, and were misunderstood from the start. Both Andragogy and Pedagogy view the teacher as a holder of knowledge, and a student as a recipient of knowledge. This is massively simplified, and only one aspect of teaching.

I’ll introduce a novel thought:

A TEACHER IS NOT A KNOWLEDGE TRANSFER DEVICE.

We are a guide; we inspire, we help; we provide information, too, but we are there to spark and engage, not enforce. Learning is not effective when you attempt to force it upon people for anything other than survival (at which point you expect losses). In addition, a teacher can only open the door; the student must decide to walk through it themselves.

In another post I proposed 6 I’s (I return to them below) that are mostly overlooked in Pedagogy, and often in Andragogy:

Neurologically, children and adults learn through the creation of neural connections in their brains in certain orders. The brains of children are far better at this (neuroplasticity) and this means it is incredibly important to allow them to explore, play, and incorporate their developing ideas into their learning; unlike adults, they do not have a lot of experience to draw conclusions about and so experimentation is even more vital for them, along with correlational, simple stories.

How humans learn is through doing, and fragmented narrative; back before we tried to structure learning en masse, the best learning came from stories, mentoring, expert advice, apprenticeship, repetition, experimentation, and guidance. We went with others to learn what and how to hunt and forage, and were guided by their advice as we attempted it. We apprenticed to a blacksmith to practice working in metal. We told stories to inspire others to want to learn life skills and knit a community closer. These things were interesting, immersive, and inspirational; we had incentive, and we were involved constantly, as we knew we needed them for our very survival. We passed on instruction of them to others so that they, too, would be successful, making us all successful. Narrative was also a method of allowing human knowledge to pass far beyond our own sphere in terms of abstract thought and contextual correlation.

Most teaching has become much more abstract as our real and virtual communities expand and increase in complexity; we have changed the way we teach to be convenient, but the way we learn is coded into us on three levels – by evolution, by culture, and individually by neurodifference or conditioning.

Our brains are analogue devices, not digital, and work via experiential neural connection creation. Teaching must engage this, not the reverse; we cannot change how we are programmed to learn. 

Pedagogy and Andragogy should never have been defining methods of teaching and learning. They are segregational, generalised, limited, reductionist, and restrictively associated. This is all wrong; we have decades, centuries, millennia of evidence to prove it, both scientifically and anecdotally. Although Jay Cross (Informal Learning) has a point when he says, “‘Andra’ is the ‘gogy’ to go with for all,” I would go even further because of the restrictive association with adults that has evolved.

How we should all be taught is as humans, regardless of age or gender. That is why I have realised my teaching and learning pattern methodology is Homogogy.

 

So what is Homogogy?

Homogogy (“Leading Humans”) is a “new” framework which is actually very ancient, hence neo-paleo. I believe it is what most effective teachers mean when they refer to Andragogy now, but as I said at the start, language is important. For me, Homogogy has more precision and better connotations; we’re all human, after all. But it isn’t just the teaching or learning of a simple subject; it’s at the core of human interaction.

Every interaction we have holds teaching and learning patterns. Business meetings; basic onboarding at a company; a fire safety compliance meeting; a school class; a presentation; a workshop; a heavily technical training; social events; university; the first time we meet someone; and on. All of these hold multiple levels of understanding, potential paradigm shifts, feedback, and information – intellectual, physical, and emotional. Understanding how we teach and learn, and seeing those opportunities, is something we often miss – every day holds them, and yet we usually only consider them during a formal “class” occasion.

We have conditioned ourselves to become lazy at teaching and learning, in a time when more humans have access to more information more quickly than ever before. Worse: we are so disincentivised to learn by subjects perceived as boring, disagreeable, or too complicated that we often choose wilful ignorance.

I genuinely believe Homogogy, how humans teach and learn, can reverse this, and it begins with all human interaction. Everyone, every situation, has something to teach us. There are some similarities with Andragogy, with a number of concepts that are required for effective human learning:

Brain-friendly (neurological) learning
It’s been shown through multiple studies that all human brains learn in the same fashion: we create neural connections to facilitate memory. How we arrive there and engage this can differ slightly, and there are natural neural differences in humans which will dictate the most effective technique for each human.

Collaboration
We are both competitive and collaborative naturally, and both can be drivers of learning; but only collaboration can be an emergent modifier. Competition will quickly inhibit group learning in favour of a dominant individual or group, and often the focus becomes more about who wins over who learns. Collaboration can be competitive, but it’s beneficially so, and the greatest advances come from the sharing of ideas, not the dampening of them. Synergy between multiple cultures, organisations, and individuals in a class is not only possible but beneficial – and can hugely enhance learning and collaboration. I’ve seen large competitor organisations develop a shared knowledge pool in my classes before, and stay in touch afterwards to exponentially enhance troubleshooting. This is something you will not find with competition.

Engagement & Ethnographical engagement
The gateway to accessing brain-friendly learning is engagement, which is both individual and cultural. You cannot engage an Asian class in the same way you can engage an American class, for example, so before you can connect with students well enough for them to learn, you must understand the best way to do so. This requires some understanding of Ethnography; one size does not fit all. People will not learn if they are not engaged, and the teacher’s role is to engage them so they can learn, not to try to force learning upon them. Learners need engaging culturally, individually, and neurologically – the trick is doing this in a mixed class! Some form of personal connection between teacher and learners is also crucial.

Failure & Feedback
Failure is crucial for learning to occur, especially learning by doing; demonstrating the consequences of what not to do is more effective than simply knowing what should be done. Humans tend to need demonstrations to understand or believe something. Feedback is linked to Failure, because failure is really only feedback; it tells you what you need to adjust so you can do something as you require it to be done. The true failure is not learning from the feedback. This is vital for people to understand, and matters especially when engaging a culture that considers failure shameful rather than an opportunity to learn. Failure is not shameful – it is a required part of learning, and must be managed constructively.

The 6 I’s
To help understand this, consider the circle of the 6 I’s (as above) – Interest, Inspiration, Involvement, Immersion, Investment, Instruction. This is the lifecycle of the learning of an idea for a human.

As an example: a child is curious about the sounds an adult makes, and realises that the sounds bring consequences – attention, food, love, and so on – so it is inspired to use them creatively (“children say the funniest things”). As it becomes involved, it learns, and as it is immersed, it learns faster, more completely, and, just as importantly, long-term. As it grows it becomes more and more invested in the use of language and its complexities, and eventually teaches others usage for mutual advantage. This is a natural cycle for social interaction and teaching/learning; yet most patterns, especially pedagogy, try to force some parts and ignore others.

Individuality
It is important to realise that a strength of humanity (and an overall weakness) is its individuality. We are incredibly individual in a multiplicity of ways, yet working together we create vast, complex anthro-social systems and paradigms. Our individuality is ofttimes at odds with this, and we limit ourselves via conflict, hierarchy, and strong assertion of identity in a number of areas, as well as an inherent desire to order systems – which sometimes cannot be ordered, by their very nature. This causes serious problems in time (see Cynefin posts for more information). The best human systems are those that celebrate and utilise the individuality of each person, acknowledging it and harmonising it. Individuality is part of where we get our complex natures from, and it makes us learning machines. Anonymising and repressing this stultifies learning.

Freedom to Experiment & Innovate
For truly organic, flexible learning, both teachers and learners must be able to play, test, do, experiment, and understand not just the subject but how to take it in. This allows individuals, situations and classes to naturally find the best ways to be incentivised, understand, and teach one another. Outliers, individuals, and shared cognitive load in an unconstrained environment spark personal as well as industry innovation.

Relaxation
Humans do not learn effectively unless we are receptive to retaining and understanding data. Outside a few very clearly focused instances (picking up a hot coal, for example!), this requires us (certainly for abstract and complicated issues) to be relaxed, and to enjoy our learning. Moving outside a professional comfort zone is required to spark innovation and experimentation, but staying far enough inside a personal comfort zone is important, because you do not absorb or retain information effectively when anxious.

Ignoring age as a segregator
Humans are humans. Regardless of sex or age, we learn in the same physiological manner, allowing only for different cultural and individual engagement, and minor adjustments (positive, not negative) for age. You would not teach a class of 6-year-olds at the same conceptual depth, or with the same reliance on world experience, as a class of 40-year-old professionals; but you would teach in the same way.

Homogogy is Evolutionary, not Revolutionary!

It’s always been there, and we’ve ignored it to our detriment. It’s time to re-acknowledge how we are designed to learn instead of suppressing it in favour of convenience of teaching.

The most important thing is always the applicable comprehension and retention, and passing on of the learning. Somewhere along the line, we’ve lost sight of this.

 

Further Consideration

I have coached many organisations, large and small, on learning as children naturally do, organic flexibility in structure, allowing students to drive the class, engaging, spaced learning techniques, mental parsley, the importance of teaching for applicable use and comprehension instead of exam answers, the criticality of real-world training, class sizes, layouts, and much more.

Many of these are covered in my first book on teaching and learning patterns, Involve Me, a short guide which didn’t delve deeply into the understanding behind the teaching (and needs updating!). A revised edition is due out very soon (I’ll tweet it when up!).

I’ll expand further upon Homogogy in my upcoming second Teaching/Learning book – title TBD!

I hope this has been useful. As ever, comments below or on Twitter welcomed!

 

 

The Secret Shortcuts to Innovation

The bad news:

There aren’t any.

There is no template for Innovation.

 

If someone offers a set, guaranteed methodology of “making you Agile”, or a template or recipe for repeating innovation, be sceptical. This is not how innovation works, nor is it how management and practices can really propagate.

By its very nature, innovation can only happen a couple of times at best before it’s no longer innovative and people lose interest and productivity. Creating an organisational culture that is primed to innovate can be done – it’s part of what I can help an organisation learn. But there is no template or hack for it. Each organisation (and situation) is different, and requires different methods to acquire its own skills and learn how to innovate, but there is one thing it always needs: buy-in and adaptability from management and Leadership.

As I mentioned in my post Of Scuba Diving, Cynefin, & Value Delivery – in markets that are slowing or going into stasis, in a new landscape unfamiliar to many businesses now, there is a drive to innovate, to influence.

Innovate what, though? Anything – anything that differentiates, that sparks interest, brings new relevancy, that gives an edge. The shift from focusing on things requiring innovation to finding anything to innovate – to be innovative – is starting to occur, and organisations are trying to find a secret, repeatable formula to achieve that goal – but true innovation is not easy, well understood, or predictable.

Organisations often consider innovation less as an abstract emergence and more as a tool, and also speak about disrupting the market in the same breath, which is a little different:

Innovation is usually seen as differentiation within a current market: being novel, better, the next big thing, with the aim of growing market share and hopefully dominating that market. Disruption is usually seen as more of a process over time that replaces the entire market with a new one through ubiquity, and seeks to change how the entire industry operates.

Both of these actually have an interesting and complex interaction with orthodoxies and market S-curves, which I’ll explore another time, and both can be misused as concepts. Neither is easy or guaranteed.

 

Sadly, not how Innovation works!

 

So how do you Innovate?

Innovation, unless you happen to strike it lucky, is often about understanding and boundary-controlling unconstrained circumstances in which innovative ideas can be explored using immediate feedback, with the positive ones amplified (and the not-so-positive dampened). This often also involves listening to everyone in the company – the heretics, mavericks, and outliers, who may be more likely to be innovative – as well as the mainstream consensus, which can be quite inclined to follow leadership through reward structures and a toe in sycophancy. This is where most companies face difficulties, because they stick to trusted and traditional methods, whether they are fit for purpose or not.
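To make that amplify/dampen loop concrete, here is a minimal sketch in Python. The probe names, signals, and thresholds are entirely hypothetical illustrations – in a real organisation the feedback would be observed, not simulated, and the probes would be actual safe-to-fail experiments:

```python
import random

# Entirely hypothetical illustration: several small, safe-to-fail probes,
# each cheap enough that failure is just feedback, run side by side.
probes = {"maverick_idea": 0.0,
          "outlier_process_tweak": 0.0,
          "mainstream_refinement": 0.0}

def observe_feedback(probe: str) -> float:
    """Stand-in for real feedback; in practice this is observed, not simulated."""
    return random.uniform(-1.0, 1.0)

for cycle in range(5):
    for name in probes:
        probes[name] += observe_feedback(name)   # immediate feedback, each cycle
    for name, signal in probes.items():
        if signal > 1.0:        # amplify what shows a positive pattern
            print(f"cycle {cycle}: amplify '{name}' ({signal:+.2f})")
        elif signal < -1.0:     # dampen (not punish) what does not
            print(f"cycle {cycle}: dampen '{name}' ({signal:+.2f})")
```

The design point is that no probe is a bet-the-company commitment; each is cheap, parallel, and judged on emerging signal rather than on upfront prediction.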

Innovation can also occur hand in hand with a crisis, but this is unbounded and risky (any consultant pushing a crisis to force innovation is someone to be wary of!).

Another thing to avoid is a promise of the latest in management techniques. Management fads sweep through general industry every 6–12 months, and may or may not have any validity whatsoever; I can use the original example of the Hawthorne Effect again as a hypothetical warning, courtesy of Cognitive Edge:

Increased lighting was introduced into the Hawthorne Cable factory to see if it helped production.

Productivity increased as a result.

This is where many companies would leave it, satisfied they had resolved an issue; often this is immediately seized upon as Correlation=Causation, a quick and easy simplified recipe to increase productivity. In this instance, the company decided to test what would happen if they dropped the light levels again, expecting a decrease.

Productivity increased.

These days, it’s likely two book formats and a sponsored Facebook post would be up within a month, giving step-by-step instructions on how best to bring light to enhance productivity, rather than anyone going back and attempting to better understand the science behind what happened. Today we have more fads than ever as a result of this constant kneejerk in management approaches.

This result obviously was not in line with expectations; what it actually showed was humans responding to novelty, and it’s been found we only do this a couple of times at best before we stop.

 

New shiny things

When something is novel, and seems to work very well in one instance, it is extremely attractive to people and organisations as a method to replicate success or gain image/reputation. Innovation is by nature novel, and when it is effective it can be exponentially rewarding. It has a foot firmly in the Hawthorne effect space; Amazon is an excellent example of this. The company offered something practical, convenient, and above all novel, in a space where people doubted it could be done cost-effectively. It innovated, and immediately spawned many imitators – but none of them have achieved what Amazon has, despite cloning multiple actions from its rise. Organisations looked at Amazon and tried to emulate the success, but failed. Why?

Because it had already been innovated!

On the back of this come the Fad Templates, ideas of “the Amazon way”, methods to do what Amazon did in your own company, and so on. The temptation exists to stop at the Correlation=Causation point and market or broadcast this, especially for consultants, because it’s easy to market and pitch.

“We can make you the next Amazon!” Or, going back to Hawthorne Cable, “Light Bringers Increase Productivity! Any company can increase productivity in this one easy step!”

Managers hear about what seems to be a catalyst for success and implement it in the hope of replicating it, usually in very different scenarios (or scenarios that are too similar, and thus a saturated entry point). Many fail to see the improvement, but assume it’s not being done correctly, or that not enough time has passed. As the interest starts to peter out, a second wave of the same technique sweeps through industry, because by now enough companies are doing it that it has become de rigueur. Eventually, it falls into disuse and is phased out, just in time for a new fad to sweep through.

Long-, even medium-term, this does not support an organisation or help growth, and can be very damaging. New doesn’t always mean better, in the same way that old doesn’t automatically mean best. It’s up to an organisation to probe what works for its situation and adapt.

 

What Innovation involves.

 

Innovation is cleaved to Management and Leadership.

Leadership and Organisations must accept fundamental change to innovate, because INNOVATION IS CHANGE, whether incremental or radical in nature.

Management science and management consultancy need to be based on proven and evolving techniques, and these have long moved away from Taylorism and the basis of Process Engineering, and even the newer Systems Thinking. The world of industry has changed; at the very least we’ve been through 4, and possibly 6, industrial revolutions by this point. A consultant should be guiding you on the best possible individually probed basis for the organisation today, and that can’t be done with old methods, fads, or templates – only with provable, repeatable science, adapted to each individual circumstance. This in turn needs to be supported at a cultural level by Leadership; the more rigid, traditional, and dismissive it is of the smaller outlier voices, the less likely innovation and change will occur.

As with most things in life – learning something to mastery, getting fit, losing weight – shortcuts are usually not anything past short-term and shallow, and rarely get the result we truly want. Anything worth doing is worth doing well. This means a little pain and adjustment, experimentation and investment, but it’s usually worth the change, and the emergent opportunities give far better paths to the future.

The same applies to innovation; copying and pasting from someone else who innovated is extremely unlikely to produce the same results. Instead, it’s better to probe in complexity and discover all possible opportunities, finding your own contextual innovations where you can. Cynefin is a very effective framework for helping systems do this, and adaptive – what I tend to think of as true – Agility also helps: the ability to gauge, in a granular and contextual fashion, what method or methods are required in a given situation for a given company, as opposed to a reliance on structure, buzzwords, recipes, and constraints.

As we move into an uncertain future, beset by a quickly changing generational market demographic, and more and more companies flounder or even founder whilst feeling they have lost their way, this approach has become ever more key.

A final thought – a consultant should never be the one to introduce the change and “cure” anything. We are there as an advisor, a mentor, a coach, to help an organisation understand, discover, and learn how to do this for itself – otherwise it’s not sustained after we leave and, ultimately, likely to fail.

 

Never mind the Buzzwords

Defining Agile, Lean, Kanban, Kaizen, Waterfall, Scrum, Cynefin… & why we need a Multi-Methodology Approach

 

I wanted to summarise definitions of some of the more popular terms that are becoming ubiquitous in business, and give a basic understanding of each. It used to be mostly specific industries (e.g. software development) that threw these buzzwords around, but now organisations in every sector are realising the benefits of applying one (or more) of these methodologies. I’m not going to go into huge detail on each, as there are many good, comprehensive articles out there with more depth and nuance; rather, I’ll briefly explain the differences and potential applications of the better-known. I’m also not going to cover what you should consider for decision making and production (perhaps another article!).

One important factor to note is that the concepts in this article are not really interchangeable. Some are concepts, some are processes, some are manifestos and methodologies, and some are frameworks. They may integrate and support each other well; each business situation may use or require one or more simultaneously.

As always, I’ll also treat Cynefin differently, as it is a naturalistic, scientific framework for understanding complexity and achieving coherency which can describe where the others are effective and applicable, rather than act as a tool for a specific process. Cynefin is for sense-making.

 

Waterfall

This is an older, rigid process which is highly ordered and constrained. It consists of a linear, sequential set of segregated phases which are always achieved in order, in one direction. It always begins at the first phase, and you only move to the next phase when the current is complete. Once the phase is complete, there is no returning to it without restarting everything.

This arose from manufacturing requiring steps done in order to achieve an end goal, so is applicable only to projects or businesses that adhere completely to the Complicated and Obvious Cynefin domains. It is extremely rigid, requiring extensive planning, strict following of steps, thorough documentation and zero role flexibility. Waterfall works in specific, limited instances where strict adherence and no deviation is required – in ordered situations where it is imperative to follow a specific dependent order (pre/post-surgery implement counting, as an example). Outside this, it cannot allow for changes, errors, or incorrect predictions, and the result can be a lack of understanding for stakeholders, bottlenecking, and longer and longer completion times.

It is still a traditional mainstay for management to apply to many situations, as it gives the feeling of control, simplicity, sustained output, and logic, even where these are not possible in the circumstances. This approach is heavily Process Engineering (ordered and rule-based), widely misused in situations which by nature cannot be ordered, and is an example of a process “simplified and transplanted” as a recipe between industry applications.
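As a minimal sketch, the one-way nature of Waterfall can be expressed in a few lines of Python. The phase names here are generic examples, not a prescribed set:

```python
# Illustrative sketch only: Waterfall as a strictly one-way sequence.
PHASES = ["requirements", "design", "implementation",
          "verification", "maintenance"]

def run_waterfall(phases=PHASES):
    for number, phase in enumerate(phases, start=1):
        # Each phase must fully complete before the next begins; there is
        # no path back to an earlier phase, so a mid-stream change of
        # requirements means restarting the whole sequence.
        completed = True  # stand-in for extensive planning and sign-off
        if not completed:
            raise RuntimeError(f"Phase '{phase}' incomplete: restart everything")
        print(f"Phase {number}: {phase} complete")

run_waterfall()
```

Note what is absent: there is no branch that returns to an earlier phase. That absence is the whole methodology.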

Waterfall 

 

Lean

Lean was developed in the manufacturing industry, specifically inside Toyota in Japan, a result of which was the Toyota Production System (TPS), an optimised, Lean manufacturing process. It is a manufacturing and management style which focuses on eliminating operational waste, and removing unnecessary resources and complications; cutting the fat, if you will.

Lean is related to Kanban and Kaizen; the TPS integrates all three (I’ll cover the others separately). It is a lot less rigid than Waterfall, although with the addition of Just-In-Time (JIT) processing it can still fall afoul of large errors or bottlenecks. It also de-anonymises stakeholders, instilling respect for those doing the work as a principle, and values evolution of process.

Although less prone to this than Agile, Lean still has elements of templating and codified certification (for example, Six Sigma) which can be limiting and may not apply correctly to individual organisational circumstances. The drift from reducing organisational inefficiency to instead eliminating defects and reducing variation can also introduce its own set of challenges.

Lean can be approached in a number of ways, and is a logical path for a company looking to better understand and refine value streams, production costs, and efficiency from team to organisational level. It is what companies strive to do by cost cutting and other slimming practices, but is often misapplied; it can be ordered, and leans into Systems Thinking approaches over Process Engineering (no pun intended).

 

Lean

 

Kanban

Kanban has a foot in both Lean and Agile. It is a methodology to manage and improve work in human systems based on the concepts of limiting Work in Progress (WIP) and flexible throughput – think of it as a Lean approach to Agile.

The word means “signboard” or “billboard”; in Japan it is used in a number of everyday ways, not only as the formal concept of “Kanban”. The basis of Kanban is to find the most restrictive bottlenecks in a system and smooth the flow through the chain, allowing optimum continuous delivery without buildup at critical points.

In other words, work is Pulled through as the capacity of the flow permits, rather than Pushed in when it is requested.

This was used in production with TPS to augment other Lean practices and ensure the most efficient throughput possible. It is a visual aid in decision making for what, when and how much to produce – Kanban is concerned with limiting resources and work to deliver a smooth and ultimately more productive workflow.
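A minimal sketch of that Pull mechanic might look like the Python below. The column names and WIP limits are hypothetical examples; the point is that the capacity check, not the request for work, is what moves an item:

```python
from collections import deque

# Illustrative sketch only: a three-column board where work is Pulled
# downstream when (and only when) the next column has spare capacity.
wip_limits = {"todo": None, "doing": 2, "done": None}   # None = unlimited
board = {
    "todo": deque(f"task-{i}" for i in range(6)),
    "doing": deque(),
    "done": deque(),
}

def pull(src: str, dst: str) -> None:
    """Move one item downstream, but only if the destination has capacity."""
    limit = wip_limits[dst]
    if board[src] and (limit is None or len(board[dst]) < limit):
        board[dst].append(board[src].popleft())

for _ in range(10):          # each tick: finish work first, freeing capacity,
    pull("doing", "done")    # which is what allows new work to be pulled in;
    pull("todo", "doing")    # nothing is Pushed in just because it was requested

print({col: list(items) for col, items in board.items()})
```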

A basic Kanban limited WIP progression

 

Kaizen

The third part of the TPS triumvirate. Kaizen means improvement, and is taken in business to mean “continuous improvement” of a process; it originated in manufacturing but has become strongly associated with software and DevOps/OpsDev.

Kaizen is as much a culture as a Lean practice, not a systematic process applied at a single point; it requires investment from all stakeholders to achieve, and must be implemented from leadership down.

It is not enough to simply fall into rigid adherence to a single workflow, because circumstances in business change all the time. Kaizen is the ideal of always striving to become better, whatever the circumstances, to reach the optimum throughput of value.

A Kaizen continuous improvement cycle

 

Agile

Agile as a core concept is a framework based on a manifesto. It is somewhat related to Lean practices, but emerged from a variable, complex environment (software development) rather than a complicated one (a production line or single factory business unit, for example). This makes it uniquely suited as a set of ways of working with industries or business units that are in constant flux.

Agile was conceived to adapt to changing requirements and customer needs, but also to cut waste and deliver value faster, using an iterative and incremental approach.

This has seen some success, because an adaptive approach will by nature be able to integrate with many different organisations and situations, but it is also seeing some problems. By its very nature, Agile is an agile concept – flexible, organic, and applicable to a variety of situations. Once over-constrained to try to make it easily repeatable, it ceases to be agile; if you succeed in turning Agile into multiple-choice Waterfall, you have removed everything that makes it effective.

There are a number of approaches with greater or lesser constraint. Scrum is a methodology of applying Agile. Scaled Agile Framework (SAFe) is another; XP, and IBM’s recent push of their Agile Thought Leader certification, yet others. None of these represent the actual core concepts of Agile, but take from those core concepts. They can be effective – or lose effectiveness – in variable amounts and circumstances.

In fact, the further we go into certifying, codifying and constraining Agile for templating, the further we move from agility, and the more concerned we become with dogmatic definition and display over the fundamental principles and application. Not having qualifications in the above doesn’t mean you can’t work or think Agile, and by introducing a restricted path to becoming agile, they may constrain and diminish agility. But what this does is allow humans to feel comfort and grounding.

What is being marketed by many consultants and businesses, then, is not Agility per se, but predictability. Constrained Agile practices (in my view) are designed to give some cessation of uncertainty, not a guarantee of agility of practice. The more certainty humans have, in fact, the less relevant agility they are likely to have, and vice versa. Each situation is different, and that’s what an Agile approach is really all about – preparing for and reacting to change, in context, in whatever way is required.

This is a strange dichotomy, similar to that of security vs accessibility; they pull directly against each other. To be more secure, something must by definition be less easy to access; to be more accessible, it must be less secure. To have certainty and predictability, something must be less Agile; to be more Agile, there is by nature less recipe-template copying possible. The best application will require analysis of a situation and a balanced approach.

 

An Agile overview

 

Cynefin

Cynefin, as seen in previous posts, is a way of understanding human decisions and complex situations scientifically. As with all science, it’s an evolutionary work in progress, constantly refining and being refined. It is concerned with sense-making – allowing the data to define possible solutions – rather than categorisation, which forces data into preconceived constraints to fit expectations and limits the possible solutions.

There are several methodologies that consider decisions similarly (the Stacey Matrix is one, albeit different in approach); Cynefin was born from the processes Dave Snowden explored while working at IBM Global Services to help manage intellectual capital, and was then developed further into a framework using scientific methods to evolve and comprehend (what ended up being understood as) complexity.

 

The Cynefin Model

 

Currently Cynefin consists of a 7-Domain model: 2 Liminal domains (Open, Complex <-> Complicated, and Closed, Chaos <-> Complex), one investigative domain (Disorder), and 4 main domains (Chaos, Complex, Complicated, Obvious), with each of the 4 having a sub-domain containing 9 distinct areas, only the centreline of which gives coherent transitions to the adjoining domains.

 

Cynefin Sub-Domains, Liminal Domain Transitions, and the Path of Coherency

 

The Obvious domain and the Complicated domain are both ordered domains. Complex and Chaotic domains are unordered. Each domain has unique Practice that is applied, and a different methodology of decision making in order to sense-make. Each has its own action process; each has different numbers and types of constraints that define it.
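Those per-domain decision methodologies are often summarised as short action sequences. Here is a rough aide-mémoire written as data – a sketch for orientation, not Cognitive Edge’s own formulation:

```python
# The widely cited per-domain decision sequences, as a lookup table.
DECISION_SEQUENCES = {
    "obvious":     ("sense", "categorise", "respond"),   # best practice
    "complicated": ("sense", "analyse", "respond"),      # good practice
    "complex":     ("probe", "sense", "respond"),        # emergent practice
    "chaotic":     ("act", "sense", "respond"),          # novel practice
}

def approach(domain: str) -> str:
    steps = DECISION_SEQUENCES.get(domain.lower())
    if steps is None:
        return "disorder: first work out which domain you are actually in"
    return " -> ".join(steps)

print(approach("Complex"))    # probe -> sense -> respond
print(approach("unknown"))    # disorder: first work out which domain...
```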

 

Cynefin order and un-order

 

In its simplest form, Cynefin allows you to identify and understand a situation’s basic state. In its deeper forms, it allows highly detailed understanding and the application of concepts to resolve events in the best possible favour.

Scrum, Agile, Lean, Kanban, and Kaizen all fit into the liminal domain between Complexity and Complication: methods of transitioning, probing, and resolving from unorder to order (and back, potentially). Waterfall fits into Complicated/Obvious (order), and is limited precisely to those areas. As soon as a transition occurs away from order, it is no longer suitable.

Cynefin deals with the Social Complexity quadrant of the epistemological matrix, which is unordered and heuristic, reflecting the humans that define, drive, and live within it.

 

                 (Rule-based)               (Heuristic-based)

(Un-ordered)     MATHEMATICAL COMPLEXITY    SOCIAL COMPLEXITY

(Ordered)        PROCESS ENGINEERING        SYSTEMS THINKING

Cynefin Knowledge Management Matrix (Cognitive Edge)

 

Conclusion

Hopefully this has outlined these concepts! I think it’s interesting that, in every example above where there is a certification track, business attention very quickly becomes more focused on the qualification than on the core concepts, because it’s a trackable identifier. (Even recently, when I was asked what I did and said I was a teacher of teachers, I was – fairly aggressively – asked, “What are your qualifications, that you can say you’re a teacher?”, suggesting a conditioned adherence to an identifier rather than to years of available results!) The trap here is that a certificate only signals that the consultant (or whoever) is experienced in certain aspects; a level below that, the certificate track can actually be a sign of misunderstanding and misapplication – or, to put it better, an ossifying of the core concept.

It is important to understand that a certification or qualification is the beginning of understanding and application, not the end.

For me, as with most things in life, this comes down to a balance; it is possible to achieve a level of agility in some areas and a level of certainty in others, and all of these concepts, as we have seen, have their places (and times) in complexity. You cannot therefore simply template, simplify, condense, and certify (and thereafter not deviate) without running into the undeniable reality of variance (especially when you seek to remove all variance!). No one approach is perfect; a combination of them all is often required, with the understanding that Cynefin and similar frameworks are methods of comprehension.

Every one of these approaches relies on communication, learning, and investment to succeed.

 

Why we should be Contextually Multi-Methodology

Interpersonal connections, agile management, waste management, resource flow management, continuous delivery and improvement, complexity adaptation and exaptation, value and delivery coherency, and teaching and learning patterns all combine into a holistic (or symbiotic) method of understanding, cohering, and progressing at every level of an ecosystem. I have realised that my overarching approach to this is contextually multi-methodology.

Understanding and choosing when to use any or all of these is critical; all too many organisations pick the one they like – a buzzword, or one traditionally used, or a fad, or one that worked for another company – and focus on that alone, without realising that the entire landscape may require shifts between them (or multiple simultaneous applications) to maintain effectiveness. Contextual multi-methodology is agility in the broader sense, behind the currently accepted, variable definition of Agile.

Finally, we need to use the principle to succeed, not be seen to be merely using the name of the principle. The two are not the same.

I hope this has given an interesting overview of the terms; what I use is an ecosystem approach that exists between all of these concepts, and the teaching and learning pattern specialisation binds them all together and allows them to be communicated and understood effectively.

There are plenty more concepts and frameworks out there that are as efficacious as anything here. More on those another time!

(This article has been updated to reflect further consideration)
All frameworks, concepts and methodologies discussed in this blog are the right of the originator.

 

 

Scuba Diving Part II: Unexpected Verification

Having now been diving again, and having (ironically) experienced a very interesting complication, I can add briefly to the previous post, Of Scuba Diving, Cynefin, & Value Delivery.

 

Something really sucked

My first dive was not a success. The delivery of the value was… well. Sub-optimal isn’t quite the word. I took an unknown quantity with me (a technical diving wing and plate) as part of my own gear, including a new regulator setup. This is designed to be better than rental gear, relaxing you and improving all aspects of the dive. To my shock and horror (this is a bad thing underwater at ~24m), my air was going down as if I had a leak. 180 bar in 28 minutes is not normal!

I’ve never seen anything like it. This was using low weight (which was also odd: I normally dive with 2kg, and ended up needing a lot more even to descend on the first dive) and using my breath for buoyancy, not heaving like a runner on land as many people tend to when they start diving. I know it had been a while, but… this wasn’t normal.

Luckily feedback is constant with diving, especially if you wish to continue breathing, so I had plenty of time to consider options and causes. I tried upgrading to nitrox at EAN32 and a 15l tank for the next dive… I managed 37 minutes at max depth of 29m.

For those who don’t dive, this is ridiculous. An EAN32 oxygen/nitrogen mix extends your no-decompression limits, and a 15l tank carries considerably more gas than a standard one, yet by the time we surfaced I was at an incredibly (and almost dangerously) low 20 bar. You should always plan to surface with 50 bar as a reserve, and I’ve never failed to do so before. I was sucking incredible quantities of air, despite some experience and careful usage. Admittedly, it had been a year since my last dive, which is quite long, but this was still way out of projection.
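For non-divers, the arithmetic makes the point starkly. A standard way to compare gas usage across dives is the surface air consumption (SAC) rate: litres breathed per minute, normalised to surface pressure. The sketch below assumes a 200 bar starting fill (not stated above) and uses max depth as a crude stand-in for average depth, so treat the output as ballpark only:

```python
def sac_rate(bar_used: float, tank_litres: float,
             minutes: float, depth_m: float) -> float:
    """Surface Air Consumption in litres per minute.

    Ambient pressure in atmospheres is roughly depth/10 + 1. Using max
    depth rather than average depth inflates the denominator, so the
    real figure would be somewhat higher still.
    """
    ambient_ata = depth_m / 10 + 1
    return (bar_used * tank_litres) / (minutes * ambient_ata)

# Assuming a standard 200 bar fill (the starting pressure isn't stated):
print(f"EAN32 dive: {sac_rate(200 - 20, 15, 37, 29):.1f} l/min")  # ~18.7
# A relaxed, experienced recreational diver is often nearer 10-15 l/min.
```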

So what happened? Why was I suddenly emulating the finest vacuum cleaner? And how is this related to my previous article?

 

Context Matters

Firstly, I was dealing with a set of unknowns. I’d never used this rig before here – only in fresh water over a dry suit, with the guy who sold it to me as a “huge improvement for trim and diving”. I’m not a tech diver; I don’t have the rig, the gear, the training, or the cold water diving experience to utilise it, so relying on his expert advice was in retrospect more about him selling the gear and less about what was right for me as a more tropical diver.

Rule # 1 – ALWAYS test dive gear and/or consider context when possible! I didn’t, and this is how we learn. Long term, there is no failure, only feedback.

Immediate differences were apparent: the water was salt, not fresh. The temperature was higher. I’d never used this getup before. It was a new dive site. It was significantly less comfortable than over a drysuit in a lake. I’d only had 2 hours’ sleep after travelling for around 10 hours (NOT advisable!) and was fatigued and stressed. There were multiple unknowns, and they did not match my projections. I only realised this midway through the first dive.

Does this sound familiar in business? A plan gets set up, it’s worked before, so no one checks this time… it’s only mid-rollout that it becomes apparent that things are not as they should be, and panic and scrabble ensues whilst the stakeholders are assured everything is under control. The perception of value delivery becomes more important than the reality.

What you do must be gauged against the current situation, not estimated solely against the past, if you wish to accurately ascertain the data and act accordingly. I was out of context, and until I considered the context, I could not begin to resolve the issues.

 

Constraints needed identifying

Secondly, I had misjudged the constraints on the dives. What was planned and tested in dive prep – hypothetically, or practically in a different context, and assumed obvious or complicated! – resolved on application to actually be in disorder. This is what Dave Snowden talks about when he mentions the danger of assuming a domain from the start and acting on that assumption. To be fair, given prior experience, it should have been clear, but I hadn’t factored in new constraints which were absent from, or different to, other dives (and, as above, context is key).

A dual-bladdered tech wing with drag and combined 90lbs of lift is not suited to my recreational diving practices. This I now know. It is far too buoyant despite a steel back plate; it changed the limits on air and usage, and the trim was ok, but not vastly improved. It was uncomfortable, stressful to don, and stressful in the water.

I didn’t test; I didn’t cover the new context to understand how the constraints could affect me differently. Once the dive was under way, and I realised my remaining air was dropping like a lead weight, I realised the situation was not only disordered instead of complicated, and resolving into complexity, but in real danger of failure into crisis.

Had I not been more aware, and carried out the obvious/complicated steps and constant checks during the dive (real-time monitoring is key in complexity probes!), I would have – without doubt – considered myself in the ordered domains and likely consequently tipped over the cliff-edge into complacency-induced catastrophic failure (read this as: NEVER fail to regularly check your air on a dive!).

In the end, those constraints – which I understood and knew about, but had not redefined contextually – limited and severely disrupted my dive (and the dive of some of those around me). Some constraints don’t change for diving; and instead of working with them, I ran up against them being fixed and governing my dive.

Hand in hand with this went Practice. Best practice was not achievable; Good practice was adhered to where I could, but it became very clear very quickly I was in the realm of emergent practice.

 

Complexity encroached…

…and came dangerously close to chaos. This can happen at any time because, as mentioned previously, dives contain a number of areas by nature out of our control.

My situation was still safe-to-fail; even had I hit zero gas, both an instructor and a divemaster were on hand to give extra air as we ascended. Plenty for the safety stop, which is a requirement (in complication). Remember Stop-Breathe-Think/Probe-Analyse-Respond; I had multiple options to consider to end the dive safely, but nevertheless, with the stress and confusion of what was happening, I could see the pale edge of panic and understand how even experienced and calm people could cross into it.

This point is where a lot of divers WILL panic, despite the safe-to-fail alternatives, and when you lose reason you are in serious danger, especially in a situation that requires reason from the outset (we’re not designed to breathe underwater, so everything must rely on the application of the reason that led to the setup and implementation of the circumstances; our instincts cannot and do not help us here).

Crisis management was visible, and I relaxed and took stock so I could avoid it. If you don’t do this when diving, you are in real trouble. The trouble was, this detracted from the dive and the goal, which I achieved, but would have rather spent longer experiencing!

 

Analysis

I decided to consider what had happened, and grouped data by possible impact. My new regulator had a venturi switch (which I’d never had before – it governs pressurised airflow through a system), which I forgot to switch on. Perhaps that had an effect? I hadn’t dived for a while. Perhaps that had an effect? What data could I look at?

I pondered what had changed since my last tropical dives, and decided that rather than the ever-tempting process of categorisation, I would allow new understanding to emerge from the data.

So I tested different configurations. Gas, tank size, weight, and so on, all in the correct medium (salt water, which has different buoyancy). The regulator was discounted as an issue (it was brand new and very efficient, and unless it’s leaking, it turns out a regulator has very little effect on consumption).

I tested ideas by referring to multiple instructors at once, people who do this every day and have different gear and requirements, both men and women (air usage is heavier in general for men). I ran distributed brainspace probes for possible issues, and many possibilities were thrown up; at least one was known to be naïve (I asked other divers and even students what they thought). The multiple experts knew the dives and how the baseline metrics within a given scope should work.

We had a lot of different ideas, and what came out were three main points:

 

• Context was key. The environment, and what others were successfully using, needed to be the baseline; expert advice eliminated possibilities such as the new regulator making that much of a difference.

• Differences from the last successful contextual dive were crucial. (All rental equipment then! This time, my own gear, which had the diametrically opposite effect to that expected.)

• Eliminating the differences, one by one, with feedback after each, to see what happened. (Again, I had suspicions.)

 

Finally, I tested states of mind and methods I knew had worked in the past, and evaluated why they might not work here. I got some sleep; I relaxed; I still found issues.

What was left after multiple probes, concurrent mindspace sharing with experts, my own gut feeling, and multiple dives with different setups appeared to be one key factor remaining:

The tech wing and plate.

 

Resolutions

We couldn’t ignore the data that had emerged from our discussions and tests; there was little left apart from a significant change in my physiology, which was not a great consideration to contemplate.

So for the next dive I hired a regular BCD (a standard dive jacket), connected it to a normal tank of air (a baseline against the other divers again), and used my own regulator.

Three of us dived. I came up with roughly the same air as the divemaster, after a relaxed dive with near-perfect trim (I trim weirdly, more on that another time) – 50 bar after 39 minutes, max depth of 24m.

Let’s put that in perspective: an EAN32 dive with a 15l tank lasted 37 minutes and nearly ended in crisis; an air dive with a 10l tank lasted exactly as planned – in line with the divemaster, in fact – and I was not the one who ended the dive; my buddy was. I was calm, collected, enjoyed the dive, and found myself exactly back where I remembered being: the peak of efficiency, trim, value delivery, and experience. The difference was astounding; all my anxiety and fear had vanished.

(The relief you find when it’s not actually something intrinsically wrong with you is… profound!).

What did I do here with regards to Cynefin?

 

I recognised that I was in a complex, unexpected scenario; I probed, analysed, and then changed significant constraints based on context.

 

From this point on, my dives transitioned back into complication, and progressed as planned.

How directly analogous is this to the business situations mentioned in the last blog post, where a company makes assumptions, often untested, and then finds itself fighting to mitigate or avoid disaster and still deliver any value? Often, changing a constraint in complexity delivers a profound change – and delivering value is, ultimately, the primary goal (beyond staying safe, both long and short term).

 

Conclusion

So it turns out that my suspicions were correct, and heavily influenced by complexity – not only the new site, the new experience, the time since the last contextual dive, and the new gear, but also anthro-complex considerations such as stress, fatigue, and the alarm induced when expectations were not met. It was my new, context-untested gear I had made assumptions about, but all these things had an impact. This was not an obvious or complicated resolution.

I still achieved my goals despite it; I dived with Thresher Sharks, and got some amazing footage (I might even pop some on here).

I wasn’t expecting to have to apply elements of the last post so soon, but I’m glad I did, in a way; it validates the comparisons.

Safe coherence out there!

Of Scuba Diving, Cynefin, & Value Delivery

It struck me recently that a good way to understand and perhaps even react to the challenges of modern management science and organisational value delivery might be to consider scuba diving.

Imagine, if you will, that an organisation or project might emulate a scuba dive, with a remarkably similar line of coherency through Cynefin.

What on earth am I talking about? Bear with me… I’ll explore organisations, basic Cynefin principles, workflow, and, of course, the diving part.

How is this relevant to business?

A number of the current issues faced by organisations are challenging enough that entire consultancies and processes have sprung up relating to Agile, Lean, Complexity, Problem Solving, Value Delivery, Training, and other integrated practices. Each one of these is part of a whole, flexibly applied approach, rather than a singular answer.

It is becoming widely recognised that entire industries – let alone organisations or business units – are now traversing a little-understood landscape which is seeing them intensely pressured, with a loss of value delivery. Both Dave Snowden (Cognitive Edge) and Katherine Kirk (Agile Coach/Speaker) speak globally on the subject of the changes in management, industry, and business resolution, and more and more companies are realising there is a piece of understanding missing around the delivery of value.

It is extremely difficult to persuade leadership to go against tradition, company culture, and the tempting expectation that data can be summarised into simple, repeatable decisions, even when these are clearly impeding innovation or expansion (as is now seen cross-industry – the stifling of innovation and a downwards dive of productivity; Snowden). Often, either an adjustment needs to be made somewhere in an organisation where it can be safely demonstrated, or the environment shifts such that the organisation has no choice but to react appropriately to survive. The second is usually not desired, as it probably requires crisis management – but handled correctly, this is also where true innovation lies (Cynefin, running innovation solutions teams with crisis management teams, Cognitive Edge).

In both instances we have instinctive or conditioned reactions which may worsen the situation – requiring a more reasoned approach – and a general inability to intuit the necessary actions.

 

So why the sinking feeling?

In pondering upcoming dives and my current consultancy, it occurred to me that there were some remarkable parallels and takeaways between business and diving, and that the latter could be used as a good example of some of the concepts.

Scuba diving is an interesting lesson in avoiding reductionism, agile assessment of situations, considered action with the ability and requirement to act immediately if appropriate, refining a plan to get the maximum effect with limited resources, and required planning and high levels of order that can be – and are – still immediately affected by unpredictability and complexity. At the same time, all divers involved strive to improve the dive as much as possible until the dive ends; kaizen, if you will.

You require strategy, tactical responses, and a lack of politics and ego for a dive to be safe, productive, and succeed. Every diver is a stakeholder, and empowered to give valid input; every diver drives success of the dive.

In any situation when you are diving, you are in an inimical environment that is extremely unforgiving of the unprepared or error-prone. Most of this is easily avoidable via preparation, understanding, and action (or calculated inaction). Recognising warning signs is key, because your options are constrained by several critical thresholds.

If you encounter an issue when you are diving, from a minor adjustment up to a major incident, there is a standard response:

Stop-Breathe-Think-Respond

Following these steps as much as possible is critical, as panic not only drastically increases use of your limited, most valuable resource (in this case, air) but it can lead to potential loss of life.

For me, this sequence is an interesting parallel/precursor to engaging the more involved responses of sense-making and Cynefin.

 

Cynefin – a closer look

This is a good moment to explore the basics of Cynefin and how it can be used to optimise organisations and situations. I’ll go into more detail in another post, but for now, we’ll focus on the basic model and what it contains, and I’ll give some diving and business related examples (and hope it makes sense!).

Cynefin is a framework created by Dave Snowden and Cognitive Edge, and is a constantly evolving, science-based method of understanding anthro-complexity and how to best manage human issues. Human issues affect everything in our lives, because everything we do relies on human interaction – organisations, products, services, families, and more are MADE of – or by – humans. It works on a naturalistic basis to allow sense to emerge from data rather than the usual human practice of attempting to force data into categories for understanding; this latter approach often constrains our perceptions and our options, but is our usual method for dealing with things.

Management science and organisational disruptions are two areas Cynefin has been applied to with great success.

 

The Cynefin Model. All rights reserved Cognitive Edge

 

Here we have a very basic Liminal Cynefin model, with seven domains. The main four domains are:

Obvious, dealing with ordered things anyone can grasp, such as moving a mouse on a computer and watching the cursor move with your actions, or swimming up or down to move up or down in water; direct and obvious cause and effect. The danger here is complacency – because if failure happens, you fall off a “cliff-edge” into chaos and crisis.

Complicated, dealing with ordered things requiring expertise to understand, such as developing in a coding language, or understanding the gas mixes at relative depths; multiple possible causal links.

Complex, dealing with unordered things that are not obviously causal and require experimentation and feedback to understand, such as a new software release’s impact and estimated success in a marketspace, or currents and weather changing during the course of a dive; no causal links discernible in advance, and a requirement to probe before you can respond appropriately.

Chaos, dealing with unordered things that have no causality and are in a state of crisis/emergency, such as new software blue-screening multiple clients’ mission-critical servers upon release, or a sudden loss of buoyancy control underwater; no time to explore cause and effect, you must act immediately to avoid catastrophic failures. Innovation typically lives here.

In addition, we have the central domain of Disorder, in which we are not yet sure which major domain a situation falls into, and two liminal domains:

Complex/Complicated, the liminal zone through which you can transition from unorder into order (and back again if required). This is where Scrum and similar Agile concepts, Lean, Kanban and Kaizen (and others) sit. I’ll cover these in another post in more detail.

Chaos/Complex, where controlled shallow dives into chaos can be performed to spark innovation and new goals, or you can move from crisis to complexity by the imposition of constraints.

It’s worth noting that Liminal Dynamics (i.e. the transitions between states) are at least as important as fitting things into the major four domains, and constraints and practice are both areas that influence understanding too, but I’ll attempt to cover Cynefin another time with regards to problem solving.

The last two things I want to mention about Cynefin here are that 1) order and unorder are both manageable and have different applications, briefly explored in my post Fearing Change and Changing Fear, and 2) each major domain has a sub-model which traces a path of coherency (the logically supported optimal continuous pathway of productivity in business; a way of understanding the degree and nature of evidence that supports either a planned action or a situational assessment – Snowden) and links the domains from Chaos through to Obvious. Again, more about this another time.

So, there’s some Cynefin in a nutshell!
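
Since each main domain has a well-defined decision sequence (which I’ll use as headers in the next section), the mapping is simple enough to capture in a few lines of code. This is a minimal illustrative sketch in Python – the domain names and sequences are Cognitive Edge’s published decision models, but the enum and helper function are purely my own framing:

# Illustrative sketch only: the published decision sequence for each
# main Cynefin domain. The Domain enum and respond() helper are
# hypothetical framing, not part of the framework itself.
from enum import Enum

class Domain(Enum):
    OBVIOUS = "Obvious"
    COMPLICATED = "Complicated"
    COMPLEX = "Complex"
    CHAOTIC = "Chaotic"

DECISION_MODEL = {
    Domain.OBVIOUS:     ["Sense", "Categorise", "Respond"],  # best practice
    Domain.COMPLICATED: ["Sense", "Analyse", "Respond"],     # expert analysis
    Domain.COMPLEX:     ["Probe", "Sense", "Respond"],       # safe-to-fail experiments
    Domain.CHAOTIC:     ["Act", "Sense", "Respond"],         # act first, stabilise
}

def respond(domain: Domain) -> str:
    """Return the decision sequence for the domain you have sensed."""
    return "-".join(DECISION_MODEL[domain])

print(respond(Domain.COMPLEX))  # Probe-Sense-Respond

The point of the sketch is simply that the correct sequence depends on first sensing which domain you are in; run the Obvious recipe in a Complex situation and you risk the cliff-edge into chaos.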

 

Relating Cynefin to scuba diving

We can compare these by investigating the actions integral to diving. With regards to the base scuba diving precept of Stop-Breathe-Think-Respond, you would encounter it mostly in the Complicated, Complex and Chaotic domains once the dive has begun.

Sense-Categorise-Respond

Base planning often sits in the Obvious domain: for example, the set-up. Have you checked your BCD inflates? Have you checked your air quality? Have you cleaned water out of the connector? Have you used your second stage so you know you can breathe? Do the gauges work? And so on. These are step-by-step stable best practices anyone can (and must) carry out, and they are vital to success.
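
Put in code terms (purely illustrative – real pre-dive checks follow your training agency’s checklist, and the item names below merely paraphrase the questions above), Obvious-domain work is a checklist that must be run in full, every time:

# Illustrative sketch: Obvious-domain work as a mandatory checklist.
# Item names paraphrase the text above; not a real dive checklist.
PRE_DIVE_CHECKS = [
    "BCD inflates",
    "air quality checked",
    "connector clear of water",
    "second stage breathes",
    "gauges working",
]

def ready_to_dive(passed: set[str]) -> bool:
    """Return True only if every check has been passed."""
    failed = [check for check in PRE_DIVE_CHECKS if check not in passed]
    if failed:
        print("Do not dive - failed:", ", ".join(failed))
    return not failed

ready_to_dive({"BCD inflates", "gauges working"})  # prints the three failed checks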

Sense-Analyse-Respond

But then we have a foot in the Complicated domain. Do we need to calculate the oxygen percentage of a nitrox mix for a mixed-gas dive? What is our calculated depth limit so we don’t potentially die from oxygen toxicity? How are we monitoring this? What depth limit, surface interval, and decompression time will be required? These require analysis and expertise. Not everyone can do this by intuition or by simply following instructions, because there must be understanding, experience, and responsibility. You shouldn’t get on a plane within 24-48 hours of diving because of pressure differentials, for example, but without certified knowledge you might not know that.
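
To make that Complicated-domain analysis concrete, here is one such calculation sketched in Python: the maximum operating depth (MOD) for a nitrox mix, using the standard metric formula MOD = 10 × (pO2max / fO2 − 1). The function name and the 1.4 bar partial-pressure limit are my illustrative choices – never plan a real dive from a blog snippet:

# Illustrative sketch: maximum operating depth (MOD) for a nitrox mix.
# Standard metric formula; NOT a substitute for training or dive tables.
def max_operating_depth(o2_fraction: float, po2_max_bar: float = 1.4) -> float:
    """Depth in metres of seawater at which the partial pressure of
    oxygen reaches po2_max_bar for the given O2 fraction."""
    return 10 * (po2_max_bar / o2_fraction - 1)

print(round(max_operating_depth(0.32), 1))  # EAN32 -> 33.8 m
print(round(max_operating_depth(0.21), 1))  # air   -> 56.7 m

Anyone can run the arithmetic; the expertise lies in knowing which limits apply, why, and what to do about the answers – which is exactly what separates Complicated from Obvious.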

Probe-Sense-Respond

We move into Complexity as soon as we set off, even before we’re on the boat. Weather can change quickly. Currents change. Visibility changes. The plan may become unfulfillable as set out. Many unpredictable factors occur that require us to probe the process and change goals based on the feedback, both as the dive commences and continues. Stop-breathe-think/probe-sense-respond. Concurrent dives may also occur in multiple adjacent locations to maximise chances of success in uncertain conditions, which can then be taken into account for future dives.

Act-Sense-Respond

Thankfully Chaos doesn’t happen often, but it is always a very real danger on a dive. A malfunction, an environmental shift, or lack of experience or ability can turn a peaceful relaxed dive into a stop-breathe-think/act-sense-respond emergency scenario.

For example, the time a novice had trouble with buoyancy and was ascending in 7m of water with powered boats overhead springs to mind. This was rapidly moving towards a crisis requiring action. Before the Divemaster could intervene, another novice grabbed his weight belt to help pull him back down – it slipped down to his ankles, making it worse, and he shot up like a cork!

It wasn’t deep enough for decompression problems, but it was shallow enough for boat-to-the-head problems, which can be quite terminal.

This was lurching into “duffers better dead” (Snowden/Ransome) territory a little too accurately. Immediate crisis management was implemented (the Divemaster and I grabbed a fin each and gently pulled him back down whilst his belt and BCD were fixed), using innovation (we used a typically non-tactile bit of gear to stabilise him as he gained practical experience of adjusting critical gear underwater, subsequently explaining this to the others post-dive), and we then transitioned back into the base “project” of the dive – with no damage, thankfully, except his frantically-used air, and a number of lessons learned by all the newcomers. The dive was ultimately cut short as a result, and deviated from the route.

So we traversed a path of coherency, including a recovery from crisis management. There is generally more response time for this in business, but considering the organisation as an organism (or better, an ecology) means it’s relative, and just as impactful.

Diving is more immediately critical to us because we cannot breathe water or safely ascend uncontrollably. We are in a hostile environment, and we know it every second. We are forced to deal with this to be safe. But business should be considered in every bit as critical a fashion, as the market is also hostile and unforgiving, and critical timescales are relative (companies are bigger and slower). Sink or swim; complacency kills.

One note of interest is that there is no single fail-safe per se in diving, because if something can go wrong it will go wrong. Instead, there is a strong concept of multiple concurrent options that can be implemented during a scenario, each with variable chances of being the best option depending on circumstances (off the top of my head, I can think of four if your main regulator stops giving air, for example). It’s not quite safe-to-fail probes in complexity, but it has similarities. Scuba diving is about resilience for the sake of safety.
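
Sketched as code, the pattern is a prioritised chain of fallbacks rather than a single fail-safe. The options listed roughly match what recreational training teaches for a failed primary regulator, but the ordering, naming, and selection logic here are purely illustrative:

# Illustrative sketch: resilience as a chain of concurrent options,
# tried in order of preference. Options and ordering are illustrative only.
FALLBACKS = [
    "switch to your own alternate second stage (octopus)",
    "take your buddy's alternate air source",
    "buddy breathing from a single regulator",
    "controlled emergency swimming ascent (CESA)",
]

def resolve(incident: str, viable: set[str]) -> str:
    """Pick the most preferred option that circumstances allow."""
    for option in FALLBACKS:
        if option in viable:
            return f"{incident}: {option}"
    raise RuntimeError(f"{incident}: no viable option remains")

# e.g. your own octopus also failed, but your buddy is close by:
print(resolve("regulator failure", {FALLBACKS[1], FALLBACKS[3]}))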

Most of the domains of Cynefin are passed through on many dives in one way or another, and an organisation is immersed in them and has its own line of coherency through them too, both in part and as a whole. Resilience is key in business, too, both at an organisational level and a project level.

 

So what can we learn from all this?

Now you’ve read the above, take a moment to try applying this to a past or present organisation, and see what correlates. You might be surprised how many of the same domains that fit the overall structure of a dive also fit a business’s goals, culture, methodology and leadership requirements.

Consider how approaching a situation as if it were a dive could have provided better results (or not!). Consider as well how what you understand of the concepts of Agile, Lean, WIP limitation, improvement, and problem solving would apply to a dive, and to your example organisation.

You can also try using examples from a dive, which are simpler than a company’s projects, and apply them to situations you feel parallel. I’d love to hear some of them in the comments.

The purpose of all of this isn’t to focus on diving, of course, or suggest it’s immediately translatable to business; it’s to prod a different perspective, another application of principles that are key to both.

 

Should we run organisations/projects more like a dive?

I think there is a good argument for consideration, if nothing else!

Diving is an interesting operation which succeeds when it is collaborative; everyone diving is a stakeholder. Everyone is empowered to make suggestions and get the attention of the group, and – if one diver suffers a mishap – the group responds as a whole to mitigate the issues and produce the optimum possible continuing flow of the dive. As soon as you hit the water, every stakeholder is continuously reacting and improving the group’s experience – via buoyancy, adjustments, suggestions, and constant, communicated feedback from the surrounds. You inevitably experience better flow by the end of a dive than at the start.

The concepts of Kanban are very much involved. Every diver consumes air differently, has different buoyancy, trims differently – all of these affect the overall dive time, depth, and quality on an in-progress basis. Movement speed is limited to the slowest mover. Dive time is limited, if there is a problem, by the first diver to reach their air reserve, by body composition and potential hypothermia, or even by descent into the chaos of losing track of a dive buddy, at which point an immediate constraining if/then scenario kicks in and the entire dive ends.
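
The bottleneck logic is simple enough to sketch (the names and numbers below are invented): like WIP-limited flow, the dive is constrained by its most limited participant.

# Illustrative sketch: the dive, like a Kanban flow, is constrained by
# its bottleneck - the most limited diver. All figures are invented.
divers = {
    "Alice": {"air_minutes": 55, "speed_m_per_min": 12},
    "Bob":   {"air_minutes": 40, "speed_m_per_min": 15},
    "Cara":  {"air_minutes": 70, "speed_m_per_min": 9},
}

max_dive_minutes = min(d["air_minutes"] for d in divers.values())  # Bob: 40
group_speed = min(d["speed_m_per_min"] for d in divers.values())   # Cara: 9

print(f"Dive limited to {max_dive_minutes} min at {group_speed} m/min")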

 

 

Safety is first, success/enjoyment is second, and new goals may arise as all divers experience multiple probes regarding the direction, focus, or decisions of the dive. Strategy is adhered to overall, but the optimum path is per dive, not set for every dive. The dive emerges from the situational data; we don’t categorise and limit the dive unless it crosses into chaos. Although there is overall loose hierarchy in terms of a Divemaster/guides, it is more of an ecosystem, where everyone is focused and has stakes in delivering the value. Everyone is trusted to do so. By and large, everyone delivers. It’s worlds away from how businesses mostly run, but it’s startlingly similar to how businesses are beginning to understand they should run. The interest in Agile and similar methodologies is huge, but sometimes poorly understood; industry has yet to decipher the new landscape of value delivery.

Loosely translated to a company, you can probably see how running in this fashion would be efficient and beneficial. Long term company safety is critical; there’s no point in succeeding in an immediate project if it harms the longevity of the organisation. Projects should enhance it! Collaboration and investment throughout the entire value stream is key, and organisations should not be afraid to work towards goals that may shift slightly. Understanding how and why is important.

 

Applying these parallels to Business

Many organisations are currently adrift in an unfamiliar – and in terms of understanding, hostile – environment. With the fast shift of the global economy, service-driven offerings, and the clash of bureaucracy and entrepreneurialism, the drive to achieve, to innovate, to be successful has never been higher – and companies can struggle to keep pace.

I’ve seen organisations getting more and more desperate to deliver the value they know they contain, only to stall due to reliance on the old methodologies of getting back on track (cuts, leadership changes, management fads, old-school management techniques, reductionism, simplified recipe transplants, demands to innovate something to stay relevant, and so on), or a misapplication of the new buzzword, Agile, and related patterns.

Incorporating Agile techniques is a very valid and beneficial action, as long as it isn’t immediately constrained and “certified”, or only adopted in the manner of a Cobra Effect – in other words, appearing to be done, using the language and the visible basics, but not actually being undertaken correctly, and sparking zero cultural change.

And therein lies a large part of the issue – Agile can’t be Agile if it’s constrained and simplified as a recipe for organisational transplant, or if it’s not really implemented at more than the surface, but these approaches are how many companies appear to be trying to implement it.

“The name of the thing is not the thing – most of us buy the label, not the merchandise” (Weinberg).

It’s also worth noting that no single framework, and consequently few single consultants, actually hold the keys to everything. Agile, Lean, Kaizen, Scrum, and other manifestos/frameworks are all part of something larger, and fit in certain places and not others. Cynefin is a little different; it is an attempt to scientifically and naturally understand this, by defining actions and language, gauging complexity according to naturalistic methods, and allowing the data to give us sense rather than attempting to forcefully categorise it to suit us – because, as we’ve seen, being in the midst of situations tends to limit us to those situations. The fact is, organisations need to adapt to complexity to survive; with a few large exceptions, the world no longer tolerates businesses that can adapt everything else to fit themselves.

On a dive, all divers pay attention to all divers because the success of a dive hinges on all divers. It’s now becoming more obvious that an organisation should pay attention to all its people because the success of the organisation (and subsequently all those people) hinges on all those people. I refer back to my last blog post where I quoted Katherine Kirk, saying “Organisations and people ALL matter, because they drive, innovate and ARE value; we matter because everyone else matters”. I’ll probably keep repeating it, because it’s true.

 

Plumbing the depths for answers

So, when it comes to understanding Cynefin and running an optimised, lean, agile organisation, you could perhaps do worse than consider the comparison to a scuba dive.

Am I suggesting that a dive plan is the same thing as, say, a Scrum sprint flow? No, of course not. A typical dive could only ever have elements of more complex business practices. The idea is – with slight tongue in cheek – to recognise the similarities, understand the benefits of how a dive operates, spark a new way of viewing things, and realise how complexity affects all areas of our lives and must be assessed accordingly to plot a coherent path.

Prepare, recognise warning signs, relax, and deal with situations appropriately; recognise all the stakeholders and the value stream delivery for people; and take satisfaction from successfully completing the delivery of that value. Every dive is different, ever-changing – that’s part of the fun! That’s how business works, too, but organisations are still mired in hierarchy and rigid constraints, and from within the mire it’s difficult to understand how to regain innovation and change for the better.

It’s hard to navigate when you’re too close to the sun – that’s why a neutral consultant or coach can give new insight. They are not restricted by the inbuilt constraints. A consultant’s very lack of deep-level expertise in a subject can be a great benefit (although having knowledge can also be very helpful), and can ultimately help the organisation learn to deliver its value with a minimum of issues. They are “jigglers”, to borrow another phrase from the esteemed Gerald Weinberg: facilitators who have experience and knowledge – not to expert levels within the issue – but who can help those who have it find stability and a new direction.

(Some of them are divers, too.)

 

 

Concepts, Games & Exercises for Engaging the 6 I’s

Following on from the previous free infographic 6 Ways The I’s Have It, here are some concepts, exercises, and games to help understand the importance of the 6 I’s:

 

Interest

Without initial interest, there is little personal incentive.

What stands out? What are the overall learning outcomes? What do you notice, that gives this relevance? What catalyses your desire for this subject?

Inspiration

Without finding inspiration, there is little drive to understand.

How are you motivated to be involved, to use this? How can you see it benefitting you? How can this improve your day to day life and usage? What makes you YEARN to use this practically?

Involvement

Experience is the greatest Teacher.

What is the best way to learn this? How can you engage yourself so you can understand and apply this? How can practice help you discover and absorb the concepts, information and methodologies? How can you Learn by Doing?

Immersion

Without immersion, you may lose what you have learned, and you are unlikely to learn further.

How can you make this new knowledge part of yourself? In what ways can you regularly suspend yourself and your role within it? How can you continue to learn after the initial class? How do you ensure you retain this new understanding?

Investment

The best return for learning comes from use, reliance, and capitalising on both.

Do you believe in what you have learned? How can you use this to further yourself and/or your role? How can it benefit your organisation? What is the real-world return you will get here, day to day? How will your learning continue to grow beyond the class?

Instruction

We learn best by teaching others.

How can you Interest, Inspire and Involve others? How can you both pass on and retain ever-deeper knowledge of what you now know? How can you help others applicably understand and retain this? How can you continue to understand more and deepen your knowledge?

These I’s represent an instructional ecosystem that both teachers and learners are equally a part of.

 

Games & Exercises

Learning is in the I’s of the Beholder!

 

Informal Exercises for Teachers

These can either be approached individually as a teacher, or used as informal exercises at the beginning of a class:

Randomly list basic aspects of the subject you want to teach, and ask learners to pick out what interests them, and why

Ask learners at the start what they would want to teach others about the subject, based on the basic concepts, and at the end ask them how they would now instruct others to inspire them in turn

Ask learners to think back to things they’ve learned in the past, which ones they’ve learned the fastest and most enduringly/completely, and why they think that is

Ask them to consider what the return on investment is for things they learn, and give examples of anything – language, driving a car, career-enhancing management techniques, etc. Ask them to expand this out to include more of an ecosystem, so how it would also benefit those around them and in turn benefit themselves even more

Ask for instances of where “use it or lose it” came true

Ask them what they think a teacher’s job is, and how they would teach the subject

These can be considered either individually, in small groups, or by the room at large.

 

Games to understand the Importance of I’s for Learners

These games can be used as a baseline to spark creative construction or combination of your own games. Feel free to use them in any way appropriate.

Interest:

Have a decent range of subjects on a board. Ask the class to, one by one, pick a subject of interest to them – a new one each time. Then ask them to explain why it is interesting to them and if they think they learn it better or not as a result.

An alternative is to ask them to pick two subjects from a range, one of interest and one of no interest, and to spend 5 minutes researching both to present to the class. When they present, see how much more they do with the one that interests them, then explain what they’ve done and why. Ask them to pick subjects they do not know much about.

The object is to help people connect with and find what is of interest, to them and others, and understand why we tend to only really invest when something interests us.

Inspiration:

Have a random set of subjects, including some seen as traditionally average, and draw one each. Split into groups of 2 and work on understanding what the subject is. Google is allowed! Then try to interest either each other (or the group, depending on how it’s played) and inspire them to want to know more about it.

The object is to help people see what can drive you to learn more about something interesting, and how formerly average things can be presented as inspiring.

Involvement:

Create a game where, to reach the end, everyone must be involved as part of the journey. An easy way to do this is to base it on a choose-your-own-adventure book (I will consider providing some for use for groups of 4/8/12/16 people at a later date!). One learner follows the pages and makes a choice, then passes the book to a random person who hasn’t yet had a turn. At the end, a group decision must be made to choose the final ending. The stories can be in IT, services, industry, fantasy and so forth.

Another, more involved way is to have each randomly chosen person write the narrative forward, doing the work towards a common final goal and taking into account what was written before. Perhaps having choice pages constructed by the teacher would help keep things on the rails; I’ll consider this game in further depth.

The object is to form connections within the group via a task that requires everyone’s practical contribution to complete.

Immersion:

Ask a student to tell a story about something that happened or could happen to someone else. This can be from a related set of subjects, be serious, be humorous, and so on. Afterwards ask them to describe what they see, feel, think about what they’ve described. Then ask them, or their partner if in 2s, to tell the story again as if it happened to them, or a similar story that did happen to them. Ask for the descriptions again, and get the listeners and the speaker to compare them to the previous story.

The object is to help learners understand how immersion and personal experiences are very powerful. Feelings – even simulated – are likely to be far stronger in the second story, as are the descriptions and the care for the subject.

Investment:

Have groups of two consider two methods of getting someone invested in a subject, then present them to the others. Have the class say which method is the one they would want to use and why.

It is also valuable to ask people to give examples of what it is in their interests to be invested in – driving, for example – and how the preceding aspects can shape this.

The object is to prove how much further people will evangelise and utilise a subject they truly believe in, especially if there is a return for them in skills and understanding.

Instruction:

In 2’s, ask learners to teach something about a subject from Interest to their partner, then swap, then repeat, until both people have covered an exciting and a less exciting subject. Note which one they are better at. Have them break down concepts and explain clearly, and see if they discover new perceptions themselves as they explain.

The object is to show that the more you teach something, the more you yourself understand it – and can granularise it – to enable the understanding of others. Note granularisation is not equivalent to simplification (reductionism).

 

At the end of each of these games and exercises, it is wise to explore with explanation how and why it works, and ask learners how it applies to them.

Before the exercises or games are invoked, the environment should be made psychologically safe and relaxed to the point where they are discussable. I prefer smaller classes of (usually) a maximum of 8, so I can focus on mentoring rather than managing large groups, and so people feel better connected, more relaxed, and more able to speak to everyone.

 

Are these concepts helpful? Let me know in the comments!

Infographic – 6 Ways the I’s Have It

One way to consider the teaching and learning process, in any form – from a meeting to education to technical training – is as six key aspects common to any teaching or learning:

For Teachers – those trying to impart concepts, skills, and ideas – these are things you should help facilitate or catalyse for learners, but not dictate or force – you are there to open the door for the student, not push them through it!

For Learners – those trying to understand concepts, skills, and ideas – these are things valuable to helping you learn deeply and broadly.

They flow through Connections, Concepts, Concrete Practice, and Conclusions, the 4 C’s of Sharon Bowman’s highly recommended book, Training From The Back Of The Room, and can chart a path from inexperience through to subject evangelism and teaching.

The next blog post will run through some games and exercises to heighten awareness of the I’s.

Learning Is In The I’s Of The Beholder: 6 Ways The I’s Have It

(Infographic – the six I’s: Interest, Inspiration, Involvement, Immersion, Investment, Instruction; described in full in the post above.)

 

 

Fearing Change, & Changing Fear

 

The Fear of Change

(Quote image: Rosanne Cash)

 

Whether we experience it individually or within human constructs (religion, organisations, families, clubs, etc), there is a Fear of Change ingrained in us in both business and personal life. Humans are comfort-creatures; we value stability and comfort in our lives, be it professionally or at home. So what happens when the ever-changing Universe rudely reminds us that everything is, ultimately, transient?

It is very human to deny that change is happening, that a system has become (or always was!) un-ordered. The reaction is often to then try to impose order (constraints), and often we do this to systems or situations that cannot by nature be ordered.

Change represents the oft-acknowledged deepest fear of mankind: fear of the unknown. We know where we are, and find comfort in that, even in uncomfortable situations; true change will really change things, and this can induce anxiety, worry, discomfort, fear – not only of the consequences, but of the change itself.

If something isn’t working, it needs to change for it to begin working. Sometimes the fear of change is so great that we would rather it simply continue not to work, because at least then we know it isn’t working; in other words, we have some form of certainty. This, of course, isn’t helpful in the long term, for delivering value, or in urgent situations, and to accurately gauge this we also need to understand the benefits or risks of making the change.

 

But what if something is already working?

 

One response is: why change if it works? (Which can also mean: it sort of works, well enough, maybe – and I don’t want to spend money.)

Why indeed? But as with everything, this isn’t a black and white situation, much as we love to polarise. It may be barely working, or require workarounds to complete. It may be inefficient or cause rising/unnecessary costs, or added complication and hassle to daily life. If it works well enough, which is highly subjective, you have to ask if it is worth changing. If the benefits of change are outweighed by the risks or clear negatives, or it is poorly perceived or understood, it is probably not worth doing.

But if you take any organisation with working processes in place, the chances are high that people will usually say, “Yes, it works, sort of – but it could work much better” about many of them, and then specify where the inefficiencies impact their overall effectiveness and workload. (A problem I have often found is that, where an organisation does undertake to make changes – be it a new system, process, or team – it is usually a higher-level decision that often doesn’t fully provide training, positioning, and applicable usage to the people actually doing the job, and can be either too simplistic, over-complicated, or ill-applied – in other words, not appropriate to resolving the core issue. This is why listening to the people doing it matters).

If this is the case, and benefits clearly outweigh risks… why not change it to make it work better?

The place to start with processes, change and the fear of that change is the same: you start with the people.

 

Why start there?

 

All processes, all base decisions, and all value delivered stems from the people within an organisation. People are interconnected individuals working within an organisational structure towards a common set of goals in a variety of ways; without those people – and their interconnections – the innovation, the products, the organisation itself would not exist.

Another way to say this is that people both create and are the value delivered by an organisation. Or, to put it in a more succinct fashion, Value Streams are made of People (Keogh).

So, recognising that the value of your organisation is the people is an important step, for a number of reasons. It is people who fear change, not the products or the infrastructure within an organisation; it is people who make an organisation work.

People fear the change wrought in any organisation because it disrupts processes and workarounds that may work imperfectly but still more or less work, and allow at least some value delivery. Worse, it may cause further inefficiency and unnecessary stress, or expose workarounds that are not strictly in line with company policy – but bureaucracy may have left them no other choice to achieve their business goals, which brings potential personal risk into play even in a clearly failing scenario.

“It works well enough.” “Let sleeping dogs lie”. “Don’t rock the boat.” “Don’t stick your head above the rest.” “Don’t stick your neck out.” “Don’t be sales/delivery prevention.”

These are human expressions of not wanting to cause further potential problems, becoming progressively more fearful of being singled out for causing issues – even when the root aim is to resolve deeper, more fundamental issues within the organisation to provide better, smoother value streams. Politics, bureaucracy, interdependency and tradition can all turn what looks on the surface to be a simple change into a highly complex situation and possibly render a goal unattainable, even though it may be to the greater good of the organisation. In a perfect world, a flexible and reactive enough organisation – one that recognises itself as a complex system overall – shouldn’t need covert workarounds; experimentation should be built in.

A root of this fear lies in uncertainty. People require certainty to maintain stability, comfort, and (relatively!) low stress. Knowing a situation is good or bad is far preferable to not knowing if it is good or bad or even what it is, so the natural inclination is to maintain the status quo and not be singled out, as long as this isn’t disruptive enough to become worse than the potential uncertainty (there is a fantastic example of the effects of uncertainty in a study involving rocks and snakes used by Liz Keogh in her talks).

 

Why do organisations not recognise this?

 

Some do, of course, but not many seem to fully realise the causes behind it. One of the most important things to understand is that the landscape has shifted and is shifting in modern business, even recently. Knowledge has become the primary global economy, with business being undertaken around the world, around the clock, and data being digitised and made available and changeable at exponentially greater quantities and speeds than ever before.

The management of this knowledge and the methods used have become key to an organisation’s productivity, innovation, and agility (Snowden, Stanbridge). Sprawling bureaucracies have given way to entrepreneurial practices, and many companies are caught between the two, trying to apply often contradictory methodologies of both to their staff and their products.

At the same time, the latest and not yet widely-understood shift to virtual systems, the increasing use of AI, and knowledge digitisation has moved business to a realm we have no prior experience of or reference for, and this causes fear and concern because we are being forced to change at both a personal and industrial level. Organisations push back against this by acting as they always have, cutting costs, replacing management teams constantly, and so on, but the simple procedures that once worked do not produce new benefits past the very short-term now.

This is because, without realising it, we are now experiencing the Fourth Industrial Revolution (Kirk), which is an entirely new landscape requiring new understanding and actions. Because organisations do not have either, many of them currently “feel like they are in Hell” as a result of the Dark Triad (Kirk): 

 

• Stress

• Fatigue

• Antagonism (“Arseholeness!”)

 

…and they occur both at an organisational and a personal level.

One of the key reasons for these responses may be the still-existing, long-term investment in structures based in Taylorism (which dates back to the 19th century, yet is still a core of today’s management science), a root of Process Engineering. This can be interpreted as the belief (and action upon the belief) that an organisation is a machine with people as cogs or components that will consistently deliver the exact same output in quality and quantity – or, that an organisation is both inherently ordered and conforms exactly to rules.

 

 

                 (Rule-based)                (Heuristic-based)

(Un-ordered)     MATHEMATICAL COMPLEXITY     SOCIAL COMPLEXITY

(Ordered)        PROCESS ENGINEERING         SYSTEMS THINKING

Cynefin Knowledge Management Matrix (Cognitive Edge)

 

Despite the realisation, for decades now, that Taylorism is actually detrimental – because that just isn’t how people work – and the supposed eschewing of it in favour of a more Systems Thinking approach (or, where an organisation is ordered, greater flexibility from using heuristics) and a shift in perception from “machine” to “human” (Peters, Senge, Nonaka), businesses have really only changed it slightly.

There has been a concerted effort to balance the Mintzberg et al Process Engineering-centric Schools of Strategy (Designing, Planning, Positioning) and the Systems Thinking-centric Schools (Entrepreneurial, Cognitive, Learning, Power, Cultural, Environmental, Configuration), but in my own experience of companies, especially some US-based organisations, I have still found a far greater leaning to the Process Engineering side with some nods towards Systems Thinking, and a greater perception of an organisation as a machine, not people. In other words, we try to force an organisation to fit the modified concepts of Taylorism because it is trusted and traditional, despite being proven ineffective, and act as if it will forever output the exact same quality and quantity.

Of course, even the most balanced approach between the two still treats an organisation as an ordered construct with a variable spectrum of rules and heuristics. But the very presence of humans – who vary output, focus, workloads and innovation, both within and driving an organisation, depending on factors that aren’t necessarily causal or logical (that is to say, complexity) – means an organisation can’t be a rigidly ordered system. It is by nature complex, un-ordered, yet the tools we mostly use to resolve issues are based on it being an ordered structure with simple rules. The understandable preference, based on certainty and comfort, is to seek simplistic, identically-repeatable approaches (“recipes”) based on clear and idealistic outcomes (Snowden).

 

 

Ontologies in relation to basic Domains (Cynefin)

 

What’s interesting is that people will try to manage an organisation as ordered when it isn’t, yet adapt very quickly to managing home life which is similarly un-ordered, often within the same day! This brings into focus the concept of our different identities, or aspects we transition between seamlessly to fit into different situations.

It is also very easy to miss that many instances can be multi-ontological. As a very simple example, if I run a technical training lab, I deal with an obvious domain in much of the basic subject, but also complicated areas; the systems I use to train are largely complicated; and the students themselves bring complexity, as the students drive the class and every class is different from any before as a result (it’s rare that a class descends into chaos, but it’s not unknown, and usually requires outside influence!). So I can end up dealing with all three ontologies in one course! Order, un-ordered complexity, and un-ordered chaos all require different management, but they can all be managed (I touched on some of this briefly in my last blog post: https://www.involvemetraining.com/best-practices-vs-heuristics-in-teaching/).

 

 

The visible effects

 

By not changing from primarily Process Engineering thought structures for 50+ years of business practice, and many organisations not fully comprehending that the shift of many markets from product to service requires organisational agility (as a core concept, not a modular application!), markets are seeing the stifling of innovation and a downwards dive of productivity (Snowden).

This inevitably sparks a frantic reaction (change of focus, sudden arbitrary swerves to “disrupt the market” without recognition of opportunity outside a narrowly focused goal, cost cutting, redundancies, management team swap-outs, further cash injections, etc) without looking at what is working, and more importantly understanding that this is not a one-fits-all recipe that can merely be transplanted inter-organisation for success (Snowden).

It is becoming clearer that collaborative competitiveness, reactive approaches, SME level agility and innovation are where markets now grow in this new landscape of people being and delivering value via a knowledge economy, and this is a beneficial realisation for organisations struggling “in Hell” to take a first step into new understanding.

 

 

So what now?

 

“…Where we go from there is a choice I leave up to you…”

 

The more I look at the current struggles to achieve the results of yesteryear, my own experiences of the last twenty years plus, and the new evidence of Industry 4.0 (Kirk), the more I realise how accurate the above is. Interdependency and collaboration is clearly now essential in a new, barely understood industry of High Demand/Ambiguity/Complexity/Relentless Pace (Kirk). We haven’t been here before.

To find balance and prosperity, and deliver real value once more, collaboration, agility of approach and innovation are all required. We need to sense-make; we need to path-find, or forge our own new paths.

“Reacting by ‘re-acting’, or repeating our actions, merely causes problems to perpetuate. In a new landscape, a new reaction is required for change” (Kirk). This is also one of the keys to Cynefin and managing complex situations; it is virtually impossible to close the gap between the current situation and a goal when dealing with complexity – a system with only some constraints, where each aspect affects all others. Instead, you must see where you can make a change, see where you can monitor that change in real-time, and recognise the opportunities to amplify success and dampen failure as they arise via experimentation (Snowden). Or: instead of trying to achieve an idealistic goal impossible from your current standpoint, make changes to the system that may throw up even better goals, watch for them instead of focusing exclusively on the old goal, and then grasp them when they arise. You must start from somewhere, but the key is to start – a certain step is the first one to conquering uncertainty.

“Organisations and people ALL matter, because they drive, innovate and ARE value; we matter because everyone else matters” (Kirk), and industry becomes, not forced into trying to be a destined-to-fail machine system, but a safe-to-fail ecosystem – holistic and interconnected, not only able to adapt to change, but driven by it.

 

The problems we still face

 

The issue in many organisations, and with many managers, is that it is very easy to believe correlation = causation, and that simple universally-applicable recipes give idealistic outcomes. This leads to problems, and is a driver of the industry “waves” of best practice management fads that don’t work long-term but propagate because they are new, and short or medium term results may have been seen by some other organisations.

What works to fix or improve one organisation is not necessarily going to work for another (in fact it is very unlikely to work perfectly), nor will it survive simplification for general application. Yet this recipe-transplant idea is a core concept still in use, conforming to the Process Engineering ideology. You cannot take something from a complex situation and reduce it to a repeatable generic recipe that works perfectly; it just… won’t. No two organisations are alike. Every instance should be approached, investigated, and worked on individually and holistically to see if it should be managed as ordered or un-ordered (complex or chaotic). There is benefit in seeing what other organisations did to resolve similar problems, as long as it is understood that the approach and fit must be modified: the incorporation of aspects, rather than the dogmatic following of a whole.

Furthermore, the more people find approaches to be effective, the more they seek to codify the concepts – which is fine to a point, but it can easily lead to structuring the approaches, modularising them, and then forcing them back into the ordered ontology (the Obvious or Complicated domains of Cynefin) as simple, universally repeatable recipes, when many are ultimately agile and flexible tools for managing un-ordered systems (Complexity or Chaos). This appears to be happening to the concept of Agile at the moment; it is becoming less agile itself as it is taken in by large organisations and constrained!

At the same time, there are constant clashes intra-organisation. Organisations want to both be fully ordered with infinitely repeatable output, but also flexible and innovative. The first of these is causal (repeatable cause and effect), and the second is dispositional (you can say where you are and where you may end up, even simulate, but not causally repeat or predict). They are very different in nature. By their very nature and composition, an organisation cannot be a simple ordered system, and this is where the work within Cynefin by Dave Snowden into Social Complexity/Anthro-Complexity begins to make sense of these systems and the management of complexity and chaos.

There is also the requirement for a deeper comprehension of the fuzzy liminality of whether or not you should make a change, which differs in each situation; a risk/benefit exercise where we weigh up the benefits  – deep and long term as well as short term – of making a change, where the former is often ignored in favour of short-term profitability. Where the dangers of making a change are not defined or understood, or are clearly not beneficial, it is wise to consider carefully whether you should do so – and if so, what the correct manner of doing so is.

 

From Hierarchy to Ecology

 

One of the fundamental movements that resolves some of these issues I think will be a shift from Hierarchies, where organisations are ranked internally relative to status and authority with a focus on control (power), to Ecologies, where organisations recognise the relationships of every person to each other and to the organisation, with a focus on delivery (value).

This may then bring acknowledgement of change – and of being driven by change – and of the fact that organisations are largely complex and cannot be distilled into simple recipes repeatable for idealistic outcomes. The market, the industries, the universe itself inflicts change, as do the people within, and order is impossible to maintain rigidly, so adaptivity and recognising how to manage un-ordered systems are required.

Before this can happen, organisations (and the management thereof!) need to understand how much efficiency and value delivery they will gain from the also-fundamental shifts in their traditional beliefs: it is understandable that organisations wish to impose order and tighten control to make sense, but Dave Snowden warns against the effects of “over-constraining a system that is not naturally constrainable” – you are asking for more inefficiency and problems, not less.

 

And how exactly does this all fit in with Teaching and Learning?

 

Many of the concepts are relatively new and evolving, and touch on Agile, Lean, Cynefin, and other concepts and frameworks all at once. Teaching these concepts correctly and helping organisations and individuals understand how to learn them effectively (applicably understand them), at the same time as steering away from the temptation to use easy one-size-fits-all fads is therefore key, and the next step in our progress. Understanding of them is blossoming, and now it must be effectively conveyed, used, and put into practice! None of this is any use if it cannot be effectively taught and learned. At the same time, this all fits very neatly into the overall concepts of learning and teaching, which are not by nature ordered and simple. 

Equally important is learning when to change. It should not be forced for the sake of change, or without clarity or understanding. Not all change is necessary; it’s knowing when it is and where to start that is crucial, or you could lose opportunities you already have.

Perhaps one of the most important things to teach, and learn, is this: change is a fact of life, business, and the Universe in general, and it can be feared for good reason; but that should not stop change where change is required or beneficial, nor should we strive to stop change that cannot be stopped. Instead of fearing change, we can teach ourselves to change fear into something more productive: an awareness of the opportunities that change will throw up, and a readiness to grasp them.

You only learn when you are open to change, you move outside your comfort zone, and you accept failure as a lesson that builds success; that uncertainty is the point from which new understanding can grow. The more used to taking that first certain step into uncertainty you get, the less you fear the challenge, and the more you relish it. A good teacher & consultant can help place your feet on that path, and walk the first steps with you.

 

 

(Image: Rosa Parks quote on fear)

 

 

Sources:

Liz Keogh (lunivore.com)

Katherine Kirk (https://www.linkedin.com/in/ktkirk/)

Dave Snowden (cognitive-edge.com)

 

Best Practices vs Heuristics in teaching

Best Practice & Heuristics

Note: This article focuses on the basic concept of Best Practice understood by most organisations and does not cover the Cynefin models of Best, Good, Emerging, and Novel Practice in their relevant domains – more on that another time!

Best Practices exist in business for a reason. When we need to do something optimally, over time or using prediction we can determine the best methodology that has the least cost/risk for the output. In a perfect world, we want something that works first time, every time. This is the level many people in business tend to work at (or ignore!).

However, real life has an unpleasant if invigorating habit of not always providing us with a clear application of Best Practice – a wonderful opportunity to learn which we might not appreciate, say, mid-disaster. Time constraints, political expedience, resource limitations or complexity can all get soundly in the way.

Best Practice can also be a misnomer. Oddly, the approved, documented, step-driven way of doing things correctly to ensure consistent results can sometimes actually be less effective than doing things another way – as long as you are cognizant of pitfalls and expected results (usually via a solid mixture of expertise and experience). This means we sometimes have to choose between the proven, approved methodology and the effective methodology to achieve a goal within constraints. Even further, sometimes Best Practice simply cannot be applied at all to resolve an issue.

What we see here then is a disparity between Best Practice, the approved and most optimal hypothesised or sterile-tested way to achieve a goal, and Heuristics, a more practical approach to problem solving or learning that has no guarantee of being optimal (or sometimes even rational) but will adequately reach an immediate goal in the real world.

Or:

Best Practices as the supported optimal hypothetical route to achieve a long-term goal
Heuristics as a rule-of-thumb route to practically adequately achieve an immediate goal

The ability to choose the correct one for a given problem is as important as having either. Applying the wrong one can at best waste your time, and at worst make the problem far worse.

Demonstrating the difference

Let’s look at a relatively simple example of this. Because I’m from a technical background originally, I’ll use an IT example (which may seem complicated, but it is actually quite logical):

Moving data between storage arrays

Picture a Data Protection solution (the “Base”) sitting on Windows, where System Independent Format (SIDF) data is held on simple drive letters – typically storage arrays – as primary storage. The caveats: this is a live system, constantly accepting and sending data as well as maintaining that data in storage; the data can run into tens of TB; and the SIDF files for each task are exclusively locked while in use, requiring a task-priority-driven queue to manage access. The strong preference is that critical backups are not interrupted if at all possible.
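To make that queueing constraint concrete, here is a minimal sketch in Python – assumed behaviour, not the product's real scheduler – of a task-priority-driven queue in which critical backups always jump ahead of a pending data move:

  import heapq

  # Lower number = higher priority. The heap always yields the most
  # urgent task first, so a bulk data move waits behind critical work.
  queue = []
  heapq.heappush(queue, (3, "move data off volume E"))
  heapq.heappush(queue, (1, "critical backup"))
  heapq.heappush(queue, (2, "replication"))

  while queue:
      priority, task = heapq.heappop(queue)
      print(f"running (priority {priority}): {task}")

Keep this picture in mind for the real-world anecdote later: it is exactly this queue that keeps interrupting a long-running move.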

Let’s suppose you run three basic RAID 5 arrays, each presented as a volume (E, F, and G), and you wish to retire E because it is older or unreliable hardware; E is therefore marked Read-Only so no new data can be written to it. Moving data between these volumes is simple:

Using this particular solution, a command – “Move to another Location” – tells the solution that E will be retired. Data is automatically redistributed to the remaining drive letters, database indices are updated with the new locations for the data, and then E can be removed. This is simple, easy, obvious, and conforms readily to Best Practice.

So far, so good – but what if you wish to replace E with a new volume? (H being the obvious choice.)

By marking all drives except H Read-Only, the data has only one location it can go to (unless you have a really crummy solution, you are unlikely to be able to mark the last writable location Read-Only as well!). This remains quite simple and logical, and conforms readily to a Best Practice: follow these steps for the optimal outcome, every time.
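To illustrate the mechanics, here is a toy model in Python – my own sketch, not the vendor's actual API – of how the Read-Only flags leave the new volume as the only legal destination, with the database index re-pointed as the data moves:

  volumes = {"E": ["t1.sidf", "t2.sidf"], "F": ["t3.sidf"], "G": [], "H": []}
  read_only = {"E", "F", "G"}  # everything except the new volume H
  index = {"t1.sidf": "E", "t2.sidf": "E", "t3.sidf": "F"}

  def move_to_another_location(retiring):
      # Only volumes not flagged Read-Only may accept data.
      writable = [v for v in volumes if v not in read_only]
      assert writable, "at least one volume must remain writable"
      for item in list(volumes[retiring]):
          volumes[retiring].remove(item)
          volumes[writable[0]].append(item)  # only H qualifies here
          index[item] = writable[0]          # the index follows the data

  move_to_another_location("E")
  print(volumes["H"], index)  # E's data now lives on H; indices updated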

So now let’s look at what happens when the project increases in scope and complexity. Assume the three volumes sit on a single RAID 5 array, and you wish to replace the whole array for future-proofing, performance, or storage-space reasons, in a like-for-like scenario (to maintain some simplicity):

Using the above method, you add the three new volumes (which must of course be at least as large, and will typically be much larger). Here is where things cease being obvious and become complicated, although there is still a Best Practice. If you mark the originals Read-Only and use the same command to move the data, the data will be moved optimally, with updated indices in the database and clear logs. No new data will collect on E, F, or G, and once the operation is complete they can be removed from the solution as storage locations. This is the approved, proven, supported, and repeatable methodology.

However!

I personally would not use this method, for a number of reasons; the main one being “the real world”, which introduces complexity, illogic, and external factors a-go-go. An alternative takes into account time constraints, monitoring and man-hours, Solution activity, simplicity, and effectiveness, as well as unknown factors I cannot predict that could impact the process: the Heuristic approach, defined practically on previous occasions and arrived at by testing in extremis. This is a work-around using different methods to achieve a more immediate goal:

In this example, I offline all Base services immediately after backups have ceased, so the Base is not running at all. I then copy through Windows, direct from volume to volume: E to H, F to I, G to J. Arrays run their disks in parity and can handle multiple operations concurrently, so all three copies run simultaneously. I then place a marker in each new volume (typically a Notepad document) saying “I used to be E!”, “I used to be F!”, “I used to be G!”.

Then I unplug the original arrays (leaving the original data intact!), rename the new volume letters to the originals, and restart the services (i.e. turn it all back on). And the Base says: yawn… where is my data on E, F– oh, there it is. Same indices, same locations… completely different hardware.
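For flavour, here is roughly what that work-around looks like as a script – a hedged sketch only: the service names are hypothetical, there is no verification step, and the final unplug-and-re-letter of the volumes is done by hand before restart:

  import shutil
  import subprocess
  from concurrent.futures import ThreadPoolExecutor

  # (old volume, new volume) pairs, drive letters as in the example above
  PAIRS = [("E:/", "H:/"), ("F:/", "I:/"), ("G:/", "J:/")]
  SERVICES = ["BaseEngine", "BaseScheduler"]  # hypothetical service names

  def stop_services():
      # Offline the Base entirely so no SIDF file is locked or changing.
      for svc in SERVICES:
          subprocess.run(["net", "stop", svc], check=True)

  def copy_volume(pair):
      src, dst = pair
      # Straight volume-to-volume copy through Windows; the Base is offline.
      shutil.copytree(src, dst, dirs_exist_ok=True)
      # Marker recording the volume's former identity, as described above.
      with open(dst + "I_used_to_be_" + src[0] + ".txt", "w") as f:
          f.write("I used to be " + src[0] + "!")

  def start_services():
      for svc in reversed(SERVICES):
          subprocess.run(["net", "start", svc], check=True)

  if __name__ == "__main__":
      stop_services()
      # Parity arrays cope well with concurrent I/O: copy all three at once.
      with ThreadPoolExecutor(max_workers=len(PAIRS)) as pool:
          list(pool.map(copy_volume, PAIRS))
      # Manual step here: unplug the old arrays (data intact!) and re-letter
      # the new volumes to E, F, G before bringing the services back up.
      start_services()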

Let’s simplify the concept of what I have done here with the arrays:

https://www.involvemetraining.com/wp/wp-content/uploads/2018/08/image011.gif

(Disclaimer: I cannot guarantee there isn’t a stone ball waiting. That’s the problem with the real world. There could be.)

I find a lot of value in using real world examples to underpin my reasoning here:

A client had four Base machines of circa 12TB each. He wished to upgrade the storage on each one from a 12TB array to a much newer, larger 48TB array, and asked my advice on moving the data. I ran him through the “Chris-approved” Heuristic method and the reasoning behind it. He then got a second opinion from Support, who dictated the Best Practice method, and he followed their advice. A copy of 12TB of data on local disk should be achievable within 8–12 hours – well within a backup window – and the Base would never know what had happened. Instead, because tasks of a much higher priority were constantly running and interrupting (Replication, Backup, Restore, Optimisation, Expiration, etc.), it took him almost four weeks! …He told me not to say “I told you so”.
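(The arithmetic behind that window estimate, for the curious: 12TB in 8–12 hours implies a sustained throughput of roughly 280–420MB/s – 12,000,000MB divided by 28,800–43,200 seconds – which is entirely plausible for an uninterrupted local array-to-array copy, and exactly what the constant higher-priority interruptions destroyed.)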

What lies behind Best Practice and Heuristics usage

So – let’s move out from the “technical” aspect above and look a level deeper at the concepts behind the problems faced.

Best Practice here is the company line, using the tool built for the job; but the difference in efficiency and process is significant, even though the end result is the same. Whether it is better to transplant the data wholesale or to create entirely new indices for the same data is debatable, given that the actual achievement – the replacement of the hardware – is identical. Which one was more effective in the real world is readily apparent.

Most training I have been on will teach only the first concepts, if you even get those; there is simply too much information to absorb in a short space of time. Training courses are not always particularly efficient, and are subject to the same choices between Best Practice and Heuristics as the subjects they cover. But in fact you can break it down even more simply than this. It is generally accepted that you typically encounter three main types of problem (certainly in training) at varying levels:

  • Simple problems (You need to move the data off a volume. You click move in a clearly explained wizard. It moves.)

Obvious causality; known parameters for problem and resolution; correct answers exist and will be achieved through logic; resolution can be achieved by anyone.

You know what it does, and how to achieve it.

  • Complicated problems (You need to move the data. You can spread it to multiple locations or send it to one, but this requires decisions, knowledge, and scoping. You consider, then configure parameters and click move. It moves.)

Causality isn’t immediately obvious; parameters may be known but not completely; there may be multiple correct answers; expertise is required to resolve them.

You know what it should do, and how to work to resolve it if it doesn’t.

  • Complex problems (You need to move the data. You cannot complete this via the wizard in the time allotted, external factors may or will interfere, you are not aware of all factors and cannot anticipate everything. Expertise doesn’t resolve the fundamental issue. You have to experiment to find another method. You test. You find the best possible path given constraints and work around the base issues. You shut it all down, copy the volumes, turn it all back on. It’s moved. You check it worked!)

Causality is unknown; parameters are unknown; there are no absolute “correct” answers; logic doesn’t resolve it; expertise alone is ineffective; innovation and lateral thinking are required.

You know what it should do but not why it doesn’t or how to resolve it. You must test different methods to find an immediate resolution.
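As a shorthand for this taxonomy, here is a minimal sketch – my own illustrative mapping, not a formal framework implementation – of which approach each problem type actually admits:

  from enum import Enum, auto

  class Problem(Enum):
      SIMPLE = auto()       # obvious causality; anyone can resolve it
      COMPLICATED = auto()  # expertise needed; multiple correct answers
      COMPLEX = auto()      # unknowns dominate; experiment to find a path

  def choose_approach(problem):
      if problem is Problem.SIMPLE:
          return "Best Practice: follow the documented steps."
      if problem is Problem.COMPLICATED:
          return "Good Practice: analyse, then let expertise decide."
      return "Heuristics: probe, test, and work around the constraints."

  for p in Problem:
      print(p.name, "->", choose_approach(p))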

At this point we start moving from troubleshooting closer to the realms of Cynefin, Dave Snowden’s framework for decision making (which has a few more areas a little less relevant to the core of this article). There is a wealth of incredible information on this from Liz Keogh, a very talented Agile consultant and keynote speaker who teaches on the subject globally; I strongly recommend looking at her blog.

With the above problems, however, it becomes obvious that Best Practice can be applied to the first, and at least Good Practice to the second, but neither to the last: Heuristics are required to resolve complexity, because Best Practice simply cannot exist there. It is further worth noting that issues do not always fall into just one of these problem areas!

So how does a teacher best approach this with a class?

Applying this to teaching is an interesting conundrum, then, as by nature teaching is at the same time simple, complicated, and complex. Cause and effect exists for most subjects, along with basic troubleshooting, and the subject and the systems can usually be predicted. But you are also teaching students to diagnose, and to start down the path to expertise; and you have a class full of individual people, whose actions and responses you cannot predict. Effectively balancing atmosphere, skill level, collaborative potential, action, understanding, interest – and what wonderful technical issues they may throw up to learn from! – requires a flexible and innovative approach to each class. It may be best to consider each training a unique problem with several concurrent paths to resolution, balancing logical process and flexibility to deliver the optimal mix of learning.

Classes for me are an incredible mixture of these concepts; teaching people how to teach is a very different prospect from simple subject-knowledge transfer.

I have found over many years that, by and large, a class able to understand and apply Best Practice is also capable of deciding when and whether to apply Best Practice – or Heuristics – but won’t necessarily do so. It is very easy to fall into the rut of teaching a class by rote, and of subconsciously teaching them to follow rote themselves. Humans, adaptable as we are, prefer ease and comfort, and will often follow this to our detriment. This is the darker side of Best Practice: following a set of steps without thought or reactivity, trying something again and again because it should work, because anything else is effort. Heuristics are effort. Let’s apply the principle of Occam’s Razor to this, then:

If what you are doing doesn’t work… do something else!

Resolving problems

I often find that a problem which is merely a complicated issue in a sterile lab environment becomes a complex issue in the real world, simply because of unknown variables and environments. It is also why I am not in favour of unrealistic teaching environments, which may teach only the shape of the spoon (Chapter 7, Involve Me). If you teach for real-world usage and problem solving, you must make your teaching as real-world as possible, or application is limited at best.

Best Practices can change and refine over time, and must be constantly updated, but for a simple or complicated scenario they deliver a consistent result. Heuristics cannot be relied on for everything, and may not be optimal or concise, but they can be used to resolve a problem that does not conform to a Best Practice – in other words, a complex problem. When I’m teaching people how to teach, I encourage them to:

  • Identify Best Practices
  • Identify possible issues
  • Be prepared to react Heuristically
  • Identify if the issue is human-based or system-based
  • Impart guidance that directs students to fix the issue themselves
  • Learn from doing!

 

This requires both proactive and reactive responses. A planned approach is key, but the ability to react and absorb changes is also critical – and sometimes missed. As mentioned above, it is all too easy to fall into the habit of continuing to follow set instructions, and I see this in class a lot: if an approach doesn’t work, I often see it repeated again, and then the student sits and frowns. This is in fact one reason I take a very flexible approach to any teaching, and use little presentation or documentation for anything past conceptual or reference material – these can’t be changed on the fly, and when rigidly followed they allow less lateral thinking and reactivity, whether the disruption comes from class dynamics or technical difficulties (and so forth). Where Best Practice does not fulfil all criteria, Heuristics often can.

Humans can individually be wonderfully chaotic in approach, and you cannot as easily predict people as you can systems. We are where any complexity is usually introduced (in IT there are multiple terms – I say “Chair-to-Keyboard interface error”, but you also have PEBCAK, PICNIC, and the wonderful Eastern European “The device in front of the monitor has a problem”. They all mean “human error”.). What this amounts to is – people break stuff; often, illogically, and sometimes gleefully. Best Practice usually works for systems, but not for humans.

Or:

Systems usually follow rules; people usually don’t.

In war it is oft-quoted that “no plan survives contact with the enemy”. If you stick only to a plan despite changes in expected enemy deployment and composition, you are likely to find the battle does not turn out as you hoped. In extremis, you throw things at the wall and see what sticks. This is where agility of mindset and lateral thinking are critical, vital assets of both teachers and students.

A project is the same; a training course is the same. Teaching students flexibility of thought and logic of application is important – and constantly improves your own. Whilst Best Practices are key in many industries, we must all be mindful of the surrounding situational modifiers. If a Best Practice fails, or doesn’t fit the requirements, heuristically define what is definitely and immediately effective, use it, and qualify it logically with real-world examples of why it worked. This is my final key point for students in my classes:

Use the appropriate method at the point of decision.

Ultimately, you can only provide tools and capability to use them effectively: you can only open the door. Walking through is up to the students.

 

 

Why Business Efficiency is dependent on learning

Good old-fashioned learning: one of the simplest, yet most complex, things we undertake. We are learning machines, from the moment we awaken through into adulthood; the manner and ease may change, but our learning never ceases to be critical for progress.

Humans adapt and learn quickly at multiple levels, which is one reason we are so successful. But we also have the perhaps unique ability to choose what we wish to learn, detrimentally or not.

This is because we are also creatures of profound habit, and enjoy ease and comfort. Learning is not easy or comfortable. Thus in life – and in business – we eventually end up in ruts that defy logic and impede progress.

Learning helps us develop Best Practices and use what I think of as cognitive common sense – not merely common sense, i.e. the obvious, but something that perhaps becomes obvious when you think about it and apply lateral thinking or logic. There is a constant battle within companies, management, and workforce to balance best practice, cost, return, risk, and many other parameters.