Homogogy – the Neo-Paleo Teaching of choice

Before I get into the definitions of teaching in this blog (which are by no means exhaustive), I feel it is relevant to address terminology, as it was a core reason behind the concept of Homogogy for me.

Dave Snowden (Cognitive Edge) is one of many who have outlined the importance of language for defining concepts, and it is something I agree with. Where a thought process or conceptual framework requires intelligent application and consideration, it also requires clear, concise, and precise language, which in turn shapes how you frame your thoughts.

A good example of this is industry – terms may carry over from one situation to another, but in business, buzzwords are often misused or misappropriated. Terminology can become fuzzy or situational. As an example, many people will use the phrase “in theory” or “theoretically” to talk about an estimate or a guess, when they are actually talking about a hypothesis – a supposition or proposed explanation based on limited (or no!) evidence. I’ve heard this a lot in IT and related verticals, especially in sales.

In science, however, a theory doesn’t mean something guessed at, but a substantiated explanation based on a set of facts which have been reliably confirmed via experimentation and observation – both of which are provable and repeatable by anyone. It also accepts that this is the best understanding of something at that point in time, and this could change based on new data.

Humans are learning machines; we adapt and learn faster and on more levels than any other creature we know of; and yet, we manage to actively and aggressively damage that natural learning. We impose limits; we opt for profit over results; we force rather than inspire; and we muddy language around this process, often twisting terminology so it means the opposite to suit our whim.

So language is critical, and its correct application equally so. Cynefin uses it precisely to help define concepts (e.g. Order, Un-order, Disorder); science does the same. Business, largely, does not. We must use the correct words, in the correct manner, if we are to comprehend.

 

Pedagogy & Andragogy

There are two primary accepted teaching methodologies in use, both named in the West from similar Greek roots:

Pedagogy (leading boys) is the concept that children must be lectured and moulded, taught what they need to know. Pedagogues are traditionally associated with the young, strictness and pedantry (“children should be seen and not heard” is a classic expression of this concept).

Andragogy (leading man) focuses more on adults needing to teach themselves, and discover new skills through play. Andragogues are seen as adult educators and enablers who focus on experiential learning.

They were codified during the 1960s and 1970s by Malcolm Knowles, a US Professor of Adult Education. He noted that the way adults were being taught was ineffective; lectures, learning by rote, exams, and other techniques we still associate with university (and school) learning simply weren’t achieving the results they should, however easy they were to monitor and administer. Books such as The Adult Learner: A Neglected Species changed the way adults and industries thought about teaching adults, although the older methods are still surprisingly widely used to this day in both university and business.

This was very beneficial to adults, especially in industries looking for new and effective ways to engage, and was a definite springboard for engaging techniques such as Agile learning to develop. It was not beneficial to children, however; for all his progressive thinking about how adult humans learn, Knowles assumed that because children had been taught via pedagogy for hundreds of years, it must be the correct way (interestingly, he quite consciously rejected that assumption for adults), and so he unwittingly and drastically reinforced the old, rigid, totally incorrect methodology for children.

Neither of these acknowledges that, neurologically, humans learn in a similar fashion throughout their lives; nor is the gendered terminology appropriate at a time when we rightly acknowledge that women are an equal part of the human intellectual process in intellect and ability, if not yet in recognition and reward. The terminology, in my opinion, needs updating.

 

Why Pedagogy is wrong

Pedagogy is, essentially, Teacher-Centred Instruction. Immediately this misses the point of learning; the focus should be on the students, their retention, and their comprehension, not on an authority figure and their instruction. Even worse, this is usually not the teacher’s preferred method, but a professional demand imposed by a disconnected governing body.

Anyone who has watched a pre-school child realises that children learn naturally and swiftly through interest, play, and repetition. So why, in many countries (particularly in the West), do we begin to inject discipline and demand younger and younger, removing all fun and interest and indoctrinating them into the stress of modern life?

This comes from centuries of instructing children, and in some ways it is worse than it has ever been. We have moved from rigid silence and forced learning to a veneer of play – laid over disciplined, metricised, commodified, enforced learning, with fewer resources than ever.

It is not the same country to country; the Nordic countries, Finland especially, achieve incredible results teaching children – and they avoid pedagogy. Instead, they allow teachers AND students the freedom to learn and experiment in the best ways to do both. It seems incredible to me, then, that the US and UK persist as they do, but there is a clear connection between academisation, profit, agenda, and class, where actual results matter less than these; blame shifts to the students, for not trying hard enough, and to the teachers, for not teaching well enough despite the huge limitations placed upon them by the system (this applies very much within business organisations as well).

In the UK, children as young as 4 are being monitored for SATs – Standard Attainment Tests. They supposedly track the progress of children in black and white, for all to see, but they are frankly a ridiculous idea, and one many teachers balk at. They do not provide an accurate understanding of children of all backgrounds and neurodifferences; they are not an accurate measure of a child’s level; and they induce massive stress, which disincentivises children and stifles learning. They also value learning-by-rote achievements over applicable comprehension, which for me is unforgivable.

Far too much pressure is put on schools to deliver certain levels of results or lose status or funding; far too much pressure is put on teachers to get results within strict limitations; and far too much pressure is put on children, who find their learning interrupted by the trauma and stress. Why do this, preparing children for a life of dictated, mindless toil and stress, when later we work to “reawaken” adults in industry and help them learn intuitively? Should we not be doing this from the beginning?

Well, yes, we should. It has been shown in multiple studies that the best school systems in the world with the highest results (such as Finland, above) remove enforced homework, constant measurement and competition, and high pressure levels, and allow children to develop interest and learning themselves. Decades of research into neural learning patterns and brain-friendly learning have also shown that pedagogy is the diametric opposite of these. Exams and SATs should be a loose marker, a gauge; but they are treated as a grail: THE RESULTS.

Learning is a continuous path, not an end location.

I’ve passed exams on hardly any work, because I’ve always excelled at seat-of-my-pants reactivity; that does not equate to comprehension and application of a subject. Apply that to business, as I’ve seen happen after my own courses, and you are suddenly left with an organisation in trouble with a client because they were more concerned about the course qualifications of the student (or a tick-box for compliance) than their ability to know what they are doing. That equals lost revenue, lost reputation, lost trust, and the growth of a culture that only cares about the paper (anyone in IT will readily cite MCSEs as a victim of this cramming process).

Pedagogy is still widely used in business. Certification is everything; classes are strict; classrooms are arranged in rows of desks; and so on. Typically there is an information overload delivered in too short a timeframe to too many people at once, in a generic, boring, and “company approved” manner which often spawns poor conceptualisation, an inability to apply or retain information, a hierarchy of go-tos, and a host of other problems. It is a terrible teaching methodology for humans, let alone children, and it comes hand in hand with the expectation that the certification is the goal; a qualification that has more value than the learning itself, both before and after.

As I said in my recent post Never Mind the Buzzwords, it is important to understand that a certification or qualification is the beginning of understanding and application, not the end. This should especially be borne in mind for younger humans.

Children learn in the same way as adults, but better and faster; they lack only the developed cognitive abilities for the abstract, and the prior experience. That is why stories are as crucial for children as play and experimentation; they allow children to relate concepts to their limited experience, with the understanding and expansion this brings, and the inspiration to test.

A much smaller percentage of people are readily capable of learning in this restrictive way, and the rest are judged for not managing; but even those who are incentivised and capable of learning like this can improve how they do so.

In short: Pedagogy does not allow children, or anyone else, to learn like humans.

 

Why Andragogy is no longer right

Andragogy has come to be widely understood as “the teaching of adults”. Despite efforts by many professional teachers to correct the usage, it remains associated with adults, and especially with agile learning methodologies, rather than being the default way we should teach everyone.

Aside from abstract processing and life experience, another difference between adults and children in teaching and learning is that adults can recognise and articulate when teaching is not working effectively (wrong course, irrelevance, poor teacher, and so on). They have developed a meta-understanding of the process and how abstract concepts integrate, whereas children tend towards pure learning without awareness of the method. This may be why Knowles focused on the needs of adults in finding a new way to teach and learn; it is easy to look at several hundred years of teaching children and say, “Well, they haven’t raised these issues”.

The idea formed that adults need games, engagement, and “space to learn”; that the teacher still has knowledge to pass on, but the adults use experiential learning. Of course, as soon as you really consider children, you realise it’s no different for them.

Does any of this sound mad to anyone else? Children, who learn by play naturally, should be taught like automatons and be rigidly graded with SATs and every other possible metric; whilst adults, who have lost some of that neuroplasticity but can discipline themselves to learn in a number of more restrictive ways, are taught like “real people” and encouraged to play games?

 

Rather than unlearning what you have once learned, it’s better to learn correctly from the start… and continue for the rest of your life!

 

In Training from the Back of the Room, Sharon Bowman paraphrases a list from Knowles, noting that humans:

• Want/need to learn
• Learn in different ways
• Learn best in informal environments
• See themselves as self-directed and responsible
• Learn best with hands-on practice
• Bring their past experiences to learning
• Learn best when they can relate new information to what is already known
• Have their own ideas to contribute

There is an unwritten assumption in almost all teaching that the teacher is always the holder of knowledge, always correct, and “in charge” of the class, although Andragogy is far less rigid in this respect. Bowman’s book stresses the importance of removing the teacher as an impediment, something I have always strongly agreed with.

In short: Andragogy has come to mostly be accepted only as a(n agile) way of teaching adults and is often misapplied as a result.

 

Why Homogogy is what we now need/have always needed/used to have

The meanings of the above methodologies have been misappropriated over time, and were misunderstood from the start. Both Andragogy and Pedagogy view the teacher as a holder of knowledge and the student as a recipient of knowledge. This is a massive simplification, and only one aspect of teaching.

I’ll introduce a novel thought:

A TEACHER IS NOT A KNOWLEDGE TRANSFER DEVICE.

We are a guide; we inspire, we help; we provide information, too, but we are there to spark and engage, not enforce. Learning is not effective when you attempt to force it upon people for anything other than survival (at which point you expect losses). In addition, a teacher can only open the door; the student must decide to walk through it themselves.

I proposed 6 I’s in another post that are mostly overlooked in Pedagogy, and often in Andragogy; I’ll return to them below.

Neurologically, children and adults learn through the creation of neural connections in their brains, in certain orders. Children’s brains are far better at this (neuroplasticity), which makes it incredibly important to allow them to explore, play, and incorporate their developing ideas into their learning; unlike adults, they do not have a lot of experience to draw conclusions from, so experimentation is even more vital for them, along with simple, relatable stories.

Humans learn through doing; back before we tried to structure learning en masse, the best learning came from mentoring, expert advice, apprenticeship, repetition, experimentation, and guidance. We went with others to learn what and how to hunt and forage, and were guided by their advice as we attempted it. We apprenticed to a blacksmith to practise working in metal. We told stories to inspire others to want to learn life skills and to knit a community closer. These things were interesting, immersive, and inspirational; we had incentive, and we were involved constantly, as we knew we needed them for our very survival. We passed these skills on to others so that they, too, would be successful, making us all successful.

Most teaching has become much more abstract as our real and virtual communities expand and increase in complexity; we have changed the way we teach to be convenient, but the way we learn is coded into us on three levels – by evolution, by culture, and individually by neurodifference or conditioning.

Our brains are analogue devices, not digital, and work via experiential neural connection creation. Teaching must engage this, not the reverse; we cannot change how we are programmed to learn. 

Pedagogy and Andragogy should never have been the defining methods of teaching and learning. They are segregational, generalised, limited, reductionist, and restrictively associated. This is all wrong; we have decades, centuries, millennia of evidence to prove it, both scientifically and anecdotally. Although Jay Cross (Informal Learning) has a point when he says, “‘Andra’ is the ‘gogy’ to go with for all,” I would go even further because of the restrictive association with adults that has evolved.

How we should all be taught is as humans, regardless of age or gender. That is why I have realised my teaching and learning pattern methodology is Homogogy.

 

So what is Homogogy?

Homogogy (“Leading Humans”) is a “new” framework which is actually very ancient, hence neo-paleo. I believe it is what most effective teachers mean when they refer to Andragogy now, but as I said at the start, language is important. For me, Homogogy has more precision and better connotations; we’re all human, after all. But it isn’t just the teaching or learning of a simple subject; it’s at the core of human interaction.

Every interaction we have holds teaching and learning patterns. Business meetings; basic onboarding at a company; a fire safety compliance meeting; a school class; a presentation; a workshop; a heavily technical training session; social events; university; the first time we meet someone; and so on. All of these hold multiple levels of understanding, potential paradigm shifts, feedback, and information – intellectual, physical, and emotional. Understanding how we teach and learn, and seeing those opportunities, is something we often miss – every day holds them, and yet we usually only consider them during a formal “class” occasion.

We have conditioned ourselves to become lazy at teaching and learning, at a time when more humans have access to more information, more quickly, than ever before. Worse: we are so disincentivised to learn by subjects perceived as boring, disagreeable, or too complicated that we often choose wilful ignorance.

I genuinely believe Homogogy, how humans teach and learn, can reverse this, and it begins with all human interaction. Everyone, every situation, has something to teach us. There are some similarities with Andragogy, with a number of concepts that are required for effective human learning:

Brain-friendly (neurological) learning
It’s been shown through multiple studies that all human brains learn in the same fashion: we create neural connections to facilitate memory. How we arrive there and engage this can differ slightly, and there are natural neural differences in humans which will dictate the most effective technique for each human.

Collaboration
We are naturally both competitive and collaborative, and both can be drivers of learning; but only collaboration can be an emergent modifier. Competition will quickly inhibit group learning in favour of a dominant individual or group, and the focus often becomes more about who wins than who learns. Collaboration can be competitive, but beneficially so, and the greatest advances come from the sharing of ideas, not the dampening of them. Synergy between multiple cultures, organisations, and individuals in a class is not only possible but beneficial – and can hugely enhance learning and collaboration. I’ve seen large competitor organisations develop a shared knowledge pool in my classes, and stay in touch afterwards to exponentially enhance troubleshooting. This is something you will not find with competition.

Engagement & Cultural understanding
The gateway to accessing brain-friendly learning is engagement, which is both individual and cultural. You cannot engage an Asian class in the same way you can engage an American class, for example, so before you can connect with students well enough for them to learn, you must understand the best way to do so. One size does not fit all. People will not learn if they are not engaged, and the teacher’s role is to engage them so they can learn, not to try to force learning upon them. Learners need engaging culturally, individually, and neurologically – the trick is doing this in a mixed class! A personal connection of some form between teacher and learners is also crucial.

Failure & Feedback
Failure is crucial for learning to occur, especially learning by doing; demonstrating the consequences of what not to do is more effective than simply knowing what should be done. Humans tend to need demonstrations to understand or believe something. Feedback is linked to failure, because failure is really only feedback; it tells you what you need to adjust so you can do something as you require it to be done. The true failure is not learning from the feedback. This is vital for people to understand, and it matters when teaching in a culture that considers failure shameful rather than an opportunity to learn. Failure is not shameful – it is a required part of learning, and must be handled constructively.

The 6 I’s
To help understand this, consider the circle of the 6 I’s (as above) – Interest, Inspiration, Involvement, Immersion, Investment, Instruction. This is the lifecycle of the learning of an idea for a human.

As an example: a child is curious about the sounds an adult makes, and it realises that the sounds bring consequences – attention, food, love, and so on – so it is inspired to use them creatively (“children say the funniest things”). As it becomes involved, it learns, and as it is immersed, it learns faster, more completely, and, just as importantly, long-term. As it grows it becomes more and more invested in the use of language and its complexities, and eventually instructs others in its usage for mutual advantage. This is a natural cycle for social interaction and teaching/learning; yet most patterns, especially pedagogy, try to force some parts and ignore others.

Individuality
It is important to realise that a strength of humanity (and an overall weakness) is its individuality. We are incredibly individual in a multiplicity of ways, yet working together we create vast, complex anthro-social systems and paradigms. Our individuality is ofttimes at odds with this, and we limit ourselves via conflict, hierarchy, and strong assertion of identity in a number of areas, as well as an inherent desire to order systems – some of which, by their very nature, cannot be ordered. This causes serious problems over time (see the Cynefin posts for more information). The best human systems are those that celebrate and utilise the individuality of each person, acknowledging it and harmonising it. Individuality is part of where our complex natures come from, and it makes us learning machines. Anonymising and repressing it stultifies learning.

Freedom to Experiment & Innovate
For truly organic, flexible learning, both teachers and learners must be able to play, test, do, experiment, and understand not just the subject but how to take it in. This allows individuals, situations and classes to naturally find the best ways to be incentivised, understand, and teach one another. Outliers, individuals, and shared cognitive load in an unconstrained environment spark personal as well as industry innovation.

Relaxation
Humans do not learn effectively unless we are receptive to retaining and understanding data. Outside a few very clearly focused instances (picking up a hot coal, for example!), this requires us (certainly for abstract and complicated issues) to be relaxed, and to enjoy our learning. Moving outside a professional comfort zone is required to spark innovation and experimentation, but staying far enough inside a personal comfort zone is important, because you do not absorb or retain information effectively when anxious.

Ignoring age as a segregator
Humans are humans. Regardless of sex or age, we learn in the same physiological manner, allowing only for different cultural and individual engagement, and minor (positive, not negative) adjustments for age. You would not teach a class of 6-year-olds at the same conceptual depth, or with the same reliance on world experience, as a class of 40-year-old professionals; but you would teach them in the same way.

Homogogy is Evolutionary, not Revolutionary!

It’s always been there, and we’ve ignored it to our detriment. It’s time to re-acknowledge how we are designed to learn instead of suppressing it in favour of convenience of teaching.

The most important thing is always the applicable comprehension and retention, and passing on of the learning. Somewhere along the line, we’ve lost sight of this.

 

Further Consideration

I have coached many organisations, large and small, on learning as children naturally do, organic flexibility in structure, allowing students to drive the class, engagement, spaced learning techniques, mental parsley, the importance of teaching for applicable use and comprehension instead of exam answers, the criticality of real-world training, class sizes, layouts, and much more.

Many of these are covered in my first book on teaching and learning patterns, Involve Me – a short guide which didn’t delve deeply into the understanding behind the teaching (and needs updating!). A revised edition is due out very soon (I’ll tweet it when it’s up!).

I’ll expand further upon Homogogy in my upcoming second Teaching/Learning book – title TBD!

I hope this has been useful. As ever, comments below or on Twitter welcomed!

 

 

The Secret Shortcuts to Innovation

The bad news:

There aren’t any.

There is no template for Innovation.

 

If someone offers a set, guaranteed methodology of “making you Agile” and a template or recipe for repeating innovation, be sceptical. This is not how innovation works, nor is it how management and practices can really propagate.

By its very nature, innovation can only happen a couple of times at best before it’s no longer innovative and people lose interest and productivity. Creating an organisational culture that is primed to innovate can be done – it’s part of what I can help an organisation learn. But there is no template or hack for it. Each organisation and situation is different, and requires different methods to develop its own skills and learn how to innovate, but there is one thing it always needs – buy-in and adaptability from management and Leadership.

As I mentioned in my post Of Scuba Diving, Cynefin, & Value Delivery – in markets that are slowing or going into stasis, in a new landscape unfamiliar to many businesses, there is a drive to innovate, to influence. Innovate what? Anything – anything that differentiates, sparks interest, brings new relevance, or gives an edge. The shift from focusing on things requiring innovation to finding anything to innovate – to be innovative – is starting to occur, and organisations are trying to find a secret, repeatable formula to achieve that goal; but innovation is neither easy nor predictable.

 

Sadly, not how Innovation works!

 

So how do you Innovate?

Innovation, unless you happen to strike it lucky, is often about understanding and boundary-controlling unconstrained circumstances where innovative ideas can be explored using immediate feedback, the positive ones amplified and the not-so-positive dampened. This often also involves listening to everyone in the company – the heretics, mavericks, and outliers, who may be more likely to be innovative – as well as the mainstream consensus, which can be quite inclined to follow leadership through reward structures and a touch of sycophancy.

Innovation can also occur hand in hand with a crisis, but this is unbounded and risky (any consultant pushing a crisis to force innovation is someone to be wary of!).

Another thing to avoid is a promise of the latest in management techniques. Management fads sweep through general industry every 6–12 months, and may or may not have any validity whatsoever; I can use the original example of the Hawthorne Effect again as a hypothetical warning, courtesy of Cognitive Edge:

Lighting levels were increased at the Hawthorne cable factory to see if it helped production, and productivity increased as a result. This is where many companies would leave it, satisfied they had resolved an issue; often this is immediately seized upon as Correlation = Causation, a quick and easy simplified recipe to increase productivity. In this instance, the company decided to test what would happen if they dropped the light levels again, expecting a decrease.

Productivity increased.

These days, it’s likely two book formats and a sponsored Facebook post would be up within a month giving step-by-step success instructions on how to best bring light to enhance productivity, rather than going back and attempting to better understand the science. This result obviously was not in line with expectations; what it actually showed was humans responding to novelty, and it’s been found we only do this a couple of times at best before we stop.

 

New shiny things

When something is novel, and seems to work very well in one instance, it is extremely attractive to people and organisations as a method to replicate success or gain image/reputation. Innovation is by nature novel, and when it is effective it can be exponentially rewarding. It has a foot firmly in the Hawthorne effect space; Amazon is an excellent example of this. The company offered something practical, convenient, and above all novel, in a space where people doubted it could be done cost-effectively. They innovated, and immediately spawned many imitators – but none of them have achieved what Amazon has, despite cloning multiple actions from its rise. Organisations looked at Amazon and tried to emulate the success, but failed. Why?

Because it had already been innovated!

On the back of this come the ideas of “the Amazon way”, methods to do what Amazon did in your own company, and so on. The temptation exists to stop at the Correlation=Causation point and market or broadcast this, especially for consultants.

“We can make you the next Amazon!” Or, going back to Hawthorne Cable, “Light Bringers Increase Productivity! Any company can increase productivity in this one easy step!”

Managers hear about what seems to be a catalyst for success and implement it in the hope of replicating it, usually in very different scenarios (or scenarios that are too similar). Many fail to see the improvement but assume it’s not being done right, or that not enough time has passed. As the interest starts to peter out, a second wave of the same technique sweeps through industry, because by now enough are doing it that it has become de rigueur. Eventually, it falls into disuse and is phased out, just in time for a new fad to sweep through.

Long-, even medium-term, this does not support an organisation or help growth, and can be very damaging. New doesn’t always mean better, in the same way that old doesn’t equal the best way. It’s up to an organisation to probe what works for their situation and adapt.

 

What Innovation involves.

 

Innovation is cleaved to Management and Leadership.

Leadership and Organisations must accept change to innovate, because INNOVATION IS CHANGE, whether incremental or radical in nature.

Management science and management consultancy need to be based on proven and evolving techniques, and these have long since moved away from Taylorism and the basis of Process Engineering, and even the newer Systems Thinking. The world of industry has changed; at the very least we’ve been through 4, and possibly 6, industrial revolutions by this point. A consultant should be guiding you on the best possible, individually probed basis for the organisation today, and that can’t be done with old methods, fads or templates, but with provable, repeatable science and adaptation to each individual circumstance. This in turn needs to be supported at a cultural level by Leadership; the more rigid, traditional, and dismissive of the smaller outlier voices an organisation is, the less likely innovation and change will occur.

As with most things in life – learning something to mastery, getting fit, losing weight – shortcuts are usually short-term and shallow, and rarely get the result we truly want. Anything worth doing is worth doing well. This means a little pain and adjustment, experimentation and investment, but it’s usually worth the change. The same applies to innovation; copying and pasting from someone else who innovated is unlikely to produce the same results. Instead, it’s better to probe in complexity, discover all possible opportunities, and find your own innovations where you can. Cynefin is a very effective framework for helping systems do this, and adaptive Agility will also help.

A final thought – a consultant should never be the one to introduce the change and “cure” anything. We are there as an advisor, a mentor, a coach, to help an organisation discover and learn how to do this for itself – otherwise it’s not sustained after we leave and, ultimately, likely to fail.

 

Never mind the Buzzwords

Defining Agile, Lean, Kanban, Kaizen, Waterfall, Scrum, Cynefin… & why we need Meta-Agility

 

I wanted to summarise definitions of some of the more popular terms that are becoming ubiquitous in business, and give a basic understanding of each. It used to be mostly specific industries (e.g. software development) that threw these buzzwords around, but now organisations in every sector are realising the benefits of applying one (or more) of these methodologies. I’m not going to go into huge detail on each, as there are many comprehensive articles out there with more depth and nuance, and I’m not going to cover what you should consider for decision making and production (perhaps another article!) – rather, I’ll briefly explain the differences and potential applications of the better-known ones.

One important factor to note is that the concepts in this article are not really interchangeable. Some are concepts, some are processes, some are manifestos and methodologies, and some are frameworks. They may integrate and support each other well; each business situation may use or require one or more simultaneously.

As always, I’ll also treat Cynefin differently, as it is a naturalistic, scientific framework for understanding complexity and achieving coherency which can describe where the others are effective and applicable, rather than act as a tool for a specific process. Cynefin is for sense-making.

 

Waterfall

This is an older, rigid process which is highly ordered and constrained. It consists of a linear, sequential set of segregated phases which are always achieved in order, in one direction. It always begins at the first phase, and you only move to the next phase when the current is complete. Once the phase is complete, there is no returning to it without restarting everything.

This arose from manufacturing, which requires steps done in order to achieve an end goal, so it is applicable only to projects or businesses that adhere completely to the Complicated and Obvious Cynefin domains. It is extremely rigid, requiring extensive planning, strict following of steps, thorough documentation and zero role flexibility. Waterfall works in specific, limited instances where strict adherence and no deviation is required – in ordered situations where it is imperative to follow a specific dependent order (counting surgical implements before and after an operation, for example). Outside this, it cannot allow for changes, errors, or incorrect predictions, and the result can be a lack of understanding for stakeholders, bottlenecking, and ever-longer completion times.

It is still a traditional mainstay for management to apply to many situations, as it gives the feeling of control, simplicity, sustained output and logic even where these are not possible in the circumstances. This approach is heavily Process Engineering (ordered and rule based), widely misused in situations which by nature cannot be ordered, and is an example of a process “simplified and transplanted” as a recipe between industry applications.

Waterfall 

 

Lean

Lean was developed from the manufacturing industry, specifically the Toyota Production System (TPS) in Japan – an optimised manufacturing process that the term “Lean” was later coined to describe. It is a manufacturing and management style which focuses on eliminating operational waste and removing unnecessary resources and complications; cutting the fat, if you will.

Lean is related to Kanban and Kaizen; the TPS integrates all three (I’ll cover the others separately). It is a lot less rigid than Waterfall, although with the addition of Just-In-Time (JIT) processing it can still fall foul of large errors or bottlenecks. It also de-anonymises stakeholders, instilling respect for people as a principle, and values evolution of process.

Although less prone than Agile, Lean still has elements of templating and codified certification (for example, Six Sigma) which can be limiting and not apply correctly to individual organisational circumstances. The drift from reducing organisational inefficiency to instead eliminating defects and reducing variation can also introduce its own set of challenges.

Lean can be approached in a number of ways, and is a logical path for a company looking to better understand and refine value streams, production costs, and efficiency from team to organisational level. It is what companies strive to do by cost cutting and other slimming practices, but is often misapplied; it can be ordered, and leans into Systems Thinking approaches over Process Engineering (no pun intended).

 

Lean

 

Kanban

Kanban has a foot in both Lean and Agile. It is a methodology to manage and improve work in human systems based on the concepts of limiting Work in Progress (WIP) and flexible throughput – think of it as a Lean approach to Agile.

The word means “signboard” or “billboard”, and is used in Japan in a number of everyday ways rather than as the formal concept of applying “Kanban”. The basis of Kanban is to find the weakest bottlenecks in a system and smooth the flow through the chain, allowing optimum continuous delivery without build-up at critical points.

In other words, work is Pulled through as capacity permits, rather than Pushed in as it is requested.

This was used in production with TPS to augment other Lean practices and ensure the most efficient throughput possible. It is a visual aid in decision making for what, when and how much to produce – Kanban is concerned with limiting resources and work to deliver a smooth and ultimately more productive workflow.
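To make the pull idea concrete, here is a minimal sketch in Python – my own illustration, not part of any formal Kanban standard, with the column names and WIP limits invented for the example – showing work moving downstream only when the receiving column has spare capacity:

class Column:
    """A board column with a Work-in-Progress (WIP) limit."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def has_capacity(self):
        return len(self.items) < self.wip_limit

def pull(downstream, upstream):
    """Pull one item downstream only if there is capacity (pull, not push)."""
    if upstream.items and downstream.has_capacity():
        downstream.items.append(upstream.items.pop(0))
        return True
    return False

# Example board: the tight "Doing" limit exposes the bottleneck rather than hiding it.
todo, doing = Column("To Do", wip_limit=10), Column("Doing", wip_limit=3)
todo.items = [f"task-{i}" for i in range(8)]
while pull(doing, todo):
    pass
print(len(doing.items), len(todo.items))  # 3 in progress, 5 waiting upstream

The point is simply that the limit, not the demand, dictates when work moves.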

A basic Kanban limited WIP progression

 

Kaizen

The third part of the TPS triumvirate. Kaizen means improvement, and is taken in business to be “continuous improvement” of a process, which originated in manufacturing but has become very associated with software and DevOps/OpsDev.

Kaizen is as much a culture as a lean practice, not a systematic process applied at a single point; it requires investment from all stakeholders to achieve, and must be implemented from leadership down.

It is not enough simply to fall into rigid adherence to a single workflow, because circumstances in business change all the time. Kaizen is the ideal of always striving to become better, whatever the circumstances, to reach the optimum throughput of value.

A Kaizen continuous improvement cycle

 

Agile

Agile as a core concept is a framework based on a manifesto. It is somewhat related to Lean practices, but emerged from a variable, complex environment (software development) rather than a complicated one (a production line or single factory business unit, for example). This makes it uniquely suited as a set of ways of working with industries or business units that are in constant flux.

Agile was conceived to adapt to both changing requirements and customer needs, but also to cut waste, and deliver value faster by using an iterative and incremental approach.

This has seen some success, because an adaptive approach will by nature be able to integrate with many different organisations and situations, but it is also seeing some problems. By its very nature, Agile is an agile concept – flexible, organic, and applicable to a variety of situations. Once over-constrained to try to make it easily repeatable, it ceases to be agile; if you succeed in turning Agile into multiple-choice Waterfall, you have removed everything that makes it effective.

There are a number of approaches with greater or lesser constraint. Scrum is a methodology of applying Agile. Scaled Agile Framework (SAFe) is another; XP, and IBM’s recent push of their Agile Thought Leader certification, yet others. None of these represent the actual core concepts of Agile, but take from those core concepts. They can be effective – or lose effectiveness – in variable amounts and circumstances.

In fact, the further we go into certifying, codifying and constraining Agile for templating, the further we move from agility, and the more concerned we become with dogmatic definition and display over the fundamental principles and application. Not having qualifications in the above doesn’t mean you can’t work or think Agile, and by introducing a restricted path to becoming agile, they may constrain and diminish agility. But what this does is allow humans to feel comfort and grounding.

What is being marketed by many consultants and businesses, then, is not Agility per se, but predictability. Constrained Agile practices (in my view) are designed to give some cessation of uncertainty, not a guarantee of agility of practice. The more certainty humans have, in fact, the less relevant agility they are likely to have, and vice versa. Each situation is different, and that’s what an Agile approach is really all about – preparing for and reacting to change, in context, in whatever way is required.

This is a strange dichotomy, similar to that of security vs accessibility; they trade off against each other. To be more secure, something must by definition be less easy to access; to be more accessible, it must be less secure. To have certainty and predictability, something must be less Agile; to be more Agile, there is by nature less recipe-template copying possible. The best application will require analysis of a situation and balanced application.

 

An Agile overview

 

Cynefin

Cynefin, as seen in previous posts, is a way of understanding human decisions and complex situations scientifically. Like all science, it is an evolutionary work in progress, constantly refining and being refined. It is concerned with sense-making – allowing the data to define possible solutions – rather than categorisation, which forces data into preconceived constraints to fit expectations and limits the possible solutions.

There are several methodologies that consider decisions similarly (the Stacey Matrix is one, albeit different in approach); Cynefin was born from the processes Dave Snowden explored while working at IBM Global Services to help manage intellectual capital, and was then developed further into a framework using scientific methods to evolve and comprehend (what ended up being understood as) complexity.

 

The Cynefin Model

 

Currently Cynefin consists of a 7-domain model: two liminal domains (an open one, Complex <-> Complicated, and a closed one, Chaos <-> Complex), one investigative domain (Disorder), and four main domains (Chaos, Complex, Complicated, Obvious), with each of the four having a sub-domain containing 9 distinct areas, only the centreline of which gives coherent transitions to the conjoining domains.

 

Cynefin Sub-Domains, Liminal Domain Transitions, and the Path of Coherency

 

The Obvious domain and the Complicated domain are both ordered domains. Complex and Chaotic domains are unordered. Each domain has unique Practice that is applied, and a different methodology of decision making in order to sense-make. Each has its own action process; each has different numbers and types of constraints that define it.

 

Cynefin order and un-order

 

In its simplest form, Cynefin allows you to categorise and understand a situation’s basic state. In its deeper forms, it allows highly detailed understanding and application of concepts to resolve events in the best possible favour.

Scrum, Agile, Lean, Kanban and Kaizen all fit into the liminal domain between Complexity and Complication; they are methods of transitioning, probing, and resolving from un-order to order (and potentially back). Waterfall fits into the Complicated and Obvious domains (order), and is limited precisely to those areas. As soon as a transition occurs away from order, it is no longer suitable.

Cynefin deals with the Social Complexity quadrant of the epistemological matrix, which is un-ordered and heuristic, reflecting the humans that define, drive, and live within it.

 

                 (Rule-based)                 (Heuristic-based)

(Un-ordered)     MATHEMATICAL COMPLEXITY      SOCIAL COMPLEXITY

(Ordered)        PROCESS ENGINEERING          SYSTEMS THINKING

Cynefin Knowledge Management Matrix (Cognitive Edge)

 

Conclusion

Hopefully this has outlined these concepts! I think it’s interesting that, in every example above where there is a certification track, business very quickly becomes more focused on the qualification than on the core concepts, because it’s a trackable identifier. (Even recently, when I was asked what I did and said I was a teacher of teachers, I was (fairly aggressively) asked, “What are your qualifications that you can say you’re a teacher?” – suggesting a conditioned adherence to an identifier rather than to the years of available results!) The trap here is that a certificate signals only that the consultant is experienced in certain aspects; a level below that, the certificate track can actually be a sign of misunderstanding and misapplication – or, to put it better, an ossifying of the core concept.

It is important to understand that a certification or qualification is the beginning of understanding and application, not the end.

For me, as with most things in life, this comes down to a balance; it is possible to achieve a level of agility in some areas and a level of certainty in others, and all of these concepts, as we have seen, have their places (and times) in complexity. You cannot therefore simply template, simplify, condense, and certify (and thereafter not deviate) without running into the undeniable reality of variance (especially when you seek to remove all variance!). No one approach is perfect; a combination of them all is often required, with the understanding that Cynefin and similar frameworks are methods of comprehension.

Every one of these approaches relies on communication, learning, and investment to succeed.

 

Why we should be Meta-Agile

Interpersonal connections, agile management, waste management, resource flow management, continuous delivery and improvement, complexity adaptation and exaptation, value and delivery coherency, and teaching and learning patterns all combine into a holistic (or symbiotic) method of understanding, cohering and progressing at every level of an ecology. This is something I had trouble defining until recently, when I realised I have an overarching approach that is, I guess, Meta-Agile.

Understanding and choosing when to use any or all of these is critical – all too many organisations pick one they like (a buzzword, one traditionally used, a fad, or one that worked for another company) and focus on that alone, without realising that the entire landscape may require shifts between them (or multiple simultaneous applications) to maintain effectiveness; Meta-Agility, if you will. An agility overall, behind the currently accepted, variable definition of Agile.

Finally, we need to use the principle to succeed, not be seen to be merely using the name of the principle. The two are not the same.

I hope this has given an interesting overview of the terms; what I do actually floats in the ecosystem that exists between all of these concepts (I’m absolutely coining #MetaAgile to use alongside #Opsdev!), and the teaching and learning pattern specialisation binds them all together and allows them to be communicated and understood effectively.

There are plenty more concepts and frameworks out there that are as effective as anything here. More on those another time!

 

All frameworks, concepts and methodologies discussed in this blog remain the property of their originators.

 

Scuba Diving Part II: Unexpected Verification

Having now been diving again, and having (ironically) experienced a very interesting complication, I can add briefly to the previous post, Of Scuba Diving, Cynefin, & Value Delivery.

 

Something really sucked

My first dive was not a success. The delivery of the value was, well… sub-optimal isn’t quite the word. I took an unknown quantity with me (a technical diving wing and plate) as part of my own gear, along with a new regulator setup. This is supposed to be better than rental gear, relaxing you and improving all aspects of the dive. To my shock and horror (a bad thing underwater at ~24m), my air was going down as if I had a leak. 180 bar in 28 minutes is not normal!

I’ve never seen anything like it. This was using low weight (which was also odd: I normally dive with 2kg, and ended up needing a lot more even to descend on the first dive) and using my breath for buoyancy, not heaving like a runner on land as many people tend to when they start diving. I know it had been a while, but… this wasn’t normal.

Luckily feedback is constant with diving, especially if you wish to continue breathing, so I had plenty of time to consider options and causes. I tried upgrading to nitrox at EAN32 and a 15l tank for the next dive… I managed 37 minutes at max depth of 29m.

For those who don’t dive, this is ridiculous. A 32% oxygen (nitrox) mix with a bigger tank should give far more bottom time than standard air, yet by the time we surfaced I was at an incredibly (and almost dangerously) low 20 bar. You should always plan to surface with 50 bar as a reserve, and I had never failed to do so before. I was sucking incredible quantities of air, despite some experience and careful usage. Admittedly, it had been a year since my last dive, which is quite long, but this was still way out of projection.

So what happened? Why was I suddenly emulating the finest vacuum cleaner? And how is this related to my previous article?

 

Context Matters

Firstly, I was dealing with a set of unknowns. I’d never used this rig before here – only in fresh water over a dry suit, with the guy who sold it to me as a “huge improvement for trim and diving”. I’m not a tech diver; I don’t have the rig, the gear, the training, or the cold water diving experience to utilise it, so relying on his expert advice was in retrospect more about him selling the gear and less about what was right for me as a more tropical diver.

Rule # 1 – ALWAYS test dive gear and/or consider context when possible! I didn’t, and this is how we learn. Long term, there is no failure, only feedback.

Immediate differences were apparent: the water was salt, not fresh. The temperature was higher. I’d never used this getup here before. It was a new dive site. It was significantly less comfortable than over a drysuit in a lake. I’d only had 2 hours’ sleep after travelling for around 10 hours (NOT advisable!) and was fatigued and stressed. There were multiple unknowns, and they did not match my projections. I only realised this midway through the first dive.

Does this sound familiar in business? A plan gets set up, it’s worked before, so no one checks this time… it’s only mid-rollout that it becomes apparent that things are not as they should be, and panic and scrambling ensue whilst the stakeholders are assured everything is under control. The perception of value delivery becomes more important than the reality.

What you do must be gauged against the current situation, not estimated solely against the past, if you wish to accurately ascertain the data and act accordingly. I was out of context, and until I considered the context, I could not begin to resolve the issues.

 

Constraints needed identifying

Secondly, I had misjudged the constraints on the dives. What was planned and tested in dive prep – hypothetically, practically in a different context, and seemingly obvious or complicated! – turned out on application to actually be in disorder. This is what Dave Snowden talks about when he mentions the danger of assuming a domain from the start and acting on that assumption. To be fair, given prior experience, it should have been clear, but I hadn’t factored in new constraints which were absent from, or different to, other dives (and, as above, context is key).

A dual-bladdered tech wing with drag and combined 90lbs of lift is not suited to my recreational diving practices. This I now know. It is far too buoyant despite a steel back plate; it changed the limits on air and usage, and the trim was ok, but not vastly improved. It was uncomfortable, stressful to don, and stressful in the water.

I didn’t test; I didn’t cover the new context to understand how the constraints could affect me differently. Once the dive was under way and I saw my remaining air dropping like a lead weight, I realised the situation was not only disordered instead of complicated, and resolving into complexity, but in real danger of failing into crisis.

Had I been less aware, and not carried out the obvious/complicated steps and constant checks during the dive (real-time monitoring is key in complexity probes!), I would – without doubt – have considered myself in the ordered domains and likely tipped over the cliff-edge into complacency-induced catastrophic failure (read this as: NEVER fail to regularly check your air on a dive!).

In the end, those constraints – which I understood and knew about, but had not redefined contextually – limited and severely disrupted my dive (and the dive of some of those around me). Some constraints in diving don’t change; and instead of working with them, I ran up against them as fixed constraints governing my dive.

Hand in hand with this went Practice. Best practice was not achievable; I adhered to good practice where I could, but it became very clear very quickly that I was in the realm of emergent practice.

 

Complexity encroached…

…and came dangerously close to chaos. This can happen at any time because, as mentioned previously, dives contain a number of areas by nature out of our control.

My situation was still safe-to-fail; even had I hit zero gas, both an instructor and a divemaster were on hand to give extra air as we ascended – plenty for the safety stop, which is a requirement (in complication). Remember Stop-Breathe-Think / Probe-Analyse-Respond; I had multiple options to consider to end the dive safely, but nevertheless, with the stress and confusion of what was happening, I could see the pale edge of panic and understand how even experienced and calm people could cross into it.

This point is where a lot of divers WILL panic, despite the safe-to-fail alternatives, and when you lose reason you are in serious danger, especially in a situation that requires reason from the outset (we’re not designed to breathe underwater, so everything must rely on the application of the reason that led to the setup and implementation of the circumstances; our instincts cannot and do not help us here).

I could see crisis management looming, and I relaxed and took stock so I could avoid it. If you don’t do this when diving, you are in real trouble. The trouble was, this detracted from the dive and the goal – which I achieved, but would rather have spent longer experiencing!

 

Analysis

I decided to consider what had happened, and grouped data by possible impact. My new regulator had a venturi switch (which I’d never had before – it governs pressurised airflow through a system), which I forgot to switch on. Perhaps that had an effect? I hadn’t dived for a while. Perhaps that had an effect? What data could I look at?

I pondered what had changed since my last tropical dives, and decided that rather than the ever-tempting process of categorisation, I would allow new understanding to emerge from the data.

So I tested different configurations. Gas, tank size, weight, etc., all in the correct medium (salt water, which has different buoyancy). The regulator was discounted as an issue (it was brand new and very efficient, and unless a regulator is leaking it turns out to have very little effect on consumption).

I tested ideas by referring to multiple instructors at once, people who do this every day and have different gear and requirements, both men and women (air usage is heavier in general for men). I ran distributed brainspace probes for possible issues, and many possibilities were thrown up; at least one was known to be naïve (I asked other divers and even students what they thought). The multiple experts knew the dives and how the baseline metrics within a given scope should work.

We had a lot of different ideas, and what came out were three main points:

 

Context was key. The environment, and what others were successfully using, needed to be the baseline. Possibilities such as the new regulator making that much of a difference were eliminated on expert advice.

Differences to the last successful contextual dive were crucial. (All rental equipment then! This time, my own gear – but having the diametrically opposite effect to the one expected.)

Elimination of differences, one by one with feedback after each, to see what happened. (Again, I had suspicions).

 

Finally, I tested states of mind and methods I knew had worked in the past, and evaluated why they might not work here. I got some sleep; I relaxed; I still found issues.

What was left after multiple probes, concurrent mindspace sharing with experts, my own gut feeling, and multiple dives with different setups appeared to be one key factor:

The tech wing and plate.

 

Resolutions

We couldn’t ignore the data that had emerged from our discussions and tests; there was little left apart from a significant change in my physiology, which was not a pleasant thing to contemplate.

So for the next dive, I hired a regular BCD (standard dive jacket), connected it to a normal tank of air (a baseline against the other divers again), and used my regulator.

Three of us dived. I came up with roughly the same air as the divemaster, after a relaxed dive with near-perfect trim (I trim weirdly, more on that another time) – 50 bar after 39 minutes, max depth of 24m.

Let’s put that in perspective: an EAN32 dive with 15l lasted 37 minutes and nearly ended in crisis; an air dive with 10l lasted exactly as planned, in line with the divemaster in fact, and I was not the one who ended the dive; my buddy was. I was calm, collected, enjoyed the dive, and found myself exactly back where I remembered being: the peak of efficiency, trim, value delivery, and experience. The difference was astounding; all my anxiety and fear had vanished.

(The relief you find when it’s not actually something intrinsically wrong with you is… profound!).
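
To make that comparison concrete, here is a rough surface air consumption (SAC) sketch. The 200 bar starting pressures, the first dive’s end pressure, and the average depths are my own assumptions for illustration only; the post records only the second dive’s end pressure, duration, and max depth.

```python
# Rough surface air consumption (SAC) comparison of the two dives described above.
# The 200 bar starting pressures, the first dive's end pressure, and the average
# depths are assumptions for illustration only.

def sac_litres_per_min(start_bar, end_bar, cylinder_litres, minutes, avg_depth_m):
    """Gas breathed per minute, normalised back to surface pressure."""
    avg_ata = avg_depth_m / 10 + 1                      # absolute pressure at average depth
    litres_used = (start_bar - end_bar) * cylinder_litres
    return litres_used / (minutes * avg_ata)

# Dive 1: 15 l of EAN32, 37 min, assumed 200 -> 50 bar, assumed ~18 m average depth
# Dive 2: 10 l of air,   39 min, assumed 200 -> 50 bar, assumed ~16 m average depth
print(round(sac_litres_per_min(200, 50, 15, 37, 18), 1))   # ~21.7 l/min under these assumptions
print(round(sac_litres_per_min(200, 50, 10, 39, 16), 1))   # ~14.8 l/min, a far more relaxed rate
```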

What did I do here with regards to Cynefin?

 

I recognised that I was in a complex, unexpected scenario; I probed, analysed, and then changed significant constraints based on context.

 

From this point on, my dives transitioned back into complication, and progressed as planned.

How directly analogous is this to the business situations mentioned in the last blog post, where a company makes assumptions, often untested, and then finds itself fighting to mitigate or avoid disaster while still trying to deliver any value? Often, changing a constraint in complexity delivers a profound change – and delivering value is, ultimately, the primary goal (outside staying safe in both the long and short term).

 

Conclusion

So it turns out that my suspicions were correct, and heavily influenced by complexity – not only the new site, the new experience, the time since the last contextual dive, and the new gear, but also anthro-complex considerations such as stress, fatigue, and the alarm induced when expectations were not met. It was my new, context-untested gear I had made assumptions about, but all these things had an impact. This was not an obvious or complicated resolution.

I still achieved my goals despite it; I dived with Thresher Sharks, and got some amazing footage (I might even pop some on here).

I wasn’t expecting to have to apply elements of the last post so soon, but I’m glad I did, in a way; it validates the comparisons.

Safe coherence out there!

Of Scuba Diving, Cynefin, & Value Delivery

It struck me recently that a good way to understand and perhaps even react to the challenges of modern management science and organisational value delivery might be to consider scuba diving.

Imagine, if you will, that an organisation or project might emulate a scuba dive, with a remarkably similar line of coherency through Cynefin.

What on earth am I talking about? Bear with me… I’ll explore organisations, basic Cynefin principles, workflow, and, of course, the diving part.

How is this relevant to business?

The current issues faced by organisations are challenging enough that entire consultancies and processes have sprung up relating to Agile, Lean, Complexity, Problem Solving, Value Delivery, Training, and other integrated practices. Each one of these is part of a whole, flexibly applied approach, rather than a singular answer.

That entire industries – let alone organisations or business units – are now traversing a little-understood landscape, one that leaves them intensely pressured and losing value delivery, is becoming widely recognised. Both Dave Snowden (Cognitive Edge) and Katherine Kirk (Agile Coach/Speaker) speak globally on the subject of the changes in management, industry, and business resolution, and more and more companies are realising there is a piece of understanding missing around the delivery of value.

It is extremely difficult to persuade leadership to go against tradition, company culture, and the tempting expectation that data can be summarised for simple, repeatable decisions, even when these are clearly impeding innovation or expansion (as is now seen cross-industry – the stifling of innovation and a downwards dive of productivity, Snowden). Often, either an adjustment may instead need to be made somewhere in the organisation where it can be safely demonstrated, or the environment shifts such that the organisation has no choice but to react appropriately to survive. The second is usually not desired, as it probably requires crisis management – but handled correctly, this is where true innovation also lies (Cynefin, running innovation solutions teams with crisis management teams, Cognitive Edge).

In both instances we have instinctive or conditioned reactions which may worsen the situation – requiring a more reasoned approach – and a general inability to intuit the necessary actions.

 

So why the sinking feeling?

In pondering upcoming dives and my current consultancy, it occurred to me that there were some remarkable parallels and takeaways between business and diving, and that the latter could be used as a good example of some of the concepts.

Scuba diving is an interesting lesson in avoiding reductionism, agile assessment of situations, considered action with the ability and requirement to act immediately if appropriate, refining a plan to get the maximum effect with limited resources, and required planning and high levels of order that can be – and are – still immediately affected by unpredictability and complexity. At the same time, all divers involved strive to improve the dive as much as possible until the dive ends; kaizen, if you will.

You require strategy, tactical responses, and a lack of politics and ego for a dive to be safe, productive, and succeed. Every diver is a stakeholder, and empowered to give valid input; every diver drives success of the dive.

In any situation when you are diving, you are in an inimical environment that is extremely unforgiving of the unprepared or error-prone. Most of this is easily avoidable via preparation, understanding, and action (or calculated inaction). Recognising warning signs is key, because your options are constrained by several critical thresholds.

If you encounter an issue when you are diving, from a minor adjustment up to a major incident, there is a standard response:

Stop-Breathe-Think-Respond

Following these steps as much as possible is critical, as panic not only drastically increases the use of your limited, most valuable resource (in this case, air) but can also lead to loss of life.

For me, this sequence is an interesting parallel/precursor to engaging the more involved responses of sense-making and Cynefin.

 

Cynefin – a closer look

This is a good moment to explore the basics of Cynefin and how it can be used to optimise organisations and situations. I’ll go into more detail in another post, but for now, we’ll focus on the basic model and what it contains, and I’ll give some diving and business related examples (and hope it makes sense!).

Cynefin is a framework created by Dave Snowden and Cognitive Edge, and is a constantly evolving, science-based method of understanding anthro-complexity and how to best manage human issues. Human issues affect everything in our lives, because everything we do relies on human interaction – organisations, products, services, families, and more are MADE of – or by – humans. It works on a naturalistic basis to allow sense to emerge from data rather than the usual human practice of attempting to force data into categories for understanding; this latter approach often constrains our perceptions and our options, but is our usual method for dealing with things.

Management science and organisational disruptions are two areas Cynefin has been applied to with great success.

 

The Cynefin Model. All rights reserved Cognitive Edge

 

Here we have a very basic Liminal Cynefin model, with seven domains. The main four domains are:

Obvious, dealing with ordered things anyone can grasp, such as moving a mouse on a computer and watching the cursor move with your actions, or swimming up or down to move up or down in water; direct and obvious cause and effect. The danger here is complacency – because if failure happens, you fall off a “cliff-edge” into chaos and crisis.

Complicated, dealing with ordered things requiring expertise to understand, such as developing in a coding language, or understanding the gas mixes at relative depths; multiple possible causal links.

Complex, dealing with unordered things that are not obviously causal and require experimentation and feedback to understand, such as a new software release’s impact and estimation of success in a marketspace, or currents and weather changing during the course of a dive; no causal links and a requirement to probe before you can respond appropriately.

Chaos, dealing with unordered things that have no causality and are in a state of crisis/emergency, such as new software blue-screening multiple clients’ mission-critical servers upon release, or a sudden loss of buoyancy control underwater; no time to explore cause and effect, you must act immediately to avoid catastrophic failures. Innovation typically lives here.

In addition, we have the central domain of Disorder, in which we are not yet sure which major domain a situation falls into, and two liminal domains:

Complex/Complicated, which is the liminal dynamic through which you can transition from un-order into order (and back if required). This is where Scrum and similar Agile concepts, Lean, Kanban, and Kaizen (and others) sit. I’ll cover these in another post in more detail.

Chaos/Complex, where controlled shallow dives into chaos can be performed to spark innovation and new goals, or you can move from crisis to complexity by the imposition of constraints.

It’s worth noting that Liminal Dynamics (i.e. the transitions between states) are at least as important as fitting things into the major four domains, and constraints and practice are both areas that influence understanding too, but I’ll attempt to cover Cynefin another time with regards to problem solving.

The last two things I want to mention about Cynefin here are that 1) order and un-order are both manageable and have different applications, briefly explored in my post Fearing Change and Changing Fear, and 2) each major domain has a sub-model which traces a path of coherency (the logically supported optimal continuous pathway of productivity in business; a way of understanding the degree and nature of evidence that supports either a planned action or a situational assessment – Snowden) and links the domains from Chaos through to Obvious. Again, more about this another time.

So, there’s some Cynefin in a nutshell!

 

Relating Cynefin to scuba diving

We can compare these by investigating the actions integral to diving. With regards to the base scuba diving precept of Stop-Breathe-Think-Respond, you would encounter it mostly in the Complicated, Complex, and Chaotic domains once the dive has begun.

Sense-Categorise-Respond

The base planning is set often in the Obvious domain: for example, the set up. Have you checked your BCD inflates? Have you checked your air quality? Have you cleaned water out of the connector? Have you used your second stage so you know you can breathe? Do the gauges work? And so on. These are step-by-step stable best practices anyone can (and must) carry out which are vital to success.

Sense-Analyse-Respond

But then we have a foot in the Complicated domain. Do we need to calculate the O2 percentage (and limits) of a nitrox mix for a mixed-gas dive? What is our calculated depth limit so we don’t potentially die from oxygen toxicity? How are we monitoring this? What depth limit and surface time, what decompression time will be required? These require analysis and expertise. Not everyone can do this intuitively or simply follow instructions, because there must be understanding, experience, and responsibility. You shouldn’t get on a plane within 24-48 hours of diving because of pressure differentials, for example, but without certified knowledge you might not know that.
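
As one concrete example of the expert analysis involved, here is a small sketch of the standard Maximum Operating Depth (MOD) calculation for a nitrox mix; the 1.4 bar working and 1.6 bar contingency ppO2 limits are common training values used purely for illustration, not figures taken from this dive.

```python
# Maximum Operating Depth (MOD) for a nitrox mix in seawater, using the standard
# formula MOD (metres) = 10 * (max_ppO2 / O2_fraction - 1).

def mod_metres(o2_fraction: float, max_ppo2: float = 1.4) -> float:
    """Depth at which the mix reaches the chosen partial pressure of oxygen."""
    return 10 * (max_ppo2 / o2_fraction - 1)

print(round(mod_metres(0.32), 1))        # EAN32 working limit     -> 33.8 m
print(round(mod_metres(0.32, 1.6), 1))   # EAN32 contingency limit -> 40.0 m
```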

Probe-Sense-Respond

Complexity is moved into as soon as we’re off, even before we’re on the boat. Weather can change quickly. Currents change. Visibility changes. The plan may become unfulfillable as set out. Many unpredictable factors occur that require us to probe the process and change goals based on the feedback, both as the dive commences and as it continues. Stop-breathe-think/probe-sense-respond. Concurrent dives may also occur in multiple adjacent locations to maximise chances of success in uncertain conditions, which can then be taken into account for future dives.

Act-Sense-Respond

Thankfully Chaos doesn’t happen often, but it is always a very real danger on a dive. A malfunction, an environmental shift, or lack of experience or ability can turn a peaceful relaxed dive into a stop-breathe-think/act-sense-respond emergency scenario.

For example, the time a novice had trouble with buoyancy and was ascending in 7m of water with propeller-driven boats overhead springs to mind. This was rapidly moving towards a crisis area requiring action. Before the Divemaster could intervene, another novice grabbed his weight belt to help pull him back down; it slipped down to his ankles, making it worse – he shot up like a cork!

It wasn’t deep enough for decompression problems, but it was shallow enough for boat-to-the-head problems, which can be quite terminal.

This was lurching into duffers better dead (Snowden/Ransome) territory a little too accurately. Immediate crisis management was implemented (the Divemaster and I grabbed a fin each and gently pulled him back down whilst his belt and BCD were fixed), using innovation (we used a typically non-tactile bit of gear to stabilise him as he gained practical experience of adjusting critical gear underwater, subsequently explaining this to the others post-dive). We then transitioned back into the base “project” of the dive with thankfully no damage except his frantically-used air, and a number of lessons learned by all the newcomers. The dive was ultimately cut short as a result, and deviated from the route.

So we traversed a path of coherency, including a recovery from crisis management. There is generally more response time for this in business, but considering the organisation as an organism (or better, an ecology) means it’s relative, and just as impactful.
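
Pulling those four sequences together, here is a minimal sketch (my own illustration, not official Cynefin or dive-agency material) pairing each domain with its decision sequence, prefixed with the diver’s Stop-Breathe-Think baseline.

```python
# A minimal mapping (my own illustration) of Cynefin domains to the decision
# sequences discussed above, each prefixed by the diver's Stop-Breathe-Think routine.

from enum import Enum

class Domain(Enum):
    OBVIOUS = "Sense-Categorise-Respond"
    COMPLICATED = "Sense-Analyse-Respond"
    COMPLEX = "Probe-Sense-Respond"
    CHAOTIC = "Act-Sense-Respond"

def dive_response(domain: Domain) -> str:
    """The in-water routine: settle yourself first, then apply the domain's sequence."""
    return f"Stop-Breathe-Think, then {domain.value}"

for d in Domain:
    print(f"{d.name.title():12} -> {dive_response(d)}")
```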

Diving is more critical to us because we can’t breathe water, and we can’t ascend uncontrollably without risking serious injury. We are in a hostile environment, and we know it every second. We are forced to deal with this to be safe. But business should be considered in every bit as critical a fashion, as the market is also hostile and unforgiving, and critical timescales are relative (companies are bigger and slower). Sink or swim; complacency kills.

One note of interest is that there is no single fail-safe per se in diving, because if something can go wrong, it will go wrong. Instead, there is a strong concept of multiple concurrent options that can be implemented during a scenario, each with a variable chance of being the best option depending on circumstances (off the top of my head, I can think of four if your main regulator stops giving air, for example). It’s not quite safe-to-fail probes in complexity, but it has similarities. Scuba diving is about resilience for the sake of safety.

Most of the domains of Cynefin are passed through on many dives in one way or another, and an organisation is immersed in them and has its own line of coherency through them too, both in part and as a whole. Resilience is key in business, too, both at an organisational level and a project level.

 

So what can we learn from all this?

Now you’ve read the above, take a moment to try applying this to a past or present organisation, and see what correlates. You might be surprised how many of the same domains fit a business’s goals, culture, methodology, and leadership requirements as fit the overall structure of a dive.

Consider how approaching a situation as if it were a dive could have provided better results (or not!). Consider as well how what you understand of the concepts of Agile, Lean, WIP limitation, improvement, and problem solving would apply to a dive, and to your example organisation.

You can also try using examples from a dive, which are simpler than a company’s projects, and apply them to situations you feel parallel. I’d love to hear some of them in the comments.

The purpose of all of this isn’t to focus on diving, of course, or suggest it’s immediately translatable to business; it’s to prod a different perspective, another application of principles that are key to both.

 

Should we run organisations/projects more like a dive?

I think there is a good argument for consideration, if nothing else!

Diving is an interesting operation which succeeds when it is collaborative; everyone diving is a stakeholder. Everyone is empowered to make suggestions, get the attention of the group, and – if one diver suffers a mishap – the group responds as a whole to mitigate the issues and produce the optimum possible continuing flow of the dive. As soon as you hit the water, every stakeholder is continuously reacting and improving the group’s experience – via buoyancy, adjustments, suggestions, and constant, communicated feedback from the surrounds. You inevitably experience better flow by the end of a dive than at the start.

The concepts of Kanban are very much involved. Every diver consumes air differently, has different buoyancy, trims differently – all of these affect the overall dive time, depth, and quality on an in-progress basis. Movement speed is limited to the slowest mover; dive time is limited by the first diver to reach their air reserve, by body composition and potential hypothermia, or even by a descent into the chaos of losing track of a dive buddy, at which point an immediate constraining if/then scenario kicks in and the entire dive ends.
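
As a toy illustration of that limiting-constraint idea (my own invented numbers, not a real dive plan), the group’s working dive time is simply the minimum of each diver’s projected time to their reserve pressure:

```python
# Toy illustration of the "first constraint hit ends the dive" idea: the group's
# working time is the minimum of each diver's projected time to reserve pressure.
# The figures are invented for illustration.

divers = {
    # name: (bar available above reserve, projected consumption in bar/min)
    "guide":   (150, 3.5),
    "diver_2": (150, 4.5),
    "diver_3": (150, 5.5),   # the heaviest breather sets the limit
}

def limiting_dive_time(group):
    return min(bar / rate for bar, rate in group.values())

print(f"Plan around ~{limiting_dive_time(divers):.0f} minutes")   # ~27 minutes
```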

 

 

Safety is first, success/enjoyment is second, and new goals may arise as all divers experience multiple probes regarding the direction, focus, or decisions of the dive. Strategy is adhered to overall, but the optimum path is per dive, not set for every dive. The dive emerges from the situational data; we don’t categorise and limit the dive unless it crosses into chaos. Although there is overall loose hierarchy in terms of a Divemaster/guides, it is more of an ecosystem, where everyone is focused and has stakes in delivering the value. Everyone is trusted to do so. By and large, everyone delivers. It’s worlds away from how businesses mostly run, but it’s startlingly similar to how businesses are beginning to understand they should run. The interest in Agile and similar methodologies is huge, but sometimes poorly understood; industry has yet to decipher the new landscape of value delivery.

Loosely translated to a company, you can probably see how running in this fashion would be efficient and beneficial. Long term company safety is critical; there’s no point in succeeding in an immediate project if it harms the longevity of the organisation. Projects should enhance it! Collaboration and investment throughout the entire value stream is key, and organisations should not be afraid to work towards goals that may shift slightly. Understanding how and why is important.

 

Applying these parallels to Business

Many organisations are currently adrift in an unfamiliar – and, in terms of understanding, hostile – environment. With the fast shift of the global economy, service-driven offerings, and the clash of bureaucracy and entrepreneurialism, the drive to achieve, to innovate, to be successful has never been higher – and companies can struggle to keep pace.

I’ve seen organisations getting more and more desperate to deliver the value they know they contain, only to stall due to reliance on the old methodologies of getting back on track (cuts, leadership changes, management fads, old-school management techniques, reductionism, simplified recipe transplants, demands to innovate something to stay relevant, and so on), or a misapplication of the new buzzword, Agile, and related patterns.

Incorporating Agile techniques is a very valid and beneficial action, as long as it isn’t immediately constrained and “certified”, or only adopted in the manner of a Cobra Effect – in other words, appearing to be done, using the language and visible basics, but actually not being undertaken correctly and sparking zero cultural change.

And therein lies a large part of the issue – Agile can’t be Agile if it’s constrained and simplified as a recipe for organisational transplant, or if it’s not really implemented at more than the surface, but these approaches are how many companies appear to be trying to implement it.

“The name of the thing is not the thing – most of us buy the label, not the merchandise” (Weinberg).

It’s also worth noting that no single framework, and consequently few single consultants, actually hold the keys to everything. Agile, Lean, Kaizen, Scrum, and other manifestos/frameworks are all part of something larger, and fit in certain places and not others. Cynefin is a little different; it is an attempt to scientifically and naturally understand this, by defining actions and language, gauging complexity according to naturalistic methods, and allowing the data to give us sense rather than attempting to forcefully categorise it to suit us – because, as we’ve seen, being in the midst of situations tends to limit us to those situations. The fact is, organisations need to adapt to complexity to survive; with a few large exceptions, the world no longer tolerates businesses that can adapt everything else to fit themselves.

On a dive, all divers pay attention to all divers because the success of a dive hinges on all divers. It’s now becoming more obvious that an organisation should pay attention to all its people because the success of the organisation (and subsequently all those people) hinges on all those people. I refer back to my last blog post where I quoted Katherine Kirk, saying “Organisations and people ALL matter, because they drive, innovate and ARE value; we matter because everyone else matters”. I’ll probably keep repeating it, because it’s true.

 

Plumbing the depths for answers

So, when it comes to understanding Cynefin and running an optimised, lean, agile organisation, you could perhaps do worse than consider the comparison to a scuba dive.

Am I suggesting that a dive plan is the same thing as, say, a Scrum sprint flow? No, of course not. A typical dive could only ever have elements of more complex business practices. The idea is – with slight tongue in cheek – to recognise the similarities, understand the benefits of how a dive operates, spark a new way of viewing things, and realise how complexity affects all areas of our lives and must be assessed accordingly to plot a coherent path.

Prepare, recognise warning signs, relax, and deal with situations appropriately; recognise all the stakeholders and the value stream delivery for people; and take satisfaction from successfully completing the delivery of that value. Every dive is different, ever-changing – that’s part of the fun! That’s how business works, too, but organisations are still mired in hierarchy and rigid constraints, and from within the mire it’s difficult to understand how to regain innovation and change for the better.

It’s hard to navigate when you’re too close to the sun – that’s why a neutral consultant or coach can give new insight. They are not restricted by the inbuilt constraints. A consultant’s very lack of deep-level expertise in a subject can be a great benefit (although having knowledge can also be very helpful), and can ultimately help the organisation learn to deliver its value with a minimum of issues. They are jigglers, to borrow another phrase from the esteemed Gerald Weinberg: facilitators who have experience and knowledge, not to expert levels within the issue, but who can help those who have it find stability and a new direction.

(Some of them are divers, too.)

 

 

Concepts, Games & Exercises for Engaging the 6 I’s

Following on from the previous free infographic 6 Ways The I’s Have It, here are some concepts, exercises, and games to help understand the importance of the 6 I’s:

 

Interest

Without initial interest, there is little personal incentive.

What stands out? What are the overall learning outcomes? What do you notice, that gives this relevance? What catalyses your desire for this subject?

Inspiration

Without finding inspiration, there is little drive to understand.

How are you motivated to be involved, to use this? How can you see it benefitting you? How can this improve your day to day life and usage? What makes you YEARN to use this practically?

Involvement

Experience is the greatest Teacher.

What is the best way to learn this? How can you engage yourself so you can understand and apply this? How can practice help you discover and absorb the concepts, information and methodologies? How can you Learn by Doing?

Immersion

Without immersion, you may lose what you have learned, and you are unlikely to learn further.

How can you make this new knowledge part of yourself? In what ways can you regularly suspend yourself and your role within it? How can you continue to learn after the initial class? How do you ensure you retain this new understanding?

Investment

The best return for learning comes from use, reliance, and capitalising on both.

Do you believe in what you have learned? How can you use this to further yourself and/or your role? How can it benefit your organisation? What is the real-world return you will get here, day to day? How will your learning continue to grow beyond the class?

Instruction

We learn best by teaching others.

How can you Interest, Inspire and Involve others? How can you both pass on and retain ever-deeper knowledge of what you now know? How can you help others applicably understand and retain this? How can you continue to understand more and deepen your knowledge?

These I’s represent an instructional ecosystem that both teachers and learners are equally a part of.

 

Games & Exercises

Learning is in the I’s of the Beholder!

 

Informal Exercises for Teachers

These can either be approached individually as a teacher, or used as informal exercises at the beginning of a class:

List, in random order, basic aspects of the subject you want to teach, and ask learners to pick out what interests them, and why

Ask learners at the start what they would want to teach others about the subject, based on the basic concepts, and at the end ask them how they would now instruct others to inspire them in turn

Ask learners to think back to things they’ve learned in the past, which ones they’ve learned the fastest and most enduringly/completely, and why they think that is

Ask them to consider what the return on investment is for things they learn, and give examples of anything – language, driving a car, career-enhancing management techniques, etc. Ask them to expand this out to include more of an ecosystem, so how it would also benefit those around them and in turn benefit themselves even more

Ask for instances of where “use it or lose it” came true

Ask them what they think a teacher’s job is, and how they would teach the subject

These can be considered either individually, in small groups, or by the room at large.

 

Games to understand the Importance of I’s for Learners

These games can be used as a baseline to spark creative construction or combination of your own games. Feel free to use them in any way appropriate.

Interest:

Have a decent range of subjects on a board. Ask the class to, one by one, pick a subject of interest to them – a new one each time. Then ask them to explain why it is interesting to them and if they think they learn it better or not as a result.

An alternative is to ask them to pick two subjects from a range, one of interest and one of no interest, and to research for 5 minutes any information on both to tell to the class. When they present it, see how much more they do with the one interesting to them, then explain what they’ve done and why. Ask them to pick subjects they do not know much about.

The object is to help people connect with and find what is of interest, to them and others, and understand why we tend to only really invest when something interests us.

Inspiration:

Have a random set of subjects, including some seen as traditionally average, and draw one each. Split into groups of 2 and work on understanding what the subject is. Google is allowed! Then try to interest either each other (or the group, depending on how it’s played) and inspire them to want to know more about it.

The object is to help people see what can drive you to learn more about something interesting, and how formerly average things can be presented as inspiring.

Involvement:

Create a game where, to reach the end, everyone must be involved as part of the journey. An easy way to do this is to base it on a choose-your-own-adventure book (I will consider providing some for use for groups of 4/8/12/16 people at a later date!). One learner follows the pages, and makes a choice, then passes it to a random person (it cannot go back to someone who has already been) after the choice is made. At the end, a group decision must be made to choose the final ending. The stories can be in IT, services, industry, fantasy and so forth.

Another, more involved way is to have each randomly chosen person write the narrative forward based on doing the work and towards a common final goal, taking into account what was written before. Perhaps having choice pages constructed by the teacher would help keep on the rails; I’ll consider this game in further depth.

The object is to form connections within the group through a task that requires everyone’s practical involvement to complete.

Immersion:

Ask a student to tell a story about something that happened or could happen to someone else. This can be from a related set of subjects, be serious, be humorous, and so on. Afterwards ask them to describe what they see, feel, think about what they’ve described. Then ask them, or their partner if in 2s, to tell the story again as if it happened to them, or a similar story that did happen to them. Ask for the descriptions again, and get the listeners and the speaker to compare them to the previous story.

The object is to help learners understand how immersion and personal experiences are very powerful. Feelings – even simulated – are likely to be far stronger in the second story, as are the descriptions and the care for the subject.

Investment:

Have groups of two consider two methods of getting someone invested in a subject, then present them to the others. Have the class say which method is the one they would want to use and why.

It is also valuable to ask people to give examples of what is in their interests to be invested in – driving, for example – and how the preceding aspects can shape this.

The object is to prove how much further people will evangelise and utilise a subject they truly believe in, especially if there is a return for them in skills and understanding.

Instruction:

In 2’s, ask learners to teach something about a subject from Interest to each other, then swap, then again, until both people have covered an exciting and a less exciting subject. Note which one they are better at. Have them break down concepts and explain clearly, and see if they discover new perceptions themselves as they explain it.

The object is to show that the more you teach something, the more you yourself understand it – and can granularise it – to enable the understanding of others. Note granularisation is not equivalent to simplification (reductionism).

 

At the end of each of these games and exercises, it is wise to explore with explanation how and why it works, and ask learners how it applies to them.

Before the exercises or games are run, the environment should be made psychologically safe and relaxed to the point where they are openly discussable. I prefer smaller classes of a maximum (usually) of 8, so I can focus on mentoring rather than managing large groups, and so people feel better connected, more relaxed, and more able to speak to everyone.

 

Are these concepts helpful? Let me know in the comments!

Infographic – 6 Ways the I’s Have It

One way to consider the teaching and learning process, in any form – from a meeting to education to technical training – is as six key aspects of any teaching or learning:

For Teachers – those trying to impart concepts, skills, and ideas – these are things you should help facilitate or catalyse for learners, but not dictate or force – you are there to open the door for the student, not push them through it!

For Learners – those trying to understand concepts, skills, and ideas – these are things valuable to helping you learn deeply and broadly.

They flow through Connections, Concepts, Concrete Practice, and Conclusions, the 4 C’s of Sharon Bowman’s highly recommended book, Training From The Back Of The Room, and can chart a path from inexperience through to subject evangelism and teaching.

The next blog post will run through some games and exercises to heighten awareness of the I’s.

Fearing Change, & Changing Fear

 

The Fear of Change

(Quote image: Rosanne Cash)

 

Whether we experience it individually or within human constructs (religion, organisations, families, clubs, etc), there is a Fear of Change ingrained in us in both business and personal life. Humans are comfort-creatures; we value stability and comfort in our lives, be it professionally or at home. So what happens when the ever-changing Universe rudely reminds us that everything is, ultimately, transient?

It is very human to deny that change is happening, that a system has become (or always was!) un-ordered. The reaction is often to then try to impose order (constraints), and often we do this to systems or situations that cannot by nature be ordered.

Change represents the oft-acknowledged deepest fear of mankind: that of the unknown. We know we are here, and find comfort, even in uncomfortable situations; true change will really change things, and this can induce anxiety, worry, discomfort, fear – not only of the consequences, but the change itself.

If something isn’t working, it needs to change for it to begin working. Sometimes the fear of change is so great that we would rather it simply continue not to work, because at least then we know it isn’t working; in other words, we have some form of certainty. This, of course, isn’t helpful in the long term, for delivering value, or in urgent situations, and to accurately gauge this we also need to understand the benefits or risks of making the change.

 

But what if something is already working?

 

One response is: why change if it works? (which can also mean, if it sort of works well enough, maybe, also I don’t want to spend money).

Why indeed? But as with everything, this isn’t a black and white situation, much as we love to polarise. It may be barely working, or require workarounds to complete. It may be inefficient or cause rising/unnecessary costs, or added complication and hassle to daily life. If it works well enough, which is highly subjective, you have to ask if it is worth changing. If the benefits of change are outweighed by the risks or clear negatives, or it is poorly perceived or understood, it is probably not worth doing.

But if you take any organisation with working processes in place, the chances are high that people will usually say, “Yes, it works, sort of – but it could work much better” about many of them, and then specify where the inefficiencies impact their overall effectiveness and workload. (A problem I have often found is that, where an organisation does undertake to make changes – be it a new system, process, or team – it is usually a higher-level decision that often doesn’t fully provide training, positioning, and applicable usage to the people actually doing the job, and can be either too simplistic, over-complicated, or ill-applied – in other words, not appropriate to resolving the core issue. This is why listening to the people doing it matters).

If this is the case, and benefits clearly outweigh risks… why not change it to make it work better?

The place to start with processes, change and the fear of that change is the same: you start with the people.

 

Why start there?

 

All processes, all base decisions, and all value delivered stems from the people within an organisation. People are interconnected individuals working within an organisational structure towards a common set of goals in a variety of ways; without those people – and their interconnections – the innovation, the products, the organisation itself would not exist.

Another way to say this is that people both create and are the value delivered by an organisation. Or, to put it in a more succinct fashion, Value Streams are made of People (Keogh).

So, recognising that the value of your organisation is the people is an important step, for a number of reasons. It is people who fear change, not the products or the infrastructure within an organisation; it is people who make an organisation work.

People fear the change wrought in any organisation because it disrupts processes and workarounds that may work imperfectly but still more or less work, and allow at least some value delivery. Worse, it may cause further inefficiency and unnecessary stress, or expose workarounds that are not strictly in line with company policy – but bureaucracy may have left them no other choice to achieve their business goals, which brings potential personal risk into play even in a clearly failing scenario.

“It works well enough.” “Let sleeping dogs lie”. “Don’t rock the boat.” “Don’t stick your head above the rest.” “Don’t stick your neck out.” “Don’t be sales/delivery prevention.”

These are human rationalisations of not wanting to cause further potential problems, and they become progressively more about the fear of being singled out for causing issues, even if the root aim is to resolve perhaps more fundamental issues within the organisation to provide better, smoother value streams. Politics, bureaucracy, interdependency, and tradition can all turn what looks on the surface to be a simple change into a highly complex situation and possibly render a goal unattainable, even though it may be to the greater good of the organisation. In a perfect world, a flexible and reactive enough organisation – one that recognises itself as a complex system overall – shouldn’t need covert workarounds; experimentation should be built in.

A root of this fear lies in uncertainty. People require certainty to maintain stability, comfort, and (relatively!) low stress. Knowing a situation is good or bad is far preferable to not knowing if it is good or bad or even what it is, so the natural inclination is to maintain the status quo and not be singled out, as long as this isn’t disruptive enough to become worse than the potential uncertainty (there is a fantastic example of the effects of uncertainty in a study involving rocks and snakes used by Liz Keogh in her talks).

 

Why do organisations not recognise this?

 

Some do, of course, but not many seem to fully realise the causes behind it. One of the most important things to understand is that the landscape has shifted and is shifting in modern business, even recently. Knowledge has become the primary global economy, with business being undertaken around the world, around the clock, and data being digitised and made available and changeable at exponentially greater quantities and speeds than ever before.

The management of this knowledge and the methods used have become key to an organisation’s productivity, innovation, and agility (Snowden, Stanbridge). Sprawling bureaucracies have given way to entrepreneurial practices, and many companies are caught between the two, trying to apply often contradictory methodologies of both to their staff and their products.

At the same time, the latest and not yet widely-understood shift to virtual systems, the increasing use of AI, and knowledge digitisation has moved business to a realm we have no prior experience of or reference for, and this causes fear and concern because we are being forced to change at both a personal and industrial level. Organisations push back against this by acting as they always have, cutting costs, replacing management teams constantly, and so on, but the simple procedures that once worked do not produce new benefits past the very short-term now.

This is because, without realising it, we are now experiencing the Fourth Industrial Revolution (Kirk), which is an entirely new landscape requiring new understanding and actions. Because organisations do not have either, many of them currently “feel like they are in Hell” as a result of the Dark Triad (Kirk): 

 

• Stress

• Fatigue

• Antagonism (“Arseholeness!”)

 

…and they occur both at an organisational and a personal level.

One of the key reasons for these responses may be the still-existing, long-term investment in structures based on Taylorism (which dates back to the 19th century, yet is still a core of today’s management science), a root of Process Engineering. This can be interpreted as the belief (and action upon that belief) that an organisation is a machine with people as cogs or components that will consistently deliver the exact same output in quality and quantity – or, that an organisation is both inherently ordered and conforms exactly to rules.

 

 

                    (Rule-based)                (Heuristic-based)

(Un-ordered)        MATHEMATICAL COMPLEXITY     SOCIAL COMPLEXITY

(Ordered)           PROCESS ENGINEERING         SYSTEMS THINKING

Cynefin Knowledge Management Matrix (Cognitive Edge)

 

Despite the realisation for decades that Taylorism is actually detrimental, because that just isn’t how people work, and despite supposedly eschewing it in favour of a more Systems Thinking approach (or, where an organisation is ordered, with greater flexibility from using heuristics) and a shift in perception from “machine” to “human” (Peters, Senge, Nonaka), businesses have really only changed it slightly.

There has been a concerted effort to balance the Mintzberg et al Process Engineering-centric Schools of Strategy (Designing, Planning, Positioning) and the Systems Thinking-centric Schools (Entrepreneurial, Cognitive, Learning, Power, Cultural, Environmental, Configuration), but in my own experience of companies, especially some US-based organisations, I have still found a far greater leaning to the Process Engineering side with some nods towards Systems Thinking, and a greater perception of the organisation as a machine, not people. In other words, we try to force an organisation to fit the modified concepts of Taylorism because it is trusted and traditional, despite being proven ineffective, and act as if it will forever output the exact same quality and quantity.

Of course, even the most balanced approach here between the two still treats an organisation as an ordered construct with a variable spectrum of rules and heuristics, but the very presence of humans who can vary output, focus, workloads and innovation both within and driving an organisation dependent on a number of factors that aren’t necessarily causal or logical – that is to say, complexity – means an organisation can’t be a rigidly ordered system. It is by nature complex, un-ordered, but the tools we mostly use to resolve issues are based on it being an ordered structure with simple rules. The understandable preference, based on certainty and comfort, is to seek simplistic identically-repeatable approaches (“recipes”) based on clear and idealistic outcomes (Snowden).

 

 

Ontologies in relation to basic Domains (Cynefin)

 

What’s interesting is that people will try to manage an organisation as ordered when it isn’t, yet adapt very quickly to managing home life which is similarly un-ordered, often within the same day! This brings into focus the concept of our different identities, or aspects we transition between seamlessly to fit into different situations.

It is also very easy to miss that many instances can be multi-ontological. As a very simple example, if I run a technical training lab, I deal with an obvious domain in much of the basic subject, but also complicated areas; the systems I use to train are largely complicated; and the addition of students themselves bring complexity, as the students drive the class and every class is different to any before as a result (it’s rare that a class descends into chaos, but it’s not unknown, and usually requires outside influence!). So I can end up dealing with all three ontologies in one course! Order, un-ordered complexity, and un-ordered chaos all require different management, but they can all be managed (I touched on some of this briefly in my last blog post: https://www.involvemetraining.com/best-practices-vs-heuristics-in-teaching/).

 

 

The visible effects

 

By not changing from primarily Process Engineering thought structures for 50+ years of business practice, and many organisations not fully comprehending that the shift of many markets from product to service requires organisational agility (as a core concept, not a modular application!), markets are seeing the stifling of innovation and a downwards dive of productivity (Snowden).

This inevitably sparks a frantic reaction (change of focus, sudden arbitrary swerves to “disrupt the market” without recognition of opportunity outside a narrowly focused goal, cost cutting, redundancies, management team swap-outs, further cash injections, etc) without looking at what is working, and more importantly understanding that this is not a one-fits-all recipe that can merely be transplanted inter-organisation for success (Snowden).

It is becoming clearer that collaborative competitiveness, reactive approaches, SME level agility and innovation are where markets now grow in this new landscape of people being and delivering value via a knowledge economy, and this is a beneficial realisation for organisations struggling “in Hell” to take a first step into new understanding.

 

 

So what now?

 

“…Where we go from there is a choice I leave up to you…”

 

The more I look at the current struggles to achieve the results of yesteryear, my own experiences of the last twenty years plus, and the new evidence of Industry 4.0 (Kirk), the more I realise how accurate the above is. Interdependency and collaboration is clearly now essential in a new, barely understood industry of High Demand/Ambiguity/Complexity/Relentless Pace (Kirk). We haven’t been here before.

To find balance and prosperity, and deliver real value once more, collaboration, agility of approach and innovation are all required. We need to sense-make; we need to path-find, or forge our own new paths.

“Reacting by ‘re-acting’, or repeating our actions, merely causes problems to perpetuate. In a new landscape, a new reaction is required for change” (Kirk). This is also one of the keys to Cynefin and managing complex situations; it is virtually impossible to close the gap between the current situation and a goal when dealing with complexity, a system with only some constraints where each aspect affects all others. Instead, you must see where you can make a change, see where you can monitor that change in real time, and recognise the opportunities to amplify success and dampen failure as they arise via experimentation (Snowden). Or: instead of trying to achieve an idealistic goal impossible from your current standpoint, make changes to the system that may throw up even better goals, watch for them instead of focusing on the old goal exclusively, and then grasp them when they arise. You must start from somewhere, but the key is to start – a certain step is the first one to conquering uncertainty.
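
A toy sketch of that amplify/dampen idea (entirely my own illustration, not Cognitive Edge tooling): run several small, safe-to-fail probes in parallel, monitor the signal each gives back, and invest in what shows promise rather than steering everything towards one fixed goal.

```python
# Toy sketch of parallel safe-to-fail probes: amplify what shows promise,
# dampen what doesn't, keep observing the rest. Entirely illustrative.

import random

def run_probe(name):
    """Stand-in for a real experiment; returns an observed signal in [-1, 1]."""
    return name, random.uniform(-1, 1)

for name, signal in (run_probe(p) for p in ("probe-A", "probe-B", "probe-C")):
    if signal > 0.3:
        print(f"{name}: amplify (invest more, watch for better goals emerging)")
    elif signal < -0.3:
        print(f"{name}: dampen (wind it down cheaply)")
    else:
        print(f"{name}: keep observing")
```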

“Organisations and people ALL matter, because they drive, innovate and ARE value; we matter because everyone else matters” (Kirk), and industry becomes, not forced into trying to be a destined-to-fail machine system, but a safe-to-fail ecosystem – holistic and interconnected, not only able to adapt to change, but driven by it.

 

The problems we still face

 

The issue in many organisations, and with many managers, is that it is very easy to believe correlation = causation, and that simple universally-applicable recipes give idealistic outcomes. This leads to problems, and is a driver of the industry “waves” of best practice management fads that don’t work long-term but propagate because they are new, and short or medium term results may have been seen by some other organisations.

What works to fix or improve one organisation will not necessarily (in fact is very unlikely to) work perfectly for other organisations, nor survive simplification and/or generalised application. This is a core concept still in use that conforms to the Process Engineering ideology. You cannot take something from a complex situation and reduce it to a repeatable generic recipe that works perfectly; it just… won’t. No two organisations are alike. Every instance should be approached, investigated, and worked on individually and holistically to see if it should be managed as ordered or un-ordered (complex or chaotic). There is benefit in seeing what other organisations did to resolve similar problems, as long as it is understood that the approach and fit must be modified: the incorporation of aspects, rather than the dogmatic following of a whole.

Furthermore, the more people find approaches to be effective, the more they seek to codify the concepts – which is fine to a point, but can easily lead to them then structuring the approaches, modularising them, and then seeking to force them back into the ordered ontology (the Cynefin domains of Obviousness or Complication) as a simple, universally repeatable recipe, when many are ultimately agile and flexible tools to manage un-ordered systems (Complexity or Chaos). This is something that appears to be happening to the concept of Agile at the moment; it is becoming less agile itself as it is taken in by large organisations and constrained!

At the same time, there are constant clashes intra-organisation. Organisations want to both be fully ordered with infinitely repeatable output, but also flexible and innovative. The first of these is causal (repeatable cause and effect), and the second is dispositional (you can say where you are and where you may end up, even simulate, but not causally repeat or predict). They are very different in nature. By their very nature and composition, an organisation cannot be a simple ordered system, and this is where the work within Cynefin by Dave Snowden into Social Complexity/Anthro-Complexity begins to make sense of these systems and the management of complexity and chaos.

There is also the requirement for a deeper comprehension of the fuzzy liminality of whether or not you should make a change, which differs in each situation; a risk/benefit exercise where we weigh up the benefits – deep and long term as well as short term – of making a change, where the longer view is often ignored in favour of short-term profitability. Where the dangers of making a change are not defined or understood, or the change is clearly not beneficial, it is wise to consider carefully whether you should do so – and if so, what the correct manner of doing so is.

 

From Hierarchy to Ecology

 

One of the fundamental movements that resolves some of these issues I think will be a shift from Hierarchies, where organisations are ranked internally relative to status and authority with a focus on control (power), to Ecologies, where organisations recognise the relationships of every person to each other and to the organisation, with a focus on delivery (value).

This would then acknowledge change, and being driven by change, and that organisations are largely complex and cannot be distilled into simple recipes repeatable for idealistic outcomes. The market, the industries, the universe itself inflicts change, as do the people within; order is impossible to maintain rigidly, so adaptivity and recognising how to manage un-ordered systems are required.

Before this can happen, organisations (and the management thereof!) need to understand how much efficiency and value delivery they will gain from the also-fundamental shifts in their traditional beliefs: it is understandable that organisations wish to impose order and tighten control to make sense, but Dave Snowden warns against the effects of “over-constraining a system that is not naturally constrainable” – you are asking for more inefficiency and problems, not less.

 

And how exactly does this all fit in with Teaching and Learning?

 

Many of the concepts are relatively new and evolving, and touch on Agile, Lean, Cynefin, and other concepts and frameworks all at once. Teaching these concepts correctly and helping organisations and individuals understand how to learn them effectively (applicably understand them), while steering away from the temptation to use easy one-size-fits-all fads, is therefore key, and the next step in our progress. Understanding of them is blossoming, and now it must be effectively conveyed, used, and put into practice! None of this is any use if it cannot be effectively taught and learned. At the same time, this all fits very neatly into the overall concepts of learning and teaching, which are not by nature ordered and simple.

Equally important is learning when to change. It should not be forced for the sake of change, or without clarity or understanding. Not all change is necessary; it’s knowing when it is and where to start that is crucial, or you could lose opportunities you already have.

Perhaps one of the most important things to teach, and to learn, is this: change is a fact of life, business, and the Universe in general, and it can be feared for good reason; but that should not stop change where change is required or beneficial, nor should we strive to stop change that cannot be stopped. Instead of fearing change, we can teach ourselves to change fear into something more productive: an awareness of, and readiness to grasp, the opportunities that change will throw up.

You only learn when you are open to change, you move outside your comfort zone, and you accept failure as a lesson that builds success; that uncertainty is the point from which new understanding can grow. The more used to taking that first certain step into uncertainty you get, the less you fear the challenge, and the more you relish it. A good teacher & consultant can help place your feet on that path, and walk the first steps with you.

 

 

(Image: Rosa Parks quote on fear)

 

 

Sources:

Liz Keogh (lunivore.com)

Katherine Kirk (https://www.linkedin.com/in/ktkirk/)

Dave Snowden (cognitive-edge.com)

 

Best Practices vs Heuristics in teaching

Best Practice & Heuristics

Note: This article focuses on the basic concept of Best Practice understood by most organisations and does not cover the Cynefin models of Best, Good, Emerging, and Novel Practice in their relevant domains – more on that another time!

Best Practices exist in business for a reason. When we need to do something optimally, over time or using prediction we can determine the best methodology that has the least cost/risk for the output. In a perfect world, we want something that works first time, every time. This is the level many people in business tend to work at (or ignore!).

However, real life has an unpleasant if invigorating habit of not always providing us with a clear application of Best Practice – a wonderful opportunity to learn which we might not appreciate, say, mid-disaster. Time constraints, political expedience, resource limitations or complexity can all get soundly in the way.

Best Practice can also be a misnomer. Oddly, the approved, documented, step-driven way of doing things correctly to ensure consistent results can sometimes actually be less effective than doing things another way – as long as you are cognizant of pitfalls and expected results (usually via a solid mixture of expertise and experience). This means we sometimes have to choose between the proven, approved methodology and the effective methodology to achieve a goal within constraints. Even further, sometimes Best Practice simply cannot be applied at all to resolve an issue.

What we see here then is a disparity between Best Practice, the approved and most optimal hypothesised or sterile-tested way to achieve a goal, and Heuristics, a more practical approach to problem solving or learning that has no guarantee of being optimal (or sometimes even rational) but will adequately reach an immediate goal in the real world.

Or:

Best Practices as the supported optimal hypothetical route to achieve a long-term goal
Heuristics as a rule-of-thumb route to practically adequately achieve an immediate goal

The ability to choose the correct one to solve a problem is as important as having either. Spending time applying the incorrect one can at best waste your time, and at worst make the problem far worse.

Demonstrating the difference

Let’s look at a relatively simple example of this. Because I’m from a technical background originally, I’ll use an IT example (which may seem complicated, but it is actually quite logical):

Moving data between storage arrays

Consider a Data Protection solution whose Base (the core server) sits on Windows; System Independent Format (SIDF) data is held as primary storage on simple drive letters, typically backed by storage arrays. The caveats: this is a live system, constantly accepting and sending data as well as maintaining the data it already stores; the data can run into tens of TB; and the SIDF files for each task are exclusively locked while in use, so access is managed by a task-priority-driven queue. The strong preference is that critical backups are not interrupted if at all possible.

Let’s suppose you run three basic RAID 5 arrays, each presented as a single volume (E, F, and G), and you wish to retire E because it is older or unreliable hardware. E is marked Read-Only so that no new data can be written to it. Moving the data off it is simple:

Using this particular solution, you issue its "Move to another Location" command against E. This tells the solution that E will be retired: data is automatically redistributed to the remaining drive letters, the database indices are updated with the new locations for the data, and then E can be removed. This is simple, easy, obvious, and conforms readily to Best Practice.

So far, so good – but what if you wish to replace E with a new volume (H being the obvious choice)?

By marking all volumes except H Read-Only, the data has only one location it can go to (unless you have a really crummy solution, you are unlikely to be able to mark the last writable location Read-Only as well!). This is still quite simple and logical, and conforms readily to a Best Practice: follow these steps and you get the optimal outcome every time.

So now let’s look at what happens when the project increases in scope and complexity. Assume, for example, that the three volumes sit on a single RAID 5 array, and you wish to replace the whole array for future-proofing, performance, or storage-space reasons in a like-for-like scenario (to maintain some simplicity):

Using the method above, you add three new volumes (H, I, and J – each of which must of course be at least as large as its predecessor, and will typically be much larger). Here is where things cease being obvious and become complicated, although there is still a Best Practice. If you mark the originals Read-Only and use the "Move to another Location" command, the data will be moved by Best Practice, optimally, with updated indices in the database and clear logs. No new data will collect on E, F, or G, and once the operation is complete they can be removed from the solution as storage locations. This is the approved, proven, supported, and repeatable methodology.
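
To make the mechanics of this Best Practice route concrete, here is a minimal sketch in Python. It is illustrative only: Volume, Catalogue, and move_to_another_location are my own stand-ins for the solution's "Move to another Location" command and its database indices, not the vendor's actual API.

```python
# Illustrative sketch only: these names are stand-ins, not the vendor's API.
from dataclasses import dataclass, field

@dataclass
class Volume:
    letter: str
    read_only: bool = False
    tasks: list = field(default_factory=list)      # SIDF task files held here

@dataclass
class Catalogue:
    """Stands in for the solution's database indices (task -> volume letter)."""
    locations: dict = field(default_factory=dict)

def move_to_another_location(sources, targets, catalogue):
    """Mark the sources Read-Only, then drain each task to a writable target,
    updating the catalogue as each exclusive lock is released."""
    for src in sources:
        src.read_only = True                       # no new data lands here
    for src in sources:
        while src.tasks:
            task = src.tasks.pop(0)                # honours the queue (simplified)
            dest = min(targets, key=lambda v: len(v.tasks))   # naive balancing
            dest.tasks.append(task)
            catalogue.locations[task] = dest.letter           # index updated per task
    # The sources are now empty and can be retired from the solution.

if __name__ == "__main__":
    e, f, g = Volume("E", tasks=["t1", "t2"]), Volume("F"), Volume("G")
    h, i, j = Volume("H"), Volume("I"), Volume("J")
    cat = Catalogue()
    move_to_another_location([e, f, g], [h, i, j], cat)
    print(cat.locations)    # every task now indexed against H, I, or J
```

In the single-volume case, the sources would be just [E] and the targets [F, G] (or [H]); in the array replacement, the sources are [E, F, G] and the targets [H, I, J].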

However!

I personally would not use this method, for a number of reasons, the main one being "the real world", which introduces complexity, illogic, and external factors a-gogo. An alternative takes into account time constraints, monitoring and man-hours, Solution activity, simplicity, and effectiveness, as well as unknown factors I cannot predict that could impact the process. This is the Heuristic approach: defined practically on previous occasions, arrived at by testing in extremis, and a work-around that uses different methods to achieve a more immediate goal:

In this example, I take all Base services offline immediately after backups have ceased, so the Base is not running at all. I then copy through Windows, directly volume to volume: E to H, F to I, G to J. On arrays, which run disks in parity, you can run multiple operations concurrently, so these copies are all done simultaneously. I then place a marker in each new volume (typically a Notepad document) saying "I used to be E!", "I used to be F!", "I used to be G!"

Then I unplug the original arrays (leaving the original data intact!), reassign the new volumes the original drive letters, and restart the services (i.e. turn it all back on). And the Base says, yawn… where is my data on E, F – oh, there it is. Same indices, same locations… completely different hardware.
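
For those who like to see the shape of such a work-around, here is a hedged sketch of the heuristic route in Python. It is not the vendor's procedure: the service name, the volume mapping, and the copy mechanism are placeholders under my own assumptions, and in practice you would use whatever copy tool and service controls your environment provides, and verify the result afterwards.

```python
# Hedged sketch of the heuristic route; names and mappings are placeholders.
import shutil
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

BASE_SERVICES = ["ExampleBaseService"]            # hypothetical service name(s)
VOLUME_MAP = {"E:/": "H:/", "F:/": "I:/", "G:/": "J:/"}

def stop_services():
    for svc in BASE_SERVICES:
        subprocess.run(["net", "stop", svc], check=True)    # Base fully offline

def start_services():
    for svc in BASE_SERVICES:
        subprocess.run(["net", "start", svc], check=True)

def copy_volume(src, dst):
    # Straight Windows-level copy, volume to volume; parity arrays tolerate
    # several of these running at once.
    shutil.copytree(src, dst, dirs_exist_ok=True)
    # Leave a marker so anyone inspecting the new volume knows its origin.
    Path(dst, f"I_used_to_be_{src[0]}.txt").write_text(f"I used to be {src[0]}!")

if __name__ == "__main__":
    stop_services()                                # immediately after backups cease
    with ThreadPoolExecutor() as pool:             # the three copies run concurrently
        list(pool.map(lambda pair: copy_volume(*pair), VOLUME_MAP.items()))
    # Manual steps follow: detach the original arrays (data intact) and
    # reassign the new volumes the original drive letters, then:
    start_services()                               # the Base finds its data where it expects

```

The point is not the specific commands but the shape: stop everything, copy in parallel, relabel, restart – and check it worked.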

Let’s simplify the concept of what I have done here re the arrays:

[Diagram: https://www.involvemetraining.com/wp/wp-content/uploads/2018/08/image011.gif]

(Disclaimer: I cannot guarantee there isn’t a stone ball waiting. That’s the problem with the real world. There could be.)

I find a lot of value in using real world examples to underpin my reasoning here:

A client had four Base machines, each holding circa 12TB, and wished to upgrade the storage on each one from a 12TB array to a much newer, larger 48TB array. He asked my advice on moving this data. I ran him through the "Chris-approved" Heuristic method and the reasoning behind it. He then got a second opinion from Support, who dictated the Best Practice method, and he followed their advice. A copy of 12TB between local disks should be achievable within 8–12 hours (roughly 280–420 MB/s sustained), well within a backup window, and the Base would never know what had happened. Instead, because tasks of much higher priority were constantly running and interrupting the move (Replication, Backup, Restore, Optimisation, Expiration, etc.), it took him almost four weeks! …he told me not to say "I told you so".

What lies behind Best Practice and Heuristics usage

So – let’s move out from the “technical” aspect above and look a level deeper at the concepts behind the problems faced.

Best Practice here is the company line, using the tool built for the job; the end result is the same, but the difference in efficiency and process is significant. Whether it is better to transplant the data wholesale or to create entirely new indices for the same data is debatable, given that the achievement – the replacement of the hardware – is identical. Which one was more effective in the real world is readily apparent.

Most training I have been on will teach only the first concepts, if you even get those; there is simply too much information in too short a space of time. Training courses are not always particularly efficient, and are subject to the same choices between Best Practice and Heuristics as the subjects they cover. But in fact you can break it down even more simply than this. It’s generally accepted that you typically encounter three main types of problem (certainly in training), at varying levels:

  • Simple problems (You need to move the data off a volume. You click move in a clearly explained wizard. It moves.)

Obvious causality; known parameters for problem and resolution; correct answers exist and will be achieved through logic; resolution can be achieved by anyone.

You know what it does, and how to achieve it.

  • Complicated problems (You need to move the data. You can spread it to multiple locations or send it to one, but this requires decisions, knowledge, and scoping. You consider, then configure, the parameters and click move. It moves.)

Causality isn’t immediately obvious; parameters may be known but not completely; there may be multiple correct answers; expertise is required to resolve them.

You know what it should do, and how to work to resolve it if it doesn’t.

  • Complex problems (You need to move the data. You cannot complete this via the wizard in the time allotted, external factors may or will interfere, you are not aware of all factors and cannot anticipate everything. Expertise doesn’t resolve the fundamental issue. You have to experiment to find another method. You test. You find the best possible path given constraints and work around the base issues. You shut it all down, copy the volumes, turn it all back on. It’s moved. You check it worked!)

Causality is unknown; parameters are unknown; there are no absolute “correct” answers; logic doesn’t resolve it; expertise alone is ineffective; innovation and lateral thinking are required.

You know what it should do but not why it doesn’t or how to resolve it. You must test different methods to find an immediate resolution.

At this point we start moving from troubleshooting closer to the realms of Cynefin, Dave Snowden’s framework for decision making (which has a few other areas a little less relevant to the core of this article). There is a wealth of incredible information on this from Liz Keogh, a very talented Agile consultant and keynote speaker who speaks and teaches on the subject globally; I strongly recommend looking at her blog.

With the above problems, however, it becomes obvious that Best Practice can be applied to the first, and at least Good Practice to the second, but neither to the last; Heuristics are required to resolve complexity, because Best Practice simply cannot exist there. It is further worth noting that an issue is not always confined to one of these problem areas alone!
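
To make that mapping concrete, here is a toy sketch in Python. The fields and the chooser are my own simplification of the three problem types above, for illustration only – not part of Cynefin or any formal decision model.

```python
# Toy chooser, purely illustrative: my own simplification, not a formal model.
from dataclasses import dataclass

@dataclass
class Problem:
    causality_known: bool        # can you see cause and effect up front?
    parameters_known: bool       # are the constraints and inputs fully known?
    expertise_sufficient: bool   # will analysis and expertise alone resolve it?

def choose_approach(p: Problem) -> str:
    if p.causality_known and p.parameters_known:
        return "Best Practice: follow the documented steps"
    if p.expertise_sufficient:
        return "Good Practice: analyse, then apply expert judgement"
    return "Heuristics: experiment, test, and work around the constraints"

# The three example problems from the list above:
print(choose_approach(Problem(True, True, True)))     # simple: move via the wizard
print(choose_approach(Problem(False, False, True)))   # complicated: needs scoping
print(choose_approach(Problem(False, False, False)))  # complex: the array replacement
```

The real value, of course, lies in recognising which kind of problem you are holding before you reach for an approach.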

So how does a teacher best approach this with a class?

Applying this to teaching is an interesting conundrum, then, as teaching is by nature simple, complicated, and complex all at once. Cause and effect exists for most subjects, along with basic troubleshooting. But you are also teaching students to diagnose, and to start down the path to expertise. The subject and the systems can usually be predicted; yet you have a class full of individual people, and you cannot predict their actions or responses. Effectively balancing atmosphere, skill level, collaborative potential, action, understanding, interest – and whatever wonderful technical issues they may throw up to learn from! – requires a flexible and innovative approach to each class. It may be best to consider each training course a unique problem with several concurrent paths to resolution, balancing logical process with flexibility to deliver the optimal mix of learning.

Classes for me are an incredible mixture of these concepts; teaching people how to teach is a very different prospect from simple subject-knowledge transfer.

I have found over many years that, by and large, a class able to understand and apply Best Practice is also capable of deciding when and if to apply Best Practice – or Heuristics – but won’t necessarily do so. It is very easy to fall into the rut of teaching a class by rote, and of subconsciously teaching them to follow rote themselves. Humans, adaptable as we are, prefer ease and comfort, and will often default to them even when it is detrimental. This is the darker side of Best Practice: following a set of steps without thought or reactivity, trying something again and again because it should work, because anything else is effort. Heuristics are effort. Let’s apply a principle in the spirit of Occam’s Razor to this, then:

If what you are doing doesn’t work… do something else!

Resolving problems

I often find that a problem which is merely complicated in a sterile lab environment becomes complex in the real world, simply because of unknown variables and environments. It is also why I am not in favour of unrealistic teaching environments, which may teach only the shape of the spoon (Chapter 7, Involve Me). If you teach for real-world usage and problem solving, you must make your teaching as real-world as possible, or application will be limited at best.

Best Practices can change and refine over time, and must be constantly updated, but for a simple or complicated scenario they deliver a consistent result. Heuristics cannot be relied on for everything, and may not be optimal or concise, but they can be used to resolve a problem that does not conform to a Best Practice – in other words, a complex problem. When I’m teaching people how to teach, I encourage them to:

  • Identify Best Practices
  • Identify possible issues
  • Be prepared to react Heuristically
  • Identify if the issue is human-based or system-based
  • Impart guidance on how to direct students to fix it themselves
  • Learn from doing!

 

This requires both proactive and reactive responses. A planned approach is key, but the ability to react and absorb changes is also critical, and sometimes missed. As mentioned above, it is all too easy to fall into the habit of continuing to follow set instructions, and I see this in class a lot: if an approach doesn’t work, I often see it repeated again, and then the student sits and frowns. This is, in fact, one reason I take a very flexible approach to any teaching and use little presentation or documentation beyond conceptual or reference material – these can’t be changed on the fly, and when rigidly followed they allow less lateral thinking and reactivity, whether the disruption comes from class dynamics or technical difficulties (and so forth). Where Best Practice does not fulfil all criteria, Heuristics often can.

Humans can individually be wonderfully chaotic in approach, and you cannot predict people as easily as you can systems. We are where any complexity is usually introduced (in IT there are multiple terms for this – I say “Chair-to-Keyboard interface error”, but you also have PEBCAK, PICNIC, and the wonderful Eastern European “The device in front of the monitor has a problem”. They all mean “human error”.) What this amounts to is: people break stuff – often, illogically, and sometimes gleefully. Best Practice usually works for systems, but not for humans.

Or:

Systems usually follow rules; people usually don’t.

In war it is oft-quoted that “no plan survives contact with the enemy”. If you stick only to a plan despite changes in expected enemy deployment and composition, you are likely to find the battle does not turn out as you hoped. In extremis, you throw things at the wall and see what sticks. This is where agility of mindset and lateral thinking are critical, vital assets of both teachers and students.

A project is the same; a training course is the same. Teaching students flexibility of thought and logic of application is important (and constantly improves your own). Whilst Best Practices are key in many industries, we must all be mindful of surrounding situational modifiers: if a Best Practice fails, or doesn’t fit the requirements, heuristically define what is definitely and immediately effective, use it, and qualify it logically with real-world examples of why it worked. This is my final key point for students in my classes:

Use the appropriate method at the point of decision.

Ultimately, you can only provide tools and capability to use them effectively: you can only open the door. Walking through is up to the students.

 

 

Why Business Efficiency is dependent on learning

Good old-fashioned learning: one of the simplest, yet most complex, things we undertake. We are learning machines, from the moment we awaken through into adulthood; the manner and ease may change, but our learning never ceases to be critical for progress.

Humans adapt and learn quickly at multiple levels, which is one reason we are so successful. But we also have the perhaps unique ability to choose what we wish to learn, detrimentally or not.

This is because we are also creatures of profound habit, and enjoy ease and comfort. Learning is not easy or comfortable. Thus in life – and in business – we eventually end up in ruts that defy logic and impede progress.

Learning helps us develop Best Practices and use what I think of as cognitive common sense – not merely common sense, i.e. the obvious, but something that perhaps becomes obvious when you think about it and apply lateral thinking or logic. There is a constant battle within companies, management, and workforce to balance best practice, cost, return, risk, and many other parameters.