Comfort Zones, and where to find them

The chances are you’ve heard, read or used the expression “Comfort Zones” even if it isn’t part of your day-to-day work.

I find, however, that a lot of people talk about them as yet another buzzword, a platitude to trot out – even to the point of telling people to dive headlong into crippling fear – but they don’t often think about how the concept can best be used. Like anything else, it requires context and nuance!

This will come as no surprise to anyone familiar with my work, but… much of this is all about balance.

So, let’s explore some models showing what we think they are, how they might apply, and why you might not be using your understanding of them to your best advantage for yourself… or others.

(My model below is a work-in-progress, and is subject to future change!)

What is a Comfort Zone?

The typical definition of a Comfort Zone is a behavioural state, within which an individual operates in an anxiety-neutral condition.

When we are comfortable and experiencing no anxiety or challenging stimulus, humans tend to become extremely sedentary, both physically and mentally. Although we excel at change, if we see no reason to change, we won’t. In fact, we value comfort and convenience above almost all else; we are very good at lateral thinking and finding shortcuts to simplify and ease processes, and once those systems are in place we are reluctant to change them.

Routine, pattern, familiarity, relaxation, and static repetition are all hallmarks of a Comfort Zone.

That said – and even though the rest of this article deals with movement out of this Zone – Comfort Zones are absolutely a good thing. They provide safety, recovery, mental surety, and balance, and they are much-needed, natural parts of us. We should be balancing our time between comfort and growth; comfort is a physical and mental resting place.

It is not at all true that we must always be moving outside our comfort zones. We like comfort for a very good reason!

The misinterpretation of “Comfort Zones”

There is an oft-repeated school of thought that implies that this is a Comfort Zone, and one I see represented a lot on LinkedIn and other social media:

[Image]

This is not really accurate, nor is the assumption that you will automatically grow and learn simply by moving outside the comfort zone. All this does is open up the opportunity and the motivation to do so; work and risk are still required, and there is nuance depending on context.

People tend to speak about “moving outside your Comfort Zone” as a binary action; it isn’t.

Most of the motivational posts I see work on this basis, and that’s fine; a first step is a first step. But you have to take more steps after the first one to continue a journey. It’s fantastic to be inspired to take that step for change, but how many people are then discouraged by taking a risk and not seeing themselves grow or learn quickly or obviously? Human nature then makes it less likely we will do this again in the future.

We are also conditioned to want quick results, but like getting fit, these things require time and consistency. In my role I constantly see people expecting rapid change simply from doing something they normally wouldn’t, and being disappointed when their life doesn’t radically shift.

As with most things humans get involved with, we love to over-simplify a concept that requires a little thought. Comfort Zones are complex, because the humans that form them are complex, and they are dispositional – you can guess their boundaries and what might happen, but they shift with circumstance, and you cannot predict outcomes with certainty.

Other interpretations

Comfort Zones are subjective in nature; they are intensely personal to us all. What is comfort for one is discomfort for another – outside a basic defining scope, of course (sofa vs torture rack tends to be a no-brainer!). Many of us have our own mental image of our own Comfort Zones.

Whilst there is hypothetically no “wrong” way to represent them, any representation should be clear and in line with scientific, psychological, and sociological understanding, so that people can accurately map it to their own context.

For example, I occasionally see models like this:

[Image]

It’s a great motivator and outline for a number of steps, but in terms of how we work under normal circumstances it isn’t really accurate, which can lead to confusion or mismatched expectations.

For example, I wouldn’t call the Fear Zone here Fear, but Demotivation, or perhaps Reluctance. Fear – true fear – is almost always greatly inhibitive to learning and growth. If you are panicking or in a heightened state of anxiety, you can’t learn, because your body has essentially shut everything down but fight-or-flight. Those are not things you can push “deeper” through; you have to control them, because the deeper you go, the less control and higher thought you can maintain. Anxiety and panic typically get worse the more you push, not better!

There is always an element of pushing through initial risk, fear, uncertainty, complacency, and anxiety to catalyse change, but I see this less as a zone and more as a border between zones. These are the gatekeepers we must overcome to move into a zone where we can be challenged, change, and optimally perform.

Many of these models also give an apparently clear progression, direction, and almost waterfall-style expectation of how you can progress, and that isn’t how we work, especially when you realise these Zones are tied into emotion as well as cognition. Even basic psychology gives an idea of how we work – we’ve known for centuries that thrusting someone untempered into a danger zone has a much higher attrition rate than safely teaching them over time. It’s make or break – and that may be justified in extreme circumstances, but it isn’t the best way for most of us to learn and progress!

In terms of motivation and pushing, these models are fine, but I prefer a more realistic model that reflects how humans actually make decisions and work, based on our current knowledge.

So what’s a more accurate representation?

A typical modern model of a comfort zone will usually look something like this:

[Image]

This is very basic, of course, but I find it’s accurate for most situations. There is no specific direction or set of things that may happen; it shows the progression that typically happens when you make changes to learn and grow, expanding outwards. For me here, learning and growth are so intertwined as to be synonymous.

The middle zone is labelled optimal performance instead of growth or learning, because it doesn’t only apply to those concepts. It suggests that a relatively small amount of stress motivates or catalyses us to do something with greater focus, which gives the opportunity to optimally grow and learn, but it’s not such a great leap that it shuts us down in utter panic.

As an example, you don’t learn and grow in swimming terms by throwing yourself alone into the deep end of a pool when you learn to swim; this typically only delivers terminal feedback where you drown, and if not you haven’t really learned much of use. Instead, you learn to swim in increments in shallower areas or with swimming aids, and preferably with an instructor, creating stress and risk but also psychological safety, and as you get better you push your boundaries. It’s important to clarify what constitutes “pushing” and “fear” here!

Of all things, humans fear uncertainty the most. It’s the most consistently stressful state for us to be in. But there’s a modicum of stress and uncertainty that gives us adrenaline and heightened perception, makes us ready and breaks complacency. It allows us to perform tasks we know, or learn tasks we don’t, at an optimum level of focus and control.

That’s quite different to such high stress and anxiety levels that our brains shut down and we’re operating purely on adrenaline and cortisol.

I’ve spent some time over the years looking at how learners operate and learn, as well as continually doing so myself, and I’ve also spent a lot of time looking at how humans make decisions and how our minds work and form patterns; it’s integral to a lot of what I do with agility, culture, learning, leadership training and more. These toes dipped in science, psychology, and sociology have helped me develop a more detailed model that integrates with what we know, not just how we learn.

With that in mind, there are three major things to bear in mind when you consider Comfort Zones:


Identity

Something we don’t think about much – because it’s intrinsic to our ability to do many things – is Identity.


All of us have multiple different identities, to which we link different modes of thinking and understanding. These are in turn linked to mental patterns and how and why we form them, as well as tribes we form – or are formed around us from meta-complex tribes (you can think of these as tribes-within-tribes at differing levels of complex systems, like a 3D Venn Diagram. Don’t think about it too hard for now!).

All of this makes identities quite a variable and often conflicting arena for us to navigate.

We switch between these identities, which are unique in combination to each person, quite seamlessly and without thinking about it; it’s almost as if our brains rewire on the fly to operate differently depending on circumstance. I’ve always been fascinated by how some of the greatest thinkers I know, methodical and quiet, can do another activity (watch a game of rugby, for example) and become intensely loud, tribal, and involved, as if they were a different person, and think nothing of the process. I love watching someone termed excitable, with attention deficits, find their favourite hobby (such as painting!) and spend hours quietly working on it.

None of us are two-dimensional; we have myriad faces, and this is important to remember when we consider Comfort Zones and how we deal with them.

Systems of Support

The most widespread, automatic support structures we have are tribal. Humans create tribes without thinking, both in the real world (families, communities, countries, et al) and in the abstract (music, hobbies, philosophies and more). I’ll go into tribes and their negatives more another time, but here they serve multiple positive purposes, including humanising, binding and helping people invest and be collaborative to mutual benefit, even if that’s just moral or psychological support rather than physical survival. When you integrate into a tribe, you assume an identity for that tribe.

Not all identities are tribal. We may not share them with others – they may be intensely personal and thus segregated. But many identities are tribal, because we are social creatures who share knowledge and are comfortable with a sense of belonging.


[Image]
How many different tribes and identities can you link to this? Where might they apply, and which ones do you belong to? Are any of them oppositional?

The Zone is not Alone

Given that we have multiple identities – and the tribes they may link to – it then makes sense to look at models which acknowledge that we have multiple Comfort Zones, and each has different boundaries and limits.

Think about it for a moment: do you know anyone who is quiet, shy, retiring, who is not shy and retiring at something quite specific? Or perhaps you go to do something you are very comfortable with, but the situation and environment makes it suddenly uncomfortable?

A wonderful thing about identities and tribes is that they buffer us against uncertainty, because you have a degree of support and understanding; when many people come together like this, you all support and buffer each other. You might take a risk on a night out with friends that you never would alone. The same goes at work: when you make a decision that affects a company result, you are protected to an extent by the bureaucratic structure, policies, and colleagues in a way you aren’t in your personal life, where we are far more reluctant to chance lasting consequences.

All of these tribes and identities link into – but exist outside of – your Personal Comfort Zone, which is really your inner sanctum.

This is the one you can least afford to breach, and your willingness to risk and expose yourself here will by nature be far less. It’s your last defence before the naked you, as it were, and we usually find the idea of changing who we are at our core anathema, because then we would potentially no longer be who we are. So we take fewer risks and change more reluctantly; our Danger Zone is much larger, and the Optimal Growth Zone smaller than in other models, so it’s much easier to overstep into panic, anxiety, and uncertainty.

Conversely, strong tribes that we identify with have different Comfort Zones. One more thing to consider is how we tend to collate smaller identities under larger ones – and how they sometimes affiliate to more than one tribe.

So, to expand upon the above example of Personal vs Work with a general example:

[Image]

Notice that there is a definite line of danger between your personal and professional comfort zones – although skills and actions may pass across, there are things we will do in one we absolutely wouldn’t do in the other!

Our professional model may have a larger Comfort Zone because we do a lot of mundane, safe things day in, day out, and the Optimal Performance Zone may likewise be larger, because giving something a go isn’t often as risky as in our personal lives. For example, whilst not desirable, losing a job is generally less destructive long-term than a breach of the personal Zone – consider what happens when someone’s confidence is destroyed, and the knock-on effects it has personally, professionally, and more.

The Danger Zone for our personal Zone is therefore likely to be proportionately larger than our professional one, because at work (depending on role and company!) we generally accept or hide that we make the odd mistake; the impact of a mistake in our personal lives can be much more shameful or impactful to us.

To dive into Cynefin-think for a moment: consider the boundaries between the Zones in a model as constraints, and consider how they may be more rigid, elastic, or permeable depending on which model you’re in; where there might be catastrophic failure; and how you can equate “psychological safety” to “Safe-to-Fail Probes” in Complexity and shallow Chaos.

The interesting thing here is that the different identity-linked models also feed into each other; you may take small amounts of confidence or lack of confidence from one to the other, depending on your mental framing and state, so for example proficiency over time at work or a sport can feed easily into personal life, and vice versa.

Finding New Comfort

I’m still thinking about the correct visual representation of a basic model, but imagine the Personal Comfort Zone in the centre and other identity(/tribally)-linked Comfort Zones all around it, each Zone connecting to every other, every Zone a different relative size, as if they were neurons in a network, and you start to realise how many – and how interconnected! – they can be.

Hopefully this exploration into how much more (and how many more!) Comfort Zones are than our usual daily perception has been useful, and has given some food for thought. Far from being a simple concept, Comfort Zones have many levels and contexts, and are actually very fluid and ever-changing – and at times, we all need to return to them.

Consider how it might all apply to yourself. The next time you think about “moving outside your Comfort Zone”, remember not to assume growth is automatic – we have to work at it! Think about which Comfort Zone model it might be, how far is too far, how to maintain the psychological safety for optimum learning or operation, and how, if it isn’t your personal model, it might be used to grow that, too.

You might be surprised how many you find you have.

Rise of the Machines Part I: Mind Machinery

Here’s a field that heavily integrates into a number of the areas I talk about, and I think it’s a good time to explore a few areas. Internet of Things, Internet of Us, Automation, AI: simultaneously incredibly exciting prospects with amazing potential, yet new tiresome buzzwords used to jump on the bandwagon (about 40% of European “AI Companies” don’t use AI at all).

I think it’s best to split out the fields of AI and Automation here, as, although they are linked in some cases, they affect us in different ways, so first, let’s look at the rise of the Machine Intelligence; watch out for Rise of the Machines Part II for discussions on Automation. I’ll also abbreviate a lot of the terms as per headings below.

Artificial Intelligence (AI) and Cognitive Computing (CC)

I heard about true AI in business nearly 15 years ago as the next big thing, and I think we erroneously believed it was immediately poised to drastically change the market. It then apparently went quiet; AI had yet to burst into our consciousness in the way we gleefully described it. Certainly in its nascent state, I don’t think it had the support and understanding required.

Fast forward to now, and most people know it’s been used for some time algorithmically for social media, or that a computer beat Garry Kasparov at chess – but it now goes far deeper than that. AI can be used to find new viewpoints, crunch huge amounts of data, position things to hack into group human behaviour, even manipulate our decisions.

In fact this is a huge field, and one I am learning more about all the time. You can probably split it initially into two main fields:

  • Artificial Intelligence, which looks to solve complex problems to produce a result
  • Cognitive Computing, which looks to emulate solving complex problems as a human would, to produce a process

So, going back to Garry Kasparov: Deep Blue was definitely AI, because it performed what was essentially a brute-force computation to solve a complex task better than a human can, but it wasn’t Cognitive Computing, because it wasn’t mimicking how a human would play chess at all (and it seems there are multiple different types of “mimicking”).

Which of these you want will be contextual. Do we want a self-driving car emulating human decisions? Or do we want it to give the best possible result as quickly as possible? Food for thought; the answer may be “both”.

To further confuse things, some AI can also be CC – but both of these are already becoming buzzwords, a cargo cult (“do you do AI?”), so they will also be defined by industry marketing in some cases.

The influence of AI on business structure, not just process

Back to an old favourite, the Knowledge management matrix!

[Image]

Now, we know about Taylorism, Process Engineering, and Systems Thinking (at least, if you read any of my articles you do!) – explanation here.

But when we look at how humans ACTUALLY work, and thus most companies, we see much of it lies firmly in Social Complexity.

AI/Machine Learning (ML) currently sits fairly solidly in the Mathematical Complexity quadrant, with a touch of Systems Thinking: essentially, the use of mathematical models and algorithms to find optimal output. Where we go wrong again here is that we often use this to predict, when it can only really simulate.

What true Artificial Neural Network Intelligence (ANNI) may eventually offer is an interesting cross-over potential of mathematical complexity and systems thinking – and with cognitive computing leveraging these, the possibility of a non-human processing unit that at least partially understands human social complexity. We’re already looking at branches of AI for decision-making.

This could be very interesting – and potentially harmful, as humans are (demonstrably!) easily socially manipulated; but also because as a nonhuman, an AI capable of doing this would be out of context and therefore not bound by any human constraints. Even a CC-AI would be emulating a human, not being one, and I think it’s going to be some time before that becomes fairly accurate. It’s a very dispositional field. 

To extend the speculation, it’s hard to tell whether a resulting endpoint intelligence would be alien – or whether, because of our insistence on modelling human minds, we would produce an intelligence with something like sociopathy or psychopathy. Even with that much understanding of humanity, the NN might need more to approximate a feeling of empathy, and it’s hard to say how that would play out in a nonhuman intelligence.

Speaking of Neural Networks…

Artificial Neural Network Intelligence (ANNI/ANN/NN)

This is much closer to the human brain in structure, and is a subset of AI. Neural Networks have been around since the 1940s, as scientists have long been fascinated by the human brain.


We have an approximate conventional storage of 2-3 GB in our heads, which is pretty poor – not even a decent single-layer DVD! But because we store and recall data not just by creating connections between neurons but by firing them in sequence, our actual storage is estimated at about 2 PB. That’s enormous – although it’s not immediately accessible (we just don’t work like that). It’s also something we use in myriad, intuitive, individual ways to arrive at decisions.
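The “firing in sequence” point can be illustrated with a toy calculation (illustrative numbers only, not neuroscience – real estimates vary wildly): n on/off neurons store n bits statically, but if the *order* in which they fire also carries information, the n! possible orderings contribute log2(n!) bits on top.

```python
import math

def static_bits(n):
    # n neurons treated as independent on/off switches store n bits
    return n

def sequence_bits(n):
    # if information is also encoded in the ORDER the n neurons fire,
    # there are n! possible orderings, i.e. log2(n!) extra bits
    return math.log2(math.factorial(n))

for n in (8, 16, 32):
    print(n, static_bits(n), round(sequence_bits(n), 1))
```

Sequence capacity grows far faster than static capacity, which is the intuition behind the jump from gigabytes to petabytes.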

We knew computers worked totally differently – logically, via calculation – and the race has (until recently, with CC and qubits) been about how quickly we can do it sequentially. Even with CC, we just can’t emulate the human intuitive leaps and individual distributed cognitive functions that arise from social complexity – yet, if ever.

But now we’re edging into the fringes of these areas, and ANNI are being combined with incredible hardware, software, and new understanding to incrementally produce something much closer to sentient intelligence.

Machine Learning (ML)

Although my specialty is Human Learning, when you start talking about Machine Learning and AI there are some interesting similarities, as well as some drastic differences.

Machine Learning and AI aren’t quite the same. AI pertains to the field of artificial intelligence as a whole; Machine Learning and Deep Learning are subsets of it. They rely on algorithms for pattern-hunting and inference – although I need to spend more time understanding whether the latter is often more imputation (reasonable substitution) than actual inference.

Machine Learning can also be considered a specific, logical process which doesn’t carry the human intuition aspect that AI and neural networks are seen as able to edge into with Cognitive Computing. In this arena, an amusing example of Machine Learning could be:

 Me: I’ll test this smart AI with some basic maths! What’s 2+2?

 Machine: Zero.

 Me: No, it’s 4.

 Machine: Yes, it’s 5.

 Me: No, it’s 4!

 Machine: It’s 4.

 Me: Great! What’s 2+3?

 Machine: 4.

Machine Learning tends to ask what everyone else is doing – but some AI may be able to decide what to do for itself by extrapolation. There is clearly a nuance.
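The dialogue above can be caricatured in a few lines of code: a purely pattern-matching “learner” that only memorises corrections has no model of arithmetic underneath, so for anything unseen it just parrots whatever answer was last reinforced (a deliberately silly sketch, not a real ML algorithm):

```python
class ParrotLearner:
    """Memorises corrected answers; falls back on its most recent answer."""
    def __init__(self):
        self.memory = {}      # question -> corrected answer
        self.last_answer = None

    def ask(self, question):
        # recall an exact match if we were ever corrected on it...
        if question in self.memory:
            self.last_answer = self.memory[question]
        # ...otherwise just repeat whatever worked last time
        elif self.last_answer is None:
            self.last_answer = 0
        return self.last_answer

    def correct(self, question, answer):
        self.memory[question] = answer
        self.last_answer = answer

bot = ParrotLearner()
print(bot.ask("2+2"))   # 0 -- no training yet
bot.correct("2+2", 4)
print(bot.ask("2+2"))   # 4 -- memorised
print(bot.ask("2+3"))   # 4 -- no model of addition, just recall
```

The point isn’t the code – it’s that nothing in the system “knows” what addition is; it only echoes the patterns it was fed.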

Deep Learning (DL)

This is an expansion of basic machine learning. Input is everything, and where traditional processing cannot utilise all of it, methods like deep learning can, because they progressively use multiple layers to extract more and more information. Human-based systems are easily saturated, distracted, and fallible, and traditional IT is really still an offshoot of this methodology, using automation and tools to make the task easier – augmenting human effort.

Deep learning needs as much data as it can get, however – which is how Big Data, and algorithms that can take billions of users’ data, can change how we work.
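A minimal sketch of what “multiple layers” means structurally: each layer re-represents the output of the previous one, so depth is what lets a network build progressively higher-level features. The weights below are random placeholders, not a trained model (NumPy only; the layer sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # simple nonlinearity; without it, stacked layers would
    # collapse into a single linear transformation
    return np.maximum(0, x)

def forward(x, layers):
    # each layer re-represents the output of the one before it
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# e.g. 8 raw input features -> 16 -> 8 -> 2 "high-level" features
sizes = [8, 16, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(1, 8))        # one input sample
print(forward(x, layers).shape)    # (1, 2)
```

Training (adjusting those weights against data) is where the real work lies; this only shows why more layers means more stages of extraction.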

I remember many years ago working a data protection deal with the largest radio telescope in the world at the time (I believe now a precursor to SKA SA) in South Africa. They needed to back up the data they pulled in, but I believe (and these figures are approximate, from memory) that they could only process 1% of the data that the array pulled in, and that’s what we were looking to protect and offsite.

This was around 2008. Imagine that sheer quantity of data with today’s storage and ANN/DL capabilities. Given the right patterns to look for, you don’t NEED human intervention any more. I strongly suspect that when SKA SA comes online around 2027, it will be using Deep Learning and AI, if not ANN, to parse, categorise, and archive the bulk of that data for searching.

Deep learning and AI have the capability to be game-changers for how we analyse the world and advance; if we combine that with quantum computing capabilities, we’re starting to work out where the genesis of the godlike Minds of Iain M. Banks’ novels could come from, if we’re lucky.

Or Skynet’s People Processing Services if we’re not. Personally, I would prefer the former!

Big Data

You can’t talk about AI without the latest buzzword – and required input for AI. The term “big data” refers to data so large, fast, or complex that it’s difficult or impossible to process using traditional methods. Accessing and storing large amounts of information for analytics has been around a long time, but once a certain level of complexity and data saturation is reached, false positives become an issue. (Another danger here is Big Biased Data; having as much data as possible may help reduce this.)
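Why false positives become an issue at scale is simple base-rate arithmetic; a back-of-the-envelope sketch with invented figures makes the point:

```python
def false_positive_share(population, true_cases, sensitivity, false_positive_rate):
    """Fraction of flagged records that are actually false alarms."""
    true_hits = true_cases * sensitivity
    false_hits = (population - true_cases) * false_positive_rate
    return false_hits / (true_hits + false_hits)

# a 99%-accurate detector hunting a 1-in-10,000 pattern
# across a billion records:
share = false_positive_share(1_000_000_000, 100_000, 0.99, 0.01)
print(f"{share:.1%} of all flags are false positives")
```

Even a 99%-accurate detector hunting a rare pattern produces a flood of flags that are overwhelmingly false – which is exactly where naive big-data analytics goes wrong.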

Where AI variants shine is that they need this data to make decisions or produce better results, so now this is on many lips too.


The supporting structures

AI requires connectivity, integration, and a wide variety of data; the ability to comprehend rather than merely operate on statistics, failure incidences, and calculation; automation; IoT; and potentially IoU.

These are much more prevalent today, and more importantly the human supporting structure is there; AI/CC is already now proven for certain deliverables.

Without these, AI alone isn’t able to affect much; it requires input and constraints to act against. Indeed, I believe this is why the initial fanfare around AI was premature some years ago. It wasn’t in any way ready (AI has only just beaten professional human players at StarCraft II, which is totally different to calculating chess moves in advance and requires nuance), and, more importantly, the surrounding structures weren’t ready either – structures I have long considered integral to its success.

Internet of Things (IoT)

One of the things any organism – artificial or otherwise – requires to learn and grow is feedback to stimulus; information. And in terms of AI, this is likely to amount to as much connectivity and data as possible to carry out its tasks.

IoT is therefore interesting because it offers the opportunity to both optimise our lives, and learn frightening amounts of data about us. This is already being massively misused by humans – an example being Facebook, or Amazon, using AI and algorithms. Could this be worse with a full AI entity? That depends on what its purpose is, and whether humans have access to all the results (initially at least the answer is probably “yes”).

What I find fascinating is that there is the potential here to have IoT act somewhat analogous to a Peripheral Nervous System to an ANN’s CNS (Neural network and immediate supporting structures). Facebook does this in a rudimentary way with mobile devices; Siri, Cortana, and Google AI exist; Amazon also uses Alexa and analogues.

Special mention: Internet of Us (IoU)

And now we come to something really interesting. What happens when humans integrate into this? And I mean really integrate?

Jowan Osterlund has done some fascinating, groundbreaking work which I’ve referred to a number of times regarding the conscious biochipping of humans with full control over the composition, sharing, and functionality of the data involved.

This has amazing potential, including ID and medical emergency information, and giving full control to the owner means it can be highly personalised. And therein may lie a weakness for us as well as a strength, as far as AI is concerned.

There’s currently no way to track an inert chip like that via GPS or our contemporary navigation systems; however, AI integration could potentially chart the almost real-time progress of someone through payment systems, IoT integration for building security, even medical checkups where human agencies couldn’t and wouldn’t.

On the other hand, the potential for human and AI collaboration here is immense. Imagine going into Tesla for an afternoon with one of Jowan’s chips implanted in your hand, and coming out with it programmed to respond to the car as you approached (assuming the fob required no power source – which I believe it currently does). Your car would unlock because it’s you.

That’s fantastic, but also open to vast and dangerous misuse by humans, let alone AI. Cyborgs already exist; they just aren’t quite at the Neuromancer stage yet, and neither are the AIs (or the “Black Ice” firewalls – Gibson is recommended reading!).


Stories Vs Reality

I think there is definite value in reading Sci-Fi and looking at how people imagine AI, because we’ve already seen life imitating art as well as art imitating life – and there are so many narratives of AI, from the highly beneficial to the apocalyptic, that there is something of warning or hope across the board. This can help us take a balanced approach, perhaps – but it needs to be tempered by reality.

Our stories of AI gone awry – reflecting a deep-rooted fear of the usurpation of humanity – usually end in the AI’s violent destruction at our hands, and these and the other myths we surround ourselves with might not reflect well upon us should a learning ANNI come across them unprepared. We simply don’t know how, or even if, any of this data would be taken in.

The Dangers of AI

AI as a tool has a number of worrying possibilities. It is developing so fast that the danger we will ourselves not adapt in time is real; additionally, we need to balance job losses with new roles around the new tech, which is exponentially faster and more disruptive than the physical and hybrid processes that came before. If we have massive numbers of people losing jobs and don’t find a solution, this is a cause for real concern.

Of course a tool can be used for good as well; but AI is a dynamic tool that can potentially learn to think and change. Some very smart people, including the late Professor Stephen Hawking, have been concerned about the dangers. There are some great examples here, as well as a few worrying instances recently:

Tay was a bot targeted at people aged 15-24, designed to better understand their methods of communication and learn to respond. It initially held the language patterns of a 19-year-old American girl. Tay was deliberately subverted; it lasted only 16 hours before removal.

Some users on Twitter began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet… as a result, [Tay] began releasing racist and sexually-charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay’s misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior.

Within 16 hours of its release, and after Tay had tweeted more than 96,000 times, Microsoft suspended the Twitter account for adjustments, saying that it suffered from a “coordinated attack by a subset of people” that “exploited a vulnerability in Tay.”

(Source: Wikipedia)
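Tay’s failure mode, learning from user input with no filtering at all, can be sketched in a few lines. Everything below (the `ParrotBot` class, the seed phrases, the attack loop) is a hypothetical illustration of the attack pattern, not Microsoft’s actual architecture:

```python
import random

class ParrotBot:
    """A toy bot that 'learns' by storing every phrase users send it
    and replying with a random stored phrase."""

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def hear(self, phrase):
        # Learns everything it hears, good or bad - this lack of
        # filtering is the vulnerability such attacks exploit.
        self.phrases.append(phrase)

    def reply(self):
        return random.choice(self.phrases)

bot = ParrotBot(["hello!", "nice to meet you"])

# A coordinated group floods the bot with a single hostile message...
for _ in range(1000):
    bot.hear("something offensive")

# ...and the bot's replies are now overwhelmingly likely to parrot it back,
# since 1000 of its 1002 stored phrases are the flooded one.
print(bot.reply())
```

The fix is not a smarter learning rule but a gate on what gets learned; Microsoft’s post-mortem framing (“exploited a vulnerability in Tay”) points at exactly that missing gate.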

Another AI was designed to be deliberately psychopathic. Norman, MIT’s psychopathic AI, was built to highlight that it isn’t necessarily the algorithms at fault, but the bias of the data fed to them.

The same method can see very different things in an image, even sick things, if trained on [a negatively biased] data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.

Norman is a controlled study to highlight these dangers, and in fact there is a survey available to help Norman “learn” to fix itself. But imagine if the code were leaked, or elements of Norman were somehow used by other AI to learn. It would be like a cerebral virus: once it escapes the lab, it’s very hard to contain, so let’s hope it doesn’t (I’m not going to speculate on the results of any MIT robots being subjected to this!).
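The “same algorithm, different data” point can be made concrete with a toy classifier. The corpora, labels, and `train`/`classify` helpers below are entirely hypothetical illustration, nothing from MIT’s actual Norman work; the only thing that changes between the two models is the training data:

```python
from collections import Counter

def train(examples):
    """Build per-word label counts from (text, label) training pairs."""
    model = {}
    for text, label in examples:
        for word in text.lower().split():
            model.setdefault(word, Counter())[label] += 1
    return model

def classify(model, text):
    """Label text by summing the label votes of its known words."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# Balanced corpus: "robot" appears in both positive and negative contexts.
balanced = train([
    ("the robot helped", "positive"),
    ("the robot failed", "negative"),
])

# Biased corpus: "robot" only ever appears in negative contexts.
biased = train([
    ("the robot failed", "negative"),
    ("the robot broke", "negative"),
    ("the garden bloomed", "positive"),
])

print(classify(biased, "robot"))  # "negative" - the word isn't bad; the data was
```

The same `classify` function, with the same logic, condemns “robot” under the biased corpus; scale the toy up and you get Norman seeing “sick things” in an inkblot.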

A third example, and the most damaging in my opinion, is Facebook’s algorithms twinned with their data harvesting and manipulation practices. Fed by the deeply troubling, human-led Cambridge Analytica data, for example, this not only expands on the above issues but adds wholesale manipulation and misinformation of large portions of populations around the globe. The intent may be to make money for Facebook (and they still show few scruples where this is concerned, especially politically and with a bent towards specific leanings, which continues to alarm many), but the reality is that these algorithms display a great understanding of how to make humans act en masse. This is now changing politics. Humans are wired to latch onto fragmented narratives because of how our pattern-seeking minds evolved; we should have had checks in place on social media before these measures were implemented. We don’t, and that’s likely deliberate where corporate profit is concerned. It’s alarming enough when humans direct this; what if an AI changes to the point where it’s taking this decision itself?

AI also ties directly into digital warfare and the removal of the human element from life-and-death decisions: terrorism, drones, vital infrastructure disruption, and more, all currently directed by humans, where the errors could be as damaging as the aims. If we enable this, and AI decides to act beyond our directives, the problem doubles.

Bear in mind that in many instances of machines not working as expected, especially in computing, the root cause is almost always human error: mistake, misunderstanding, or lack of foresight. We are, and will remain, complicit in mistakes made by and about AI as we move forward, so we must watch our step. We can’t actually predict all of this, only simulate it, because an AI in the wild would be totally alien to us; it would have no recognisable humanity. And therein lies a danger, or not: it could see us as a threat, be utterly indifferent to us, or fail to understand us at all. Add to this that many people find it amusing to deliberately warp the process without care or thought for consequences, and there is genuine cause for concern; such interference will skew things further.

However, even with all of this, these have still been directed or influenced directly by humans. Extrapolating further, it’s hard to project what AI-derived AI would be like. A lot of this depends on how it’s approached by us. It’s possible that sentient AI could be, in human terms, schizophrenic, isolated, sociopathic, psychotic, or any combination of these. It’s equally possible that these terms simply don’t apply to what stories love to describe as “cold, machine intelligence”. Or perhaps we’ll go full Futurama or Star Trek and install “emotion chips” to emulate full empathy. It’s hard to say, but I think it goes without saying that we need to step sensibly and cautiously, and not simply focus on profit and convenience.

My own concern isn’t so much what an innately innocent self-determining AI would do; it’s what an AI would do at the behest of the creatures that created it – and who often crave power without caring about others of their species. Instill those attributes into an AI, and we have some of the worst elements of humanity, along with an alien lack of compassion.

It’s a fascinating field of study and projection with a deep level of complexity, and we know only one thing for sure: whatever we do now will have unforeseen and unintended consequences. This is where Cynefin is really important; we need to make our AI development safe-to-fail, not attempt “failsafes”.

(I’ll be writing an article on safe-to-fail vs failsafes another time).


Looking ahead…

Much of this is in the future; in terms of human replacement, AI currently lags behind automation, which has a good head start. For now, AI is set to augment or analyse human thinking, not completely replace it.


Even in some of the better future stories I’ve read, such as Tad Williams’s Otherland series, the AI capability still requires gifted human integration to be truly potent, and we’re probably going to be at that level for some time (albeit in a less spectacular fashion).

So there is some interesting exploration of AI and linked fields here. Some of this no doubt sounds far-fetched, and I have admittedly read my fair share of hard and soft sci-fi as well as real-life research and study; but the truth is we simply don’t know where we will end up, and can only simulate at best. We must tread these emergent pathways cautiously.

“The real worry isn’t an AI that passes the Turing Test – it’s an AI that self-decides to deliberately fail it!”

I hope this has been a useful exploration of the disruption of AI and its impact on the market – keep your eyes out for Rise of the Machines Part II, where we also delve into Automation.