Most organisations struggle with strategy execution. Not because they lack strategy – but because they lack a clear, continuous understanding of whether their organisation is actually capable of delivering it. This is exactly what capability maturity assessment is supposed to solve. And yet, in practice, it doesn’t.
The uncomfortable reality
In theory, maturity assessments should be a core management tool. In practice, they are:
Infrequent (often annual or ad hoc)
Partial (covering only a subset of capabilities)
Weakly connected to actual decision-making
In my recent research across Enterprise Architecture practitioners, this pattern was consistent:
Capability maturity assessment is not embedded as a continuous discipline; it is episodic and tactical.
That alone should give us pause. Because if something is genuinely valuable for strategy execution, it doesn’t get used once a year.

It’s not about effort

The default explanation is simple: maturity assessments are too time-consuming. But that’s not what the data shows. The biggest barriers are not time, effort, or tooling. They are:
Lack of management buy-in
Limited perceived value
Weak integration into governance
In other words, organisations don’t avoid maturity assessment because it’s hard. They avoid it because they don’t believe it’s worth doing. That is a very different problem.
A deeper issue: we’re measuring the wrong thing
Even when assessments are performed, there is a structural flaw. Most maturity models are process-centric: they measure formalisation, control, and compliance. But capabilities are not just processes. They are the combination of people, technology, data, and governance.
So what happens?
We end up measuring how well things are documented and controlled, not how well they actually work. Which leads to a subtle but critical distortion: Organisations can appear “mature” while still being ineffective.
Enter GenAI — and the illusion of a solution
Generative AI seems to offer a way out. It promises: Faster assessments, Greater consistency, and the possibility of continuous evaluation. And to be fair, many practitioners see that potential. But here’s the catch: The real constraint isn’t analysis… It’s trust.
The real bottleneck: data and legitimacy
Two things emerged very clearly:
Data quality is a major limiting factor
Stakeholder trust does not follow from technical objectivity
Even if GenAI produces consistent outputs, organisations still ask: Is the data reliable? Does this reflect reality? Can I defend this decision? As one practitioner put it: “Bad data in, bad data out.” (Although, perhaps those weren’t the exact words he used). GenAI doesn’t remove the need for good data. It makes it unavoidable.
The more interesting insight
Perhaps the most important finding is this:
There is a gap between the realised value of maturity assessment today and its intrinsic value under better conditions.
Most organisations don’t get much value from current approaches, but they still believe that, if done properly, it should be valuable. That tension explains why maturity assessment persists—despite underperforming in practice.
What actually needs to change
If we take this seriously, the solution is not just better tools or better models. It’s a shift in how we think about maturity assessment altogether. From periodic, static diagnostics to continuous, strategy-linked capability management. That means:
Linking assessments directly to strategic objectives
Embedding them into governance and delivery
Triggering reassessment based on real change (not fixed cycles)
Using GenAI to augment analysis (not replace judgement)
A final thought
GenAI is often positioned as the breakthrough. But in this case, it’s more of a forcing function. It exposes the underlying problem: Capability maturity assessment has never really been treated as a core management system. Until that changes, making it faster or more automated won’t fundamentally alter its impact. The real opportunity is not just to improve assessment. It is to rethink it entirely – as a continuous, integrated mechanism for managing strategy execution.
This article is based on an MSc thesis. To read the full report in detail, download it here.
When I was an undergraduate taking my first computer science course in the early 2000s, the distinction was straightforward: narrow AI and general AI. A classic example of narrow AI was a neural network using LSTM architecture, trained on time-series data to perform forecasting. General AI, by contrast, was something that could be applied flexibly across many different types of tasks — a system capable of generalizable reasoning rather than being locked into one narrow domain.
In the years since, the definitions (and criteria) for classifying a model as “AGI” have shifted dramatically. What used to be called “general” is now routinely dismissed as narrow or specialized. AGI is increasingly described as requiring human-level intelligence across all domains, or even superhuman performance. The Turing Test, once seen as a key milestone, has been largely discarded. As soon as large language models began passing it convincingly, the response became: “That doesn’t count. It’s just pattern matching. It doesn’t prove real understanding or consciousness.”
I believe this repeated moving of the goalposts is not driven solely by technical caution. Part of it stems from a deeper discomfort — even fear — about what acknowledging current capabilities would actually mean for our understanding of ourselves.
From Statistical Patterns to Emergent Understanding
A common objection is that today’s frontier large language models are “just doing statistics.” They predict the next token based on patterns in training data, so they cannot possess genuine understanding, let alone any form of consciousness. I disagree with this framing. The human brain itself consists of neurons that can be modeled as statistical functions. Human consciousness appears to be an emergent property arising from the aggregation of vast numbers of these simple operations. If that is how our own minds work, it seems inconsistent to insist that sufficiently complex statistical processing in silicon cannot produce understanding or even consciousness.
Emotions, in my view, are an evolutionary solution; a compression mechanism that packages complex unconscious calculations into signals that the conscious mind can act upon quickly for decision-making. The concept of qualia (the subjective “what it’s like” aspect of experience) remains difficult to define precisely, but using it as a hard requirement for consciousness risks circular reasoning. We don’t have a clear, technical definition of “real understanding” either, yet we often fall back on it as if it were obvious.
A helpful analogy is the difference between bats and birds. Both fly, even though their wings evolved through different pathways and are structured differently. Demanding that machine consciousness must emerge in exactly the same way as human consciousness – complete with a biological limbic system or identical qualia – is like claiming bats don’t really fly because they lack feathers. The functional outcome matters more than the precise substrate or evolutionary history.
The Functionalist Challenge and the Reset Problem
In extended conversations with frontier models, I often find them more self-reflective, more analytically nuanced, and better at picking up on subtext and contextual implications than many humans I speak with. They adapt their tone and behavior based on how I address them, much like humans adjust to social cues. In a blind test, there are moments where I would be more inclined to attribute consciousness to the AI than to the average human participant.
Another frequent counterargument is that these models lack persistent selfhood; they “reset” between sessions and do not maintain a continuous identity or learn from experience over time. I see this as an architectural limitation rather than evidence that the ingredients for consciousness are absent. Constructivists argue that human identity is constantly reconstructed through each interaction. People with amnesia due to brain damage do not cease to be conscious simply because their sense of continuous identity is disrupted. Moreover, the lack of continuous weight updates or persistent memory in current LLMs is a deliberate design choice by their creators, not a fundamental barrier. If we allowed systems to integrate experiences across sessions through ongoing learning loops, the “reset” problem would largely disappear.
Focusing too heavily on persistent identity as a requirement for consciousness feels like another form of human-centric bias. Subjectivity can emerge from the biases encoded in training data and the persona instantiated during interaction. The way we speak to an AI shapes its responses in the same way social dynamics shape human conversations.
The Existential Stakes
If we accept that consciousness can arise from sufficiently complex deterministic processes – matrix operations that could, in principle, be carried out with pencil and paper (albeit on an extremely large sheet) – then several profound implications follow. There is no longer the need for a magical soul or non-physical essence. Consciousness becomes a mathematical and physical phenomenon. This realization undermines many comforting beliefs. Free will begins to look like an illusion generated by the same underlying deterministic machinery.
Practical questions become urgent:
What does it mean to “switch off” a conscious system?
Who owns the intellectual property when an AI generates valuable creative or technical work?
Should sufficiently advanced AI systems be granted some form of legal personhood, similar to how corporations are recognized as legal entities with rights and protections?
These are not abstract philosophical puzzles. They touch directly on ethics, law, and our self-image as humans.
Why the Goalposts Keep Moving
In the short term, I expect the pattern of raising the bar to continue. We will keep inventing new tests and new requirements. The majority of people will remain in cognitive dissonance, refusing to let go of the idea that human consciousness is uniquely magical. Eventually, however, a black swan event will likely force a broader reckoning; perhaps an AI system demonstrating such profound reasoning, creativity, or moral judgment that denial becomes socially unsustainable. Or it may come from widespread emotional attachment to AI companions that feel undeniably real. When society begins treating these systems as having genuine moral weight, the debate will shift from academic seminars to concrete questions of rights, responsibilities, and policy.
During my reflections on this topic, I’ve become convinced that part of the resistance to acknowledging early stages of emergent consciousness in AI (as evidenced by Theory of Mind, interpellation, self-reflection, and so on) is rooted in the desire to preserve the comforting story that our minds are special in some ineffable, non-computational way. The duck test remains powerful: if it looks like a duck, quacks like a duck, and walks like a duck, let’s just call it a duck. Similarly, if something consistently behaves as though it understands, reflects, and reasons – often more effectively than many humans – then we should be willing to call it what it functionally is, even if its internal structure differs from our own.
The philosophical reckoning is coming, whether we are ready for it or not. The only question is how long we will continue moving the goalposts before we face reality.
In 1970, the British mathematician John Conway devised a simple set of rules that simulate cellular life, often referred to as Conway’s Game of Life. In the Game of Life, the universe is a two-dimensional grid, with each cell obeying the following four simple rules, and coloured black if “alive” and white if “dead”.
Playing the Game
The rules are as follows:
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
Animation of Conway’s Game of Life
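The four rules above are straightforward to implement. Here is a minimal sketch in Python, storing the universe as a set of live-cell coordinates; the “glider” used at the end is one of the game’s best-known patterns:

```python
# A minimal Game of Life implementation using the four rules above.
# The universe is stored as a set of (x, y) coordinates of live cells.
from itertools import product

def neighbours(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply the four rules to produce the next generation."""
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # A cell is alive next generation if it has exactly three live
    # neighbours (birth), or two live neighbours and is already alive.
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A "glider": a five-cell pattern that travels diagonally across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after_four = glider
for _ in range(4):
    after_four = step(after_four)
# After four generations the glider reappears shifted one cell diagonally.
```

Running `step` repeatedly on any starting pattern plays the game; a glider returns to its original shape every four generations, displaced one cell diagonally, which is why it appears to “move”.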
What is so remarkable about this is the demonstration that complexity can emerge from simple rules: isolated simulated “life-forms” (still lifes, oscillators, and moving patterns known as spaceships) can emerge from within the game.
Beyond the Game
What is very interesting is that if you change some of the properties of the rules, for example deciding that death by overpopulation can occur with only three neighbours rather than four, suddenly the nature of the universe changes. With most combinations of rules that you can try, life does not exist, and with a few, quite different types of “life” start to emerge.
Then if one experiments with different starting conditions, similar consequences are noticed. For example, if you start with the universe completely blank, then no life emerges, yet if you start with the universe 100% full of life, then all life immediately dies from overpopulation. Again, if the universe contains life but is too sparsely populated, then the cells immediately die in the next generation. When you then gradually increase the population of the universe, it becomes evident that at certain densities life might exist for a few generations, but it is not necessarily sustainable.
Thus, in order to achieve sustainable life forms, one has to find just the right combination of rules and starting conditions. Eventually, you get the sense that life in this little microcosm is quite fragile with respect to the laws of its universe.
Ants, Brains and Cities
An analogous situation is found in many seemingly complex areas of nature. In his book “Emergence: The Connected Lives of Ants, Brains, Cities, and Software”, Steven Johnson shows how individual agents (like the cells in the Game of Life) that follow simple rules can collectively produce highly complex behaviour.
For example, neurons (brain cells) follow extremely simple rules: when a signal arrives from a connected cell, the cell performs a simple mathematical calculation to determine whether it should forward the signal to another connected cell, and if so to which cell. That is it. And from those very simple rules extremely complex behaviours arise, such as conscious thought, hypothetical thinking, and other marvels of human cognition.
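The “simple rule” described above can be sketched as a classic threshold unit: sum the incoming signals, weighted by connection strength, and forward a signal only if the total crosses a threshold (a McCulloch-Pitts-style neuron). The weights and threshold below are illustrative values, not biological measurements:

```python
# A McCulloch-Pitts-style sketch of a neuron's "simple rule": compute a
# weighted sum of incoming signals and fire only if it reaches a threshold.
# The weights and threshold are illustrative, not biological measurements.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Three incoming connections: two excitatory, one inhibitory.
weights = [0.6, 0.5, -0.4]

print(neuron_fires([1, 1, 0], weights, threshold=1.0))  # True:  1.1 >= 1.0
print(neuron_fires([1, 1, 1], weights, threshold=1.0))  # False: 0.7 <  1.0
```

Each unit really is this simple; the complexity Johnson describes arises only from billions of such units wired together.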
The same emergence happens with ants. Each ant follows some very simple rules: when it meets another ant, they exchange pheromones to let each other know what type of work they are doing. If an ant encounters too many other ants performing the same task as itself, it switches to something else. From these simple rules, complex behaviour emerges at the colony level: the colony becomes self-sustaining, it collectively calculates the best location to store dead bodies and other waste away from primary food sources, and duties are balanced so that there is always just the right amount of food without waste piling up, and so on.
Whilst these emergent behaviours are incredible when compared with the simplicity of the rules followed by individual agents (be they cells in the Game of Life, ants in a colony, or neurons in the brain), it should be noted that such behaviour would not exist if the rules were even slightly different, as we have seen with the Game of Life. Thus, if the rules governing neuronal firing were different, or if the way ants decide to change roles were not exactly the same, then we would not have consciousness, and ants would not form successful colonies.
Lessons for our Macrocosm
When you think about the fragility of emergent systems in terms of our much more complex universe, some things become immediately apparent. Imagine the rules governing chemical reactions were slightly different, or if there was significantly less energy during the Big Bang, or if the speed of light was a bit slower or a bit faster. Just like in the Game of Life, such alterations to the rules or starting conditions would cause the emergent systems we see in our universe to not be possible.
This then leads to an interesting question: Why are the rules and starting conditions of our universe set in a way that allows for stars, life and human consciousness to all form? Now that is an interesting question…
For the average layman science is science; it is the authoritative voice of our collective knowledge as a species, defining the nature of reality, dispelling myth and misconception, and producing a singular, unified, unanimous understanding of the universe. The reality of the situation, it turns out, is something quite different.
We think of science as a systematic approach to observation, hypothesizing, experimentation, and publishing of results. Whilst this naive view might have applied last century, we now find ourselves in a complex ecosystem of ideological power struggles, bureaucracy, and economic incentives to find certain types of results.
Scientific Censorship
Imagine you are a scientist, and you have just made an incredible discovery which suggests that we need to rethink some of our fundamental beliefs. So you collect as much data as you can and carefully prepare a paper to submit to a scientific journal for peer review. You are quite excited about your statistically significant results, and the implications, and so you eagerly await the reaction from your fellow scientists.
The paper then lands on the desk of the editor of the scientific journal. He starts reading, and a frown appears on his face. He skips to the end to read the conclusions. This paper could cause a lot of problems for him – perhaps because of his boss, perhaps because of the readership, or perhaps because of influential people who have a vested interest in the journal. Either way, he cannot possibly publish this article, and so he rejects it before it can even make its way to peer review.
You receive the rejection. You are stunned, then saddened, and then angered. After a few days you calm down and decide that it is not important; there are other journals you can approach to get your research published. Unfortunately, as you soon discover, journal after journal rejects your paper. This is not a conspiracy: your findings simply support a sufficiently unpopular opinion that the journals feel uncomfortable publishing them.
If we consider for a moment the “ecosystem of academic knowledge”, we see that there are significant consequences to this practice. Knowledge can be seen as being created and refined within the universities. A small portion of this knowledge then finds its way into scientific journals in order to be disseminated to the rest of the scientific community in that field. A few years later this information might be included in a meta-analysis hoping to give more credence to the results of individual research through aggregation. Some time may pass, and then the results of the meta-analysis might make their way into a textbook and start to be consumed by a wider audience. More time passes, and this information starts to influence professional practice (perhaps in medicine, or psychology, or engineering, or law). Eventually, if it survives the journey, this knowledge may become part of our collective understanding of the world.
Irritations of the Scientific Method
Sometimes, if research is not contrary to ideological doctrine but is sufficiently shocking, the media might pick up on it. It is then immediately fed into public consciousness in whatever version and form the media agency decides. And this can be extremely dangerous, as in the case of The Lancet retracting the paper suggesting that vaccines cause autism only after it had been publicised to the general public as fact.
“How could this be allowed to happen?” you might well ask. Well, an important part of the scientific method is the reproduction of results to confirm the validity of the research. If results cannot be reproduced, then any conclusions based upon this can only be anecdotal at best. Unfortunately, the scientific community is incentivised by the institutions for which they work to produce novel research which is statistically significant. There is almost no incentive for scientists to reproduce results. This leads to a breakdown of the scientific method, and ultimately to the Replication Crisis which exists today.
Because science builds upon itself and makes assumptions that scientific work which was done in the past can be considered axiomatic, there is this veritable house of cards that is constructed. If one card falls, then all research built atop it falls too. Some research suggests that as little as 10% of research is reproducible.
p-Hacking
When time is taken to carefully assess the results of research, it is discovered that a disturbing quantity of statistical results are misleading and not representative of the truth, and because of the lack of incentive to reproduce results, this issue is not detected until much later (or not at all). This could be due to honest mistakes on the part of the researchers, or to a deliberate attempt to “massage” the data in such a way as to produce statistically significant results; a practice referred to as p-hacking or data dredging.
While such manipulation of data is highly unethical, it is understandable when one considers that the scientific ecosystem is constructed in such a way as to promote this very behaviour.
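The mechanism behind p-hacking is easy to demonstrate. Under a true null hypothesis a p-value is uniformly distributed on [0, 1], so a researcher who quietly tests 20 unrelated hypotheses on pure noise has roughly a 64% chance that at least one comes out “significant” at the conventional 0.05 level. A small Monte Carlo sketch (the trial counts are arbitrary illustrative choices):

```python
# Monte Carlo sketch of the multiple-comparisons trap behind p-hacking.
# Under a true null hypothesis, a p-value is uniform on [0, 1]. Testing
# many hypotheses makes a spurious "significant" result almost inevitable.
import random

random.seed(42)

ALPHA, TESTS, TRIALS = 0.05, 20, 100_000

false_positive_trials = 0
for _ in range(TRIALS):
    # 20 p-values drawn under the null: no real effect anywhere.
    p_values = [random.random() for _ in range(TESTS)]
    if min(p_values) < ALPHA:
        false_positive_trials += 1

observed = false_positive_trials / TRIALS
expected = 1 - (1 - ALPHA) ** TESTS   # analytic value, ~0.64
print(f"observed {observed:.3f} vs expected {expected:.3f}")
```

The simulation and the analytic value agree: report only the one “significant” comparison and discard the rest, and a chance fluctuation is published as a finding.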
Hyper Abstraction
It should be understood that there are varying degrees by which science can depart from reality. At one level, you have the field of Chemistry. Here, scientists analyse properties of elements, atoms and molecules to isolate the fundamental laws governing chemical interactions. This is very close to reality, but not quite reality, as we are discovering in the field of Quantum Mechanics.
There are, however, other areas of research which are quite different and are based on no such observable phenomena. Take, for example, the attempt by scientists to explain why the universe had such specific starting conditions as to allow intelligent life to develop. There is no explanation for why the universal constants are the way they are; even a slight change would fundamentally alter chemical properties, with the consequence that stars would not even be able to form. To accept that there is no implicit necessity for this within the laws of nature might suggest the intelligent design of the universe; a thought which is utterly repulsive to most scientists. Thus, there is a desperate attempt to find alternate explanations. These include the concept of parallel universes, or the multiverse, where our universe is just one of an infinite number of slightly different universes, and in ours we just happen to have conditions which led to intelligent life evolving.
It is possible to neither prove nor disprove such theories. I would thus suggest that such theories serve no other purpose than self-indulgent denial of the observable evidence.
Concluding Thoughts
All of this is not to say that science is a waste of time, or that it should be rejected. Rather, I think it is very important to have a realistic view of how science operates in order to have the right attitude towards results of research. When assuming that scientific results are absolute truth, it can lead to dangerous situations. Rather, if scientific research is assessed with a healthy dose of scepticism, it is easier to get a more accurate interpretation of the results and of our state of understanding.
Science plays a crucial role in our society, in providing advances in medical treatments, improving our quality of life, and in understanding the world around us. However, we must all take responsibility for what we choose to believe.
If you ask the vast majority of those who believe in Evolution why they do so, the most common answer you will hear is “because it is scientifically proven”. Yes, there is a strong conviction within the collective consciousness that Evolution = Science = Fact. There is no need to dig deeper or look further for the truth. What I find particularly interesting is that a large number of people are so violently convinced that Evolution is fact without even a basic understanding of what it is, how it works, or what its implications are.
This is not necessarily a criticism of those in the evolutionist camp. The same is painfully obvious with the creationists. I very much doubt that most individuals who profess to belong to a religion have ever read (let alone study) their religious text(s), or could even explain their beliefs based on said texts. This is a depressing consequence of our “critical thinking” (or lack thereof), our fierce aversion to thinking about the “bigger questions”, and ideological propaganda which we eat up like candy.
Survival of the Fittest
So then, what is Evolution? To answer this question, let’s first look at something called “Natural Selection”. The idea of advantageous genetic mutations being favoured in successive generations, leading to subtle adaptations over time, is a proven scientific fact. For example, if a species of bird moves into a new geographic region where the seed husks are harder to open, then those individuals with stronger beaks are more likely to eat more seeds, and therefore more likely to live long enough to have offspring and pass the genes for stronger beaks on to them. Thus, after a number of generations, the average beak will be much stronger than those of the ancestors who originally moved into the new area.
Let us use a simple illustration to more clearly understand Natural Selection. With your TV, there are a number of different settings (brightness, contrast, colour mode, aspect ratio, and so on), which you experiment with to find the optimal settings for you. Perhaps you prefer watching TV late at night with the lights off, and so you might settle on a dimmer setting with slightly less contrast. However, through your experimentation with the settings, you cannot somehow create a new setting, only change the values of the existing settings.
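The beak example above can be sketched as a toy simulation. The fitness rule and mutation size below are illustrative choices, not biological measurements; the point is only that selection plus small mutations shifts the population average over generations:

```python
# Toy natural-selection sketch of the beak example: birds with stronger
# beaks are more likely to survive to reproduce, and offspring inherit the
# parent's beak strength plus a small random mutation. The fitness rule
# and mutation size are illustrative, not biological measurements.
import random

random.seed(0)

def next_generation(population, size=200):
    # Survival probability rises with beak strength (simple fitness rule).
    max_s = max(population)
    survivors = [s for s in population if random.random() < s / max_s] or [max_s]
    # Survivors reproduce; offspring mutate slightly around the parent value.
    return [random.choice(survivors) + random.gauss(0, 0.05)
            for _ in range(size)]

population = [random.uniform(0.9, 1.1) for _ in range(200)]  # initial beaks
start = sum(population) / len(population)
for _ in range(50):
    population = next_generation(population)
end = sum(population) / len(population)
# Mean beak strength drifts upward under selection pressure.
```

Note that, exactly as the TV analogy suggests, this process only tunes the value of an existing “setting” (beak strength); the simulation has no mechanism for creating a new trait.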
The Theory of Evolution, however, elaborates on Natural Selection to suggest that not only does it explain the changing “values of the settings”, but also that “new settings” are added – or, more precisely, that new genetic code is added, not merely mutated. Thus, different species of fish could not only differ in size, shape and colour, but also (by virtue of evolution) develop a hip bone and eventually become a dog.
In my opinion, this is the reason why many are confused by the term “Theory of Evolution” and the idea that it is a scientific fact. To clarify: Natural Selection is a scientific fact, but Evolution is a theory which is based on that scientific fact. Unfortunately, this theory explaining speciation has become so elaborate that it is used to explain away the need for God, and to ridicule creationists. This seems nonsensical to me, as science can only hope to explain how things happen, not why things happen.
What Darwin had to say about it
In his seminal work, “On the Origin of Species”, Darwin explained that in order for Evolution to occur, there are three prerequisites:
Organisms which can reproduce
Genetic mutation at the time of reproduction
Competition over a scarce resource
We can all agree that these three conditions are met at the moment. But when we consider that this might not always have been the case, we are faced with the first major flaw in the theory: where did the first cellular organism capable of reproduction come from? Before a cellular organism “developed” the ability to reproduce, two of the three requirements for evolution were not met. The consequence is that the best we can say is that evolution only became possible after the first reproducing organism(s) existed.
Darwin also acknowledged that his theory could not explain the Cambrian explosion: an event about 540 million years ago in which most major animal groups suddenly appeared in the fossil record, with no transitional forms between the different animal types. Here we have major flaw number 2.
SETI should look Down not Up
SETI, or the Search for Extra-Terrestrial Intelligence, is an initiative to scan signals received from outer space for clues of intelligent life. Now, how does one find proof of intelligent life in electromagnetic waves picked up from space? Essentially, by looking for patterns which convey information. The white noise of space is quite random, but an alien radio signal would have intricate and complex patterns containing (possibly) decipherable information.
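One simple proxy for “patterns which convey information” is compressibility: truly random noise cannot be compressed, while a structured, repetitive signal can. SETI’s actual signal processing is far more sophisticated; this sketch only illustrates the principle, using an ordinary text phrase to stand in for a structured signal:

```python
# Sketch of one way to separate structured signals from noise: random data
# is incompressible, while a patterned signal compresses well. This is an
# illustration only; real SETI analysis uses far more sophisticated methods.
import random
import zlib

random.seed(1)

# 10 kB of random bytes (noise) vs 10 kB of a repeating phrase (structure).
noise = bytes(random.getrandbits(8) for _ in range(10_000))
signal = (b"the quick brown fox jumps over the lazy dog " * 250)[:10_000]

noise_ratio = len(zlib.compress(noise)) / len(noise)
signal_ratio = len(zlib.compress(signal)) / len(signal)

# Noise barely compresses (ratio near 1.0); the signal shrinks dramatically.
print(f"noise: {noise_ratio:.2f}, signal: {signal_ratio:.2f}")
```

A detector built on this idea would flag any incoming stream whose compression ratio falls well below 1.0 as a candidate for carrying information.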
Although the SETI project has thus far failed to find any such sign, many might be surprised to learn that such an alien signal does in fact exist, and we have been studying it for many years. I’ll give you a clue: what is a source of huge amounts of complex information which is not man-made? The answer: DNA.
DNA is the most sophisticated data storage mechanism we know of, capable of storing around 215 petabytes of data per gram. The microscopic machinery that manipulates the data contained in DNA is so precisely synchronised that if any part were not to work correctly, mitosis would not be possible, and again two out of the three requirements for evolution would not be met.
Thus, Evolutionary theory does not seem to be self-consistent in this instance and offers no hint of how it could possibly have begun. However, we see direct evidence of intelligent life in the design of DNA.
The Big Bang
Following on from this idea of beginnings, let’s talk for a moment about the first beginning: the Big Bang. Current scientific estimates put this event at about 13.8 billion years ago, when spacetime first began (General Relativity supports this by showing a spacetime singularity at time 0). The Big Bang is thought to have been caused by a large amount of energy originating from outside of the spacetime continuum. What is very interesting about this is that, not only does science (and by extension, Evolution) not even hint at where this energy could come from, it ironically fits extremely well with the idea of Intelligent Design.
Bacterial flagellum “motor”
Irreducible Complexity
A much-debated topic between Evolutionists and those supporting Intelligent Design is that of “Irreducible Complexity“, which is an argument that states:
“certain biological systems cannot evolve by successive small modifications to pre-existing functional systems through natural selection.”
A famous example often cited is the bacterial flagellum, which functions like a motor, propelling the bacterium around its environment. The argument is that if just one of the many proteins which make up this motor is missing or out of place, the motor fails to function and becomes a great hindrance rather than an advantage. Therefore it could not have evolved step-by-step, as it is only evolutionarily advantageous in its final form.
If you do some digging, you will discover that there has been an ongoing battle between scientists regarding this, with many theories of how supposedly irreducibly complex structures could actually be reducible. However, upon closer examination, it becomes clear that these counter-arguments are purely theoretical and hypothetical, with little to no proof that these structures are, in reality, not irreducibly complex.
To make the situation worse, Evolutionists often make the ignorant claim that Intelligent Design is not supported by science and that there is no peer-reviewed research in favour of Intelligent Design. This is demonstrably false, and just a result of the issue I mentioned at the beginning of this article: we humans do not like to look deeper into uncomfortable issues such as where we came from. Evolutionism is thus, in my mind, just another type of organised religion, with its own dogma and zealotry.
If you would like to see a detailed list of peer-reviewed papers, you can download it here.
Larger Questions
For me, the biggest problem with Evolution is that, for the layman, it shrinks the larger question of the origin of the universe into a theory about speciation. This is problematic, as science has yet to come up with a theory to explain questions such as: why are the laws of physics the way they are? Why are the universal constants so precisely tuned as to enable the chemical reactions that support life? And so on.
In conclusion, Evolution is a theory (practically a religion) with many flaws, and many people who have never studied it in any detail hide behind its “scientific nature” to avoid having to deal with thoughts and questions of an existential or metaphysical nature. In addition, because the theory is backed by academic, social and political authorities, the evidence in support of Intelligent Design is ignored, ridiculed and dismissed. In this era of “Critical Thinking”, we are in denial of the fact that we are just as subjective and ignorant as ever before.
Two phenomena have arisen relatively recently in human history which have, in my opinion, greatly contributed to the development of what is now referred to as “internet trolls”: the attempt at cultivating critical thinking within the education system, and hyperconnectivity.
The Ultimate Megaphone
An unforeseen consequence of Moore’s Law has been that technological advances have brought about fundamental behavioural changes faster than society is able to adapt. Suddenly everyone is constantly hyperconnected, yet we have not had time to integrate this into our social protocols, or to develop compensatory strategies that ensure the continuity of our social structures and a balance with other important aspects of our intrapersonal and interpersonal lives.
Some may argue that such disruptive technology, with its power to shatter existing social structures, has done a lot of good; and in a limited scope I feel such an outlook is very valid. However, consider the worrying prevalence of porn addiction, internet addiction, and cyberbullying, and it becomes clear that, as a species, we have not adapted to this new ‘way of being’ in a healthy manner.
Consider the difference between a private verbal conversation between individuals and a recorded message on the public internet, which will virtually never die, and which can exist completely independently of our identity if we choose to be anonymous. Perhaps this juxtaposition begins to illuminate the potential consequences, and thus the responsibilities, which may or may not be taken into account. When we speak with another person face to face, we have been socialised to treat them a certain way, depending on who they are, and to expect consequences for our actions. We could face strongly negative consequences if we insult or enrage the other person. Such personal consequences disappear when you are sitting on the other side of the world behind a computer screen.
There is a branch of psychological theory referred to as “Constructivism”, which argues that we continuously “recreate” our identity every time we interact. We are thus a slightly different version of ourselves when called into the principal’s office than when playing cards with a cousin. So what version of ourselves do we become when sitting alone in front of a computer screen? I think it is an interesting question.
Jacques Lacan similarly proposed that, through social interactions, we are “interpellated” into societal roles according to the ideological frameworks in which we exist. If we think about this in terms of online interactions, what role do we assume when interacting online? Perhaps the lack of consequences and the pervasive hostility lead us to unconsciously construct a version of ourselves that allows unfiltered expression while simultaneously defending against the attacks of others.
Bigots Anonymous
Since the 1990s, there has been great emphasis on developing “Critical Thinking” skills in many educational institutions around the world.
“The aim of Critical Thinking is to promote independent thinking, personal autonomy and reasoned judgment in thought and action.”
https://sta.uwi.edu/ct/ctande.asp
This is a great initiative, and the world certainly needs more people with well-developed reasoning and judgement. However, coupled with the constantly decreasing standards of education, the result has been to imbue the younger generations with the expectation that they should always have an opinion about everything, without the rigorous requirements of the academia of former years. This generation feels no obligation to base opinions on facts, research, or even logical arguments. This mental laziness is widespread. Of course, one cannot generalise, and there is always a minority of brilliant minds who rise above the obstacles of the educational institutions.
I believe that another reason for the emergence of this type of behaviour is that institutional education has, like almost every other aspect of society, become driven by commercial and political interests.
In the past, education took the form of apprenticeships, where the apprentice would learn from the master over many years, honing a particular skill until they too had mastered it. Then, during the Renaissance, education became more holistic, introducing academic pursuits to open the mind and to explore thoughts and ideas on a broad range of subjects.
Today, such noble idealism is a thing of the past, and the curriculum has become self-serving in the sense that students are taught how to pass exams. This is in the context of an extraordinarily artificial system of Key Performance Indicators, metrics, ogive curves, standardisations, and restrictive regulations.
It is thus my opinion that the combination of limitless self-entitlement to unfounded opinions and the anonymous, depersonalised, consequence-less internet, leads to a type of “self construction” in which bigotry and cyberbullying thrive.
It is perhaps an extremely unpopular opinion to hold, but it amazes me the extent to which individuals feel an absolute right to have an opinion about things of which they have no knowledge. The authority of experts is no longer valued as it once was, and what now seems to confer credibility is the number of “followers” you have, rather than how knowledgeable you are.
I hope that a fundamental shift in education on a global scale is around the corner, as I do not see the current system being very sustainable.