(P2): Surprise and Weak Signals

A goldfish with a look of surprise on its face.

  • Learning By Surprise
  • What is the Adjacent Possible?
  • The Hindsight Bias
  • “I wanted a perfect ending. Now I’ve learned, the hard way, that some poems don’t rhyme, and some stories don’t have a clear beginning, middle, and end. Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next. Delicious Ambiguity.”
    ― Gilda Radner

Instead of glossing over surprises as failures of understanding, we should focus on them until we have grasped their novelty and how that novelty needs to change our view of reality. We need to avoid abstracting from surprise to make it only another example of what we already know to be true.

Novel occurrences are novel for us, but they are also typically some “next step” from that with which we are already familiar. They are often called the “adjacent possible” because once they have occurred, it is fairly easy to see how they came about. This is true even if no one anticipates them. It is important to remember that in a Complex Adaptive System, there are always many adjacent possibilities for the future.

There is another common problem that results from rationalizing surprises. We look back on the surprise and try to figure out who accurately anticipated it. We think this will improve our prediction capability in the future.

Looking back does improve our understanding of the current situation. It doesn’t improve our ability to predict any genuinely novel future. If we examine what people thought about the future before the novel occurrence, we will see a very large number of ideas about what might happen.  The particular idea about the future that turned out to be accurate had no more or less information about its likelihood than many of the other ideas. The novelty tells us something useful about the current state of the CAS we are in and where it might evolve in the short term. It doesn’t improve our ability to foresee the genuinely new.

Creative Commons Attribution 4.0 International License

(P2): Perceiving Weak Signals Overview

A woman with bruising injuries on her face from partner abuse, holding a sign that says 'Did you notice me?'

  • Great things are done by a series of small things brought together. – Van Gogh
  • Even the largest avalanche is triggered by small things. – Vernor Vinge
  • You’ve got to think about big things while you’re doing small things, so that all the small things go in the right direction. – Alvin Toffler
  • Men don’t pay attention to small things. – Katherine Johnson

Because our usual habits make us ignore weak signals, we need to cultivate new habits that make it more likely we will notice them. These new habits can’t be automatic. They must involve reflective attention: not just attention to something that is there, but consideration of its meaning. Below is a list of techniques and concepts that I hope will aid you in seeing what is important, but almost not there:

  • Surprise Can Point to Weak Signals
  • Environmental Scanning for Weak Signals
  • Tinkering and Bricolage to find Weak Signals Right Around You
  • Learning About Weak Signals Through Safe-to-Fail Experiments
  • Thinking of Weak Signals as Insurgencies

I’ll try to amplify each of these in the posts below.


(P2): So What Do We Do with Those Weak Signals?

A slide titled “What to Do with Weak Signals”

First, we have to actually pay attention to them. Our default is to ignore them as unimportant. That means we have to have a way of making them stand out. Most importantly, we have to conserve the meaning in the story of any weak signal instead of homogenizing, averaging, or abstracting that meaning through ordinary statistical analysis. That is one of the strengths of SenseMaker. Its function is, first of all, to make raw weak signals stand out in a number of ways. We need to do the same.

Then, we have to ask ourselves about the value of the narratives we have acquired for supporting or undermining positive change. This isn’t simple to do. But our first-order goal with these signals is to amplify the ones that support positive change and dampen the ones that undermine it. Because these are weak signals, we can try out ways to do both in time frames short enough to let us change our approach as we learn which weak signals we can effectively amplify or dampen, and when we need to look at different initiatives to produce these outcomes.

The reason this works at all in trying to change a CAS is that the cycle of experiment and evaluation is short. Such an approach respects the dispositional nature of a CAS and doesn’t require us to use prediction and mechanical outcomes as the signs of progress.


Part Two: Detecting and Using Weak Signals (Cynefin)

A specimen Cynefin diagram (not the newest, not the oldest). Simple / Obvious: The simple/obvious domain represents the 'known knowns'. This means that there are rules in place (or best practice), the situation is stable, and the relationship between cause and effect is clear. Complicated: The complicated domain consists of the 'known unknowns'. The relationship between cause and effect requires analysis or expertise; there is a range of right answers. The framework recommends 'sense–analyze–respond': assess the facts, analyze, and apply the appropriate good operating practice. Complex: The complex domain represents the 'unknown unknowns'. Cause and effect can only be deduced in retrospect, and there are no right answers. 'Instructive patterns ... can emerge,' write Snowden and Boone, 'if the leader conducts experiments that are safe to fail.' Cynefin calls this process 'probe–sense–respond'. Chaotic: In the chaotic domain, cause and effect are unclear. Events in this domain are 'too confusing to wait for a knowledge-based response'. Managers 'act–sense–respond': act to establish order, sense where stability lies, and respond to turn the chaotic into the complex. Disorder / Confusion: The dark disorder domain in the centre represents situations where there is no clarity about which of the other domains apply.

Cynefin is a body of knowledge and tools to assist in changing CAS, among other things. As an enterprise intervention, Cynefin has also developed a “narrative access and analysis tool” called SenseMaker™. SenseMaker allows intervenors to accurately access participants’ raw views as short narratives, without groupthink or homogenization. It is this ability that allows for the detection of weak signals.

Because SenseMaker is delivered as an app, its users can engage huge numbers of people in a very short time. The example that had the most impact on my understanding of its capacities was an effort to work around the unwillingness of local citizens to say what they actually thought to US civil and military personnel in SE Asia.

The system was used to ask children to relate a story from their grandparents about the most important lesson the grandparents had learned in their lives. The children then submitted the stories through the SenseMaker app. This project gathered 50,000 stories in four days. There is simply nothing else that supports authentic narrative by real participants with the speed of SenseMaker.

Unfortunately for our community, SenseMaker is an enterprise tool and is priced that way. I have been exploring ways we might be able to use this system in our community, but I am some distance from a genuine solution.

That doesn’t mean that we can’t make use of the idea if we can come up with ways to assure fidelity to SenseMaker’s ability to easily access real raw narratives from participants.

I’ll discuss some ideas for using this general framework to get meaningful narratives in our community in later posts. For now, I hope you can see the importance of weak signals in the development and use of our FutureStrategy.


(P1): Why Are Weak Signals Ignored?

A slide: Weak Signals Detection with Social Media – No Surprise at All? Theory: In contemporary futures studies, the term weak signals refers to an observed anomaly in the known path or transformation that surprises us somehow (Kuosa, 2014, p. 22). Our Experiences: Are We Alone? Possible Explanations: #1 Noisy social media and other limits; #2 Filters; #3 Customers are experts; #4 Epistemological limits

Most of the ways we have of finding signals in CAS make us ignore the weak signals.

Surveys, focus groups, social media scans, and almost all the paraphernalia of social research homogenize signals to allow the “provable” detection of the Big Signals, the ones that represent larger trends in the CAS. And statistics, as usually used in these studies, is designed to relegate weak signals (at best) to a distant periphery where they can be ignored. Think about what you were taught about the bell-shaped curve, and about what you believe is meaningful in the data.
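The homogenizing effect described above is easy to see in a toy example. This minimal Python sketch uses entirely hypothetical survey data: three “weak signal” responses barely move the summary statistics, and a routine outlier-cleaning rule then discards them altogether.

```python
import statistics

# Hypothetical scores: 997 "ordinary" responses near 50,
# plus 3 responses carrying a weak signal near 95.
ordinary = [48, 50, 52] * 332 + [50]   # 997 values
weak_signal = [95, 96, 97]             # 3 values
responses = ordinary + weak_signal

mean = statistics.mean(responses)
stdev = statistics.stdev(responses)

# The summary statistics are nearly identical with or without the signal.
print(round(mean, 2), round(statistics.mean(ordinary), 2))

# A common cleaning rule ("discard anything beyond 3 standard deviations")
# throws the weak-signal responses away entirely.
kept = [x for x in responses if abs(x - mean) <= 3 * stdev]
print(len(responses) - len(kept))  # how many responses were discarded
```

The mean shifts by barely a tenth of a point, so anyone reading only the summary never learns the signal existed; the raw narratives behind those three responses are exactly what SenseMaker-style methods try to conserve.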

This approach to detecting signals is a framework that our social and profit-driven CAS imposes on us as the meaning of “worthwhile pursuit”.  Weak signals are seen as useless in this framework and, thus, meaningless.

To find weak signals, we have to access the raw narrative that the signal creates once it comes into existence. We have to deliberately prevent the homogenization and loss of the weak signal through our usual methods of assigning meaning to the information. We have to learn to pay attention to the small, weak, and powerless.


(P1): Weak Signals

A network diagram with a large hub (strong links) and many smaller hubs (weak links)

One way to think about weak signals is through network modeling.

It is intuitive to view the strong links in a network as the important ones, and the weak links as unnecessary details or random defects that don’t contribute to the purpose or function of the network. But in complex adaptive systems, strong links generate volatile, unpredictable behavior. Weak links buffer the volatility of the strong links’ activity and are largely responsible for the stability of the network, even as the CAS goes about its merry way.

Interestingly, there are two communities that deliberately eliminate weak links from their social lives:

  • People who are homeless and desperate, I suppose because they believe that persons they don’t know very well are persons they can’t trust.
  • Very rich people, I suppose because they believe that those who aren’t their peers can’t be trusted and are after their money.

Both of these communities are largely right in their loss of trust in weak links, which says something about their location in the current CAS and about their personal futures in it.

Note that authoritarian regimes and cults both eliminate weak links in the belief that their survival only depends on the strong links that produce (in their view) their power and wealth. These kinds of “strong” CAS are notoriously volatile and readily suffer collapse if any insurgency can disturb the control.

This framework maps to the basic CAS change concept that powerful system trends are very difficult to control for positive change (even if they might support our change). The best opportunity for change lies in the weak links, because they are small now, but can grow to have much greater influence.

But identifying the weak links that might be the best support for CAS change efforts remains difficult because those links aren’t poking us in the face.


(P1): Why the Obvious Problems are the Hardest to Change

A political cartoon from a Massachusetts newspaper in 1812, showing a gerrymandered district just like the ones we have today.

We usually approach change by focusing on the most apparent problem in our environmental horizon (what is called a pain point in customer service). Note that the slide image is a gerrymandering cartoon from 1812, and, in my mind, gives pause to the idea that we can deal with current gerrymandering through normal problem-solving (voting, passing laws, constitutional amendments, getting the right people into office, and so on).

The most obvious problems for us are usually the ones best supported by the operation of the current CAS. Many diverse forces support our obvious problems, and mechanically organized problem solving will miss most of those supporting forces in its quest to change the obvious. So our problem solving will fail, often in the short run, but eventually in any case. This can be true even when there are powerful forces supporting change.

Often, our most obvious problems in a complex adaptive system are the core of its strength as a system and support its resilience to meaningful change efforts.

At the same time, the CAS is constantly generating new trends, and sometimes reviving old ones that have been gone for a while. These variations of process are small-scale, and we almost never pay any attention to them. No one ever says, “Let’s stop ignoring the flea in the room”.

But the potential for long-term change in a CAS lies precisely in these small variations, or, in systems theory terms, “weak signals”. The weak signals are the indicators (not guarantors) of where to look for levers of change.


(P1): Basic Ways of Thinking about CAS

A hand-drawn diagram of the Cynefin Framework, which is ironically very complex. Text version through link.

By Edward Stoop at Sketching Maniacs
Text Version of Hand-drawn Cynefin Diagram

Because changing a CAS requires an entirely different way of engaging, we must develop new skills and new ways of perceiving in order to manage the losses we will not be able to avoid and to frame our future actions more strategically. These new skills are not mechanical procedures or recipes. They require ongoing engagement with the CAS and flexibility of response. These two dimensions of our CAS change strategy are the very things we have spent millennia trying to eliminate from our change plans, and our work to increase engagement and flexibility results from rejecting the “system as machine” mentality.

This is not in any way a moralistic judgment. Unintended consequences don’t occur because the universe is imposing some personal moral sanction on your bad actions. Every time we create a short-term advantage for ourselves, we create an unintended and largely unperceived consequence somewhere down the tunnel of the future, elsewhere in the CAS.

Humans are evolutionarily adept at devising and using short-term tactics to secure some immediate good. Before states were a reality (say, 7,000 years ago), this generally worked well for us. There was enough room in the world for our waste and mistakes to be recycled as we migrated elsewhere. The world would be “fixed” before we came back to the place we started, as it were. Now, over time, someone will eventually pay for our short-term thinking. Unintended consequences are triggered by all our efforts to stay ahead of the results of our current decisions. And everyone else is doing the same thing. So we, or our descendants, all eventually get burned by the distant actions of someone else. Our tweaks just make things worse over time.

The following posts will each focus on one aspect or another of engaging CAS. The image in this slide is itself an engaging way to think about CAS.


(P1): Approaching the Wild CAS

A large, powerful waterfall at Eagle River in Michigan's Upper Peninsula, as an example of a wild CAS
Eagle River

One of the ways of thinking about modern society is that our lives are becoming more like membership in a wild ecosystem. Our common CAS is becoming more like the ecosystems that existed before humans had such a profound impact on nature.

For many centuries, societies have reflected some set of values and outcomes derived from elites’ efforts to make society gratify elite needs. But the shift toward a more ecosystem-like CAS is gradually undermining this hierarchical control, and, as in an ecosystem, it is becoming more difficult for any individual to organize their own future. Hence the willingness of tech tycoons to consider going to another planet in order to preserve their privilege.

Although we think of power as something that an individual has, power is gifted to a person or group by a larger community (human, financial, religious, etc.). It can be and is taken away when the community no longer sees that the person or group contributes to its purpose. While an “apex predator” makes a convenient political metaphor for power and control, in a real ecosystem, the predators die off if the actual ecological basis of their supposed “power” disintegrates.

Our society is becoming more like other evolutionary systems, and there is no guarantee that such a shift will favor humans (or our disability community), or, for that matter, anything that now exists. Evolutionary systems favor continuing evolution, not any of the “parts” of the CAS. The continuation of evolutionary change depends on the generation of variation as evolution’s hedge against the uncertainty of the future. That future uncertainty clouds all efforts to control the future and spawns a dodgy business opportunity for anyone willing to claim they can predict it.

We humans tried to work around that reality by isolating and organizing our exploitation of nature, buffering our goals against the relentlessly increasing complexity of unintended consequences. We are losing that long-standing effort for the same reason that all short-term advantage gives way to the “revenge” of long-term biological processes.

My point is, as it is elsewhere, that traditional control behavior is becoming less and less effective and more and more expensive every second of every day.


Part 1: Changing Complex Adaptive Systems (CAS): How Exactly?

A slide entitled Wrangling Complexity, with stick figures working on computers to manage a network shown on a large screen.

There are two problems with the title question: “How” and “Exactly”.

We automatically use machine model beliefs and operational planning to build and implement change. Such an approach is largely inadequate for a CAS.

Any CAS has mechanical aspects, and we can delude ourselves into thinking that because we can change the mechanical aspects with operational planning, we should be able to change the whole kit and caboodle that way (as in, for example, “you can only change what you can measure”, and similar sophistry).

It ain’t so.

As the slide image suggests, changing a CAS is more like wrangling (cats, say) than standard, measurable-outcomes-based operational planning would have us believe.

If our old standard tools and operating assumptions don’t give us the control we seek, what will?

Well, the bad news is that nothing will give us that amount or scope of control over a CAS. The good news, as unlikely as it might seem, is that we can change the way we approach CAS and be effective.

A CAS is a “dispositional system”, not a mechanical one. A CAS tends to move in directions, but because of the interaction of its parts, there are no guarantees about where it will end up.

Changing a CAS is like approaching a cobra to take close up photos. You can get those photos, but you need to be circumspect to avoid getting bit.
