(P3): Destabilizing Weak Constraints in Advocacy

Image of the Night King from Game of Thrones Series

  • "I came to a stark realization: chronic surpluses could be almost as destabilizing as chronic deficits."
    ― Alan Greenspan
  • "One of the points about distractions is that everything they do is destabilizing."
    ― Bruce Sterling
  • "Yet, history has shown that if material force can defeat some ideologies it can no longer obliterate a civilization without destabilizing the whole planet."
    ― Abdelaziz Bouteflika

In a Complex Adaptive System (CAS), any form of interaction between the system and the outside world can be usefully viewed as a weak constraint and a potential target for destabilization. Obviously, some constraints are closer to the heart of your advocacy outcome than others. But there are always more ways to pursue a valued change than whatever worked the first time we tried it.

The biggest problem we advocates have in interacting with a CAS is that we settle on a technique or procedure that has worked for us in the past. This approach, while understandable, dramatically reduces the palette of ways we might destabilize the CAS for a valued purpose.

When we use the same techniques with the same CAS over and over, the CAS will adapt to them, making our advocacy more complex and expensive to use. Additionally, when the larger environment in which our target operates faces the same set of destabilization techniques, that larger environment will also adapt, narrowing the impact of our efforts to destabilize and making the outcomes we achieve more predictable and, thus, more manageable by targets. Both the target and our advocacy become more rigid.

An example (in the next post) will give you the idea of how local, state, and national CAS and our advocacy approaches adapt over time to successful advocacy.

Creative Commons Attribution 4.0 International License

(P2): A Weak Constraint as a Potential Insurgency

A painting of a medieval revolt. Many people and soldiers fighting one another.

Mostly, CAS (Complex Adaptive Systems) view both internally generated and externally driven encounters as disturbances or perturbations. For purposes of understanding how you can advocate for change in a CAS, I prefer to think of these triggers as insurgencies.

An adjacent possible is something you can do readily from where you are right now. Some insurgencies keep resurfacing, an indication of an adjacent possible.

There are always more adjacent possibles than you know. They are often weak constraints, and we tend to pick one, stick with it as our preferred novel change target, and fail to see the other possibilities lurking close by. Our ability to survey the possibilities of the uncertain world around us is encumbered by our automatic focus on the easiest possibility to perceive.

Insurgencies subvert by their mere existence. In fact, a traditional way to turn a weak constraint into an insurgency is to trigger a response from the Target CAS. This is part of the reason why they are so hard to eliminate. Failed insurgencies are typically replaced by changes that will also trigger a new set of possibilities and a new insurgency.

Subversion is always possible. There is no way to build a fortress that is impervious to an insurgency. In fact, I think it is reasonable to describe the ongoing human conflicts in every state over the last 7,000 years as an insurgent struggle for change and freedom against a status quo struggling to increase and preserve control.

So, an insurgency is a kind of constraint, and it can move from a “weak” constraint to a powerful force for change just because the target reacts to its disturbance.


(P2): Safe-to-Fail Experiments as Weak Constraints

Strange yellow and black bicycle with a perfectly square frame and no brakes.

The idea of Safe-to-Fail Experiments was developed by Dave Snowden as part of his Cynefin framework. The technique is a way to learn about a complex adaptive system without triggering unintended consequences that are out of your control (see the link above). But the concept of using probes to learn about complex systems is useful in many other contexts, most notably in social justice advocacy.

Most advocacy is premised on the idea that there are legal constraints on the behavior of target systems, and that these constraints can be used to change the behavior of the system. In other words, advocacy can use procedures repeatedly to create change. Implicitly, we only need to understand the legal constraint under which a system operates and the change procedures (complaints, lawsuits, etc.). We don’t need to understand the politics or history of the system we are trying to change, all of which are, of course, other kinds of constraints.

But we do need to appreciate these aspects of a system before we can hope to change it successfully. This is because even the most apparently logical procedural path of some bureaucratic machine is, as we all know, a little "Peyton Place": more complex and messier than the bones of the procedure would suggest. Which is to say, all bureaucracies are Complex Adaptive Systems that use much of their available energy to prevent disturbances from forcing changes to their existing constraints.

From inside a bureaucracy (or any large organization, including for-profit corporations), creating change must involve experiments small enough not to trigger the annihilation of the experimenters or the CAS, but large enough to teach you something useful about the system's dispositional trajectory, about its system of constraints.
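The shape of such a round of small experiments can be sketched in code. This is a toy illustration only: the probe outcomes here are simulated with a random draw, and the function names (`run_probe`, `safe_to_fail_round`) are my own, not part of Cynefin. In real advocacy, each probe would be a small, recoverable intervention whose failure cost is bounded by design, and the "amplify/dampen" sorting would come from observing the system's response.

```python
import random

def run_probe(idea: str, rng: random.Random) -> bool:
    """Simulate whether a small experiment shows a promising signal.

    Placeholder: a real probe is a bounded, recoverable intervention
    in the target system, not a random draw.
    """
    return rng.random() < 0.3  # illustrative success rate only

def safe_to_fail_round(ideas: list, seed: int = 0) -> dict:
    """Run many small probes in parallel; amplify what works, dampen what doesn't.

    Each probe is small enough that failure teaches rather than harms.
    """
    rng = random.Random(seed)
    results = {"amplify": [], "dampen": []}
    for idea in ideas:
        bucket = "amplify" if run_probe(idea, rng) else "dampen"
        results[bucket].append(idea)
    return results
```

The point of the sketch is the structure, not the simulation: several cheap probes run at once, every idea ends up in exactly one bucket, and even the "dampen" bucket is information about the system's dispositions.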

Safe-To-Fail is also a useful tool for changing that most personal of CAS, yourself.


(P2): Weak Constraints Can Change Future CAS Behavior

Black and White photo of a small child trying to move a large boulder.
High Hopes

In addition to the obvious effect of weak constraints on the target system described earlier, we also need to understand weak constraints as fulcrums for coordination, in the same manner that our bones, joints, and muscles serve as fulcrums for our movement, even the most sophisticated.

If there are no such constraints, the system seems freer than it is when these constraints to movement are present. But this freedom is like that of an amoeba: you can move anywhere, but without any sophistication. You are "free" to do much less than you could do if the "constraints" were present. This idea of using constraints as fulcrums for sophisticated advocacy is the key to understanding how we can use the weak constraints (and sometimes the strong constraints) in a system to leverage change. What constraints enable, among other things, is the coordination of our advocacy work to achieve meaningful impact.

Because strong constraints are well defended in target CAS, it can be difficult to change them directly. But strong constraints still represent fulcrums that the target must respect. So they can be used much as in Aikido or Jiu-jitsu: by channeling the energy that the target CAS must invest to prevent damage to itself into "forced" change. This is different from trying to eliminate or replace strong constraints, which, frankly, almost always ends very badly.


(P2): Surprise and Weak Signals

A goldfish with a look of surprise on the face.

  • Learning By Surprise
  • What is the Adjacent Possible?
  • The Hindsight Bias
  • “I wanted a perfect ending. Now I’ve learned, the hard way, that some poems don’t rhyme, and some stories don’t have a clear beginning, middle, and end. Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next. Delicious Ambiguity.”
    ― Gilda Radner

Instead of glossing over surprises as failures of understanding, we should focus on them until we have grasped their novelty and how that novelty needs to change our view of reality. We need to avoid abstracting from surprise to make it only another example of what we already know to be true.

Novel occurrences are novel for us, but they are also typically some “next step” from that with which we are already familiar. They are often called the “adjacent possible” because once they have occurred, it is fairly easy to see how they came about. This is true even if no one anticipates them. It is important to remember that in a Complex Adaptive System, there are always many adjacent possibilities for the future.

There is another common problem that results from rationalizing surprises. We look back on the surprise and try to figure out who accurately anticipated it. We think this will improve our prediction capability in the future.

Looking back does improve our understanding of the current situation. It doesn’t improve our ability to predict any genuinely novel future. If we examine what people thought about the future before the novel occurrence, we will see a very large number of ideas about what might happen.  The particular idea about the future that turned out to be accurate had no more or less information about its likelihood than many of the other ideas. The novelty tells us something useful about the current state of the CAS we are in and where it might evolve in the short term. It doesn’t improve our ability to foresee the genuinely new.


Part Two: Detecting and Using Weak Signals (Cynefin)

A Specimen Cynefin Diagram (not the newest, not the oldest).
Simple / Obvious: The simple/obvious domain represents the 'known knowns'. This means that there are rules in place (or best practice), the situation is stable, and the relationship between cause and effect is clear.
Complicated: The complicated domain consists of the 'known unknowns'. The relationship between cause and effect requires analysis or expertise; there is a range of right answers. The framework recommends 'sense–analyze–respond': assess the facts, analyze, and apply the appropriate good operating practice.
Complex: The complex domain represents the 'unknown unknowns'. Cause and effect can only be deduced in retrospect, and there are no right answers. 'Instructive patterns ... can emerge,' write Snowden and Boone, 'if the leader conducts experiments that are safe to fail.' Cynefin calls this process 'probe–sense–respond'.
Chaotic: In the chaotic domain, cause and effect are unclear. Events in this domain are 'too confusing to wait for a knowledge-based response'. Managers 'act–sense–respond': act to establish order; sense where stability lies; respond to turn the chaotic into the complex.
Disorder / Confusion: The dark disorder domain in the centre represents situations where there is no clarity about which of the other domains apply.
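The domain-to-response pairings in the diagram can be sketched as a small lookup table. The dictionary and function name below are illustrative, not part of Cynefin itself, and the 'sense–categorize–respond' sequence for the obvious domain is the standard Cynefin formulation, though not spelled out in the diagram text.

```python
# Cynefin domain -> recommended action sequence, as described in the diagram.
CYNEFIN_RESPONSES = {
    "obvious": ("sense", "categorize", "respond"),    # apply best practice
    "complicated": ("sense", "analyze", "respond"),   # apply good operating practice
    "complex": ("probe", "sense", "respond"),         # safe-to-fail experiments
    "chaotic": ("act", "sense", "respond"),           # act first to establish order
}

def recommended_sequence(domain: str) -> tuple:
    """Return the action sequence for a Cynefin domain.

    Anything not in the table falls into disorder/confusion, where the
    first task is working out which of the other domains actually applies.
    """
    return CYNEFIN_RESPONSES.get(domain.lower(), ("clarify which domain applies",))
```

Note what the table makes visible: only the order of the three moves changes between domains. In the complex domain, where most advocacy happens, probing comes before sensing, which is exactly why safe-to-fail experiments matter.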

Cynefin is a body of knowledge and tools to assist in changing CAS, among other things. Cynefin, as an enterprise intervention, has also developed a "narrative access and analysis tool" called SenseMaker™. SenseMaker allows the intervenors to accurately access raw views from the participants as short narratives, without groupthink or homogenization. It is this ability that allows for the detection of weak signals.

Because SenseMaker is available as an app, its users can engage huge numbers of people in a very short time. The example that had the most impact on my understanding of its capacities was an effort to work around the unwillingness of local citizens to say what they actually thought to US civil and military personnel in SE Asia.

The system was used to ask children to relate a story from their grandparents about the most important lesson that the grandparents had learned in their lives. Then the children sent the stories using the SenseMaker app. This project got 50,000 stories in four days. There is simply nothing else that supports authentic narrative from real participants at the speed of SenseMaker.

Unfortunately for our community, SenseMaker is an enterprise tool and is priced that way. I have been exploring ways we might be able to use this system in our community, but I am some distance from a genuine solution.

That doesn’t mean that we can’t make use of the idea if we can come up with ways to assure fidelity to SenseMaker’s ability to easily access real raw narratives from participants.

I’ll discuss some ideas for using this general framework to get meaningful narratives in our community in later posts. For now, I hope you can see the importance of weak signals in the development and use of our FutureStrategy.


(P1): Why Are Weak Signals Ignored?

A slide: Weak Signals Detection with Social Media: No Surprise at All? Theory: "In contemporary future studies the term weak signals refers to an observed anomaly in the known path or transformation that surprises us somehow." (Kuosa, 2014, p. 22) Our Experiences: Are We Alone? Possible Explanations: #1 Noisy social media and other limits; #2 Filters; #3 Customers are experts; #4 Epistemological limits.

Most of the ways we have of finding signals in CAS make us ignore the weak signals.

Surveys, focus groups, social media scans, and almost all the paraphernalia of social studies research homogenize signals to allow the "provable" detection of the Big Signals, the ones that represent larger trends in the CAS. And statistics, as usually used in these studies, is designed to relegate weak signals (at best) to a distant periphery where they can be ignored. Think about what you were taught about the bell-shaped curve, and about what you believe is meaningful in the data.

This approach to detecting signals is a framework that our social and profit-driven CAS imposes on us as the meaning of “worthwhile pursuit”.  Weak signals are seen as useless in this framework and, thus, meaningless.

To find weak signals, we have to access the raw narrative that the signal creates once it comes into existence. We have to deliberately prevent the homogenization and loss of the weak signal through our usual methods of assigning meaning to the information. We have to learn to pay attention to the small, weak, and powerless.
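The homogenization described above can be made concrete with a toy example. The data below is invented for illustration: a pile of short narrative fragments in which a rare but serious report is numerically swamped. A "big signal" summary that keeps only the dominant themes drops it entirely; a deliberate weak-signal pass over the rare remainder surfaces it.

```python
from collections import Counter

# Invented narrative fragments, standing in for raw participant stories.
responses = (
    ["service was fine"] * 60
    + ["forms were confusing"] * 35
    + ["I was quietly threatened with losing benefits"] * 2  # the weak signal
    + ["waiting room was cold"] * 3
)

counts = Counter(responses)

# Typical big-signal summary: report only the dominant themes.
big_signals = [theme for theme, n in counts.most_common(2)]

# Weak-signal pass: deliberately inspect the rare, unhomogenized remainder.
weak_signals = [theme for theme, n in counts.items() if n <= 3]
```

The dominant-theme summary is not wrong about the trends; it is simply built to discard exactly the kind of rare, high-stakes narrative that weak-signal work exists to find.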


(P1): Why the Obvious Problems are the Hardest to Change

A political cartoon from a paper in Massachusetts in 1812 showing a Gerrymandered district just like the ones we have today.

We usually approach change by focusing on the most apparent problem in our environmental horizon (what is called a pain point in customer service). Note that the slide image is a gerrymandering cartoon from 1812, and, in my mind, gives pause to the idea that we can deal with current gerrymandering through normal problem-solving (voting, passing laws, constitutional amendments, getting the right people into office, and so on).

The most obvious problems for us are usually the ones best supported by the operation of the current CAS. Many diverse forces support our obvious problems, and mechanically organized problem-solving will miss most of those supporting forces in its quest to change the obvious. So our problem-solving will fail, often in the short run, but eventually in any case. This can be true even when there are powerful forces supporting change.

Often, our most obvious problems in a complex adaptive system are the core of its strength as a system and support its resilience to meaningful change efforts.

At the same time, the CAS is constantly generating new trends, and sometimes reviving old ones that have been gone for a while. These variations of process are small in scale, and we almost never pay any attention to them. No one ever says, "Let's stop ignoring the flea in the room."

But the potential for long-term change in a CAS lies precisely in these small variations, or, in systems-theory terms, "weak signals". The weak signals are the indicators (not guarantors) of where to look for levers of change.


(P1): Basic Ways of Thinking about CAS

A hand drawn diagram of the Cynefin Framework which is ironically very complex. Text Version through link.

By Edward Stoop at Sketching Maniacs
Text Version of Hand-drawn Cynefin Diagram

Because changing a CAS requires an entirely different way of engaging, we must develop new skills and new ways of perceiving in order to manage the losses we will not be able to avoid and to frame our future actions more strategically. These new skills are not mechanical procedures or recipes. They require ongoing engagement with the CAS and flexibility of response. These two dimensions of our CAS change strategy are the very things we have spent millennia trying to eliminate from our change plans, and our work to increase engagement and flexibility results from rejecting the "system as machine" mentality.

This is not in any way a moralistic judgment. Unintended consequences don't occur because the universe is imposing some personal moral sanction on your bad actions. Every time we create a short-term advantage for ourselves, we create an unintended and largely unperceived consequence somewhere down the tunnel of the future, elsewhere in the CAS.

Humans are evolutionarily favored in devising and using short-term tactics to secure some immediate good. Before states were a reality (say, 7,000 years ago), this worked well for us generally. There was enough room in the world for our waste or mistakes to be recycled as we migrated elsewhere. The world would be “fixed” before we came back to the place we started, as it were. Now, over time, someone will eventually pay for our short-term thinking. Unintended consequences are triggered by all our efforts to stay ahead of the results of our current decisions. And, everyone else is doing the same thing. So, we or our descendants all eventually get burned by the distant actions of someone else. Our tweaks just make things worse over time.

The following posts will each focus on one aspect or another of engaging CAS. The image in this slide is itself an engaging way to think about CAS.


(P1): Approaching the Wild CAS

A Large powerful waterfall at Eagle River in Michigan's Upper Peninsula, as an example of a wild CAS
Eagle River

One of the ways of thinking about modern society is that our lives are becoming more like membership in a wild ecosystem. Our common CAS is becoming more like the ecosystems that existed before humans had such a profound impact on nature.

For many centuries, societies have reflected some set of values and outcomes derived from the effort by elites to make society gratify elite needs. But the shift toward a more ecosystem-like CAS is gradually undermining this hierarchical control, and, as in an ecosystem, it is becoming more difficult for any individual to organize their own future. Hence the willingness of tech tycoons to consider going to another planet in order to preserve their privilege (see the linked article above).

Although we think of power as something that an individual has, power is gifted to a person or group by a larger community (human, financial, religious, etc.). It can be and is taken away when the community no longer sees that the person or group contributes to its purpose. While an “apex predator” makes a convenient political metaphor for power and control, in a real ecosystem, the predators die off if the actual ecological basis of their supposed “power” disintegrates.

Our society is becoming more like other evolutionary systems, and there is no guarantee that such a process shift will favor humans (or our disability community), or for that matter anything that now exists. Evolutionary systems favor continuing evolution, not any of the “parts” of the CAS. The continuation of evolutionary change depends on the generation of variation as evolution’s hedge against the uncertainty of the future. That future uncertainty clouds all efforts to control the future and spawns a dodgy business opportunity for anyone willing to claim they can predict the future.

We humans tried to work around that reality by isolating and organizing our exploitation of nature to buffer our goals against the relentlessly increasing complexity of unintended consequences. We are losing that long-standing effort for the same reason that all short-term advantage gives way to the “revenge” of long term biological processes.

My point is, as it is elsewhere, that traditional control behavior is becoming less and less effective and more and more expensive every second of every day.
