(P2): A Weak Constraint as a Potential Insurgency

A painting of a medieval revolt. Many people and soldiers fighting one another.

Most descriptions of CAS (Complex Adaptive Systems) treat both internally generated and externally driven encounters as disturbances or perturbations. For purposes of understanding how you can advocate for change in a CAS, I prefer to think of these disturbances as insurgencies.

An adjacent possible is something you can do readily from where you are right now. Some insurgencies keep resurfacing, an indication of an adjacent possible.

There are always more adjacent possibles than you know. They are often weak constraints, and we tend to pick one, stick with it as our preferred target for novel change, and fail to see the other possibilities lurking close by. Our ability to survey the possibilities of the uncertain world around us is encumbered by our automatic focus on the easiest possibility to perceive.

Insurgencies subvert by their mere existence. In fact, a traditional way to turn a weak constraint into an insurgency is to trigger a response from the Target CAS. This is part of the reason why insurgencies are so hard to eliminate. Even failed insurgencies are typically replaced by changes that open a new set of possibilities and seed a new insurgency.

Subversion is always possible. There is no way to build a fortress that is impervious to an insurgency. In fact, I think it is reasonable to describe the ongoing human conflicts in every State over the last 7,000 years as an insurgent struggle for change and freedom against a status quo struggling to preserve and increase control.

So, an insurgency is a kind of constraint, and it can move from a “weak” constraint to a powerful force for change just because the target reacts to its disturbance.


(P2): Safe-to-Fail Experiments as Weak Constraints

Strange yellow and black bicycle with a perfectly square frame and no brakes.

The idea of Safe-to-Fail Experiments was developed by Dave Snowden as part of his Cynefin framework. The technique is a way to learn about a complex adaptive system without triggering unintended consequences that are out of your control (see the link above). But the concept of using probes to learn about complex systems is useful in many other contexts, most notably in social justice advocacy.

Most advocacy is premised on the idea that there are legal constraints on the behavior of target systems, and that these constraints can be used to change the behavior of the system. In other words, advocacy can use procedures repeatedly to create change. Implicitly, we only need to understand the legal constraints under which a system operates and the change procedures (complaints, lawsuits, etc.). We don’t need to understand the politics or history of the system we are trying to change, both of which are, of course, other kinds of constraints.

But we do need to appreciate these aspects of a system before we can hope to change it successfully. This is because even the most apparently logical procedural path of some bureaucratic machine is, as we all know, a little “Peyton Place”: messier and more complex than the bones of the procedure would suggest. Which is to say, all bureaucracies are Complex Adaptive Systems that use much of their available energy to keep disturbances from creating change, that is, from forcing them to modify their existing constraints.

From inside a bureaucracy (or any large organization, including a for-profit corporation), creating change must involve experiments small enough not to trigger the annihilation of the experimenters or the CAS, yet large enough to let you learn something useful about the system's dispositional trajectory, about its system of constraints.

Safe-To-Fail is also a useful tool for changing that most personal of CAS, yourself.

Creative Commons Attribution 4.0 International License