We have learnt, in our cultural setting, to classify behavior into ‘means’ and ‘ends’ and if we go on defining ends as separate from means and apply the social sciences as crudely instrumental means, using the recipes of science to manipulate people, we shall arrive at a totalitarian rather than a democratic system of life.

—Margaret Mead

My purpose in writing these essays is to examine various maps—systems of metaphors for an unknowable territory—in order to suss out their usefulness as aids in navigation, which means decision making. I’m interested in how things work in terms of these systems, so along the way I’m going to be leaning heavily on the works of anthropologist Gregory Bateson, not the first but one of the better-known names in systems theory and cybernetics. My interest in Bateson is not completely unrelated to my interest in chicken wings. Let me digress:

My buddy Rus tells a story of the time he bought a plate of chicken wings for a friend. He’d met the group at some sort of wing-vending establishment, and found this particular friend hungry and without money. I can’t recall if he was broke or just forgot his wallet, but Rus did what a friend does and sprung for a plate of wings for the guy. Wingwaster had grown up a bit privileged, and apparently the habit of the well-to-do when faced with a messy pile of meat, skin, sauce, bone, and connective tissue is to take one or two big bites out of the easy pickin’s and discard the rest.

Rus watched this with mounting disgust. He knew that the proper way to eat a wing is to gnaw every little morsel of anything from the joints until you’re left with a thin pile of dry bones. Maybe Wingwaster just liked clean eating. Maybe he wasn’t that hungry, or he didn’t respect chickens and wing cooks and all the effort they made on his behalf. Maybe he was just ungrateful. Rus being Rus, he summarized his feelings on the matter for Wingwaster, probably using a lot of short words, and that was the end of the friendship. Good riddance.

Bateson, to me, is a bone with a lot of meat still on it. A bone I’ll be returning to again and again. He has the good Socratic sense to understand the limits of his own knowledge, and to ask the less obvious questions about the assumptions that more confident men build their maps on. While Mind and Nature: A Necessary Unity is by far his most popular work, I want to start pickin’ at a less-traveled essay he wrote in defense of an argument put forth by his wife, Margaret Mead: Social Planning and the Concept of Deutero-Learning, published in his book Steps to an Ecology of Mind.

Mead argued that our culture—and the social sciences, in particular—had the unquestioned habit of thinking of policy decisions in terms of means and ends. If the ends are attractive enough, we tend to justify almost any means it might take to get there. This was in contrast to certain cultures she studied in her fieldwork, in which the means-to-an-end mentality was absent, replaced by process, absent any clear goal. Humans are capable of thinking, and even of setting up entire cultures, in either set of terms, so it’s worth examining which map allows for better navigation of the same territory.

I’ll introduce today’s opponent, process, as a system of behaviors and feedbacks not necessarily directed at a pre-selected end goal. Of course, a process can be directed at an end, but for the sake of clarity I’m going to reserve the word “means” for an end-motivated process. The expression “by whatever means” implies an end, and that end is reached through those means.

We navigate by decision-making, and those decisions are made along a stream of events, “life,” or maybe a substream like “career” or “public policy.” The stream is a stream in time, a sequence from birth to death in the main case. If our end goal is death, the decisions are relatively easy, but I doubt that’s what anyone is talking about when they say, “a means to an end.” Rather, that end is usually a pleasant homeostatic state, arrived at after a certain amount of time conducting the means.

This sort of goal orientation tacitly encourages the thinker to arbitrarily break the stream into segments. It’s a pirate map with a big red X, “Here be treasure, cut what throats ye may.” But pirate life doesn’t end when the treasure is dug up, unless another pirate ends it for you. The red X is not a state, it is a moment in a longer stream, and there are decisions to be made during and later. In another metaphor, you go for a hike to Inspiration Point. You arrive at Inspiration Point. Then you hike home. You could just stay there for the rest of your life, I guess, but that’s not a static state, it’s still a series of decisions in a flowing stream of time.

Goal-orientation divides a given stream into three segments. Before the red X, the means. At the red X, the end. The segment after the red X is left entirely unconsidered, presumably because that state is too perfect to leave or change, or because the pirate plans to name another X to pursue once he finds the first one. But the means that got you to the first may or may not tell you how to deal with things once there, how to leave it, or how to choose your next destination. So is it an end, or a waypoint? If the latter, on the way to what?

There’s also a certain amount of hubris in a goal. It implies that you have the wisdom to know what’s best (for yourself, or let’s say the city of Santa Barbara if you’re a city planner), and that you know exactly how to get there. That’s a nontrivial point, especially if you used the glorious ends to justify an unpleasant means. I’m going to switch metaphors again, because they’re just maps anyway, and I use whichever ones I need to get where I’m going.

Let’s say that I’m a city councilman in Easter City, where I have a grand vision to solve the problem of rampant hunger with an initiative I call Easter Eggs for Everyone (EEE). Curing hunger sounds like a really good idea, so who would oppose me? In fact, they’ll probably go along with any means I suggest so long as it seems feasible. Luckily, I know exactly how to do it, and it involves strangling a white rabbit in sacrifice to the Great Easter Bunny every Friday. In public. I’m not saying it’ll be easy, or happen overnight, but we have a lot of white rabbits in Easter City. Even if it takes years, it’s worth it.

And it does take years. Three years later, I’m still crushing rabbit windpipes, and people are still starving. In fact, if anything, the problem has gotten worse. Many folks are starting to question the wisdom of EEE, and I might be one of them. Unfortunately, by justifying a very unpleasant means solely with the promise of a wonderful end, I can only continue to justify my rabbit murders if the end is viable. There is no intrinsic value in strangling bunnies. I’m faced with a sunk cost fallacy. Having committed heinous acts that I can’t undo, it is in my best interest to maintain through whatever mental gymnastics necessary that the end is still a possibility. To admit otherwise is to plead guilty to mass rabbit murder. As long as I have any power, you can bet I’ll be there every Friday with a new rabbit, because my good name and freedom depend on it. When the public has finally had enough and puts me on trial, I’ll no doubt claim that it would have worked if not for that group of gophers undermining my plan at every move.

Instead, Bateson points out, “We have to find the value of a planned act implicit in and simultaneous with the act itself, not separate from it in the sense that the act would derive its value from reference to a future end or goal.” In contrast to an end-driven means, a process is a system of feedbacks. More or less, a response (Action B) triggered by Action A meeting a certain threshold. The two basic kinds of feedbacks are negative and positive. The negative one is “good.” The classic example of a negative feedback is a thermostat. When the temperature in your house gets too high, the AC kicks on (negating the rising heat). When it gets too low, the heater kicks on (negating the plummeting cold). The feedback system maintains homeostasis within parameters to avoid freezing you or roasting you. An example of a positive feedback would be two competitive gentlemen yelling at one another, “I can eat more wings than you!” “No, I can eat more wings than you!” Each statement reinforces the same response in the other until the exchange reaches a state of runaway, i.e. a fist fight.
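As a toy sketch of the two feedback types (the thresholds, temperatures, and “boast volumes” are all invented for illustration), a negative feedback pulls a drifting value back into a band, while a positive feedback only ever amplifies:

```python
# Toy sketch of the two feedback types; all numbers are invented.

def thermostat_step(temp, low=68.0, high=72.0):
    """Negative feedback: drift past a threshold triggers a response
    that negates the drift, maintaining homeostasis within parameters."""
    if temp > high:
        return temp - 1.0  # AC kicks on, negating the rising heat
    if temp < low:
        return temp + 1.0  # heater kicks on, negating the plummeting cold
    return temp            # inside the band: nothing to negate

def boast_step(a, b):
    """Positive feedback: each statement reinforces the same response
    in the other, so both values only ever grow."""
    return a + b, b + a

temp = 80.0
for _ in range(20):
    temp = thermostat_step(temp)
print(temp)  # settles at 72.0, inside the band

a, b = 1.0, 1.0
for _ in range(10):
    a, b = boast_step(a, b)
print(a, b)  # runaway: 1024.0 1024.0, and climbing
```

The thermostat converges no matter where it starts; the boasting pair has no such brake, which is why runaway ends in a fist fight rather than equilibrium.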

In the goal oriented mindset, feedbacks are irrelevant—we just need more time, more data, more funding, twice as many dead rabbits. Not even crossing the threshold after which the goal becomes impossible guarantees a change to the actions of the actors. Only when belief in the possibility of achieving the end disappears is another approach considered. As time passes, the range of actions one might consider to achieve the end is reduced from the original state (of all possible means) until the end is achieved or it runs off the rails.

Being goal-oriented is convergent: you funnel from a broad set of options down to a single possible end. There’s more room to maneuver along the tree of options early on, and less as you go. The tournament begins with thirty-two possible champions, and ends with only one. Except in this metaphor, the champion is predetermined, and the bracket must be arranged and rearranged each round, irrespective of who actually scored more points in the individual games. Changes to the entire ecosystem are made to bring about an ideal, so if they fail, or if they work and the ideal isn’t exactly ideal, all the feedbacks are still set up to reinforce an undesirable condition.

Contrast this with the divergent nature of process-orientation. In Nassim Nicholas Taleb’s Antifragile, the importance of options is underscored. If instead of one hard goal I have three options, not of end-states but actions to take right now, I’m free to explore all of them until the point where I’m forced to choose. What’s more, I can decide based on feedbacks from my environment instead of a protocol written by the council members of Easter City (in a binder with many colored tabs). From there I can collect more options and repeat the process at each stage. Apply this to the biosphere and it’s called “natural selection.” The fittest residents of Condition A survive to Condition B. Then the fittest residents of Condition B survive to Condition C (not necessarily the same ones that were around during A). The crucial point is no one knows what’s going to work under the present conditions, much less what future conditions will be, and what will work under them. So we avoid predicting the future and take what the defense gives us.
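A minimal sketch of that divergent logic, with the option names, shocks, and survival rule all invented for illustration: carry a few live options forward, let feedback from each unpredictable condition cull the weakest, and collect a fresh option to replace it.

```python
import random

random.seed(0)  # deterministic for the sketch

# Three actions available right now, none of them an end-state.
options = {"a": 0.0, "b": 0.0, "c": 0.0}

for condition in range(5):                   # conditions arrive in sequence
    shock = random.uniform(-1.0, 1.0)        # no one knows this in advance
    for name in options:
        # each option responds differently to the same condition
        options[name] += shock * random.uniform(-1.0, 1.0)
    weakest = min(options, key=options.get)  # feedback picks the loser
    del options[weakest]                     # drop the weaker branch...
    options[f"new{condition}"] = 0.0         # ...and collect a new option

print(sorted(options))  # still three live options, not necessarily the originals
```

Nothing here predicts which option wins; the set of survivors under Condition B need not match the set that was around during Condition A.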

Bateson uses the metaphor of the science of ecology for how he explores the workings of the mind and other systems. It’s a useful one I’ll return to constantly, because most people can agree that 1) there is something we call Nature, 2) it’s a complex system of feedbacks, 3) those feedbacks follow the laws of the universe, and 4) they hold consistently across orders of magnitude, from petri dish to planet Earth. As members of that biosphere, it stands to reason that this might be a good source of inspiration and precedents when building the feedback systems we use to make decisions—when building an ecology of mind.

So what would a process-driven ecosystem for decision-making look like? I admit I have no idea, and that’s why I started this blog—to organize my thoughts and hopefully get a little closer. Taleb’s work on risk strikes me as a good place to start. The list below is an initial brainstorm, by no means exhaustive.

1) Identify acts with intrinsic value.

The value of any action should be apparent without considering a goal. “Acts should have intrinsic value” does not mean “acts should be pleasant and easy at all times.” Most people can see the value in a hard day’s work and a job well done. Or in a sacrifice (of the personal variety, not of white rabbits). Taleb makes a sound argument for even small, non-critical failures that give information about how best to proceed, what he calls “tinkering.”

At the very least, an action should do no harm. Now that’s tricky, because harm can occur to one order without harm to an adjacent order. For this discussion, I’m offering the following orders of magnitude: individual, family, community, nation, species, biosphere. A single act might help the individual but harm the family, help the community but harm the nation, help the species but harm the biosphere. Which one am I not harming? It’s not an easy question even if we know the particulars of the act. In general, I would say harm as few orders as possible, with greater weight given to the higher ones.
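As a hypothetical sketch of that weighting rule (the orders come from the list above; the doubling weights are my own invention), give each higher order a heavier penalty and prefer the act with the lower total:

```python
# Hypothetical scoring of the rule above: harm as few orders as possible,
# with greater weight given to the higher ones. Weights are invented.
ORDERS = ["individual", "family", "community", "nation", "species", "biosphere"]
WEIGHTS = {order: 2 ** i for i, order in enumerate(ORDERS)}

def harm_score(harmed_orders):
    """Lower is better: total weighted harm across the orders an act touches."""
    return sum(WEIGHTS[order] for order in harmed_orders)

# An act that harms the individual and the family...
act_a = harm_score(["individual", "family"])  # 1 + 2 = 3
# ...still scores better than one that harms only the biosphere.
act_b = harm_score(["biosphere"])             # 32
print(act_a, act_b)  # prints: 3 32
```

The particular weights don’t matter much; what matters is that any harm to the biosphere outweighs any combination of harms below it, which is one way to encode “greater weight given to the higher orders.”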

2) Eliminate acts that obviously cause harm.

Taleb calls this “via negativa”: addition by subtraction. Since we are very bad at predicting the future and knowing the complex repercussions of our actions, the best bet is usually to do less rather than more. An excellent choice is hard to spot in advance, but a terrible one is much easier. Stop doing obviously stupid things. In other words, if you have cirrhosis of the liver, don’t ask what pill you should start taking if you haven’t stopped binge drinking.

3) Eliminate naive intervention.

With or without good intentions. For the reasons mentioned in #2, it’s wise to avoid sticking your nose into complex systems, especially those that are very mature and likely to already have their own subtle feedbacks in place (or else how would they have survived so long?). Taleb uses the term “iatrogenics,” or healer-generated harm. The doctor convinces you he can fix your aching back with a surgery to decompress the lumbar discs, and accidentally nicks your spinal cord with the scalpel (at least your lower back doesn’t hurt!).

To use another of his terms, we should avoid intervening in ways that are fragile. I’ll touch more on the fragile-robust-antifragile ternary in #5, but for now, something fragile is something that likes things nice and quiet. It breaks when exposed to shocks and disorder. There is really only one way it can work as intended, and a lot of ways for it to fail: anything at all going wrong will do. For example, an investment that produces a 4% annual return every year that the economy is booming, but goes bust if there is a significant plunge in the market. It has small maximum potential for gain under normal circumstances, and potential for complete annihilation under rare but inevitable chaos.

The difference between numbers 2 and 3 is that harm is obvious in #2. In #3, there may not be an obvious mechanism for harm, but the intervention is subject to a lot of moving parts, a lot of unpredictability, and at least a rare risk of cataclysm. Interventions work best when the maximum possible harm, even under rare circumstances, is trivial. Or when the maximum possible harm is less than the assured harm from no intervention.

4) Mind the feedbacks.

In Taleb’s system, risk provides the best negative feedback. When an idea, or the person who uses it, is put at risk, survival or lack thereof will indicate its usefulness. It protects the higher orders at the expense of the individual. There is no way to avoid producing risk. When the actor who generates it insulates himself from it, he passes it on to others of the same order, then higher and higher orders until a runaway state ensues and the system is destroyed. In the example of the men fighting over who can eat more wings, imagine that instead of just duking it out, or eating wings (runaway achieved at the two-person level), they gather more and more supporters until the runaway state is achieved at a higher order via World War III. Risk accumulates until someone or something of equal value suffers.

You could also look at this as a negative feedback occurring on a much higher order, as opposed to a positive feedback causing runaway within a single order.

5) Set up, or leave alone, systems that are antifragile.

A system that is antifragile gains from chaos, shock, disorder.* A muscle that is tested and rested is stronger at the next test, while it atrophies if left unused. In contrast to fragility, which functions as expected most of the time and shatters upon impact, an antifragile system craves certain kinds of risk. It’s the action with a small maximal downside, and a tremendous (if unlikely) maximum upside. This kind of system doesn’t attempt to guess what the end will be, or how things may play out along the way. It’s set up to avoid ruin, and to benefit from the unpredictable by surviving to play again. Each action is weighed on its own merit, and flexibility between numerous options replaces the central plan.

The key here is not that it avoids shocks, or failures. It courts them, because it keeps the failures small and treats them as information. With each bit of information, we can make adjustments. The unpredictable event to a fragile system is a cataclysm, but to an antifragile system, it’s opportunity. A divergent process instead of a convergent means to an end.

*To complete the ternary, something that is robust is resistant to shock, but does not benefit from it. This is the middle ground between fragility and antifragility.
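The asymmetry between the fragile investment from #3 and an antifragile alternative can be sketched with invented numbers: the fragile position earns a small steady return until a rare shock annihilates it, while the antifragile one pays a small known cost most years and gains from the same shock.

```python
# Invented shock series: mostly quiet years, one rare tail event (3.1).
# Payoffs per year are pure illustration, not financial modeling.
shocks = [0.2, -0.5, 0.1, 0.8, -0.3, 3.1, 0.0, -0.9, 0.4, 0.6]

def fragile_payoff(shock):
    """Small steady gain, annihilation under rare but inevitable chaos."""
    return -100.0 if shock > 2.5 else 4.0

def antifragile_payoff(shock):
    """Small bounded cost most years, tremendous upside from the same shock."""
    return 20.0 if shock > 2.5 else -1.0

fragile_total = sum(fragile_payoff(s) for s in shocks)
antifragile_total = sum(antifragile_payoff(s) for s in shocks)
print(fragile_total, antifragile_total)  # prints: -64.0 11.0
```

Until the tail event arrives, the fragile position looks like the smarter bet every single year, which is exactly why fragility is so easy to buy and so expensive to hold.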

I’m not expecting people in the Western world to be able to exorcise goal-oriented thinking completely. Just about every narrative in our culture teaches us to focus on the end state, the happily ever after. It would take a Herculean effort to completely melon-ball that part of the brain. What we can do, though, is start from an end. Then broaden the scope of that end to include all things in that class of states. For example, a city’s goal of “attract more business to the downtown area” could broaden to “improve the local economy.” One prong of that approach may be trying to attract business to downtown, but it’s entirely possible that that would end up hurting the economy in an unforeseen way, or that the particular methods employed to that effect would harm the town in other ways. Besides trying a number of ways to increase the flow of money and people through an area, it could also try to decrease waste of already-present resources, and reduce the drain of money, jobs, housing, etc. to other areas.

In other words, take a goal to a class of goals, then ask, “in what context do these kinds of states materialize?” Every action taken to set up that context would have intrinsic value without needing to be justified with the goal of an improved local economy. Many options, none of them ruinous if unsuccessful, could be explored. Like tree branches diverging from a single trunk, the stronger ones could be followed and the weaker dropped. Hopefully, this constellation of valuable action would eventually arrive somewhere that looked like “an improved local economy,” but not treat that as an end-state to settle into. In my first essay I said that a map is a living document and a metaphor in time—and therefore needs to change periodically to remain accurate. That means the process of improving the local economy would necessarily continue to adapt to conditions as they change.

That, I think, is what Margaret Mead meant in her critique of the social sciences’ habit of implementing centralized plans for the betterment of poor ignoramuses, and an important consideration in building an ecology of mind at any level.

A process-oriented approach means a system of actions, each of intrinsic value, feeding back and adapting in an organic, bottom-up fashion (as opposed to mechanical, imposed from the top-down). It does not deal in end states, but continues to adapt over time. If a goal must be specified, it’s best left vague and open to change if better opportunities arise, or if it incites harmful processes. It admits the limits of our ability to predict the results of complex interactions. Survival, and the collecting and exercising of options, replace the detailed plan. If all actions within the process have intrinsic value, a snapshot of the state of the system at any time should look more or less “valuable,” trending upward.

Does that mean goals are useless? Quite the opposite. There are times when the best—and maybe the only—way to achieve something is through a tightly focused goal that lets the processes unfold unconsciously. If that sounds like it contradicts everything I just said, it does. It’s a different map, for a different territory. One that I’ll discuss next.

June 2025
