This is a brief post to remind myself of ideas from “Agile Sense-Making in the Battlespace”. So what relevance do weary soldiers have for those of us back home?
Getting started: the jargon
We spot some technical jargon immediately. Computer people like “agile”. What they mean is doing just enough to get some feedback before committing any further.
Imagine it this way: when we begin a journey from London to Edinburgh, we ask the SatNav for a route and then tend to assume the journey will be pain-free. Often it is not, and the real outcome involves searching for another route in a mild panic when the inevitable happens and we are diverted.
The alternative is a SatNav that works like this.
- It finds the best route and the first junction, about 15-20 minutes away
- As we reach that junction, it quietly rechecks the route, taking into account any weather and traffic information that has arrived since we set off
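The two bullets above can be sketched as a loop. Everything here is invented for illustration — a toy road network with travel times in minutes, and a made-up traffic feed — not anything from the article:

```python
import heapq

# A sketch of the "agile SatNav": plan the whole route, but commit only to
# the next leg, then re-plan with whatever traffic news has arrived since.

def plan_route(graph, start, goal, delays):
    """Cheapest path by travel time: Dijkstra over a dict-of-dicts graph."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            extra = delays.get((node, nxt), 0)  # known hold-ups on this leg
            heapq.heappush(queue, (cost + minutes + extra, nxt, path + [nxt]))
    return []

def drive(graph, start, goal, traffic_news):
    """At each junction, quietly recheck the route before the next leg."""
    here, delays, travelled = start, {}, [start]
    while here != goal:
        delays.update(traffic_news(here))  # information arriving en route
        path = plan_route(graph, here, goal, delays)
        here = path[1]                     # commit only to the next junction
        travelled.append(here)
    return travelled
```

The point of the sketch is the shape of the loop, not the routing algorithm: the plan is recomputed at every junction, so the overall route may change while the next task stays concrete.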
Agile is simply getting on with the first task while accepting that the overall route to the destination may change.
Sense-making is best understood through Karl Weick’s SIR COPE acronym: Social, Identity, Retrospect, Cues, Ongoing, Plausibility, Enactment.
Sense is not about truth. Sense is about piecing together whatever information we have so that there are no discrepancies and so that we are willing to ‘stay in the game’.
Sense-making is an ongoing process; it is a confusing process; and it is ultimately a social process because a key factor in our decision to stay or go is our judgement of the people around us and their loyalty and commitment. In military terms, it is ‘morale’ – do I even want to belong to ‘this man’s army’?
What can we learn from weary soldiers managing the battlespace?
So, jargon aside, what does William Mitchell add that is new in his description of thinking clearly in the battlespace? I will be using my own words now, because these are my notes. I hope you find them useful, but if you do, go back to the original article.
#1 Think imaginatively
Technically, we call this kind of imagination “thinking about systems of systems” or, in Mitchell’s words, “network philosophy”.
In practice, we think like this: I want to attract more customers to my business. They either don’t know I exist, or barely pay me any attention, and when they notice me, don’t trust me. I want to win their trust.
Of course, I can woo them directly and sometimes I will. But they already have relations among themselves. So when I woo the fellows who, say, wear hats, the fellows who don’t wear hats don’t want to take part. That second-level effect is systems thinking.
When we are busy, or in goal mode, our systems thinking tends to get turned off. Let’s go back to driving from London to Edinburgh. When I set my SatNav and I head out onto the motorway, I know the trip will be boring, so I don’t want to know about all the wonderful places I could visit just 5 miles off the motorway, or I will not stick to my task.
But I also don’t know about the inter-schools football championship that is about to disgorge a flood of cars into the junction ahead of me. That’s what management intelligence is for: to build a system that scans for the opportunities and threats that we aren’t scanning for, and should not be scanning for, because we are in executive mode and concentrating on something else.
But the key takeaway is not that we have lookouts. The key takeaway is that we have lookouts who understand second-order effects – what causes what. And for there to be any point in having intelligent lookouts, we need managers who understand the messages from those lookouts. That’s why managers must be fluent in systems-of-systems thinking. They must be able to follow the briefings and ask the right questions.
#2 Write things down
Technically, we call this stage “iterative modelling”. We write down what we think, to build a bridge from our brainstorming to our action.
In practice, we log our interactions with potential customers and we see how well we are doing. We calculate our open rates, click-through rates and sales. We use numbers to focus our attention on what must be done and to learn how to do what we do even better.
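As a sketch, that logging-and-measuring loop might look like this. The field names and the sample log are invented for illustration; a real log would come from email or shop-floor systems:

```python
# Compute simple funnel metrics from a toy interaction log: what fraction
# opened, what fraction of those clicked, what fraction of those bought.

def funnel_rates(log):
    sent = len(log)
    opened = sum(1 for e in log if e["opened"])
    clicked = sum(1 for e in log if e["clicked"])
    sold = sum(1 for e in log if e["sold"])
    return {
        "open_rate": opened / sent if sent else 0.0,
        "click_through_rate": clicked / opened if opened else 0.0,
        "conversion": sold / clicked if clicked else 0.0,
    }

log = [
    {"opened": True,  "clicked": True,  "sold": True},
    {"opened": True,  "clicked": True,  "sold": False},
    {"opened": True,  "clicked": False, "sold": False},
    {"opened": False, "clicked": False, "sold": False},
]
rates = funnel_rates(log)  # open rate 3/4, click-through 2/3, conversion 1/2
```

Even a toy table like this does the job the text describes: it turns a pile of interactions into a few numbers that direct attention.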
Very simply, when we drive from London to Edinburgh, part of the system is written down for us. The SatNav does the map calculations for us, using a straightforward A* algorithm and detailed map data, and then presents the route on a map annotated with voice commands.
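For the curious, here is a minimal version of that A* search. The map is a toy one I have invented — a handful of towns with (x, y) coordinates and travel times — with straight-line distance as the heuristic; a real SatNav’s map data and cost model are far richer:

```python
import heapq
from math import hypot

# Minimal A*: explore routes in order of (time so far + straight-line
# estimate to the goal), so promising directions are tried first.

def a_star(nodes, edges, start, goal):
    def h(n):  # straight-line distance from n to the goal
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), 0.0, start, [start])]
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for nxt, cost in edges.get(node, {}).items():
            heapq.heappush(open_set, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

nodes = {"London": (0, 0), "Birmingham": (0, 3), "Manchester": (0, 5),
         "Edinburgh": (0, 10), "Norwich": (3, 2)}
edges = {"London": {"Birmingham": 3, "Norwich": 4},
         "Birmingham": {"Manchester": 2},
         "Manchester": {"Edinburgh": 5},
         "Norwich": {"Edinburgh": 9}}
route = a_star(nodes, edges, "London", "Edinburgh")
# the motorway spine via Birmingham beats the detour via Norwich
```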
We do the rest. We look at our clock. We note the time to destination on the SatNav. And we note what time we ‘must’ arrive and make our decisions accordingly. We can see immediately that SatNavs are going to become much, much better at learning.
There are several skills involved in modelling dynamic information. We have to know what to model. We have to capture data. We have to write programs of very many sorts. We have to lay out information. And we have to learn, a lot, about how to make the whole system better.
And in that morass of work, we might forget what all this is about: to bridge the dynamism of systems-of-systems thinking with action that has to be taken, in some instances, in a split second. This is what we are doing it for!
#3 Look at alternatives
Technically, the third stage is called “hypothesis generation and testing” or “scenario planning”. Oh my, how we hate to do this when we are in the thick of action! To be goal-oriented means to be confident of what we are doing. And we resist any undermining of our confidence including thinking about what else might be a good idea!
But snap decisions are dangerous and unwise. A good MI system delivers the right information to make a choice at the right time. We slow down thinking to speed up work – or, rather, to avoid false starts and over-commitment to unwise courses of action.
Let’s imagine, for example, that we are very attracted to selling big ticket items to wealthy customers. And that we are reasonably successful. But that our smaller items fly off the shelves in our ‘outlet’ shop around the corner. Now imagine we have a choice: spend the next hour serving the high value customer, or spend the next hour helping move the queue around the corner. It’s helpful to have a display that shows our two choices and their consequences so we can make the choice in terms of what we will achieve and not simply our personal preference.
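A minimal sketch of such a display: score each choice by its expected takings for the hour, so the decision is made on what we will achieve rather than on preference. All the probabilities and prices here are invented for illustration:

```python
# Compare two uses of the next hour by expected value. The numbers are
# made up: a high-value customer is a rarer sale but worth far more each.

def expected_takings(option):
    return option["p_sale"] * option["value_per_sale"] * option["sales_possible"]

choices = {
    "serve the high-value customer": {"p_sale": 0.3, "value_per_sale": 2000, "sales_possible": 1},
    "work the outlet queue":         {"p_sale": 0.9, "value_per_sale": 40,   "sales_possible": 25},
}
scores = {name: expected_takings(opt) for name, opt in choices.items()}
best = max(scores, key=scores.get)  # here the queue wins: 900 vs 600
```

With these particular numbers the unglamorous queue is worth more than the big-ticket customer, which is exactly the kind of result a preference-driven choice would miss.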
Equally, when we are driving from London to Edinburgh and we are diverted, in the time we have to reroute it would be helpful to have a display that shows the best 5 choices rather than requiring us to step through them painfully – a task that cannot even begin until we find somewhere to pull over.
Every MI system has assumptions built into it. And though we use these systems in a very trusting way on a day-to-day basis, we should know what those assumptions are and what information we are not seeing. Yes, the data must come packaged ready for action. But we must have people in the background looking at alternatives and producing displays for those too. Caveat emptor: if we rely on computer systems that we don’t understand and don’t insist on making better and better, then we only have ourselves to blame.
The three steps of Agile Sense-Making
So this is agile sense-making –
- Think imaginatively (imagine the side-effects)
- Write things down to bridge imagination to action (a computer program counts as writing things down)
- Have alternative programs that bring together analyses in different ways (our methods must learn)
This is the new world of management consulting folks – data driven. Now let’s find the clients to match!
Like your take very much. After that piece was done I had my second tour in Helmand, AFG. The challenge of moving the focus from strict Cold War doctrine to more network thinking was hard. The iterative modelling works well in networked insurgent environments but is a hard sell to a lot of officers who see only their designated geographic area of responsibility, as drilled into them. I wish we could simplify and doctrinise your three steps, but common sense and 200 yrs of military tradition would have a hard time with it. It was really good to stumble on your comments… Best regards Will