Blog Archives

Model Thinking

Model Framework

The fundamental tenet of theory is the concept of “empiria”. Empiria refers to our observations. Based on observations, scientists and researchers posit a theory – this is part of scientific realism.

A scientific model is a causal explanation of how variables interact to produce a phenomenon, usually linearly organized. A model is a simplified map consisting of a few primary variables that are gauged to have the most explanatory power for the phenomenon being observed. We discussed Complex Physical Systems and Complex Adaptive Systems earlier in this chapter. It is relatively easier to map a CPS to a model than a CAS, largely because models become very unwieldy as they internalize more variables, and more so when those variables interact heavily with one another. A simple analogy is the multiple regression model: when a number of independent variables interact strongly with each other, multicollinearity arises, and the model becomes unstable and loses predictive value.
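To make the regression analogy concrete, here is a minimal sketch (with synthetic data and a hand-rolled variance inflation factor) of how two strongly interacting predictors produce the instability described above. The data and threshold mentioned in the comments are illustrative assumptions.

```python
# A minimal sketch: two strongly correlated predictors inflate the
# variance inflation factor (VIF), a standard multicollinearity check.
# The data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # x2 interacts strongly with x1
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

def vif(target, others):
    """VIF = 1 / (1 - R^2) from regressing one predictor on the others."""
    X = np.column_stack([np.ones(len(target)), others])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

print("VIF for x1:", vif(x1, x2))   # far above the common rule-of-thumb threshold of ~5-10
```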


Research projects generally tend to look either at a single case study or at a number of similar cases that are logically grouped together. Constructing a simple model that is general enough to apply to many instances is difficult, if not impossible. Variables are subject to a researcher’s incomplete understanding of them or to their inherent volatility. What further accentuates the problem is that the researcher may miss how the variables play against one another and the resultant impact on the system. Thus, our understanding of a system can proceed through some sort of model mechanics, yet we share the common belief that building a model that provides all of the explanatory answers is difficult, if not impossible. Despite understanding the limitations of modeling, we still develop frameworks and artifact models because we sense in them an indispensable set of tools to transmit the results of research to practical use cases. We boldly generalize our findings from empiria into general models that we hope will explain empiria best. And let us be mindful that it is possible – more so in CAS than in CPS – that multiple models will fight over their explanatory powers simply because of the vagaries of uncertainty and stochastic variation.

Popper says: “Science does not rest upon rock-bottom. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and when we cease our attempts to drive our piles into a deeper layer, it is not because we have reached firm ground. We simply stop when we are satisfied that they are firm enough to carry the structure, at least for the time being”. This leads to the satisficing solution: if a model can choose the fewest variables to explain the greatest amount of variation, that model is relatively better than other models that select more variables to explain the same. In addition, there is always a cost-benefit analysis to take into consideration: if we add x variables to explain variation in the outcome but the result is not meaningfully different from a model with fewer than x variables, then one would want to fall back on the smaller model because it is less costly to maintain.
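The satisficing point can be illustrated with a small sketch: compare a leaner model against a larger one using an information criterion that penalizes extra variables, and keep the larger model only if it is meaningfully better. The synthetic data and the use of AIC here are illustrative assumptions, not prescriptions.

```python
# A minimal sketch of the parsimony trade-off: compare two linear models
# with an information criterion that penalizes extra variables (AIC).
# Data and model choice are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.5 * x1 + rng.normal(size=n)          # only x1 truly matters

def aic(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]
    return n * np.log(rss / n) + 2 * k     # lower is better

small = np.column_stack([np.ones(n), x1])
large = np.column_stack([np.ones(n), x1, x2, x3])
print("AIC small model:", aic(y, small))
print("AIC large model:", aic(y, large))   # typically not meaningfully better
```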


Researchers must address three key elements in the model: time, variation and uncertainty. How do we craft a model that reflects the impact of time on the variables and the outcome? How do we present variation in the model? Different variables might vary differently, independent of one another. How do we present the deviation of the data in a parlance that allows us to draw meaningful conclusions about the impact of the variations on the outcome? Finally, is the data being considered actual or proxy data? Are the observations approximate? How do we then draw the model to incorporate the fuzziness: would confidence intervals on the findings be good enough?
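On the question of fuzziness, one minimal way to express the uncertainty around a finding built on approximate or proxy observations is a bootstrap confidence interval; the data below is synthetic and purely illustrative.

```python
# A minimal sketch: when observations are approximate or proxy data,
# a bootstrap confidence interval is one way to express the fuzziness
# of a finding. The data here is synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(2)
proxy_obs = rng.normal(loc=10.0, scale=3.0, size=50)   # noisy proxy measurements

boot_means = [rng.choice(proxy_obs, size=len(proxy_obs), replace=True).mean()
              for _ in range(5000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"point estimate: {proxy_obs.mean():.2f}, 95% CI: ({low:.2f}, {high:.2f})")
```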

Two other equally important concepts in model design are descriptive modeling and normative modeling.

Descriptive models aim to explain the phenomenon. They are bounded by that goal and that goal only.

There are certain types of explanations that descriptive models fall back on. One is to look at data from the past and attempt to draw a cause-and-effect relationship. If the researcher is able to draw a complete cause-and-effect relationship that meets the test of time and of independent attempts to replicate the results, then the causality turns into a law for the limited use case or phenomenon being explained. Another method is to draw upon context: explaining a phenomenon by looking at the function that the activity fulfills in its context. For example, a dog barks at a stranger to secure its territory and protect the home. The third and more interesting type is generally called intentional explanation: the variables work together to serve a specific purpose, and the researcher determines that purpose and thus reverse engineers an understanding of the phenomenon by understanding the purpose and how the variables conform to achieve it.

This last element also leads us to the other method of modeling – namely, normative modeling. Normative modeling differs from descriptive modeling because the target is not simply to gather facts to explain a phenomenon, but rather to figure out how to improve or change the phenomenon toward a desirable state. The challenge, as you might have already perceived, is that the subjective shadow looms high and long, and the ultimate finding in what would be a normative model could essentially be a teleological representation or self-fulfilling prophecy of the researcher in action. While this is relatively more welcome in a descriptive world, since subjectivism is diffused among a larger group that yields one solution, it is not the best in a normative world, since variation of opinions reflecting biases can pose a problem.

How do we create a representative model of a phenomenon? First, we weigh whether the phenomenon is to be understood as a mere explanation, or whether we extend it to incorporate our normative spin on the phenomenon itself. It is often the case that we have to craft different models and then weigh them against one another to find the one that best represents the phenomenon. Some of the methods are fairly simple, such as bringing diverse opinions to a table and then agreeing upon one specific model. The advantage of such an approach is that it provides a degree of objectivism in the model – at least in so far as it removes the divergent subjectivity that weaves into the various models. Another alternative is value analysis, a mathematical method in which the selection of the model is carried out in stages: you define the criteria of the selection and then the importance of the goal (if it is a normative model). Once all of the participants are in general agreement, you have the makings of a model. The final method is to incorporate all of the outliers and data points in the phenomenon that the model seeks to explain, and then offer a shared belief about the salient features of the model that would best apply to gain information about the phenomenon in a predictable manner.
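The value analysis described above can be sketched as a simple weighted-scoring exercise; the criteria, weights, and candidate models below are hypothetical placeholders.

```python
# A minimal sketch of value analysis: score candidate models against
# weighted selection criteria and pick the highest total.
# Criteria, weights, and scores are hypothetical placeholders.
criteria_weights = {"explanatory_power": 0.5, "simplicity": 0.3, "data_cost": 0.2}

candidate_models = {
    "model_A": {"explanatory_power": 8, "simplicity": 6, "data_cost": 7},
    "model_B": {"explanatory_power": 9, "simplicity": 3, "data_cost": 4},
}

def weighted_score(scores):
    # total = sum of (criterion weight * criterion score)
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

best = max(candidate_models, key=lambda m: weighted_score(candidate_models[m]))
for name, scores in candidate_models.items():
    print(name, round(weighted_score(scores), 2))
print("selected:", best)
```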


There are various languages that are used for modeling:

Written Language refers to the natural-language description of the model. If the price of butter goes up, the quantity demanded of butter will go down. Written-language models can be used effectively to inform all of the other types of models that follow below. This often goes by the name of “qualitative” research, although we find that a bit limiting. Even a simple statement like “This model approximately reflects the behavior of people living in a dense environment …” could qualify as a written-language model that seeks to shed light on the object being studied.

Icon Models refer to pictorial representation and are probably the earliest form of model making. An icon model seeks to capture only those contours, shapes or colors that are most interesting and relevant to the object being studied. The idea is to pictorially abstract the main elements to provide a working understanding of the object being studied.

Topological Models refer to how the variables are placed with respect to one another and thus help in creating a classification or taxonomy for the model. One can have logical trees, class trees, Venn diagrams, and other imaginative pictorial representations of fields to further shed light on the object being studied. Pictorial representations must, however, abide by constant scale, direction and placement. In other words, if the variables are placed on a different scale on different maps, it is hard to draw logical conclusions by sight alone. Likewise, if the placements are on different axes in different maps, or have different vectors, it is hard to make comparisons and arrive at a shared consensus and a logical end result.

Arithmetic Models are what we generally fall back on most. The data is measured on an arithmetic scale and presented via tables, equations or flow diagrams. The nice thing about arithmetic models is that they can show multiple dimensions, which is not possible with the other modeling languages. Hence the robustness and general applicability of such models are huge, and arithmetic is widely used as a key language for modeling.
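The written-language butter example above can be restated as a tiny arithmetic model; the linear demand equation and its parameters below are hypothetical.

```python
# A minimal sketch of an arithmetic model: the butter example restated
# as a hypothetical linear demand equation, q = a - b * p.
a, b = 100.0, 4.0          # illustrative intercept and price sensitivity

def quantity_demanded(price):
    """Quantity demanded falls as price rises (hypothetical parameters)."""
    return max(a - b * price, 0.0)

for price in (5.0, 10.0, 15.0):
    print(f"price {price:5.2f} -> quantity demanded {quantity_demanded(price):6.1f}")
```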

Analogous Models refer to crafting explanations using the power of analogy. For example, when we talk about waves, we could be talking of light waves, radio waves, historical waves, etc. These metaphoric representations can be used to explain phenomena, but at best the explanatory power is nebulous, and it would be difficult to explain the variations and uncertainties between two analogous models. However, it is still used to transmit information quickly through verbal expressions like “Similarly”, “Equivalently”, “Looks like …”, etc. In fact, extrapolation is a widely used method in modeling, and we would place it within the analogous model to a great extent. That is because we time-box the variables in the analogous model to one instance and the extrapolated model to another instance, and we tie them up with mathematical equations.

 

The Law of Unintended Consequences

The Law of Unintended Consequences states that the actions of a central body that might claim omniscient, omnipotent and omnivalent intelligence might, in fact, lead to consequences that are unanticipated or unintended.

The concept of the Invisible Hand, as introduced by Adam Smith, argues that it is the self-interest of all the market agents that ultimately creates a system that maximizes the good for the greatest number of people.

Robert Merton, a sociologist, studied the law of unintended consequence. In an influential article titled “The Unanticipated Consequences of Purposive Social Action,” Merton identified five sources of unanticipated consequences.

Ignorance makes it difficult, if not impossible, to anticipate the behavior of every element of the system, which leads to incomplete analysis.

Errors occur when someone uses historical data and applies the context of history to the future. Linear thinking is a great example of an error that we are wrestling with right now – we understand, looking back, that there are systems that emerge exponentially, but it is hard to decipher the outcome unless one takes a leap of faith.

Biases work their way into the study as well. We study a system under the weight of our biases, intentional or unintentional. It is hard to strip that away, even if there are different bodies of thought that regard a particular system and how a certain action upon the system would impact it.

Woven in with the element of bias is the element of basic values that may require or prohibit certain actions even if the long-term impact is unfavorable. A good example would be the toll gates established by the FDA before drugs can be commercialized. In its aim to provide safe drugs, the policy might be such that the latency of releasing drugs for experimental and commercial purposes is so long that many patients who might otherwise benefit from a drug lose out.

Finally, he discusses the self-fulfilling prophecy which suggests that tinkering with the elements of a system to avert a catastrophic negative event might in actuality result in the event.

It is important, however, to acknowledge that unintended consequences do not necessarily lead to a negative outcome. In fact, there could be unanticipated benefits. A good example is Viagra, which started off as a pill to lower blood pressure, but whose potency in treating erectile dysfunction was then discovered. Similarly, the discovery that sunken ships became the habitat for very rich coral reefs in shallow waters led scientists to new findings about the emergence of flora and fauna in these habitats.


If initiatives are exercised that are considered “positive initiatives” to influence the system in a manner that contributes to the greatest good, it is often the case that these positive initiatives prove to be catastrophic in the long term. Merton attributes the cause of this unanticipated consequence to the “relevance paradox”, where decision makers think they know their areas of ignorance regarding an issue and obtain the necessary information to fill that ignorance gap, but intentionally or unintentionally neglect or disregard other areas because their relevance to the final outcome is not clear or not lined up with values. He goes on to argue, in a nutshell, that unintended consequences relate to our hubris – we are hardwired to put our short-term interest over our long-term interest, and thus we tinker with the system to surface an effect which later blows back in unexpected forms. Albert Camus said that “The evil in the world almost always comes of ignorance, and good intentions may do as much harm as malevolence if they lack understanding.”

An interesting emergent property related to the law of unintended consequences is the concept of moral hazard. It is the idea that individuals have incentives to alter their behavior when the risk of their bad decision making is borne by, or diffused among, others. For example:

If you have an insurance policy, you will take more risks than otherwise. The cost of those risks will impact the total economics of the insurance and might lead to costs being shifted from the high-risk takers to the low-risk takers.


How do the conditions of moral hazard arise in the first place? Two important conditions must hold. First, one party has more information than another. This information asymmetry creates gaps in information, and that creates the condition for moral hazard. For example, in 2006 sub-prime mortgage originators extended loans to individuals who had dubious income and means to pay. The banks buying these mortgages were not aware of it, and thus ended up holding a lot of toxic loans due to the information asymmetry. Second is the existence of an understanding that might affect the behavior of the two agents: if a child knows they are going to be bailed out by their parents, they might take risks they would otherwise not have taken.

To counter the possibility of unintended consequences, it is important to raise our thinking to second-order thinking. Most of our thinking is simplistic, based on opinions and not well grounded in facts. A lot of biases enter first-order thinking – in fact, all of the elements that Merton touches on enter it: ignorance, biases, errors, personal value systems and teleological thinking. Hence, it is important to get into second-order thinking, in which the reasoning process is surfaced by looking at the interactions of elements, temporal impacts and other system dynamics. We mentioned earlier that it is still difficult to fully wrestle with all the elements of emergent systems through even the best second-order thinking, simply because the dynamics of a complex adaptive or complex physical system would deny us that crown of competence. Even so, this suggests that we step away from simple, easy and defendable heuristics when we measure and gauge complex systems.

Emergent Systems: Introduction

The whole is greater than the sum of its parts. “Emergent properties” refer to those properties that emerge and that might be entirely unexpected. As discussed under CAS, they arise from the collaborative functioning of a system. In other words, emergent properties are properties of a group of items, and it would be erroneous to reduce such systems to the properties of their atomic elements and use those properties as binding elements to understand emergence. Some common examples of emergent properties include cities, beehives, ant colonies and market systems. Our thinking attributes causal effects – namely, that the behavior of elements causes certain behaviors in other hierarchies and thus an entity emerges in a certain state. However, we observe that the process of emergence is the observation of an effect without an apparent cause. Yet it is important to step back, regard the relationships and draw lines of attribution, such that one can concur that there is an impact of elements at the lowest level that surfaces, in some manner, at the highest level, which is the subject of our observation.


Jochen Fromm, in his paper “Types and Forms of Emergence”, has laid this out best. He says that emergent properties are “amazing and paradox: fundamental but familiar.” In other words, emergent properties are changeless and changing, constant and fluctuating, persistent and shifting, inevitable and unpredictable. The most important note that he makes is that the emergent property is part of the system and, at the same time, might not always be a part of the system. There is an undercurrent of novelty or punctuated gaps that might arise and be inexplicable, and it is this fact that renders true emergence virtually irreducible. Thus, failure is embodied in all emergent systems – failure being that the system does not behave according to expectation. Even with all rules followed and quality thresholds established at every toll gate, there is still a possibility of failure at the highest level, which suggests that there is some missing information in the links. It is also possible that the missing information is dynamic – you do not step in the same water twice – which makes predicting emergent systems a rather difficult exercise. Depending on the lens through which we look, the system might appear or disappear.


There are two types of emergence: descriptive and explanatory emergence. Descriptive emergence means that the properties of wholes cannot necessarily be defined through the properties of the parts. Explanatory emergence means that the laws of complex systems cannot be deduced from the laws of interaction of the simpler elements that constitute them. Emergence is thus a result of the amount of variety embodied in the system, the amount of external influence that weighs upon and shapes the overall property and direction of the system, the type of resources that the system consumes, the type of constraints that the system operates under, and the number of levels of sub-systems that work together to build out the final system. Systems can be benign, in the sense that the system is relatively more predictable, whereas a radical system departs materially from expectation. If the parts that constitute a system work independently of the other parts and can be boxed within boundaries, the emergent system becomes more predictable. A watch is an example: the different mechanical elements in a watch are geared toward reading the time as its ultimate purpose. It is a good example of a complex physical system. However, such systems are very brittle – a failure at one point can cascade into a failure of the entire system. Systems that are more resilient are those where the elements interact and learn from one another. In other words, the behavior of the elements excites other elements – all of which work together to create a dance toward a more stable state. They deploy what are often called the flocking trick and the pheromone trick. The flocking trick is largely the emulation of the particles that are close to one another – very similar to the cellular automata introduced by von Neumann and discussed in the earlier chapter. The pheromone trick reflects how the elements leave marks that are acted upon as signals by other elements, and thus they all work together around these signal trails; the signals act as a forcing function that creates the system.
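The flocking trick lends itself to a brief sketch: a toy, one-dimensional alignment model in which each agent nudges its heading toward the average heading of nearby agents, and a shared direction tends to emerge without any central controller. All parameters below are illustrative assumptions.

```python
# A minimal sketch of the "flocking trick": each agent repeatedly nudges
# its heading toward the average heading of nearby agents on a ring.
# Parameters are illustrative, not calibrated to any real system.
import numpy as np

rng = np.random.default_rng(3)
n_agents, neighborhood, steps = 50, 5.0, 100
positions = rng.uniform(0, 50, size=n_agents)
headings = rng.uniform(-np.pi, np.pi, size=n_agents)

for _ in range(steps):
    new_headings = headings.copy()
    for i in range(n_agents):
        nearby = np.abs(positions - positions[i]) < neighborhood
        # circular mean of the neighbors' headings (including self)
        new_headings[i] = np.arctan2(np.sin(headings[nearby]).mean(),
                                     np.cos(headings[nearby]).mean())
    headings = new_headings + rng.normal(scale=0.05, size=n_agents)  # small noise
    positions = (positions + np.cos(headings)) % 50                  # wrap around

alignment = np.abs(np.mean(np.exp(1j * headings)))
print("alignment (1 = perfect consensus):", round(float(alignment), 3))
```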


There are systems that have properties of extremely strong emergence. What do Consciousness, Life, and Culture have in common? How do we look at Climate? What about the organic development of cities? These are just some examples of systems where determinism is nigh impossible. We might be able to tunnel through the various and diverse elements that embody the system, but it would be difficult to coherently and tangibly draw the full set of relationships, signals, effectors, detectors, etc., to arrive at a complete understanding of the system. Wrestling with a strongly emergent system might even be outside the purview of the highest level of computational power available. And yet these systems exist, and they emerge and evolve. Still, we try to plan for these systems, or direct policies to influence them, without fully knowing the impact. This is also where the unintended consequences of our actions might take free rein.

Complex Physical and Adaptive Systems

There are two models in complexity: Complex Physical Systems and Complex Adaptive Systems! For us to grasp the patterns that are evolving – much of it seemingly out of our control – it is important to understand both of these models. One could argue that these models are mutually exclusive, and while the existing body of literature might be inclined to support that argument, we also find some degree of overlap that makes our understanding of complexity unstable. And instability is not to be construed as a bad thing! We might operate in a deterministic framework, and often we might operate with a gradient understanding of the volatility associated with outcomes. Keeping this in mind will be helpful as we deep dive into the two models. What we hope is that our understanding of these models will raise questions and establish mental frameworks for the intentional choices that we are led to make by the system, or make to influence the evolution of the system.

 

Complex Physical Systems (CPS)

Complex Physical Systems are bounded by certain laws. Given initial conditions and elements in the system, there is a degree of predictability and determinism associated with the behavior of the elements under the overarching laws of the system. Despite the nature of the term (Complex Physical System), which suggests a physical boundary, the late 1900s surfaced some nuances to this model: if there is a slight and arbitrary variation in the initial conditions, the outcome can be significantly different from expectations. The assumption of determinism is put to the sword. The notion that behaviors will follow established trajectories, as long as rules are established and laws are defined, has been put to the test. These discoveries offer insight into the developmental blocks of complex physical systems, and a better understanding of them will enable us to acknowledge such systems when we see them and to establish certain toll gates and actions to navigate, to the extent possible, toward narrowing the region of uncertainty around outcomes.


The universe is designed as a complex physical system. Just imagine! Let this sink in a bit. A complex physical system might be regarded as relatively simpler than a complex adaptive system. And with that in mind, once again … the universe is a complex physical system. We are awed by the vastness and scale of the universe, we regard the skies with reverence, and we wonder and ruminate on what lies beyond the frontiers of the universe, if anything. There is nothing bigger than the universe in the physical realm, and yet we regard it as a simple system – a “simple” complex physical system. In fact, the behavior of ants that leads to the sustainability of an ant colony is significantly more complex: by orders of magnitude.


Complexity behavior in nature reflects the tendency of large systems with many components to evolve into a poised “critical” state where minor disturbances or arbitrary changes in initial conditions can create a seemingly catastrophic impact on the overall system, such that the system changes significantly. And that happens not by some invisible hand or some uber design. What is fundamental to understanding complex systems is that complexity is defined by the variability of the system. Depending on our lens, the scale of variability could change, and that might require a different apparatus to understand the system. Thus, determinism is not the measure: Stephen Jay Gould has argued that it is virtually impossible to predict the future. We have hindsight explanatory powers but not predictive powers. Hence, a system that starts from an initial state might, over time, represent an outcome that is distinguishable in form and content from the original state. We see complex physical systems all around us: snowflakes, patterns on coastlines, waves crashing on a beach, rain, etc.
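To make the sensitivity to initial conditions concrete, here is a minimal sketch using the logistic map, a standard toy example of a deterministic system whose trajectories diverge from nearly identical starting points; the parameter value is illustrative.

```python
# A minimal sketch of sensitivity to initial conditions in a simple
# deterministic system: the logistic map x -> r * x * (1 - x).
# Two trajectories starting a hair apart diverge markedly.
r = 3.9                      # a parameter value in the chaotic regime
x, y = 0.200000, 0.200001    # nearly identical initial conditions

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```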

Complex Adaptive Systems (CAS)

Complex adaptive systems, on the contrary, are learning systems that evolve. They are composed of elements which are called agents that interact with one another and adapt in response to the interactions.


Markets are a good example of complex adaptive systems at work.

CAS agents have three levels of activity. As described by Johnson in Complexity Theory: A Short Introduction – the three levels of activity are:

  1. Performance (moment-by-moment capabilities): This establishes the locus of all behavioral elements that signify the agent at a given point of time and thereafter establishes triggers or responses. For example, if an object is approaching and the response of the agent is to run, that constitutes a performance if-then outcome. Alternatively, it could be signal driven – namely, an ant emits a certain scent when it finds food, and other ants catch on to that trail and act, en masse, to follow it. Thus an agent, or an actor in an adaptive system, has detectors that allow it to capture signals from the environment for internal processing, and it has effectors that translate the processing into higher-order signals that influence other agents to behave in certain ways in the environment. The signal is the scent that creates these interactions and thus the rubric of a complex adaptive system.
  2. Credit assignment (rating the usefulness of available capabilities): As the agent gathers experience over time, it starts to rely heavily on certain rules or heuristics that it has found useful. It is also typical that these rules may not be the best rules; they may simply be the rules of first discovery, and thus they stay. Agents rank these rules in some sequential, perhaps ordinal, order to determine the best rule to fall back on under certain situations. This is the crux of decision making. However, there are also times when it is difficult to assign a rank to a rule, especially if an action sets or lays the groundwork for a future course of other actions. A spider weaving a web is an example of an agent expending energy in the hope that she will get some food: a stage-setting assignment that agents have to undergo as well. One of the common models used to describe this is the bucket-brigade algorithm, which essentially states that the strength of a rule depends on the success of the overall system and the agents that constitute it. In other words, each predecessor and successor needs to be aware only of the strength of the previous and following agent, and that is done by some sort of number assignment that becomes stronger from the origin of the system to its end. If there is a final valuable end product, then the pathway of the rules reflects success (a minimal sketch of this idea appears after this list). Once again, it is conceivable that this is not the optimal pathway but a satisficing pathway that results in a better system.
  3. Rule discovery (generating new capabilities): Performance and credit assignment in agent behavior suggest that agents are governed by a certain bias. If agents have been successful following certain rules, they will be inclined to follow those rules all the time. As noted, the rules might not be optimal but satisficing. Is improvement then a matter of just incremental changes to the process? We do see major leaps in improvement, so how and why does this happen? In other words, someone in the process has decided to take a different rule despite their experience. It could have been an accident or very intentional.
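The bucket-brigade idea in item 2 lends itself to a small sketch. The chain of rule names, the bid fraction, and the payoff below are purely hypothetical; the sketch only illustrates how strength can propagate backward from a final reward to the stage-setting rules over repeated episodes.

```python
# A minimal sketch of bucket-brigade-style credit assignment: a chain of
# rules fires in sequence, a payoff arrives only at the end, and each rule
# passes a fraction of its strength back to its predecessor. Over repeated
# episodes the strength of the early "stage-setting" rules rises.
# Parameter values are illustrative, not taken from any specific source.
chain = ["sense_food", "lay_trail", "recruit_ants", "collect_food"]
strength = {rule: 1.0 for rule in chain}
bid_fraction, payoff, episodes = 0.1, 10.0, 50

for _ in range(episodes):
    # each rule pays a bid to the rule that preceded it (set the stage for it)
    for prev, curr in zip(chain, chain[1:]):
        bid = bid_fraction * strength[curr]
        strength[curr] -= bid
        strength[prev] += bid
    # the environment rewards only the final rule in the chain
    strength[chain[-1]] += payoff

for rule in chain:
    print(f"{rule:14s} strength = {strength[rule]:7.2f}")
```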

One of the theories that has been presented is that of building blocks. CAS innovation is a result of reconfiguring the various components in new ways. One quips that if energy is neither created nor destroyed, then everything that exists today or will exist tomorrow is nothing but a reconfiguration of energy in new ways. All of tomorrow resides in today, just patiently waiting to be discovered. Agents create hypotheses and experiment in the petri dish by reconfiguring their experiences and other agents’ experiences to formulate hypotheses and the runway for discovery. In other words, there is a collaborative element that comes into play, where the interaction of the various agents and their assignment as a group to a rule also sets the stepping stone for potential leaps in innovation.

Another key characteristic of CAS is that the elements are constituted in a hierarchical order. Combinations of agents at a lower level result in a set of agents higher up and so on and so forth. Thus, agents in higher hierarchical orders take on some of the properties of the lower orders but it also includes the interaction rules that distinguishes the higher order from the lower order.

The Unbearable Lightness of Being

Where the mind is without fear and the head is held high
Where knowledge is free
Where the world has not been broken up into fragments
By narrow domestic walls
Where words come out from the depth of truth
Where tireless striving stretches its arms towards perfection
Where the clear stream of reason has not lost its way
Into the dreary desert sand of dead habit
Where the mind is led forward by thee
Into ever-widening thought and action
Into that heaven of freedom, my Father, let my country awake.

– Rabindranath Tagore

Among the many fundamental debates in philosophy, one of the most enduring has been around the concept of free will. The debates have stemmed from two arguments associated with free will.

1)      Since future actions are governed by the circumstances of the present and the past, human beings’ future actions are predetermined on account of the learnings of the past. Hence, the actions that happen are not truly a consequence of free will.

2)      The counter-argument is that future actions may not necessarily be determined and governed by the legacy of the present and the past, and hence leaves headroom for the individual to exercise free will.

Now one may wonder what determinism, or the lack of it, has to do with the current state of things in an organizational context. How is this relevant? Why are the abstract notions of determinism and free will important enough to be considered in the context of organizational evolution? How does their meaning lend itself to structured institutions like business organizations, whose sole purpose is to create products and services to meet market demand?

So we will throw a factual wrinkle in this line of thought. We will introduce now an element of chance. How does chance change the entire dialectic? Simply because chance is an unforeseen and random event that may not be pre-determined; in fact, a chance event may not have a causal trigger. And chance or luck could be meaningful enough to untether an organization and its folks to explore alternative paths.  It is how the organization and the people are aligned to take advantage of that random nondeterministic future that could make a huge difference to the long term fate of the organization.

The principle of inductive reasoning holds that what is true for n and n+1 can be expected to be true for n+2. Induction creates predictability, and hence organizations create pathways to exploit its logical extension. It is the most logical apparatus that exists to advance groups in a stable but robust manner to address the multitude of challenges that they have to grapple with. After all, the market is governed by animal spirits! But let us think through this very carefully. All competition or collaboration that occurs among groups to address market demands results in homogeneous behavior with generally homogeneous outcomes. Simply put, products and services become commoditized. Their variance is not unique and distinctive. However, they could be just distinctive enough to eke out profits in the margins before being absorbed into a bigger whole. At that point, identity is effaced over time. Organizations gravitate to a singularity. Unique value propositions wane over time.

So let us circle back to chance. Chance is our hope to create divergence. Chance is the factoid that cancels out the inductive vector of industrial organization. Chance does not exist on demand … it is not a “waiting for Godot” metaphor around the corner. If it always did, it would have been imputed by the determinists in their inductive world and we would end up with a dystopian homogeneous future. Chance happens. And sometimes it has a very short half-life. And if the organization and its people are aligned, and their mindset is adapted toward embracing and exploiting that fleeting factoid of chance, the consequences could be huge. New models would emerge, new divergent paths would be traced, and society and markets would burst into a garden of colorful ideas in a virtual oasis of new markets.

So now to tie this all to free will and to the unbearable lightness of being! It is the existence of chance that creates the opportunity to exercise free will on the part of an individual, but it is the organization’s responsibility to allow the individual to unharness themselves from organizational inertia. Thus, organizations have to perpetuate an environment wherein employees are afforded some headroom to break away. And I don’t mean break away as in people leaving the organization to do their own gigs; I mean breaking away in thought and action within the boundaries of the organization, to be open to the element of chance and exploit it. Great organizations do not just encourage the lightness of being by unharnessing the talent; rather, the great organizations are the ones that make the lightness of being unbearable. These individuals are left with nothing but an awareness and openness to chance to create incredible value … far more incredible, awe inspiring and momentous than a more serene state of general business-as-usual affairs.

MECE Framework, Analysis, Synthesis and Organization Architecture toward Problem-Solving

MECE is a thought tool that has been used systematically at McKinsey. It stands for Mutually Exclusive, Collectively Exhaustive. We will go into both of these components in detail and then relate them to the dynamics of an organization’s mindset. The presumption in this note is that the organization’s mindset has been engraved over time or is being driven by the leadership. We are looking at MECE since it represents a tool used by the most blue-chip consulting firm in the world. And while doing that, we will, by the end of the article, arrive at the conclusion that this framework alone will not be the panacea for all investigative methodology to assess a problem. Rather, this framework has to reconcile with the active knowledge that most things do not fall neatly into a MECE structure, and thus an additional systems framework is needed to amplify our understanding for problem solving while leaving room for chance.

So to apply the MECE technique, first you define the problem that you are solving for. Once you are past the definition phase, well – you are now ready to apply the MECE framework.

MECE is a framework used to organize information which is:

  1. Mutually exclusive: Information should be grouped into categories so that each category is separate and distinct without any overlap; and
  2. Collectively exhaustive: All of the categories taken together should deal with all possible options without leaving any gaps.

In other words, once you have defined a problem, you figure out the broad categories that relate to the problem and then brainstorm through ALL of the options associated with those categories. Think of it as a mental construct: you move across a horizontal line with well-defined shades representing categories, and each of those partitions has a vertical construct with all of the options that exhaustively explain that shade. Once you have gone through that exercise, which is no mean feat, you will be looking at an artifact that addresses the problem. After you have done that, you individually look at every set of options and its relationship to the distinctive category … and hopefully you are well on your path to coming up with relevant solutions.
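The two MECE conditions can be checked mechanically once a grouping is written down. Here is a minimal sketch with a hypothetical option set and hypothetical categories; the deliberate gap shows the exhaustiveness test failing.

```python
# A minimal sketch of checking a grouping against the two MECE conditions:
# no overlap between categories (mutually exclusive) and no gaps against
# the full option set (collectively exhaustive). The problem, categories,
# and options are hypothetical placeholders.
from itertools import combinations

all_options = {"price", "product", "placement", "promotion", "people"}

categories = {
    "offer":        {"price", "product"},
    "go_to_market": {"placement", "promotion"},
    # "people" is deliberately left out to show a gap being caught
}

mutually_exclusive = all(
    a.isdisjoint(b) for a, b in combinations(categories.values(), 2)
)
covered = set().union(*categories.values())
collectively_exhaustive = covered == all_options

print("mutually exclusive:     ", mutually_exclusive)
print("collectively exhaustive:", collectively_exhaustive)
print("gaps:", all_options - covered)
```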

Now some may argue that my understanding of MECE is very simplistic. It may very well be. But I can assure you that it captures the essence of a very widely used framework in consulting organizations. And this framework has been imported into large organizations and has cascaded down to organizations of different scales ever since.

Here is a link that would give you a deeper understanding of the MECE framework:

http://firmsconsulting.com/2010/09/22/a-complete-mckinsey-style-mece-decision-tree/

Now we are going to dig a little deeper. Allow me to digress and take you down a path less travelled. We will circle back to MECE and organizational leadership in a few moments. One of the memorable quotes that has left a lasting impression is by the great Nobel Prize-winning physicist Richard Feynman.

“I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist takes this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is … I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower! It only adds. I don’t understand how it subtracts.”

The above quote by Feynman lays the groundwork for understanding two different approaches: the artist approaches the observation of the flower from a synthetic standpoint, whereas Feynman approaches it from an analytic standpoint. The two views are not antithetical to one another; in fact, you need both to gather a holistic view and arrive at a conclusion – the sum is greater than the parts. Feynman does not address the essence of beauty that the artist puts forth; he looks at the beauty of how the components and their mechanics interact and how that adds to our understanding of the flower. This is important because the following passage will explore another concept to drive home this difference between analysis and synthesis.

There are two possible ways of gaining knowledge. Either we can proceed from the construction of the flower (the Feynman method), and then seek to determine the laws of the mutual interaction of its parts as well as its response to external stimuli; or we can begin with what the flower accomplishes and then attempt to account for this. By the first route we infer effects from given causes, whereas by the second route we seek causes of given effects. We can call the first route synthetic, and the second analytic.

 

We can easily see how the cause effect relationship is translated into a relationship between the analytic and synthetic foundation.

 

A system’s internal processes — i.e. the interactions between its parts — are regarded as the cause of what the system, as a unit, performs. What the system performs is thus the effect. From these very relationships we can immediately recognize the requirements for the application of the analytic and synthetic methods.

 

The synthetic approach — i.e. to infer effects on the basis of given causes — is therefore appropriate when the laws and principles governing a system’s internal processes are known, but when we lack a detailed picture of how the system behaves as a whole.

Another example … we do not have a very good understanding of the long-term dynamics of galactic systems, nor even of our own solar system. This is because we cannot observe these objects for the thousands or even millions of years which would be needed in order to map their overall behavior.

 

However, we do know something about the principles which govern these dynamics, i.e. gravitational interaction between the stars and planets respectively. We can therefore apply a synthetic procedure in order to simulate the gross dynamics of these objects. In practice, this is done with computer models which calculate the interaction of system parts over long, simulated time periods.

The analytical approach — drawing conclusions about causes on the basis of effects – is appropriate when a system’s overall behavior is known, but when we do not have clear or certain knowledge about the system’s internal processes or the principles governing them. There are also a great many systems for which we neither have a clear and certain conception of how they behave as a whole, nor fully understand the principles at work which cause that behavior. Organizational behavior is one such example, since it introduces the fickle spirits of the employees which, in aggregate, create a distinct character in the organization.

Leibniz was among the first to define analysis and synthesis as modern methodological concepts:

“Synthesis … is the process in which we begin from principles and [proceed to] build up theorems and problems … while analysis is the process in which we begin with a given conclusion or proposed problem and seek the principles by which we may demonstrate the conclusion or solve the problem.”

 

So we have wandered down this path of analysis and synthesis, and now we will circle back to MECE and the organization. The MECE framework is a prime example of the application of analytics in an organizational structure. The underlying hypothesis is that applying the framework will illuminate and add clarity to the problems that we are solving for. But here is the problem: the approach can lead to paralysis by analysis. In applying this framework, one can lose oneself in the weeds, whereas it is just as important to view the forest. So organizations have to step back and assess at what point to stop the analysis – i.e., to decide that enough information has been gathered – and at what point to set about discovering the set of principles that will govern the action to solve the problems. It is almost always impossible to gather all of the information needed to make the best decision, especially where speed, iteration, distinguishing oneself from the herd quickly, stamping a clear brand, etc. are becoming the hallmarks of great organizations.

Applying the synthetic principle in addition to “MECE think” leaves room for error and sub-optimal solutions. But it crowdsources the limitless power of imagination and pattern thinking that allows the organization to make critical breakthroughs in innovative thinking. It is thus important that both principles are promulgated by the leadership as coexisting principles that drive an organization forward. This ignites employee engagement, and it absorbs the stochastic errors that result when employees may not have all the MECE conditions checked off.

 

In conclusion, it is important that the organization and its leadership set their architecture upon the traditional pillars of analysis and synthesis – MECE and systems thinking. This architecture serves as the springboard that allows employees accidental discoveries, flights of imagination and Nietzschean leaps that transform the organization toward the pathway of innovation, while remaining grounded upon the bedrock of facts and empirical observations.

 

 

Risk Management and Finance

If you are in finance, you are a risk manager. Say what? Risk management! Imagine being the hub of a set of functional spokes, each of which is embedded with a risk pattern that can vary over time. A sound finance manager is someone who is best able to keep a pulse on each of them and to support the decisions that can contain the risk. Thus, value management becomes critical: weighing the consequence of a decision against the risk that the decision poses. Not cost management, but value management. And to make value management more concrete, we turn to cash impact, or rather the discounted value of the future stream of cash that may or may not be a consequence of a decision. Companies carry risks; if not, a company will not offer any premium in value to the market. They create competitive advantage – defined here as sustained growth in free cash flow, the key metric that becomes the separator.
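To make “cash impact” concrete, here is a minimal sketch of discounting a stream of incremental free cash flows attributable to a decision; the figures and discount rate are hypothetical.

```python
# A minimal sketch of value management as cash impact: discount a
# hypothetical stream of incremental free cash flows attributable to a
# decision. Figures and discount rate are illustrative only.
def discounted_value(cash_flows, rate):
    """Present value of year-end cash flows at a constant discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

incremental_fcf = [-500.0, 150.0, 220.0, 260.0, 300.0]   # years 1..5, in $k
rate = 0.10

npv = discounted_value(incremental_fcf, rate)
print(f"NPV of the decision: {npv:.1f} (in $k)")
```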

John Kay, an eminent strategist, identified four sources of competitive advantage: Organizational Architecture and Culture, Reputation, Innovation and Strategic Assets. All of these are inextricably intertwined and must be aligned to serve value in the company. The business value approach underpins the interrelationships best, and in so doing, scenario planning emerges as a sound mechanism to manage risks. Understanding the profit impact of a strategy, and the capability/initiative tie-in, is one of the most crucial conversations that a good finance manager can encourage in a company. Product, market and internal capabilities become the anchor points in evolving discussions. Scenario planning thus emerges in the context of trends and uncertainties: a trend in patterns may open up possibilities, the latter being in the domain of uncertainty.

There are multiple methods one could use in building scenarios and engaging in fruitful risk assessment.
  1. Sensitivity Assessment: Evaluate decisions in the context of the strategy’s reliance on the resilience of business conditions. Assess the various conditions in a scenario, or in mutually exclusive scenarios, assign a probabilistic guesstimate to the success factors, and then offer simple solutions (a minimal sketch of this probabilistic weighing appears after the list). This assessment tends to be heuristic oriented and excellent when one is dealing with a few specific decisions, where there is an elevated sense of clarity about the business conditions that may present themselves. It is the most commonly used method, but it does not address the more realistic conditions where clarity is obfuscated and muddy.
  2. Strategy Evaluation: Use scenarios to test a strategy by throwing in a layer of interaction complexity. To the extent you can disaggregate the complexity, the evaluation of a strategy is more tenable. But once again, disaggregation has its downsides. We don’t operate in a vacuum: it is the aggregation, and negotiating through this aggregation effectively, where the real value is. You may have heard of the McKinsey MECE (Mutually Exclusive, Collectively Exhaustive) methodology, where strategic thrusts are disaggregated and contained within a narrow framework. The idea is that if one does that enough, one has untrammeled confidence in choosing one initiative over another. That is true in some cases, but my belief is that the world operates at a more synthetic level than a purely analytic one. We resort to analytics because it is too damned hard to synthesize and to agree on an optimal solution. I am not criticizing analytics; I am only suggesting that there is some possibility that a false hypothesis is accepted and a true one rejected. Analytics is an important tool, but it must be weighed along with the synthetic tradition.
  3. Synthetic Development: By far the most interesting, and perhaps the most controversial, with a glint of academic and theoretical monstrosities included – this represents developing and broadcasting all scenarios equally weighed, and grouping the interactions of scenarios. Thus, if introducing a multi-million dollar initiative in untested waters is a decision you have to weigh, you go through the first two methods, and then review the final outcome against peripheral factors that were not introduced initially. A simple statement or realization – the competition for Southwest is the Greyhound bus – could significantly alter the expanse of the strategy.
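As referenced in item 1 above, here is a minimal sketch of weighing a decision across a few probabilistically weighted scenarios; the scenarios, probabilities, and cash impacts are hypothetical placeholders.

```python
# A minimal sketch of a sensitivity assessment: assign probabilistic
# guesstimates to a few mutually exclusive business scenarios and weigh a
# decision's expected cash impact across them.
# Scenarios, probabilities, and outcomes are hypothetical placeholders.
scenarios = {
    # name: (probability, cash impact of the initiative in that scenario, $k)
    "demand_surges": (0.25,  900.0),
    "base_case":     (0.55,  300.0),
    "new_entrant":   (0.20, -400.0),
}

# the scenarios are meant to be mutually exclusive and exhaustive,
# so their probabilities should sum to one
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected = sum(p * impact for p, impact in scenarios.values())
worst = min(impact for _, impact in scenarios.values())
print(f"expected impact: {expected:.1f} $k, worst case: {worst:.1f} $k")
```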

If you think the new world of finance is nothing more than crunching numbers … stop and think again. Yes, crunching those numbers plays a big part, but it is less a cause than an effect of the mental model that you appropriate in this prized profession.