
Navigating Chaos and Model Thinking

An inherent property of a chaotic system is that slight changes in initial conditions produce disproportionate changes in outcome that are difficult to predict. Chaotic systems generate outcomes that appear random: they arise from simple, non-random processes, but the complexity of such systems emerges over time, driven by numerous iterations of simple rules. The elements that compose chaotic systems might be few in number, but they work together to produce an intricate set of dynamics that amplifies the outcome and makes it hard to predict. These systems evolve over time according to their rules, their initial conditions, and how the constituent elements work together.
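This sensitivity to initial conditions can be sketched with the logistic map, a textbook one-line chaotic system. The parameter r = 3.9, the starting points, and the size of the perturbation below are illustrative choices, not figures from any system discussed here.

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), a standard minimal chaotic system.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # initial conditions differ by 1e-9

# The tiny initial gap compounds until the two trajectories are unrelated.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"gap at step 0: {divergence[0]:.1e}")
print(f"largest gap over 50 steps: {max(divergence):.3f}")
```

Two starting points that agree to nine decimal places end up on entirely different trajectories within a few dozen iterations, which is exactly why long-range prediction fails in such systems.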


Complex systems are characterized by emergence. The interactions among the elements of the system, and between the system and its environment, create new properties which influence the structural development of the system and the roles of its agents. Such systems exhibit self-organization, and hence it is difficult to study or affect a system by studying its constituent parts in isolation. The task becomes even more formidable given the prevalent reality that most systems exhibit non-linear dynamics.


So how do we incorporate management practices in the face of the chaos and complexity that are inherent in organizational structure and market dynamics? It would be interesting to study this in light of the evolution of management principles in keeping with the evolution of scientific paradigms.


Newtonian Mechanics and Taylorism

Traditional organization management has been heavily influenced by Newtonian mechanics. The five key assumptions of Newtonian mechanics are:

  1. Reality is objective
  2. Systems are linear and there is a presumption that all underlying cause and effect are linear
  3. Knowledge is empirical and acquired through collecting and analyzing data with the focus on surfacing regularities, predictability and control
  4. Systems are inherently efficient. Systems almost always follow the path of least resistance
  5. If inputs and processes are managed, the outcomes are predictable

Frederick Taylor is the father of scientific management, and his methods were deployed in automotive companies in the 1940s. Workers and processes were input elements to ensure that the machine functioned per expectations. Linearity was the operating principle. Management's role was observation and control, and the system was thought to function best under hierarchical operating principles. Mass and efficient production were the hallmarks of the management goal.


Randomness and the Toyota Way

The randomness paradigm recognized uncertainty as a pervasive constant. The various methods that the Toyota Way invoked, such as the 5 Whys, rested on the assumption that understanding cause and effect is instrumental, and this inclined management toward a more process-based deployment. Learning is introduced in this model as a dynamic variable, and there is great emphasis on the agents and on providing them clarity about the purpose of their tasks. Efficiencies and quality are presumably driven by the rank and file, and autonomous decisions are allowed. The management principle moves away from hierarchical and top-down toward a more responsibility-driven labor force.


Complexity and Chaos and the Nimble Organization

Increasing complexity has led to more demands on the organization. With the advent of social media, rapid information distribution, and a general rise in consciousness around social impact, organizations have to balance multiple objectives. Any small change in initial conditions can lead to major outcomes: an advertising mistake can become a global PR nightmare; a word taken out of context can have huge ramifications that immediately reflect on the stock price; an employee complaint can force a management change. Increasing data and knowledge are not sufficient to ensure long-term success. In fact, there is no clear recipe to guarantee success in an age fraught with non-linearity, emergence and disequilibrium. Succeeding in this environment entails developing a learning organization that is not governed by fixed top-down rules: rather, the rules are simple and the guidance centers on the purpose of the system or the organization. It is best left to the intellectual capital to self-organize rapidly in response to external information, adapting and making changes to ensure organizational resilience and success.


Companies are dynamic non-linear adaptive systems. The elements in the system constantly interact among themselves and with their external environment. This creates new emergent properties that are sensitive to the initial conditions. A change in purpose or strategic positioning can set off a domino effect and lead to outcomes that are not predictable. Decisions are pushed out to all levels of the organization, since the presumption is that local and diverse knowledge that spontaneously emerges in response to stimuli makes a superior structure to managing complexity in a centralized manner. Thus, methods that generate ideas, create innovation habitats, and embrace failures as new opportunities to learn are best practices that companies must follow. Traditional long-term planning and forecasting is becoming a far harder exercise, practically impossible. Planning instead centers on a strategic mindset: scenario planning, allowing local rules to auto-generate without direct supervision, encouraging dissent and diversity, stimulating creativity, and establishing clarity of purpose and broad guidelines are the hallmarks of success.


Principles of Leadership in a New Age

We have already explored the fact that traditional leadership models originated in the context of mass production and efficiencies. These models are archaic in today's information era, where systems are characterized by exponential dynamism of variables, increased density of interactions, increased globalization and interconnectedness, massive information distribution at increasing rapidity, and a general shift toward economies driven by the free will of participants rather than a central authority.


Complexity Leadership Theory (Uhl-Bien) is a “framework for leadership that enables the learning, creative and adaptive capacity of complex adaptive systems in knowledge-producing organizations or organizational units.” Since planning for the long term is virtually impossible, leadership has to be armed with different tool sets to steer the organization toward achieving its purpose. Leaders take on an enabler role rather than a controller role: empowerment supplants control. Leadership is not about the traits of a single leader: rather, it redirects emphasis from individual leaders to leadership as an organizational phenomenon. Leadership is a property of the organization rather than of an individual. We recognize that complex systems have a lot of interacting agents – labor and capital, in business parlance. Introducing complexity leadership means empowering all of the agents with the ability to lead their sub-units toward a common shared purpose. Different agents can become leaders in different roles as their tasks or roles morph rapidly: leadership is not necessarily conferred by formal appointment or knighthood in title.

Thus, the complexity of our modern-day reality demands a new strategic toolset for the new leader. The most important skills would be complex seeing, complex thinking, complex feeling, complex knowing, complex acting, complex trusting and complex being. (Elena Osmodo, 2012)


Complex Seeing: Reality is inherently subjective. It is a page out of the Heisenberg uncertainty principle, which posits that the independence between the observer and the observed is not real. If leaders are not aware of this interdependence, they run the risk of making decisions that are fraught with bias. They will continue to perceive reality with the same lens through which they have perceived it in the past, even as the undercurrents and riptides of increasingly exponential systems tear away their “perceived reality.” Leaders have to be conscious of the tectonic shifts, reevaluate their own intentions, probe and exclude biases that could cloud the fidelity of their decisions, and engage in a continuous learning process. The ability to sift and see through this complexity sets the initial condition upon which the entire system's efficacy and trajectory rest.


Complex Thinking: Leaders have to guard against falling prey to linear, simple cause-and-effect thinking. On the contrary, leaders have to engage in counter-intuitive thinking, brainstorming and creative thinking. In addition, encouraging dissent, debate and diversity breeds new strains of thought and ideas.


Complex Feeling: Leaders must maintain high levels of energy and be optimistic about the future. Failures are not scoffed at; rather, they are simply another window for learning. Leaders have to promote positive and productive emotional interactions. Leaders are tasked with increasing positive feedback loops while reducing negative feedback mechanisms to the extent possible. Entropy and attrition tax any system as is: the leader's job is to set up a safe environment and to inculcate respect through general guidelines and leading by example.


Complex Knowing: Leadership is tasked with formulating simple rules to enable learned and quicker decision-making across the organization. Leaders must provide a common purpose, interconnect people with symbols and metaphors, and continually reiterate the raison d'être of the organization. Knowing is articulating: leadership has to articulate, and remain humble in the face of, any new and novel challenges and counterfactuals that might arise. The leader has to establish systems of knowledge: collective learning, collaborative learning and organizational learning. Collective learning is the ability of the collective to learn from experiences drawn from the vast set of individual actors operating in the system. Collaborative learning results from the interaction of agents and clusters in the organization. A learning organization, as Senge defines it, is “where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspirations are set free, and where people are continually learning to see the whole together.”


Complex Acting: Complex acting is the ability of the leader not only to work toward benefiting the agents in his or her purview, but also to ensure that the benefits resonate through the whole, which by definition is greater than the sum of its parts. Complex acting means taking specific, action-oriented steps that largely reflect the values that the organization represents in its environmental context.


Complex Trusting: Decentralization requires conferring power on local agents. For decentralization to work effectively, leaders have to trust that the agents will, in the aggregate, work toward advancing the organization. The cost of managing top-down far exceeds the benefits of a trust-based decentralized system in a dynamic environment resplendent with the novelty of chaos and complexity.


Complex Being: This is the ability of the leader to favor and encourage rapid communication across the organization. The leader needs to encourage relationships and inter-functional dialogue.


The role of complex leaders is to design adaptive systems that are able to cope with challenging and novel environments by establishing a few rules and encouraging agents to self-organize autonomously at local levels to solve challenges. The leader’s main role in this exercise is to set the strategic directions and the guidelines and let the organizations run.

Distribution Economics

Distribution is a method to get products and services to the maximum number of customers efficiently.


Complexity science is the study of complex systems and of problems that are multi-dimensional, dynamic and unpredictable. Such a system constitutes a set of interconnected relationships that do not always abide by the laws of cause and effect, but rather by the modality of non-linearity. Thomas Kuhn, in his pivotal work The Structure of Scientific Revolutions, posits that anomalies that arise under the scientific method accumulate to the point where they can no longer be put on hold or left to simmer on a back burner: rather, those anomalies become the front line for new methods and inquiries, such that a new paradigm must emerge to supplant the old conversations. It is this that lays the foundation of scientific revolution – an emergence that occurs in an ocean of seeming paradoxes and competing theories. Contrary to a simple scientific method that seeks to surface regularities in natural phenomena, complexity science studies the effects that rules have on agents. Rules do not drive systems toward a predictable outcome: rather, they set into motion a high density of interactions among agents such that the system coalesces around a purpose: necessarily that of survival in the context of its immediate environment. In addition, the learnings accumulated in arriving at the outcome are then replicated over periods to ensure that the system mutates in response to changes in the external environment. In theory, the generative rules lead to emergent behavior that displays patterns of parallelism to earlier known structures.
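The claim that simple generative rules produce emergent, hard-to-predict structure can be illustrated with an elementary cellular automaton. Rule 110 below is a standard example from Wolfram's classification; the grid width and step count are arbitrary illustrative choices.

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors, via an 8-entry lookup encoded in `rule`.

def step(cells, rule=110):
    """Apply one synchronous update of an elementary CA (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)              # look up rule bit
    return out

cells = [0] * 31
cells[15] = 1  # a single "on" cell as the initial condition
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

From one active cell and a lookup table of eight bits, an intricate triangular pattern unfolds, a small-scale analogue of complexity emerging from iterated simple rules.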


For any system to survive and flourish, the distribution of information, noise and signals in and outside of a CPS (complex physical system) or CAS (complex adaptive system) is critical. We have touched on at length the fact that a system comprises actors and agents that work cohesively together to fulfill a special purpose. Specialization and scale matter! How is a system enabled to fulfill its purpose and arrive at a scale that ensures long-term sustenance? Hence this discussion of distribution and scale, a salient factor in the emergence of complex systems that provides the inherent moat of “defensibility” against internal and external agents working against it.


Distribution, in this context, refers to the quality and speed of information processing in the system. It is either created by a set of rules that govern the tie-ups between the constituent elements in the system, or it emerges from a spontaneous evolution of communication protocols established in response to internal and external stimuli. It takes into account the available resources in the system, or it sets up the demands on resource requirements. Distribution capabilities have to be effective and, depending upon the dynamics of external systems, might have to be modified over time. Some distribution systems have to be optimized or organized around efficiency: namely, the ability of the system to distribute information efficiently. Other environments might call for less emphasis on efficiency and instead focus on establishing scale – an escape velocity in size and interaction such that the system can dominate the influence of external environments. The choice between efficiency and size is framed by the long-term purpose of the system, while also accounting for the exigencies of the ebbs and flows of external agents that might threaten the system's existence.


Since all systems are subject to the laws of entropy and the impact of unintended consequences, strategies have to be orchestrated accordingly. While it is always naïve to assume exactitude in the ultimate impact of rules and behavior, one would surmise that such systems have to be built around the fault lines of multiple roles for agents or groups of agents, to ensure that the system is being nudged, more than less, toward the desired outcome. Hence, distribution strategy is the aggregate impact of several types of channels of information that are actively working toward a common goal. The idea is to establish multiple channels that invoke different strategies while not cannibalizing or sabotaging an existing set of channels. These mutually exclusive channels have inherent properties that are distinguished by the capacity and length of the channels, the corresponding resources that the channels use, and their sheer ability to chaperone the system toward the overall purpose.


The complexity of the purpose and the external environment determines the strategies deployed and whether scale or efficiency is the key barometer of success. If a complex system is to survive and, hopefully, replicate from strength to greater strength over time, size becomes more paramount than efficiency. Size makes up for the increased entropy, which is the default tax on the system, and it also increases the possibility of the system reaching escape velocity. To that end, managing for scale by compromising efficiency is a perfectly acceptable approach when one is looking at the system with a long-term lens and built-in regeneration capabilities. However, not all systems fall into this category: some environments are so dynamic that planning for long-term stability is not practical, and one must instead quickly optimize for increased efficiency. Scale versus efficiency thus involves risky bets on how the external environment will evolve. We have looked at how systems interact with external environments; it is just as important to understand how the actors work internally in a system that is pressed toward scale rather than efficiency, or vice versa. If the objective is efficiency, then capabilities can be ephemeral: one builds out agents and actors with mission-specific capabilities. Scale-driven systems, on the contrary, demand capabilities that involve increased multi-tasking, the ability to develop and learn from feedback loops, and the priming of constraints with additional resources. Scaling demands acceleration and speed: if a complex system can be devised to distribute information and learning at an accelerating pace, there is a greater likelihood that this system will dominate its environment.


Scaling systems can be approached by adding more agents with varying capabilities. However, an increased number of participants exponentially increases the permutations and combinations of channels, and that can make the system sluggish. Thus, in establishing the purpose and the subsequent design of the system, it is far more important to establish the rules of engagement. Some rules might carry a centralized authority that directionally provides the goal, while other rules might be framed to encourage a pure decentralization of authority, such that participants act quickly in groups and clusters to enable execution toward a common purpose.
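The coordination overhead can be made concrete. Even counting only pairwise links (possible subgroups and clusters grow faster still, as the 2^n subsets of agents), the channel count grows quadratically with the number of agents, far outpacing headcount. A minimal sketch:

```python
# Pairwise communication channels among n agents: n * (n - 1) / 2.
# Adding agents multiplies coordination overhead faster than capability.

def pairwise_channels(n):
    """Number of distinct two-way links among n agents."""
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n:>3} agents -> {pairwise_channels(n):>5} potential channels")
```

Going from 10 to 100 agents multiplies headcount by 10 but potential channels by roughly 110, which is why rules of engagement matter more than raw numbers as a system scales.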


In business we are surrounded by uncertainty and opportunities. It is how we calibrate around this that ultimately reflects success. The ideal framework at work would be as follows:

  1. What are the opportunities and what are the corresponding uncertainties associated with the opportunities? An honest evaluation is in order since this is what sets the tone for the strategic framework and direction of the organization.
  2. Should we be opportunistic and establish rules that allow the system to gear toward quick wins: this would be more inclined toward efficiencies. Or should we pursue dominance by evaluating our internal capability and the probability of winning and displacing other systems that are repositioning in advance or in response to our efforts? At which point, speed and scale become the dominant metric and the resources and capabilities and the set of governing rules have to be aligned accordingly.
  3. How do we craft multiple channels within and outside of the system? In business lingo, that could translate into sales channels. These channels sell products and services and can add value along the way to the existing set of outcomes that the system is engineered for. The more channels that are mutually exclusive and clearly differentiated by their value propositions, the stronger the system and the greater its ability to scale quickly. These antennas, if you will, also serve as receptors for new information, feeding data into the organization, which can subsequently process it and reposition if the situation warrants. These differentiated antennas together constitute the distribution strategy of the organization.
  4. The final cut is to enable a multi-dimensional loop between the external and internal systems such that the system expands at an accelerating pace without much intervention or proportionate changes in rules. In other words, the system expands autonomously – this is commonly known as the platform effect. Scale alone does not produce the platform effect, although the platform effect most definitely can result in scale. Scale can, however, be an important contributor to the platform effect, and if the latter takes root, then the overall system achieves both efficiency and scale in the long run.

Winner Take All Strategy

Being the first to cross the finish line makes you a winner in only one phase of life. It’s what you do after you cross the line that really counts.
– Ralph Boston

Does a winner-take-all strategy apply outside the boundaries of a complex system? Let us put it another way: if one were to pursue a winner-take-all strategy, does this willful strategic move not bind them to the constraints of complexity theory? Will the net gains accumulate over time at a pace far greater than the corresponding entropy that might be a by-product of such a strategy? Does natural selection exhibit a winner-take-all strategy over time, and ought we then to regard that winning combination in crafting our own strategies? Are we fated in the long run to arrive at a world with very few winners in all niches, and what would that mean? How does that square with our good intentions of creating equal opportunities and fair access to resources for a wider swath of the population? In other words, is winner-take-all a deterministic fact, and do all our trivial actions to counter it constitute love's labor lost?


Natural selection is a mechanism of evolution. It explains how populations or species evolve or modify over time in such a manner that they become better suited to their environments. Recall the discussion on managing scale in the earlier chapter, where we briefly discussed aligning internal complexity to external complexity; natural selection is how this plays out at a biological level. Essentially, natural selection posits that living organisms have inherited traits that help them survive and procreate. Such organisms will largely leave more offspring than their peers, the presumption being that they carry key traits that will survive the vagaries of external complexity and environment (predators, resource scarcity, climate change, etc.). Since these traits are passed on to the next generation, they will become more common until they are dominant across generations, provided the environment has not been punctuated by massive changes. Organisms with these dominant traits will have adapted to their environment. Natural selection does not necessarily suggest that what is good for one is good for the collective species.
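A toy simulation can illustrate the mechanism: agents carrying a trait with a reproductive edge come to dominate a fixed-size population over generations. The fitness values, population size, and generation count below are illustrative assumptions, not empirical figures.

```python
import random

# Minimal selection model: each generation is resampled from the current
# one with probability proportional to each individual's trait fitness.

def next_generation(population, fitness, rng):
    """Resample the population, weighting parents by their trait's fitness."""
    weights = [fitness[trait] for trait in population]
    return rng.choices(population, weights=weights, k=len(population))

rng = random.Random(42)
fitness = {"adapted": 1.5, "baseline": 1.0}  # hypothetical reproductive edge
population = ["adapted"] * 10 + ["baseline"] * 90

for _ in range(100):
    population = next_generation(population, fitness, rng)

share = population.count("adapted") / len(population)
print(f"'adapted' share after 100 generations: {share:.2f}")
```

Starting from a 10% minority, the advantaged trait spreads toward fixation; with a smaller fitness edge or a smaller population, random drift can still extinguish it, which is the stochastic side of selection.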


An example shared by Robert Frank in his book “The Darwin Economy” is the case of the large antlers of the bull elk. These antlers developed as an instrument for attracting mates rather than for warding off predators. Big antlers suggest a greater likelihood that a bull elk will marginalize the elks with smaller antlers. Over time, the bull elks with small antlers would die off, since they would not be able to produce offspring and pass on their traits; the bull elks would thus largely comprise elks with large antlers. The flip side, however, is that large antlers compromise mobility, making their bearers more likely to be attacked by predators. Although the individual elk with large antlers might succeed in staying around over time, the compromised mobility associated with large antlers would, overall, hurt the propagation of the species as a collective group. We will return to this very important concept later: the interests of individual animals are often profoundly in conflict with the broader interests of their own species. Corresponding to the development of the natural selection mechanism is the concept of the “survival of the fittest,” introduced by Herbert Spencer. One often uses natural selection and survival of the fittest interchangeably, and that is plainly wrong. Natural selection never claims that the species that emerges is the strongest, the fastest, or the largest: it simply claims that the species will be the fittest, namely that it will evolve in a manner best suited to the environment in which it resides. Put another way: survival of the most sympathetic is perhaps more applicable. Organisms that are more sympathetic and caring, and that work in harmony with the exigencies of an environment largely outside of their control, are more likely to succeed and thrive.


We will digress into the world of business. A common conception that is widely discussed is that businesses must position toward a winner-take-all strategy – especially in industries that have very high entry costs. Once these businesses entrench themselves in the space, the next immediate initiative is to launch a full-frontal assault involving huge investments to capture the mind and the wallet of the customer. Peter Thiel says: “Competition is for losers. If you want to create and capture lasting value, look to build a monopoly.” Once that is built, it would be hard to displace!


Scaling the organization intentionally is key to long-term success. There are a number of factors that contribute toward developing scale and thus establishing a strong footing in the particular markets. We are listing some of the key factors below:

  1. Barriers to entry: Some organizations have naturally cost-prohibitive barriers to entry, like utility companies or automobile plants; they require large investments. On the other hand, organizations can themselves influence and erect huge barriers to entry where none previously existed, by massively investing in infrastructure, distribution, customer acquisition and retention, brand and public relations. Organizations that are able to do this rapidly and at massive scale are the ones expected to exercise leverage over a big consumption base well into the future.
  2. Multi-sided platform impacts: The value of information across multiple subsystems – company, supplier, customer, government – increases disproportionately as the platform expands. We noted earlier that when a city grows by 100%, innovation and economic output grow by roughly 115% – the concept of super-linear scaling. As more nodes are introduced into the system and a better infrastructure is created to support communication and exchange between the nodes, the more entrenched the business becomes. Interestingly, the business's resource use grows at a sub-linear scale – namely, it consumes proportionately fewer resources as it grows. Hence we see large unicorn valuations among companies where investors and market makers place calculated bets of colossal magnitude. The magnitude of such investments is a relatively recent phenomenon, largely driven by the advances in technology that connect all stakeholders.
  3. Investment in learning: To manage scale is also to be selective about the information a system receives and how that information is processed internally, as well as how it is relayed to the external system or environment. This requires massive investment in areas like machine learning, artificial intelligence and big data: increased computational power, development of new learning algorithms, etc. Organizations have to align infrastructure and capability while also working with external environments – through public relations, lobbying groups and policymakers – to chaperone a comprehensive and very complex, hard-to-replicate learning organism.
  4. Investment in brand: Brand personifies the value attributes of an organization. One connects brand to customer experience and perception of the organization’s product. To manage scale and grow, organizations must invest in brand: to capture increased mindshare of the consumer. In complexity science terms, the internal systems are shaped to emit powerful signals to the external environment and urge a response. Brand and learning work together to allow a harmonic growth of an internal system in the context of its immediate environment.
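The super-linear and sub-linear claims in the list above can be written as power laws, output ∝ size^β. The exponent 1.15 matches the urban-scaling figure cited in the text; 0.85 for sub-linear resource use is an illustrative counterpart, not a quoted figure.

```python
# Power-law scaling: output grows as size**beta.
# beta > 1: super-linear (the urban-scaling effect cited above).
# beta < 1: sub-linear (resource use growing slower than the system).

def scaled_output(size_multiplier, beta):
    """Output multiplier when the system's size changes by size_multiplier."""
    return size_multiplier ** beta

super_linear = scaled_output(2.0, 1.15)  # doubling size, beta = 1.15
sub_linear = scaled_output(2.0, 0.85)    # doubling size, beta = 0.85

print(f"super-linear (beta=1.15): doubling size multiplies output by {super_linear:.2f}")
print(f"sub-linear  (beta=0.85): doubling size multiplies resource use by {sub_linear:.2f}")
```

Under β = 1.15, doubling size more than doubles output (a multiplier of about 2.22), while under β = 0.85 resource use grows by only about 1.80x, the combination that makes entrenched platforms so attractive to investors.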


However, one must revert to the science of complexity to understand the long-term challenges of a winner-take-all mechanism. We have already seen that what is good for the individual bull elk might not be best for the species in the long term. Super-linear scaling systems also emit significant negative by-products. Thus, the question we need to ask is whether organizations, in their ambition to pursue scale and market entrenchment, are paradoxically cultivating their own seeds of destruction.

Model Thinking

Model Framework

The fundamental tenet of theory is the concept of “empiria.” Empiria refers to our observations. Based on observations, scientists and researchers posit a theory – this is part of scientific realism.

A scientific model is a causal explanation of how variables interact to produce a phenomenon, usually linearly organized. A model is a simplified map consisting of a few primary variables that are gauged to have the most explanatory power for the phenomenon being observed. We discussed Complex Physical Systems and Complex Adaptive Systems earlier in this chapter. It is relatively easier to map a CPS to a model than a CAS, largely because models become unwieldy as they internalize more variables, especially when those variables interact heavily with one another. A simple analogy is multiple regression: when several independent variables are strongly correlated with one another, multicollinearity arises, and the coefficient estimates become unstable and lose predictive value.
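The regression analogy can be demonstrated directly. In the synthetic sketch below (all data invented for illustration), two nearly identical predictors make the individual least-squares coefficients unstable, even though their sum, and hence the fitted values, stay well behaved.

```python
import random

# Multicollinearity sketch: fit y ~ b1*x1 + b2*x2 (no intercept) by the
# normal equations, where x2 is an almost exact copy of x1.

def ols_2var(x1, x2, y):
    """Least-squares coefficients for y ~ b1*x1 + b2*x2 via normal equations."""
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    sy1 = sum(a * b for a, b in zip(x1, y))
    sy2 = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12          # near zero when x1, x2 are collinear
    return (s22 * sy1 - s12 * sy2) / det, (s11 * sy2 - s12 * sy1) / det

rng = random.Random(0)
x1 = [rng.gauss(0, 1) for _ in range(200)]
x2 = [a + rng.gauss(0, 0.01) for a in x1]  # x2 is nearly a copy of x1
y = [a + rng.gauss(0, 0.1) for a in x1]    # true model: y = 1 * x1

b1, b2 = ols_2var(x1, x2, y)
print(f"b1 = {b1:.2f}, b2 = {b2:.2f}, b1 + b2 = {b1 + b2:.2f}")
# Individually b1 and b2 can swing wildly; their sum stays near the true 1.0.
```

This is the instability the paragraph describes: the model cannot tell the two correlated variables apart, so the attribution of effect between them is essentially arbitrary.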


Research projects generally tend either to look at a single case study or to describe a number of similar cases that are logically grouped together. Constructing a simple model that is general and applies to many instances is difficult, if not impossible. Variables are subject to a researcher's incomplete understanding of the variable or to the variable's volatility. What further accentuates the problem is that the researcher may miss how the variables play against one another and the resultant impact on the system. Thus, our understanding of a system can proceed through some sort of model mechanics, yet we share the common belief that building a model that provides all of the explanatory answers is difficult, if not impossible. Despite understanding the limitations of modeling, we still develop frameworks and artifact models because we sense in them an indispensable set of tools to transmit the results of research to practical use cases. We boldly generalize our findings from empiria into general models that we hope will explain empiria best. And let us be mindful that it is possible – more so in CAS than in CPS – that multiple models might fight over their explanatory powers, simply because of the vagaries of uncertainty and stochastic variation.

Popper says: “Science does not rest upon rock-bottom. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and when we cease our attempts to drive our piles into a deeper layer, it is not because we have reached firm ground. We simply stop when we are satisfied that they are firm enough to carry the structure, at least for the time being”. This leads to the satisficing solution: if a model can choose the least number of variables to explain the greatest amount of variation, the model is relatively better than other models that would select more variables to explain the same. In addition, there is always a cost-benefit analysis to be taken into consideration: if adding x variables to explain variation in the outcome is not meaningfully better than a model with fewer than x variables, then one would want to fall back on the smaller model because it is less costly to maintain.
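This trade-off between fit and parsimony is exactly what penalized model-selection criteria formalize. The sketch below (an illustration under my own assumptions: simulated near-linear data and a Gaussian AIC of the form n·ln(RSS/n) + 2k) compares a 2-parameter line against a 6-parameter polynomial. The bigger model always fits at least as tightly, but the penalty term asks whether the extra variables earn their keep.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=n)  # a truly linear process plus noise

def fit_rss(degree):
    """Least-squares polynomial fit; returns the residual sum of squares."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid)

def aic(n_params, rss):
    # Gaussian AIC up to an additive constant: n * ln(RSS / n) + 2k.
    # The 2k term is the "cost" of carrying extra variables.
    return n * np.log(rss / n) + 2 * n_params

rss_simple, rss_complex = fit_rss(1), fit_rss(5)
print("line,   2 params, AIC:", aic(2, rss_simple))
print("poly-5, 6 params, AIC:", aic(6, rss_complex))
```

The nested, more complex model necessarily has lower RSS; whether its AIC is lower too depends on whether the fit improvement outweighs the penalty, which is the satisficing judgment the text describes.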


Researchers must address three key elements in the model: time, variation and uncertainty. How do we craft a model which reflects the impact of time on the variables and the outcome? How do we present variation in the model? Different variables might vary differently, independent of one another. How do we present the deviation of the data in a parlance that allows us to make meaningful conclusions regarding the impact of the variations on the outcome? Finally, is the data being considered actual or proxy data? Are the observations approximate? How do we then draw the model to incorporate the fuzziness: would confidence intervals on the findings be good enough?
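As a minimal sketch of that last question, here is how one might attach an approximate 95% confidence interval to a noisy measurement using only the standard library. The observations are hypothetical proxy data invented for the example, and the 1.96 multiplier assumes a normal approximation, which is itself a modeling choice.

```python
import math
import statistics

# Hypothetical noisy observations of some quantity (proxy data, not real measurements).
observations = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.0, 10.3, 9.6]

n = len(observations)
mean = statistics.fmean(observations)
sd = statistics.stdev(observations)        # sample standard deviation
half_width = 1.96 * sd / math.sqrt(n)      # ~95% interval under a normal approximation

lower, upper = mean - half_width, mean + half_width
print(f"estimate: {mean:.2f}, 95% CI: ({lower:.2f}, {upper:.2f})")
```

The interval makes the fuzziness explicit: instead of reporting a single number, the model reports a range within which the underlying quantity plausibly lies, which is often "good enough" for drawing guarded conclusions.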

Two other concepts in model design are equally important: Descriptive Modeling and Normative Modeling.

Descriptive models aim to explain the phenomenon. They are bounded by that goal and that goal only.

There are certain types of explanations that descriptive models fall back on. One is to explain by looking at data from the past and attempting to draw a cause-and-effect relationship. If the researcher is able to draw a complete cause-and-effect relationship that meets the test of time, and independent tests replicate the results, then the causality turns into a law for the limited use case or phenomenon being explained. Another method is to draw upon context: explaining a phenomenon by looking at the function that the activity fulfills in its context. For example, a dog barks at a stranger to secure its territory and protect the home. The third and more interesting type of explanation is generally called intentional explanation: the variables work together to serve a specific purpose, and the researcher determines that purpose and thus reverse engineers an understanding of the phenomenon by understanding the purpose and how the variables conform to achieve it.

This last element also leads us to the other method of modeling – namely, normative modeling. Normative modeling differs from descriptive modeling because the target is not simply to gather facts to explain a phenomenon, but rather to figure out how to improve or change the phenomenon toward a desirable state. The challenge, as you might have already perceived, is that the subjective shadow looms high and long, and the ultimate finding of a normative model could essentially be a teleological representation or self-fulfilling prophecy of the researcher in action. While this is relatively more welcome in a descriptive world, since subjectivism is diffused among a larger group that yields one solution, it is not the best in a normative world, since variation of opinions that reflect biases can pose a problem.

How do we create a representative model of a phenomenon? First, we weigh whether the phenomenon is to be understood as a mere explanation or whether we extend it to incorporate our normative spin on the phenomenon itself. It is often the case that we might have to craft different models and then weigh them against one another to find the one that best represents the phenomenon. Some of the methods are fairly simple, such as bringing diverse opinions to a table and then agreeing upon one specific model. The advantage of such an approach is that it provides a degree of objectivity in the model – at least in so far as it removes the divergent subjectivity that weaves into the various models. Another alternative is value analysis, a mathematical method where the selection of the model is carried out in stages: you define the criteria of the selection and then the importance of the goal (if it is a normative model). Once all of the participants reach general agreement, you have the makings of a model. The final method is to incorporate all of the outliers and data points in the phenomenon that the model seeks to explain, and then offer a shared belief about the salient features of the model that would be best applied to gain information about the phenomenon in a predictable manner.


There are various languages that are used for modeling:

Written Language refers to the natural-language description of the model. If the price of butter goes up, the quantity of butter demanded will go down. Written-language models can be used effectively to inform all of the other types of models that follow below. This often goes by the name of “qualitative” research, although we find that a bit limiting.  Even a simple statement like “This model approximately reflects the behavior of people living in a dense environment” could qualify as a written-language model that seeks to shed light on the object being studied.

Icon Models refer to a pictorial representation and probably the earliest form of model making. It seeks to only qualify those contours or shapes or colors that are most interesting and relevant to the object being studied. The idea of icon models is to pictorially abstract the main elements to provide a working understanding of the object being studied.

Topological Models refer to how the variables are placed with respect to one another, and thus help in creating a classification or taxonomy of the model. One can have logical trees, class trees, Venn diagrams, and other imaginative pictorial representations of fields to further shed light on the object being studied. Such pictorial representations must abide by constant scale, direction and placement. In other words, if the variables are placed on a different scale on different maps, it would be hard to draw logical conclusions by sight alone. In addition, if the placements are on different axes in different maps or have different vectors, it is hard to make comparisons and arrive at a shared consensus and a logical end result.

Arithmetic Models are what we generally fall back on most. The data is measured on an arithmetic scale and presented via tables, equations or flow diagrams. The nice thing about arithmetic models is that you can show multiple dimensions, which is not possible with other modeling languages. Hence the robustness and general applicability of such models are huge, and they are widely used as a key modeling language.

Analogous Models refer to crafting explanations using the power of analogy. For example, when we talk about waves, we could be talking of light waves, radio waves, historical waves, etc.  These metaphoric representations can be used to explain phenomena, but at best the explanatory power is nebulous, and it would be difficult to explain the variations and uncertainties between two analogous models.  However, analogy is still used to transmit information quickly through verbal expressions like “similarly”, “equivalently”, “looks like”, etc. In fact, extrapolation is a widely used method in modeling, and we would place it under the analogous model to a great extent: we time-box the variables in the analogous model to one instance and the extrapolated model to another instance, and we tie them together with mathematical equations.


Comparative Literature and Business Insights

“Literature is the art of discovering something extraordinary about ordinary people, and saying with ordinary words something extraordinary.” – Boris Pasternak


“It is literature which for me opened the mysterious and decisive doors of imagination and understanding. To see the way others see. To think the way others think. And above all, to feel.” – Salman Rushdie


There is a common theme that cuts across literature and business. It is called imagination!

Great literature seeds the mind to imagine faraway places across times and unique cultures. When we read a novel, we are exposed to complex, richly defined characters, and the reader’s subjective assessment of the character and the context defines their understanding of how the characters navigate their relationships and their environment. Great literature offers many pauses for thought, and long after the book is read through, the theme gently seeps, like silt, into the reader’s cumulative experiences. It is in literature that the concrete outlook of humanity receives its expression. Comparative literature – literature assimilated across many different countries – enables a diversity of themes that intertwine with the reader’s experiences, augmented by the reality of what they immediately experience: home, work, etc. It allows one not only to be capable of empathy but also to craft out the fluid dynamics of ever-changing concepts by dipping into many different case studies of human interaction. The novel and poetry are the bulwarks of literature. It is as important to study a novel as it is to enjoy great poetry. The novel is characterized by a plot (or plots) and a rich tapestry of characters navigating their environments; poetry is the celebration of the ordinary into extraordinary enactments of the rhythm of language that transport the reader, through image and metaphor, into single moments. It breaks the linear process of thinking – a perpendicular to the novel.


Business insights are generally a result of acute observation of trends in the market, internal processes, and general experience. Some business schools practice the case study method, which allows the student to have a fairly robust set of data points to fall back upon. Some of these case studies are fairly narrow, but there are some that get one to think about personal dynamics. It is a fact that personal dynamics, biases and positioning play a very important role in how one advocates, views, or acts upon a position. Schools are now layering in classes on ethics to underscore that there are some fundamental protocols of human nature that one has to follow: the famous adage – all is fair in love and war – has continued to lose its edge over time. Globalization, environmental consciousness, individual rights, the idea of democracy, the right of fair representation, community service and business philanthropy are playing a bigger role in today’s society. Thus, business insights today are a result of reflection across multiple levels of experience that encompass not just the company or the industry, but a broader array of elements that exercise influence on the company’s direction. In addition, one always seeks an end in mind: people perpetually embrace a vision that is shaped by their judgments, observations and thoughts. Poetry adds the final wing for the flight into this metaphoric realm of interconnections – for that is always what a vision is – a semblance of harmony that inspires and resurrects people to action.


I contend that comparative literature is a leading indicator that allows a person to get a feel for the general direction of the express and latent needs of people. To be sure, comparative literature does not offer solutions. Great literature does not portend a particular end; it leaves open a multitude of possibilities and what-ifs. Readers can literally transport themselves into the environment and wonder how they would act. The jump into a vicarious existence steeps the reader in a reflection that sharpens the intellect. This allows the reader in a business to be better positioned to excavate and address the needs of current and potential customers across boundaries.

“Literature gives students a much more realistic view of what’s involved in leading” than many business books on leadership, said the professor. “Literature lets you see leaders and others from the inside. You share the sense of what they’re thinking and feeling. In real life, you’re usually at some distance and things are prepared, polished. With literature, you can see the whole messy collection of things that happen inside our heads.” – Joseph L. Badaracco, the John Shad Professor of Business Ethics at Harvard Business School (HBS)

Aaron Swartz took down a piece of the Berlin Wall! We have to take it all down!

“The world’s entire scientific … heritage … is increasingly being digitized and locked up by a handful of private corporations… The Open Access Movement has fought valiantly to ensure that scientists do not sign their copyrights away but instead ensure their work is published on the Internet, under terms that allow anyone to access it.”  – Aaron Swartz

Information, in the context of scholarly articles produced by researchers at universities and think tanks, is not a zero-sum game. In other words, one person having access does not mean someone else has less. When you start erecting “Berlin” walls in the information arena within the halls of learning, then learning itself is compromised. In fact, contributing or granting the intellectual estate into the creative commons serves a higher purpose in society – access to information and, hence, a feedback mechanism that ultimately enhances the value of the end product itself. How? Because the product has now been distributed across a broader and more diverse audience, and it is open to further critical analysis.


The universities have built a racket. They have deployed a Chinese wall between learning in a cloistered environment and the world of those who are not immediate participants. The Guardian wrote an interesting article on this matter, and a very apt quote puts it all together.

“Academics not only provide the raw material, but also do the graft of the editing. What’s more, they typically do so without extra pay or even recognition – thanks to blind peer review. The publishers then bill the universities, to the tune of 10% of their block grants, for the privilege of accessing the fruits of their researchers’ toil. The individual academic is denied any hope of reaching an audience beyond university walls, and can even be barred from looking over their own published paper if their university does not stump up for the particular subscription in question.


This extraordinary racket is, at root, about the bewitching power of high-brow brands. Journals that published great research in the past are assumed to publish it still, and – to an extent – this expectation fulfils itself. To climb the career ladder academics must get into big-name publications, where their work will get cited more and be deemed to have more value in the philistine research evaluations which determine the flow of public funds. Thus they keep submitting to these pricey but mightily glorified magazines, and the system rolls on.”


JSTOR is a not-for-profit organization that has invested heavily in providing an online system for archiving, accessing, and searching digitized copies of over 1,000 academic journals.  More recently, I noticed some effort on their part to allow public access to only 3 articles over a period of 21 days. This stinks! This policy reflects an intellectual snobbery of beyond-Himalayan proportions. The only folks that have access to these academic journals and studies are professors and researchers affiliated with a university and university libraries.  Aaron Swartz noted the injustice of hoarding such knowledge and tried to distribute a significant proportion of JSTOR’s archive through one or more file-sharing sites. And what happened thereafter was perhaps one of the biggest miscarriages of justice.  The same justice that disallows asymmetry of information on Wall Street is being deployed to preserve the asymmetry of information in the halls of learning.


MSNBC contributor Chris Hayes criticized the prosecutors, saying “at the time of his death Aaron was being prosecuted by the federal government and threatened with up to 35 years in prison and $1 million in fines for the crime of—and I’m not exaggerating here—downloading too many free articles from the online database of scholarly work JSTOR.”

The Associated Press reported that Swartz’s case “highlights society’s uncertain, evolving view of how to treat people who break into computer systems and share data not to enrich themselves, but to make it available to others.”

Christopher Soghoian, a technologist and policy analyst with the ACLU, said, “Existing laws don’t recognize the distinction between two types of computer crimes: malicious crimes committed for profit, such as the large-scale theft of bank data or corporate secrets; and cases where hackers break into systems to prove their skillfulness or spread information that they think should be available to the public.”


Kelly Caine, a professor at Clemson University who studies people’s attitudes toward technology and privacy, said Swartz “was doing this not to hurt anybody, not for personal gain, but because he believed that information should be free and open, and he felt it would help a lot of people.”

And then there were some modest reservations, with Swartz’s actions attributed to reckless judgment. I contend that this does injustice to someone of Swartz’s commitment and intellect. The recklessness was his inability to grasp that an imbecile in the system would pursue 35 years of imprisonment and a $1M fine; it was not that he was unaware of what he was doing, but that he believed, as do many, that scholarly academic research should be freely available to all.

We have a Berlin Wall that needs to be taken down. Swartz started that work, but he was unable to keep at it. We must not rest in this endeavor; everyone ought to actively petition their local congressman to push bills that will allow open access to these academic articles.

John Maynard Keynes warned of the folly of “shutting off the sun and the stars because they do not pay a dividend”, because what is at stake here is the reach of the light of learning. Aaron was at the vanguard of that movement, and we should persevere to become the points of light that compel JSTOR to disseminate the information it guards so jealously.






The Unbearable Lightness of Being

Where the mind is without fear and the head is held high
Where knowledge is free
Where the world has not been broken up into fragments
By narrow domestic walls
Where words come out from the depth of truth
Where tireless striving stretches its arms towards perfection
Where the clear stream of reason has not lost its way
Into the dreary desert sand of dead habit
Where the mind is led forward by thee
Into ever-widening thought and action
Into that heaven of freedom, my Father, let my country awake.

– Rabindranath Tagore

Among the many fundamental debates in philosophy, one of the most enduring has been around the concept of free will. The debate stems from two opposing arguments.

1)      Since future actions are governed by the circumstances of the present and the past, human beings’ future actions are predetermined on account of the learnings from the past.  Hence, the actions that happen are not truly a consequence of free will.

2)      The counter-argument is that future actions may not necessarily be determined and governed by the legacy of the present and the past, and hence leaves headroom for the individual to exercise free will.

Now one may wonder what determinism, or the lack of it, has to do with the current state of things in an organizational context.  How is this relevant? Why are the abstract notions of determinism and free will important enough to be considered in the context of organizational evolution?  How does the question lend itself to structured institutions like business organizations, whose sole purpose is to create products and services to meet market demand?

So we will throw a factual wrinkle into this line of thought: we will now introduce an element of chance. How does chance change the entire dialectic? Simply put, chance is an unforeseen and random event that may not be predetermined; in fact, a chance event may not have a causal trigger. And chance or luck could be meaningful enough to untether an organization and its people to explore alternative paths.  It is how the organization and the people are aligned to take advantage of that random, nondeterministic future that could make a huge difference to the long-term fate of the organization.

Inductive logic holds that what is true for n and n+1 would also be true for n+2.  Induction creates predictability, and hence organizations create pathways to exploit its logical extension. It is the most logical apparatus that exists to advance groups in a stable but robust manner against the multitude of challenges that they have to grapple with. After all, the market is governed by animal spirits! But let us think through this very carefully.  All competition or collaboration that occurs among groups to address market demands results in homogeneous behavior with generally homogeneous outcomes.  Simply put, products and services become commoditized. Their variance is not unique and distinctive.  However, they could be just distinctive enough to eke out enough profit at the margins before being absorbed into a bigger whole. At that point, identity is effaced over time.  Organizations gravitate to a singularity.  Unique value propositions wane over time.

So let us circle back to chance.  Chance is our hope of creating divergence. Chance is the factoid that cancels out the inductive vector of industrial organization. Chance does not sit waiting around the corner; it is no “Waiting for Godot” metaphor.  If it always did, it would have been imputed by the determinists in their inductive world and we would end up with a dystopian, homogeneous future.  Chance happens.  And sometimes it has a very short half-life. If the organization and its people are aligned and their mindset is adapted toward embracing and exploiting that fleeting factoid of chance, the consequences could be huge.  New models would emerge, new divergent paths would be blazed, and society and markets would burst into a garden of colorful ideas in a virtual oasis of new markets.

So now to tie this all to free will and to the unbearable lightness of being! It is the existence of chance that creates the opportunity for an individual to exercise free will, but it is the organization’s responsibility to allow the individual to unharness themselves from organizational inertia. Thus, organizations have to perpetuate an environment wherein employees are afforded some headroom to break away.  And I don’t mean break away as in people leaving the organization to do their own gigs; I mean breaking away in thought and action within the boundaries of the organization, to be open to the element of chance and exploit it. Great organizations do not just encourage the lightness of being – unharnessing the talent; rather, the great organizations are the ones that make the lightness of being unbearable.  These individuals are left with nothing but an awareness and openness to chance, with which to create incredible value – far more incredible, awe-inspiring and momentous than the more serene state of business as usual.

Importance of Heroes and Narratives in Organizations

“My own heroes are the dreamers, those men and women who tried to make the world a better place than when they found it, whether in small ways or great ones. Some succeeded, some failed, most had mixed results… but it is the effort that’s heroic, as I see it. Win or lose, I admire those who fight the good fight.” – George R. R. Martin


“Stories, like people and butterflies and songbirds’ eggs and human hearts and dreams, are also fragile things, made up of nothing stronger or more lasting than twenty-six letters and a handful of punctuation marks. Or they are words on the air, composed of sounds and ideas – abstract, invisible, gone once they’ve been spoken – and what could be more frail than that? But some stories, small, simple ones about setting out on adventures or people doing wonders, tales of miracles and monsters, have outlasted all the people who told them, and some of them have outlasted the lands in which they were created.” – Neil Gaiman

Heroes are not born. Circumstance and happenstance create heroes. In some cases, heroes are individuals who walk into a minefield of uncertainty that threatens their natural inclination for self-preservation, in the interest of value systems and people that are alien to them. Thus, a private in an army is already a hero in the fact that he or she is walking into possible harm’s way to serve and protect people not necessarily related to them. One has heard the adage: one man’s freedom fighter is another man’s terrorist. Someone whom we call a terrorist may be perceived as a hero by someone else. In this case it all becomes a matter of point of view, but the fundamental point remains: a hero is a person who abnegates and abjures their right to self-preservation for some greater perceived good.

Sustaining innovation is a vital yet difficult task. Innovation requires the coordinated efforts of many actors to facilitate (1) the recombination of ideas to generate novelty, (2) real-time problem solving, and (3) linkages between present innovation efforts and past experiences and future aspirations. Innovation narratives are cultural mechanisms that address these coordination requirements by enabling translation. Specifically, innovation narratives are powerful mechanisms for translating ideas across the organization so that they are comprehensible and appear legitimate to others. Narratives also enable people to translate emergent situations that are ambiguous or equivocal so as to promote real-time problem solving. As they accumulate, innovation narratives provide a generative memory for organizations, enabling people to translate ideas from particular instances of past innovation to inform current and future efforts.

The concept of collective identity has gained prominence within organizational theory as researchers have studied how it consequentially shapes organizational behavior. Much less attention, however, has been paid to the question of how nascent collective identities become legitimated. Although it is conventionally argued that membership expansion leads to collective identity legitimacy, the notion of cultural entrepreneurship suggests that the relationship is more complex and is culturally mediated by the stories told by group members. Legitimacy is more likely to be achieved when members articulate a clear, defining collective identity story that identifies the group’s orienting purpose and core practices. Although membership expansion can undermine legitimation by introducing discrepant actors and practices to a collective identity, this potential downside is mitigated by compelling narratives, which help to coordinate expansion. And that is where heroes can be interwoven into organizational theory and behavior. It is important to create environments that, by happenstance and circumstance, create heroes. The architecture of great organizations weaves heroes and narratives into their tapestry.

Heroes and narratives are instrumental in organizations that forge a pathway to long-term sustenance and growth. Hence, we are quick to idolize figures – Iacocca, Welch, Jobs, Ellison, Gates, Benioff, Gerstner, Branson, Bezos, Zuckerberg, Brin and Page, etc.  We learn narratives through case studies, news print, scholarly books on successful companies; and we emulate and steal and copy and parody and so much more … not necessarily because we want to be them but we want to create our identity in our own lair in ecosystems that move with or against the strongest currents.

So it is essential to celebrate the heroes and narratives of great companies as an additional instrument to ignite engagement, foray into uncharted territories and conquer the unknown. Personally, I have also found solace in reading biographies of people who have made a difference, and great pleasure in vicariously living through the ebbs and flows of great companies.


Reality Distortion Field: A Powerful Motivator in Organizations!

“The reality distortion field was a confounding mélange of a charismatic rhetorical style, an indomitable will, and an eagerness to bend any fact to fit the purpose at hand. If one line of argument failed to persuade, he would deftly switch to another. Sometimes, he would throw you off balance by suddenly adopting your position as his own, without acknowledging that he ever thought differently.”

–         Andy Hertzfeld on Steve Jobs’ Reality Distortion Field.

Many of us have heard the term Reality Distortion Field.  It has been attributed to Steve Jobs, who was widely known to communicate messages to his constituency in a manner such that the reality of the situation was supplanted by his packaging of the message, so that people would take the bait and pursue paths that would, upon closer investigation, be dissonant from reality. But having been an avid acolyte of Jobs, I imagine that he himself would be disturbed and unsettled by the label. Since when did the promise of a radiant future constitute a Reality Distortion Field? Since when did the ability of a person to embrace what is seemingly impossible and far-fetched, and instill confidence in the troops to achieve it, constitute a Reality Distortion Field? Since when did the ability of leadership to share in the wonders of unique and disruptive creations constitute a Reality Distortion Field? Since when did dreams of a better future, underpinned with executable actions to achieve it, constitute a Reality Distortion Field?

The Reality Distortion Field label reflects the dissonance between what is and what needs to be. It is a slapstick term which suggests that you are envisioning tectonic rifts between reality and possibility, and that you are leading awestruck, starry-eyed followers off a potential cliff.  Some people have renamed RDF as hype or bulls*#t.  They believe that RDF is extremely bad for organizations because it pushes people outside the comfort zone of physical and logical constraints and is a recipe for disaster. The argument continues that organizations must be grounded in the construct of reality, and communicate that reality, in order to advance. I beg to differ.

So let me address this on two fronts: first, the RDF label itself; and second, the position that if we truly accept what RDF means, then it is the single most important attribute that a strong leader ought to embrace in the organization.

The RDF label:

We all know this to be true: a rose by any other name is still a rose. We just happen to call this particular rose an RDF. It is presumed to be the ability of a person to cast possibilities in a different light – so much so that the impossibilities are reduced to elements just within the grasp of reality.  Now I ask you: what is wrong with that? For a leader to be able to cast their vision within the inimitable grasp of an organization is a huge proxy for the leader’s faith in the people of the organization. If a project would realistically take 3 months but an RDF is cast to get the project done in 15 days, that is a tall order; but think of the consequences if people are “seduced” into the RDF and act upon it. It immediately unfolds new pathways of collaboration and unforeseen discoveries of super-efficient and effective methods. It creates trench camaraderie. It distills focus into singular points to be executed against. It instills and ignites passion and engagement around the new stakes in the ground; people become keepers of one another for a consequential and significant conquest. And once the goal is accomplished, it brings out the creative energies and limitless possibilities of disruptive innovation in means and ends.  Of course, one could also counter-argue a plethora of incidental issues in such cases: employees would burn out under the burden of unrealistic goals; employees are set up more for failing than succeeding; it would create disorder among groups working together to meet RDF standards; and if one were to fall short, it could be the last straw that breaks the camel’s back. So essentially this speaks to the magnitude of the RDF schema that is being pushed out by leadership.

RDF and the beneficial impact to an organization:

It is the sine qua non of great leadership to be able to push organizations beyond the boundaries of plain convenience.  I have, in my career, been fortunate to have been challenged and, on many occasions, forced out of my comfort zone. Having done so successfully many times has also given me the confidence to scale mountains. And that confidence is a prerequisite that organization leadership has to provide on a daily basis.  After all, one of the biggest assets that an employee in an organization ought to have is pride and a sense of accomplishment in their work. RDF unfolds that possibility.

We hear of disruptive innovations, defined as innovations that leapfrog the bounds of technology inertia.  How does a company enable that? Certainly not through incremental thinking. It takes a vision that lies marginally outside our aggregated horizon of sight.  Today’s age of path-breaking ideas and execution has been the result of visionaries who aimed beyond the horizon, instilled faith in the rank and file to align and execute, and made the impossible possible.  We ought to thank our stars for leaders who emit an RDF and lead us out of the tenebrous existence of our diurnal professional lives.

There is absolutely no doubt that such leadership creates resistance and fierce antipathy among some.  But despite some of the ill effects, the vector that drives great innovation lies in the capacity of the organization to embrace degrees of RDF to hasten progress and make the organization competitive, distinctive and powerful.