Internal versus External Scale
This article discusses internal and external complexity before we tee up a more detailed discussion on internal versus external scale. It acknowledges that complex adaptive systems have inherent internal and external complexities, and that these are not additive: their combined impact is exponential. Hence, we have to sift through our understanding and perhaps even review the salient aspects of complexity science that were covered in more detail in an earlier chapter. Revisiting complexity science is important, and we will return to it across other blog posts to drive home the fundamental concepts and their practical implications for management and for solving challenges at a business or even a grander social scale.
A complex system is part of a larger environment. It is safe to say that the larger environment is more complex than the system itself. But for the complex system to work, it needs to depend upon a certain level of predictability and regularity between its initial state and the events associated with it, or among the interactions of the variables within the system itself. Note that I am covering both complex physical systems and complex adaptive systems in this discussion. A system within an environment has an important attribute: it serves as a receptor to signals from external variables of the environment that impact the system. The system will either process a signal or discard it, largely based on what the system is trying to achieve. We will dedicate an entire article to systems engineering and systems thinking later, but the overarching point is that a system exists to serve a definite purpose. All systems depend on resources and exhibit a certain capacity to process information. Hence, a system will try to extract as many regularities as possible to enable predictable dynamics in an efficient manner and fulfill its higher-level purpose.
Let us understand external complexity; we can use the term environmental complexity interchangeably. External complexity represents physical, cultural, social, and technological elements that are intertwined. These environments, each beleaguered with its own grades of complexity, act as a mold that shapes the operating systems that are mere artifacts within them. If an operating system fits well within the mold, then a measure of fitness or harmony arises between internal complexity and external complexity. This is the root of dynamic adaptation. When external environments are very complex, there are a lot of variables at play, and thus an internal system has to process more information in order to survive. How the internal system reacts to the external system is therefore important, and the key bridge between the two is learning. Does the system learn and improve outcomes on account of continuous learning, and does it continually modify its existing form and functional objectives as it learns from external complexity? How is the feedback loop monitored and managed when one deals with internal and external complexities? The environment generates random problems and challenges, and the internal system has to accept or discard these problems and then establish a process to distribute them among its agents so that the problems it chooses to take on are solved efficiently. There is always a mechanism at work that tries to align internal complexity with external complexity, since it is widely believed that the ability to efficiently align the two is the key to maintaining a relative competitive edge or to intentionally making progress in solving a set of important challenges.
Internal complexity arises from the sub-elements that interact as constituents of a system residing within the larger context of the external complex system or environment. It depends on the number of variables in the system, the hierarchical complexity of those variables, the internal capability for information to pass between levels and variables, and finally how the system learns from the external environment. There are five dimensions of complexity: interdependence, diversity of system elements, unpredictability and ambiguity, the rate of dynamic mobility and adaptability, and the capability of the agents to process information along with their individual channel capacities.
If we are discussing scale management, we need to ask some fundamental questions. What is scale in the context of complex systems? Why do we manage for scale? How does managing for scale advance us toward a meaningful outcome? How does scale compute in internal and external complex systems? What do we expect to see if we have managed for scale well? What does the future hold for us if we assume that we have optimized for scale and that this is the key objective function we have to pursue?
Model Thinking
Model Framework
The fundamental tenet of theory is the concept of “empiria”. Empiria refers to our observations. Based on observations, scientists and researchers posit a theory – it is part of scientific realism.
A scientific model is a causal explanation of how variables interact to produce a phenomenon, usually organized linearly. A model is a simplified map consisting of a few primary variables that are gauged to have the most explanatory power for the phenomenon being observed. We discussed Complex Physical Systems and Complex Adaptive Systems earlier in this chapter. It is relatively easier to map CPS to models than CAS, largely because models become unwieldy as they internalize more variables, especially when those variables interact heavily with one another. A simple analogy is multiple regression: when several independent variables are strongly correlated with one another, multicollinearity sets in, the coefficient estimates become unstable, and the model loses predictive value.
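To make the regression analogy concrete, here is a minimal sketch in Python (all data simulated for illustration): two nearly collinear predictors make the individual coefficient estimates swing wildly from sample to sample, even though their combined effect, and hence the fitted values, stay stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_once():
    # Two predictors that are almost perfectly correlated (near collinearity)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=0.01, size=200)
    y = 2.0 * x1 + rng.normal(scale=0.5, size=200)
    X = np.column_stack([np.ones_like(x1), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return beta[1], beta[2]

# Individual coefficients jump around from sample to sample, but their sum
# stays close to the true combined effect of 2.0: the hallmark of an
# unstable, multicollinear model.
coefs = np.array([fit_once() for _ in range(5)])
print(coefs)              # unstable individual coefficients
print(coefs.sum(axis=1))  # their sum hovers around 2.0
```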
Research projects generally tend either to look at a single case study or to describe a number of similar cases that are logically grouped together. Constructing a simple model that is general and applies to many instances is difficult, if not impossible. Variables are subject to a researcher's incomplete understanding of the variable or to the variable's volatility. What further accentuates the problem is that the researcher may miss how the variables play against one another and the resulting impact on the system. Thus, while our understanding of a system can be advanced through some sort of model mechanics, we share the common belief that building a model that provides all of the explanatory answers is difficult, if not impossible. Despite these limitations, we still develop frameworks and artifact models because we sense in them an indispensable set of tools to transmit the results of research to practical use cases. We boldly generalize our findings from empiria into general models that we hope will explain empiria best. And let us be mindful that it is possible, more so in CAS than in CPS, that multiple models will compete over their explanatory power simply because of the vagaries of uncertainty and stochastic variation.
Popper says: “Science does not rest upon rock-bottom. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and when we cease our attempts to drive our piles into a deeper layer, it is not because we have reached firm ground. We simply stop when we are satisfied that they are firm enough to carry the structure, at least for the time being”. This leads to the satisficing solution: if a model can use the fewest variables to explain the greatest amount of variation, that model is relatively better than other models that select more variables to explain the same. In addition, there is always a cost-benefit analysis to take into consideration: if adding variables does not explain the variation in the outcome meaningfully better than a model with fewer variables, one would want to fall back on the smaller model because it is less costly to maintain.
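As a small illustration of that cost-benefit test, the sketch below (all data simulated, and AIC chosen here as one convenient parsimony criterion, an assumption on my part) compares a two-variable model against one that adds a useless third variable; the penalty for the extra parameter leaves the leaner model with the better score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)          # pure noise; contributes nothing to y
y = 1.5 * x1 + rng.normal(scale=1.0, size=n)

def aic(X, y):
    # Akaike Information Criterion for an ordinary least squares fit:
    # rewards fit quality, penalizes every added parameter.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]
    return n * np.log(rss / n) + 2 * k

small = np.column_stack([np.ones(n), x1])        # fewer variables
large = np.column_stack([np.ones(n), x1, x2])    # adds a useless variable
print(aic(small, y), aic(large, y))  # the smaller model scores lower (better)
```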
Researchers must address three key elements in the model: time, variation, and uncertainty. How do we craft a model that reflects the impact of time on the variables and the outcome? How do we represent variation in the model, given that different variables might vary independently of one another? How do we present the deviation of the data in a parlance that allows us to draw meaningful conclusions regarding the impact of the variations on the outcome? Finally, is the data being considered actual or proxy data? Are the observations approximate? How do we then draw the model to incorporate the fuzziness: would confidence intervals on the findings be good enough?
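One hedged way to express that fuzziness is the confidence-interval approach just mentioned. The sketch below (observations simulated as a stand-in for proxy data) reports an estimate together with an approximate 95% interval rather than a single hard number.

```python
import numpy as np

rng = np.random.default_rng(2)
observations = rng.normal(loc=10.0, scale=2.0, size=50)  # stand-in proxy data

mean = observations.mean()
sem = observations.std(ddof=1) / np.sqrt(len(observations))  # standard error of the mean
low, high = mean - 1.96 * sem, mean + 1.96 * sem              # ~95% interval

print(f"estimate {mean:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```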
Two other concepts in model design are equally important: descriptive modeling and normative modeling.
Descriptive models aim to explain a phenomenon. They are bounded by that goal and that goal only.
There are certain types of explanations that descriptive models fall back on. One is to explain by looking at data from the past and attempting to draw a cause-and-effect relationship. If the researcher is able to draw a complete cause-and-effect relationship that stands the test of time and of independent attempts to replicate the results, then the causality turns into a law for the limited use case or phenomenon being explained. Another method is to draw upon context: explaining a phenomenon by looking at the function that the activity fulfills in its context. For example, a dog barks at a stranger to secure its territory and protect the home. The third and more interesting type is generally called intentional explanation: the variables work together to serve a specific purpose, and the researcher determines that purpose and thus reverse engineers an understanding of the phenomenon by understanding the purpose and how the variables conform to achieve it.
This last element also leads us to the other method of modeling, namely normative modeling. Normative modeling differs from descriptive modeling because the target is not simply to gather facts to explain a phenomenon, but rather to figure out how to improve or change the phenomenon toward a desirable state. The challenge, as you might have already perceived, is that the subjective shadow looms high and long, and the ultimate finding of a normative model could essentially be a teleological representation or a self-fulfilling prophecy of the researcher in action. While subjectivity is relatively more tolerable in a descriptive setting, since it is diffused among a larger group that converges on one solution, it is more problematic in a normative setting, where variations of opinion that reflect biases can pose a problem.
How do we create a representative model of a phenomenon? First, we weigh whether the phenomenon is to be merely explained or whether we extend the model to incorporate our normative spin on the phenomenon itself. It is often the case that we have to craft different models and then weigh them against one another to determine which best represents the phenomenon. Some of the methods are fairly simple, such as bringing diverse opinions to a table and then agreeing upon one specific model. The advantage of such an approach is that it provides a degree of objectivity, at least insofar as it removes the divergent subjectivity that weaves into the various models. Another alternative is value analysis, a mathematical method where the selection of the model is carried out in stages: you define the criteria of selection and then the importance of the goal (if it is a normative model), and once all of the participants are in general agreement, you have the makings of a model. The final method is to incorporate all of the outliers and data points in the phenomenon that the model seeks to explain and then offer a shared belief about the salient features of the model that would best yield information about the phenomenon in a predictable manner.
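The staged value analysis might look something like the sketch below; the candidate models, criteria, weights, and scores are all hypothetical and would in practice be agreed upon by the participants.

```python
# Agreed importance of each selection criterion (weights sum to 1.0)
criteria_weights = {"explanatory_power": 0.5, "simplicity": 0.3, "data_cost": 0.2}

# Participant scores for each candidate model against each criterion (0-10)
candidate_scores = {
    "model_a": {"explanatory_power": 8, "simplicity": 6, "data_cost": 7},
    "model_b": {"explanatory_power": 9, "simplicity": 3, "data_cost": 4},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(candidate_scores, key=lambda m: weighted_score(candidate_scores[m]))
for model, scores in candidate_scores.items():
    print(model, round(weighted_score(scores), 2))
print("selected:", best)
```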
There are various languages that are used for modeling:
Written Language refers to the natural-language description of the model. If the price of butter goes up, the quantity of butter demanded will go down. Written-language models can be used effectively to inform all of the other types of models that follow below. This often goes by the name of “qualitative” research, although we find that a bit limiting. Even a simple statement like “This model approximately reflects the behavior of people living in a dense environment …” could qualify as a written-language model that seeks to shed light on the object being studied.
Icon Models refer to a pictorial representation and probably the earliest form of model making. It seeks to only qualify those contours or shapes or colors that are most interesting and relevant to the object being studied. The idea of icon models is to pictorially abstract the main elements to provide a working understanding of the object being studied.
Topological Models refer to how the variables are placed with respect to one another and thus help in creating a classification or taxonomy within the model. One can have logical trees, class trees, Venn diagrams, and other imaginative pictorial representations of fields to further shed light on the object being studied. Such pictorial representations must abide by a constant scale, direction, and placement. In other words, if the variables are placed on different scales on different maps, it is hard to draw logical conclusions by sight alone. Likewise, if the placements lie on different axes in different maps or have different vectors, it is hard to make comparisons and arrive at a shared consensus and a logical end result.
Arithmetic Models are what we generally fall back on most. The data is measured on an arithmetic scale and presented via tables, equations, or flow diagrams. The nice thing about arithmetic models is that you can show multiple dimensions, which is not possible with the other modeling languages. Hence, the robustness and general applicability of such models are huge, and they are widely used as a key modeling language.
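For instance, the written-language butter statement from earlier can be recast in arithmetic form. The sketch below (intercept and slope invented purely for illustration) tabulates a simple linear demand equation across a range of prices.

```python
def quantity_demanded(price, intercept=100.0, slope=-8.0):
    """Simple linear demand: quantity falls as price rises."""
    return max(intercept + slope * price, 0.0)

print(f"{'price':>6} {'quantity':>9}")
for price in [2.0, 3.0, 4.0, 5.0, 6.0]:
    print(f"{price:>6.2f} {quantity_demanded(price):>9.1f}")
```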
Analogous Models refer to crafting explanations using the power of analogy. For example, when we talk about waves, we could be talking of light waves, radio waves, historical waves, and so on. These metaphoric representations can be used to explain phenomena, but at best the explanatory power is nebulous, and it would be difficult to account for the variations and uncertainties between two analogous models. Still, analogy is used to transmit information quickly through verbal expressions like “similarly,” “equivalently,” or “looks like.” In fact, extrapolation is a widely used method in modeling, and we would place it within the analogous model to a great extent: we time-box the variables of the analogous model to one instance and the extrapolated model to another instance, and we tie the two together with mathematical equations.
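A bare-bones version of that extrapolation idea might look like the following sketch (the yearly observations are invented): fit a trend to one time-boxed instance and project it onto another via the fitted equation.

```python
import numpy as np

years = np.array([2012, 2013, 2014, 2015, 2016])
values = np.array([10.0, 12.1, 13.9, 16.2, 18.0])   # hypothetical observations

slope, intercept = np.polyfit(years, values, deg=1)  # linear trend for the observed window
forecast_2018 = slope * 2018 + intercept             # extrapolated to a later instance
print(round(forecast_2018, 1))
```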
Complexity: An Introduction
The past was so simple. Life was so simple and good. Those were the good old days. How often have you heard these ruminations? Fairly often! Surprisingly, as we forge a path into the future, these ruminations gather pace. We become nostalgic, and we thus stoke fear of the future. We attribute a good life to a simple life. But the simple life is measured against the past. In fact, our modus operandi is to chunk the past into timeboxes and then surface all the positive elements. While that is an endeavor that might give us some respite from what is happening today, the fact is that the nostalgia is largely grounded in fiction. It would be foolish to recall only the best elements and compare them to what we see emerging today, which conflates good and bad. We are wired for survival: if we have survived into the present, it makes for a good argument that the conditions that led to our survival can only be due to a constellation of good factors that far outweighed the bad. But when we look into the future, rife with uncertainty, we create a rather dystopian world, a world of gloom and doom, and then we wonder: why are we so stressed? Soon we engage in a vicious cycle of thought, and our actions are governed by that thought. You have heard it: hope for the best and plan for the worst. Really? I would imagine that when one hopes for the best and the facts do not undermine the trend, would it not be better to hope for the best and plan for the best? It is true that things might not work out as planned, but ought we always to build out models and frameworks to counter that possibility? We say that the world is complex and that this complexity forces us to establish certain heuristics to navigate its myriad forces. So let us understand what complexity is. What does it mean? And with our new understanding of complexity through the course of this chapter, might we arrive at a different mindset, one that speaks of optimism and innovation? We will certainly not settle the matter at the end of this chapter, but we hope to surface enough questions so you can reflect upon where we are and where we are going in a more judicious manner, a manner grounded in facts and values. Let us now begin our journey!
The sky is blue. We hear this statement. It is a simple statement: there is a noun, a verb, and an adjective. In the English-speaking world, we can generally agree on what constitutes the “sky”. We might have a hard time defining it; Merriam-Webster defines the sky as the upper atmosphere or expanse of space that constitutes an apparent great vault or arch over the earth. A five-year-old would simply point to the sky to define it. Now how do we define blue? A primary color between green and violet. Is that how you think about blue, or do you just arrive at an understanding of what that color means? Once again, a five-year-old would identify blue; she would not look at green and violet as constituent colors. The statement “the sky is blue,” for the sake of argument, is fairly simple!
However, if we say that the sky is a shade of blue, we introduce an element of ambiguity, don't we? Is it dark blue, light blue, sky blue (so we get into recursive thinking), or some other property that is bluish but not quite blue? What has emerged is an element of complexity: a new variable that might be considered a slider on a scale. Where we slide our understanding is determined by our experience, our perception, or even our wishful thinking. The point is that complexity ceases to be a purely objective property; rather, it is an emergent property driven by our interpretation. Protagoras, an ancient Greek philosopher, said that man is the measure of all things. What he is saying is that our lens of evaluation is predicated purely on our experiences in life; there is nothing that exists outside the boundaries of our experience. Socrates arrived at a different view: he argued that certain elements are ordered in a manner that exists outside the boundaries of our experience. We will get back to this in later chapters. The point being that complexity is an emergent phenomenon that occurs due to our interpretation. Natural scientists will argue, like Socrates, that there are complex systems that exist despite our interpretations. And that is true as well. So how do we hold these opposing views at the same time: is that a sign of insanity? Well, that is a very complex question (excuse my pun), and so we need to further expand on the term complexity.
In order to define complexity, let us break this up a bit further. Complex systems have multiple variables; these variables interact with each other; these variables might be subject to interpretation in the human condition; and even when they are not, these variables interact in a manner that enables emergent properties which might take on a life of their own. Complex systems might be decentralized and have information-processing pathways outside the lens of science and human perception. Complex systems are malleable and adaptive.
Markets are complex institutions. When we try to centralize a market, we take the position that we understand its complexity and thus can determine its outcomes in a certain way. Socialist governments have long tried to manage markets and have not been successful. The Nobel laureate Friedrich Hayek long argued that markets are the result of spontaneous order, not design. A market has multiple variables, significant information processing is underway at any given time when it is active, and the market adapts to that information-processing mechanism. But there are winners and losers in a market as well. Why? Because each participant observes the market dynamics and arrives at different conclusions. Complexity does not follow a deterministic path, and neither does the market; we have a lot of successes and failures that suggest this to be the case.
Let us look at another example. Examples will give us an appreciation for the concept, and this will be very important as we speed through the journey into the future.
Insect behavior is a case in point. Whether we look at bees or ants, these insects form extremely complex systems despite the fact that an individual bee or ant lacks sufficient instruments for survival on its own. In 1705, Bernard Mandeville published a poem that he later expanded into The Fable of the Bees. What Mandeville is hinting at is that attempts to centralize a complex system like a beehive are bound to fail; rather, complex systems emerge in ways that create innate mechanisms which stabilize for success and survival in the long run. Here is a part of the poem.
A Spacious Hive well stock’d with Bees,
That lived in Luxury and Ease;
And yet as fam’d for Laws and Arms,
As yielding large and early Swarms;
Was counted the great Nursery
Of Sciences and Industry.
No Bees had better Government,
More Fickleness, or less Content.
They were not Slaves to Tyranny,
Nor ruled by wild Democracy;
But Kings, that could not wrong, because
Their Power was circumscrib’d by Laws.
Then we have the ant colonies. An individual ant is nearly blind, yet a colony has collective intelligence. The ants work together, despite individual shortcomings that would challenge individual survival, to figure out how to exist and propagate as a group. How does a simple living organism that is subject to the whims and fancies of nature survive and seed every corner of the earth in great volumes? Entomologists, social scientists, and biologists have tried to figure this out and have posited a lot of theories. The point is that complex systems are not bounded by our reason alone: the whole is greater than the sum of the parts.
Key Takeaway
A complex system is the result of the interaction of a network of variables that gives rise to collective behavior, information processing, and a self-learning, adaptive system that does not lie completely within the purview of human explanation.
Books to Read – 2017
It has been a while since I posted on this blog. It just so happens that life is what happens to you when you have other plans. Having said that, I decided early this year to read 42 books across a wide range of genres. I have been trying to keep pace, and have succeeded so far.
Here are the books that I have read and plan to read:
- Song of Solomon by Toni Morrison (Read)
- The Better Angels of Our Nature by Steven Pinker (Read)
- Black Dogs by Ian McEwan (Read)
- Nutshell: A Novel by Ian McEwan (Read)
- Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson (Read)
- Moby Dick by Herman Melville
- The Plot Against America by Philip Roth
- Humboldt’s Gift by Saul Bellow
- The Innovators by Walter Isaacson
- Sapiens: A Brief History of Humankind by Yuval Noah Harari
- The House of Morgan by Ron Chernow
- American Political Rhetoric: Essential Speeches and Writings by Peter Augustine Lawler and Robert Schaefer
- Keynes Hayek: The Clash that defined Modern Economics by Nicholas Wapshott
- The Year of Magical Thinking by Joan Didion
- Small Great Things by Jodi Picoult
- The Conscience of a Liberal by Paul Krugman
- Globalization and its Discontents by Joseph Stiglitz
- Twilight of the Elites: America after Meritocracy by Chris Hayes
- What Is Mathematics?: An Elementary Approach to Ideas and Methods by Courant & Robbins
- Algorithms to Live By: The Computer Science of Human Decisions by Christian & Griffiths
- Andrew Carnegie by David Nasaw
- Just Mercy: A Story of Justice and Redemption by Bryan Stevenson
- The Evolution of Everything: How New Ideas Emerge by Matt Ridley
- The Only Game in Town: Central Banks, Instability, and Avoiding the Next Collapse by Mohamed El-Erian
- The Relentless Revolution: A History of Capitalism by Joyce Appleby
- The Industries of the Future by Alec Ross
- Where Good Ideas come from by Steven Johnson
- Originals: How Non-Conformists Move the World by Adam Grant
- Start with Why by Simon Sinek
- The Discreet Hero by Mario Vargas Llosa
- Istanbul by Orhan Pamuk
- Jefferson and Hamilton: The Rivalry that Forged a Nation by John Ferling
- The Orphan Master’s Son: A Novel by Adam Johnson
- Between the World and Me by Ta-Nehisi Coates
- Active Liberty: Interpreting our Democratic Constitution
- The Blue Guitar by John Banville
- The Euro Crisis and Its Aftermath by Jean Pisani-Ferry
- Africa: Why Economists get it wrong by Morten Jerven
- The Snowball: Warren Buffett and the Business of Life
- To Explain the World: The Discovery of Modern Science by Steven Weinberg
- The Meursault Investigation by Kamel Daoud (translated by John Cullen)
- The Stranger by Albert Camus
Building a Lean Financial Infrastructure!
A lean financial infrastructure presumes the ability of every element in the value chain to preserve and generate cash flow. That is the fundamental essence of the lean infrastructure that I espouse. So what are the key elements that constitute a lean financial infrastructure?
And given these elements, what are the key tweaks that one must continually make to ensure that the infrastructure does not fall into entropy and that the gains made do not fall flat or decay over time? Identifying the building blocks, monitoring them, and making rapid changes go hand in hand.
The Key Elements or the building blocks of a lean finance organization are as follows:
- Chart of Accounts: This is the critical unit that defines the starting point of the organization. It relays and groups all of the key economic activities of the organization into a larger body of elements like revenue, expenses, assets, liabilities, and equity. Granularity of these activities might lead to a fairly extensive chart of accounts and require more work to manage and monitor, thus requiring an incrementally larger investment of time and effort. However, the benefits of granularity far exceed the costs because it forces management to look at every element of the business.
- The Operational Budget: Every year, organizations formulate the operational budget. That is generally a bottom-up rollup at a granular level that maps to the Chart of Accounts. It might follow a top-down directive around where the organization wants to land with respect to income, expenses, balance sheet ratios, et al. Hence, there is almost always a process of iteration in this step to finally arrive at and lock down the budget. Be mindful, though, that there are feeders into the budget that might relate to customers, sales, operational metrics targets, etc., which are part of building a robust operational budget.
- The Deep Dive into Variances: As you progress through the year, as part of the monthly closing process, you would inquire how actual performance is tracking against the budget. Since the budget has been done at a granular level and mapped exactly to the Chart of Accounts, it becomes easier to understand and delve into the variances. Be mindful that every element of the Chart of Accounts must be evaluated. The general inclination is to focus on the large items or large variances, while skipping the small expenses and smaller variances. That method, while efficient, might not be effective in the long run for building a lean finance organization. The rule, in my opinion, is that every account has to be looked at, and the question should be: why? If management has agreed on a number in the budget, then why are the actuals trending differently? Could it be that we missed something critical in the budgeting process? Or has there been a change in the underlying economics of the business, or a change in activities, that might be leading to these “unexpected variances”? One has to take a scalpel to both favorable and unfavorable variances, since one can learn a lot about the underlying drivers; it might lead to managerially doing more of the better and less of the worse. Furthermore, this is also a great way to monitor leaks in the organization. Leaks are instances of cash dropping out of the system. Many little leaks can amount to a lot of cash in total. So do not disregard the leaks. Not only will that preserve cash, but once you understand the leaks better, the organization will step up in efficiency and effectiveness with respect to cash preservation and delivery of value. (A small sketch after this list illustrates a variance pass over every account.)
- Tweak the Process: You will find that as you deep dive into the variances, you might want to tweak certain processes so that these variances are minimized. This would generally be true for adverse variances against the budget. Seek to understand why the variance occurred, and then understand all of the processes that run in the background to generate activity in the account. Once you fully understand the process, it is a matter of tweaking it to marginally or structurally change some key areas that might favorably resonate across the financials in the future.
- The Technology Play: Finally, evaluate the possibilities of exploring technology to surface issues early, automate repetitive processes, trigger alerts early on to mitigate issues later, and provide on-demand analytics. Use technology to free up time and to assist and enable more thinking around how to improve the internal handoffs that further economic value in the organization.
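To tie several of these building blocks together, here is a minimal sketch (account codes, names, budget figures, and the alert threshold are all hypothetical): a granular chart of accounts, a budget and actuals mapped to it, a variance computed for every account rather than only the large ones, and a simple automated alert when a variance breaches a threshold.

```python
from dataclasses import dataclass

@dataclass
class Account:
    code: str      # chart-of-accounts code
    name: str
    group: str     # revenue, expense, asset, liability, equity
    budget: float  # operational budget for the period
    actual: float  # actuals from the monthly close

chart_of_accounts = [
    Account("4000", "Product revenue", "revenue", budget=120_000, actual=112_500),
    Account("6100", "Salaries",        "expense", budget=55_000,  actual=57_200),
    Account("6200", "Cloud hosting",   "expense", budget=8_000,   actual=9_600),
    Account("6300", "Office supplies", "expense", budget=1_200,   actual=1_150),
]

ALERT_THRESHOLD = 0.10  # flag any variance larger than 10% of budget

def variance_report(accounts):
    """Evaluate every account, not just the large ones, and flag potential leaks."""
    for acct in accounts:
        variance = acct.actual - acct.budget
        pct = variance / acct.budget if acct.budget else 0.0
        flag = "ALERT" if abs(pct) > ALERT_THRESHOLD else ""
        print(f"{acct.code} {acct.name:<16} budget {acct.budget:>9,.0f} "
              f"actual {acct.actual:>9,.0f} variance {variance:>8,.0f} "
              f"({pct:+.1%}) {flag}")

variance_report(chart_of_accounts)
```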
All of the above relate to managing the finance and accounting organization well within its own domain. However, there is a bigger step that comes into play once one has established the blocks and that relates to corporate strategy and linking it to the continual evolution of the financial infrastructure.
The essential question that the lean finance organization has to answer is: what can the organization do to address every element that preserves and enhances value to the customer, and how do we eliminate all non-value-added activities? This is largely a process question, but it forces one to understand the key processes and identify what percentage of each process is value-added to the customer versus non-value-added. This can be represented along a time or cost dimension. The goal is to yield as much value-added activity as possible, since the underlying presumption is that such activity leads to preservation of cash and also increases cash acquisition from the customer.
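A back-of-the-envelope version of that value-added analysis might look like the sketch below (process steps and timings invented): classify each step as value-added or not and compute the split along the time dimension.

```python
# Each step: (description, minutes spent, value-added to the customer?)
process_steps = [
    ("Prepare invoice",            15, True),
    ("Rework incorrect PO data",   25, False),
    ("Send invoice to customer",    5, True),
    ("Chase internal approvals",   30, False),
]

total = sum(minutes for _, minutes, _ in process_steps)
value_added = sum(minutes for _, minutes, va in process_steps if va)

print(f"value-added:     {value_added / total:.0%} of process time")
print(f"non-value-added: {(total - value_added) / total:.0%} of process time")
```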