Model Thinking

Model Framework

The fundamental tenet of theory is the concept of "empiria". Empiria refers to our observations. Based on those observations, scientists and researchers posit theories – a practice that sits at the heart of scientific realism.

A scientific model is a causal explanation of how variables interact to produce a phenomenon, usually linearly organized. A model is a simplified map consisting of a few primary variables that are gauged to have the most explanatory power for the phenomenon being observed. We discussed Complex Physical Systems and Complex Adaptive Systems earlier in this chapter. It is relatively easier to map CPS to models than CAS, largely because a model becomes unwieldy as it internalizes more variables, particularly when those variables interact heavily with one another. A simple analogy is the multiple regression model: when several independent variables are strongly correlated with one another, multicollinearity occurs, and the model is unstable and loses predictive value.
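A minimal sketch of that regression analogy, on simulated data (the variable names, sample size and noise levels are invented for illustration):

```python
import numpy as np

# Simulate two "independent" variables that are almost copies of each other.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # x2 nearly duplicates x1
y = 3 * x1 + rng.normal(size=n)            # only x1 truly drives y

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
# The individual weights on x1 and x2 swing wildly from seed to seed,
# though their sum stays near 3: the fit looks fine, but the model is not stable.
```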


Research projects generally tend to look either at a single case study or at a number of similar cases that are logically grouped together. Constructing a simple model that is general and applies to many instances is difficult, if not impossible. Variables are subject to a researcher's incomplete understanding of them or to their inherent volatility. What further accentuates the problem is that the researcher may miss how the variables play against one another and the resultant impact on the system. Thus, our understanding of a system can proceed through some sort of model mechanics, yet we share the common belief that building a model that provides all of the explanatory answers is difficult, if not impossible. Despite understanding the limitations of modeling, we still develop frameworks and artifact models because we sense in them an indispensable set of tools to transmit the results of research to practical use cases. We boldly generalize our findings from empiria into general models that we hope will explain empiria best. And let us be mindful that it is possible – more so in CAS than in CPS – that we might have multiple models fighting over their explanatory powers simply because of the vagaries of uncertainty and stochastic variation.

Popper says: "Science does not rest upon rock-bottom. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or 'given' base; and when we cease our attempts to drive our piles into a deeper layer, it is not because we have reached firm ground. We simply stop when we are satisfied that they are firm enough to carry the structure, at least for the time being". This leads to the satisficing solution: if a model chooses the fewest variables to explain the greatest amount of variation, that model is relatively better than other models that select more variables to explain the same. In addition, there is always a cost-benefit analysis to take into consideration: if we add x variables to explain variation in the outcome but the result is not meaningfully different from a model with fewer than x variables, then one would want to fall back on the smaller model because it is less costly to maintain.
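One standard way to score that trade-off is an information criterion such as AIC, which rewards fit but penalizes every added variable. A hedged sketch on simulated data (the variable counts and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 10))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=n)   # only two variables matter

def aic(k):
    """AIC of an OLS model that uses the first k columns of X."""
    Xk = np.column_stack([np.ones(n), X[:, :k]])
    resid = y - Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0]
    rss = resid @ resid
    return n * np.log(rss / n) + 2 * (k + 1)   # fit term + parameter penalty

for k in (2, 5, 10):
    print(k, round(aic(k), 1))
# Variables beyond the first two barely reduce the residual error,
# so the penalty makes the two-variable model the satisficing winner.
```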


Researchers must address three key elements in the model: time, variation and uncertainty. How do we craft a model that reflects the impact of time on the variables and the outcome? How do we present variation in the model? Different variables might vary differently, independent of one another. How do we present the deviation of the data in a parlance that allows us to make meaningful conclusions regarding the impact of the variations on the outcome? Finally, is the data being considered actual or proxy data? Are the observations approximate? How do we then draw the model to incorporate the fuzziness: would confidence intervals on the findings be good enough?

Two other concepts in model design are equally important: Descriptive Modeling and Normative Modeling.

Descriptive models aim to explain the phenomenon. They are bounded by that goal and that goal only.

There are certain types of explanations that descriptive models fall back on. The first is to look at data from the past and attempt to draw a cause-and-effect relationship. If the researcher is able to draw a complete cause-and-effect relationship that meets the test of time and of independent attempts to replicate the results, then the causality turns into a law for the limited use case or phenomenon being explained. Another method is to draw upon context: explaining a phenomenon by looking at the function that the activity fulfills in its context. For example, a dog barks at a stranger to secure its territory and protect the home. The third and more interesting type is generally called intentional explanation: the variables work together to serve a specific purpose, and the researcher determines that purpose and thus reverse engineers an understanding of the phenomenon by understanding the purpose and how the variables conform to achieve it.

This last element also leads us to the other method of modeling – namely, normative modeling. Normative modeling differs from descriptive modeling because the target is not simply to gather facts to explain a phenomenon, but rather to figure out how to improve or change the phenomenon toward a desirable state. The challenge, as you might have already perceived, is that the subjective shadow looms high and long, and the ultimate finding in a normative model could essentially be a teleological representation or self-fulfilling prophecy of the researcher in action. While subjectivity is relatively more tolerable in a descriptive world, since it is diffused among a larger group that converges on one solution, it is more problematic in a normative world, where variation of opinions reflecting biases can pose a problem.

How do we create a representative model of a phenomenon? First, we weigh whether the phenomenon is to be merely explained or whether we extend the model to incorporate our normative spin on the phenomenon itself. It is often the case that we might have to craft different models and then weigh them against one another to determine which best represents the phenomenon. Some of the methods are fairly simple, such as bringing diverse opinions to a table and then agreeing upon one specific model. The advantage of such an approach is that it provides a degree of objectivity in the model – at least in so far as it removes the divergent subjectivity that weaves into the various candidate models. Another alternative is value analysis, a mathematical method where the selection of the model is carried out in stages: you define the criteria of the selection and then the importance of the goal (if it is a normative model). Once all of the participants reach a general agreement, you have the makings of a model. The final method is to incorporate all of the outliers and data points in the phenomenon that the model seeks to explain, and then form a shared belief about which salient features of the model would best yield information about the phenomenon in a predictable manner.


There are various languages that are used for modeling:

Written Language refers to the natural language description of the model. If the price of butter goes up, the quantity demanded of butter will go down. Written language models can be used effectively to inform all of the other types of models that follow below. This often goes by the name of "qualitative" research, although we find that a bit limiting. Even a simple statement like "This model approximately reflects the behavior of people living in a dense environment" could qualify as a written language model that seeks to shed light on the object being studied.

Icon Models refer to pictorial representation and are probably the earliest form of model making. An icon model seeks to capture only those contours, shapes or colors that are most interesting and relevant to the object being studied. The idea is to pictorially abstract the main elements to provide a working understanding of the object being studied.

Topological Models refer to how the variables are placed with respect to one another and thus help in creating a classification or taxonomy of the model. One can have logical trees, class trees, Venn diagrams, and other imaginative pictorial representations of fields to further shed light on the object being studied. Pictorial representations must abide by consistent scale, direction and placement. In other words, if the variables are placed on different scales on different maps, it would be hard to draw logical conclusions by sight alone. In addition, if the placements are on different axes in different maps, or have different vectors, it is hard to make comparisons and arrive at a shared consensus and a logical end result.

Arithmetic Models are what we generally fall back on most. The data is measured on an arithmetic scale and presented via tables, equations or flow diagrams. The nice thing about arithmetic models is that you can represent multiple dimensions, which is difficult with the other modeling languages. Hence, the robustness and general applicability of such models are substantial, and they are widely used as a key modeling language.

Analogous Models refer to crafting explanations using the power of analogy. For example, when we talk about waves, we could be talking of light waves, radio waves, historical waves, etc. These metaphoric representations can be used to explain phenomena, but at best the explanatory power is nebulous, and it would be difficult to explain the variations and uncertainties between two analogous models. However, analogy is still used to transmit information quickly through verbal expressions like "similarly", "equivalently", "looks like", etc. In fact, extrapolation is a widely used method in modeling, and we would classify it as part of the analogous model to a great extent: we time-box the variables in the analogous model to one instance and the extrapolated model to another instance, and we tie them together with mathematical equations.

 

The Law of Unintended Consequences

The Law of Unintended Consequences holds that the actions of a central body that might claim omniscient, omnipotent and omnivalent intelligence might, in fact, lead to consequences that are neither anticipated nor intended.

The concept of the Invisible Hand, as introduced by Adam Smith, argued that it is the self-interest of all the market agents that ultimately creates a system maximizing the good for the greatest number of people.

Robert Merton, a sociologist, studied the law of unintended consequence. In an influential article titled “The Unanticipated Consequences of Purposive Social Action,” Merton identified five sources of unanticipated consequences.

Ignorance makes it difficult, if not impossible, to anticipate the behavior of every element of the system, which leads to incomplete analysis.

Errors occur when someone uses historical data and applies the context of history to the future. Linear thinking is a great example of an error we are wrestling with right now: we understand, looking back, that there are systems that emerge exponentially, but it is hard to decipher the outcome unless one takes a leap of faith.

Biases work their way into the study as well. We study a system under the weight of our biases, intentional or unintentional. It is hard to strip that away, even if there are different bodies of thought regarding a particular system and how a certain action upon the system would impact it.

Woven in with the element of bias is the element of basic values, which may require or prohibit certain actions even if the long-term impact is unfavorable. A good example would be the toll gates established by the FDA before drugs can be commercialized. In its aim to provide safe drugs, the policy might be such that the latency of releasing drugs for experimental and commercial purposes is so long that many patients who might otherwise benefit from the drugs lose out.

Finally, he discusses the self-fulfilling prophecy, which suggests that tinkering with the elements of a system to avert a catastrophic negative event might in actuality bring about that very event.

It is important, however, to acknowledge that unintended consequences do not necessarily lead to a negative outcome. In fact, there could be unanticipated benefits. A good example is Viagra, which started off as a pill to lower blood pressure but was discovered to treat erectile dysfunction. Similarly, the discovery that sunken ships became the habitat and foundation of very rich coral reefs in shallow waters led scientists to new findings about the emergence of flora and fauna in these habitats.


Even when initiatives considered "positive" are exercised to influence the system in a manner that contributes to the greatest good, it is often the case that these positive initiatives prove catastrophic in the long term. Merton attributes this kind of unanticipated consequence to the "relevance paradox", where decision makers think they know their areas of ignorance regarding an issue and obtain the necessary information to fill that gap, but intentionally or unintentionally neglect other areas whose relevance to the final outcome is not clear or not aligned with their values. He goes on to argue, in a nutshell, that unintended consequences relate to our hubris – we are hardwired to put our short-term interest over our long-term interest, and thus we tinker with the system to surface an effect which later blows back in unexpected forms. Albert Camus said that "The evil in the world almost always comes of ignorance, and good intentions may do as much harm as malevolence if they lack understanding."

An interesting emergent property related to the law of unintended consequences is the concept of Moral Hazard. It is the idea that individuals have incentives to alter their behavior when the risk of their bad decision making is borne by or diffused among others. For example:

If you have an insurance policy, you will take more risks than you otherwise would. The cost of those risks will impact the total economics of the insurance and might lead to costs being redistributed from the high-risk takers to the low-risk takers.


How do the conditions of moral hazard arise in the first place? There are two important conditions that must hold. First, one party has more information than another party. This information asymmetry creates gaps in information, and that creates a condition of moral hazard. For example, around 2006, sub-prime mortgage lenders extended loans to individuals with dubious income and means to pay. The banks that bought these mortgages were not fully aware of this, and thus they ended up holding a lot of toxic loans due to information asymmetry. Second is the existence of an understanding that might affect the behavior of two agents. If a child knows that they are going to get bailed out by their parents, they might take risks that they otherwise would not have taken.

To counter the possibility of unintended consequences, it is important to raise our thinking to second-order thinking. Most of our thinking is simplistic, based on opinions and not well grounded in facts. A lot of biases enter first-order thinking – in fact, all of the elements that Merton touches on enter it: ignorance, biases, errors, personal value systems and teleological thinking. Hence, it is important to get into second-order thinking, in which the reasoning process is surfaced by looking at interactions of elements, temporal impacts and other system dynamics. We mentioned earlier that it is still difficult to fully wrestle with all the elements of emergent systems through even the best second-order thinking, simply because the dynamics of a complex adaptive or complex physical system deny us that crown of competence. Nonetheless, this fact suggests that we step away from simple, easy and defensible heuristics when measuring and gauging complex systems.

Emergent Systems: Introduction

The whole is greater than the sum of its parts. "Emergent properties" refer to those properties that emerge and that might be entirely unexpected. As discussed under CAS, they arise from the collaborative functioning of a system. In other words, emergent properties are properties of a group of items, and it would be erroneous for us to reduce such systems to the properties of their atomic elements and use those properties as binding elements to understand emergence. Some common examples of emergent systems include cities, beehives, ant colonies and market systems. Our thinking attributes causal effects – namely, that the behavior of elements causes certain behaviors in other hierarchies, and thus an entity emerges at a certain state. However, we observe that a process of emergence is the observation of an effect without an apparent cause. Yet it is important to step back, regard the relationships, and draw lines of attribution such that one can concur that there is an impact of elements at the lowest level that surfaces, in some manner, at the highest level, which is the subject of our observation.


Jochen Fromm, in his paper "Types and Forms of Emergence", has laid this out best. He says that emergent properties are "amazing and paradox: fundamental but familiar." In other words, emergent properties are changeless and changing, constant and fluctuating, persistent and shifting, inevitable and unpredictable. The most important note he makes is that an emergent property is part of the system and, at the same time, might not always be a part of the system. There is an undercurrent of novelty or punctuated gaps that might arise inexplicably, and it is this fact that renders true emergence virtually irreducible. Thus, failure is embodied in all emergent systems – failure being that the system does not behave according to expectation. Despite all rules being followed and quality thresholds being established at every toll gate, there is still a possibility of failure, which suggests that there is some missing information in the links. It is also possible that the missing information is dynamic – you do not step in the same water twice – which makes predicting emergent systems a rather difficult exercise. Depending on the lens through which we look, the system might appear or disappear.


There are two types of emergence: descriptive and explanatory emergence. Descriptive emergence means that properties of wholes cannot necessarily be defined through the properties of the parts. Explanatory emergence means that the laws of complex systems cannot be deduced from the laws of interaction of the simpler elements that constitute them. Emergence is thus a result of the amount of variety embodied in the system, the amount of external influence that weighs on and shapes the overall property and direction of the system, the type of resources the system consumes, the constraints the system operates under, and the number of levels of sub-systems that work together to build out the final system. A system can be benign, in that it is relatively predictable, whereas a radical system departs materially from expectation. If the parts that constitute a system are independent in their workings from other parts and can be boxed within boundaries, the emergent system becomes more predictable. A watch is a good example of a complex physical system: the different mechanical elements are geared toward reading the time as their ultimate purpose. However, such systems are very brittle – a failure at one point can cascade into a failure of the entire system. Systems that are more resilient are those where the elements interact and learn from one another. In other words, the behavior of the elements excites other elements – all of which work together to create a dance toward a more stable state. They deploy what are often called the flocking trick and the pheromone trick. The flocking trick is largely the emulation of particles that are close to each other – very similar to the cellular automata introduced by von Neumann and discussed earlier in the chapter. The pheromone trick reflects how elements leave marks that are acted upon as signals by other elements; they all work together around these signal trails, which act as a forcing function to create the system.
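A minimal sketch of the flocking trick, in the spirit of Vicsek-style models (the agent count, neighborhood radius and speed below are invented for illustration): each agent simply adopts the average heading of its nearby neighbors, and local order emerges with no leader.

```python
import numpy as np

N_AGENTS, RADIUS, SPEED = 50, 0.2, 0.01
rng = np.random.default_rng(0)
pos = rng.random((N_AGENTS, 2))               # positions in the unit square
angle = rng.uniform(0, 2 * np.pi, N_AGENTS)   # headings

def step(pos, angle):
    """One update: every agent adopts the mean heading of its neighbors."""
    new_angle = angle.copy()
    for i in range(N_AGENTS):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = d < RADIUS                     # includes the agent itself
        # Average headings via unit vectors to avoid angle wrap-around issues.
        new_angle[i] = np.arctan2(np.sin(angle[near]).mean(),
                                  np.cos(angle[near]).mean())
    vel = SPEED * np.column_stack([np.cos(new_angle), np.sin(new_angle)])
    return (pos + vel) % 1.0, new_angle       # wrap around the torus

for _ in range(100):
    pos, angle = step(pos, angle)
# After enough steps the headings align locally: order with no designer.
```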


There are systems that have properties of extremely strong emergence. What do consciousness, life and culture have in common? How do we look at climate? What about the organic development of cities? These are just some examples of systems where determinism is nigh impossible. We might be able to tunnel through the various and diverse elements that embody the system, but it would be difficult to coherently and tangibly draw the full set of relationships, signals, effectors, detectors, etc. required to grapple with a complete understanding of the system. Wrestling with a strongly emergent system might even be outside the purview of the highest level of computational power available. And yet these systems exist, and they emerge and evolve. Still, we try to plan for these systems, or direct policies to influence them, without fully knowing the impact. This is also where the unintended consequences of our actions might take free rein.

Complex Physical and Adaptive Systems

There are two models in complexity: Complex Physical Systems and Complex Adaptive Systems! For us to grasp the patterns that are evolving, much of it seemingly out of our control, it is important to understand both these models. One could argue that these models are mutually exclusive. While the existing body of literature might be inclined toward supporting that argument, we also find some degree of overlap that makes our understanding of complexity unstable. And instability is not to be construed as a bad thing! We might operate in a deterministic framework, and often we might operate in the realm of a gradient understanding of the volatility associated with outcomes. Keeping this in mind will be helpful as we dive deep into the two models. What we hope is that our understanding of these models will raise questions and establish mental frameworks for the intentional choices that we are led to make by the system, or make to influence the evolution of the system.

 

Complex Physical Systems (CPS)

Complex Physical Systems are bounded by certain laws. Given initial conditions or elements in the system, there is a degree of predictability and determinism associated with the behavior of the elements under the overarching laws of the system. Despite the tautological nature of the term (Complex Physical System), which suggests a physical boundary, the late 1900s surfaced some nuances to this model: if there is a slight and arbitrary variation in the initial conditions, the outcome can be significantly different from expectations. The assumption of determinism is put to the sword. The notion that behaviors will follow established trajectories once rules are established and laws defined has been put to the test. These discoveries offer insight into the developmental blocks of complex physical systems; a better understanding of them will enable us to recognize such systems when we see them, and thereafter to establish certain toll gates and actions to navigate, to the extent possible, and to narrow the region of uncertainty around outcomes.


The universe is designed as a complex physical system. Just imagine! Let this sink in a bit. A complex physical system might be regarded as relatively simpler than a complex adaptive system. And with that in mind, once again: the universe is a complex physical system. We are awed by the vastness and scale of the universe, we regard the skies with reverence, and we wonder and ruminate on what lies beyond the frontiers of the universe, if anything. Really, there is nothing bigger than the universe in the physical realm, and yet we regard it as a simple system – a "simple" complex physical system. In fact, the behavior of ants that leads to the sustainability of an ant colony is significantly more complex: by orders of magnitude.


Complexity behavior in nature reflects the tendency of large systems with many components to evolve into a poised "critical" state, where minor disturbances or arbitrary changes in initial conditions can create a seemingly catastrophic impact on the overall system, such that the system changes significantly. And that happens not by some invisible hand or some uber design. What is fundamental to understanding complex systems is to understand that complexity is defined as the variability of the system. Depending on our lens, the scale of variability can change, and that might call for a different apparatus to understand the system. Thus, determinism is not the measure: Stephen Jay Gould has argued that it is virtually impossible to predict the future. We have hindsight explanatory powers but not predictive powers. Hence, a system that starts from an initial state might, over time, present an outcome that is distinguishable in form and content from the original state. We see complex physical systems all around us: snowflakes, patterns on coastlines, waves crashing on a beach, rain, etc.

Complex Adaptive Systems (CAS)

Complex adaptive systems, on the contrary, are learning systems that evolve. They are composed of elements, called agents, that interact with one another and adapt in response to those interactions.


Markets are a good example of complex adaptive systems at work.

CAS agents have three levels of activity. As described by John Holland in Complexity: A Very Short Introduction, the three levels of activity are:

  1. Performance (moment-by-moment capabilities): This establishes the locus of all behavioral elements that signify the agent at a given point of time and thereafter establishes triggers or responses. For example, if an object is approaching and the response of the agent is to run, that would constitute a performance if-then outcome. Alternatively, it could be signal driven – namely, an ant emits a certain scent when it finds food, and other ants catch on to that trail and act, en masse, to follow it. Thus, an agent or actor in an adaptive system has detectors, which allow it to capture signals from the environment for internal processing, and it also has effectors, which translate the processing into higher-order signals that influence other agents to behave in certain ways in the environment. The signal is the scent that creates these interactions and thus the rubric of a complex adaptive system.
  2. Credit assignment (rating the usefulness of available capabilities): As the agent gathers experience over time, it starts to rely heavily on certain rules or heuristics that it has found useful. It is also typical that these rules may not be the best rules; they could be rules that were discovered first and thus stayed. Agents rank these rules in some sequential order, perhaps an ordinal ranking, to determine the best rule to fall back on under certain situations. This is the crux of decision making. However, there are also times when it is difficult to assign a rank to a rule, especially if an action is setting the stage for a future course of other actions. A spider weaving a web might be regarded as an example of an agent expending energy with the hope that she will get some food. This is a stage-setting assignment that agents have to undergo as well. One of the common models used to describe this is the bucket-brigade algorithm, which essentially states that the strength of a rule depends on the success of the overall system and the agents that constitute it. In other words, each predecessor and successor needs to be aware only of the strengths of the previous and following agents, and that is done by a number assignment that strengthens along the pathway from the origin of the system to its end. If there is a final valuable end product, then the pathway of rules reflects success. Once again, it is conceivable that this might not be the optimal pathway, but a satisficing pathway that results in a better system. (A minimal sketch of this idea follows this list.)
  3. Rule discovery (generating new capabilities): Performance and credit assignment in agent behavior suggest that agents are governed by a certain bias. If agents have been successful following certain rules, they will be inclined to follow those rules all the time. As noted, rules might not be optimal but satisficing. Is improvement a matter of just incremental changes to the process? We do see major leaps in improvement, so how and why does this happen? In other words, someone in the process has decided to take up a different rule despite their experience. It could have been an accident, or very intentional.
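Here is the promised sketch of bucket-brigade credit assignment, a radical simplification of Holland's classifier-system algorithm (the bid fraction, reward and chain length are invented): a chain of rules fires in sequence, each rule pays a fraction of its strength back to the rule that set the stage for it, and the final rule collects the external reward.

```python
BID_FRACTION = 0.1   # fraction of strength each rule bids (illustrative)
REWARD = 10.0        # payoff from the environment at the end of the chain

strengths = [1.0, 1.0, 1.0, 1.0]   # one strength per rule in the firing chain

def run_chain(strengths):
    """One episode: each rule pays its bid to its predecessor; the last is rewarded."""
    s = strengths[:]
    for i in range(len(s)):
        bid = BID_FRACTION * s[i]
        s[i] -= bid                 # each rule pays its bid...
        if i > 0:
            s[i - 1] += bid         # ...to the rule that set the stage for it
    s[-1] += REWARD                 # the environment rewards the final rule
    return s

for episode in range(50):
    strengths = run_chain(strengths)
print(strengths)   # reward seeps backward: early, stage-setting rules gain strength too
```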

One of the theories that has been presented is that of building blocks. CAS innovation is a result of reconfiguring the various components in new ways. One quips that if energy is neither created nor destroyed, then everything that exists today or will exist tomorrow is nothing but a reconfiguration of energy in new ways. All of tomorrow resides in today, just patiently waiting to be discovered. Agents create hypotheses and experiment in the petri dish by reconfiguring their experiences and other agents' experiences to formulate hypotheses and the runway for discovery. In other words, there is a collaboration element that comes into play, where the interaction of the various agents and their assignment as a group to a rule also sets the stepping stone for potential leaps in innovation.

Another key characteristic of CAS is that the elements are constituted in a hierarchical order. Combinations of agents at a lower level result in a set of agents higher up, and so on and so forth. Thus, agents in higher hierarchical orders take on some of the properties of the lower orders, but they also include the interaction rules that distinguish the higher order from the lower order.

Short History of Complexity

Complexity theory began in the 1930s, when natural scientists and mathematicians rallied together to get a deeper understanding of how systems emerge and play out over time. However, the groundwork began in the 1850s with Darwin's introduction of Natural Selection, and it was further extended by Mendel's laws of inheritance. Darwin's theory of evolution posits a slow, gradual process. He says that "Natural selection acts only by taking advantage of slight successive variations; she can never take a great and sudden leap, but must advance by short and sure, though slow steps." Thus, he concluded that complex systems evolve gradually, not by leaps, and the result is an organically formed complex system composed of many parts, all of which work together closely for the overall system to function. If any part is missing or does not act as expected, the system becomes unwieldy and breaks down. It was an early foray into distinguishing the emergent property of a system from the elements that constitute it. Mendel, on the other hand, laid out the property of inheritance across generations. An organic system inherits certain traits that are reconfigured over time and adapt to the environment, thus leading to the development of an organism which, for our purposes, falls in the realm of a complex outcome. One would imagine that there is a common thread between Darwin's natural selection and Mendel's laws of genetic inheritance. But that is not the case, and that has wide implications for complexity theory. Mendel focused on how traits are carried across time: mechanics that are largely determined by probabilistic functions. Mendel's underlying theory hinted at the possibility that a complex system is the result of discrete traits that are passed on, while Darwin suggested that complexity arises due to continuous random variations.

 


In the 1920s, the literature suggested that a complex system has elements of both: continuous adaptation and discrete inheritance that is hierarchical in nature. A group of biologists reconciled the theories into what is commonly known as the Modern Synthesis. The principles guiding the Modern Synthesis were that natural selection is the major mechanism of evolutionary change, and that small random variations of genes plus natural selection result in the origin of new species. Furthermore, the new species might have properties different from the elements that constitute them. The Modern Synthesis thus provided a framework around complexity theory. What does this great debate mean for our purposes? Once we determine that a system is complex, does the debate shed more light on our understanding of complexity and on how we subsequently deal with it? We need to extend our thinking further by looking at a few developments that occurred in the 20th century that give us a better perspective. Let us then continue our journey into the evolution of thinking around complexity.

 

Axioms are statements that are self-evident. An axiom serves as a premise or starting point for further reasoning and arguments. An axiom is thus not contestable, because if it were, all the reasoning extended from it would fall apart. Thus, for our purposes and our understanding of complexity theory, our axiom is: a complex system has an initial state that is irreducible physically or mathematically.

 

One of the key elements in complexity is computation, or computability. In the 1930s, Turing introduced the abstract concept of the Turing machine. There is a lot of literature on the specifics of how the machine works, but that is beyond the scope of this book. However, there are key elements that can be gleaned from the concept to better understand complex systems. A complex system that evolves is the result of a finite number of steps that solve a specific challenge. Although the concept has been applied within the boundaries of computational science, I am taking the liberty of applying it to emerging complex systems. Complexity classes help scientists categorize problems based on how much time and space is required to solve them and to verify solutions. Complexity is thus a function of time and memory. This is a very important concept, and we have radically simplified it to attend to a self-serving purpose: understanding complexity and how to solve the grand challenges. Time complexity refers to the number of steps required to solve a problem. A complex system might not be the most efficient outcome, but it is nonetheless the outcome of a series of steps, backward and forward, that result in a final state. There are pathways, or efficient algorithms, that are produced, and the mechanical states to produce them are defined and known. Space complexity refers to how much memory the algorithm depends on to solve the problem. Let us keep these concepts in mind as we round this all up into a more comprehensive picture at the end of this chapter.
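A small illustration of the two resource measures (the functions here are invented for the example): both compute the same sum, but they spend time and memory differently.

```python
def sum_constant_space(n):
    """O(n) time, O(1) space: n steps, but only a fixed handful of variables."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_linear_space(n):
    """O(n) time, O(n) space: materializes the whole list in memory first."""
    values = list(range(1, n + 1))
    return sum(values)

assert sum_constant_space(10**6) == sum_linear_space(10**6)
```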

Around the 1940s, John von Neumann introduced the concept of self-replicating machines. Like Turing, von Neumann designed an abstract machine which, when run, would replicate itself. The machine consists of three parts: a 'blueprint' for itself, a mechanism that can read any blueprint and construct the machine (sans blueprint) specified by that blueprint, and a 'copy machine' that can make copies of any blueprint. After the mechanism has been used to construct the machine specified by the blueprint, the copy machine is used to create a copy of that blueprint, and this copy is placed into the new machine, resulting in a working replication of the original machine. Some machines will do this backwards, copying the blueprint first and then building the machine. The implications are significant. Can complex systems regenerate? Can they copy themselves and exhibit the same behavior and attributes? Are emergent properties equivalent? Does history repeat itself, or does it rhyme? How does this thinking move our understanding and operating template forward once we identify complex systems?
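A loose software analogy to von Neumann's blueprint-plus-copier, offered purely as an illustration: a quine, a program whose output is its own source. The string plays the role of the blueprint; the print statement plays the roles of constructor and copier.

```python
# The two lines below form a classic Python quine (this comment aside):
# the string is the "blueprint"; printing it with itself substituted in
# reproduces the program's own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```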


Let us step forward into the late 1960s, when John Conway started experiments extending the concept of cellular automata. As a result, he introduced the Game of Life in 1970. His main thesis was simple: the game is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves, or, for advanced players, by creating patterns with particular properties. The entire formulation was done on a two-dimensional universe in which patterns evolve over time. It is one of the finest examples in science of how a few simple, non-arbitrary rules can result in incredibly complex behavior that is fluid and provides pleasing patterns over time. In other words, if one were an outsider looking in, one would see a pattern emerging from simple initial states and simple rules. We encourage you to look at the many patterns that people have constructed using different Game of Life parameters. The main elements are as follows. A square grid contains cells that are alive or dead. The behavior of each cell depends on the state of its eight immediate neighbors – the eight cells adjacent to it on a square grid. These cells strictly follow the rules.

Live Cells:

  1. A live cell with zero or one live neighbors will die.
  2. A live cell with two or three live neighbors will remain alive.
  3. A live cell with four or more live neighbors will die.

Dead Cells:

  1. A dead cell with exactly three live neighbors becomes alive.
  2. In all other cases a dead cell will stay dead.

Thus, what his simulation led to is the determination that life is an example of emergence and self-organization: complex patterns can emerge from the implementation of very simple rules. The Game of Life thus encourages the notion that "design" and "organization" can spontaneously emerge in the absence of a designer.
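The rules above fit in a few lines of code. A minimal sketch (the set-of-live-cells representation and the glider starting pattern are implementation choices, not something the text prescribes):

```python
from collections import Counter

def step(live):
    """Apply one generation of Conway's rules to a set of (x, y) live cells."""
    # Count, for every cell, how many of its eight neighbors are alive.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 live neighbors; survive with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):            # after four steps the glider reappears, shifted
    glider = step(glider)
print(sorted(glider))
```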

Stephen Wolfram introduced the concept of Class 4 cellular automata, of which Rule 110 is well known and widely studied. The Class 4 automata validate much of the thinking grounding complexity theory. He shows that certain patterns emerge from initial conditions that are not completely random or regular: they seem to hint at an order, and yet the order is not predictable. One would expect that applying a simple rule repetitively to the simplest possible starting point would produce a system that is orderly and predictable, but that is far from the truth. The results exhibit some randomness and yet produce patterns with order and structure.
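Rule 110 is compact enough to sketch directly (the grid width, step count and single-cell starting row are arbitrary display choices): each cell's next state is looked up from its left neighbor, itself and its right neighbor, and the number 110 in binary encodes that lookup table.

```python
RULE = 110
# Map each 3-cell neighborhood (left, center, right) to its next state:
# neighborhood value p selects bit p of the rule number.
table = {tuple(int(b) for b in f"{p:03b}"): (RULE >> p) & 1 for p in range(8)}

width, steps = 64, 32
row = [0] * width
row[-1] = 1                       # simplest starting point: a single live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    # Wrap around at the edges so every cell has two neighbors.
    row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
           for i in range(width)]
```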


 

Thus, his main conclusion from his discovery is that complexity does not have to beget complexity: simple forms following repetitive and deterministic rules can result in systems that exhibit complexity that is unexpected and unpredictable. However, he sidesteps the discussion around the level of complexity that his Class 4 automata generate. Does this determine or shed light on evolution, how human beings are formed, how cities evolve organically, how climate is impacted, and how the universe undergoes change? One would argue that it does not. However, if you take into account Darwin's natural selection process, Mendel's laws of genetics and the corresponding propagation of traits, the definitive steps prescribed by the Turing machine that capture time and memory, von Neumann's theory of machines able to replicate themselves without any guidance, and Conway's tour de force in proving that initial conditions without further input can create intelligent-seeming systems – you can start connecting the dots to arrive at a core conclusion: higher-order systems can organically create themselves from initial starting conditions, naturally. They exhibit a collective intelligence which is outside the boundaries of precise prediction. In the previous chapter we discussed complexity, and we introduced an element of subjective assessment into how we regard what is complex and the degree of complexity. Whether complexity falls in the realm of a first-person subjective phenomenon or a scientific third-party objective phenomenon has yet to be ascertained. Yet it is indisputable that the product of a complex system might be considered a live pattern of rules acting upon agents to cause some deterministic yet seemingly random variation.

Complexity: An Introduction

The past was so simple. Life was so simple and good. Those were the good old days. How often have you heard these ruminations? It is fairly common! Surprisingly, as we forge a path into the future, these ruminations gather pace. We become nostalgic, and we thus stoke fear of the future. We attribute a good life to a simple life, but the simple life is measured against the past. In fact, our modus operandi is to chunk up the past into timeboxes and then surface all the positive elements. While that is an endeavor that might give us some respite from what is happening today, the fact is that the nostalgia is largely grounded in fiction. It would be foolish to recall only the best elements of the past and compare them to what we see emerging today, which conflates good and bad. We are wired for survival: if we have survived into the present, it makes for a good argument that the conditions that led to our survival were due to a constellation of good factors that far outweighed the bad. But when we look into a future rife with uncertainty, we create a rather dystopian world – a world of gloom and doom – and then we wonder: why are we so stressed? Soon we engage in a vicious cycle of thought, and our actions are governed by that thought.

You have heard: hope for the best and plan for the worst. Really? When one hopes for the best and the facts do not undermine the trend, would it not be better to hope for the best and plan for the best? It is true that things might not work out as planned, but ought we always to build out models and frameworks to counter that possibility? We say that the world is complex and that the complexity forces us to establish certain heuristics to navigate its forces. So let us understand what complexity is. What does it mean? And with our new understanding of complexity through the course of this chapter, might we arrive at a different mindset, one that speaks of optimism and innovation? We will certainly not settle the matter at the end of this chapter, but we hope to surface enough questions so you can reflect upon where we are and where we are going in a more judicious manner – a manner grounded in facts and values. Let us now begin our journey!


The sky is blue. We hear this statement. It is a simple statement: there is a noun, a verb and an adjective. In the English-speaking world, we can all agree on what constitutes the "sky". We might have a hard time defining it – Merriam-Webster defines the sky as the upper atmosphere or expanse of space that constitutes an apparent great vault or arch over the earth – but a five-year-old would simply point to the sky to define it. Now, how do we define blue? A primary color between green and violet. Is that how you think about blue, or do you just arrive at an understanding of what that color means? Once again, a five-year-old would identify blue; she would not look at green and violet as constituent colors. The statement – the sky is blue – is, for the sake of argument, fairly simple!

However, if we say that the sky is a shade of blue, we introduce an element of ambiguity, don't we? Is it dark blue, light blue, sky blue (so we get into recursive thinking), or some other property that is bluish but not quite blue? What has emerged is an element of complexity – a new variable that might be considered a slider on a scale. How we slide our understanding is determined by our experience, our perception, or even our wishful thinking. The point is that complexity ceases to be a purely objective property. Rather, it is an emergent property driven by our interpretation. Protagoras, an ancient Greek philosopher, said that man is the measure of all things. What he was saying is that our lens of evaluation is purely predicated on our experiences in life; there is nothing that exists outside the boundaries of our experience. Socrates arrived at a different view – namely, he argued that certain elements are ordered in a manner that exists outside the boundaries of our experience. We will get back to this in later chapters. The point is that complexity is an emergent phenomenon that occurs through our interpretation. Natural scientists will argue, like Socrates, that there are complex systems that exist despite our interpretations. And that is true as well. So how do we hold these opposing views at the same time: is that a sign of insanity? Well, that is a very complex question (excuse the pun), and so we need to further expand on the term complexity.

In order to define complexity, let us break this up a bit further. Complex systems have multiple variables; these variables interact with each other; these variables might be subject to interpretation in the human condition; if not, these variables interact in a manner that enables emergent properties which might have a life of their own. Complex systems might be decentralized and have information processing pathways outside the lens of science and human perception. Complex systems are malleable and adaptive.

Markets are complex institutions. When we try to centralize a market, we take the position that we understand its complexity and can thus determine outcomes in a certain way. Socialist governments have long tried to manage markets and have not been successful. The Nobel laureate Friedrich Hayek long argued that markets are the result of spontaneous order, not design. A market has multiple variables, significant information processing is underway at any given time in an active market, and the market adapts to the information processing mechanism. But there are winners and losers in a market as well. Why? Because each participant observes the market dynamics and arrives at different conclusions. Complexity does not follow a deterministic path. Neither does the market, and we have a lot of successes and failures that suggest that to be the case.


Let us look at another example. Examples will give us an appreciation for the concept, and this will be very important as we speed through the journey into the future.

Insect behavior is a case in point. Whether we look at bees or ants, it is a common fact that these insects form extremely complex systems, despite any single bee or ant lacking sufficient instruments for survival on its own. In 1705, Bernard Mandeville published a poem, later expanded into the book The Fable of the Bees. Here is a part of the poem. What Mandeville is clearly hinting at is that attempts to centralize complex systems like a beehive carry an innate failure. Rather, complex systems emerge in ways that create innate mechanisms which stabilize toward success and survival in the long run.

A Spacious Hive well stock’d with Bees,

That lived in Luxury and Ease;

And yet as fam’d for Laws and Arms,

As yielding large and early Swarms;

Was counted the great Nursery

Of Sciences and Industry.

No Bees had better Government,

More Fickleness, or less Content.

They were not Slaves to Tyranny,

Nor ruled by wild Democracy;

But Kings, that could not wrong, because

Their Power was circumscrib’d by Laws.

 

Then we have the ant colonies. An ant is nearly blind. Yet a colony has collective intelligence. The ants work together, despite individual shortcomings that would challenge individual survival, to figure out how to exist and propagate as a group. How does a simple living organism that is subject to the whims and fancies of nature survive and seed every corner of the earth in great volumes? Entomologists, social scientists and biologists have tried to figure this out and have posited many theories. The point is that complex systems are not bounded by our reason alone. The whole is greater than the sum of the parts.

 

Key Takeaway

A complex system is the result of the interaction of a network of variables that gives rise to collective behavior and information processing – a self-learning, adaptive system that does not lie completely within the purview of human explanation.

 

Books to Read – 2017

It has been a while since I posted on this blog. It just so happens that life is what happens to you when you have other plans. Having said that, I decided early this year to read 42 books across a wide range of genres. I have been trying to keep pace, and have succeeded so far.

Here are the books that I have read and plan to read:

  1. Song of Solomon by Toni Morrison  ( Read)
  2. The Better Angels of Our Nature by Steven Pinker ( Read)
  3. Black Dogs by Ian McEwan ( Read)
  4. Nutshell: A Novel by Ian McEwan ( Read)
  5. Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson ( Read)
  6. Moby Dick by Herman Melville
  7. The Plot Against America by Philip Roth
  8. Humboldt’s Gift by Saul Bellow
  9. The Innovators by Walter Isaacson
  10. Sapiens: A Brief History of Humankind by Yuval Noah Harari
  11. The House of Morgan by Ron Chernow
  12. American Political Rhetoric: Essential Speeches and Writings by Peter Augustine Lawler and Robert Schaefer
  13. Keynes Hayek: The Clash that defined Modern Economics by Nicholas Wapshott
  14. The Year of Magical Thinking by Joan Didion
  15. Small Great Things by Jodi Picoult
  16. The Conscience of a Liberal by Paul Krugman
  17. Globalization and its Discontents by Joseph Stiglitz
  18. Twilight of the Elites: America After Meritocracy by Chris Hayes
  19. What Is Mathematics? An Elementary Approach to Ideas and Methods by Courant & Robbins
  20. Algorithms to Live By: The Computer Science of Human Decisions by Christian & Griffiths
  21. Andrew Carnegie by David Nasaw
  22. Just Mercy: A Story of Justice and Redemption by Bryan Stevenson
  23. The Evolution of Everything: How New Ideas Emerge by Matt Ridley
  24. The Only Game in Town: Central Banks, Instability and Avoiding the Next Collapse by Mohamed El-Erian
  25. The Relentless Revolution: A History of Capitalism by Joyce Appleby
  26. The Industries of the Future by Alec Ross
  27. Where Good Ideas come from by Steven Johnson
  28. Originals: How Non-Conformists Move the World by Adam Grant
  29. Start with Why by Simon Sinek
  30. The Discreet Hero by Mario Vargas Llosa
  31. Istanbul by Orhan Pamuk
  32. Jefferson and Hamilton: The Rivalry that Forged a Nation by John Ferling
  33. The Orphan Master’s Son: A Novel by Adam Johnson
  34. Between the World and Me by Ta-Nehisi Coates
  35. Active Liberty: Interpreting Our Democratic Constitution by Stephen Breyer
  36. The Blue Guitar by John Banville
  37. The Euro Crisis and its Aftermath by Jean Pisani-Ferry
  38. Africa: Why Economists get it wrong by Morten Jerven
  39. The Snowball: Warren Buffett and the Business of Life by Alice Schroeder
  40. To Explain the World: The Discovery of Modern Science by Steven Weinberg
  41. The Meursault Investigation by Kamel Daoud, translated by John Cullen
  42. The Stranger by Albert Camus

Building a Lean Financial Infrastructure!

A lean financial infrastructure presumes the ability of every element in the value chain to preserve and generate cash flow. That is the fundamental essence of the lean infrastructure that I espouse. So what are the key elements that constitute a lean financial infrastructure?

And given those elements, what are the key tweaks that one must continually make to ensure that the infrastructure does not fall into entropy and that the gains made do not fall flat or decay over time? Identifying the building blocks, monitoring them, and making rapid changes go hand in hand.


The Key Elements or the building blocks of a lean finance organization are as follows:

  1. Chart of Accounts: This is the critical unit that defines the starting point of the organization. It relays and groups all of the key economic activities of the organization into larger bodies of elements like revenue, expenses, assets, liabilities and equity. Granularity in these activities might lead to a fairly extensive chart of accounts and require more work to manage and monitor, thus requiring an incrementally larger investment of time and effort. However, the benefits of granularity far exceed the costs, because it forces management to look at every element of the business.
  2. The Operational Budget: Every year, organizations formulate the operational budget. That is generally a bottom-up rollup at a granular level that maps to the Chart of Accounts. It might follow a top-down directive around where the organization wants to land with respect to income, expense, balance sheet ratios, et al. Hence, there is almost always a process of iteration in this step to finally arrive at and lock down the budget. Be mindful, though, that there are feeders into the budget that might relate to customers, sales, operational metric targets, etc., all of which are part of building a robust operational budget.
  3. The Deep Dive into Variances: As you progress through the year, as part of the monthly closing process, one inquires how actual performance is tracking against the budget. Since the budget was done at a granular level and mapped exactly to the Chart of Accounts, it becomes easier to understand and delve into the variances. Be mindful that every element of the Chart of Accounts must be evaluated. The general inclination is to focus on the large items or large variances, while skipping the small expenses and smaller variances. That method, while efficient, might not be effective in the long run for building a lean finance organization. The rule, in my opinion, is that every account has to be looked at, and the question should be: why? If management agreed on a number in the budget, then why are the actuals trending differently? Could it have been the budget itself, and did we miss something critical in that process? Or has there been a change in the underlying economics of the business, or a change in activities, that might be leading to these "unexpected variances"? One has to take a scalpel to both favorable and unfavorable variances, since one can learn a lot about the underlying drivers; it might lead to managerially doing more of the better and less of the worse. Furthermore, this is also a great way to monitor leaks in the organization. Leaks are instances of cash dropping out of the system. Many little leaks can amount to a lot of cash in total. So do not disregard the leaks. Not only will attending to them preserve cash, but once you understand the leaks better, the organization will step up in efficiency and effectiveness with respect to cash preservation and delivery of value. (A minimal sketch of this variance review follows this list.)
  4. Tweak the Process: You will find that as you deep dive into the variances, you might want to tweak certain processes so that these variances are minimized. This is generally true for adverse variances against the budget. Seek to understand why the variance occurred, and then understand all of the processes that run in the background to generate activity in the account. Once you fully understand the process, it is a matter of tweaking it to marginally or structurally change some key areas that might favorably resonate across the financials in the future.
  5. The Technology Play: Finally, evaluate the possibilities of deploying technology to surface issues early, automate repetitive processes, trigger alerts early on to mitigate issues later, and provide on-demand analytics. Use technology to free up time and to enable more thinking around how to improve the internal handoffs that further economic value in the organization.
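To make the variance review concrete, here is a minimal sketch in Python of comparing actuals against budget across every account, not just the large ones. The account names and figures are hypothetical and far coarser than a real chart of accounts.

```python
# Minimal sketch of a budget-versus-actual variance review.
# Account names and figures are hypothetical; a real chart of
# accounts would be far more granular.

budget = {"revenue": 500_000, "payroll": 200_000, "hosting": 40_000, "supplies": 3_000}
actual = {"revenue": 480_000, "payroll": 205_000, "hosting": 52_000, "supplies": 2_500}

for account in budget:
    variance = actual[account] - budget[account]
    pct = variance / budget[account] * 100
    # Every account gets asked "why?", not just the large ones.
    print(f"{account:<10} budget={budget[account]:>9,} actual={actual[account]:>9,} "
          f"variance={variance:>+9,} ({pct:+.1f}%)")
```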

All of the above relate to managing the finance and accounting organization well within its own domain. However, there is a bigger step that comes into play once one has established these building blocks, and that relates to corporate strategy and linking it to the continual evolution of the financial infrastructure.

The essential question that the lean finance organization has to answer is: what can the organization do to address every element that preserves and enhances value to the customer, and how do we eliminate all non-value-added activities? This is largely a process question, but it forces one to understand the key processes and identify what percentage of each process is value-added to the customer versus non-value-added. This can be represented along a time or cost dimension. The goal is to maximize value-added activities, since the underlying presumption is that such activity will lead to the preservation of cash and also increase cash acquisition from the customer.
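As a toy illustration of the value-added versus non-value-added split along the time dimension, here is a minimal sketch; the process steps and durations are hypothetical.

```python
# Minimal sketch: classify the steps of one process as value-added
# or non-value-added and compute the value-added share of total time.
# Step names and durations are hypothetical.

steps = [
    ("enter customer order",  10, True),   # value-added to the customer
    ("rekey order into ERP",  15, False),  # duplicate handoff
    ("pick and pack goods",   30, True),
    ("wait for approval",     45, False),  # queue time
]

total_minutes = sum(minutes for _, minutes, _ in steps)
value_added = sum(minutes for _, minutes, is_va in steps if is_va)
print(f"value-added share: {value_added / total_minutes:.0%} of {total_minutes} minutes")
```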

Comparative Literature and Business Insights

“Literature is the art of discovering something extraordinary about ordinary people, and saying with ordinary words something extraordinary.” – Boris Pasternak

“It is literature which for me opened the mysterious and decisive doors of imagination and understanding. To see the way others see. To think the way others think. And above all, to feel.” – Salman Rushdie


There is a common theme that cuts across literature and business. It is called imagination!

Great literature seeds the mind to imagine faraway places across times and unique cultures. When we read a novel, we are exposed to complex, richly defined characters, and the reader’s subjective assessment of the characters and the context defines their understanding of how those characters navigate their relationships and their environment. Great literature offers many pauses for thought, and long after the book is read through, the theme gently seeps, like silt, into the reader’s cumulative experiences. It is in literature that the concrete outlook of humanity receives its expression. Comparative literature, which is literature assimilated across many different countries, enables a diversity of themes to intertwine with the reader’s experiences, augmented by the reality of what they immediately experience: home, work, and so on. It allows one not only to be capable of empathy but also to craft the fluid dynamics of ever-changing concepts by dipping into many different case studies of human interaction. The novel and the poem are the bulwarks of literature. It is as important to study a novel as it is to enjoy great poetry. The novel is characterized by a plot (or plots) and a rich tapestry of characters’ actions as they navigate their environments; poetry is the celebration of the ordinary into extraordinary enactments of the rhythm of language that transport the reader, through image and metaphor, into single moments. It breaks the linear process of thinking, running perpendicular to the novel.


Business insights are generally a result of acute observation of trends in the market, internal processes, and general experience. Some business schools practice the case study method, which allows the student to have a fairly robust set of data points to fall back upon. Some of these case studies are fairly narrow, but there are some that get one to think about personal dynamics. It is a fact that personal dynamics, biases, and positioning play a very important role in how one advocates, views, or acts upon a position. Schools are now layering in classes on ethics to underscore that there are some fundamental protocols of human nature that one has to follow: the famous adage that all is fair in love and war has continued to lose its edge over time. Globalization, environmental consciousness, individual rights, the idea of democracy, the right of fair representation, community service and business philanthropy are playing a bigger role in today’s society. Thus, business insights today are a result of reflection across multiple levels of experience that encompass not just the company or the industry but a broader array of elements that exercise influence on the company’s direction. In addition, one always seeks an end in mind: a vision, perpetually embraced, that is shaped by one’s judgments, observations and thoughts. Poetry adds the final wing for the flight into this metaphoric realm of interconnections, for that is what a vision always is: a semblance of harmony that inspires and resurrects people to action.


I contend that comparative literature is a leading indicator that allows a person to get a feel for the general direction of the express and latent needs of people. Note, however, that comparative literature does not offer a solution. Great literature does not portend a particular end; it leaves open a multitude of possibilities and what-ifs. The reader can transport themselves into the environment and wonder how they would act, and that jump into a vicarious existence steeps the reader in a reflection that sharpens the intellect. This leaves the reader in a business better positioned to excavate and address the needs of current and potential customers across boundaries.

“Literature gives students a much more realistic view of what’s involved in leading” than many business books on leadership. “Literature lets you see leaders and others from the inside. You share the sense of what they’re thinking and feeling. In real life, you’re usually at some distance and things are prepared, polished. With literature, you can see the whole messy collection of things that happen inside our heads.” – Joseph L. Badaracco, the John Shad Professor of Business Ethics at Harvard Business School (HBS)

Debt Financing: Notable Elements to Consider

We have discussed financing via Convertible Debt and Equity Financing. There is a third instrument that is equally important and ought to be in the arsenal for financing the working capital requirements of the company.

Here are some common Term Sheet terms that you should be aware of when opening up a credit facility.

Formula-based Line of Credit: There are some variants to this, but the key driver is that the LOC is extended against eligible receivables. Generally, eligible receivables are defined, at a high level, as receivables that are within 90 days of invoice. There are some additional elements that can reduce the eligible base; items typically excluded are as follows (a borrowing-base sketch follows the list):

  1. Accounts outstanding for more than 90 days from the invoice date
  2. Credit balances over 90 days
  3. Foreign AR: some banks will specifically exclude foreign receivables
  4. Intra-company AR
  5. Concentration limits: banks might impose a concentration limit, under which, for example, any account that represents more than 30% of the outstanding AR may be excluded from the mix; alternatively, credit may be extended up to the 30% cap and no more
  6. Cross-aging limit: where 35% or more of an account’s receivables are past due (greater than 90 days), the entire account is ineligible
  7. Pre-bills: services have to be rendered or goods shipped for an invoice to constitute a true invoice, so pre-bills are not eligible
  8. Government receivables: in some instances, you may be precluded from including receivables from the government
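Below is a minimal sketch, in Python, of how a borrowing base might be computed under exclusions like these. The 80% advance rate, the invoice data, and the 30% concentration cap are hypothetical illustrations rather than terms of any actual facility, and the cross-aging and pre-bill checks are omitted for brevity.

```python
# Minimal sketch of a formula-based borrowing base. Advance rate,
# thresholds, and invoice data are hypothetical; cross-aging and
# pre-bill checks are omitted for brevity.

from datetime import date

ADVANCE_RATE = 0.80        # hypothetical advance rate against eligible AR
CONCENTRATION_CAP = 0.30   # per-account cap as a share of total AR

invoices = [
    # (customer, amount, invoice_date, is_foreign, is_intercompany)
    ("Acme",    120_000, date(2024, 5, 1),  False, False),
    ("Globex",   60_000, date(2024, 1, 10), False, False),  # stale: > 90 days old
    ("Initech",  40_000, date(2024, 5, 20), True,  False),  # foreign AR
]
today = date(2024, 6, 1)

total_ar = sum(amount for _, amount, *_ in invoices)

# Drop excluded invoices, then sum the remainder per customer.
eligible_by_customer = {}
for customer, amount, inv_date, foreign, interco in invoices:
    if (today - inv_date).days > 90 or foreign or interco:
        continue  # excluded from the eligible base
    eligible_by_customer[customer] = eligible_by_customer.get(customer, 0) + amount

# Apply the concentration cap per customer, then the advance rate.
eligible = sum(min(amt, CONCENTRATION_CAP * total_ar)
               for amt in eligible_by_customer.values())
print(f"borrowing base: ${ADVANCE_RATE * eligible:,.0f}")  # $52,800 in this example
```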

Non-formula-based LOC: Credit is extended not against AR but based on what you negotiate with the Bank. The Bank will generally provide a non-formula-based LOC based on historical cash flows, EBITDA, and a board-approved budget. In some instances, if you expect to capitalize the company via an equity line in the near future, the bank would be inclined to raise the LOC.

Interest Rate

In either of the above two cases, the interest rate charged is basically a prime reference rate plus some spread. For example, the bank may spell out that the interest rate is the Prime Reference Rate + 1.25%. If the Prime Rate is 3.25%, then the cost to the company is 4.5%. Note, though, that if the company is profitable and the average tax rate is 40%, then the real, after-tax cost to the company is 4.5% × (1 − 40%) = 2.7%.
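The arithmetic in the example works out as follows (a minimal sketch; the rates mirror the example above):

```python
# After-tax cost of a prime-plus-spread line, per the example above.

prime_rate = 0.0325    # Prime Reference Rate: 3.25%
spread     = 0.0125    # negotiated spread: +1.25%
tax_rate   = 0.40      # average tax rate, assuming the company is profitable

pre_tax_cost   = prime_rate + spread             # 4.50%
after_tax_cost = pre_tax_cost * (1 - tax_rate)   # 2.70%
print(f"pre-tax: {pre_tax_cost:.2%}  after-tax: {after_tax_cost:.2%}")
```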

Maturity Period

For all facilities, there is a Maturity Period. In most instances, it is 24 months. Interest is paid monthly and the principal is due at maturity.

Facility Fees

Banks will charge a Facility Fee. Depending on the size of the facility, some amount could be due at close and some amount due at the first anniversary of the date the facility contract was executed.

First Priority Rights

The Bank will have a first-priority UCC-1 security interest in all assets of the Borrower: present and future inventory, chattel paper, accounts, contract rights, unencumbered equipment, and general intangibles (excluding intellectual property), as well as the right to proceeds from accounts receivable, inventory, and the sale of intellectual property to repay any outstanding Bank debt.

The Bank may insist on having rights to the IP. That becomes another negotiation point. You can negotiate a negative pledge, which effectively means that you will not pledge your IP to any third party.

Bank Covenants

The Bank will also insist on some financial covenants. Some of the common ones are:

  1. Adjusted Quick Ratio, defined as (Cash held at the Bank + Eligible Receivables) / (Current Liabilities less Deferred Revenue)
  2. Trailing EBITDA requirement, measured over a trailing six-month or twelve-month window
  3. EBIT to Interest Coverage Ratio, defined as EBIT / Interest Payments; the Bank may require 1.5x or 2x coverage
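A minimal sketch of a monthly check against these three covenants follows; the financial figures and the covenant floors are hypothetical, since actual covenant levels are negotiated per facility.

```python
# Minimal sketch of a monthly covenant check. All figures and
# covenant thresholds are hypothetical.

cash_at_bank         = 900_000
eligible_receivables = 600_000
current_liabilities  = 1_200_000
deferred_revenue     = 300_000
trailing_ebitda      = 250_000   # trailing 12 months
ebit                 = 200_000
interest_payments    = 90_000

aqr = (cash_at_bank + eligible_receivables) / (current_liabilities - deferred_revenue)
coverage = ebit / interest_payments

print(f"Adjusted Quick Ratio: {aqr:.2f}  (hypothetical floor: 1.25)")
print(f"Trailing 12m EBITDA:  {trailing_ebitda:,}  (hypothetical floor: 0)")
print(f"Interest coverage:    {coverage:.2f}  (hypothetical floor: 1.50)")
```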

Monthly Financial Requirements

The Bank will require monthly financial statements prepared according to GAAP, along with a Bank Compliance Certificate.

The Bank may seek an audit or an independent review of the financial statements within 90-180 days after each fiscal year end.

You will have to provide monthly AR and AP aging reports and an inventory breakdown.

In the event of a reforecast of the Budget or Operating Plan that has been approved by the Board, you will have to provide that information to the bank as well.


Bank Oversight and Audit

The Bank will reserve the right to conduct a collateral audit for the formula-based line of credit. You will have to pay the audit fees. In general, you can negotiate a cap on these fees and on the frequency of such audits.

Most of the above relates to the large number of startups that do not carry inventory. For companies that do acquire inventory from international suppliers, the Bankers Acceptance is another instrument worth knowing.

Bankers Acceptance

BAs are frequently used in international trade because of advantages for both sides. Exporters often feel safer relying on payment from a reputable bank than from a business with which they have little, if any, history. Once the bank verifies, or “accepts”, a time draft, it becomes a primary obligation of that institution.

Here’s one typical example. You decide to purchase 100 widgets from Lee Ku, a Chinese exporter. After completing the trade agreement, you approach your bank for a letter of credit. This letter of credit makes your bank the intermediary responsible for completing the transaction.

Once Lee Ku, your supplier, ships the goods, it sends the appropriate documents, typically through its own bank, to your bank in the United States. The exporter now has a couple of choices. It could keep the acceptance until maturity, or it could sell it to a third party, perhaps to your bank, the one responsible for making the payment. In the latter case, Lee Ku receives an amount less than the face value of the draft but does not have to wait for the funds: the bank earns some fees and the supplier gets its money.

When a bank buys back the acceptance at a lower price, it is said to be “discounting” the acceptance. If your bank does this, it essentially has the same choices that your Chinese exporter had. It could hold the draft until it matures, which is akin to extending the importer a loan. More commonly, though, the bank will charge you a fee in advance, a percentage of the acceptance, which could be anywhere from 2-4% of its value. In theory, you can get anywhere between 90 and 180 days of financing using a BA as an instrument to fund your inventory.
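A minimal sketch of the economics from the exporter's side, assuming a hypothetical $100,000 acceptance, a 3% fee (within the 2-4% range above), and a 120-day tenor (within the 90-180 day window):

```python
# Minimal sketch of bankers acceptance economics. Face value, fee,
# and tenor are hypothetical but sit within the ranges quoted above.

face_value = 100_000   # value of the accepted time draft
fee_rate   = 0.03      # within the 2-4% range
tenor_days = 120       # within the 90-180 day financing window

fee = face_value * fee_rate
exporter_proceeds = face_value - fee   # if the exporter discounts the draft
print(f"acceptance fee: ${fee:,.0f}")
print(f"exporter receives ${exporter_proceeds:,.0f} now "
      f"instead of ${face_value:,.0f} in {tenor_days} days")
```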

Dangers of Debt Financing

Debt financing can be a cheap financing method. However, it carries potential risk. If you are not able to service the debt, the bank can, at the extreme, force you into bankruptcy. Alternatively, it can put you in forbearance and work out a plan to get back the principal amount, or it can assume receivership and collect the money owed on your behalf. These are all draconian triggers that may happen, and hence it is important to maintain a good relationship with your banker. Most importantly, give them any bad news ahead of time; it is really bad when they learn of bad news later, and it will limit your ability to negotiate terms with the bank.

Manage Debt

In general, if you draw down against the LOC, it is always a good idea to pay it down as soon as possible. That ought to be your primary operational strategy. It will minimize interest expense, keep the line open, establish a better rapport with the bank and, most importantly, force you to become a more disciplined organization. You ought to regard bank financing as a bridge for your working capital requirements. To the extent you can minimize the bridge by converting your receivables to cash, minimizing operating expenses, and maximizing your margin, you will be in a happier place. Debt financing also gives you time to build value in the organization rather than relying upon an equity line, which is a costlier form of financing.

Having said that, there will be times when your investors may push back on your debt financing strategy. In fact, if you have raised equity prior to debt, you may even have to get signoff from the equity investors. Their big concern is that leverage takes away from the value of the company. That is not necessarily true: corporate finance theory suggests that intelligent debt financing can, in fact, increase corporate value. However, the investors may see debt as your way of stalling further investment requirements, and thus as deferring their opportunity to own more of your company at a lower value.
