Category Archives: Risk Management

Bias and Error: Human and Organizational Tradeoff

“I spent a lifetime trying to avoid my own mental biases. A.) I rub my own nose into my own mistakes. B.) I try and keep it simple and fundamental as much as I can. And, I like the engineering concept of a margin of safety. I’m a very blocking and tackling kind of thinker. I just try to avoid being stupid. I have a way of handling a lot of problems — I put them in what I call my ‘too hard pile,’ and just leave them there. I’m not trying to succeed in my ‘too hard pile.’” – Charlie Munger, 2020 CalTech Distinguished Alumni Award interview

Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error.  Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average.

Error refers to an outcome that differs from reality within the context of the objective function being pursued.

Thus, I would like to think that bias is a process that might lead to an error. However, that is not always the case: there are instances where a bias might get you to an accurate, or close to accurate, result. Is having a biased framework always a bad thing? Not necessarily. From an evolutionary standpoint, humans have progressed along the dimension of making rapid judgements, many of them stemming from experience and from exposure to elements in society. Rapid judgements are typified as System 1 judgement (Kahneman, Tversky), which allows bias and heuristic to commingle and arrive effectively at intuitive decision outcomes.

And again, the decision framework constitutes a continually active process in how humans and/or organizations execute upon their goals. It is largely an emotional response but could just as well be an automated response to a certain stimulus. However, there is a danger in System 1 thinking: it might lead one comfortably toward an outcome that seems intuitive, while the actual result is significantly different, leading to an error in judgement. In mathematics and philosophy, you often hear of the problem of induction, which establishes that your understanding of a future outcome relies on the continuity of past outcomes; that is an errant way of thinking, although it still represents a useful tool for advancing toward solutions.

System 2 judgement emerges as a means to temper the more significant variabilities associated with System 1 thinking. System 2 thinking represents a more deliberate approach that leads to a more careful construct of rationale and thought. It slows down decision making, since it explores the logic, the assumptions, and how the framework fits together across test contexts. There is a lot more at work: the person or the organization has to invest the time, focus the effort, and amplify concentration on the problem being wrestled with. This is also the process in which you search for biases that might be at play and minimize or remove them altogether. Thus, the two systems represent two different patterns of thinking: rapid, more variable, and more error-prone outcomes versus slow, stable, and less error-prone outcomes.

So let us revisit the bias vs. variance tradeoff. The idea is that the more bias you bring to a problem, the less variance there is in the aggregate. That does not mean you are accurate; it only means there is less variance in the set of outcomes, even if all of the outcomes are materially wrong. Bias limits variance because it enforces a constraint on the hypothesis space, leading to a smaller, closely knit set of probabilistic outcomes. If you remove the constraints on the hypothesis space – namely, you remove bias from the decision framework – you are faced with a significant number of possibilities and a larger spread of outcomes. With that said, the expected value of those outcomes might actually be closer to reality, despite the variance, than a framework decided upon by applying heuristics or operating in a biased mode.
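
To make the tradeoff concrete, here is a minimal Python sketch; the prior_guess and shrinkage weight are made-up illustrative values, not anything from the text. A "biased" estimator that shrinks toward a prior produces tightly clustered but off-target estimates, while the unconstrained estimator is noisier but centered near the truth.

```python
import numpy as np

# Minimal sketch of the bias-variance tradeoff described above.
# "Truth" is an unknown quantity we try to estimate from small, noisy samples.
rng = np.random.default_rng(0)
truth = 10.0
prior_guess = 7.0   # a biased prior belief (the "constraint" on the hypothesis space)
shrinkage = 0.8     # how strongly the biased estimator leans on the prior

unbiased, biased = [], []
for _ in range(10_000):
    sample = rng.normal(truth, 5.0, size=5)      # small, noisy sample
    x_bar = sample.mean()
    unbiased.append(x_bar)                                            # no constraint: high variance
    biased.append(shrinkage * prior_guess + (1 - shrinkage) * x_bar)  # constrained: low variance

for name, est in [("unbiased", unbiased), ("biased", biased)]:
    est = np.array(est)
    print(f"{name:9s} mean={est.mean():6.2f}  variance={est.var():6.2f}")
# The biased estimator clusters tightly (low variance) but around the wrong value;
# the unbiased estimator is spread out (high variance) but centered near the truth.
```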

So how do we decide then? Jeff Bezos made a point that I recall: some decisions are one-way doors and some are two-way doors. In other words, some decisions cannot be undone, for good or for bad. It is a wise person who anticipates that early on and decides which system to pursue. An organization makes a few big, important decisions and a lot of small decisions. Identify the big ones and spend oodles of time, encouraging a diverse set of input, to work through those decisions at a sufficiently high level of detail. When I personally craft rolling operating models, they serve a strategic purpose that might sit on shifting sands. That is perfectly okay! But it is critical to evaluate those big decisions, since the crux of the strategy's effectiveness and its concomitant quantitative representation rests upon them. Cutting corners can lead to disaster or an unforgiving result!

I will focus on the big whale decisions now. I will assume, for the sake of expediency, that the series of small decisions, in the aggregate or by themselves, will not be large enough to take us over the precipice. (It is important, however, to examine the possibility that a series of small decisions can lead to a more holistic, unintended emergent outcome with a whale effect: we come across that in complexity theory, which I have touched on in a set of previous articles.)

Cognitive biases are the biggest culprits that one needs to worry about. Some of the more common biases are confirmation bias, attribution bias, the halo effect, anchoring, the framing of the problem, and status quo bias. There are other cognitive biases at play, but the ones listed above are common in planning and execution. It is imperative that these biases be forcibly peeled away while formulating a strategy for problem solving.

But then there are also the statistical biases that one needs to be wary of. How we select data – selection bias – plays a big role in validating information. In fact, if there are underlying statistical biases, the validity of the information is questionable. Then there are other strains of statistical bias: forecast bias, for example, is the natural tendency to be overly optimistic or pessimistic without any substantive evidence to support either case. Sometimes how the information is presented – visually or in tabular format – can lead to errors of omission and commission, leading the organization and its judgement down paths that are unwarranted and just plain wrong. Thus, it is important to be aware of how statistical biases come into play to sabotage your decision framework.

One of the finest illustrations of misjudgment has been laid out by Charlie Munger. Here is the link: https://fs.blog/great-talks/psychology-human-misjudgment/ He lays out a comprehensive list of 25 biases that ail decision making. Once again, stripping out biases does not necessarily result in accuracy – it increases the variability of outcomes, which might be clustered around a mean that is closer to accuracy than otherwise.

Variability is noise. We do not know a priori what the expected mean is. We are close, but not quite. There is noise – a whole set of outcomes – around the mean. Viewing things closer to the ground versus from higher up still creates a likelihood of accepting a false hypothesis or rejecting a true one. Noise is extremely hard to sift through, but how you sift through it to arrive at the signals that are the determining factors is critical to organizational success. To get to this territory, we have eliminated the cognitive and statistical biases. Now comes the search for the signal. What do we do then? An increase in noise impairs accuracy. To improve accuracy, you either reduce noise or figure out the indicators that signal an accurate measure.

This is where algorithmic thinking comes into play. You start establishing well-tested algorithms for specific use cases and cross-validate them across a large set of experiments or scenarios. It has been shown that algorithmic tools are, in the aggregate, superior to human judgement, since they can systematically surface causal and correlative relationships. Furthermore, tools like principal component analysis and factor analysis can incorporate a large set of input variables and establish patterns that would be impenetrable even for a System 2 mindset to comprehend. This brings decision making toward the signal variants and thus fortifies it.
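
As a hedged illustration of the point about principal component analysis, the following sketch runs scikit-learn's PCA on synthetic data; the dimensions, noise level, and number of hidden drivers are arbitrary assumptions. Forty noisy indicators driven by three hidden factors collapse to three dominant components.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch: reduce a large input-variable set to a few signal-bearing components.
rng = np.random.default_rng(1)
n_obs, n_vars = 500, 40
latent = rng.normal(size=(n_obs, 3))                 # 3 hidden drivers (the "signal")
loadings = rng.normal(size=(3, n_vars))
data = latent @ loadings + rng.normal(scale=0.5, size=(n_obs, n_vars))  # 40 noisy indicators

pca = PCA(n_components=10)
pca.fit(data)
print(np.round(pca.explained_variance_ratio_, 3))
# Most of the variance is concentrated in the first three components -- the signal --
# while the remaining components are mostly noise.
```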

The final element is to assess the time commitment required to go through all the stages. Given infinite time and resources, there is always a high likelihood of arriving at the signals that are material for sound decision making. Alas, the reality of life does not play well to that assumption! Time and resources are constraints, so one must make do with sub-optimal decision making and establish a cutoff point at which the benefits outweigh the risks of looking for another alternative. That comes down to the realm of judgement. While George Stigler, a Nobel Laureate in Economics, introduced search optimization in fixed sequential search, a more concrete example is illustrated in “Algorithms to Live By” by Christian & Griffiths. They suggest a rule of thumb: 37%. In other words, explore up to 37% of your estimated maximum effort without committing, and then choose the first option that beats everything you have seen so far. While the estimated maximum effort is quite ambiguous and afflicted with all of the elements of bias (cognitive and statistical), the best approach is to be as honest as possible in assessing that effort and then draw your search cutoff.
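
A small simulation, under the usual optimal-stopping assumptions (candidates arrive in random order, no going back), shows why 37% is the suggested cutoff; the candidate and trial counts below are arbitrary.

```python
import numpy as np

# Sketch of the 37% rule: observe the first ~37% of candidates without committing,
# then pick the first one that beats everything seen so far.
rng = np.random.default_rng(2)
n_candidates, n_trials = 100, 10_000
cutoff = int(round(0.37 * n_candidates))

best_found = 0
for _ in range(n_trials):
    scores = rng.permutation(n_candidates)           # unknown qualities, random order
    benchmark = scores[:cutoff].max()                 # explore phase: observe only
    rest = scores[cutoff:]
    better = rest[rest > benchmark]
    choice = better[0] if better.size else rest[-1]   # commit to first improvement, else take the last
    best_found += (choice == n_candidates - 1)        # did we pick the overall best?

print(f"picked the single best candidate in {best_found / n_trials:.1%} of trials")
# With a 37% cutoff this hovers around 37%, the theoretical optimum for this setup.
```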

An important element of leadership is making calls. Good calls, not necessarily the best calls! Calls that weigh all the circumstances one can, being aware of the biases, bringing in a diverse set of knowledge and opinions, falling back upon agnostic statistical tools, and knowing when one has learnt enough to pull the trigger. And it is important to cascade the principles of decision making, and the underlying complexity, into and across the organization.

Winner Take All Strategy

Being the first to cross the finish line makes you a winner in only one phase of life. It’s what you do after you cross the line that really counts.
– Ralph Boston

Does a winner-take-all strategy apply outside the boundaries of a complex system? Let us put it another way. If one were to pursue a winner-take-all strategy, does this willful strategic move not bind them to the constraints of complexity theory? Will the net gains accumulate over time at a pace far greater than the corresponding entropy that might be a by-product of such a strategy? Does natural selection exhibit a winner-take-all strategy over time, and ought we then to regard that winning combination to spur our decisions around crafting such strategies? Are we fated in the long run to arrive at a world where there will be very few winners in all niches, and what would that mean? How does that square with our good intentions of creating equal opportunities and a fair distribution of access to resources to a wider swath of the population? In other words, is winner-take-all a deterministic fact, and do all our trivial actions to counter it constitute love's labor lost?


Natural selection is a mechanism for evolution. It explains how populations or species modify over time in such a manner that they become better suited to their environments. Recall the discussion on managing scale in the earlier chapter, where we briefly discussed aligning internal complexity to external complexity. Natural selection is how that plays out at a biological level. Essentially, natural selection posits that living organisms have inherited traits that help them survive and procreate. These organisms will largely leave more offspring than their peers, since the presumption is that they carry key traits that will survive the vagaries of external complexity and environment (predators, resource scarcity, climate change, etc.). Since these traits are passed on to the next generation, they become more common until they are dominant across generations, provided the environment has not been punctuated by massive changes. Organisms with these dominant traits will have adapted to their environment. Natural selection does not necessarily suggest that what is good for one is good for the collective species.


An example that Robert Frank shared in his book “The Darwin Economy” is the case of the large antlers of the bull elk. These antlers developed as an instrument for attracting mates rather than warding off predators. Big antlers suggest a greater likelihood that a bull elk will marginalize the elks with smaller antlers. Over time, the bull elks with small antlers would die off, since they would not be able to produce offspring and pass on their traits. Thus, the bull elk population would largely comprise elks with large antlers. However, the flip side is that large antlers compromise mobility, making the elk more likely to be attacked by predators. Although the individual elk with large antlers might succeed in staying around over time, it is also true that the compromised mobility associated with large antlers would hurt the propagation of the species as a collective group. We will return to this very important concept later. The interests of individual animals are often profoundly in conflict with the broader interests of their own species.

Corresponding to the development of the natural selection mechanism is the concept of “survival of the fittest,” introduced by Herbert Spencer. One often uses natural selection and survival of the fittest interchangeably, and that is plain wrong. Natural selection never claims that the species that will emerge is the strongest, the fastest, or the largest: it simply claims that the species will be the fittest, namely that it will evolve in a manner best suited to the environment in which it resides. Put another way: survival of the most sympathetic is perhaps more applicable. Organisms that are more sympathetic and caring, and that work in harmony with the exigencies of an environment largely outside their control, are more likely to succeed and thrive.


We will digress into the world of business. A common conception, widely discussed, is that businesses must position toward a winner-take-all strategy – especially in industries that have very high entry costs. Once these businesses entrench themselves in the space, the next immediate initiative is to literally launch a full-frontal assault involving huge investments to capture the mind and the wallet of the customer. Peter Thiel says: “Competition is for losers. If you want to create and capture lasting value, look to build a monopoly.” Once that is built, it is hard to displace!


Scaling the organization intentionally is key to long-term success. There are a number of factors that contribute toward developing scale and thus establishing a strong footing in particular markets. Some of the key factors are listed below:

  1. Barriers to entry: Some organizations have naturally cost-prohibitive barriers to entry, like utility companies or automobile plants; they require large investments. On the other hand, organizations can themselves influence and erect huge barriers to entry even where such barriers did not previously exist. Organizations massively invest in infrastructure, distribution, customer acquisition and retention, brand, and public relations. Organizations that are able to do this rapidly, at massive scale, are the ones expected to exercise leverage over a big consumption base well into the future.
  2. Multi-sided platform impacts: The value of information across multiple subsystems – company, supplier, customer, government – increases disproportionately as the platform expands. We noted earlier that if cities expand by 100%, innovation and goods increase by roughly 115% – the concept of super-linear scaling (a small numerical sketch follows this list). As more nodes are introduced into the system and a better infrastructure is created to support communication and exchange between the nodes, the more entrenched the business becomes. And interestingly, the business grows at a sub-linear scale – namely, it consumes proportionally fewer resources as it grows. Hence we see the large unicorn valuations among companies where investors and market makers place calculated bets of colossal magnitude. The magnitude of such investments is a relatively recent phenomenon, largely driven by the advances in technology that connect all stakeholders.
  3. Investment in learning: To manage scale is also to be selective about the information a system receives, how that information is processed internally, and how it is relayed to the external system or environment. This requires massive investment in areas like machine learning, artificial intelligence, and big data, enabling increased computational power and the development of new learning algorithms. It means that organizations have to align infrastructure and capability while also working with external environments through public relations, lobbying groups, and policymakers to chaperone a comprehensive, very complex, hard-to-replicate learning organism.
  4. Investment in brand: Brand personifies the value attributes of an organization. One connects brand to customer experience and to the perception of the organization's product. To manage scale and grow, organizations must invest in brand to capture increased mindshare of the consumer. In complexity-science terms, the internal systems are shaped to emit powerful signals to the external environment and urge a response. Brand and learning work together to allow harmonic growth of an internal system in the context of its immediate environment.
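
As a rough numerical sketch of the super-linear and sub-linear scaling claims above: the sub-linear exponent of 0.85 is an assumed illustrative value, and the super-linear exponent is simply backed out of the "100% expansion yields 115%" statement.

```python
import math

# Back out the power-law exponent implied by "a doubling produces a 115% increase",
# i.e. 2**beta = 2.15, then compare super-linear output with sub-linear resource use.
beta_superlinear = math.log2(2.15)   # ~1.10: outputs grow faster than size
beta_sublinear = 0.85                # assumed illustrative exponent for resource consumption

def scale(base: float, growth_factor: float, beta: float) -> float:
    """Quantity after the underlying size grows by growth_factor under a power law."""
    return base * growth_factor ** beta

print(f"implied super-linear exponent: {beta_superlinear:.2f}")
print(f"output after a doubling:    {scale(100, 2, beta_superlinear):.0f}  (vs. 100 before)")
print(f"resources after a doubling: {scale(100, 2, beta_sublinear):.0f}  (less than double)")
```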


However, one must return to the science of complexity to understand the long-term challenges of a winner-take-all mechanism. We have already seen in the example that what is good for the individual bull elk might not be best for the species in the long term. We also see that super-linear scaling systems emit significant negative by-products. Thus, the question we need to ask is whether organizations, in their ambitions of pursuing scale and market entrenchment, are paradoxically cultivating the seeds of their own destruction.

Internal versus External Scale

This article discusses internal and external complexity before teeing up a more detailed discussion of internal versus external scale. It acknowledges that complex adaptive systems have inherent internal and external complexities which are not additive: the impact of these complexities is exponential. Hence, we have to sift through our understanding and perhaps even review the salient aspects of complexity science, which have already been covered in relatively more detail in an earlier chapter. Revisiting complexity science is important, however, and we will often return to it across other blog posts to really hit home the fundamental concepts and their practical implications for management and for solving challenges at a business or even a grander social scale.


A complex system is part of a larger environment. It is safe to say that the larger environment is more complex than the system itself. But for the complex system to work, it needs to depend on a certain level of predictability and regularity between the impact of the initial state and the events associated with it, or the interaction of the variables within the system itself. Note that I am covering both complex physical systems and complex adaptive systems in this discussion. A system within an environment has an important attribute: it serves as a receptor to signals from the external variables of the environment that impact the system. The system will either process a signal or discard it, based largely on what the system is trying to achieve. We will dedicate an entire article to systems engineering and thinking later, but the uber point is that a system exists to serve a definite purpose. All systems depend on resources and exhibit a certain capacity to process information. Hence, a system will try to extract as many regularities as possible to enable a predictable dynamic, in an efficient manner, in order to fulfill its higher-level purpose.


Let us understand external complexity; we can use the term environmental complexity interchangeably. External complexity represents physical, cultural, social, and technological elements that are intertwined. These environments, beleaguered with their own grades of complexity, act as a mold that shapes the operating systems, which are mere artifacts within them. If operating systems can fit well within the mold, then there is a measure of fitness or harmony between internal complexity and external complexity. This is the root of dynamic adaptation. When external environments are very complex, there are a lot of variables at play, and thus an internal system has to process more information in order to survive. How the internal system reacts to external systems is therefore important, and the key bridge between the two systems is learning. Does the system learn and improve outcomes on account of continuous learning, and does it continually modify its existing form and functional objectives as it learns from external complexity? How is the feedback loop monitored and managed when one deals with internal and external complexities? The environment generates random problems and challenges; the internal system has to accept or discard these problems and then establish a process to distribute them among its agents to efficiently solve the ones it hopes to solve. There is always a mechanism at work that tries to align internal complexity with external complexity, since it is widely believed that the ability to efficiently align the two is the key to maintaining a relative competitive edge or to intentionally making progress on a set of important challenges.

Internal complexity arises from the sub-elements that interact as constituents of a system residing within the larger context of an external complex system, or environment. It depends on the number of variables in the system, the hierarchical complexity of those variables, the internal capability for information pass-through between the levels and the variables, and finally how the system learns from the external environment. There are five dimensions of complexity: interdependence, diversity of system elements, unpredictability and ambiguity, the rate of dynamic mobility and adaptability, and the capability of the agents to process information and their individual channel capacities.


If we are discussing scale management, we need to ask some fundamental questions. What is scale in the context of complex systems? Why do we manage for scale? How does managing for scale advance us toward a meaningful outcome? How does scale compute in internal and external complex systems? What do we expect to see if we have managed for scale well? What does the future hold if we assume that we have optimized for scale and that this is the key objective function we have to pursue?

Building a Lean Financial Infrastructure!

A lean financial infrastructure presumes the ability of every element in the value chain to preserve and generate cash flow. That is the fundamental essence of the lean infrastructure that I espouse. So what are the key elements that constitute a lean financial infrastructure?

And given the elements, what are the key tweaks that one must continually make to ensure that the infrastructure does not fall into entropy and that the gains made do not fall flat or decay over time? Identifying the blocks, monitoring them, and making rapid changes go hand in hand.


The key elements, or building blocks, of a lean finance organization are as follows:

  1. Chart of Accounts: This is the critical unit that defines the starting point of the organization. It relays and groups all of the key economic activities of the organization into a larger body of elements like revenue, expenses, assets, liabilities, and equity. Granularity of these activities might lead to a fairly extensive chart of accounts and require more work to manage and monitor, thus requiring an incrementally larger investment of time and effort. However, the benefits of granularity far exceed the costs, because it forces management to look at every element of the business.
  2. The Operational Budget: Every year, organizations formulate the operational budget. That is generally a bottom-up rollup at a granular level that maps to the Chart of Accounts. It might follow a top-down directive around where the organization wants to land with respect to income, expense, balance sheet ratios, et al. Hence, there is almost always a process of iteration in this step to finally arrive at and lock down the budget. Be mindful, though, that there are feeders into the budget that might relate to customers, sales, operational metrics targets, etc., which are part of building a robust operational budget.
  3. The Deep Dive into Variances: As you progress through the year, as part of the monthly closing process, you inquire how actual performance is tracking against the budget. Since the budget has been done at a granular level and mapped exactly to the Chart of Accounts, it becomes easier to understand and delve into the variances. Be mindful that every element of the Chart of Accounts must be evaluated. The general inclination is to focus on the large items or large variances, while skipping the small expenses and smaller variances. That method, while efficient, might not be effective in the long run for building a lean finance organization. The rule, in my opinion, is that every account has to be looked at, and the question should be – why? If management has agreed on a number in the budget, why are the actuals trending differently? Could it have been the budget, and did we miss something critical in that process? Or has there been a change in the underlying economics of the business, or a change in activities, that might be leading to these “unexpected variances”? One has to take a scalpel to both favorable and unfavorable variances, since one can learn a lot about the underlying drivers; it might lead to managerially doing more of the better and less of the worse. Furthermore, this is also a great way to monitor leaks in the organization. Leaks are instances of cash dropping out of the system. Many little leaks can amount to a lot of cash in total. So do not disregard the leaks. Not only will that preserve cash, but once you understand the leaks better, the organization will step up in efficiency and effectiveness with respect to cash preservation and delivery of value. (A small sketch of this account-level variance scan follows this list.)
  4. Tweak the process: You will find that as you deep dive into the variances, you might want to tweak certain processes so that these variances are minimized. This would generally be true for adverse variances against the budget. Seek to understand why the variance occurred, and then understand all of the processes that run in the background to generate activity in the account. Once you fully understand the process, it becomes a matter of tweaking it to marginally or structurally change some key areas that might favorably resonate across the financials in the future.
  5. The Technology Play: Finally, evaluate the possibilities of using technology to surface issues early, automate repetitive processes, trigger alerts early on to mitigate issues later, and provide on-demand analytics. Use technology to free up time and to assist and enable more thinking around how to improve the internal handoffs that further economic value in the organization.
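
Below is a minimal, hypothetical sketch of the account-level variance scan described in step 3; the account names, amounts, and the 5% investigation threshold are invented for illustration.

```python
# Every line of the chart of accounts gets asked "why?", not just the big ones.
budget = {"revenue": 1_200_000, "payroll": -600_000, "cloud hosting": -90_000,
          "travel": -25_000, "bank fees": -4_000}
actuals = {"revenue": 1_150_000, "payroll": -610_000, "cloud hosting": -118_000,
           "travel": -15_000, "bank fees": -9_500}

for account in budget:
    variance = actuals[account] - budget[account]
    pct = variance / abs(budget[account])
    flag = "investigate" if abs(pct) > 0.05 else "ok"
    print(f"{account:14s} budget={budget[account]:>11,} actual={actuals[account]:>11,} "
          f"variance={variance:>9,} ({pct:+.1%})  {flag}")
# Small accounts with large percentage swings (bank fees here) are exactly the "leaks"
# described above -- and favorable variances (travel) get the same scrutiny.
```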

All of the above relate to managing the finance and accounting organization well within its own domain. However, there is a bigger step that comes into play once one has established the blocks and that relates to corporate strategy and linking it to the continual evolution of the financial infrastructure.

The essential question that the lean finance organization has to answer is: what can the organization do to address every element that preserves and enhances value to the customer, and how do we eliminate all non-value-added activities? This is largely a process question, but it forces one to understand the key processes and identify what percentage of each process is value-added to the customer versus non-value-added. This can be represented along a time or cost dimension. The goal is to yield as many value-added activities as possible, since the underlying presumption is that such activity will lead to the preservation of cash and also increase cash acquisition from the customer.

Debt Financing: Notable Elements to consider

We have discussed financing via convertible debt and equity financing. There is a third element that is equally important and ought to be in the arsenal for financing the working capital requirements of the company.

 


Here are some common term sheet lexicons that you have to be aware of when opening up a credit facility.

Formula-based Line of Credit: There are some variants of this, but the key driver is that the LOC is extended against eligible receivables. Generally, eligible receivables are defined as receivables that are within 90 days of the invoice date. There are some additional elements that can reduce the eligible base. The items that can be excluded are as follows:

Accounts outstanding for more than 90 days from invoice date

Credit balances over 90 days

Foreign AR. Some banks would specifically exclude foreign AR.

Intra-Company AR

Banks might impose a concentration limit. For example, any account that represents more than 30% of the AR that is outstanding may be excluded from the mix. Alternatively, credit may be extended up to the cap of 30% and no more.

Cross-Aging Limit of 35%, defined as those accounts where 35% or more of the account's receivables are past due (greater than 90 days). In such instances, the entire account is ineligible.

Pre-bills are not eligible. Services have to be rendered or goods shipped. That constitutes a true invoice.

In some instances, you may be precluded from including receivables from the government.

Non-Formula based LOC: Credit is extended not against AR but based on what you negotiate with the bank. The bank will generally provide a non-formula-based LOC based on historical cash flows and EBITDA and a board-approved budget. In some instances, if you expect to capitalize the company via an equity line in the near future, the bank may be inclined to raise the LOC.
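
Pulling the formula-based exclusions above together, here is a hypothetical sketch of how an eligible base and borrowing base might be computed. The invoice data, the 80% advance rate, and the choice to cap (rather than fully exclude) concentrated accounts are assumptions for illustration only, not any particular bank's terms.

```python
from datetime import date

TODAY = date(2024, 6, 30)
ADVANCE_RATE = 0.80          # assumed advance rate against eligible AR
CONCENTRATION_CAP = 0.30     # the 30% concentration limit described above

invoices = [  # (customer, amount, invoice_date, is_foreign, is_intra_company)
    ("Acme",    400_000, date(2024, 5, 15), False, False),
    ("Acme",    200_000, date(2024, 2, 1),  False, False),  # > 90 days old: excluded
    ("Globex",  150_000, date(2024, 6, 1),  True,  False),  # foreign AR: excluded
    ("Subco",   100_000, date(2024, 6, 10), False, True),   # intra-company AR: excluded
    ("Initech", 150_000, date(2024, 6, 20), False, False),
    ("Hooli",   120_000, date(2024, 6, 5),  False, False),
]

eligible = {}
for customer, amount, invoice_date, is_foreign, is_intra in invoices:
    if (TODAY - invoice_date).days > 90 or is_foreign or is_intra:
        continue
    eligible[customer] = eligible.get(customer, 0) + amount

total = sum(eligible.values())
capped = {c: min(a, CONCENTRATION_CAP * total) for c, a in eligible.items()}
borrowing_base = ADVANCE_RATE * sum(capped.values())
print(f"eligible AR: {total:,.0f}  after concentration cap: {sum(capped.values()):,.0f}  "
      f"available to draw: {borrowing_base:,.0f}")
```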

Interest Rate

In either of the above two cases, the interest rate charged is basically a prime reference rate plus some basis points. For example, the bank may spell out that the interest rate is the Prime Reference Rate + 1.25%. If the Prime Rate is 3.25%, then the cost to the company is 4.5%. Note, though, that if the company is profitable and the average tax rate is 40%, then the after-tax cost to the company is 4.5% × (1 − 40%) = 2.7%.

Maturity Period

For all facilities, there is a maturity period. In most instances, it is 24 months. Interest is paid monthly and the principal is due at maturity.

Facility Fees

Banks will charge a facility fee. Depending on the size of the facility, some amount could be due at close and some at the first anniversary of the date the facility contract was executed.

First Priority Rights

The bank will have a first-priority UCC-1 security interest in all assets of the borrower, such as present and future inventory, chattel paper, accounts, contract rights, unencumbered equipment, general intangibles (excluding intellectual property), and the right to proceeds from accounts receivable and inventory, and from the sale of intellectual property, to repay any outstanding bank debt.

The bank may insist on having rights to the IP. That becomes another negotiation point. You can negotiate a negative pledge, which effectively means that you will not pledge your IP to any third party.

Bank Covenants

The bank will also insist on some financial covenants. Some of the common covenants are:

  1. Adjusted Quick Ratio, which is (Cash held at the Bank + Eligible Receivables) / (Current Liabilities less Deferred Revenue)
  2. Trailing EBITDA requirement: could be a six-month or twelve-month trailing EBITDA requirement
  3. EBIT to Interest Coverage Ratio = EBIT / Interest Payments; the bank may require coverage of 1.5 or 2 (a short computation sketch follows this list)
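
A short sketch of checking these covenant formulas against hypothetical month-end figures; the thresholds in the comments are examples, not any particular bank's terms.

```python
# Hypothetical month-end figures for a covenant compliance check.
cash_at_bank = 900_000
eligible_receivables = 600_000
current_liabilities = 1_400_000
deferred_revenue = 500_000
ebit = 240_000
interest_payments = 90_000

adjusted_quick_ratio = (cash_at_bank + eligible_receivables) / (current_liabilities - deferred_revenue)
interest_coverage = ebit / interest_payments

print(f"Adjusted Quick Ratio: {adjusted_quick_ratio:.2f} (covenant might require >= 1.25)")
print(f"EBIT / Interest:      {interest_coverage:.2f} (covenant might require >= 1.5 or 2.0)")
```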

Monthly Financial Requirements

The bank will require monthly financial statements prepared according to GAAP, along with the Bank Compliance Certificate.

Bank may seek an Audit or an independent review of the Financial Statements within 90-180 days after each fiscal year ends.

You will have to provide monthly AR and AP aging reports and an inventory breakdown.

In the event that the budget or operating plan is reforecast and the reforecast has been approved by the Board, you will have to provide that information to the bank as well.


Bank Oversight and Audit

The bank will reserve the right to do a collateral audit for the formula-based line of credit. You will have to pay the audit fees. In general, you can negotiate and cap these fees and the frequency of such audits.

Most of the above applies to the large number of startups that do not carry inventory or acquire inventory from international suppliers.

Bankers Acceptance

BAs are frequently used in international trade because of advantages for both sides. Exporters often feel safer relying on payment from a reputable bank than from a business with which they have little, if any, history. Once the bank verifies, or “accepts”, a time draft, it becomes a primary obligation of that institution.

Here’s one typical example. You decide to purchase 100 widgets from Lee Ku, a Chinese exporter. After completing the trade agreement, you approach your bank for a letter of credit. This letter of credit makes your bank the intermediary responsible for completing the transaction.

Once Lee Ku, your supplier, ships the goods, the appropriate documents are sent – typically through the supplier's own bank – to your bank in the United States. The exporter now has a couple of choices. It could keep the acceptance until maturity, or it could sell it to a third party, perhaps to your bank, which is responsible for making the payment. In this case, Lee Ku receives an amount less than the face value of the draft, but it doesn't have to wait for the funds. The bank makes some fees and the supplier gets its money.

When a bank buys back the acceptance at a lower price, it is said to be “discounting” the acceptance. If your bank does this, it essentially has the same choices that your Chinese exporter had. It could hold the draft until it matures, which is akin to extending the importer a loan. More commonly, though, the bank will charge you a fee in advance, which is a percentage of the acceptance – anywhere from 2-4% of its value. In theory, you can get anywhere between 90 and 180 days of financing using a BA as an instrument to fund your inventory.
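
As a rough, hypothetical illustration of the economics, with a fee rate and tenor assumed within the 2-4% and 90-180 day ranges mentioned above:

```python
# Hypothetical banker's acceptance economics: a one-time fee buys a few months of financing.
face_value = 500_000      # value of the widget shipment
fee_rate = 0.03           # assumed 3% acceptance fee
tenor_days = 120          # assumed financing period

fee = face_value * fee_rate
annualized_cost = fee_rate * 365 / tenor_days
print(f"fee paid to the bank: {fee:,.0f}")
print(f"rough annualized financing cost: {annualized_cost:.1%}")
```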

Dangers of Debt Financing

Debt financing can be a cheap financing method. However, it carries potential risk. If you are not able to service the debt, the bank can, at the extreme, force you into bankruptcy. Alternatively, it can put you in forbearance and work out a plan to get back the principal amount. It can place the company into receivership and collect the money on your behalf. These are all draconian triggers that may happen, and hence it is important to maintain a good relationship with your banker. Most importantly, give them any bad news ahead of time. It is really bad when they learn of bad news late; it limits your ability to negotiate terms with the bank.

Manage Debt

In general, if you draw down against the LOC, it is always a good idea to pay it down as soon as possible. That ought to be your primary operational strategy. It will minimize interest expense, keep the line open, establish a better rapport with the bank and, most importantly, force you to become a more disciplined organization. You ought to regard bank financing as a bridge for your working capital requirements. To the extent that you can minimize the bridge by converting your receivables to cash, minimizing operating expenses, and maximizing your margin, you will be in a happier place. Debt financing also gives you time to build value in the organization rather than relying upon an equity line, which is a costly form of financing. Having said that, there will be times when your investors may push back on your debt financing strategy. In fact, if you have raised equity prior to debt, you may even have to get sign-off from the equity investors. Their big concern is that leverage takes away from the value of the company. That is not necessarily true, because corporate finance theory suggests that intelligent debt financing can, in fact, increase corporate value. However, the investors may see debt as your way of stalling further investment requirements and thus deferring their inclination toward owning more of your company at a lower value.


Term Sheets: Landmines and Ticking Time Bombs!

Wall Street is the only place that people ride to in a Rolls Royce to get advice from those who take the subway. – Warren Buffett


So the big day is here. You have evangelized your product across various circles, and the good news is that a VC has stepped forward to invest in your company. So the hard work is all done! You can rest on your laurels, sign the term sheet that the VC has pushed across the table, execute the stock purchase, voting, and investor rights agreements, get the wire, and you are up and running! Wait … sounds too good to be true, doesn’t it? And yes, you are right! If only things were that easy. The devil is in the details. So let us go over some of the details that you need to watch out for.

1. First, a term sheet does not trigger the wire. Signing a term sheet does not mean that the VC will invest in your company. The road is still long and treacherous. All the term sheet does is require you to keep silent on the negotiations, and it may even prevent you from shopping the deal to anyone else. The key investment terms are laid out in the sheet and will be used in much greater detail when the stock purchase agreement, the investor rights agreement, the voting agreement, and other documents are crafted.

2. Make sure that you have an attorney representing you – and, more importantly, an attorney who has experience in the field and has reviewed a lot of such documents. As noted, the devil is in the details. A little “and” or “or” can set you back significantly. But it is just as important for you to know some of the key elements that govern an investment agreement. You can quiz your attorney on these, because some of them are important enough to impact your operating degree of freedom in the company.

The starting point of a term sheet is the valuation of the company. You will hear the concept of pre-money valuation vs. post-money valuation. It is quite simple: Pre-Money Valuation + Investment = Post-Money Valuation. In other words, pre-money valuation refers to the value of a company not including external funding or the latest round of funding. Post-money thus includes the pre-money plus the incremental injection of capital. Let us look at an example:

Let’s explain the difference using an example. Suppose that an investor is looking to invest in a startup. Both parties agree that the company is worth $1 million and the investor will put in $250,000.

The ownership percentages will depend on whether this is a $1 million pre-money or post-money valuation. If the $1 million valuation is pre-money, the company is valued at $1 million before the investment and will be valued at $1.25 million after it. If the $1 million valuation takes into consideration the $250,000 investment, it is referred to as post-money. Thus, in a pre-money valuation, the investor owns 20%. Why? The total valuation is $1.25M, which is $1M pre-money + $250K capital, so the math translates to $250K / $1,250K = 20%. If the investor says that they will value the company at $1M post-money, what they are saying is that they are actually giving you a pre-money valuation of $750K. In other words, they will own 25% of the company rather than 20%. Your ownership goes down by 5 percentage points, which, for all intents and purposes, is significant.
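
The same arithmetic, as a small sketch:

```python
# Pre-money vs. post-money: the same headline valuation implies different ownership.
def ownership(valuation: float, investment: float, valuation_is_pre_money: bool) -> float:
    """Investor's ownership fraction after the round."""
    post_money = valuation + investment if valuation_is_pre_money else valuation
    return investment / post_money

print(f"$1M pre-money,  $250K in: investor owns {ownership(1_000_000, 250_000, True):.0%}")
print(f"$1M post-money, $250K in: investor owns {ownership(1_000_000, 250_000, False):.0%}")
# 20% vs. 25% -- five points of ownership hinge on which convention the term sheet uses.
```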

3. When a round of financing is done, securities are issued in exchange for the cash received. You already have common stock, but that is not the security being exchanged. The company issues preferred stock. Preferred stock comes with certain rights, preferences, privileges, and covenants; compared to common stock, it is a superior security. There are a number of important rights and privileges that investors secure via a preferred stock purchase, including a right to a board seat, information rights, a right to participate in future rounds to protect their ownership percentage (called a pro-rata right), a right to purchase any common stock that might come onto the market (called a right of first refusal), a right to participate alongside any common stock that might get sold (called a co-sale right), and an adjustment in the purchase price to reflect sales of stock at lower prices (called an anti-dilution right). Let us examine this in greater detail now. There are two types of preferred: the regular vanilla Convertible Preferred and the Participating Preferred. As the latter name suggests, Participating Preferred allows the VC to receive back their invested capital and the cumulative dividends, if any, before common stockholders (that is, you), but also enables them to participate on an as-converted basis in the returns to you, the common stockholder. Here is the math: let us say the company raises $3M at a $3M pre-money valuation. Following the valuation math above, the post-money is $6M and the stake is 50%-50% owner-investor.

Let us say the company sells for $25M. Now, the investor has either participating preferred or convertible preferred. How does the difference impact you, the stockholder or the founder? Here goes!

i.      Participating Preferred: The investor gets their $3M back. There is still $22M left in the coffers. The investor splits that 50-50 based on their participating preferred. You and the investor each take home $11M from the residual pool. The investor has $14M, and you have $11M. Congrats!

ii.      Convertible Preferred: The investor converts and gets 50%, or $12.5M, and you get the same – $12.5M. In other words, convertible preferred just got you a few more drinks at the bar. Hearty congratulations!

Bear in mind that if the exit value is lower, the difference becomes more meaningful. Let us say the exit was $10M. The participating preferred holder gets $3M + $3.5M = $6.5M, while you end up with $3.5M.
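
Here is a compact sketch of the payout math above, assuming a 1X preference and a 50% stake as in the example; the convertible-preferred branch simply takes the better of the preference or converting.

```python
# Participating vs. plain convertible preferred, for a $3M investment at a 50% stake.
def payouts(exit_value: float, invested: float = 3_000_000, stake: float = 0.5,
            participating: bool = True) -> tuple[float, float]:
    """Return (investor, founder) proceeds at exit."""
    if participating:
        residual = exit_value - invested          # preference comes off the top...
        investor = invested + stake * residual    # ...then the investor also shares the rest
    else:
        # Convertible preferred: take the better of the 1X preference or converting to common.
        investor = max(invested, stake * exit_value)
    return investor, exit_value - investor

for exit_value in (25_000_000, 10_000_000):
    for part in (True, False):
        inv, founder = payouts(exit_value, participating=part)
        label = "participating" if part else "convertible  "
        print(f"exit {exit_value/1e6:>4.0f}M  {label}: investor {inv/1e6:.1f}M, founder {founder/1e6:.1f}M")
```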

4. One of the key provisions is Liquidation Preferences. It can be a ticking time bomb – careful! Some investors may ask for a multiple of their investment as a preference. This provision provides downside protection to investors. In the event of liquidation, the company has to pay back the capital injected for the preferred; that is a 1X liquidation preference. However, you can have a 2X liquidation preference, which means the investor gets back twice as much as they injected. Most liquidation preferences range from 1X to 2X, although you can have higher multiples as well. Bear in mind, though, that this becomes important only when the company is forced to liquidate and sell off its assets. If all is gung-ho, this is a silent clause and no sweat off your brow.

5. Redemption rights. The right of redemption is the right to demand, under certain conditions, that the company buy back its own shares from its investors at a fixed price. This right may be included to require a company to buy back its shares if there has not been an exit within a pre-determined period. Failure to redeem shares when requested might result in the investors gaining improved rights, such as enhanced voting rights.

6. The terms could demand that a certain option pool – a pot of stock – be kept aside for existing and future employees, or other service providers. It could range anywhere between 10% and 20% of the total stock. When you reserve this pool, you are cutting into your ownership stake. In instances where you have a series of financings and each financing requires you to set aside a small pool, it dilutes you and your previous investors. In general, these pools are structured to give you headroom for at least 24 months to accommodate employee growth and provide incentives. The pool only becomes smaller with the passage of time.

7. Another term is the Anti-Dilution Provision. In its simplest form, anti-dilution rights are a zero-sum game: no one has an advantage over the other. However, this becomes important only when there is a down round. A down round basically means that the company is valued lower in a subsequent financing than previously. For a company valued at $25M in Series A and $15M in Series B, the Series B would be considered a down round. There are two types of anti-dilution:

Full Ratchet Anti-Dilution: If the new stock is priced lower than the prior stock, the early investor has a clause to convert their shares at the new price. For example, if a prior investor paid $1.00 and the price was reset in a later round to $0.50, then the prior investors' conversion price resets to $0.50, entitling them to twice as many shares of common stock. In other words, you are hit with major dilution, as are the later investors. This clause is a big hurdle for new investors.

Weighted Average Anti-Dilution: The old investor's conversion price is adjusted in proportion to the dilutive impact of the down round.
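
The sketch below contrasts the two flavors using the standard broad-based weighted-average formula; the share counts and round size are hypothetical.

```python
# Full ratchet vs. broad-based weighted-average anti-dilution for a $1.00 -> $0.50 down round.
def full_ratchet(old_price: float, new_price: float) -> float:
    """Conversion price simply resets to the new, lower round price."""
    return new_price

def weighted_average(old_price: float, new_price: float,
                     shares_outstanding: float, new_money: float) -> float:
    # NCP = OCP * (A + B) / (A + C)
    # A = shares outstanding pre-round, B = new money / old price, C = new money / new price
    a = shares_outstanding
    b = new_money / old_price
    c = new_money / new_price
    return old_price * (a + b) / (a + c)

old_price, new_price = 1.00, 0.50
print(f"full ratchet conversion price:     {full_ratchet(old_price, new_price):.2f}")
print(f"weighted average conversion price: "
      f"{weighted_average(old_price, new_price, 10_000_000, 2_000_000):.2f}")
# The ratchet resets the early investor's price all the way to $0.50; the weighted
# average lands in between, in proportion to how dilutive the down round actually is.
```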

8. Pay to Play. These are clauses that work in your – the company's – favor. Basically, investors have to invest some money in later financings, and if they do not, their rights may be reduced. Having these clauses may put your mind at ease, but it may create problems in terms of syndicating or getting investments. Some investors are reluctant to put their money in when there are pay-to-play clauses in the agreement.

9. Right of First Refusal. A company has no obligation to sell stock in future financing rounds to existing investors. Some investors would like to participate and may seek pro-rata participation to keep their ownership stake the same post-financing. Some investors may even want super pro-rata rights, which means they would be allowed to participate to such an extent that their new ownership in the company is greater than their previous ownership stake.

10. Board of Directors. A large board creates complexity; it is preferable to have a small but strategic board. New investors will require some representation. If too many investors request representation, the company may end up with fewer internal representatives and may be outvoted on certain issues. Be aware of the dynamics of a mushrooming board!

11. Voting Rights. Investors may request certain veto authority or have rights to vote in favor of or against a corporate initiative. Company founders may want super-voting rights to exercise greater control. These matters are delicate, and going one way or the other may cause personal issues among the participants. However, they can usually be resolved by having carve-outs that spell out rights and encumbrances.

12. Drag-Along Provision. This might create an obligation on all shareholders of the company to sell their shares to a potential purchaser if a certain percentage of the shareholders (or of a specific class of shareholders) votes to sell to that purchaser. Often, in early rounds, drag-along rights can only be enforced with the consent of those holding at least a majority of the shares held by investors. These rights can be useful in the context of a sale where potential purchasers will want to acquire 100% of the shares of the company in order to avoid having responsibilities to minority shareholders after the acquisition. Many jurisdictions provide for such a process, usually when a third party has acquired at least 90% of the shares.

13. Representations and Warranties. Venture capital investors expect appropriate representations and warranties to be provided by key founders, management and the company. The primary purpose of the representations and warranties is to provide the investors with a complete and accurate understanding of the current condition of the company and its past history so that the investors can evaluate the risks of investing in the company prior to subscribing for their shares. The representations and warranties will typically cover areas such as the legal existence of the company (including all share capital details), the company’s financial statements, the business plan, asset ownership (in particular intellectual property rights), liabilities (contingent or otherwise), material contracts, employees and litigation. It is very rare that a company is in a perfect state. The warrantors have the opportunity to set out issues which ought to be brought to the attention of the new investors through a disclosure letter or schedule of exceptions. This is usually provided by the warrantors and discloses detailed information concerning any exceptions to or carve-outs from the representations and warranties. If a matter is referred to in the disclosure letter the investors are deemed to have notice of it and will not be able to claim for breach of warranty in respect of that matter. Investors expect those providing representations and warranties about the company to reimburse the investors for the diminution in share value attributable to the representations and warranties being inaccurate or if there are exceptions to them that have not been fully disclosed. There are usually limits to the exposure of the warrantors (i.e. a dollar cap on the amount that can be recovered from individual warrantors). These are matters for negotiation when documentation is being finalized. The limits may vary according to the severity of the breach, the size of the investment and the financial resources of the warrantors. The limits which typically apply to founders are lower than for the company itself (where the company limit will typically be the sum invested or that sum plus a minimum return).

14. Information Rights. In order for venture capital investors to monitor the condition of their investment, it is essential that the company provides them with certain regular updates concerning its financial condition and budgets, as well as a general right to visit the company and examine its books and records. This sometimes includes direct access to the company’s auditors and bankers. These contractually defined obligations typically include timely transmittal of annual financial statements (including audit requirements, if applicable), annual budgets, and audited monthly and quarterly financial statements.

15. Exit. Venture capital investors want to see a path from their investment in the company leading to an exit, most often in the form of a disposal of their shares following an IPO or by participating in a sale. Sometimes the threshold for a liquidity event will be a qualified exit. If used, it will mean that a liquidity event will only occur, and conversion of preferred shares will only be compulsory, if an IPO falls within the definition of a qualified exit. A qualified exit is usually defined as a sale or IPO on a recognized investment exchange which, in either case, is of a certain value to ensure the investors get a minimum return on their investment. Consequently, investors usually require undertakings from the company and other shareholders that they will endeavor to achieve an appropriate share listing or trade sale within a limited period of time (typically anywhere between 3 and 7 years depending on the stage of investment and the maturity of the company). If such an exit is not achieved, investors often build in structures which will allow them to withdraw some or all of their investment.

16. Non-Compete, Confidentiality Agreements. It is good practice for any company to have certain types of agreements in place with its employees. For technology start-ups, these generally include Confidentiality Agreements (to protect against loss of company trade secrets, know-how, customer lists, and other potentially sensitive information), Intellectual Property Assignment Agreements (to ensure that intellectual property developed by academic institutions or by employees before they were employed by the company will belong to the company) and Employment Contracts or Consultancy Agreements (which will include provisions to ensure that all intellectual property developed by a company’s employees belongs to the company). Where the company is a spin-out from an academic institution, the founders will frequently be consultants of the company and continue to be employees of the academic institution, at least until the company is more established. Investors also seek to have key founders and managers enter into Non-compete Agreements with the company. In most cases, the investment in the company is based largely on the value of the technology and the management experience of the management team and founders. If they were to leave the company to create or work for a competitor, this could significantly affect the company’s value. Investors normally require that these agreements be included in the Investment Agreement as well as in the Employment/Consultancy Agreements with the founders and senior managers, to enable them to have a right of direct action against the founders and managers if the restrictions are breached.


Aaron Swartz took down a piece of the Berlin Wall! We have to take it all down!

“The world’s entire scientific … heritage … is increasingly being digitized and locked up by a handful of private corporations… The Open Access Movement has fought valiantly to ensure that scientists do not sign their copyrights away but instead ensure their work is published on the Internet, under terms that allow anyone to access it.”  – Aaron Swartz

Information, in the context of scholarly articles by researchers at universities and think-tanks, is not a zero-sum game. In other words, one person having more does not mean that someone else has less. When you start creating “Berlin Walls” in the information arena within the halls of learning, then learning itself is compromised. In fact, contributing or granting the intellectual estate into the creative commons serves a higher purpose in society – access to information and, hence, a feedback mechanism that ultimately enhances the value of the end product itself. How? Because the product has now been distributed across a broader and more diverse audience, and it is open to further critical analysis.


The universities have built a racket. They have deployed a Chinese wall between learning in a cloistered environment and the world of those who are not immediate participants. The Guardian wrote an interesting article on this matter, and a very apt quote puts it all together:

“Academics not only provide the raw material, but also do the graft of the editing. What’s more, they typically do so without extra pay or even recognition – thanks to blind peer review. The publishers then bill the universities, to the tune of 10% of their block grants, for the privilege of accessing the fruits of their researchers’ toil. The individual academic is denied any hope of reaching an audience beyond university walls, and can even be barred from looking over their own published paper if their university does not stump up for the particular subscription in question.

This extraordinary racket is, at root, about the bewitching power of high-brow brands. Journals that published great research in the past are assumed to publish it still, and – to an extent – this expectation fulfils itself. To climb the career ladder academics must get into big-name publications, where their work will get cited more and be deemed to have more value in the philistine research evaluations which determine the flow of public funds. Thus they keep submitting to these pricey but mightily glorified magazines, and the system rolls on.”

http://www.guardian.co.uk/commentisfree/2012/apr/11/academic-journals-access-wellcome-trust

JSTOR is a not-for-profit organization that has invested heavily in providing an online system for archiving, accessing, and searching digitized copies of over 1,000 academic journals. More recently, I noticed some effort on their part to allow public access to only 3 articles over a period of 21 days. This stinks! This policy reflects an intellectual snobbery beyond Himalayan proportions. The only folks that have access to these academic journals and studies are professors and researchers affiliated with a university, and university libraries. Aaron Swartz noted the injustice of hoarding such knowledge and tried to distribute a significant proportion of JSTOR’s archive through one or more file-sharing sites. And what happened thereafter was perhaps one of the biggest misapplications of justice. The same justice that disallows asymmetry of information on Wall Street is being deployed to preserve the asymmetry of information in the halls of learning.

MSNBC contributor Chris Hayes criticized the prosecutors, saying “at the time of his death Aaron was being prosecuted by the federal government and threatened with up to 35 years in prison and $1 million in fines for the crime of—and I’m not exaggerating here—downloading too many free articles from the online database of scholarly work JSTOR.”

The Associated Press reported that Swartz’s case “highlights society’s uncertain, evolving view of how to treat people who break into computer systems and share data not to enrich themselves, but to make it available to others.”

Chris Soghoian, a technologist and policy analyst with the ACLU, said, “Existing laws don’t recognize the distinction between two types of computer crimes: malicious crimes committed for profit, such as the large-scale theft of bank data or corporate secrets; and cases where hackers break into systems to prove their skillfulness or spread information that they think should be available to the public.”

 

Kelly Caine, a professor at Clemson University who studies people’s attitudes toward technology and privacy, said Swartz “was doing this not to hurt anybody, not for personal gain, but because he believed that information should be free and open, and he felt it would help a lot of people.”

And then there were some modest reservations, and Swartz’s actions were attributed to reckless judgment. I contend that this does injustice to someone of Swartz’s commitment and intellect … the recklessness was his inability to grasp the notion that an imbecile in the system would pursue 35 years of imprisonment and a $1M fine … it was not that he was unaware of what he was doing, but he believed, as do many, that scholarly academic research should be freely available to all.

We have a Berlin wall that needs to be taken down. Swartz started that work, but he was unable to keep at it. It is important not to rest in this endeavor; everyone ought to actively petition their local congressman to push bills that will allow open access to these academic articles.

John Maynard Keynes had warned of the folly of “shutting off the sun and the stars because they do not pay a dividend”, because what is at stake here is the reach of the light of learning. Aaron was at the vanguard of that movement, and we should persevere to become those points of light that will push JSTOR to disseminate the information that it guards so jealously.

 

Transparency in organizations

“We chose steel and extra wide panels of glass, which is almost like crystal. These are honest materials that create the right sense of strength and clarity between old and new, as well as a sense of transparency in the center of the institution that opens the campus up to the street.”

Renzo Piano

What is Transparency in the context of the organization?

It is the deliberate attempt by management to architect an organization that encourages open access to information, participation, and decision making, which ultimately creates a higher level of trust among the stakeholders.

The demand for transparency is becoming quite common. The users of goods and services are provoking the transparency question:

  1. Shareholder demand for increased financial accountability in the corporate world,
  2. Increased media diligence
  3. Increased regulatory diligence and requirements
  4. Increased demand by social interest and environmental groups
  5. Demands to see and check on compliance based on internal and external policies
  6. Increased employees’ interest in understanding how senior management decisions impact them, the organization and society

There are two big categories that organizations must consider and subsequently address while putting systems in place to promote transparency.

  1. External Transparency
  2. Internal Transparency

 

External Transparency:

Some of the key elements are that organizations have to make information accessible while taking into account the risk of divulging too much of it, make the information actionable, enable sharing and collaboration, manage risks, and establish protocols and channels of communication that are open and democratic.

For example, employees ought to be able to trace the integrity, quality, consistency, and validity of information back to its creator. In an open environment, transparency also unravels the landscape of risks that an organization may be deliberately taking or may be carrying unknowingly. It bubbles up inappropriate decisions that can be dwelt on collectively by management and employees, and thus risks and inappropriateness are considerably mitigated. The other benefit is that it exposes overlap, wherein people spread across the organization may be doing the same thing in a similar manner. It affords a better shared-services platform and also builds a knowledge base and domain expertise that employees can tap into.

 

 Internal Transparency:

The organization has to create the structure to encourage people to be transparent. Generally, people come to work with a mask on. What does that mean? Generally, employees focus on the job at hand, but they may be interested in adding value in other ways besides their primary responsibility. In fact, they may want to approach their primary responsibility in an ingenious manner that would help the organization. But the mask, or the veil, that they don separates their personal interests and passions from the obligations that the job demands. Now how cool would it be if the organization set up a remarkably safe system wherein the distinction between an employee’s personal interests and primary obligations materially dissolves? What I bet you would discover is higher levels of employee engagement. In addressing internal transparency, what the organization would have done is to have successfully mined and surfaced the personal interests of an employee and laid them out among all participants in a manner that benefits the organization, the employee, and their peers.

Thus, it is important to address both internal and external transparency. However, implementing a transparency ethos is not immune to challenges: increased transparency may distort intent, slow processes, increase organizational vulnerabilities, create psychological dissonance among employees or groups, create new factions, and sometimes even result in poor decisions. Despite the challenges, the aggregate benefit of increased transparency over time would outweigh the costs. In the end, if the organization continues to formalize transparency, it would also simultaneously create and encourage trust and the proper norms and mores that lay the groundwork for an effective workforce.

Reputation is often an organization’s most valuable asset. It is built over time through a focused commitment and response to members’ wants, needs, and expectations. A commitment to transparency will increasingly become a litmus test used to define an association’s reputation and will be used as a value judgment for participation. By gaining a reputation for value through the disclosure of information, extensive communications with stakeholders, and a solid track record of truth and high disclosure of information, associations will win the respect and involvement of current and future members.

Kanter and Fine offer a great analogy: transparency is like an ocean sponge. These pore-bearing organisms let up to twenty thousand times their volume in water pass through them every day. The sponges can withstand an open, constant flow without inhibiting it because they are anchored to the ocean floor. Transparent organizations behave like these sponges: anchored to their mission and still allowing people in and out easily. Transparent organizations actually benefit from the constant flow of people and information.

 

Plans to implement transparency

Businesses are fighting for trust from their intended audiences. Shel Holtz and John Havens, authors of “Tactical Transparency,” state that the realities associated with doing business in today’s “business environment have emerged as the result of recent trends: Declining trust in business as usual and the increased public scrutiny under which companies find themselves thanks to the evolution of social media.” It is important, now more than ever, for organizations to use tools successfully to be sincerely but prudently transparent in ways that matter to their stakeholders.

“Tactical Transparency” adopted the following definition for transparency:

Transparency is the degree to which an organization shares the following with its stakeholder publics:

▪   Its leaders: The leaders of transparent companies are accessible and are straightforward when talking with members of key audiences.

▪   Its employees: Employees of transparent companies are accessible, can reinforce the public view of the company, and are able to help people where appropriate.

▪   Its values: Ethical behavior, fair treatment, and other values are on full display in transparent companies.

▪   Its culture: How a company does things is more important today than what it does. The way things are done is not a secret in transparent companies.

▪   The results of its business practices, both good and bad: Successes, failures, problems, and victories all are communicated by transparent companies.

▪   Its business strategy: Of particular importance to the investment community but also of interest to several other audiences, a company’s strategy is a key basis for investment decisions. Misalignment of a company’s strategy and investors’ expectations usually results in disaster.

According to J.D. Lasica, cofounder of Ourmedia.org and the Social Media Group, there are three levels of transparency that an organization should consider when trying to achieve tactical transparency.

▪   Operational Transparency: That involves creating or following an ethics code, conflict-of-interest policies, and any other guidelines your organization creates.

▪   Transactional Transparency: This type of strategy provides guidelines and boundaries for employees so they can participate in the conversation in and out of the office. Can they have a personal blog that discusses work-related issues?

▪   Lifestyle Transparency: This is personalized information coming from sites like Facebook and Twitter. These channels require constant transparency and authenticity.

 

Create an Action Plan around policies and circumstances to promote transparency:

Holtz and Havens outline specific situations where tactical transparency can transform a business, some of which are outlined in this list.

▪   Major Crises

▪   Major change initiatives

▪   Product changes

▪   New regulations that will impact business

▪   Financial matters

▪   Media interaction

▪   Employee interaction with the outside world

▪   Corporate Governance

▪   Whistleblower programs

▪   Monitoring corporate reputation internally and externally

▪   Accessibility of management

 

The Big Data Movement: Importance and Relevance today?

We are entering a new age wherein we are interested in developing a finer understanding of the relationships between businesses and customers, organizations and employees, products and how they are being used, and how different aspects of the business and the organization connect to produce meaningful and actionable information. We are seeing a lot of data, and the old tools for managing, processing, and gathering insights from data (spreadsheets, SQL databases, etc.) do not scale to current needs. Thus, Big Data is becoming a framework for how to process, store, and cope with the reams of data being collected.

According to IDC, it is imperative that organizations and IT leaders focus on the ever-increasing volume, variety and velocity of information that forms big data.

  • Volume. Many factors contribute to the increase in data volume – transaction-based data stored through the years, text data constantly streaming in from social media, increasing amounts of sensor data being collected, etc. In the past, excessive data volume created a storage issue. But with today’s decreasing storage costs, other issues emerge, including how to determine relevance amidst the large volumes of data and how to create value from data that is relevant.
  • Variety. Data today comes in all types of formats – from traditional databases to hierarchical data stores created by end users and OLAP systems, to text documents, email, meter-collected data, video, audio, stock ticker data and financial transactions. By some estimates, 80 percent of an organization’s data is not numeric! But it still must be included in analyses and decision making.
  • Velocity. According to Gartner, velocity “means both how fast data is being produced and how fast the data must be processed to meet demand.” RFID tags and smart metering are driving an increasing need to deal with torrents of data in near-real time. Reacting quickly enough to deal with velocity is a challenge to most organizations.

SAS has added two additional dimensions:

  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent with periodic peaks. Is something big trending in the social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage – especially with social media involved.
  • Complexity. When you deal with huge volumes of data, it comes from multiple sources. It is quite an undertaking to link, match, cleanse and transform data across systems. However, it is necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control (a minimal sketch of this cleansing-and-linking work follows this list). Data governance can help you determine how disparate data relates to common definitions and how to systematically integrate structured and unstructured data assets to produce high-quality information that is useful, appropriate and up-to-date.
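
As a small, purely illustrative sketch of that linking and cleansing work, consider the following Python snippet using pandas. The systems, column names, records, and matching rule are all hypothetical; real data governance involves far more than this, but the mechanics of normalizing a key, de-duplicating, and joining across sources are the same in miniature.

    import pandas as pd

    # Hypothetical customer records arriving from two different systems.
    crm = pd.DataFrame({
        "email": [" Alice@Example.com", "bob@example.com", "bob@example.com"],
        "segment": ["enterprise", "smb", "smb"],
    })
    billing = pd.DataFrame({
        "email": ["alice@example.com", "bob@example.com"],
        "lifetime_value": [12000, 340],
    })

    # Cleanse: normalize the matching key and drop exact duplicates.
    crm["email"] = crm["email"].str.strip().str.lower()
    crm = crm.drop_duplicates(subset=["email"])

    # Link: join the two sources on the common definition of a customer.
    unified = crm.merge(billing, on="email", how="left")
    print(unified)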

 

So to reiterate, Big Data is a framework stemming from the realization that data has gathered significant pace and that its growth has exceeded the capacity of an organization to handle, store, and analyze it in a manner that offers meaningful insights into the relationships between data points. I am calling this a framework, unlike other materials that call Big Data a consequence of the inability of organizations to handle mass amounts of data. I refer to Big Data as a framework because it sets the parameters around an organization’s decision as to when and which tools must be deployed to address data scalability issues.

Thus, to put the appropriate parameters around when an organization must consider Big Data as part of its analytics roadmap in order to better understand the patterns in its data, it has to answer the following ten questions:

  1. What are the different types of data that should be gathered?
  2. What are the mechanisms that have to be deployed to gather the relevant data?
  3. How should the data be processed, transformed and stored?
  4. How do we ensure that there is no single point of failure in data storage and data loss that may compromise data integrity?
  5. What are the models that have to be used to analyze the data?
  6. How are the findings of the data to be distributed to relevant parties?
  7. How do we assure the security of the data that will be distributed?
  8. What mechanisms do we create to implement feedback against the data to preserve data integrity?
  9. How do we morph the big data model into new forms that account for new patterns and reflect what is meaningful and actionable?
  10. How do we create a learning path for the big data model framework?

Some of the existing literature has commingled the Big Data framework with analytics. In fact, the literature has gone on to make a rather assertive statement, i.e., that Big Data and predictive analytics should be looked upon in the same vein. Nothing could be further from the truth!

There are several tools available in the market to do predictive analytics against sets of data that may not qualify for the Big Data framework. While I was the CFO at Atari, we deployed business intelligence tools using MicroStrategy, and MicroStrategy had predictive modules. In my recent past, we explored SAS and Minitab tools to do predictive analytics. In fact, even Excel can do multivariate analysis, ANOVA, regression, and best-fit curve analysis. These analytical techniques have been part of the analytics arsenal for a long time. Different data sizes may need different tools to instantiate relevant predictive analysis. This is a very important point, because companies that do not have Big Data ought to seriously reconsider their strategy of what tools and frameworks to use to gather insights. I have known companies that have gone the Big Data route even though all data points (excuse my pun), even after incorporating capacity and forecasts, suggest that alternative tools are more cost-effective than implementing Big Data solutions. Big Data is not a one-size-fits-all model. It is an expensive implementation. However, for the right data size, which in this case would be a very large data size, a Big Data implementation would be extremely beneficial and cost-effective in terms of the total cost of ownership.
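
To make that concrete, here is a minimal sketch of predictive analysis on small data that needs nothing more than a laptop. The quarterly figures are invented purely for illustration; the point is simply that an ordinary least-squares trend line, the kind Excel or Minitab produces, is often all the “prediction” a modest dataset warrants.

    import numpy as np

    # Hypothetical quarterly sales (units) -- small enough for a spreadsheet.
    quarters = np.arange(1, 9)                    # Q1 through Q8
    sales = np.array([110, 118, 131, 140, 152, 160, 171, 183])

    # Ordinary least-squares fit of a straight trend: sales ~ slope*quarter + intercept.
    slope, intercept = np.polyfit(quarters, sales, deg=1)

    # Use the fitted trend to forecast the next quarter.
    forecast_q9 = slope * 9 + intercept
    print(f"trend: {slope:.1f} units/quarter, Q9 forecast ~ {forecast_q9:.0f}")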

Areas where Big Data Framework can be applied!

Some areas lend themselves to the application of the Big Data Framework.  I have identified broadly four key areas:

  1. Marketing and Sales: Consumer behavior, marketing campaigns, sales pipelines, conversions, marketing funnels and drop-offs, distribution channels are all areas where Big Data can be applied to gather deeper insights.
  2. Human Resources: Employee engagement, employee hiring, employee retention, organization knowledge base, impact of cross-functional training, reviews, compensation plans are elements that Big Data can surface. After all, generally over 60% of company resources are invested in HR.
  3. Production and Operational Environments: Data growth, different types of data appended as the business learns about the consumer, concurrent usage patterns, traffic, web analytics are prime examples.
  4. Financial Planning and Business Operational Analytics:  Predictive analytics around bottom-up sales, marketing campaign ROI, customer acquisition costs, earned media and paid media, margins by SKU and distribution channel, operational expenses, portfolio evaluation, risk analysis, etc., are some of the examples in this category.

Hadoop: A Small Note!

Hadoop is becoming a more widely accepted tool for addressing Big Data needs. It was inspired by systems Google built and published papers on (MapReduce and the Google File System) so that Google could index the structured and text information it was collecting and present meaningful, actionable results to users quickly. The open-source Hadoop project was created by Doug Cutting and Mike Cafarella and was further developed at Yahoo, which adapted Hadoop for enterprise applications.

Hadoop runs on a large number of machines that don’t share memory or disks, with the Hadoop software running on each of them. Thus, if you have, for example, over 10 gigabytes of data, you take that data and spread it across different machines, and Hadoop tracks where all of it resides. The individual servers or machines are called nodes, and the set of machines working together, over which the data is distributed, is called a cluster. Each node operates on its own little piece of the data, and once the data is processed, the results are combined and delivered back to the client as a unified whole. The method of mapping work out to the nodes and then reducing the disparate partial results into one unified answer is MapReduce, an important mechanism of Hadoop. You will also hear about Hive, which is a data warehouse layer that sits on top of Hadoop: it lets you query the structured or semi-structured data stored in the cluster, with the queries executed underneath as MapReduce jobs while Hadoop provides the redundancy across the cluster.
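
To ground the MapReduce idea, here is a minimal single-process sketch in Python of the map, shuffle, and reduce phases for a word count. Hadoop performs these same steps, but it distributes the map and reduce work across the nodes of the cluster and handles the shuffling, storage, and fault tolerance; the two tiny documents below are, of course, just stand-ins.

    from collections import defaultdict

    documents = ["big data is not small data", "data about data is metadata"]

    # Map: each document independently emits (key, value) pairs -- here (word, 1).
    def map_phase(doc):
        return [(word, 1) for word in doc.split()]

    # Shuffle: group every emitted value under its key.
    grouped = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)

    # Reduce: collapse each key's list of values into a single result.
    word_counts = {word: sum(counts) for word, counts in grouped.items()}
    print(word_counts)  # {'big': 1, 'data': 4, 'is': 2, ...}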

Personally, I have always been interested in Business Intelligence. I have always considered BI, in this new age, a handy stepping stone toward truly understanding a business and developing financial and operational models that track fairly closely the trending insights that the data generates. So my ear is always to the ground as I follow the developments in this area … and though I have not implemented a Big Data solution, I have always been and will continue to be interested in seeing its applications in certain contexts and against the various use cases in organizations.

 

MECE Framework, Analysis, Synthesis and Organization Architecture toward Problem-Solving

MECE is a thought tool that has been used systematically at McKinsey. It stands for Mutually Exclusive, Collectively Exhaustive. We will go into both of these components in detail and then relate them to the dynamics of an organizational mindset. The presumption in this note is that the organizational mindset has been ingrained over time or is being driven by the leadership. We are looking at MECE since it represents a tool used by the most blue-chip consulting firm in the world. And while doing that, we will, by the end of the article, arrive at the conclusion that this framework alone will not be the panacea for every investigative methodology used to assess a problem; rather, it has to be reconciled with the active knowledge that most things do not fall neatly into the MECE framework, and thus an additional systems framework is needed to amplify our understanding for problem solving while leaving room for chance.

So to apply the MECE technique, first you define the problem that you are solving for. Once you are past the definition phase, well – you are now ready to apply the MECE framework.

MECE is a framework used to organize information which is:

  1. Mutually exclusive: Information should be grouped into categories so that each category is separate and distinct without any overlap; and
  2. Collectively exhaustive: All of the categories taken together should deal with all possible options without leaving any gaps.

In other words, once you have defined a problem, you figure out the broad categories that relate to the problem and then brainstorm through ALL of the options associated with each category. So think of it as a mental construct that you move across a horizontal line of distinct, well-defined shades representing categories, and each of those shaded partitions has a vertical construct holding all of the options that exhaustively explain that shade. Once you have gone through that exercise, which is no mean feat, you will be looking at an artifact that addresses the problem. And after you have done that, you individually look at every set of options and its relationship to its distinctive category … and hopefully you are well on your path to coming up with relevant solutions.
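
One hedged way to see the mechanics is to treat the categories as sets of options and check the two MECE conditions directly; the problem, categories, and options in this little Python sketch are entirely hypothetical.

    from itertools import combinations

    # Hypothetical problem: "why did support tickets spike?" -- candidate causes.
    all_options = {"pricing change", "new release bug", "seasonal demand",
                   "outage", "onboarding confusion"}

    categories = {
        "product":    {"new release bug", "outage"},
        "commercial": {"pricing change", "seasonal demand"},
        "customer":   {"onboarding confusion"},
    }

    # Mutually exclusive: no option appears in more than one category.
    mutually_exclusive = all(
        a.isdisjoint(b) for a, b in combinations(categories.values(), 2)
    )

    # Collectively exhaustive: taken together, the categories cover every option.
    collectively_exhaustive = set().union(*categories.values()) == all_options

    print("MECE" if mutually_exclusive and collectively_exhaustive else "not MECE")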

Now some may argue that my understanding of MECE is very simplistic. In fact, it may very well be. But I can assure you that it captures the essence of a very widely used framework in consulting organizations. And this framework has been imported into large organizations and has cascaded down to organizations of every scale ever since.

Here is a link that would give you a deeper understanding of the MECE framework:

http://firmsconsulting.com/2010/09/22/a-complete-mckinsey-style-mece-decision-tree/

Now we are going to dig a little deeper. Allow me to digress and take you down a path less travelled. We will circle back to MECE and organizational leadership in a few moments. One of the memorable quotes that has left a lasting impression on me is by the great Nobel Prize-winning physicist Richard Feynman.

“I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist takes this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is … I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower! It only adds. I don’t understand how it subtracts.”

The above quote by Feynman lays the groundwork for understanding two different approaches: the artist approaches the observation of the flower from the synthetic standpoint, whereas Feynman approaches it from an analytic standpoint. The two views are not antithetical to one another: in fact, you need both to gather a holistic view and arrive at a conclusion; the sum is greater than the parts. Feynman does not dismiss the essence of beauty that the artist puts forth; he looks at the beauty of how the components and their mechanics interact and how that adds to our understanding of the flower. This is very important because the following discussion will explore another concept to drive home this difference between analysis and synthesis.

There are two possible ways of gaining knowledge. Either we can proceed from the construction of the flower (the Feynman method), and then seek to determine the laws of the mutual interaction of its parts as well as its response to external stimuli; or we can begin with what the flower accomplishes and then attempt to account for this. By the first route we infer effects from given causes, whereas by the second route we seek causes of given effects. We can call the first route synthetic, and the second analytic.

 

We can easily see how the cause-effect relationship translates into a relationship between the analytic and synthetic foundations.

 

A system’s internal processes — i.e. the interactions between its parts — are regarded as the cause of what the system, as a unit, performs. What the system performs is thus the effect. From these very relationships we can immediately recognize the requirements for the application of the analytic and synthetic methods.

 

The synthetic approach — i.e. to infer effects on the basis of given causes — is therefore appropriate when the laws and principles governing a system’s internal processes are known, but when we lack a detailed picture of how the system behaves as a whole.

Another example … we do not have a very good understanding of the long-term dynamics of galactic systems, nor even of our own solar system. This is because we cannot observe these objects for the thousands or even millions of years which would be needed in order to map their overall behavior.

 

However, we do know something about the principles which govern these dynamics, i.e., the gravitational interactions between the stars and planets respectively. We can therefore apply a synthetic procedure in order to simulate the gross dynamics of these objects. In practice, this is done with computer models which calculate the interaction of the system’s parts over long, simulated time periods.
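
A toy version of that synthetic procedure: since we know the governing principle (Newtonian gravity), we can step a two-body system forward in time and let the overall behavior, the orbit, emerge from the simulation. The masses, positions, and time step below are arbitrary illustrative numbers in toy units.

    import numpy as np

    G = 1.0                                     # gravitational constant (toy units)
    mass = np.array([1000.0, 1.0])              # a heavy "star" and a light "planet"
    pos = np.array([[0.0, 0.0], [10.0, 0.0]])   # starting positions
    vel = np.array([[0.0, 0.0], [0.0, 10.0]])   # the planet gets a sideways push
    dt = 0.001

    # Synthetic route: apply the known law of interaction step by step
    # and let the system-level behavior fall out of the simulation.
    for _ in range(20000):
        r = pos[1] - pos[0]                     # vector from star to planet
        dist = np.linalg.norm(r)
        pull = G * mass[0] * mass[1] * r / dist**3
        vel[1] -= pull / mass[1] * dt           # planet accelerates toward the star
        vel[0] += pull / mass[0] * dt           # the star barely moves
        pos += vel * dt

    print("planet position after the run:", pos[1])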

The analytical approach, drawing conclusions about causes on the basis of effects, is appropriate when a system’s overall behavior is known but we do not have clear or certain knowledge of the system’s internal processes or the principles governing them. On the other hand, there are a great many systems for which we neither have a clear and certain conception of how they behave as a whole, nor fully understand the principles at work which cause that behavior. Organizational behavior is one such example, since it introduces the fickle spirits of the employees who, in the aggregate, create a distinct character in the organization.
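
And a toy version of the analytic route: here we pretend we can only observe the overall behavior, a quantity decaying over time, and we work backwards from the effect to the governing parameter. The observations are fabricated for illustration, with a pinch of noise added.

    import numpy as np

    rng = np.random.default_rng(0)

    # Observed effect: a quantity measured at successive times, decaying away.
    t = np.arange(0, 10)
    true_rate = 0.35                            # the hidden "cause" we pretend not to know
    observed = 100 * np.exp(-true_rate * t) * (1 + 0.02 * rng.standard_normal(t.size))

    # Analytic route: infer the cause (the decay rate) from the observed effect.
    # Taking logs turns exponential decay into a straight line we can fit.
    slope, intercept = np.polyfit(t, np.log(observed), deg=1)
    print(f"estimated decay rate ~ {-slope:.2f} (true value {true_rate})")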

Leibniz was among the first to define analysis and synthesis as modern methodological concepts:

“Synthesis … is the process in which we begin from principles and [proceed to] build up theorems and problems … while analysis is the process in which we begin with a given conclusion or proposed problem and seek the principles by which we may demonstrate the conclusion or solve the problem.”

 

So we have wandered down this path of analysis and synthesis, and now we will circle back to MECE and the organization. The MECE framework is a prime example of the application of analysis in an organizational structure. The underlying hypothesis is that applying the framework will illuminate and add clarity to our understanding of the problems we are solving for. But here is the problem: the approach can lead to paralysis by analysis. If one were to apply this framework exhaustively, one could lose oneself in the weeds, whereas it is just as important to view the forest. So organizations have to step back and assess at what point to stop the analysis, that is, to decide that enough information has been gathered, and at what point to set out to discover the set of principles that will govern the actions that solve the problem. It is almost always impossible to gather all the information needed to make the best decision, especially when speed, iteration, distinguishing oneself from the herd quickly, and stamping a clear brand are becoming the hallmarks of great organizations.

Applying the synthetic principle in addition to “MECE think” leaves room for error and sub-optimal solutions. But it crowdsources the limitless power of imagination and pattern thinking that will allow the organization to make critical breakthroughs in innovative thinking. It is thus important that both principles are promulgated by the leadership as coexisting principles that drive an organization forward. This ignites employee engagement, and it absorbs the stochastic errors that result when employees have not checked off every MECE condition.

 

In conclusion, it is important that the organization and its leadership set their architecture upon the traditional pillars of analysis and synthesis: MECE and systems thinking. This architecture then serves as the springboard that allows employees accidental discoveries, flights of imagination, and Nietzschean leaps that transform the organization toward the pathway of innovation, while remaining grounded upon the bedrock of facts and empirical observations.