Category Archives: Analytics

Disseminating financial knowledge to develop engaged organizations

Financial awareness of key drivers is becoming a paramount leading indicator of organizational success. For most, the finance department is a corner-office service that offers ad hoc analysis of strategic and operational initiatives, and provides an ex-post assessment of the company's financial condition to a select few. There are some key financial metrics that one wants to measure across all companies and all industries without exception, but there are also unique metrics that reflect the key underlying drivers of organizational success. Organizations align their forays into new markets, new strategies and new ventures around a narrative that culminates in a financial metric, or a proxy for one, that illustrates opportunities lost or gained.


Having been cast in operational finance roles for a good length of my career, I have often encountered a high level of interest in learning financial concepts in areas such as engineering, product management, operations and sales. I have to admit that I have been humbled by the fairly wide common-sense understanding of basic financial concepts that these folks have. In most cases, however, the understanding is less than skin deep, with misunderstandings that are meaningful. The good news is that I have also noticed a promising trend: the questions are more thoroughly weighed by the “non-finance” participants, and there seems to be an elevated understanding of the key financial drivers that translate to commercial success. This knowledge continues to accelerate, largely because of the convergence of data science, analytics, the assessment of personal ownership stakes, and so on. But the passing of such information to these hungry recipients is not formalized. In other words, I posit that a formal channel for inculcating financial education across the various functional areas would pay rich dividends for the company in the long run. Finance is a vast enough field that imparting more than skin-deep knowledge of these concepts would also enable the finance group to engage in meaningful conversations with other functional experts, allowing the narrative around the numbers to be more wholesome. Thus, imparting financial knowledge would benefit the finance department as well.


To be effective in creating a formal channel for disseminating information on the key areas of finance that matter to the organization, it is important to understand the operational drivers. When I say operational drivers, I am expanding the term to encompass drivers that may uniquely affect other functional areas. For example, sales may be concerned with revenue and margins, whereas production may be concerned with server capacity, work-in-process, throughput, and so on. In the end, the financial metrics are derivatives: they are cross products of single or multiple drivers, and these are the elements that need to be fleshed out to effect a spirited conversation. That would then enable the production of a financial barometer that everyone in the organization can rally behind and understand, and more importantly, use to assess how their individual contributions have advanced and will advance organizational goals.
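The idea that financial metrics are derivatives of operational drivers can be made concrete with a small sketch. All of the drivers and figures below are hypothetical, chosen only to illustrate how metrics compose from drivers owned by different functions:

```python
# Hypothetical operational drivers (illustrative figures only).
units_sold   = 12_000   # sales driver
unit_price   = 49.0     # pricing driver ($)
unit_cost    = 31.0     # production driver ($)
server_hours = 8_400    # capacity driver

# Financial metrics derived as cross products of the drivers above.
revenue      = units_sold * unit_price
gross_margin = (unit_price - unit_cost) / unit_price
throughput   = units_sold / server_hours  # units per server-hour

print(f"revenue ${revenue:,.0f}")
print(f"gross margin {gross_margin:.0%}")
print(f"throughput {throughput:.2f} units/server-hour")
```

Each function in this sketch owns one input, yet every output metric depends on more than one function, which is precisely why a shared financial barometer requires cross-functional conversation.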

The Political Campaign Juggernaut – What Obamney campaigns can teach Organizations!

The Presidential election is tomorrow. I shall not disclose my position, but I am a San Francisco/Bay Area native. Any doubts about whom I am most likely inclined toward? Most likely not! But the campaign throughout the year got me thinking. Imagine: over $1.3B has been spent to either bash someone or to send a message out. Over $1.3B! I do not have the actual numbers, but what I do know is that about $1B was spent in 2008, and the total spend is estimated to be at least 30% higher for the 2012 campaign. That makes it one of the biggest annual marketing budgets. To put it in context, that is almost 50% more than what Apple spent on advertising in 2011 ($933M).


We are expecting about 100M people to vote. 100M people to give a like to either party. Now look at it this way: $1.3B suggests that the total presidential campaign budget would translate to over 400M clicks (assuming $3 per click) or 650 billion impressions (assuming $2 per 1,000 impressions). Of course, that is not actually the case, because there is payroll, organizational expenses, and so on. But you get the point. It is a big, big budget, and one of the very few budgets that tend to be managed very well. Even this largesse does not take into account the volunteer base that goes into the campaigns.
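The back-of-envelope figures above can be reproduced directly from the stated assumptions ($1.3B total spend, $3 per click, $2 per 1,000 impressions); these unit prices come from the text itself and are rough illustrative rates, not actual ad-market quotes:

```python
# Back-of-envelope check on the campaign-spend comparison.
total_spend    = 1_300_000_000  # $1.3B, per the estimate above
cost_per_click = 3              # assumed $ per click
cpm            = 2              # assumed $ per 1,000 impressions

clicks      = total_spend / cost_per_click
impressions = total_spend / cpm * 1_000

print(f"~{clicks / 1e6:.0f}M clicks")          # over 400M clicks
print(f"~{impressions / 1e9:.0f}B impressions") # 650B impressions
```

The point survives any reasonable tweak to the assumed rates: at this scale, even a large error bar leaves a marketing budget bigger than almost any corporation's.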


Now the outcome associated with political campaigns is fairly concrete. Either you have put the money to good use, resulting in the election of the appropriate person, or your money has not been spent well enough. Whom do you fire? The person who loses either moves shop from the White House or considers becoming the CEO of the next big thing – perhaps a private equity group. Either way, we can take some learnings from all that has transpired and apply them to organizations. Of course, most organizations do not have this massive budget, but regardless, they do have substantial marketing budgets, and so the question is: what can we learn from the political theater that would enable an organization to shape the customer and employee mindshare?

Here are a few key points:
1. Pounding the message: Organizations have to be focused on the end goal and ensure at all times that every message being delivered serves a set of key objectives that enable organizational success. That means there should be no ambiguity as to what the organization and its brand represent. Dilution of the message may open up pockets of undecided customers or employees who could vote with their wallets and their feet quite readily.
2. Creating advocacy groups: Organizations have to create and nurture product and message evangelists by placing these nodes across the many fields where potential customers and employees may come in contact with the organization. That would mean almost all social media channels, offline channels, conferences, elicited testimonials, investor and public relations efforts, the timing of special news releases, etc. Advocacy groups are a proxy for all channels that an organization must leverage.
3. Aspirational inclinations: Sell a dream! Sell possibilities! Sell the why-nots! People tend to converge upon a platform of optimism. Yet organizations must also be able to short their competitors’ offerings, or perhaps not mention them at all.
4. Polling the behavior: If you notice, political campaigns have taken a page out of the Lean Startup methodology. If polls go haywire, resources and messages are tweaked to create a semblance of stability and to get back to the desired radar frequencies. Tweaking the message, and the presence of the messenger, becomes important. This is field deployment of solutions driven by what all the gathered data intelligence is telling you.
5. Super PACs and angel affiliates: You have limits, as do all organizations! No problem! Create evangelists who are not directly on the take. These are folks who will push your culture to the furthest corners of the globe. So recognize them and support them. They carry the torch because they fully believe in your mission and that your organization’s outcomes will impact them positively. How? Let them know! Drill. Baby. Drilllll the message.
6. Electoral College wins, not popular polls: Focus on the profitable customers; get the very best employees. Stratify your business so that you buy the win. You may not have the most likes, but you will have had enough among the strata that truly matter.
7. Give the final reason: Give customers and employees a reason to vote. You want them to vote for you, but all the same you still want them to vote. You want the market of ideas to expand, even though those ideas may serve competing visions in the tapestry of organizations in your space. In harnessing the turnout at the polls, you will have done as well as you can to draw them to your mojo.


See you all, possible voters, at the polls tomorrow. Applaud and keep the flames of democracy alive.

The Big Data Movement: Importance and Relevance today?

We are entering a new age in which we are interested in a finer understanding of the relationships between businesses and customers, organizations and employees, products and how they are being used, and how different aspects of the business and the organization connect to produce meaningful, actionable and relevant information. We are seeing a lot of data, and the old tools for managing, processing and gathering insights from data, such as spreadsheets and SQL databases, do not scale to current needs. Thus, Big Data is becoming a framework for approaching how to process, store and cope with the reams of data being collected.

According to IDC, it is imperative that organizations and IT leaders focus on the ever-increasing volume, variety and velocity of information that forms big data.

  • Volume. Many factors contribute to the increase in data volume – transaction-based data stored through the years, text data constantly streaming in from social media, increasing amounts of sensor data being collected, etc. In the past, excessive data volume created a storage issue. But with today’s decreasing storage costs, other issues emerge, including how to determine relevance amidst the large volumes of data and how to create value from data that is relevant.
  • Variety. Data today comes in all types of formats – from traditional databases to hierarchical data stores created by end users and OLAP systems, to text documents, email, meter-collected data, video, audio, stock ticker data and financial transactions. By some estimates, 80 percent of an organization’s data is not numeric! But it still must be included in analyses and decision making.
  • Velocity. According to Gartner, velocity “means both how fast data is being produced and how fast the data must be processed to meet demand.” RFID tags and smart metering are driving an increasing need to deal with torrents of data in near-real time. Reacting quickly enough to deal with velocity is a challenge to most organizations.

SAS has added two additional dimensions:

  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something big trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage – especially with social media involved.
  • Complexity. When you deal with huge volumes of data, it comes from multiple sources. It is quite an undertaking to link, match, cleanse and transform data across systems. However, it is necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control. Data governance can help you determine how disparate data relates to common definitions and how to systematically integrate structured and unstructured data assets to produce high-quality information that is useful, appropriate and up-to-date.


So to reiterate, Big Data is a framework stemming from the realization that data has gathered significant pace and that its growth has exceeded the capacity of an organization to handle, store and analyze it in a manner that offers meaningful insights into the relationships between data points. I call this a framework, unlike other materials that call Big Data a consequence of organizations’ inability to handle massive amounts of data. I refer to Big Data as a framework because it sets the parameters around an organization’s decision as to when, and which, tools must be deployed to address data scalability issues.

Thus, to put the appropriate parameters around when an organization must consider Big Data as part of its analytics roadmap in order to better understand the patterns in its data, it has to answer the following ten questions:

  1. What are the different types of data that should be gathered?
  2. What are the mechanisms that have to be deployed to gather the relevant data?
  3. How should the data be processed, transformed and stored?
  4. How do we ensure that there is no single point of failure in data storage and data loss that may compromise data integrity?
  5. What are the models that have to be used to analyze the data?
  6. How are the findings of the data to be distributed to relevant parties?
  7. How do we assure the security of the data that will be distributed?
  8. What mechanisms do we create to implement feedback against the data to preserve data integrity?
  9. How do we morph the big data model into new forms that account for new patterns, to reflect what is meaningful and actionable?
  10. How do we create a learning path for the big data model framework?

Some of the existing literature has commingled the Big Data framework with analytics. In fact, the literature has gone on to make a rather assertive statement: that Big Data and predictive analytics should be looked upon in the same vein. Nothing could be further from the truth!

There are several tools available in the market to do predictive analytics against a set of data that may not qualify for the Big Data framework. While I was the CFO at Atari, we deployed business intelligence tools using MicroStrategy, which had predictive modules. In my recent past, we explored SAS and Minitab tools for predictive analytics. In fact, even Excel can do multivariate analysis, ANOVA, regression and best-fit curve analysis. These techniques have been part of the analytics arsenal for a long time. Different data sizes may need different tools to instantiate relevant predictive analysis. This is a very important point, because companies that do not have Big Data ought to seriously reconsider their strategy of which tools and frameworks to use to gather insights. I have known companies that have gone the Big Data route although all data points (excuse my pun), even after incorporating capacity and forecasts, suggest that alternative tools are more cost-effective than implementing Big Data solutions. Big Data is not a one-size-fits-all model; it is an expensive implementation. However, for the right data size, which in this case would be very large, a Big Data implementation would be extremely beneficial and cost-effective in terms of the total cost of ownership.

Areas where Big Data Framework can be applied!

Some areas lend themselves to the application of the Big Data Framework.  I have identified broadly four key areas:

  1. Marketing and Sales: Consumer behavior, marketing campaigns, sales pipelines, conversions, marketing funnels and drop-offs, distribution channels are all areas where Big Data can be applied to gather deeper insights.
  2. Human Resources: Employee engagement, employee hiring, employee retention, organization knowledge base, impact of cross-functional training, reviews, compensation plans are elements that Big Data can surface. After all, generally over 60% of company resources are invested in HR.
  3. Production and Operational Environments: Data growth, different types of data appended as the business learns about the consumer, concurrent usage patterns, traffic, web analytics are prime examples.
  4. Financial Planning and Business Operational Analytics: Predictive analytics around bottom-up sales, marketing campaign ROI, customer acquisition costs, earned media and paid media, margins by SKU and distribution channel, operational expenses, portfolio evaluation, risk analysis, etc., are some of the examples in this category.

Hadoop: A Small Note!

Hadoop is becoming a more widely accepted tool for addressing Big Data needs. Its design traces back to Google, which published the MapReduce and Google File System papers describing how it indexed the structured and text information it was collecting so it could present meaningful, actionable results to users quickly. Hadoop itself is an open-source implementation of those ideas, and it was substantially developed and hardened for enterprise applications at Yahoo.

Hadoop runs on a large number of machines that don’t share memory or disks, with the Hadoop software running on each machine. Thus, if you have, for example, over 10 gigabytes of data, you spread that data across the different machines, and Hadoop keeps track of where each piece resides. The servers or machines are called nodes, and a group of nodes working together over the data is called a cluster. Each node operates on its own little piece of the data, and once that piece is processed, the partial results are combined and delivered to the client as a unified whole. This method of mapping work out to the nodes and then reducing the disparate partial results into one unified answer is MapReduce, an important mechanism of Hadoop. You will also hear of something called Hive, which is a data warehouse layer built on top of Hadoop: it lets you query the structured and unstructured data that Hadoop stores redundantly across the cluster, with the underlying processing carried out through MapReduce.
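The map-then-reduce flow described above can be sketched in a few lines. This is a toy word count simulated on one machine; real Hadoop distributes the map and reduce steps across the nodes of a cluster, but the logical phases are the same:

```python
# Toy MapReduce: word count over a tiny "dataset" of documents.
from collections import defaultdict

documents = [
    "big data needs big tools",
    "hadoop maps and reduces data",
]

# Map phase: each "node" emits (word, 1) pairs for its share of the data.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted pairs by key (the word).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: collapse each group into a single combined result.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)  # e.g. {'big': 2, 'data': 2, ...}
```

The reason this structure scales is that the map and reduce steps touch only local data; the only cross-machine step is the shuffle, which Hadoop handles for you.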

Personally, I have always been interested in business intelligence. I have always considered BI a stepping stone in the new age: a handy tool for truly understanding a business and developing financial and operational models that stay fairly close to the trending insights the data generates. So my ear is always to the ground as I follow developments in this area, and though I have not implemented a Big Data solution, I have always been, and will continue to be, interested in seeing its applications in certain contexts and against the various use cases in organizations.