The Boulder BI Brain Trust

 

July 2013 Archives

The second quarter of the year typically proves to be one of the strongest.  This time around not only were we not disappointed, but we also began seeing strong upside for the remainder of the year.  Both our enterprise SaaS and adtech platform portfolio companies performed extremely well, and a few (from both groups) are seeing demand strong enough that they may raise their 2H13 bookings and revenue targets.  As public SaaS companies report quarterly results, we are seeing similarly strong performance from some of them, while the majority are expected to at least meet analyst expectations.  The North American market fuels this growth, whereas international markets, particularly Europe, remain a concern.  We are also seeing more activity in certain industries such as retail, parts of manufacturing, and logistics, where application budgets are holding steady and even increasing.  In other industries we continue to see budgetary pressures, as I mentioned in last quarter's commentary.

The public enterprise SaaS companies we monitor, e.g., Netsuite, Workday, Demandware, ServiceNow, Jive, Cornerstone OnDemand, have either started announcing or are expected to announce strong 2Q13 results that are at least in line with analyst expectations.  In fact, Netsuite beat analyst expectations.  The public adtech companies had a rougher time during 2Q13.  Tremor Video and Marin Software, two adtech companies that went public during the quarter, were not welcomed with open arms; both started trading down soon after their IPOs.  Similarly, Valueclick and Millennial Media continue to be scrutinized by the public markets.  Two other private adtech companies, Yume and Adap.tv, are expected to go public during this quarter, and a few more will file to go public before the end of the year.  I believe that the companies which have either already filed to go public, or are planning to file, are of higher quality than the ones that have already gone public.  As a result, I wouldn't be surprised if their stocks perform better in the public markets than those of the current set of public adtech companies.

There were two acquisitions worth mentioning: Salesforce's acquisition of ExactTarget and Adobe's acquisition of Neolane.  In addition to the size of these transactions, it is interesting to note that they allow both acquirers to strengthen their CMO suites.  Another interesting SaaS transaction was the acquisition of CompuCom by TH Lee, a buyout firm.  In the broader cloud category I should also mention IBM's acquisition of SoftLayer.  Large private equity firms are increasing the pace of their investments in SaaS and adtech companies, e.g., Insight Venture Partners' investment in Brightedge.  Finally, SAP acquired Hybris, which derives a relatively small percentage of its revenue from a SaaS application, with the majority coming from on-premise software.

Positive aspects of our SaaS portfolio’s performance:

  1. Strong license revenue growth of 10-30% QoQ for the online advertising platform companies, and 15-20% for the remaining SaaS companies.  We are seeing an accelerating trend of brands entering into yearlong contracts with adtech platform companies, in the process bypassing their ad agencies and forgoing campaign-based contracts.  I first reported this trend in last quarter's commentary.  This type of contract provides adtech platform companies with a more predictable revenue ramp, which the public markets appreciate.
  2. The accelerating adoption of SaaS applications continues.  Our enterprise SaaS portfolio companies are signing more enterprise customers each quarter and expanding their footprint within each client enterprise.
  3. Steady renewal rates (90%+) and improvement in the churn we had seen in the social application companies.
  4. Sales pipelines growing faster than in the past. 
  5. Large IT vendors continue to move aggressively to partner with private SaaS companies as they try to incorporate more cloud-based solutions into their portfolios.

Negative aspects of our SaaS portfolio's performance:

  1. The macro environment outside the US, particularly in Europe and less so in Asia, remains negative.  This is having more of an impact on our adtech platform companies.
  2. Mobile SaaS solutions are attracting more attention but not big dollars yet.
  3. Talent acquisition, particularly for sales and engineering roles, remains difficult.

Under normal market conditions the second quarter tends to be better than the first, and this year was no exception.  However, 2Q13 gave us more indications that this can end up being a strong year for our SaaS portfolio if the economy continues to mend and remains on its current trajectory.

Insight Generation


In my last blog I tried to define the concept of insight.  In this post I discuss insight generation.  Insights are generated by systematically and exhaustively examining a) the output of various analytic models (including predictive, benchmarking, and outlier-detection models) generated from a body of data, and b) the content and structure of the models themselves.  Insight generation is a process that takes place together with model generation, but is separate from the decisioning process, during which the generated models, as well as the insights and their associated action plans, are applied to new data.

Insight generation depends on our ability to a) collect, organize and retain data, b) generate a variety of analytic models from that data, and c) analyze the generated models themselves.  Therefore, in order to generate insights, we must have the ability to generate models, and in order to do that we must have data.  Insights can be generated from collected data, from data derived from the collected data, and from the metadata of the collected data.  This means that we need to be thinking not only about the data collection, management and archiving processes, but also about how to post-process the collected data: what attributes to derive and what metadata to collect.
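To make the three layers concrete, here is a minimal sketch assuming a hypothetical purchase-event data set; the column names and the derived attributes are invented for illustration.

```python
import pandas as pd

# Collected data: raw events captured from the instrumented environment.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "purchase_amount": [20.0, 35.0, 15.0, 40.0, 25.0],
    "timestamp": pd.to_datetime(
        ["2013-07-01", "2013-07-08", "2013-07-02", "2013-07-09", "2013-07-15"]
    ),
})

# Derived data: attributes computed (post-processed) from the collected data.
derived = events.groupby("customer_id").agg(
    total_spend=("purchase_amount", "sum"),
    purchase_count=("purchase_amount", "count"),
)

# Metadata: facts about the data set itself, also usable for modeling.
metadata = {
    "row_count": len(events),
    "collection_window": (events["timestamp"].min(), events["timestamp"].max()),
    "columns": list(events.columns),
}

print(derived)
print(metadata)
```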

In some cases data is collected by conducting reproducible experiments or simulations (synthetic data).  In other cases there is only one shot at collecting a particular data set.  Regardless, insight generation is highly dependent on how an environment is "instrumented."  For example, consumer marketers have gone from measuring a few attributes per consumer, think of the early consumer panels run by companies such as Nielsen, to measuring thousands of attributes, including consumer web behavior and, most recently, consumer interactions in social networks.  The "right" instrumentation is not always immediately obvious, i.e., it is not obvious which of the data that can be captured needs to be captured.  Oftentimes, it may not even be immediately possible to capture particular types of data.  For example, it took some time between the advent of the web and our ability to capture browsing activity through cookies.  But obviously, the better the instrumentation, the better the analytic models, and thus the higher the likelihood that insights can be generated.  Knowing how to instrument an environment, and ultimately how to use the instrumentation to measure and gather data, can be thought of as an experiment-design process and frequently requires domain knowledge.

Insight generation also involves the ability to organize murky data, which is typically the situation in environments involving big data, and to focus on the data that makes "sense," given a specific context and state of domain knowledge.  Focusing on specific data given a particular context doesn't mean that the rest of the collected data is unimportant.  It's just that one cannot make sense of it at that point in time.

It is important to not only collect and organize data, but also to properly archive it, since insight generation may only become possible when a body of archived data is combined with a set of newly collected data under a particular context.  The combination of archived and new data may also lead to insights beyond those generated in the past.  As the body of domain knowledge increases and new data is collected, it may be possible to extract new insights even from data collected in the past.  Having inexpensive and scalable big data infrastructures enables this capability.

Insight generation is serendipitous in nature.  For this reason, insights are more likely to be generated from the examination of several analytic models that have been created from the same body of data, because each model-creation approach considers different characteristics of the data to identify relations.  We maintain that model analysis, and therefore insight generation, is facilitated when models can be expressed declaratively.  A good example of the advocated approach is IBM's Watson system.  This system uses ensemble learning to create many expert analytic models.  Each created model provides a different perspective on a specific topic.  Watson's ensemble learning approach utilizes optimization, outlier identification and analysis, benchmarking, and other techniques in the process of trying to generate insights.
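As a hedged sketch of this idea, here are two model families fit to the same synthetic data set, each exposing the underlying relations in a different form.  This is only an illustration of examining several models built from one body of data, not a description of Watson's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with a hidden relation between two of three features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Two model-creation approaches applied to the same body of data.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
linear = LogisticRegression().fit(X, y)

# Each model surfaces the relations from a different perspective:
print(tree.feature_importances_)  # relations as split importances
print(linear.coef_)               # relations as linear coefficients
```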

While we are able to describe data collection and model creation in quite detailed ways, and have been able to largely automate them, this is still not the case with insight generation.  This is in fact the most compelling reason for offering insight as a service: we have not been able to broadly automate the generation of insights.  What we characterize as insight today has to be generated manually through the analysis of each analytic model derived from a body of data, even though there is academic research that is starting to point to approaches for the automatic generation of insights.  The analysis of the derived analytic models will enable us to understand which of the relations comprising a model are simply correlations supported by the analyzed data set (but don't constitute insights because they don't satisfy the other characteristics an insight must possess), and which are actually meaningful, important and satisfy all the characteristics we outlined before.

As I mentioned, in most cases today, utilizing insights that are generated manually by experts and offered in the form of a service may be the only alternative organizations have if they are to fully benefit from the big data they collect.  The best examples are companies like FICO, Exelate, Opera Solutions, Gainsight and a few others.  However, there are additional advantages to offering insights as a service:

  1. Certain types of insights, e.g., benchmarks, can only be offered as a service because the provider needs to compare data from the variety of organizations being benchmarked (see the sketch after this list).
  2. Offering insight as a service could lower the overall cost of generating and reasoning over insights.  This means that even organizations that can generate insights on their own may ultimately decide to outsource the insight generation and reasoning processes because specialized organizations may be able to perform them more efficiently and cost effectively.
  3. Offering insight as a service enables organizations to benefit from the expertise the insight generator develops by offering insights to multiple organizations of the same type. For example, FICO has now developed tremendous credit insight expertise which no single financial services organization can replicate.
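A minimal sketch of the benchmarking case in point 1, assuming the provider holds a metric, here renewal rates, reported by many client organizations; the figures and metric are invented for illustration.

```python
# Renewal rates pooled across many client organizations (provider-side
# data that no single client could assemble on its own).
peer_renewal_rates = [0.88, 0.91, 0.93, 0.85, 0.95, 0.90, 0.92, 0.89]

def percentile_rank(value, peers):
    """Share of peer organizations at or below the given value."""
    return sum(p <= value for p in peers) / len(peers)

client_rate = 0.91
print(f"Client renewal rate sits at the "
      f"{percentile_rank(client_rate, peer_renewal_rates):.0%} mark "
      "of its peer group")
```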

I wanted to close by making the following point: I have argued that for an insight to be valid it must have an action associated with it.  This action is applied during a decisioning process.  The characteristics of a particular decisioning process will also need to be taken into consideration during the insight/action-generation process, because the time (and maybe even other costs) allocated to apply a particular action during the decisioning process is very important.  Watson's Jeopardy play provided a great illustration of this point, as the system had a limited amount of time to come up with the correct response to beat its opponents.  Below I provide an initial, rudimentary illustration of the time available to apply specific actions in particular domains.

[Figure: Time to Action]

We are starting to make progress in understanding the difference between patterns and correlations derived from a data set and insights.  This is becoming particularly important not only because we are dealing more frequently with big data, but also because we need to use insights to gain a competitive advantage.  Offering manual insight-generation services provides us with some short-term reprieve, but ultimately we need to develop automated systems, because the data is getting bigger and our ability to act on it is not improving proportionately.

Defining Insight


A little over two years ago I wrote a series of blogs introducing Insight-as-a-Service.  My idea on how companies can provide insight as a service started by observing my SaaS portfolio companies.  Like all SaaS companies, these companies collect and store application usage data in addition to each customer's operational data used by their SaaS applications.  As a result, they have the capacity to benchmark the performance of their customers and help them improve their corporate and application performance.  I had then determined that insight delivered as a service can be applied not only to benchmarking but also to other analytic- and data-driven systems.  Over the intervening time I came across several companies that started developing products and services building upon the idea of insight generation and providing insight as a service.  However, the more I thought about insight-as-a-service, the more I came to understand that we didn't really have a good enough understanding of what constitutes insight.  In today's environment, where corporate marketing overhypes everything associated with big data and analytics, the word "insight" is being used very loosely, most of the time to indicate any type of data analysis or prediction.  For this reason, I felt it was important to attempt defining the concept of insight.  Once we define it we can then determine if we can deliver it as a service.  During the past several months I have been interacting with colleagues such as Nikos Anerousis of IBM, Bill Mark of SRI, Ashok Srivastava of Verizon and Ben Lorica of O'Reilly in an effort to define "insight."

An insight is the identification of cause and effect relations among elements of a data set that leads to the formation of an action plan which results in an improvement as measured by a set of KPIs.  Insights are discovered by reasoning over the output of analytic models and techniques.   This output can take the form of  predictions, correlations, benchmarks, outlier identifications and optimizations.   
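To make the definition concrete, here is a minimal sketch that encodes an insight as a data structure tying a cause-and-effect relation to an action plan and the KPIs that measure the resulting improvement.  The field names and the churn example are my own assumptions, not part of the definition.

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    cause: str          # the identified driver in the data
    effect: str         # the outcome it influences
    action_plan: str    # the action the insight leads to
    kpis: list = field(default_factory=list)  # how improvement is measured

# Hypothetical example instance for a SaaS business.
churn_insight = Insight(
    cause="support tickets unresolved for more than 7 days",
    effect="subscription churn in the following quarter",
    action_plan="escalate aging tickets to a dedicated retention team",
    kpis=["quarterly churn rate", "median ticket resolution time"],
)
```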

The evaluation of a set of established relations to identify an insight, and the creation of an action plan associated with a particular insight needs to be done within a particular context and necessitates the use of domain knowledge.

Most analytic model outputs do not provide insights.  There are two reasons for this.  First, the models don't suggest a meaning for each of their findings.  Second, they don't put each finding in an actionable context (even if the meaning were known).  Finding a pattern doesn't imply that you automatically find meaning and that you understand it.  It just implies that you have found a correlation within a data set.  Moreover, even establishing causality alone is not sufficient for generating an insight.  One needs to be able to derive an action plan that can successfully and effectively, i.e., with impact, be applied in a particular context.  This requirement implies that even knowing the meaning of a finding doesn't tell me how to generalize it and use it in the context I am trying to impact.  That step requires knowledge of my environment (business, social, education, etc.), my strengths and weaknesses, other forces that may enhance or diminish my efforts, etc.

An insight must be: 

  1. Stable.  This means that an insight must not vary depending on the relation-identification algorithm/model being used.  For example, if I use two different samples from the same data set to create a predictive model employing the same model-creation method, then the resulting models must provide identical results given the same new data input (see the sketch after this list).
  2. Reproducible.  This means that regardless of how many times one feeds a particular data set through an insight-generation system, the same insight will be produced.
  3. Robust.  This means that a certain amount of noise in the input data will not diminish the quality of the insight.  This is a particularly important requirement in big data environments.  Insight-generation systems must be able to organize noisy data and focus on the data that makes "sense," based on a particular context.
  4. Enduring. This means that the insight is valid for an amount of time that is related to the underlying data's "half life."  
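A minimal sketch of checking the stability property from point 1: two models built from different samples of the same data, using the same method, should agree on new inputs.  The data here is synthetic, and the level of agreement one would demand is an assumption left open.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data set with a simple underlying relation.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Hold out fresh data, then draw two disjoint training samples.
X_pool, X_new, y_pool, y_new = train_test_split(X, y, test_size=0.2, random_state=0)
X_a, X_b, y_a, y_b = train_test_split(X_pool, y_pool, test_size=0.5, random_state=1)

# Same model-creation method, two different samples of the same data.
model_a = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_a, y_a)
model_b = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_b, y_b)

# Stability check: both models should classify the new data the same way.
agreement = (model_a.predict(X_new) == model_b.predict(X_new)).mean()
print(f"Agreement on new data: {agreement:.1%}")
```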

Because of the above requirements, insight generation necessitates the deeper analysis, including the causal analysis, of the underlying relation-identification models, rather than just the testing of each model's accuracy, as is typically done in predictive analytics tasks.  Such causal analysis implies that when trying to generate insights it is preferable to utilize machine learning techniques that describe patterns declaratively, e.g., decision trees, rather than black-box approaches, e.g., neural nets and genetic algorithms.  As a result of this requirement, one may need to sacrifice prediction accuracy and speed for expressiveness.  Therefore, one needs to identify the domains where insight generation may be more important than predictive accuracy.  Moreover, because the models themselves need to be analyzed, simpler models may be preferred to more complex ones.
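The contrast can be illustrated with a hedged sketch: a decision tree states its learned relations as inspectable rules, while a neural network's weights carry no comparably readable structure.  The data, feature names, and model settings below are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data with two hypothetical features.
rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 2))
y = (X[:, 0] > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# The tree expresses its patterns declaratively, as readable conditions...
print(export_text(tree, feature_names=["price", "usage"]))
# ...whereas the network's learned weights carry no such readable structure.
print(net.coefs_[0].shape)
```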

Insight generation is not a single-shot process.  Once an insight is generated and the associated action plan is created, it is important to apply the plan in the particular context and measure its impact.  The collected data must then be compared to the set of established KPIs in order to determine whether the particular insight/action-plan pair led to an improvement.  Depending on this analysis, the system must then decide whether to attempt improving the action plan, create a completely new plan (assuming that alternatives can be found), or try to create a brand new insight.  This means that from a set of initial input data the insight-generation system must seek to derive all possible predictions, based on the set of available models.
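A rudimentary sketch of this closed loop follows.  Every step (applying the plan, measuring KPIs, refining the plan, generating a new insight) is passed in as a hypothetical callable, since none of these steps is specified here.

```python
def insight_loop(action_plan, apply_plan, measure_kpis, refine_plan,
                 new_insight, kpi_baseline, max_rounds=5):
    """Apply an action plan, compare measured KPIs to the baseline,
    and adapt: refine the plan, or fall back to a brand new insight."""
    for _ in range(max_rounds):
        apply_plan(action_plan)              # act in the particular context
        if measure_kpis() > kpi_baseline:    # improvement achieved
            return action_plan               # keep the insight/action pair
        improved = refine_plan(action_plan)  # try to improve the plan
        action_plan = improved if improved is not None else new_insight()
    return None                              # no improving plan found
```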

   

 
