The biggest jump in data science’s ROI comes when a business matures from correlation-based to causality-based initiatives. I worked with a global retailer last year to improve their in-store average sale by increasing the average number of items per transaction. We started by surveying the sales associates who led their stores in these metrics: “What do you do to get the customer to buy more from you?” As you can imagine, we got a wide variety of responses.
I knew we had a lot of noise and a little signal in those responses. If we had stopped at correlation techniques, we would have done something like select the most common responses and present, “86% of high-performing sales associates use suggestive selling to increase their average sale.” Data science can do far more than state the obvious. Deep insights come from causal relationships.
So we experimented with the responses. We trained associates on a variety of the suggested techniques and measured the effect on each individual’s average sale and average number of items. We found more noise. Regional differences, differences between salespeople, and differences between training techniques all introduced variation that blurred the experimental results. Our hypotheses became increasingly granular, and our experiments became more controlled and precise.
That’s when we started discovering gold. Initiatives with names like “Plan to Increase the Lowest-Performing 15% of Sales Associates in the US Southern Region’s Average Sale by 45%” came out of our findings. Just over 90% of these initiatives have achieved or exceeded their goals. The retailer now has the skills in place to assess what went wrong with the other roughly 10% and to refine their understanding through further experiments.
There’s value in this approach, but for most of my clients it’s the first time they’ve undertaken anything like it. With repetition, I’ve come to recognize the patterns that lead to best practices in data experiments. It’ll come as no surprise that these patterns are what the hard sciences have been preaching to their students for a very long time.
Every Experiment Needs a Review Process
The experimental process needs oversight. There are too many business, ethical, privacy, bias, and domain concerns not to have multiple sets of eyes on any experiment a company undertakes. There are too many ways for personal bias to creep into an experiment, or for someone well-meaning to do something unethical. This has been my biggest takeaway from data science experiments: something will go wrong if experimentation is contained in a silo.
The faster your business can go from generating a hypothesis to proving or refuting it, the faster it will act on the insights and move on to the next one. The first few experiments will take a long time, but don’t assume that’s the norm. Speed is key in business, and data science experiments should get faster as the business gains experience running them. Data science alone is a competitive advantage today because only a few businesses have the capability. As data science becomes more pervasive, the advantage will shift to speed and sophistication.
Use a Three-Phase Discovery Process
The first phase is detection. This is what statistical data scientists are really good at: finding correlations between elements hidden in massive data lakes.
The second phase is experimentation. Experimental data scientists use the discovery of correlation to generate a hypothesis and design an experiment that will prove or refute that hypothesis. Then they run the experiment and analyze the results.
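The article doesn’t name the statistical machinery used, but one simple, assumption-light way to prove or refute such a hypothesis is a permutation test, sketched here on invented numbers:

```python
import random
from statistics import mean

# Hypothetical average-sale figures: associates who received a
# suggestive-selling training versus a control group (invented data).
trained = [82.1, 90.4, 77.8, 95.0, 88.3, 79.9, 91.7, 85.2]
control = [74.5, 70.2, 81.0, 68.9, 76.4, 72.3, 79.1, 71.8]

observed = mean(trained) - mean(control)

# Permutation test: shuffle the group labels many times and count how
# often a lift at least this large appears by chance alone.
random.seed(42)
pooled = trained + control
n, trials, extreme = len(trained), 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed lift: {observed:.2f}, p = {p_value:.4f}")
```

A small p-value supports the claim that the training, not chance, drove the lift; the real work is in controlling for the regional and individual differences mentioned earlier.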
The third phase is application. An applied data scientist can take the experimental result and visualize it in a way that’s easily understood, meaningful and actionable. They’re the connection between experimental results and ROI.
Typically an individual will have the skills to handle a single phase, with more senior data scientists able to cover two.
Transparency Is Hard But Necessary
Make sure everyone knows what’s going on. Specifics are proprietary and shouldn’t be disclosed, but the fact that data is being gathered and experiments are being run needs to be communicated to everyone involved. If anyone has an issue with that, there needs to be a process in place to exclude them from data gathering and experimentation.
Even Proven Theories Get Overturned About 10% to 20% of the Time
It happens in science and it will happen in business. If theories are overturned more than 20% of the time, something is wrong with the experimental process. If no thesis is ever overturned or subsequently refined, that’s a problem too.
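A bit of back-of-the-envelope arithmetic shows why some overturn rate is expected even from a sound process (the 30% base rate, 5% false-positive rate, and 80% power below are assumed for illustration, not taken from any real program):

```python
# Assumed inputs: share of tested hypotheses that are actually true,
# the test's false-positive rate (alpha), and its power.
base_rate, alpha, power = 0.30, 0.05, 0.80

true_positives = base_rate * power           # real effects confirmed
false_positives = (1 - base_rate) * alpha    # noise that looked real

# Share of "proven" theories that are false and will later be overturned.
overturn_rate = false_positives / (true_positives + false_positives)
print(f"expected overturn rate: {overturn_rate:.0%}")
```

With these inputs the expected rate lands around 13%, inside the 10%-to-20% band; a rate far outside it suggests the process, not the world, is the problem.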
Experimentation – A Sign of Growing Data Science Maturity
Companies start with data science running strictly correlation techniques. These are the ones best supported by current software offerings and data science skills. As these capabilities mature, the correlations move from obvious to very obscure. The value of correlation is limited, however, and businesses typically outgrow it within a couple of years, because correlation is descriptive while the business needs prescriptive and predictive insights.
Experimentation is the next step, and its insights follow a similar trajectory: they start out obvious and quickly become obscure. The value of these obscure insights is far less limited. They lead to a more granular understanding of customer preferences, competitors’ actions, employee productivity, and investor sentiment, among many other things.
These granular insights lead to models that let a business understand the most likely impact of its actions as well as the full spectrum of available choices. When a company can see beyond the obvious choices, its people become more innovative and creative. When a company can see beyond the obvious impacts of its decisions, its people become more strategic. The hypothesis of data science is that this shift toward creativity and strategy will yield better business outcomes. So far, the data looks promising.