All organizations make decisions using data, but not all organizations use the right data in the right ways.
In recent years, many organizations have loudly embraced 'data-driven decision-making.' Using data to make decisions isn't new, but some of the digital tools available now make it easier than ever to collect and analyze data points from across a business in one place.
Meanwhile, the habits of startup culture emphasize the use of analytics to make critical decisions about products and nearly every other element of a business.
Before we meaningfully discuss decision-making with data, we need to understand the difference between quantitative and qualitative data.
Due to qualitative data's subjective nature, we sometimes apply quantitative methods to 'messy' data like stories and emotions in an attempt to make analysis more straightforward and more consistent.
Hiring managers and people departments have long struggled to integrate qualitative data into their decision-making processes, as any evaluation process requires that soft skills be taken into account. These often include elements that team members "just know."
For example, measuring confidence or professionalism in a single individual can be extremely difficult, even before finding a way to reliably and consistently compare those qualities across entire teams or departments. A 1-10 Likert scale for each of these 'soft skills' is one way to quantify a qualitative data point, much as rubrics and multiple-choice questions help us do.
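As a rough sketch of what that quantification might look like, the code below aggregates hypothetical Likert ratings; the names and scores are purely illustrative:

```python
from statistics import mean, stdev

# Hypothetical 1-10 Likert ratings collected from peer reviews;
# each list holds four reviewers' scores for one person's confidence.
confidence_ratings = {
    "alice": [7, 8, 6, 9],
    "bala":  [5, 6, 5, 7],
    "carol": [9, 9, 8, 10],
}

# The mean turns subjective ratings into a comparable number, while the
# standard deviation flags qualities the reviewers disagreed about.
for person, scores in confidence_ratings.items():
    print(f"{person}: mean={mean(scores):.1f}, spread={stdev(scores):.1f}")
```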
Leaders often have extensive experience to draw on. In a way, leaders are already making decisions based on their qualitative datasets: the memories and decision-making paths inside their brains.
Intuitive leadership has resulted in many good decisions inside organizations, but there are several flaws with intuition and experience as the sole source of decision-making, including:
1) The sample size of our own experiences is limited: our experiences are unique to each of us,
2) Our experiences are based on circumstances and mindsets that may no longer be relevant: with the societal and behavioral changes brought about by both the COVID-19 pandemic and the evolution of the digital age, senior leaders' experience and intuition may be rooted in ways of thinking and working that no longer fit the jobs we do today and tomorrow,
3) Humans experience confirmation bias: we search for evidence that agrees with our existing beliefs about the world.
As we enter into data-driven decision-making, it's essential to engage in the process of 'unlearning' to make sure that we have room in our minds for new mindsets. Without unlearning old behaviors and embracing new ways of thinking and behaving, we will essentially be making decisions with outdated and potentially obsolete data.
Any time a leader expects things to be done a certain way because that's how it has always been done, there is the risk of missing out on potential innovation and exponential growth.
There is a certain amount of what psychologists might call 'ego death' when our ways of thinking and operating are challenged. We might find, for example, that what we thought motivated our coworkers was wrong; our website visitors may have been annoyed rather than excited by the flashy new landing page we invested in heavily. To a leader reliant on intuition and personal experience, this can engender feelings of failure, embarrassment, or even of being personally attacked, all of which could be avoided by integrating data into the decision-making process.
At other times we may find our intuitions are, in fact, confirmed by the data. In this case, it is still important to periodically test our assumptions. Keeping 'assumptions' or 'truths' framed as hypotheses with various degrees of confidence can help avoid confirmation bias in the future.
This is why engaging in a structured process is essential: it identifies what we know and what we know we don't know.
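One lightweight way to keep such assumptions honest is to record them explicitly as hypotheses with confidence levels. The sketch below is illustrative only; the statements, confidence values, and dates are invented:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    confidence: float  # 0.0 (untested belief) to 1.0 (repeatedly confirmed)
    last_tested: str   # when evidence was last gathered

# Framing 'truths' as hypotheses keeps them open to re-testing.
beliefs = [
    Hypothesis("Remote onboarding lowers 90-day retention", 0.4, "2023-Q1"),
    Hypothesis("Our team prefers async status updates", 0.8, "2023-Q3"),
]

# Periodically revisit anything whose evidence is weak or stale.
to_retest = [h.statement for h in beliefs if h.confidence < 0.6]
print(to_retest)
```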
The startup world especially has embraced data-backed decision-making. Qualitative data like stories can be used to get the attention of potential investors and partners, and a good 'pitch' will segue from this qualitative data to quantitative data. It is understandable that potential investors ask for data that backs up a claim of a new opportunity rather than committing large sums of money on a gut feeling alone. This is particularly important with 'exponential' initiatives premised on network effects, where growth is slow at first and rapid later on.
Properly defining the opportunity or problem is critical to any data-backed decision-making.
Without a well-defined problem, it is impossible to move into the hypothesis or testing phases, as an unclear hypothesis leads to unreliable test results. It can be helpful to apply computational thinking to break down your problem into smaller parts.
For example, one of the most common areas of confusion in companies is how to measure employee satisfaction: until we agree on what satisfaction is and how it presents—or not—in team members, it is difficult to use data to make better decisions.
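For instance, computational thinking might decompose 'employee satisfaction' into separately measurable parts, as in this hypothetical sketch (the components and measures are invented for illustration):

```python
# A hypothetical decomposition of 'employee satisfaction' into smaller,
# separately measurable parts -- the essence of computational thinking.
satisfaction = {
    "workload":    "Weekly self-reported workload rating (1-10)",
    "recognition": "Count of peer/manager kudos per month",
    "growth":      "Training hours completed per quarter",
    "retention":   "Voluntary departures per 100 employees per year",
}

# Each sub-metric can now get its own hypothesis and data source,
# instead of arguing over one vague 'satisfaction' number.
for part, measure in satisfaction.items():
    print(f"{part}: {measure}")
```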
A common form of experiment used to test a hypothesis is a pilot program. By starting with a small-scale or even minimally-viable version of an initiative, service, or product, experimenters can often gain considerable insight into the likelihood of success.
Often a pilot's success is measured through its popularity. However, if efficacy is the most important factor, the pilot's popularity may cloud the results. This is why a clear hypothesis is essential before testing begins: it allows the experimenters a degree of distance from the outcome and enables a more dispassionate, balanced assessment. Experimenters using pilots are susceptible to confirmation bias, since the team members involved are usually personally and emotionally invested in finding a successful, positive outcome. This can lead to unsustainable practices, pressure on others to provide positive feedback, and similar distortions.
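One way to enforce that distance is to write the success criterion down before the pilot starts, so popularity cannot move the goalposts later. The sketch below is a minimal illustration; the metric, baseline, and thresholds are all hypothetical:

```python
# Pre-registered before the pilot starts: efficacy, not popularity,
# is the success criterion. All numbers here are hypothetical.
SUCCESS_CRITERION = {
    "metric": "tickets resolved per agent per day",
    "baseline": 12.0,
    "minimum_lift": 0.10,   # pilot must beat baseline by at least 10%
}

def evaluate_pilot(observed: float) -> bool:
    target = SUCCESS_CRITERION["baseline"] * (1 + SUCCESS_CRITERION["minimum_lift"])
    return observed >= target

# Popularity (e.g., enthusiastic signups) can still be tracked,
# but it cannot change the criterion defined above.
print(evaluate_pilot(13.5))  # True: 13.5 >= 13.2
```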
Television shows often secure funding with 'pilot episodes.' Just like the 'minimum viable products' of the startup world, pilot programs tend to have lower production values and are often less polished, as they are primarily used to ensure that the test audience reacts appropriately to the basic premise of the show before committing significant resources to the project.
An important part of testing any hypothesis is the inclusion of feedback loops, so that the team members designing the experiment can gather data directly from the end-users.
Without these feedback loops in place, experimenters miss out on a great deal of otherwise critical data.
For example, if a pilot program gets very low levels of interest, it may not be that the idea itself is poor or unviable, but that the communication of the idea to colleagues was poor.
With a feedback loop in place, colleagues could explain that they were perhaps unaware of the pilot, allowing the experimenters to feed this information back into their model and adapt the experiment appropriately.
Portland, Oregon in the United States is famous for its food carts. After the 2008 real estate crash, many restaurants couldn't stay in business in a traditional building, and new restaurants had a hard time establishing themselves quickly enough to become viable.
As a reaction to this, the path to opening a restaurant in Portland now often starts with a food cart: a smaller, more manageable investment with lower overhead and more flexibility, allowing prospective restaurateurs to test their hypotheses around the types of food offered, the price point and portion size, their ability to produce consistently flavored dishes, and more.
This data can then inform learning and drive adaptations to a restaurant's strategy before the owners commit to a traditional brick-and-mortar location.
All experiments should be designed so that they clearly indicate whether the hypothesis is confirmed, whether it is disproven, or whether the results are inconclusive. This third state is important and often overlooked, or conflated with a negative result. An inconclusive result indicates that the experiment did not yield sufficient information to either prove or disprove the hypothesis.
Inconclusive data is still useful, as it can be used to inform the next round of experiment design and drive towards a more definitive result.
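As a minimal sketch of how an experiment might report all three states, the function below compares a rough 95% confidence interval against a target rate; the counts and target are hypothetical:

```python
import math

def classify_result(successes: int, trials: int, target_rate: float,
                    z: float = 1.96) -> str:
    """Classify an experiment as confirmed, disproven, or inconclusive
    by comparing a ~95% confidence interval to the target rate."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    low, high = p - half_width, p + half_width
    if low > target_rate:
        return "confirmed"      # whole interval above the target
    if high < target_rate:
        return "disproven"      # whole interval below the target
    return "inconclusive"       # interval straddles the target: need more data

print(classify_result(30, 50, 0.5))    # inconclusive: 60% of 50 trials is too noisy
print(classify_result(300, 500, 0.5))  # confirmed: same rate, larger sample
```

Note how the same observed rate flips from inconclusive to confirmed as the sample grows, which is exactly why inconclusive data should feed the next round of experiment design rather than be read as failure.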
If your experiments yield a negative result—the denial of your hypothesis—consider whether you need to update the mental model you are working with, and share this learning with others who may find it useful.
If you don't articulate what your model is, you can't change it.
There are many goals for using data to make decisions. Consider which are of the highest priority to you; not every decision needs the same approach.
All of the goals above are written as decision principles to help people make choices about their data initiatives, too.
You can and should edit, customize, and regularly re-prioritize your own goals and principles. For example, quick decision-making might be more important than perfect decision-making in some contexts, such as stock market investments or responses to social media trends. The opposite may be true when it comes to hiring decisions or core business strategy.
Generally, it helps to start by focusing on the business outcomes you'd like to improve and then identifying where better and/or more data would help, rather than starting with which metrics or data points are already available.
The trend of establishing and driving by Objectives and Key Results (OKRs) is in part a reaction to over-emphasis on vanity metrics and unhelpful performance indicators. Measurement for measurement's sake can create ineffective, perverse, or even dangerous incentives.
If your company needs increased sales, measuring outbound call volume alone may look successful initially but may annoy customers and decrease sales over time. Similarly, basing delivery drivers' pay solely on the number of packages delivered may push drivers to mishandle boxes or cause collisions.
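One common remedy is to pair each driver metric with a guardrail (or counter-) metric, so progress on one number cannot quietly damage another. The sketch below uses entirely hypothetical metrics and values:

```python
# Hypothetical paired metrics: every driver metric gets a guardrail.
metrics = {
    "outbound_calls":   {"value": 1250, "goal": 1000},   # driver: more is better
    "customer_optouts": {"value": 85,   "limit": 50},    # guardrail: fewer is better
}

calls_ok = metrics["outbound_calls"]["value"] >= metrics["outbound_calls"]["goal"]
guardrail_ok = metrics["customer_optouts"]["value"] <= metrics["customer_optouts"]["limit"]

# Success requires BOTH: hitting the goal without breaching the guardrail.
print("healthy" if calls_ok and guardrail_ok else "perverse incentive at work")
```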
Ask yourself (or your team) the following questions and see which have a clear yes and which still need more work:
There is no perfect approach to decision-making using data; the mix of quantitative and qualitative inputs is different for every team. However, the more you answered with a confident 'Yes!' to the questions above, the higher the likelihood is that you're using data effectively.
While simple surveys and analytics are within reach of many leaders, more advanced analyses require a distinct mindset, skillset, and toolset: that of the quantitative analyst, sometimes referred to as a 'quant.'
Quants use statistical methods to process and interpret data of many types. Quants may have titles like 'statistical analyst' or 'data scientist,' but anyone rigorously applying the scientific method to quantitative data can fall under this umbrella category.
Organizations with the strongest decision-making approaches have deep thought partnerships between quants and other stakeholders.
Intelligently consuming (and challenging) your quant partners' work will make them more effective, and their challenges to your decision-making can help you quickly evolve your own mindsets.
Participating in analytical programs doesn't mean that everyone has to become a data scientist. If business stakeholders are clear about the questions they would ask of the data if it were a person, then the quant helping to process it can figure out how to turn those questions into useful answers. A good thought partnership allows two or more people, perhaps with few overlapping skills, to work together to solve a common problem or exploit a new opportunity.
It may be that there isn't a clearly articulated strategy, making it hard for quants to suggest a specific analysis or data point: they don't know exactly what hypotheses are being tested or which trends in the data they should bring to leaders' attention. A quant may see things that others don't, especially with access to advanced machine learning tools. 'Unsupervised' machine learning asks machines to identify patterns in the data that we as humans may not have seen. And in some situations, the 'quant' in your organization will be a machine co-worker, not a human.
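As a hedged illustration of what that might look like in practice, the sketch below applies scikit-learn's KMeans to randomly generated stand-in survey data; in a real setting the inputs, features, and cluster count would come from your own context:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in employee survey data: columns are workload, recognition,
# and growth scores (1-10). Real data would replace this random sample.
rng = np.random.default_rng(0)
responses = rng.uniform(1, 10, size=(200, 3))

# Unsupervised learning: KMeans groups similar respondents without being
# told what the groups mean; interpreting the clusters is the human's job.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(responses)
print(model.cluster_centers_.round(1))  # one 'profile' per cluster
```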
Business operation decisions can and should be data-backed. The risk is over- or under-quantifying critical areas of the business.
Quantitative data lends itself to cost savings, risk reduction, and productivity improvement, among many other measures.
Using qualitative data to inform that process includes interviewing those affected and participating in discovery throughout the business change process.
Institutions of higher learning are using analytics to examine resource usage, fix inefficiencies, and plan for future growth. After examining classroom usage, Queensborough Community College was able to make use of almost an acre of unutilized classroom space instead of spending millions on constructing new buildings.
Customer Experience and Product Management are great opportunities for data-driven decision-making.
The complex choices of which features to prioritize, which bugs to fix first, and which markets to emphasize all benefit from quantitative data.
For example, the personal investment app Acorns doesn't require new users to link all of their bank accounts or to provide exhaustive information about goals, income, and the like up front. Instead, it gently requests new information from a user in subsequent sessions, gradually filling a 'progress meter' to 100% setup completion. This allows people to try out the app without getting discouraged by complex onboarding or profile-filling activities. The subscription-management software company Chargebee also experimented with simplifying its signup process, with stellar results: after reducing the initial sign-up to a single step, entering an email address, it decreased the friction of enrolling and increased signups by 100%.
Such strategies are informed by data showing developers that users often abandon an app during setup when the process is too laborious.
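The underlying analysis is often a simple onboarding funnel; the sketch below uses invented step names and counts to show how step-to-step conversion exposes where users give up:

```python
# Hypothetical onboarding funnel: how many users reached each setup step.
funnel = [
    ("opened app",       10000),
    ("created account",   6200),
    ("linked bank",       2100),
    ("finished profile",  1500),
]

# Step-to-step conversion exposes exactly where users abandon setup.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {n / prev_n:.0%} continued")
```

In this invented example, the sharp drop at the bank-linking step is the kind of signal that would justify deferring that request to a later session, as Acorns does.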
Organizational culture and colleague experience offer another opportunity to leverage both qualitative and quantitative datasets. By identifying friction points, whether functional, social, or emotional, it's possible to increase productivity and retention, resulting in better talent pools and decreased management costs.
Fortunately, recent movements around the concept of Objectives & Key Results (or OKRs) mean that leaders can better map the organization's needs to relevant, quantitative measures.
The HR software tool 15five.com gathers quantitative and qualitative measures of effectiveness, happiness, and productivity using metrics derived from OKRs. This allows managers to understand and meet employees' needs more efficiently than extensive 1:1 meetings, which may be too costly to scale.
Business models and strategies can also benefit from increased use of data.
By linking strategic options to lightweight market and value proposition tests and OKRs, companies can test their big-bet visions for feasibility more quickly and uniformly than before.
For example, major firms are using Kickstarter, Producthunt, and other crowdsourcing platforms to test the viability of new products and features. Organizations as large and traditional as automakers are using startup-style pre-ordering to more accurately predict (and finance) manufacturing needs.
To get started, map your environment and identify decisions that aren't going as well as they could.
Then practice the steps of defining, discovering, hypothesizing, testing, learning, and acting. When you enter into that virtuous cycle of using data to inform decisions, you have the ability to couple both quantitative and qualitative data to make better, but still human, decisions in your organization.