I am a data-focused person: I use data to develop strategies, guide site plans, plan budgets, and generally make decisions. What people do not see are the many, many times I decide not to use data to back up a particular point. The editing process is tricky that way; it is impossible to see something that is not there. And editing is exactly what it is: if something does not add value, or actively detracts from it, it should be edited out.
One of the most difficult conversations I regularly have with colleagues is explaining that the data they want to use to justify their position is not helping their case. They look at me as if I have grown a second head when I tell them to exclude it. The world has seemingly moved to demand that all decisions be data-driven, and that anything not data-driven cannot be appropriate. That is a fallacy we need to guard against.
The single most common occasion for this conversation is when someone is looking at the occupancy of a site. Most people cannot see why we would ever choose not to leverage our badge or sensor data to support a decision. If we have data, it must always be pertinent. To this I reply: except when it is not.
- There is not enough data to make a decision. If a site has been live for less than six months, there is probably not enough history to accurately and completely understand how people are using the space. Business activity is usually seasonal, following a peak-and-valley cycle that may not be regular or predictable.
- The site is too small for the data to be relevant. If a site has fewer than 20 people or desks, the data may not be useful, because a change in behavior from just a couple of people can materially change what we see. A new client could come on board that requires all hands on deck for a month, suddenly changing things. In a 100-desk office, this can be absorbed through the normal allocation of space. In a 10-person office, it could break the scenario.
- The data itself is not reliable. Sometimes data is simply wrong. We may or may not be able to tell exactly what is wrong with it, but it is telling us something that contradicts our understanding of the situation. I have encountered scenarios where day cleaners tripped sensors, making office occupancy appear much higher than it actually was. I have also seen offices where front doors were propped open against policy, meaning that badge counts were lower than they should have been.
- Other factors outside of historical occupancy are driving the decision. The most common is a planned business change that alters the nature of the site. If a site is going to double in size or be cut in half, the historic numbers may not apply, depending on how the change will happen. If an M&A is planned, historic data covering only half of the future occupants may paint a misleading picture of the overall situation.
- The audience will reject the data because they cannot understand it, or will ask you to analyze it in a way that is not appropriate for the situation. At the end of the day, the analysis is for an intended audience, and sometimes we can predict how they will respond to the information they are being given. It is never wise to take information into a meeting knowing it will accomplish the opposite of the intended effect.
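The small-site point above comes down to simple arithmetic: the same absolute change in behavior moves the occupancy rate far more in a small office than in a large one. A minimal sketch, using purely illustrative numbers (the function name and figures are mine, not from any real site):

```python
def occupancy_shift(desks: int, baseline_occupants: int, change: int) -> float:
    """Percentage-point swing in occupancy when `change` people alter their behavior."""
    before = baseline_occupants / desks * 100
    after = (baseline_occupants + change) / desks * 100
    return after - before

# Two extra people showing up every day:
print(occupancy_shift(desks=10, baseline_occupants=6, change=2))    # 20-point swing
print(occupancy_shift(desks=100, baseline_occupants=60, change=2))  # 2-point swing
```

Both offices started at 60% occupancy and gained the same two people, but the 10-desk site jumps 20 percentage points while the 100-desk site barely moves. This is why trend data from very small sites can whipsaw without reflecting any durable change.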
These are just a few examples of when it may not be appropriate to use data for a traditionally data-driven topic. While it may seem like an easy out to simply throw the data in anyway with caveats, data is not something that stands on its own. As the quote popularized by Mark Twain goes: "Lies, damned lies, and statistics."
The real question is what story you are looking to tell. Data is an input to inform the story, data will likely be a key component to telling the story, and data may be the impetus for why the story is being told. But the data is not, in itself, the story. If you think your story is simply the data, you have not thought it through completely.