Are You Buried in Big Data? Move Toward the Light – Toward Transparency and Accountability

Kim Jinnett

I’ve been thinking a lot lately about the promises and pitfalls of Artificial Intelligence (AI) in the context of big data. As a data scientist who works with employers, providers, and others trying to improve employee health and well-being, I’ve worked with my fair share of “big data” and have heard myriad complaints from human resource staff, medical directors, and others about being “buried” in data. In the health and wellness field, for example, having access to “big data” typically means there are many different types of data captured around employee health, program participation, health care utilization, and use of sick leave and other health-related programs and policies.

There is an unrealized promise that having access to these sources in combination, loosely termed “big data”, will allow a better understanding of the health needs of a workforce and of how best to manage the costs and other outcomes associated with those needs. For many practitioners, including health care and human resource analysts, the development and interpretation of systems relying on big data have consumed a growing portion of their daily work. And yet many, if not most, remain disappointed in the results of all these efforts. According to the Integrated Benefits Institute, employers most want to understand the best ways to measure the impact of the programs and interventions they introduce. In a word, they want accountability.

Sometimes the outcome measures might capture how effective these programs are in generating healthy results for workers, and sometimes the desired outcomes might be cast more narrowly around reducing an employer’s medical spend. Being accountable means being transparent about the outcomes you seek to generate and measuring those outcomes over time to assess whether they are changing in the direction you expect.
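As a minimal sketch of that discipline (the quarterly figures below are hypothetical), accountability can be as simple as declaring the outcome and its expected direction up front, then checking the measured trend against that declaration:

```python
# Minimal sketch of outcome accountability: declare the outcome and its
# expected direction up front, then check whether the trend moves that way.
# The quarterly figures below are hypothetical.
sick_days_per_employee = {
    "2018Q1": 2.4, "2018Q2": 2.5, "2018Q3": 2.3,
    "2018Q4": 2.2, "2019Q1": 2.1, "2019Q2": 2.0,
}
expected_direction = "down"  # stated before looking at the results

quarters = sorted(sick_days_per_employee)
first = sick_days_per_employee[quarters[0]]
last = sick_days_per_employee[quarters[-1]]

print(f"{quarters[0]} -> {quarters[-1]}: {first} -> {last} sick days/employee")
moved_down = last < first
if (expected_direction == "down") == moved_down:
    print("Outcome is moving in the expected direction.")
else:
    print("Outcome is not moving as expected; time to revisit the program.")
```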

Right now, different internal departments and external third-party contractors process streams of data that are generally collected in isolation from other streams and for different purposes. I recently presented on the Opportunities and Challenges in Workplace Data Science at the 2019 EPI Lifestyle Scientific Session. Part of my talk focused on the specific data types that are collected in the health and wellness field and the biases introduced, and then propagated, by the analysis and reporting of these observational data.

Inherent bias in big data

In brief, I discussed five general buckets of data types in order to illustrate some of the inherent biases in these observational data sources in the health and wellness space: 1) Administrative, 2) Survey, 3) Clinical, 4) Social/Mobile, and 5) Qualitative. For example, administrative data are collected primarily for billing and compliance purposes and include medical and pharmacy claims data and sick leave records. Inherently, claims data and leave records represent utilizers: those using health care services or treatment, or using their sick leave for time off from work. These administrative data exclude individuals who may have a need but are not utilizing services because of unmeasured barriers; perhaps their deductible is so high that it precludes utilization in practice, or perhaps they are part-time workers who are not eligible for sick leave benefits.
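To make that selection bias concrete, here is a minimal simulation sketch, with all numbers hypothetical and chosen only to illustrate the mechanism: when need is estimated from claims alone, workers with unmet needs who never file a claim simply vanish from the picture.

```python
# Minimal sketch of selection bias in administrative (claims) data.
# All numbers are hypothetical, chosen only to illustrate the mechanism.
import random

random.seed(42)

workforce = []
for _ in range(10_000):
    has_need = random.random() < 0.30  # true underlying health need
    # Barriers (high deductibles, benefit ineligibility) keep some workers
    # with a need from ever appearing in the claims data.
    files_claim = has_need and random.random() < 0.60
    workforce.append((has_need, files_claim))

true_need_rate = sum(need for need, _ in workforce) / len(workforce)
claims_visible_rate = sum(claim for _, claim in workforce) / len(workforce)

print(f"True need in the workforce:        {true_need_rate:.0%}")
print(f"Need visible in claims data alone: {claims_visible_rate:.0%}")
# The gap between the two is the unmet need that claims-only analyses miss.
```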

If we are to understand the health and wellness of the working population, we need to be transparent about what the data streams represent. Each of these data buckets (and this is a small sampling of types, as depicted in the chart) has inherent biases. How might these types, or additional data, better inform the fundamental question employers have about what works, for whom, when, and why? This is the type of transparency and accountability that we need in the health and wellness space.

Artificial Intelligence, machine learning, and deep learning (AI/ML/DL) all hold great promise in helping humans process information faster to develop potentially improved ways of solving some of society’s most intractable problems. I’ve written elsewhere about the importance of data quality, so I won’t belabor that point here. I’ll just repeat that the adage “garbage in, garbage out” applies to machine decision-making just as it does to human decision-making. Data quality matters. We must understand what these data represent. What biases already exist in these data? Do these data reflect a representative sample of employees? If not, it will be hard to say anything with confidence about the overall workforce of a company, but it may be possible to develop a bias factor. For example, if the resulting sample from an employee survey represents X% of the employee population, and over- or under-represents part-timers and employees with different socio-economic backgrounds, these bias factors could be introduced into the AI/ML/DL system, or at least considered when evidence is reviewed. This is a type of accountability.
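As a minimal sketch of what such a bias factor might look like, assuming the population composition is known from HR records (the counts below are hypothetical): compare each group’s share of respondents to its share of the population, and derive a weight that records how many employees each respondent stands for.

```python
# Minimal sketch of computing bias/weighting factors for a survey sample.
# All counts below are hypothetical illustrations, not real data.

# Known composition of the full employee population (e.g., from HR records)
population = {"full_time": 800, "part_time": 200}

# Who actually responded to the survey
respondents = {"full_time": 240, "part_time": 20}

total_pop = sum(population.values())
total_resp = sum(respondents.values())
print(f"Overall response rate: {total_resp / total_pop:.0%}")

# Post-stratification weight: how many employees each respondent "stands for"
weights = {g: population[g] / respondents[g] for g in population}

for group, w in weights.items():
    resp_share = respondents[group] / total_resp
    pop_share = population[group] / total_pop
    print(f"{group}: {resp_share:.0%} of sample vs {pop_share:.0%} of "
          f"population; weight = {w:.1f}")
```

Here part-timers make up 20% of the workforce but under 8% of respondents, so their responses would be up-weighted, and that known imbalance would be documented alongside any conclusions.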

With accountability, human decision-makers would not rely on machines to make decisions for them; they would try to ensure the data provided to the machines are high quality (clean, comprehensive, representative, etc.) and record any known biases. On the topic of bias in decision-making, I ran across an interesting article written a couple of years ago by The Centre for Public Scrutiny in London, in which Ed Hammond explains, “Our interpretation of evidence is, by definition, subjective – it is coloured by our worldview, by our personal and political preferences. Assuming that we can somehow divorce ourselves and our biases from our decision-making duties is dangerous.”

I would further argue that any decision, whether made by humans or machines, will be biased, since all possible data sources and learning algorithms can’t possibly be included. But we can hold ourselves accountable for acknowledging those biases and understanding how they affect our selection of problems to solve and solutions to consider. AI/ML/DL opens a powerful inductive space for researchers and practitioners, a space that relies upon grounded theory development. “In science, there is a constant interplay between inductive inference (based on observations) and deductive inference (based on theory).” Grounded Theory (GT) development is a way to construct meaning from data, a way to induce or generate understanding. There are many schools of thought around GT methods, but for this short piece the GT approach should be viewed as an inductive method based on observational data, whether qualitative or quantitative, rather than a deductive method based on theory. In this way, GT methods have a lot in common with machine learning, and together these approaches may help us develop novel ways of improving worker health and wellness.
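As one hedged illustration of that inductive pairing, using synthetic data and scikit-learn’s KMeans (one of many possible methods): let groupings emerge from employee data without pre-imposed categories, leaving the grounded-theory work of interpreting and naming the clusters to the analyst.

```python
# Minimal sketch of an inductive, grounded-theory-like analysis:
# let groupings emerge from the data rather than imposing categories up front.
# Synthetic data; KMeans is illustrative, not the only suitable method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical employee features:
# [self-rated health (1-5), sick days per year, wellness sessions attended]
employees = rng.normal(loc=[3.5, 4.0, 2.0], scale=[0.8, 3.0, 2.0], size=(500, 3))

# Standardize so no single feature dominates the distance metric
X = StandardScaler().fit_transform(employees)

# The algorithm induces clusters; the analyst then interprets and names them,
# which is where the grounded-theory work of constructing meaning happens.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label in range(3):
    members = employees[kmeans.labels_ == label]
    print(f"Cluster {label}: n={len(members)}, "
          f"mean self-rated health={members[:, 0].mean():.1f}, "
          f"mean sick days={members[:, 1].mean():.1f}")
```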

Canonico and colleagues propose that “Technology doesn’t just improve our lives: it can improve the very process we use to discover new ways to understand ourselves. Grounded theory enables us to approach large social problems without presuppositions, and construct models and theories from the data itself. Machine learning and grounded theory are a perfect match because they both use induction to reason about large data sets and complex problems.” They further note that “The key lies in making state-of-the-art computational methods accessible and human-centered.”

At the ICHW we all have a human factors orientation, and we will continue to design and study a wide variety of interventions and solutions for healthy workplaces and improved employee outcomes, with the person at the center of discovery and application. For workers, employers, and others with an interest in health and wellness, we continually ask what works, for whom, when, and why, and we transparently share the results.