Clinically-driven Algorithms: Their Impact on Risk Adjustment and Quality Measurement

By Matt Yuill, M.D.

February 01, 2017 - How can providers and payers gain valuable insights to reduce risk and increase quality of care? Countless organizations have learned the hard way how difficult it can be to glean new insights from the large amounts of data accumulated on members. But improving outcomes is possible with the right analytics technology and processes in place. This blog will address six key tenets for assessing healthcare data more effectively:

  • Ask specific questions of data to get the best results for each unique use case
  • Combine clinical, business, IT and data science for best results
  • Fine-tune algorithms based on use cases
  • Share/make decisions based on the limits and caveats of algorithms
  • Adapt to ever-changing quality and risk-adjustment requirements
  • Combine risk and quality whenever possible to reduce interruption to providers

Ask specific questions of your data

Bernard Marr explains in “Where Big Data Projects Fail” on Forbes.com that many data projects are unsuccessful because they do not begin with clear business objectives and a clear definition of the problem the organization is trying to solve. Applied to healthcare, broad questions such as “How do I improve risk and quality scores?” are less effective than more specific targets such as “How can I use analytics to better identify patients/members with Type 2 Diabetes, not already coded as such, to improve quality of care?”

Combine clinical, business, IT and data science

Even with clear and specific business objectives, a secondary challenge often arises: a primarily statistically-driven approach to data analysis will yield a slew of correlations that seem important but, in reality, amount to little actionable information. Even when asking targeted questions, the value of the analytic results cannot be harnessed if they do not translate into reliable, actionable information.

The key to obtaining valuable information from big data is getting clinicians and data scientists working together to ask the important questions. A clinical team that is well-informed on the business use case, coupled with a strong statistical analysis team (or technology provider), can yield meaningful results that neither discipline can achieve alone. Clinicians can help create search terms, ask the right questions, and understand the data types, while statisticians can surface correlations and help clinicians understand the limitations and caveats of whatever statistical approach is taken.

The most effective risk adjustment tools combine a clinically-oriented and statistically-driven approach to prediction and modeling. Consider the example of identifying members with Type 2 Diabetes who are not already identified as such in claims data. Building the analysis might combine a statistician's insight that metformin prescriptions correlate highly with Type 2 Diabetes with a clinician's feedback to exclude younger patients whose metformin may be treating PCOS or prediabetes, reducing false positives.
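To make this concrete, here is a minimal sketch of such a rule in Python, assuming simplified member records with medication and diagnosis lists; the field names, code sets, and age cutoff are illustrative, not any vendor's actual logic.

```python
# Minimal sketch: flag members with possibly uncoded Type 2 Diabetes.
# Field names, code sets, and the age cutoff are illustrative assumptions.

T2D_CODES = ("E11",)          # ICD-10 family for Type 2 Diabetes
EXCLUSION_CODES = {"E28.2",   # polycystic ovarian syndrome (PCOS)
                   "R73.03"}  # prediabetes

def possible_uncoded_t2d(member):
    """True when a member takes metformin but carries no T2D code,
    excluding likely PCOS/prediabetes use in younger patients."""
    on_metformin = any("metformin" in rx.lower() for rx in member["medications"])
    has_t2d_code = any(dx.startswith(T2D_CODES) for dx in member["diagnoses"])
    likely_other_use = (member["age"] < 40 and
                        any(dx in EXCLUSION_CODES for dx in member["diagnoses"]))
    return on_metformin and not has_t2d_code and not likely_other_use

members = [
    {"id": "A1", "age": 62, "medications": ["Metformin 500mg"], "diagnoses": ["I10"]},
    {"id": "B2", "age": 28, "medications": ["Metformin 500mg"], "diagnoses": ["E28.2"]},
]
print([m["id"] for m in members if possible_uncoded_t2d(m)])  # ['A1']
```

The clinician-supplied exclusions are what keep a statistically strong signal (metformin) from generating false positives in the wrong population.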

Fine-tune for use cases

As shown above, effective predictive algorithms often require a tradeoff between sensitivity and specificity. Higher-certainty predictions may miss people, while broader predictions may capture false positives: people who do not have the predicted event or condition. The business use case must be taken into consideration when deciding how finely tuned the algorithms should be. For example, if the cost of diabetes and prediabetes lifestyle education is low and you want to capture as many people as possible, the algorithms can be tuned for high sensitivity to yield the greatest number of hits. Conversely, if the use case is to identify, with greater certainty, a limited number of people who have multiple missing Hierarchical Condition Categories (HCCs), such as Diabetes and other conditions, and potential care gaps, then the algorithms should be tuned more finely for specificity.
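One simple way to express that tradeoff is a use-case-specific decision threshold on a model's predicted probabilities. The sketch below assumes hypothetical risk scores and illustrative cutoffs; it is not a description of any particular product's tuning.

```python
# Minimal sketch: tune the decision threshold to the business use case.
# Risk scores would come from a real model; the values and cutoffs here
# are illustrative assumptions.

predicted_risk = {"A1": 0.92, "B2": 0.55, "C3": 0.31, "D4": 0.12}

THRESHOLDS = {
    "lifestyle_education_outreach": 0.25,  # low-cost program: favor sensitivity
    "hcc_chart_review": 0.85,              # costly review: favor specificity
}

def select_members(risks, use_case):
    """Return member IDs whose predicted risk clears the use-case cutoff."""
    cutoff = THRESHOLDS[use_case]
    return [member_id for member_id, risk in risks.items() if risk >= cutoff]

print(select_members(predicted_risk, "lifestyle_education_outreach"))  # ['A1', 'B2', 'C3']
print(select_members(predicted_risk, "hcc_chart_review"))              # ['A1']
```

The same model serves both use cases; only the cutoff, and therefore the balance of sensitivity and specificity, changes.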

Accurately identifying the members described above requires clinical knowledge of how medical conditions behave in the real world. For instance, some HCCs tend to drop off because the condition has clinically resolved, while others are expected to persist because they are chronic diseases. An acute event such as a Deep Vein Thrombosis should not be expected to appear in the following year's claims data, as it may have already resolved, whereas the code for Diabetes or Peripheral Vascular Disease should persist year over year because these are chronic conditions.
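That clinical knowledge can be encoded directly into a gap check. Here is a minimal sketch, assuming a hand-labeled persistence map and hypothetical claims extracts; the condition labels stand in for real HCC categories.

```python
# Minimal sketch: flag chronic HCCs coded last year but missing this year.
# The persistence sets and claims extracts are illustrative assumptions.

PERSISTENT_HCCS = {"Diabetes", "Peripheral Vascular Disease", "COPD"}   # chronic
ACUTE_HCCS = {"Deep Vein Thrombosis"}                                   # may resolve

def missing_persistent_hccs(prior_year, current_year):
    """Chronic conditions coded last year should usually reappear;
    acute events such as a DVT are allowed to drop off."""
    return {hcc for hcc in prior_year
            if hcc in PERSISTENT_HCCS and hcc not in current_year}

prior = {"Diabetes", "Deep Vein Thrombosis"}
current = set()  # hypothetical: nothing coded yet this year
print(missing_persistent_hccs(prior, current))  # {'Diabetes'} -- likely a coding gap
```

Diabetes is flagged as a likely coding gap, while the resolved DVT is not, mirroring how a clinician would read the two conditions.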

Make decisions based on limits of algorithms

Countless people have described analytics and clinical decision support as an extension of a clinician’s capabilities. Each clinical algorithm is designed to look for clusters of information and carries some degree of certainty. Statisticians, technology providers and others involved in analysis should be transparent about the caveats and reliability of the results their algorithms suggest, so users can judge the credibility of the predictions or findings. The degree of confidence gives payers, physicians and other end users a sense of how much to trust the analytics. If a user disagrees with all of the “highly confident” predictions presented, he or she will likely have little trust in the output of the analysis.

Suppose an algorithm predicts that a member has uncaptured diabetes, and the reasoning is that the member filled seven prescriptions for insulin this year. With this reasoning revealed, the clinician or payer can more easily understand why the inference was made. They can also separate mere correlation from true discriminant ability: do the results accurately correlate with the disease, or do they correlate with something else? For example, the “Chem 7” lab test correlates 90 percent with Diabetes, making it appear a useful predictor of the disease. Yet most members get this common test, so it also correlates 70 percent with other conditions such as high blood pressure, or with having no condition at all, and therefore may not be a good indicator after all.
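A transparent output pairs each prediction with its evidence and a confidence level, and a quick lift check can show whether a feature truly discriminates the condition. The sketch below uses made-up values; the data class and the 1.5x lift cutoff are illustrative assumptions.

```python
# Minimal sketch: surface evidence and confidence with each prediction, and
# test whether a feature discriminates the condition or merely co-occurs
# with everything. All numbers below are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Prediction:
    member_id: str
    condition: str
    confidence: float
    evidence: list = field(default_factory=list)  # facts shown to the end user

pred = Prediction("A1", "Type 2 Diabetes (uncaptured)", 0.90,
                  ["7 insulin fills in the last 12 months"])
print(f"{pred.condition} ({pred.confidence:.0%}): " + "; ".join(pred.evidence))

def discriminates(rate_with_condition, rate_without_condition, min_lift=1.5):
    """A feature is useful only if it is markedly more common among members
    with the condition than among those without it."""
    return rate_with_condition / rate_without_condition >= min_lift

print(discriminates(0.90, 0.70))  # False -- a Chem 7 is ordered for nearly everyone
print(discriminates(0.60, 0.05))  # True  -- insulin fills are far more specific
```

Showing the evidence lets the user accept or reject the inference, and the lift check guards against features like the Chem 7 that correlate with utilization rather than with the disease.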

Adapt to changing needs

Providers and payers face many changes in the industry for which data and accurate analysis can be invaluable. Standards for measuring quality of care alone are constantly evolving: the HCC risk model changes every two years, Marketplace quality measures based on the Affordable Care Act may go away if the ACA is repealed, the Physician Quality Reporting System (PQRS) is being phased out, and meaningful use requirements for EMRs continue to change.

Additionally, effective data analysis requires extensive technological effort: transforming data into the correct data model, continually updating information and coordinating numerous back-and-forth transactions. The amount of implementation required makes it advantageous to leverage one solution to address many needs. Payers, providers, coordinators and other users need flexible solutions with the potential to grow and address their evolving needs. Analytics and technology must keep abreast of, and adapt to, these challenges.

Combine risk and quality analytics

Analytics technology should also help providers, payers, care managers and coordinators simultaneously address risk adjustment and quality of care. Technology should incorporate Healthcare Effectiveness Data and Information Set (HEDIS), Medicare Stars, HCC and other measures to enable the provider to understand the big picture. With an integrated analytics solution, providers and care managers can generate all of the information they need in one platform or user experience. This way, closing care gaps, updating risk-adjustment diagnoses and reviewing standard drug safety, as well as drug-to-genome safety, can be seamlessly integrated to address the most relevant and important objectives in one office visit.

Combining this information with a member priority score can help providers and payers decide where to allocate resources, especially when time and finances are limited. ‘How can I be more efficient?’ is a question most providers and payers ask regularly, and it is a criterion that should be factored into any analytic output.
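As one illustration, a priority score might weight open quality measures, suspected risk-adjustment gaps and safety alerts into a single number for ranking outreach. The weights, gap counts and member IDs below are hypothetical, not a published scoring method.

```python
# Minimal sketch: rank members by a combined risk-and-quality priority score.
# The weights and gap counts are illustrative assumptions.

def priority_score(open_hedis_gaps, suspected_hcc_gaps, drug_safety_alerts,
                   w_quality=1.0, w_risk=2.0, w_safety=3.0):
    """Weight safety issues highest, then risk-adjustment gaps, then quality gaps."""
    return (w_quality * open_hedis_gaps
            + w_risk * suspected_hcc_gaps
            + w_safety * drug_safety_alerts)

members = {
    "A1": priority_score(open_hedis_gaps=3, suspected_hcc_gaps=2, drug_safety_alerts=0),
    "B2": priority_score(open_hedis_gaps=1, suspected_hcc_gaps=0, drug_safety_alerts=1),
    "C3": priority_score(open_hedis_gaps=0, suspected_hcc_gaps=0, drug_safety_alerts=0),
}

# Work the highest-priority members first when time and budget are limited.
for member_id, score in sorted(members.items(), key=lambda kv: kv[1], reverse=True):
    print(member_id, score)
```

Because the score draws on both quality gaps and risk-adjustment gaps, a single ranked worklist can drive one office visit or outreach call instead of separate risk and quality interruptions.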

Providers and payers can analyze vast amounts of member data to reduce risk, increase the quality of care and further their organizational goals. The answers on how best to analyze that data lie in understanding the business need and the capabilities and constraints of the data, and in leveraging clinical and analytical knowledge to ask and explore the right questions and arrive at more relevant answers.

At Interpreta we have developed an analytics engine that combines a clinically-oriented and statistically-driven approach to prediction and modeling. This addresses healthcare’s evolving needs and provides actionable insights for comprehensive patient care. We believe that by integrating and interpreting clinical and genomic data continuously, doctors, care managers, coordinators, and payers can access aggregated data needed for improved patient analysis, prioritization, population health management and precision medicine.