As stewards of healthcare, health plans are responsible for managing the care of their members. This includes working with providers to capture member conditions accurately and comprehensively through medical charts and coding. Accurate documentation improves member outcomes and optimizes the plan's risk adjustment revenue, which ultimately reduces member costs.
The scope of a prospective risk adjustment program is to account for historical member conditions and to identify and close gaps on suspected conditions. Many plans attempt to close as many prospective gaps in a year as they can, and whatever they cannot close is sent to retrospective programs. This is an unsophisticated, costly approach that tends to over-suspect and to send providers weak evidence, which diminishes provider trust and engagement.
Under CMS guidelines, the prospective format carries very specific language requirements for how providers document member conditions. Plans cannot go back in time and change how their providers coded and documented a condition, which makes retrospective programs administratively heavy.
AI-driven machine learning models offer a higher level of sophistication by scanning the clinical evidence and assigning a probability score to each piece of evidence supporting a suspected member condition. This saves administrative time and gives providers confidence that the data sent via CDI alerts is compelling and indicative of a condition. When providers trust the data, their participation in prospective programs increases, leading to more gaps closed.
Tune in to this episode to learn more about AI suspecting logic and prospective programs.
About Our Guest
Elizabeth Burreson is an expert in risk adjustment analytics technology with 20 years of experience in IT data management, including managing product portfolios and backlogs.