Top Risks of AI Medicare Coding Audits

New technologies in the healthcare revenue cycle are often exciting, but as many healthcare leaders have learned, it's important to understand the risks of any new type of solution before implementing it. This is especially true for medical coding services, and when considering artificial intelligence (AI) as a method of conducting medical coding audits. For all its promise, AI for medical coding audits has a range of potential downsides.

How AI Is Used in Medical Coding Audits

Medical coding audits offer multiple benefits to providers. Because they can be highly intensive and require specialized training, many healthcare leaders turn to options like medical billing and coding companies and AI. These provide support in reviewing medical documentation, clinical notes, procedures, and diagnoses. AI is especially useful in handling data sets that are too large for human auditing teams to review.
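As a rough illustration of that screening use case, the sketch below shows how an AI-assisted pass might score a large batch of claims and route only the highest-risk ones to scarce human reviewers. Every name here (Claim, score_claim, the 0.8 threshold, the toy heuristic) is a hypothetical stand-in for illustration only, not a reference to any specific vendor's product or method.

```python
# Minimal sketch: screen a claims data set too large for a human team.
# All names and the scoring heuristic are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    cpt_codes: list[str]      # billed procedure codes
    icd10_codes: list[str]    # supporting diagnosis codes
    billed_amount: float

def score_claim(claim: Claim) -> float:
    """Stand-in for a trained model returning an error/fraud risk score in [0, 1].

    A real system would weigh documentation, history, payer rules, and more."""
    # Toy heuristic: high-dollar claims with thin diagnosis support look riskier.
    diagnosis_support = len(claim.icd10_codes) / max(len(claim.cpt_codes), 1)
    dollar_factor = min(1.0, claim.billed_amount / 10_000)
    return dollar_factor * (1.0 if diagnosis_support < 1 else 0.3)

def screen_for_audit(claims: list[Claim], threshold: float = 0.8) -> list[Claim]:
    """Return only the claims risky enough to justify human review time."""
    return [c for c in claims if score_claim(c) >= threshold]
```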
AI has been particularly attractive for Medicare audits because of the high volume of fraud and errors that exists within the large amount of data that Medicare patients and services generate. Leaders are looking for speed, accuracy, and support in identifying errors or audit opportunities to improve processes. While AI holds the potential to accomplish this, there are multiple high-risk pitfalls for revenue cycle leaders looking for alternatives to manual coding.

Risks in Using AI for Medical Coding Audit Services

As you compare options in internal manual coding, working with medical coding and billing companies, and AI, consider the following risks [1].

AI Can Be Biased

While AI might seem more objective than human manual coding or working with medical coding and billing companies, AI models are also trained by humans. These humans can have the same unintended biases, and when using AI, those biases often go unchecked. An AI system could have been trained at a provider that discriminated against certain groups or that was prone to certain errors, meaning any biases it learned will quickly spread across your claims and audit results.

Anyone implementing an AI solution also faces the issue of "automation bias". Many people have a tendency to put too much faith in automation once it's up and running. This means there is a risk that your team will trust a solution too much when they should instead keep a watchful eye on results and understand that audit processes still need human supervision. If the members of your team who are responsible for overseeing audits do not have the education and training needed to supervise Medicare audit results and outputs, they will be more vulnerable to these risks.

You Can Lose Accuracy

The healthcare industry has already recognized that AI auditing systems can have notable faults. In response, some parties have suggested a medical algorithmic audit framework, intended to help the auditor understand the errors that might occur. Anyone using AI for their audit coding will need a deep understanding of the weaknesses of their system. They will need to create a profile of the potential failures and set up systems to address the impacts on their audit results, revenue cycle outcomes, and their patient population. A sketch of what such a failure profile could look like follows below.

As things stand today, there have been calls for the establishment of an international regulatory agency to address issues like these, which impact medical coding audits. There have also been calls on the government to create an agency that will certify the safety of these systems and address their legal liability. So today, in healthcare, and especially in Medicare coding, there is no universally accepted or clear methodology for evaluating the benefits and risks of AI. Anyone implementing one of these systems, especially for coding audits, is doing so at their own risk, with minimal support, for what is still essentially an experimental technology.
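One hedged way to act on both points above, keeping human supervisors in the loop and building a profile of potential failures, is to routinely re-audit a random sample of AI decisions with human coders and tally where the system disagrees with them. The data shapes below (claim IDs mapped to assigned codes) are assumptions for illustration, not part of any published audit framework.

```python
# Sketch: sample AI-audited claims for human re-review, then count
# AI/human disagreements per code to build a failure profile.
import random
from collections import Counter

def sample_for_human_review(ai_findings: dict[str, str], rate: float = 0.05,
                            seed: int = 42) -> list[str]:
    """Pick a random slice of AI-audited claim IDs for independent human review."""
    rng = random.Random(seed)
    claim_ids = list(ai_findings)
    sample_size = min(len(claim_ids), max(1, int(len(claim_ids) * rate)))
    return rng.sample(claim_ids, sample_size)

def failure_profile(ai_findings: dict[str, str],
                    human_findings: dict[str, str]) -> Counter:
    """Count AI/human disagreements per AI-assigned code.

    Both arguments map claim_id -> assigned code; only claims reviewed
    by both sides contribute to the profile."""
    disagreements = Counter()
    for claim_id, human_code in human_findings.items():
        ai_code = ai_findings.get(claim_id)
        if ai_code is not None and ai_code != human_code:
            disagreements[ai_code] += 1  # which AI-assigned codes go wrong most often
    return disagreements
```

Tracked over time, a rising disagreement count for a particular code is an early signal that the system needs retraining or closer supervision in that area.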
You Run the Risk of Dehumanization

Much of the general AI discussion has centered on the risk of dehumanization. This concern is especially pertinent in healthcare and in medical coding services. The audit process isn't just a mechanical one. Medicare audits are an opportunity for providers to learn from auditors and implement tactics for reducing errors, improving billing and coding practices, and supporting a healthier revenue cycle. This is a very human interaction. With the use of AI, this interaction can be dehumanized in a few ways, including oversimplifying the learning and feedback process and disrupting the natural learning exchange that happens with both internal and external audits.

This risk is significant, especially as the healthcare industry as a whole is working to be more mindful of the human experience through initiatives around social determinants of health, addressing provider burnout, and improving the patient experience, as well as understanding how the revenue cycle impacts the patient experience itself through issues like financial toxicity [2]. Coding audits are an opportunity for providers to share knowledge internally and even learn from partners like medical billing and coding companies. It is important that the short-term benefits of AI aren't allowed to interrupt this potential.

Your Legal Risks Are Uncharted Territory

Consider the fact that it's incredibly difficult, if not impossible, for humans to check the audit work AI does. It doesn't keep records of the factors considered in your coding, what it used to make decisions, or how those decisions were made. This is a recipe for legal trouble. RACmonitor has already pointed out that AI cases will differ significantly from cases that involve a human [1], because it's not possible to have a hearing where an auditor can testify or offer additional details about their decision. This raises the question: if an algorithm is responsible for an outcome, what humans or entities are actually responsible? The company paying for the services? The vendor? The people responsible for training? AI in medical coding audits opens up a hornet's nest of potential legal issues that the industry simply doesn't yet understand.
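One partial mitigation for the traceability gap described above is to wrap every automated audit decision in an append-only log that captures what the system saw and what it decided, so a human can later reconstruct the outcome. The sketch below is an illustrative assumption, not an established legal or regulatory standard, and the field names are hypothetical.

```python
# Sketch: append one JSON record per automated audit decision, so inputs,
# model version, score, and outcome can be reconstructed after the fact.
import json
import time
from typing import Any

def log_audit_decision(log_path: str, claim_id: str, model_version: str,
                       inputs: dict[str, Any], risk_score: float,
                       decision: str) -> None:
    """Append a single traceability record for one automated decision."""
    record = {
        "timestamp": time.time(),
        "claim_id": claim_id,
        "model_version": model_version,   # which model produced this decision
        "inputs": inputs,                 # the exact fields the model was given
        "risk_score": risk_score,
        "decision": decision,             # e.g. "flagged_for_human_review"
    }
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
```

A log like this does not resolve the liability question, but it at least gives the provider something concrete to point to if a decision is ever challenged.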
The Cost of Reversing Course on Medical Coding Services Can Be High

When most revenue cycle departments implement an AI solution, they tend to shuffle things around, moving staff out of certain responsibilities, cutting off vendor relationships, and potentially even laying off highly knowledgeable and qualified workers. This can become a significant problem if anything doesn't work out with your use of AI in medical coding audits. You might be faced with rebuilding departments and relationships from scratch, on top of having fallen behind on the training and knowledge required to keep up with fast-changing coding standards and requirements from payers and government entities.

One day, AI might be a safe and reliable addition to most revenue cycle and medical coding audit initiatives. As things stand now, most providers should rely most heavily on internal coding resources and relationships with medical billing and coding companies to navigate the challenges of medical coding audits. To learn more about how you can improve your medical coding services relationships today, start here.

References

[1] E. M. Roche, "Defenses Against AI-Based Medicare Audits: Part II," RACmonitor, 24 April 2024. Available: https://racmonitor.medlearn.com/defenses-against-ai-based-medicare-audits-part-ii/.

[2] C. Ike, "New Report Shows Hospitals Bear Financial Toxicity Burden," Association of Cancer Care Centers, 8 September 2022. Available: https://www.accc-cancer.org/acccbuzz/blog-post-template/accc-buzz/2022/09/08/new-report-shows-hospitals-bear-financial-toxicity-burden.

Original Source - https://www.3genconsulting.com/top-risks-of-ai-medicare-coding-audits/