Recently, specialists writing in Nature Medicine have warned of a growing number of AI-based health tools that, despite receiving regulatory authorization, may be insufficiently tested in clinical settings. The article in question, titled “Not all AI health tools with regulatory authorization are clinically validated,” shows that even authorized tools can prove to be of little use in clinical practice.
The paper, whose authors include Sammy Chouffani El Fassi and Dr Adonis Abdullah, notes that FDA clearance of an AI application does not necessarily mean the application has undergone meaningful testing of its clinical utility. This poses a real threat to patient safety, especially when heavy reliance is placed on tools that appear effective on paper but fail to deliver accurate or beneficial results in real-world use.
To address this, the authors propose a new validation standard that would make it possible to assess whether regulatory authorization can reasonably be relied on as evidence of clinical efficacy. Without this additional safeguard, they warn, healthcare systems risk adopting AI tools that are unsafe and could contribute to misdiagnosis or incorrect treatment.
The report also criticizes the pace at which AI-based health tools are being developed and approved, calling on regulators, developers, and healthcare providers to strengthen the approval process for these technologies so that only safe and effective health solutions are developed and deployed. This critical discussion serves as a reminder that, however quickly new technologies emerge, patient care must remain at the center of every new development.