AI Bias

The results produced by an AI model may appear to be impartial or objective. However, those results are entirely shaped by the data used to train the model. This means that the people collecting the training data may inadvertently transfer their personal biases to the dataset.

As to where bias comes from, Jess Peck of Search Engine Land wrote, “For example, if you trained a language model on average reviews of the movie ‘Waterworld,’ you would have a result that is good at writing (or understanding) reviews of the movie ‘Waterworld.’ If you trained it on the two positive reviews that I did of the movie ‘Waterworld,’ it would only understand those positive reviews.” Bias comes from the choices humans make about what data a model is trained on, and humans have biases.

While AI bias might only cause inefficiencies in an industry such as manufacturing, it can have dangerous consequences in the healthcare sector. LLM datasets need to be audited to root out bias, or at least reduce it.
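
What might auditing a dataset for bias look like in practice? Here is a minimal sketch, with group labels and percentages invented purely for illustration: one simple check is to compare how often each demographic group appears in the training data against a reference population.

```python
# Illustrative sketch of a dataset "representation audit" (synthetic data).
from collections import Counter

def representation_audit(records, group_key, reference_shares):
    """Compare each group's share of the dataset to a reference share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "dataset_share": round(share, 3),
            "reference_share": ref_share,
            "gap": round(share - ref_share, 3),
        }
    return report

# Hypothetical training records and census-style reference shares.
records = [{"group": "A"}] * 800 + [{"group": "B"}] * 150 + [{"group": "C"}] * 50
reference = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, row in representation_audit(records, "group", reference).items():
    print(group, row)
```

A real audit would go much further, but even a comparison this simple can flag groups that are under-represented before a model is ever trained.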

In 2019, Science magazine reported that a widely used healthcare algorithm was racially biased. The article said, “Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.”
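
The mechanism the Science article describes can be illustrated with a toy simulation. All the numbers below are synthetic, and the 30 per cent under-spending factor is an assumption made only for illustration; the point is that a model trained to predict cost will assign lower risk scores to a group whose care is under-funded, even when true need is identical.

```python
# Toy simulation (synthetic data) of cost-as-a-proxy-for-need bias.
import random
import statistics

random.seed(0)

def simulate_patient(group):
    need = random.uniform(0, 10)                       # true underlying health need
    spend_factor = 1.0 if group == "White" else 0.7    # assumed under-spending, for illustration
    cost = need * spend_factor + random.gauss(0, 0.5)  # observed healthcare spending
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("White", "Black") for _ in range(5000)]

# The "risk score" here stands in for a model trained to predict cost:
# patients predicted to cost more are flagged for extra care.
high_need = [p for p in patients if p["need"] > 7]
for group in ("White", "Black"):
    scores = [p["cost"] for p in high_need if p["group"] == group]
    print(group, "mean risk score at the same high level of need:",
          round(statistics.mean(scores), 2))
# The under-funded group scores lower despite identical need, so fewer of
# its members would be flagged for additional care.
```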

Bias can also be related to socioeconomic status, ethnicity, sexual orientation and gender. For example, a study by the Commonwealth Fund of New York showed that people with low incomes have higher health risks than people with higher incomes. So if data is collected only from clinics or organizations in wealthier areas, the AI model will be socioeconomically biased.
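
The same kind of distortion appears when the training sample itself is skewed. The sketch below, again with invented numbers and an assumed income-to-risk relationship, fits a trivial “model” (just an average baseline risk) on patients drawn only from wealthier-area clinics and compares it with the actual risk among lower-income patients.

```python
# Illustrative sketch of socioeconomic sampling bias (synthetic data).
import random
import statistics

random.seed(1)

def patient(income_level):
    # Assumed relationship, for illustration only: lower income -> higher baseline risk.
    base_risk = 0.25 if income_level == "low" else 0.10
    return {"income": income_level, "risk": base_risk + random.gauss(0, 0.02)}

population = ([patient("low") for _ in range(5000)] +
              [patient("high") for _ in range(5000)])

# Training data drawn only from clinics in wealthier areas.
training = [p for p in population if p["income"] == "high"]

model_baseline = statistics.mean(p["risk"] for p in training)
low_income_actual = statistics.mean(p["risk"] for p in population
                                    if p["income"] == "low")

print("baseline risk the model learns:", round(model_baseline, 3))
print("actual risk among low-income patients:", round(low_income_actual, 3))
```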

Jeremy VanderKnyff, PhD, Chief Integration and Informatics Officer for Proactive MD, writes, “There are few facets of healthcare that are unaffected by the unconscious biases of AI, including risk identification, healthcare claims, and data security. Research has found that some algorithms with these technologies have severely underestimated the risk for patients. This flaw exacerbates the already adverse problem that underserved populations face: underdiagnosis. Historically in healthcare, this disproportionately affects people of color, playing out in the form of back pain in black women and prostate cancer in black men.”

“Beyond implications for how patients interact with the health care system, this shift may have detrimental consequences for patient outcomes and health equity if we fail to examine the biases in our existing systems and collection methods of clinical and research data used to build AI/ML decision-making models. Algorithmic bias: there’s a true potential to perpetuate biases in health care if algorithms are built with flawed data. If the data underlying clinical decision-making algorithms inherits the chronicled bias experienced by certain patient populations, the industry runs the risk of carrying over this bias in patient care. Mitigating the risk of algorithmic bias means addressing racial bias at the bedside and increasing diversity in clinical research.”

The challenges of using OpenAI’s ChatGPT in medicine are numerous. The most serious is that, while its answers can sound correct and authoritative, even citing numerous references, it is not always right. The underlying data can be biased, or just plain wrong. ChatGPT is built on a large language model (LLM), and its training data only extends to 2021. It doesn’t search the internet, and you cannot direct it to use specific sources, such as restricting it to peer-reviewed journals. So it’s really “physician beware.”

Many vendors of commercially available AI systems do not disclose how their products were trained or the gender, age, and racial make-up of the testing data. In many cases, it is also unclear whether the data they employ will map to those routinely collected by health care providers.

To reduce or minimize bias in AI systems, there are no quick fixes, but consultants such as McKinsey have published high-level recommendations outlining best practices for AI bias minimization.

For more information on Artificial Intelligence, please go to:

2Ascribe Inc. is a medical and dental transcription services agency located in Toronto, Ontario Canada, providing medical transcription services to physicians, specialists (including psychiatry, pain and IMEs), dentists, dental specialties, clinics and other healthcare providers across Canada. Our medical and dental transcriptionists take pride in the quality of your transcribed documents. WEBshuttle is our client interface portal for document management. 2Ascribe continues to implement and develop technology to assist and improve the transcription process for physicians, dentists and other healthcare providers, including AUTOfax. AUTOfax works within WEBshuttle to automatically send faxes to referring physicians and dentists when a document is e-signed by the healthcare professional. As a service to our clients and the healthcare industry, 2Ascribe offers articles of interest to physicians, dentists and other healthcare professionals, medical transcriptionists, dental transcriptionists and office staff, as well as of general interest. Additional articles may be found at http://www.2ascribe.com.  For more information on Canadian transcription services, dental transcription, medical transcription work or dictation options, please contact us at info@2ascribe.com.
