New research from Reuters Events Pharma and Within3 revealed that 60% of medical affairs leaders said bias could be an issue when reporting on or sharing the results of insight-gathering activities. What are the causes of – and some potential solutions for – bias in insights reporting?
What can cause bias in insights reporting?
The medical affairs leaders surveyed said bias can occur for many reasons, most of them rooted in human nature:
- Need to complete work against a critical deadline
- Instinct to be diplomatic
- Tendency to emphasize findings that support an accepted narrative rather than share objective signals
These pressures are especially likely to surface when information that departs from the status quo would steer work in a new direction, affecting timelines and budgets.
“There are two places where bias sneaks in,” says Lance Hill, Within3 CEO. “One is when data is gathered.” In this scenario, asking the same group of key opinion leaders (KOLs) the same leading questions will produce a pool of very similar answers. The second place bias can occur, says Hill, is “more insidious.”
“If I have been educating physicians on why one mechanism of action is superior, and I’m looking at a lot of data, it’s human nature to look at those data and try to find things that validate success.”
Solving for bias with AI
One way to overcome human nature? Let technology help with tasks that are particularly susceptible to bias.
“AI is not out to prove something you already know,” explains Hill. “It can surface things that you otherwise wouldn’t have seen.” Hill related a client story in which a medical affairs leader implemented AI-supported insights management to understand why adoption of the company’s flagship product had plateaued despite superior clinical results. “AI unearthed the source of the bias as the presumption of excellence in education, based on a small sample size of the same 12 experts. The bias was they had done a tremendous job at [HCP] education, and the reality was they had not.”
“It took the machine to really show them that.”
Like humans, AI has limitations
Because AI-assisted analysis is less prone to these human biases, it can surface knowledge gaps more clearly or more quickly than human analysis might. What it cannot do is fix systemic issues in how data is generated.
“If we’re still enrolling non-diverse populations in studies, AI analysis is not going to help,” says Hill. Even here, though, AI may be able to help solve a tricky problem. “There are a lot of companies focusing on how to use these tools to optimize diversity within clinical programs, and how to accelerate the outcomes.”
Using the right AI and working with data scientists can also help ensure that AI does not introduce or amplify bias. “We know that whether it is social media or any data, especially historical data, generative AI can amplify bias,” says Neeraj Goel Mittal, Global Head of Data, Analytics & AI, GSK. “We need to be careful and observe…and affirm that these [models] are not biased and are fair for everybody.”
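To make Mittal’s point concrete, here is a minimal sketch of one way a data science team might check whether a model’s outputs are skewed across groups: a demographic parity check, which compares positive-prediction rates between demographic groups. The function name, group labels, and example data below are hypothetical illustrations, not drawn from GSK’s or Within3’s actual tooling.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compare positive-prediction rates across demographic groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    Returns (largest pairwise gap in positive rates, per-group rates).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical example: a model flags HCPs for follow-up education.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap}")  # 0.5 -- a large gap warrants a closer look
```

A large gap does not prove bias on its own, but it flags where a team should investigate further, which is the kind of ongoing observation and affirmation Mittal describes.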
Hear more from Lance Hill and Neeraj Goel Mittal in the on-demand webinar “Can AI reveal real-time insights?” or learn more about pharma insights reporting.