March 24, 2023

Podcast: AI is an opportunity for medical affairs

In our new podcast, Within3 CTO Jason Smith discusses how AI can help make medical affairs more efficient, accurate, and effective.

Earlier this month, artificial intelligence research lab OpenAI announced GPT-4, an update to the technology behind the ChatGPT chatbot. GPT-4 is multimodal, meaning it can respond to images and text; it can also pass a bar exam in the top 10% of scores, rather than the bottom 10% notched by its predecessor. (We congratulate GPT-4 on its achievement and wish it the best.)

As AI capabilities accelerate, companies are looking for answers about how the technology will impact them and their work in the future. The road hasn’t always been smooth: while the pharma industry continues to invest in AI applications that support drug development and other operations, controversy recently arose over ChatGPT being listed as an author on research papers. Clearly, humans and organizations alike have a lot to learn about generative AI and ChatGPT in pharma.

In this episode of Within3 Questions, we sat down with Within3 CTO Jason Smith to find out where we are in our journey with artificial intelligence and what medical affairs leaders need to know now.

Podcast transcript: Within3 Questions with Jason Smith

Hi, and welcome to Within3 Questions, a podcast where we ask one interesting person three interesting questions about life science, technology, and insights management. Our guest today is Jason Smith, Chief Technology Officer here at Within3. Welcome, Jason.

Jason Smith
Thank you. Happy to be here.

Thanks so much for joining. So our topic today is artificial intelligence, or AI – mainly how pharmaceutical companies use it and what they need to know about it. I was prompted to ask you these questions because there's been so much about AI in the news. In my job as a marketer, there's a lot of conversation, for example, about ChatGPT: how do we use it? What does it mean for our jobs? What does it mean for our learning curve? And recently, there was some interesting conversation about a New York Times article where a reporter had a conversation with Microsoft's AI-powered Bing search engine, which apparently left him "deeply disturbed," in his words. So, for our first question: given all this conversation, where are we in our relationship with AI as humans?

Jason Smith
That's a great question. I'll do my best to answer from my viewpoint. Obviously, it's a huge conversation, and many people are working on bringing AI into the day in the life of every user, regardless of job function. That article is interesting in particular because of where ChatGPT started to shift the conversation: it's one of the first times, en masse, that a lot of individuals got to interact with a natural-sounding response derived from an AI solution, from these large language models. So I think that's a bit of the uptick you see. It's always important to remember how we build these models.

Models like GPT-3 and ChatGPT – which is a more focused subset of GPT-3 – are largely based on data collected from the internet, right? So you have one of the main crawlers that collects everything from the internet: Twitter feeds, Wikipedia, other social media data. Unfortunately, that includes a lot of opinions, and not all of them are rooted in fact.

So when we train a model on all this information, without context for what those things are, it should not be surprising to us as end users – as humans, if you just look around the world today – that when a model is trained to know nothing other than what we tell it, on data that may contain that type of content, then without a filter it will repeat that type of content when prompted in certain ways.

Where would you say we are in our understanding of AI’s capabilities right now on a maturity scale?

Jason Smith
That's a tough one. I think it's more nuanced – it depends on the industry and how the applications are being applied. In a lot of cases where the term AI is used, instead of artificial intelligence I'd really use the term augmented intelligence. I think that's more apropos to what we're seeing in the industry today, and our relationship with these systems differs by domain. Some areas, like image recognition and analysis, are far more mature, simply because there's been a lot more focus on them. You're already starting to see work toward self-driving cars, and that takes a lot of intelligence: processing video and images in real time with contextual awareness – a dog runs out, a stop sign appears – and it works anywhere around the world, so you have the language barrier broken down as well.

That's a bit more mature than, say, a language model being deployed to help a commercial or marketing team understand exactly how to market to a subset of a user base, right? Because we're still learning not only how to train those models in a cost-effective and meaningful way, but also how to bring them into the daily workflow such that they have an impact on the day in the life of the user. So I think right now we're going to continue to see more of an augmented intelligence approach, where AI is deployed in very specific areas with very specific models. Over time – one model deployed to solve problem A, then another for problem B, then problem C in your workflow – you'll have AI touching every aspect of it, but it will be more of a build than a rip-and-replace approach.

Okay. So turning a little bit toward our audience and their needs. You’ve written for several publications about the use of AI in the pharma and life science industry. And you’ve written specifically about incrementalism in AI. So can you explain to us what that means and why it’s important specifically for the pharma industry to understand?

Jason Smith
Yeah, absolutely. Incrementalism builds on my last point: it's not about replacing everything – replacing an entire department with AI and technology. It's about asking how we can optimize one part of our workflow in the life science industry. I'll pick medical affairs as a case study. Medical affairs teams are out speaking with researchers, physicians, and thought leaders around the world, and they're capturing lots of rich observations from those interactions. They're hearing from the boots on the ground, at the leading edge, what's happening in a given disease state or in a broader therapeutic area. Being able to use natural language processing – and maybe some other subsets of machine learning – to understand in real time what macro topics are coming out of all those interactions in the therapeutic area is a real opportunity.

And then, even further: how are those emerging topics aligning with my strategy and my education approach? That, to me, is a great first step in applying artificial intelligence across an organization. You're not going to have AI bots going out and meeting with the doctors, capturing the data, analyzing it, reporting it back, and making a strategic decision or changing strategy. Humans are a critical part of that.
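As a purely hypothetical illustration of the kind of topic surfacing Jason describes – not Within3's actual implementation, and far simpler than a production NLP pipeline – a minimal sketch might count content words across field-interaction notes to surface recurring themes:

```python
from collections import Counter
import re

# Hypothetical field-interaction notes (illustrative only)
notes = [
    "KOL raised concerns about dosing frequency in elderly patients",
    "Investigator asked about dosing adjustments for renal impairment",
    "Physician discussed adherence challenges with current dosing schedule",
    "Thought leader highlighted unmet need in biomarker testing",
]

# A tiny ad hoc stopword list; real pipelines use curated lexicons
STOPWORDS = {"about", "in", "for", "with", "the", "and", "asked", "raised"}

def macro_topics(texts, top_n=3):
    """Count content words across all notes and return the most frequent."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS and len(w) > 3]
    return [term for term, _ in Counter(words).most_common(top_n)]

print(macro_topics(notes))  # "dosing" ranks first: it appears in three of the four notes
```

A real system would use proper topic modeling or embeddings rather than raw word counts, but the shape is the same: aggregate many interactions, then surface what recurs.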

So when we talk about incrementalism, one aspect is adoption: finding the key pain points in the day in the life of an organization, and where AI can be applied. The second part is that even in that application, it is unlikely to be a hundred percent a silver bullet, right? We see this a lot, where companies deploy solutions and we hear, "Oh, it didn't work, it didn't solve all my needs." Well, just like all software and tech, AI is no different. It's very unlikely that a broad AI application from a vendor trying to sell to everyone is going to be specific to your exact data models and use case on day one. The goal is to expand it over time to try to hit 80%, then 90% – and that's an okay approach, because you're continuing to solve the problem over time.

So to pick up a little bit on what you were talking about, you mentioned AI bots, you know, going out and making the decisions and changing strategy. I wanted to ask you what misconceptions about AI would you like to dispel, specifically for the industry?

Jason Smith
The goal, really, is to help the professionals and experts in the field make faster, more informed decisions. The benefit of properly trained AI is that it can look across vast amounts of data for patterns that may be unrecognizable to humans, simply because of the time it would take them to analyze all that information. So if you think of it as an augmentation of what they're doing – and automation of tasks that are either routine or too broad for any group of humans to get a whole picture of – that's where I think we should focus, rather than having this idea that it's going to solve everything for us.

It's really about saying: how can we help solve this problem today, and then pick up the next problem tomorrow? Once you have the first solution in place and people are using it, the challenges we always see are around adoption: how is this perceived by the users? How is the organization set up to train them, so they know how to properly use the AI solution, how to interact with it, and how to get the right data back out to make informed decisions? The misconception behind broad approaches is that AI is going to solve everything, and it's just not there. You have to take a multi-pronged approach: train the end users, drive adoption, let the solution run its course, and then figure out the next step – rather than trying to bring in a unified end-to-end solution on day one.

And lastly, I guess kind of a bonus question, if I’m a vice president of medical affairs and I’m listening, what takeaways do I need to have? What do I need to know about AI now?

Jason Smith
Yeah, medical affairs – you're across the board. You're looking at drug development, clinical trials, patient care, and AI can really help make those processes and your team more efficient, accurate, and effective. Going back to what we said: it can recognize patterns in the data that would otherwise go unnoticed – patterns that help you understand how to improve patient outcomes and how education of healthcare providers can impact patients. We really see our solutions, and AI broadly, as an opportunity to make those teams more effective, to help both the healthcare providers and, ultimately, the patients.
