Improving the Efficiency of the Pharma Workforce with AI
In This Episode
This episode of the Life Sciences DNA Podcast, powered by Agilisium, goes beyond the usual tech talk to explore something more personal: how AI is helping people across pharma do their jobs better, faster, and with greater purpose. It's not about replacing scientists, analysts, or clinicians—it's about helping them breathe easier, think sharper, and make a bigger difference.
- Explains how AI enables pharma professionals to automate repetitive tasks, freeing up time for higher-value, strategic work.
- Covers how AI-powered insights support faster, data-driven decisions across trial planning, regulatory documentation, and market strategy.
- Highlights how AI facilitates better collaboration across functions by creating unified data views and intelligent workflows.
- Discusses the importance of upskilling and change management to ensure teams can effectively work alongside AI systems.
- Explores real-world examples where AI deployment has led to measurable improvements in output, speed, and operational efficiency in pharma settings.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com.
Sri, we've got Kannan Natarajan on the show today. For people not familiar with Kannan, who is he?
Nagaraja Srivatsan (00:30)
Danny, Kannan Natarajan is Senior Vice President and Head of Data Sciences and Analytics at Pfizer. He's been part of Pfizer's research and development leadership team. He's a veteran from a pharma industry standpoint with over 30 years of experience and has worked across several therapeutic areas. Currently, he oversees clinical development strategy, data sciences, and statistical functional excellence across Pfizer's global portfolio.
Daniel Levine (00:56)
And what are you hoping to hear from him today?
Nagaraja Srivatsan (00:58)
Kannan is an exciting guest on the show. He is a veteran from a pharma industry standpoint, but he's also an innovator. He brings in lots of concepts from an AI perspective and actually implements them for pharma. What I hope to hear is how he's actually implemented real-life use cases which have made a significant impact in clinical trial development.
Daniel Levine (01:21)
Well, before we begin, I want to remind our audience they can stay up on the latest episodes of the Life Sciences DNA by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Kannan to the show.
Nagaraja Srivatsan (01:40)
Hi Kannan. Welcome to the podcast. So excited to have you on. It would be wonderful, Kannan, if you could start by telling us about your AI journey. You've been at this for quite some time. You're a veteran here, so I'd love to hear your perspective on your journey.
Kannan Natarajan (01:56)
First of all, Sri, thank you for having me on this podcast. It's truly an honor. The concept of actual use of artificial intelligence or machine learning, particularly in the pharmaceutical sciences and medicine development, as much as we think it has been there for quite some time, is relatively new, because pharma is a little late in terms of adoption of some of these nice tools. And my journey actually started back in 2018, when at Pfizer we were doing simple things like robotic automation, trying to keep finding ways of bringing in automation as much as possible. One of the areas that we looked into is data reconciliation and ensuring the integrity of the data as we collect information across multiple data domains in clinical trials. We recognized that this is actually very time consuming. So, we started a whole group, a concept called the Smart franchise. And one of the tools is called Smart Data Query. The idea is to understand inconsistencies in the data that is collected across multiple data domains, and how you go about managing those inconsistencies and getting a query out to the site as early as possible so that we can address some of those things. I don't want to talk today about the Smart thing, because it has been talked about at length in many, many forums. But that is when the journey started for me, where we started recognizing the power of the concept of machine learning and bringing some of these technological innovations into pharmaceutical medicine.
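The cross-domain reconciliation idea behind Smart Data Query can be sketched in miniature. The rule, field names, and data shapes below are hypothetical, purely to illustrate the pattern: compare the same subject's records across two data domains and turn each mismatch into a query back to the site.

```python
from datetime import date

def find_inconsistencies(adverse_events, visits):
    """Flag adverse-event records whose onset date falls outside the
    subject's recorded visit window in another data domain (toy rule)."""
    windows = {}
    for v in visits:
        windows.setdefault(v["subject_id"], []).append(v["visit_date"])
    queries = []
    for ae in adverse_events:
        dates = windows.get(ae["subject_id"], [])
        if not dates or not (min(dates) <= ae["onset_date"] <= max(dates)):
            queries.append(
                f"Query to site: subject {ae['subject_id']} AE onset "
                f"{ae['onset_date']} has no matching visit window"
            )
    return queries

# Example: the AE onset postdates every recorded visit, so one query is raised.
aes = [{"subject_id": "S01", "onset_date": date(2024, 3, 1)}]
visits = [{"subject_id": "S01", "visit_date": date(2024, 1, 10)},
          {"subject_id": "S01", "visit_date": date(2024, 2, 15)}]
print(find_inconsistencies(aes, visits))
```

In a real system the rules are learned and refined iteratively rather than hard-coded, which is exactly the model-improvement loop discussed below.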
Nagaraja Srivatsan (03:35)
The Smart initiative, I think, must be an umbrella term, because you're doing multiple smart things. So, tell me how that journey of Smart happened, and what are all the different use cases or things you have under that umbrella.
Kannan Natarajan (03:49)
Yeah, there are plenty. And today I will probably not touch on many of those, because I have some interesting ones, particularly around the concept of GenAI, that I wanted to talk about, because that's actually the next phase that pharma is embracing. In the Smart franchise, apart from Smart Data Query, we had smart coding, medical dictionary assessment, and things of that nature. So, there were multiple examples we had put in place where the use of AI or machine learning made a huge impact on the time to automate some of these things. All of them are human augmentation, by the way. It doesn't replace humans. It just makes them more efficient. And in health sciences, you have to find a way to augment the humans in making that learning. So it made a huge impact. At least I can tell you the example for us was the COVID vaccine program, where it made a huge impact on the timelines, where we were able to deliver the COVID vaccine in record time thanks to some of these innovations that we had put in place.
Nagaraja Srivatsan (04:51)
That's fantastic. I think you hit upon a very important part of this AI journey, which is the human augmentation. There's a change management aspect to adoption. Give me a little bit of what the sense of the people was. Was there a lot of trepidation, a kind of "I don't want this to ..."? How did you convert that journey to adoption? It'll be great to see how people went from skeptics to adopters to users to making an impact.
Kannan Natarajan (05:20)
Yeah, I think we have to recognize that none of these are black-box tools where, just out of the box, you take it out and all of a sudden it comes up with some miracle solution. It is an iterative tool. The concept of machine learning, as any good statistician or data analytics person will tell you, is that this is based on a model, and the model only gets better on an iterative basis. You need to have the right model in the first place, validate the model, and continue to iterate on learnings on a continuous basis to make that happen. So, initially, when the tool comes up with some of these ideas of so-called inconsistencies, skepticism builds within the organization: this is clearly not an inconsistency. And people feel that the tool is less precise than what you would actually want to have. Because, as I said, the inherent assumption is that when you have these models, the precision should be of the highest extent. And that is not the case. You need continuous assessment, making sure the model refines as it continues to keep learning. And over a period of time, the precision actually improves. But that skepticism is something that we had to overcome. The second thing is the culture in the organization. As with any tool, whether it's AI, ML, or even a simple smartphone, the very first thing in people's minds is, why do I need to use this? I don't want to age myself, but I can tell you, over a period of time, when we used to have these calendar things, and then BlackBerrys, and then smartphones and all of these things, we used to ask ourselves, why do we need it? What exactly is it going to bring from an efficiency perspective? Today, none of us can exist without any of these things.
So, it takes time for people to absorb and find ways of embracing some of these changes. These are truly, I would say, a significant change in the way that we do things. So, when you have some of these, it'll take time to make that happen. Obviously, you have to find ways of educating people on how to utilize the tools and making sure that they understand them. The last distinct part of it is that it takes time to get to full utilization of the tool.
Nagaraja Srivatsan (07:42)
Kannan, you touched upon a couple of points I wanted to deep dive into a little bit. One is that there is an organizational infrastructure perspective to make it safe, make it easy. You talked about culture, which I'd love to get your double click on. The second part is you said there are skeptics, people who adopt and who don't, and the ability for people to change. Talk about both dimensions. What did you do organizationally and what did the individuals do? Because the outcome is fantastic. You were able to reduce the cycle time for COVID and get a lot out of it. So, you had a very good goal. But what were the team dynamics of how you got this thing going?
Kannan Natarajan (08:21)
Number one, any of these problems that we are addressing involve people across multiple functions. It is not a simple out-of-the-box digital tool that we can just launch. Instead, these are tailored solutions to the problem that we are actually addressing. So, when it's a tailored solution, you need individuals across multiple functions to come together, define the problem, and then find ways of establishing what would be considered an ideal solution for the very first thing that you're looking at, and how you refine that solution as you move forward. And how much are you planning to include? Just like any other solution, you can't launch at the very get-go across all of the things. You have to pilot it, making sure that the pilot has defined metrics that clearly identify the success factors and whether the return on investment is actually being met at every milestone. Because you have to have certain clear milestones where you measure these metrics and ensure that those are being met. If not, go back to the drawing board and assess what exactly failed and how to go about fixing it. So that's the first one. Going back to the culture piece, it's not about launching a technological solution. It's about ensuring that the organization is trained and has the appropriate training tools in place in order to get used to some of those changes. And you need not just the individuals who are using the tool itself; the other stakeholders also need to come together to make sure that they are all on this journey together.
And that actually has to happen first in order to bring in this change as we move forward. And that takes time. It's not launching a solution and then walking away from it. Instead, you have to have that methodical assessment on an ongoing basis. It is similar to launching any software product: when you launch for one indication or one therapeutic area, there is likely another therapeutic area where it may not work exactly the same way. So, you need to keep refining it as you move forward. You have to have the appetite to learn, continue to absorb the feedback that comes from the users, and then enhance the product as you move forward. That is a culture that listens, takes the feedback, and uses it to develop the model even better. That is an iterative process, and it makes the tool even friendlier for the user.
Nagaraja Srivatsan (11:08)
I could go down multiple different paths on how you get people trained, but I know you had a couple of good use cases you wanted to walk us through. So maybe within that journey, you could talk about the before and after. What was the state of people, process, and technology before you started these AI interventions? And then what happened in that journey? I'm sure there were some positives and some challenges which you overcame. I would love to pick at it within a use case or a case study which you would like to share with us.
Kannan Natarajan (11:37)
The two case studies that I wanted to touch upon today, which are more relevant in today's world: as we all started embracing ChatGPT and all of the other generative AI solutions, pharma obviously woke up and said, "my God, we have to leverage generative AI for automating many of the documents that we actually produce." In drug development today, I have to say the complexity is mounting and the timelines are stretching. Everybody is worried about how long it takes to develop a drug and how long it takes to run a clinical trial, and bringing a life-changing therapy to patients is an incredibly complex, much costlier, and longer journey. We are now recognizing it's a revolutionary shift, and leveraging generative AI can transform how we create as well as manage the vast documentation ecosystem that truly underpins drug development. And I can tell you, from an example perspective, about the document challenge our industry faces, because, for good reason, we are very much a highly regulated environment; we're talking about health sciences. Our industry faces an enormous challenge when it comes to the administrative as well as operational burden in drug development, and it often interferes with our ability to innovate efficiently. For each clinical trial, the teams produce thousands of pages of documentation: protocols, statistical analysis plans, clinical study reports, safety narratives, informed consent, and patient information leaflets when the drug goes out, among others. There are so many of these documents. And this process consumes thousands of expert hours across multiple functions, from the early stages, preclinical to clinical, to regulatory and medical affairs and other things.
So, for a typical phase 3 program, documentation tasks alone could consume about 20 to 30% of our valuable resources, scientists' and clinicians' time. And this is more of an administrative burden that can directly extend development timelines and ultimately have a downstream impact of delaying patient access to therapies that are much needed in many therapeutic areas. This is a big problem. Yeah, it's a huge problem. Believe it or not, it's so time consuming with all of these documents. It's not about ChatGPT writing a very nice email for you or Microsoft Copilot writing an email for you. We're talking about documents that make sense from a scientific perspective. So that's the problem. And the opportunity with generative AI in this case: it does present a very strategic inflection point here. By automating and augmenting document creation and management, obviously human augmentation is absolutely critical, AI can dramatically accelerate the process and enhance operational efficiencies. Specifically, generative AI has the potential, for example, to rapidly draft initial versions of essential study documents, from protocols to clinical study reports. You can also automate quality control checks, including source annotation verification. And it also reduces human error, because it's an automated process. If you and I, for example, were to look at the same thing, you could be looking at one thing and I could be looking at another. So, it reduces that human error piece. It also streamlines document revision management, facilitates seamless amendments, and, more importantly, tailors writing styles for different audiences, because we are developing these drugs for "across the world" submissions and approvals.
And so, it actually does tailor the writing styles for different audiences, enhancing readability not only for regulators but also for patients and healthcare professionals as well.
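One of those automated quality-control checks, verifying that every figure quoted in a generated draft traces back to the source data, can be sketched in a few lines. This is purely illustrative; a production pipeline would check against annotated source tables rather than raw strings.

```python
import re

def unverified_numbers(draft_text, trusted_values):
    """Return figures quoted in a generated draft that do not appear in
    the trusted source data (illustrative string-level check)."""
    quoted = set(re.findall(r"\d+(?:\.\d+)?", draft_text))
    trusted = {str(v) for v in trusted_values}
    return sorted(quoted - trusted)

draft = "The response rate was 42.5 in arm A and 37 in arm B."
print(unverified_numbers(draft, [42.5, 37]))  # nothing to flag
print(unverified_numbers(draft, [42.5]))      # "37" cannot be verified
```

Checks like this are where automation narrows the human-error window Kannan describes: two reviewers may look at different things, but the check looks at every number every time.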
Nagaraja Srivatsan (15:36)
You touched upon a very important part. The 20 to 30% is one part of it, which is the actual time effort. But I think you touched upon a very critical part, which is the consistency effort. What you write and what I write is not consistent. You're actually trying to take the average up, the mean, but more so the median, because you're now getting more people to write better at a higher standard. So you're bringing the lower end of folks up to a much higher standard and also making it consistent. The other part which you said, which is really interesting, is changing to the different styles of output, because usually pharma is all about templates and getting one voice. Now you're saying, hey, get it to one voice, but then I can manage that one voice into multiple different voices based on the audience, the regulators, and all of that, which is a very fascinating part of this journey. How do you get somebody started on it? Let's say I'm part of your team. What tools are you giving me to author? What kind of training are you giving me to get ready? How do I onboard myself into this journey? I'm sold, but how do you enable me to be part of this journey?
Kannan Natarajan (16:42)
I think we have to recognize that in any business, and pharma is no exception, you have to have quantifiable benefits and return on investment, because when you're investing in these types of technological innovations, you need to define a way to quantify what the benefit is, why it is a benefit on the receiving end, on the stakeholder side, what the return on investment is, and what the timeline of that return is. The first thing you have to ensure is that the stakeholders have buy-in for some of these things. So, incubating genAI in our R&D workflow isn't just innovative; it has the power to be potentially transformational, and that's critical. Any significant reduction in document generation timelines has the potential to directly translate into efficiency as well as accelerated development cycle timelines. And so, you have to measure these metrics to show that it does improve the cycle timelines. This not only leads to time and resource savings, but it also means we are able to meet unmet medical needs for the patients that we serve. That's going to be a critical one. And I have to say there are three elements, right, from a challenges and considerations point of view. One, we have to recognize, on the technical front, as I've mentioned before, remember these are based on models. So, number one is ensuring model accuracy, because you cannot actually exaggerate. I can write a poem for my wife using ChatGPT, and she's not going to be concerned about whether the poem is accurate. In health sciences, accuracy is absolutely critical. So, ensuring that your model is accurate is critical, reliability is going to be critical, and rigorous validation is paramount. And more importantly, data privacy and security are non-negotiable.
Then there's seamless integration with our existing document, trial, and regulatory information systems. That's very crucial for scalability, because you can build one solution, but if it doesn't talk to our other platforms, it is not going to be scalable. So that's the technical point of view. The second one, which is very critical, is the regulatory standpoint. We need to ensure that we engage with government regulatory authorities, building trust and demonstrating robustness and compliance of AI-assisted documentation through transparent validation and clear audit trails. And there have been clear requirements that both the EMA and the FDA have laid out, and the European Union has also laid out what we would call good use of AI and how to go about it. That's the critical piece. The third, which is a critical one operationally, as we have talked about much of this: embracing this AI-driven change definitely requires a strategic organizational shift, which means new skill development and addressing ethical considerations very proactively. Those are the three key challenges and considerations you have to address in order to truly make it as impactful as it should be.
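The rigorous validation Kannan calls for usually starts with something simple: compare what the model flags or drafts against what human reviewers confirm, and track precision and recall at each milestone. A minimal sketch, with made-up issue IDs:

```python
def precision_recall(flagged, confirmed):
    """Precision and recall of model-flagged issues against
    human-adjudicated ground truth."""
    flagged, confirmed = set(flagged), set(confirmed)
    true_positives = len(flagged & confirmed)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(confirmed) if confirmed else 0.0
    return precision, recall

# Model raised four queries; reviewers confirmed two of them and found
# one issue the model missed.
p, r = precision_recall({"q1", "q2", "q3", "q4"}, {"q1", "q2", "q5"})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

Tracking both numbers per milestone is what lets a team decide, with evidence, whether the model is refining over iterations or needs to go back to the drawing board.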
Nagaraja Srivatsan (19:57)
I totally buy into that concept. Actually, one of the case studies we talked about before was just improving communication, which is, say, 20 to 30% of the effort, because you're communicating clearly. But the way I wanted to ask the question is: let's say I'm not Pfizer and I don't have this infrastructure. How do I get started on this journey? You said ROI, absolutely, and you've laid out the pieces for the ROI. Documentation automation is really going to benefit cycle time. So, we know the ROI, but many people struggle with what their starting point is. Now that I know I have to do it, do I pick one process? Do I pick multiple processes? Do I pick a team? Where do I get started? And then walk me through maybe a couple of steps: after we start, what are all the guardrails I have to put in place to make this thing scale?
Kannan Natarajan (20:48)
I think one of the key things is that you don't need to develop all of these things internally. Pharma is not a technology company. We're not the Googles or the Oracles of the world. But certainly, we recognize that there are solution providers out there. First of all, if you are not a Pfizer or a similar company, the first thing you have to do is create a challenge across these providers to understand how they actually work. Give them examples and some of the basic requirements that you would want, and see how accurate, or how close to accuracy, they can get. Because, as you know, genAI and many of these tools do have this concept of exaggeration, and you don't want to have that in a health sciences document. So, you want to uncover the potential opportunities and the challenges of using genAI in this context. There are providers that exist, but you have to pick and choose by truly setting a test with an asset, where you take a certain segment of the document and see how close they can get to it, by having them compete with multiple others to see how close they can get to the real thing. That is going to be the critical one, because you don't need to develop everything in-house, but you do have to figure out a way to assess how you go about managing this. And then the other one, for the future, I would say: internally, we need to start thinking about how we change people's mindsets, because there is, I would say, a fear of AI. Unfortunately, many individuals continue to say it loud and clear: they somehow feel AI will replace humans. That is not the case. AI will actually make humans more efficient; we will be able to do things in a much more effective and efficient manner than we had done in the past.
So that is something that needs to be very loud and clear when it comes to the communication message. And how do you ensure that genAI is an augmentation of the work that we do in order for us to be even more effective. And that's something that we have to actually focus on as well.
Nagaraja Srivatsan (23:01)
Both are very good points. I love the innovation challenge and competition. I think that way you can almost democratize what the right tool is for your particular situation and environment. It's been a good journey. What are some of the headwinds you will face as you start to scale this across pharma? And what are the tailwinds you have to accelerate this journey forward?
Kannan Natarajan (23:24)
Yeah, I think the headwinds are that many of these genAI models are not appropriately trained on the data they need to be trained on, right? There are a lot of large language models that exist today, and many of them are good for very many purposes. When it comes to the document generation piece, many of these large language models are often trained on data sets which are pretty large compared to the health sciences data. So, the number one headwind, I would say, is bringing some of those large language models in, not necessarily developing your own, but bringing them in and then training them on the data that you have internally, within your own sandbox, I might say, because you don't want to put it out there. As I mentioned, privacy is absolutely critical. Confidentiality is absolutely critical. So, we need to ensure that it is within your own sandbox that you do this type of training of your models, ensuring that the output from these models is as accurate as it should be, based on the data that actually exists. I would say this is a big challenge, a headwind, and it takes time, because many of the large language models that currently exist are not trained appropriately on health sciences data. We still see some inaccuracies in many ways, and you can't afford inaccuracy when it comes to these types of things. So, you need to make sure that that actually happens. That's the biggest challenge, I would say.
And then the tailwind, I would say, is that industry-wide, everybody recognizes that the use of these tools can help us accelerate our drug development timelines and make us more efficient as we move forward in creating these multifaceted documents and helping the whole drug development process. So, there is a movement across all of the pharma companies as well as the technology companies, who recognize this is an area where tremendous opportunity exists as we move forward. That is a tailwind that we have right now. There are many companies that are all on this path together. And so, there are opportunities for collaboration across pharma and across technology companies, finding ways of making sure that these models are done appropriately, in order to suit the needs that we have across different frameworks. That's critical.
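The sandbox point above is often enforced in practice with a de-identification pass before any internal text reaches a model. A minimal sketch, assuming hypothetical subject-ID and date formats; a real trial would use its own identifier conventions and a far more thorough rule set:

```python
import re

# Hypothetical patterns; real studies define their own identifier formats.
REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{6}\b"), "[SUBJECT-ID]"),
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),
]

def redact(text):
    """Replace identifying tokens before text leaves the sandbox."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Subject AB123456 reported the event on 2024-05-01."))
```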
Nagaraja Srivatsan (25:54)
I mean, both very spot on. I think pharma has always been one where they want to see success stories, and then they are fast followers. And so, as more and more successes come in, that's going to be a good tailwind. But as you said, the headwinds are always going to be there in terms of context, having more of a health sciences context, and making sure that happens. As an expert who's gone through this journey, what would be your key takeaways for the audience? If they're going down this journey, what would you recommend they do? What is their starting point, and what should they watch out for?
Kannan Natarajan (26:29)
As I said, the audience has to buy into the concept of what is considered an opportunity here and what could be a challenge. You cannot actually bury the challenge and say there is no challenge, that it's a solution that's going to make things in life easy. So, you have to spell out the opportunity. You have to be transparent when it comes to the challenge. You also have to be transparent, as I mentioned, on the three legs that we talked about. On the regulatory aspect, make sure that you're working with regulators to ensure that you have a very transparent process in terms of how you develop this, how you validate this, and what you are putting in place in order to have the checks and balances. That is going to be absolutely critical. So that's what I would phrase as ways to enhance these types of things. And again, we have just begun the journey in genAI. There are so many more opportunities using generative AI. This journey just started. There's a lot more opportunity as we move forward.
Nagaraja Srivatsan (27:24)
Yeah. And one last futuristic question. As you said, the journey has just begun. If we are in 2030, five years from now, what is your prediction? What would happen within drug development and clinical?
Kannan Natarajan (27:37)
I think, first of all, 2030 is still a pretty short timeframe. We're talking about what is likely to happen within five years. But I do believe that if a significant number of these documents can be generated and automated, reducing the timeline by at least 30% and resources by 30%, that in itself is a pretty good thing, because I do believe that helps us gain from an overall timeline perspective; documentation is huge when it comes to drug development, and that would be critical. By 2030, of course, there are other significant opportunities that exist. In fact, if I may, depending on the time, I want to touch upon another good case study that we are doing, which is a critical one: digital health technology. Again, it's in an infancy stage. We have seen mobile glucose monitoring, monitoring your glucose on a regular basis, and that seems to be very prevalent now. Likewise, blood pressure monitoring, heart rate monitoring, things of that nature. Digital health technologies and wearable sensors also offer huge transformative opportunities in clinical development, because of the burden on the patients when it comes to participating in clinical trials, as well as post-approval, when the patients are on any of these treatments. Are they actually going to take the time off to go to a clinical site and assess whether they are okay? Whether it's a patient who has cancer, to see the likelihood of relapse of the disease, or any of those types of things. The sensors enable more remote assessment and monitoring, and also, from a clinical trial perspective, remote participation from these patients. And the wearable sensors, supported by artificial intelligence and machine learning, are, as I said, very much in the infancy stage. It's only going to take off by 2030.
I predict that there's going to be a great opportunity to be able to measure a significant amount of information, clinical trial endpoints measured through some of these devices. That makes a huge impact overall from a patient-centric development. Because right now, if you look at our clinical trials, many of them are not patient-centric. And patient-centric development and patient-centric monitoring post marketing is also going to be a critical one.
Nagaraja Srivatsan (30:07)
No, I mean, those are fantastic use cases. And more and more, the FDA and other regulatory agencies are asking you to create what they call clinical endpoints rather than observational endpoints. And I think that in itself is a huge topic to talk about, because many of the early digital health measures were very observational endpoints, which the FDA thought carried too much noise. But with AI and other tools, you can actually make these clinical endpoints, and then clinical endpoints can be incorporated as biomarkers or disease progression measures, which can then help from a regulatory submission standpoint.
Kannan Natarajan (30:41)
Yeah, and the challenges are there. I think we have to develop robust evidence to validate some of these digital endpoints from a regulatory approval perspective. The classic one is when the Apple Watch was measuring heart rate: the question became, do you go in and assess the hardware, or do you assess the output from the Apple Watch? All of these were regulatory questions that were asked, and it is now approved, as you know. It is time-consuming and it is costly; by no means are these cheap. It takes time to do. But standardizing some of these endpoint definitions across sponsors helps you go forward in a more rapid manner and also drives broader adoption as well.
Nagaraja Srivatsan (31:24)
Absolutely. These are almost the two big holy grails people can look for. One is the operational efficiency you talked about, the document process, which is huge and humongous for us in clinical trials. And then really looking at the future and thinking about how you take on endpoints, clinical endpoints in particular.
Kannan Natarajan (31:43)
I want to close with some key takeaways, though, because I do feel it's important when it comes to best practices for AI adoption. I do believe that, while ensuring compliance with all applicable laws and regulations, you have to define a way to align people, process, and technology, and that's critical. From a people perspective, this involves upskilling teams, fostering cross-functional collaboration, and cultivating a culture ready for AI-driven change. That's going to be critical. Process-wise, you have to have best practices, including automating high-burden documentation tasks, embedding quality control and audit trails, and integrating technology tools like AI into document workflows. On the technology front, it is critical to validate these tools appropriately, integrate them with existing systems, and of course invest in scalable infrastructure. That's going to be critical, whether it's wearables or digital tools. Together, these practices enable faster and more efficient development while maintaining regulatory rigor and patient focus.
Nagaraja Srivatsan (32:47)
This has been a fantastic conversation. I really appreciate you providing all these different insights. Thank you so much.
Kannan Natarajan (32:55)
Thank you.
Daniel Levine (32:57)
Well, Sri, what did you think?
Nagaraja Srivatsan (32:59)
Danny, that was a great podcast. Kannan talked about two very relevant use cases. One is for the here and now: how do you save 20 to 30% of the time in clinical trials by automating the documentation process? In clinical trials, documentation is the bane of every function and every role, and bringing in AI to streamline and reduce that effort is fantastic. The second one looked further ahead at digital health technologies, and again, how do you bring in AI to build the future clinical endpoints which ensure that you're taking care of patients remotely and monitoring them, while also collecting data from a digital endpoint standpoint, which you can then use for regulatory submissions.
Daniel Levine (33:48)
In that regard, he talked about some related issues, like the problem with large language models and the need for them to be appropriately trained for the health sciences. This is a point I've seen raised elsewhere. What can companies do to address this?
Nagaraja Srivatsan (34:03)
With large language models, the critical part is what's called context. The more context you give, the better those models perform from a hallucination standpoint. So, what Kannan said is that if you train these large language models more and more on the healthcare context, you're going to make sure they perform much better, hallucinate less, and become much more accurate and appropriate for adoption.
Daniel Levine (34:31)
He also talked about the need for quantifiable benefits and being able to show return on investment. How challenging is it to measure what's a small part of a very complex process and are there key metrics companies will use?
Nagaraja Srivatsan (34:46)
As Kannan said, ROI is a very important parameter and metric to measure. And for that, as he said, there is a before and after state. What is happening with the process currently in terms of time, effort, and cost? And then what happens after you implement the AI in terms of time, effort, and cost? What Kannan said is that it's a journey. It's not an instant switch where you implement AI and immediately get the benefits. He said that as the AI model learns more from the interactions, and as the humans learn from using the AI models, the ROI becomes much clearer. But the clear question behind most of the ROI is: can we save time? Can we save effort? Can we save cost in doing each of these things? It's not a switch; it's a journey to get to that endpoint of success.
Daniel Levine (35:34)
You also talked about culture and adoption, and this not just being about the need for buy-in from the people who are actually using these tools, but from others as well. He also talked about the fear that exists that this technology could replace humans, and the need to communicate that this is an augmentation that allows people to perform better. To what extent do you think this inhibits the successful or rapid integration of the technology?
Nagaraja Srivatsan (36:06)
I think he gave us the right framework. There are two parts to the success: one is the organization, and the second is the individual. As an organization, as he said, you have to create clear metrics, create a culture of adoption, and make sure it's safe for people to try it out, as well as having metrics to measure. That's the organization. From the human and individual perspective, it is about overcoming the fear to try: giving people the ability and the tools to try it and make their lives better, and articulating the value from their standpoint, and how it's more augmentation than replacement. So, I think he gave us a good playbook around organizational structure, as well as the individual things one can do to make adoption happen.
Daniel Levine (36:57)
Well, it was a great conversation. I'm so glad we were able to have Kannan on the show. Sri, thanks as always.
Nagaraja Srivatsan (37:04)
Yeah, thank you.
Daniel Levine (37:06)
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com.
For Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.