Aired:
June 12, 2025
Category:
Podcast

Driving Efficiency in Drug Development with AI

In This Episode

In this episode of the Life Sciences DNA Podcast, powered by Agilisium, we explore how AI is helping drug development teams work smarter, move faster, and waste less—all while keeping patients at the center. It’s not about cutting corners; it’s about cutting the clutter.

Episode highlights
  • Covers how AI is used to identify drug candidates faster through predictive modeling, target validation, and compound screening.
  • Explains how AI optimizes trial design, patient recruitment, and real-time monitoring—shortening trial duration and improving outcomes.
  • Highlights how AI helps teams make informed decisions on trial progression, molecule prioritization, and investment allocation using real-world data and advanced analytics.
  • Discusses how automation and AI cut down manual tasks, reduce protocol amendments, and lower trial failure risks, driving significant cost savings.
  • Explores how AI ensures cleaner data, faster evidence generation, and improved documentation for more efficient regulatory submissions and approvals.

Transcript

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Venky Iyer on the show today. Who is Venky?

Danny, Venky is the Director of Data Strategy and Automation at Pfizer. He has more than three decades of experience in the pharmaceutical sector. Currently, he's involved in several AI/ML projects and strategic initiatives within Pfizer. He's been at Pfizer since 2019.

And what are you hoping to hear from Venky today?

I think Venky is an experienced practitioner of AI/ML. He has implemented classic AI/ML to make drug development work more efficiently, and he's also worked with GenAI and other newer tools to make it happen. So I really want to look at the couple of use cases he's going to talk about and how they can improve speed and operational efficiency across clinical trials.

Before we begin, I want to remind our audience they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Venky to the show.

Hi, Venky. Welcome to the show. It's really exciting to have you on the show. Venky, what I wanted to start with is if you could briefly discuss your AI journey and what kind of pathway you took to get to where you are today. So it would be great if you just started with an introduction from a journey standpoint.

Thank you. We started our journey in 2021, or 2022 rather. And our focus has been on looking at some key areas where we can go in and realize efficiency gains. I'm going to go in and outline those efficiency gains, or outline those use cases, to you. I have a couple of use cases in mind, and the focus will be on talking about each use case by setting the context, the approach that we took, and the value that we realized out of that particular use case.

That's fantastic. I think that's a good framework, really, to talk about a specific use case journey. So why don't we jump right into the first use case?

Yeah, thank you. So the first use case is more around clinical coding. As part of conducting studies, data is collected from the patient, and we have to go in and code the verbatim terms, whether they have to do with a drug product, a particular indication, or a side effect the patient is encountering, and these verbatim terms get coded against WHODrug and MedDRA respectively. Primarily, if you go in and look at the industry maybe five, six years ago, it was all manually coded. You always had a large number of team members, so-called clinical coders, that would go in and look at the data coming in, and they would code that data to the appropriate dictionaries, WHODrug or MedDRA. It's a long-drawn process, and we saw an opportunity to accelerate that using machine learning and AI. And what we did there was to implement ML and AI to go in and do the automatic coding, with multiple predictions and confidence levels. The way we started to do that was, rather than taking all studies at the same time, we started with a handful, looked at the terms that were coming in, used the standard dictionaries to build the models around that data, and then had the model run against that handful of studies to provide the predictions with the confidence levels.

So almost like a training data set. Correct. Which you used with that set of studies. Then once you built the training dataset, the model could then be refined with other implementations, and then you started to improve the efficiencies of the model. Correct.

It took us a few iterations working on fine-tuning the models. Right off the bat, it gave us good results, but we wanted to target higher efficiency gains. So we did multiple retrainings of the model and then used that to actually build the efficiency gains. If you go in and look at the pharma industry, there are some coding conventions that might be very unique to a particular client or a particular company. When you go in and do these models, it requires a little bit of time to train them on the uniqueness of the conventions. And that's why we took about three to four iterations to get to where we needed to be in terms of efficiency gains. And not only that, we also built a portal around the effort to make sure that the predictions with the confidence levels were visible to the coder. And obviously, we want to have a human in the loop to be able to go in and look at those confidence levels and then do bulk acceptances based on the confidence levels. But we also had the opportunity to go in and look at certain predictions that were not up to snuff. And if we wanted to go in and do a manual coding on top of that, we would do that. And the model would go in and learn from it, so that the next time we see something similar come up, it has the ability to deliver the predictions with much better accuracy and confidence levels.
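As a rough illustration of the workflow Venky describes here (candidate codes with confidence levels, bulk acceptance above a threshold, and human-reviewed corrections flowing back into retraining), a minimal sketch might look like the Python below. The function names, field names, and the 0.95 threshold are hypothetical assumptions, not details of Pfizer's actual system.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only: names and the threshold are assumptions, not Pfizer's system.
AUTO_ACCEPT_THRESHOLD = 0.95  # bulk-accept predictions at or above this confidence

@dataclass
class Prediction:
    verbatim_term: str                        # e.g. "headache, mild"
    candidate_codes: list[tuple[str, float]]  # (dictionary code, confidence), best first

@dataclass
class CodingDecision:
    verbatim_term: str
    code: str
    source: str  # "auto", "coder-confirmed", or "coder-corrected"

def triage(predictions: list[Prediction],
           manual_code: Callable[[str], str]) -> tuple[list[CodingDecision], list[dict]]:
    """Bulk-accept high-confidence predictions; route the rest to a human coder.

    Returns the final decisions plus the manually reviewed examples, which can be
    fed back in as labelled data when the model is next retrained.
    """
    decisions, retraining_examples = [], []
    for p in predictions:
        best_code, confidence = p.candidate_codes[0]
        if confidence >= AUTO_ACCEPT_THRESHOLD:
            decisions.append(CodingDecision(p.verbatim_term, best_code, "auto"))
        else:
            coded = manual_code(p.verbatim_term)  # human in the loop
            source = "coder-confirmed" if coded == best_code else "coder-corrected"
            decisions.append(CodingDecision(p.verbatim_term, coded, source))
            retraining_examples.append({"term": p.verbatim_term, "code": coded})
    return decisions, retraining_examples
```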

That's a fantastic use case. Back in my day, when we used to do medical coding as a BPO, we would have a coder code from 40 terms to 80 terms, depending on their proficiency and efficiency. And the goal was to make it better all the time. So tell me, what were the coding efficiencies of these manual coders before, and now with this process, how dramatically were you able to cut the time it took from a coding perspective?

Yeah, from a coding perspective, I can say that we are seeing anywhere from 70 to 80% on many of the terms that are being coded. But for the ones where we need some human intervention, the coder intervention, to be able to go in and code, we are still seeing about 40%, because it allows us to focus only on the ones that require that human intervention and, you know, have that re-learning. So overall, net-net, we are able to go in and do more with less.

Just, you know, as you built this model, let's peel the onion a little bit on how you make the secret sauce. Did you use a large language model to start, or did you have your own AI model which you built from scratch? Did you work off of other models?

We partnered with a vendor to basically go in and do this model. And the vendor used the baseline dictionary data that we provided to go in and train the model. And iteratively, we had to use different algorithms underneath, and we had to play with it as part of the retraining to figure out which ones were giving us better efficiencies and better gains. And that is a process that requires involvement from the functional team, along with the technical AI leads, to be able to come to a happy medium where we are starting to see predictions with higher confidence levels to leverage.

Yeah. Now, that's wonderful that you built from a base model and continued to do retraining and reinforcement learning from experts and the business. Tell me a little bit about the people impact. Were they open to adopting the AI suggestions? What was the change management process you had to put in place to get them comfortable using a tool like this?

In order to do that, you have to engage the operational team as part of the process of actually delivering ML and AI. It's not like you deliver a tool and then have somebody use it later. You have to bring them along for the journey. And our coding team was involved from the very beginning, closely working with the technical leads and the developers to make sure that there was a constant feedback loop, that iterative loop, to be able to evolve the product. It's not just about delivering the ML and AI, right? There are also some workflow efficiencies and other things that we wanted to build as part of the overall tool set. And we were able to bring them along for the journey and make sure that they were able to provide constant feedback iteratively as we built the model, to have it evolve into an end product that is operational.

Was that a part of your existing workflow? Because a lot of times people do ML separately, but it seems like you have landed the ML right in their workflow so that they're not changing too much of their workflow. Or did you have to change that workflow too?

There was some level of workflow adjustment, but it was with the efficiencies in mind. As long as the workflow was accelerating the delivery of the coded terms for studies in a much better way than before, there was always going to be acceptance of that. Obviously, it takes a little bit of time for people to get used to the way of working, but we started slow, didn't do all studies at the same time. We started slow with a handful, got them trained on the interface and on the ways of working, before we started to open the floodgates for other studies.

Now, that seems like a very practical way. We've seen many leaders like yourself follow that journey of crawl, walk, run, and get the change management or adoption going. I'm sure as you started to put this in play, you must have had some resistance, organizationally or culturally or from a team. What kind of resistance did you find, and how did you go about mitigating it?

For us, there were a few challenges, but they were not from the people aspect of it. They were more to do with the process and the adoption of the process. Any time you introduce something new, it takes a little bit of time getting used to. And we gave ample time, through methodical, slow release and operationalization stages, to be able to get that comfort level up. One of the things we had a little bit of a challenge with was making sure that all of the coding rules that the clinical coders used, and the conventions that they used, were incorporated in such a way that they were able to reduce the amount of time it took to accept and approve the terms coming out of the ML AI framework. Then there was the building of the custom dashboards. Ultimately, when you go in and put a solution together, it's not just about delivering the AI piece, right? You've got to wrap that in a way that it is acceptable to the business and to the end user, and that they're seeing value. Ultimately, when they see value, adoption becomes a lot easier. It's all about the value. So our focus was always on the value first, and then adoption would continue.

No, absolutely. I think you hit upon a very good point about value, because many times people focus the AI/ML on the efficiency play, which is one part of the value. But I think getting the quality of the whole coding process better is going to improve all downstream systems and everything else you're doing in clinical trials. So there are two parts to the value: by doing it right the first time with less effort, you're actually improving the overall value of what you can do from a clinical trial standpoint. Were there other metrics you used to justify that ROI and value, in addition to efficiency and quality?

Obviously, one of the things that you focus on when it comes to efficiencies is, how do you go in and repurpose the team? And how do you make sure that the team is operating in a way that they can absorb more studies without having to do the resource scale-up? It is not about resource reduction, but about managing the allocation and supporting the scale-up.

I think that's a fantastic thing, that you're actually taking more workloads into your operations team and making them more efficient and effective. As they started to take on more workload, did you see any challenges in that, or, because the process was so automated, was it actually not a big challenge for them?

Not a big challenge at all. You know, as in pharma, if you're looking at a follow-the-sun model, you can go in and add multiple shifts, and you can have people cover different shifts to make sure that the inflow of terms is being addressed effectively, and that you don't have to wait for the US hours, where there's a significant backlog to deal with. We keep close tabs on what the backlog is for them to go in and either bulk-accept or manually code the terms. And we look at that, and we also measure the metrics around how much manual intervention is required to code. And using that measure, we can figure out when the model retraining needs to happen.
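A minimal sketch of the kind of metric Venky mentions, assuming the CodingDecision records from the earlier sketch: track how much manual intervention coding still requires and flag when retraining may be due. The 30% threshold and function names are illustrative assumptions, not figures from the conversation.

```python
# Hypothetical monitoring sketch: decide when model retraining is warranted based on
# the share of terms that still needed manual coding. The threshold is an assumed figure.
RETRAIN_IF_MANUAL_RATE_EXCEEDS = 0.30

def manual_intervention_rate(decisions) -> float:
    """Fraction of coding decisions that required a human to confirm or correct."""
    if not decisions:
        return 0.0
    manual = sum(1 for d in decisions if d.source != "auto")
    return manual / len(decisions)

def should_retrain(decisions) -> bool:
    """Flag a retraining cycle once manual intervention climbs past the threshold."""
    return manual_intervention_rate(decisions) > RETRAIN_IF_MANUAL_RATE_EXCEEDS
```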

It needs to happen, absolutely. And this may predate some of the GenAI efforts, right? You would have done this with classic ML buildup. Would your process, or the way you created this model, change now with GenAI and things coming?

It can definitely change, and there is the potential for it to happen as well. In the industry, if you go in and look at all of the service providers that are out there, there are many that have come up with clinical coding software tools, and they use generative AI to be able to do the coding. But we have to take a look at what is there in the marketplace and how it's evolving. And if we're planning on implementing something, we have to make sure that we don't see a drop in efficiencies.

Of course. So in the future, you may have a kind of evaluation on whether you continue to build your own proprietary model or use other standard models which are available, and then make a determination from a productivity standpoint as to which one.

This seems like a great use case. What's your next use case? I'm really excited to hear that.

The next one is around document authoring. If you go in and look at the industry today, there are several documents that get generated during clinical trials, or even in the drug development continuum, right? As you go from concept to market, there are several documents that do get generated. And the amount of effort that goes in from critical resources and key SMEs to author, review, and finalize a document is enormous. So that is an industry play right now that you see, and you might have seen that in the industry as well. There have been varying degrees of automation coming up when it comes to document authoring and acceleration. Various point solutions have been developed and implemented, with a good amount of reliance on authoring applications. I'm going to go back a bit and say that traditionalists love to have core authoring applications, where the document templates get defined within the application, and based on the template and the underlying data, it allows you to generate sections of the document. And then for any downstream document that relies on certain sections of an upstream document, you have the ability to either lift and shift it or use it as context to deliver the content for the downstream document. That is document management, and that is how things have traditionally been approached. With GenAI, whether you use GPT-3, 3.5, 4.0, or whatever, there is definitely a tremendous opportunity to accelerate that process and also keep it document-authoring-application agnostic. The agnostic aspect is what is very, very appealing to people, and so the industry is slowly moving into an agnostic approach. And that agnostic approach could be: why can't we use traditional office platforms, whether it's a Word document or whatever, use a Word add-in, and then have the ability to set the context for that particular document depending on the type and the category of document being generated? Based on the context setting, you can use a prompt-based approach to generating the content. Now, the document can be vectorized, and you can run the query against the vector DB to generate the respective content. That approach is very appealing to many clients and many parts of the industry. It's not limited to pharmaceuticals either. So we are using that approach to accelerate the generation of content and also accelerate the workflow going from draft to finalization.

I think the beauty of your approach is that you're going into tools which people are very comfortable with, which is Word, PowerPoint, and others, and then authoring right there, because that's been the holy grail. Everybody starts with putting a document template in place, and then it becomes siloed to that particular document template effort, and then you go to the next one and the next one, and many times you fail because everybody doesn't want to be in an online, always-on document infrastructure. Now you could be offline, work on your document, but you're still getting the context.
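The vectorize-and-prompt pattern described above follows the common retrieval-augmented generation shape: chunk the upstream documents, embed them, retrieve the most relevant chunks for a given template section, and feed them as context to a generative model. A minimal sketch, assuming a toy embedding function and a placeholder generate() call in place of a real embedding model and LLM service; nothing here reflects Pfizer's specific add-in.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for an embedding model: hashed bag-of-words into a fixed vector.
    A real implementation would call an embedding service instead."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    """Stand-in for the generative model call (GPT or otherwise)."""
    return "[drafted section would be returned by the LLM here]"

class VectorIndex:
    """Minimal in-memory vector store over document chunks (illustrative only)."""
    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add_document(self, text: str, chunk_size: int = 1000) -> None:
        # Naive fixed-size chunking; production systems would chunk by section or heading.
        for start in range(0, len(text), chunk_size):
            chunk = text[start:start + chunk_size]
            self.chunks.append(chunk)
            self.vectors.append(embed(chunk))

    def query(self, question: str, k: int = 4) -> list[str]:
        # Cosine similarity against every stored chunk, returning the top k.
        q = embed(question)
        sims = [float(np.dot(q, v) / ((np.linalg.norm(q) * np.linalg.norm(v)) or 1.0))
                for v in self.vectors]
        top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
        return [self.chunks[i] for i in top]

def draft_section(index: VectorIndex, section_title: str, instructions: str) -> str:
    """Retrieve the most relevant upstream content and prompt the model for one section."""
    context = "\n\n".join(index.query(section_title + " " + instructions))
    prompt = (f"Using only the context below, draft the '{section_title}' section.\n"
              f"Instructions: {instructions}\n\nContext:\n{context}")
    return generate(prompt)
```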

I love the word which you used: context, and context defined by prompt. So tell me, as you went to select these plugins, were there industry plugins you used, or did you use a core GPT 3.5 and…

No, we had our own plugin that obviously needed to be customized to our AI platforms, and the industry is regulated, so it's all localized. So we had to use our own platforms. And the general idea was to build something whereby we had the opportunity to define a particular template that an add-in can use. And depending on the selection of the template, you would automatically know the base metadata to go back to the repository, pull selective documents, and set them as context for that particular document template being used.

Perfect. And so the beauty of it is that you're vectorizing existing old documents and sections, which is the old-school document management, and then using prompts to bring it into the add-in so that it's contextual to the document you're creating, and merging the two within a user landscape they're comfortable with, so they don't have to relearn and retrain how they go about doing it, but you're enforcing organizational standardization of templates and structures and all that.

Correct. And if you go in and look at it historically, you may see over the years that the document templates and the sections of the document can evolve as well, right? And what went in as content was based on the particular individual authoring the document. You take that out of the equation now, and there is some level of consistency in terms of doing that, but human in the loop is a must. So we always rely on a human in the loop to review the content that is generated, and to have a continuous feedback loop as part of the human in the loop, so that re-learnings happen, or readjustments to the prompts that need to happen, to generate better and better content.

Walk me through, as you started to do it. First, as you're a very big ROI company, what was your ROI for this particular effort? Was it a before and after? Was it speed, or was it content repurposing? Because many times the ROI for document creation goes back to: what is the ROI? Is it the first shot at doing it right, or is it reusability, or is it... So how did you go about defining the ROI?

We looked at it from both angles. The primary point is to look at efficiency gains, which is: can we speed up, accelerate, the cycle of going from a draft to a copy of the document? That requires identifying the right tool sets and the right approaches to do that. The second part is around, once the content is generated for a particular document that is upstream, let us say, we have the opportunity to keep those sections of the document in such a way that it is digitized content that can be reused, either as a lift and shift or as a specific input to the context for a downstream document to be generated.

No, it's almost like, as the technology evolves, you could create a knowledge graph of these things to then reutilize these components, and not just reuse the actual components but reuse the knowledge, which is where I've seen the market go. We're going away from document output to actual knowledge output.

Correct, and the ability to go in and recreate the document based on the object model and based on how the table of contents is maintained is also a win-win for us.

Perfect, perfect. So you've literally taken what has been the holy grail of the old school of documents, document chunks, snippets, and search into the new age of bringing GenAI into the workflow of creation, but also applying it to the knowledge side, extracting the right content and the context from those sections of documents.

And there is also a good opportunity as you go in and digitize the content of the documents. If you have an independent rule engine that allows you to map sections of documents across your development continuum, in terms of what the dependency and relationship of an upstream section of a document is to a downstream section, whether it is a lift and shift or a specific context for generating something downstream, then you are keeping everything loosely coupled, which is equally effective for dynamically setting context depending on the type and category of the document.
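One way to picture the loosely coupled rule engine described here is a small declarative table of section-to-section rules, each marked as lift-and-shift or context-only. This is an illustrative sketch; the rule fields and the document and section names are hypothetical examples, not an actual schema from the conversation.

```python
from dataclasses import dataclass
from enum import Enum

class Reuse(Enum):
    LIFT_AND_SHIFT = "lift_and_shift"  # copy the upstream section verbatim
    CONTEXT_ONLY = "context_only"      # supply it only as generation context

@dataclass(frozen=True)
class SectionRule:
    upstream_doc: str       # e.g. "Protocol"
    upstream_section: str   # e.g. "Objectives and Endpoints"
    downstream_doc: str     # e.g. "Statistical Analysis Plan"
    downstream_section: str
    reuse: Reuse

# Illustrative rules; the document and section names are hypothetical examples.
RULES = [
    SectionRule("Protocol", "Objectives and Endpoints",
                "Statistical Analysis Plan", "Analysis Objectives", Reuse.CONTEXT_ONLY),
    SectionRule("Protocol", "Study Design",
                "Clinical Study Report", "Study Design", Reuse.LIFT_AND_SHIFT),
]

def rules_for(downstream_doc: str) -> list[SectionRule]:
    """Look up which upstream sections feed a given downstream document, and how."""
    return [r for r in RULES if r.downstream_doc == downstream_doc]
```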

Now, this is a great note. Tell me, did you apply it to certain types of documents, like protocol document authoring and all of that?

Yeah, there are protocol documents. There is the SAP, the statistical analysis plan. And there are obviously efforts being made to automate some of the other downstream documents as well. But depending on the case, you can see varied degrees of efficiencies that you can get. That doesn't mean you can lock in the efficiency and say, I'm getting 80% efficiencies across the board now. It could be 40% here. It could be 60% over there. It could be 80% over here. So we are working very closely to understand what those efficiency gains are that we are seeing, obviously with the human in the loop, and also looking at where the opportunity is to keep the human-in-the-loop effort very minimal, to be able to further realize the efficiencies that we need to get. But those varied degrees of efficiencies are continual, I mean, they're evolving. You just have to keep at it, and you just have to continually make it progressive.

No, absolutely. It's just fascinating. Tell me, same question from a change management standpoint. I know that a lot of authoring, some people call it art, not science. Now you're trying to bring in scientific rigor and templates, and it's always been that challenge, especially with protocol documents and even statistical analysis plans. People think it's a little bit of an art, and you can't just templatize everything. How did you work through that? Because these are very highly efficient and scientifically oriented people you're trying to influence. And the only way to do that is through delivery of the content. How do you...

Engaging them, involving them as part of the document authoring and as part of the review process for the human in the loop, will help get their confidence levels to go up. And the more the confidence level goes up with a particular type or category of document, the more supportive you're going to see the user base be. Because ultimately, it's all about efficiencies. If you're able to shave a couple of weeks off of a particular upstream document being generated, then those two weeks are actually looked at as a very positive thing. And as your GPT evolves, 3.0 was different from 3.5, and now we are on 4.0, right? So as things evolve, you have the ability to go ahead and do more and more with AI. And that delivery using AI is actually seen as a positive by many folks. There are some traditionalists, of course, that will always look at it slightly differently. How you convert the naysayer into the yea-sayer is the key. And that involves engagement, engagement, engagement, and constant dialogue and constant feedback. And how you receive the feedback and how you apply the feedback is what is critical here.

No, absolutely. One last question down this path. You just touched upon a very critical topic, which is that as you keep changing the underlying LLM engines, things get better. But I'm talking about, as you're changing those LLM engines, there's a whole bunch of prompt versioning, making sure prompt regression testing is happening, and so on. Let's explore that a little bit. What is the process you put in play, given that the LLMs are going to continue to evolve? And therefore you need to continue to do two things. One is versioning, which may be easier, but more so regression testing, because what worked may not work, what needs to work may change, and you have newer context. Walk me through that process.

No, that is pretty classic in a regulated industry like ours, right? You have to go in and have a very rigorous life cycle of versions and releases. So if you are looking at a brand new engine that is being released out there and you want to take advantage of it, you have to bring it into a lower platform, and you have to have a core set of people go in and look at it, look at it from a use case standpoint, and also from the point of view of security, the point of view of compliance, and a lot of other things. So we have a core organization that relies on, you know, the right kind of people to look at it from those angles, and once it's ready to be released, we have the opportunity to go in and look at some of the newer use cases, or even existing use cases, that see value in adopting it, and then we go through the process of bringing it in through the proper life cycle and, you know, getting that release done.
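The prompt regression testing Sri raises could, in skeletal form, look like the sketch below: a versioned suite of prompt cases with simple automated checks, re-run whenever a new engine is brought into a lower environment, with promotion blocked if previously passing cases regress. The check style (required phrases) and the function names are assumptions for illustration, not a description of Pfizer's actual release process.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptCase:
    case_id: str
    prompt: str
    required_phrases: list[str]  # simple automated checks; real suites would go deeper

def run_regression(cases: list[PromptCase],
                   model: Callable[[str], str]) -> dict[str, bool]:
    """Run every versioned prompt case against a candidate engine and record pass/fail."""
    results = {}
    for case in cases:
        output = model(case.prompt)
        results[case.case_id] = all(phrase.lower() in output.lower()
                                    for phrase in case.required_phrases)
    return results

def safe_to_promote(old_results: dict[str, bool], new_results: dict[str, bool]) -> bool:
    """Only promote the new engine if no case that passed on the old engine now fails."""
    return all(new_results.get(case_id, False)
               for case_id, passed in old_results.items() if passed)
```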

That's fantastic. I know we talked about two very good use cases here. Venky, as you start to look at the path forward in the next five years to 2030, what do you see as headwinds? What do you see as tailwinds in AI adoption in life sciences?

There are a few things that I can think of. When it comes to headwinds, the first is standards. You are only as good as your data is. How do you go in and pay attention to the standards, the quality, and the accuracy of the delivery of data for the context to work? That is something that will continue to evolve. Yes, we pay a lot of attention to it, but we have to continue to have that focus on data standards, quality, and accuracy. The second thing is, as GenAI models continue to evolve and the evolution of models happens, we've got to figure out what the framework is and what approach we're going to have to take as things evolve, and how that fits in with respect to our vision and the direction that we want to take. The third thing is evolving dynamic context setting: rather than loading up 30 documents, 300 pages each, to be vectorized, how do you look at digitized content and dynamic context setting that would allow us to accelerate the content generation? And finally, and most importantly, how do we focus on building a model to verify the accuracy of the data generated by these AI engines, and the correctness of that data as well? So accuracy and correctness, and measuring that through some common model that would allow us to go in and basically put the...

Almost like model validation, making sure that you're doing a model validation around it. Those are really good ones. As we start to end the session, Venky, what are some key takeaways? You've been an expert in this journey. The market is evolving very quickly, with lots of new technologies coming out, new ideas. What would be some clear takeaways for the audience as they start to go down this AI journey?

I do believe that through the successful delivery of key accelerators and operational efficiencies, we are going to see the adoption rates continue and the advocacy buildup happen seamlessly. And it will also open up new avenues for additional use cases to come up and evolve. The second thing is that adoption cannot happen without training and without education. Building the confidence level is going to be very important. And with the human in the loop, you always have that opportunity to look at the output and the quality, and thereby get the confidence level up. Improving the confidence levels is a continuous exercise, because as models evolve, as GenAI is evolving, it is not something that should be looked at as a one-shot effort. As new models come up, that continuous engagement with the business, to understand what value they're seeing from it and truly measure the right level of efficiencies they are actually seeing, will help with respect to further opportunities for adoption. There are a couple of things, Sri, that I wanted to cover which might resonate with you. As we go through content generation, there are various ways of doing that, and one of them happens to be to load the entire document, vectorize it, do the prompts, the traditional ways of doing it. The second thing is really figuring out, in your drug development continuum, how you do content reuse effectively, and how you store that digitized content for reuse. The other piece is that functional experience is so key. When you go in and roll out these underlying technologies, accelerators, methods, and tools, it cannot be one without the other.

Absolutely. No, spot on. And I think you're onto something where you're actually creating the nexus between classic document management, classic authoring, and bringing in the spirit of, I'll call it AI, both in helping author the document and, I think more importantly, the second part, which is how you make the reusability happen, both at a content level and at a knowledge level. And I really want to differentiate, because before AI, you would have reusability at a content level, which is what happened: you'd pick up sections and modify them. Now, with knowledge reuse, you could actually pick up a section and even modify it for another therapeutic area, but you'll get the right context going. And as you said, with the rules engine, you can define which therapeutic areas can reuse this type of knowledge versus those therapeutic areas. So I really appreciate how you've gone about architecting it and making it such that it fits within the change management considerations of what people are used to, while constantly bringing in new innovation. So again, Venky, this was a fascinating discussion. Thank you so much for your time. Really appreciate those two use cases, and I really liked the breadth and depth of what we covered today. Thank you so much.

Thank you very much for the opportunity. Appreciate it.

Well, Sri, what did you think?

Danny, it was a great conversation. We had two very concrete use cases. I love that. The first one, on medical coding, is an industry efficiency play that has plagued the industry for a long time, and really bringing in generative AI and newer models, ways, and approaches to make that process work efficiently was great. The second use case, on document creation, is again a classic problem, but they went about solving it using, again, GenAI. And I think it's a great testimony to bringing in classic GenAI for prompting and authoring, but also using that construct of knowledge and knowledge graphs for doing content repurposing. So I thought it was a great way to articulate two very good use cases in the market.

Yeah, it's interesting. On that first use case, he distinguished between the cases where you could have the AI work with no human intervention and the ones where it was necessary to have human intervention, and just doing more with less. But I think what struck me was he talked about this being an iterative approach. Why is it important that people think of this as an iterative approach?

So Danny, it's a great question. I think of all of these AI projects as AI teammates. Let's say you come to my team. You need to be iteratively learning each other's styles. So it's iterative. And that's the same thing with AI. The model learns from the human-in-the-loop feedback, and the human learns what the AI is providing to make their jobs much more efficient. And so iterating over that, over three to four iterations, gets that team dynamic to work very well. And I think that's why an iterative approach is a very good one.

Yeah, the other thing that struck me is he talked about the importance of engaging the operational team to evolve the ML and AI and bring them along for the journey, to provide constant feedback as the ML is developed. What were your thoughts on that?

So this is classic what we call reinforcement learning. The model starts with a particular generic use case around how you would do coding. But when you go into specific sponsors, they have very specific ways in which they would code for their particular therapeutic area or their particular process. Reinforcing the model with that organizational feedback and insight is very critical. And so the human in the loop, always telling the model what works, what doesn't work, and what needs to be improved, is a very critical part of how you go about building these models together.

But more importantly, how do you adopt these models? And the thing that struck me is there was a cultural sensitivity. He talked about converting the naysayer into a believer. And you can talk about the technology and ROI, but it seems to me that the cultural issue and change mandate comes up again and again in these discussions as being so central. How can drug companies get this part right?

Change management is, first of all, a very critical part of the adoption, and you need to be proactive about how you're going to make this change happen. The second part is change happens when you start to give much more decisioning power to the operations team. What I mean by that is: what works, what doesn't work, getting feedback. It cannot be, this is the way, use it my way or the highway; that doesn't work. You've got to give them the tools, get the feedback, evolve it, and make this a process that gets better. But the third thing, which was very important in the conversation, is the framing of the ROI. The framing of the ROI was not about removing people, but about making them much more efficient, much more strategic, and focused on much higher-value work. And I think it's very important to frame how these technologies are coming in and getting implemented, because that's going to be very critical to how you can scale up adoption.

And to that point, when you asked him about measuring ROI, it wasn't about cost, but about time.

Time is such an essential part of the development process, and saving it helps you launch the drug much faster in the market, which then gets you a competitive advantage in terms of the value of your drug and the scaling of that. But more importantly, bringing good drug therapies to the market and helping patients is the best ROI in this whole process.

Well, a lot to chew on there. Sri, thanks so much for a great conversation.

Thank you.

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. For Life Sciences DNA, I'm Daniel Levine.

Thanks for joining us.

 

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast—exploring real-world AI applications in pharma—and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the “Top 100 Most Inspiring People” in life sciences.

Our Speaker

Venky Iyer is Director of Data Strategy and Automation in Global Drug Development IT at Pfizer, leading the design and implementation of AI-driven workflows that enhance drug development efficiency. He focuses on automating clinical document generation and trial operations, enabling intelligent data-driven decision-making at scale. Recognized for measurable ROI contributions across Pfizer’s global R&D landscape.