Aired:
March 13, 2025
Category:
Podcast

Leveraging AI in Clinical Development

In This Episode

In this engaging episode of the Life Sciences DNA Podcast, powered by Agilisium, we get a front-row seat to how pharma companies are turning the promise of AI into real, on-the-ground impact in clinical development. No buzzwords—just practical, tangible ways AI is transforming trials from clunky and reactive to intelligent and agile.

Episode highlights
  • Clinical teams are no longer flying blind. AI now helps craft smarter protocols from the start by analyzing what worked—and what didn’t—in past trials. That means fewer costly amendments and more confident planning.
  • The days of slow recruitment and random site selection are fading. AI pinpoints which sites will enroll faster and which patients fit best, improving speed and diversity without guesswork.
  • Forget static dashboards. AI delivers live insights from ongoing trials—flagging risks, tracking performance, and enabling trial teams to respond before small issues snowball into major setbacks.
  • Clinical data often lives in silos—messy, mismatched, and hard to trust. AI can clean and connect these scattered sources into one coherent view, unlocking higher-quality decisions.
  • With clearer, cleaner evidence and fewer errors, regulatory teams can move forward faster. AI makes submissions smoother by reducing ambiguity and building trust in the underlying data.

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. What have we got on the show today?  

Danny, we have Aman Thukral with us. Aman Thukral is the head of digital operations and clinical systems at AbbVie. He has over 16 years of experience in clinical development, technology planning, and business technology alignment. He's responsible for the clinical data repository, eCOA operations, wearables and sensors, and for piloting new technology initiatives. Before this position, he worked at various levels at Deloitte, Cognizant, and GSK. So excited to have him on the show. And what are you hoping to hear from him today? Aman has experience in integrating digital technologies into clinical development, and he's done that very successfully at many different places. So I'm hoping to have an open conversation on his AI implementation journey, what he learned from it, and what he's planning to do in the future.

Well, before we begin, I want to remind our audience they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section below. With that, let's welcome Aman to the show.

Hey, Aman, so excited to have you on the show. It's wonderful. I look forward to discussing the potential of AI in clinical development with you. So let me turn it over to you. Why don't you walk us through some of the AI use cases you have implemented in clinical development, and we can take it from there.

Thank you. Glad to be here, Srivatsan, and thanks for the opportunity. AI is everywhere, and we as a team have also made some investments and have robust plans for AbbVie in 2025 and onwards. Specifically in my area, if you divide clinical development into four major chunks, planning, startup, conduct, and closure, my team has done some good case studies in the startup portion, and we are also making strides in the conduct phase. In startup, our major focus is the transformation and setup of systems, which is a huge cost and effort for us, and we are trying to optimize it and do it faster and cheaper, such as the setup of key clinical systems and SDTM conversion. In the conduct phase, we are trying AI and ML to review the data faster and more effectively. That's what I am focusing on.

This is fantastic. Several use cases. You must have been on this journey for maybe the last 12 to 24 months. Walk us through each of those cases. Let's pick conduct and what you talked about in terms of reviewing the data faster: walk me through what the scenario was before, what improvements you made with AI, and what made it better.

So in conduct-stage clinical review, which is the remit of the clinical data management organization where I sit, the longstanding approach is deterministic checks, also called edit checks or prescribed checks. You write a rule, and once the data comes in, the rule fires or does not fire. But I think the area where machine learning or AI can help best is pattern and anomaly detection. Can you determine the outliers? Can you determine fraud, misconduct, or something the human eye cannot catch, or something your rules cannot catch? That's where we have piloted a few studies. The exact value is too early to quantify, but our journey has started. Maybe in three to five years, if we are lucky and the technology supports us, we may not need these deterministic or prescribed edit checks; machine learning or AI will do a lot of the cleaning for us, with some human intervention in the loop. Because when we started, we saw many false positives and had to fine-tune our models. That's where we are in conduct, in the clinical data review space.
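The contrast he describes, a prescribed rule that fires only on a range violation versus a pattern-based flag that can catch an in-range outlier, can be sketched in a few lines of Python. This is purely illustrative, not AbbVie's implementation; the temperature range, z-score threshold, and data are invented:

```python
from statistics import mean, stdev

def deterministic_check(value, low=35.0, high=39.0):
    """Classic edit check: fires only when a prescribed rule is violated."""
    return not (low <= value <= high)

def anomaly_flags(values, z_threshold=2.0):
    """Pattern-based flag: marks values that are statistical outliers
    relative to the rest of the series, even when every value passes
    the prescribed range check. Small samples need a lenient threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return [False] * len(values)
    return [abs(v - mu) / sigma > z_threshold for v in values]

temps = [36.6, 36.8, 36.7, 36.5, 38.9, 36.6, 36.7]
print([deterministic_check(t) for t in temps])  # every reading is in range
print(anomaly_flags(temps))                     # but one stands out from the pattern
```

The 38.9 reading never triggers the deterministic rule, yet the anomaly pass flags it; that is the class of finding he says rules alone cannot catch.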

That's really a fascinating journey. You've gone from deterministic edit checks to machine learning helping you with anomaly detection. That journey must have been quite a challenge from a people, process, and technology standpoint. Walk me through what headwinds or tailwinds you faced. Were people receptive to this? What changed that? How was the process before and after? And on the technology side, you said you had to fine-tune it; walk me through some of that.

As I said, we are in the very initial phase of our journey, and very few studies have been down this path. But from a people standpoint, it is a big change for us, and let me tell you with a real example. With a deterministic or prescribed check, if a discrepancy fires, that is always a legitimate query. You go to the EDC system, the point-of-entry system, and you ask your site staff or vendor to fix it. But in the case of AI, we call it a prediction. A prediction may or may not be a query. A human has to go and give a thumbs up or thumbs down. If it is a thumbs up, only then is it a legitimate query; if it is a thumbs down, it is not. So initially the team had a hard time with why they were giving thumbs down to non-legitimate queries. They felt it was increasing their work, and that was a huge change management issue. We had to explain that you don't have to worry, because once you give a thumbs down, the system stores that information, and in the future you don't have to give a thumbs down again. Your thumbs-down rate will automatically decrease for the coming studies.
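The thumbs-up/thumbs-down loop he describes can be sketched as a small triage queue: rejected predictions are remembered so the same pattern is auto-dismissed later, which is what makes the reviewer burden fall over time. An illustrative sketch only; the class, pattern signatures, and status strings are invented, not AbbVie's system:

```python
class HITLReviewQueue:
    """Predictions become queries only after a human thumbs-up; thumbed-down
    patterns are remembered so they are not raised again on later studies."""

    def __init__(self):
        self.suppressed = set()   # pattern signatures humans rejected
        self.queries = []         # confirmed, legitimate queries

    def triage(self, prediction):
        if prediction["pattern"] in self.suppressed:
            return "auto-dismissed"       # no human effort needed this time
        return "needs-review"

    def review(self, prediction, thumbs_up):
        if thumbs_up:
            self.queries.append(prediction)   # legitimate query, send to site
        else:
            self.suppressed.add(prediction["pattern"])

q = HITLReviewQueue()
p = {"pattern": "dup-temp-site-042", "site": "042"}
q.review(p, thumbs_up=False)   # reviewer rejects the prediction once
print(q.triage(p))             # prints "auto-dismissed"
```

The key design point mirrors his explanation: the short-term cost of an extra review step buys a suppression list that shrinks the review workload on every subsequent study.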

Nice. I think you hit upon a very valuable lesson of proving total ROI rather than incremental ROI, because in the short term the workload may have slightly bumped up, but in the long term, as you retrain the model and give it reinforced feedback, that workload starts to come down. There is that phase, though, where you have to provide effort that may be a little higher than what you're doing currently.

And another thing we learned, Srivatsan, is that some of the things we discovered were not discoverable through our legacy methods. The best example: suppose one site was entering the same temperature for all the subjects, and the temperature was well within range. What we discovered was that it was not fraud or misconduct on the site's part; the site thermometer was not working or was not calibrated. And I'm not giving you this example to point fingers at sites. In fact, they didn't know. It actually related to another procedure, but I've given you a very basic example of what was happening at the site.
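A check of the kind that would catch this, flagging a site whose in-range readings never vary, might look like the sketch below. The minimum sample size, tolerance, and site names are hypothetical; a production system would use richer variability statistics:

```python
from statistics import pstdev

def flag_constant_sites(readings_by_site, min_n=5, tol=1e-9):
    """Flag sites whose readings show implausibly low variability,
    e.g. the same in-range temperature entered for every subject,
    which can indicate an uncalibrated or broken instrument."""
    flagged = []
    for site, values in readings_by_site.items():
        if len(values) >= min_n and pstdev(values) <= tol:
            flagged.append(site)
    return flagged

data = {
    "site_A": [36.8, 36.8, 36.8, 36.8, 36.8, 36.8],  # identical every visit
    "site_B": [36.5, 36.9, 36.7, 37.1, 36.6, 36.8],  # normal biological spread
}
print(flag_constant_sites(data))   # prints ['site_A']
```

Note that every value at site_A would pass a per-record range check; only looking across records reveals the problem, which is exactly why he says the legacy deterministic checks could not find it.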

And that's a fascinating thing: you're now finding data issues a human would not have been able to detect, because a computer could sense that the pattern was suspiciously consistent. Temperatures and similar measurements are always variable, so an unvarying pattern is abnormal in itself, yet a human reviewer would tune it out precisely because the values all look the same. That is a fascinating example. So that first case sits squarely in the data management piece. You also said you were doing a lot of transformation in the setup piece. Why don't we explore that? What happened in the setup piece? Again, before and after would be great.

I think that is one of our main case studies; the review one is still in very early stages. We have a journey from data acquisition to data submission. Once the data is collected or acquired, my team is responsible for aggregating it, converting it to the format the FDA accepts, SDTM, and harmonizing it for all downstream needs such as review, listings, dashboards, and patient profiles. We are like a single data power supply to the entire organization. I have a large team that does this activity, and the previous method was simply Informatica ETL. But nowadays we are using very advanced transformation algorithms, where we have trained our models on the SDTM Implementation Guide published by CDISC and on historical datasets we have already submitted. The effort has been amazing: setup time has been reduced from 10 weeks to four weeks. For the first time in our history we have been able to produce datasets this fast. In fact, for some of our studies the transformed datasets are available before even the first subject is enrolled or screened. That lightning speed was never anticipated two years back. This is probably the most successful case study we have in the AI/ML space.
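At its simplest, the transformation he describes is a learned renaming and restructuring of raw EDC fields into SDTM-named variables. The sketch below is a toy illustration, not the actual platform; the mapping table and raw column names are invented, though the target variable names (USUBJID, VISIT, VSORRES, VSTESTCD) are drawn from the CDISC SDTM vital signs domain:

```python
# Hypothetical mapping "learned" from historical submissions plus the SDTM IG:
# raw EDC column -> target SDTM variable (VS domain)
LEARNED_MAP = {
    "SUBJID":   "USUBJID",
    "VISIT_NM": "VISIT",
    "TEMP_C":   "VSORRES",
}

def to_sdtm(raw_record):
    """Rename raw EDC fields into SDTM-named fields. A real pipeline also
    derives test codes, units, sequence numbers, and controlled terminology
    per the SDTM Implementation Guide."""
    out = {"DOMAIN": "VS", "VSTESTCD": "TEMP"}
    for raw_col, sdtm_var in LEARNED_MAP.items():
        if raw_col in raw_record:
            out[sdtm_var] = raw_record[raw_col]
    return out

print(to_sdtm({"SUBJID": "1001", "VISIT_NM": "WEEK 2", "TEMP_C": 36.8}))
```

The point of training on historical submissions, in this framing, is to infer the `LEARNED_MAP` table automatically for each new study rather than hand-coding it in an ETL tool.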

That's fascinating. So how did you go about training this particular model? Did you build a custom model or are you using a platform or are you bringing a set of agents all together? What architecture did you use to build your AI?  

So our journey started before these LLMs, like GPT or Claude or Gemini, were commoditized; we started almost two years before GPT came into production. We used AbbVie's historical datasets. We are a very diverse company, in five or six therapeutic areas, and we also used the SDTM Implementation Guide. It was not easy in the beginning because the data diversity was very high on our end. A pharmaceutical company that is only in oncology might have less variation, but we are in many therapeutic areas. As we put more studies into our platform, though, our score improved significantly, and today I think we are at 60 percent, which is a huge number, by the way. It's very hard to achieve 60 percent, but our goal is to eventually reach 90-95 percent. This is 90-95 percent of studies? Of the total datasets in a study mapped or transformed to SDTM. Now we are deploying the next iteration of this platform. The difference is that this off-the-shelf platform will not have its own custom LLM, developed before LLMs were commoditized; instead, we will use modern LLMs, and this will be a pure agentic AI architecture. That's what we are very excited for. I think the prediction and transformation scores will increase at the same speed, because even our legacy LLM was very powerful, but the whole experience and the process orchestration will be faster because of the agentic AI and the modern architecture. That's our hope. And the other use case I mentioned, review, will work alongside transformation in a single platform underneath the same architecture. So that's another value we are trying to achieve with this modern implementation. We just started; it may take another year to fully implement.

It's fascinating. Not many people have actually gone down the agentic architecture path at scale. So why don't you walk me through the different types of agents? Do you have a data agent, a review agent? What are your subcomponent agents, and what are your meta agents? Or are you not looking at it in that structure? And how do these things get orchestrated, since each agent has to talk to the others? It would be great to get your thoughts on how you break down a big problem into smaller sets of solutions.

So the architecture is as good as any conceptual architecture you would draw. We are taking three major use cases into this platform: the ingestion portion, the transformation portion, and the review portion, and the final output will be submission-ready datasets. At a high level, that's how our data journey looks in this platform. AI will help us, first, to monitor the flow; second, to transform the data faster and cheaper; and third, to review on top of the other review mechanisms we have. Purely in terms of architecture, as I said, it is a very standard conceptual architecture: we have a data layer and an application layer, but in this case we also have a security, governance, and human-in-the-loop layer. Because, as I mentioned for our review use case, a human in the loop is an absolute need to train the models in the initial studies.
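The conceptual flow he outlines, ingestion, transformation, and review stages with a human-in-the-loop gate before anything counts as submission-ready, can be sketched as a simple sequential pipeline. This is a toy illustration of the orchestration idea only; every function name and record here is invented:

```python
# Three cooperating "agents" run in sequence; a human-in-the-loop gate
# sits at the end, mirroring the security/governance/HITL layer described.
def ingestion_agent(raw):
    """Monitor the incoming flow: drop empty or unreadable records."""
    return [r for r in raw if r]

def transformation_agent(records):
    """Reshape raw records into submission-style fields."""
    return [{"USUBJID": r["id"], "VSORRES": r["temp"]} for r in records]

def review_agent(datasets):
    """Flag/remove records that fail review, e.g. missing results."""
    return [d for d in datasets if d["VSORRES"] is not None]

def human_in_the_loop(datasets, approve):
    """Nothing is submission-ready until a human approves the batch."""
    return datasets if approve(datasets) else []

pipeline = [ingestion_agent, transformation_agent, review_agent]
data = [{"id": "1001", "temp": 36.8}, None, {"id": "1002", "temp": None}]
for step in pipeline:
    data = step(data)
data = human_in_the_loop(data, approve=lambda ds: len(ds) > 0)
print(data)   # prints [{'USUBJID': '1001', 'VSORRES': 36.8}]
```

A real agentic platform would orchestrate these stages with an LLM-driven planner and message passing between agents, but the layering, and the HITL gate sitting above the application layer, is the structural idea he describes.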

So, Aman, as you start to build these types of AI models, we're in a very GxP, FDA-compliant world. Do you have people looking at incoming FDA and EMA regulations? Do you have a governance board to make sure that AI models are appropriately built and governed?

So we have AI governance within our enterprise IT organization. This will not fly or go forward without their approval. They have engineers, security experts, and compliance experts who review how the overall agentic operations are happening: is it within the boundary of what we intended? Once the new platform, which is agentic-AI based, reaches a certain stage, that board will review it and give us the green flag that yes, this is ready to go. But we are working with an off-the-shelf vendor; they have been in this business a long time, and I think they know what they're doing. Our hope is that when we go to the AI security board, we will not see many issues. If there are any gaps, we will honor them and take our time, but we will be fully compliant. And the AI governance group we have is very state of the art. They review everything very holistically because, at the end of the day, we are a pharma company; we are subject to submissions and audits. And we have that covered by that team exclusively.

So, one area that comes up as I read a lot of regulations is this notion of bias. And you've done quite a bit of training on your datasets. How confident are you that you don't have bias, and do you have bias governance models to make sure that you're not ingesting bias into the system?

There are two or three things we are very mindful of. One is bias; the second is hallucinations. And in our world, the false positives are also something we have to pay huge attention to. Purely in terms of bias, across the three use cases we predominantly have, ingestion, review, and submission, even if there is bias, a human is eventually validating. For example, for submission datasets, we run them through Pinnacle 21 (P21), which is the gold standard in the industry, to make sure the datasets are FDA SDTM compliant. For review, we have an actual data manager reviewing the output to judge whether it is truly a false positive, or whether we are seeing hallucinations or gaps in model training. Let me give you an example. Recently we saw a case where a weight dropped by 40 kilograms within a single visit window. As a clinician you know that, unless the patient has gone through an amputation or delivered a baby, 40 kilograms is still a very large drop. The system saw that as a huge drop and flagged it. But then we realized: hey, system, did you see the unit? One value was in pounds, one was in kilograms, so it was not a drop at all. These kinds of examples mean we then need to train the system: when you are comparing two values, you need to take additional variables into consideration, in this case the unit. That's where we will fine-tune our algorithms, and I think we will be able to reduce hallucinations, biases, the many false positives, and the very few false negatives in the coming years.
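The lesson from the weight example, normalize units before comparing values, can be shown directly. A hypothetical sketch with invented readings and threshold; a real check would handle many more units and carry the unit through from the source data:

```python
LB_PER_KG = 2.2046226218

def weight_drop_flag(prev, curr, threshold_kg=10.0):
    """Compare two (value, unit) weight readings after converting both
    to kilograms. Without this step, 180 LB followed by 81.6 KG looks
    like a drop of almost 100 'units'."""
    def to_kg(value, unit):
        return value / LB_PER_KG if unit == "LB" else value
    drop = to_kg(*prev) - to_kg(*curr)
    return drop > threshold_kg

naive_drop = 180.0 - 81.6                              # ignores units entirely
print(round(naive_drop, 1))                            # prints 98.4
print(weight_drop_flag((180.0, "LB"), (81.6, "KG")))   # prints False: no real drop
```

180 LB is about 81.6 KG, so the unit-aware comparison correctly stays quiet, while a genuine 40 KG drop between two KG readings would still fire.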

I think all your examples hit upon a very key part: human in the middle and reinforced learning. It's a very key takeaway that as you build AI models, you put in the right governance to feed feedback back to the model, reinforcing the learning, always with a human in the loop, and the human is the one making the decisions. As you go down this journey, you're moving pretty aggressively, from 10 weeks to four weeks. Tell me what kind of headwinds you are facing in your organization, and what the tailwinds are. I mean, if I'm the business, getting data in four weeks, ready before even the first patient visit or screening, is wonderful. So tell me about both the positives and the negatives in this journey.

I think there are tailwinds for sure. There's a lot of buzz in social media and in any media outlet you go to nowadays; it's hard to avoid the term AI wherever you go. It has created excitement within our board and our company, and we have dedicated AI investments. It's not like eight or nine years back, when if you had gone to these executives, they would say: why? Stay away. Let's tread carefully. Today they feel excited, and there is huge support. Plus, modern technology, these LLM companies and the commoditization of LLMs, has created a very positive environment where this is easily doable; you don't have to create your own LLM. That is another tailwind in this space. Finally, we work in an ecosystem. That ecosystem consists of our e-clinical vendors in clinical development, our big service providers, and our regulators. And suddenly we are seeing an evolution where our e-clinical vendors, our service providers, our regulators, and us, all four, are on this journey together. That was usually not the case when the first web wave came, then the RPA wave, then the Web3 wave. In this fourth wave, the AI wave, all of us are in together. So that is a big tailwind.

No, that's fantastic. As you think about this journey, you would have learned a lot in terms of what to do and what not to do. Based on your two years of experience, if you were to start today, what would that journey look like?

I think if I were to redo it, I would give my team a crash course on AI first, so they know what they're signing up for. That was probably a big learning. Second, when you go to executives for business case approval, you should not make it about ROI or NPV returns in the beginning; it's more about competitive advantage and doing more with less. ROI will come later. That's my other learning: you will not have a quick ROI on these AI initiatives. These are long-term bets and investments. So if you are a short-term investor, please don't do it.

You've been a leader in this space. Is it better to be a leader or a fast follower?  

I will say fast follower is better, and there are several reasons for this. The public example is Apple; that's my favorite example nowadays. They usually let others go first and then perfect the model. But once they do it, it's widely accepted, it's perfected, there are no bugs, the UX has been figured out. Fast follower is my way to go. You need to do it right. The thing with our industry, and I'm not trying to be pessimistic or negative, is that we are bound by a lot of regulations, and the wiggle room for error is limited. That's why you do it right. You can be a little slow, but you can't be left far behind. That's my personal point of view.

That's some very good advice for people. Aman, as you look forward: this two-year journey was, as you said, about bringing in the datasets, putting in the infrastructure, and change management. What do the next three years look like?

So over the next three years, we need to add more use cases and perfect this model, taking 60% to 95%. And I think the next iteration is: can we remove the human in the loop for some of these important use cases? Or, since remove is maybe a little aggressive a word, can we reduce their involvement in the loop? That will be our next iteration. And I think we shouldn't start from the standpoint of where a human is needed; start from the perspective that the machine is doing everything, and ask: do you really need a human here? If it is about patient safety, we need one. If it is the final review before submitting to the FDA, one is needed. But for some of the mundane, repetitive tasks where little intellect is required, I think we can reduce the human's load. We need humans to do better jobs.

So remove the mundane jobs and focus on the much more strategic and important ones: patient safety, regulatory, and so on. Actually, that's very similar to what the FDA recommends. They call it context of use: they look at each AI model and ask you to publish its context of use, so you state what the model's use case is and can account for the appropriate risks in using it. So very much aligned with that. And in that journey, Aman, do you feel good that we will achieve that vision? Or do you see a lack of clarity or headwinds coming in that could make that vision work or not work?

Well, I think there are headwinds, Srivatsan. It's not going to be easy. We have to compete with other major companies for talent. See, tech is not our core competency; we are a research company. So that's one thing. Second, what I've learned in the last two years is that some of these technologies are not cheap or economical. Can we fund some of these larger initiatives? We have to work creatively with our leaders and board, and with some of our peers as well, because some use cases may require different companies to come together. That's going to be another headwind. And finally, the underlying technology itself: it is a tailwind, but it is also a headwind, because it is still settling and maturing. Every day we hear a new word, a new company, new information. Once it reaches a certain point on the maturity spectrum, I think it will be easier. But as of today, it is very fluid, very dynamic, evolving very rapidly, I would say.

Very fair points, and lots of different challenges: change management, newer regulations coming in, as well as the financial costs.

And I wanted to touch a little bit on the financial cost. I like the way you said ROI is a by-product and the real return is competitive differentiation. But with that said, you went from almost a CapEx to an OpEx model, because a lot of these things are agentic and token-based. With all that, you don't know what the cost is today, because it's based on future usage. How are you thinking about the financial cost of operating an agentic AI architecture?

I think it's too early to say, but token usage is a big topic in the industry, and we have not reached a stage where we can truly quantify whether the cost savings outweigh how much we are spending on tokens; that remains to be seen, to be fair. But overall I will say, first, there is no cost you can put on saving someone's life, if that's one way of thinking about it. Second, if our drug approval gets faster, without any hiccups, without any feedback, and my team doesn't make any mistakes in database lock or data quality, and we are one day faster in launching our drug, there is an ROI for us. That's how you have to think about it in the grand scheme of things. Plus, CapEx investments are made at the enterprise level. Can we use that effectively and operate with a very minimal OpEx? I think that should be our goal.

As we start to come to the end of the discussion, what do you think should be some key takeaways for listeners from this conversation?  

Start small, aim big. That's one thing I want to say. Second, involve people and go together: a lot of your team members, your stakeholders, your ecosystem. Achieving this alone will be difficult; that's another learning we had. And third, be a long-term investor, not a short-term investor. Those are my three takeaways.

These are fantastic. We were so happy to have you on this podcast, and really excited to hear about the journey you have come through, which is a lot, and also the journey you're on from here. Thank you, Aman, for participating in the podcast. Really appreciate it.

Thank you for having me. Appreciate it.

Well, Sri, what did you think?  

Danny, it was an exciting conversation with Aman. He painted the journey to date for us and picked a couple of really good use cases: one on data review, and one on the whole process of creating submission-ready data. Both use cases are spot on, and he's been able to drive some real AI innovation through the process, so I learned a lot there. What was even more impressive was his journey forward: how you take what you do to scale, breaking it up into functional components, the agentic architecture, and looking at ROI not just from a financial standpoint but from a value standpoint. And I think that's a recurring theme. As you start implementing AI, you need to look not just at return on investment but at how it becomes a competitive differentiator: how it gets you speed to market, better drugs to market faster, and higher quality in the work you're doing. That was a very distinctive perspective on how you quantify AI programs.

It's an interesting point, because he talked about that ROI slope on one hand, and about change management and, at the same time, board enthusiasm. As you think about large-scale implementation of AI, people will rightly focus on the technical and implementation issues, but how important is it to think about things like culture and managing expectations?

I think it's really important, and he talked about it. How you frame the ask when you go to the board for AI funding is a very important part. But it's also very important to bring your people along on the journey. He talked about human in the middle: putting processes in place so that people give feedback to the AI systems, reinforcing the learning of the AI model as much as the learning of the humans. And I think that is a critical part of how you deploy and develop successful AI programs.

He also talked about cases where AI flagged anomalous data: the identical temperatures he mentioned earlier, and the weight loss case, which turned out to be a mix-up between kilograms and pounds. What does this suggest about the ability of AI to catch these types of data anomalies in real time, and what might that mean for something like a clinical trial?

The clinical trial process is very human-intensive. Much of the time we are working to make sure our data is clean: scrubbing it, cleaning it, and making sure it is submission-ready. That is the holy grail. Any technology that makes our work more meaningful, surfaces anomalies, and helps us navigate this sea of data with meaningful insight goes a long way in reducing that cycle time, and it also improves the quality of work each of the clinical trial associates is doing. Today we do a lot of mundane and boring work, but tomorrow AI could do much of the mundane work while we do much more of the strategic and important work. And I think that's a very big message to a lot of these clinical trial stakeholders: your lives are going to become better because we're removing the grunt work from your life.

You also asked about the headwinds and he talked about the competition for talent, the cost of funding some of the initiatives and the pace of change. Where do you think the biggest pain points are going to be for the industry?  

I think two parts. One is finding the right talent, but also finding the right mindset, because the people who are going to be successful are the people with a growth mindset. They want to learn, give feedback, adapt, and change. If you're not comfortable with change, with feedback, or with working in unison with technology, you're going to have a tough time adopting and adapting to this new world.

Well, it was a great conversation, and I'm looking forward to our next one.

Sri, thanks as always. Thank you.

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.

Thanks for joining us.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical, and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast, exploring real-world AI applications in pharma, and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the "Top 100 Most Inspiring People" in life sciences.

Our Speaker

Aman Thukral is a seasoned clinical systems leader at AbbVie, serving as Director for Clinical Systems & Digital Operations. With a strong focus on integrating AI and human-in-the-loop (HITL) processes into clinical data workflows, he champions innovation in SDTM transformation, electronic clinical outcome assessments (eCOAs), and inclusive trial design. He shapes the technological future of clinical development through publishing and speaking on AI-augmented methodologies.