Aired:
July 10, 2025
Category:
Podcast

Making Clinical Trials More Predictable with AI

In This Episode

In this thought-provoking episode of the Life Sciences DNA Podcast, host Nagaraja Srivatsan speaks with Craig Lipset, Founder of Clinical Innovation Partners and Co-Chair of the Decentralized Trials and Research Alliance. Together, they explore the paradigm shift AI is bringing to clinical development. The conversation uncovers how predictability, not risk aversion, is the core goal of pharma, and how AI can fulfill that goal. Craig also offers an honest take on the human-AI interface, discussing the concept of AI as a “digital teammate” and the challenges of AI adoption in regulated environments.

Episode highlights
  • Predictability Over Risk Aversion: Craig reframes clinical R&D priorities—organizations aren’t afraid of risk; they just crave predictability. AI can be a driver of precision, not disruption.
  • AI as a Teammate: A compelling take on agentic AI digital co-workers that don't just automate, but collaborate. Craig shares how to manage these AI teammates like any other team member.
  • Humanizing AI: Learn how familiar interfaces and anthropomorphized digital agents (like bots with names and personas) ease AI integration into traditional workflows.
  • Change Management in AI Adoption: From grassroots experiments to top-down use cases—hear why hundreds of internal pilots may be more valuable for change than the final AI tools themselves.
  • Regulatory Wake-Up Call: As the FDA begins using AI for submission reviews, Craig warns: if regulators use AI and you don’t, your organization risks being left behind.
  • The Experimental Imperative: Discover why encouraging hands-on AI experimentation—at the individual and enterprise level—is essential for future-ready organizations.
  • Rare Disease Orgs as AI Pioneers: Patient-led, IP-owning organizations are emerging as agile frontrunners in AI-driven clinical trials, offering pharma a blueprint for low-risk adoption.

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Craig Lipset on the show today. For listeners not familiar with Craig, who is he?

Nagaraja Srivatsan (00:30)

Danny, Craig is an advisor and founder of Clinical Innovation Partners. He's also co-founder and co-chair of the Decentralized Trials and Research Alliance, a global nonprofit organization dedicated to making clinical research participation more accessible. He also spent over a decade as the head of clinical innovation at Pfizer. He advocates for an AI-driven transformation of clinical trials, advises several AI clinical analytics companies, and consults on integrating AI into medical trials.

Daniel Levine (01:02)

What are you hoping to hear today from Craig?

Nagaraja Srivatsan (01:04)

Craig has been at the forefront of innovation in clinical research and development. He has great insight into how AI can be used to improve clinical trial speed and cost. I'm hoping to hear some really compelling stories from him on how somebody could go down the AI journey and really adopt AI within their organization.

Daniel Levine (01:24)

Before we begin, I want to remind our audience they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Craig to the show.

Nagaraja Srivatsan (01:43)

Hi, Craig. Welcome to the podcast. Really excited to have you as a guest on the show. You've been an industry veteran and an expert driving innovation, and I'd love to hear about your AI journey and how that journey has evolved.

Craig Lipset (02:00)

Hey Sri, it's great to be here and I'm thrilled to participate in this conversation. Industry veteran is just a kind way of saying old, right?

Nagaraja Srivatsan (02:09)

It's experience and expertise.

Craig Lipset (02:28)

Okay, there we go. There we go. You know, it's interesting to think about our AI journeys, both in regard to how we as individuals are interacting with and using these tools in our day-to-day jobs and day-to-day lives, how we're trying to experiment and keep ourselves agile and learning, and then how we see our organizations and our peers around us starting to evolve with these tools. For me, I focus my energy on clinical trials and medicine development, and traditionally we call these risk-averse and cautious areas. But it may actually be a misnomer to call them risk averse. Sometimes I think the real word is predictable. They want their operations to be predictable, and sometimes when we talk about digital and innovation, it sounds to them like interesting things that could make execution less predictable. Is that new digital innovation going to disrupt my timeline, my budget, my expectations of quality? So it's going to be interesting to see how AI adoption sits on that traditional curve, because so much of AI done right can make our conduct more predictable. If we think beyond industry as just being risk averse and cautious, and embrace that at their core they simply want predictability, then we can lean in with some of those early adopter use cases that can drive and catalyze more predictable study execution and portfolio delivery. How does AI actually help to support those goals rather than seem like a threat to them?

Nagaraja Srivatsan (04:00)

That's a perfect framing, this predictability. I love that. So as you're advising and helping companies think through creating that predictability, what kind of scaffolding are you asking them to put in place on their AI journey? What would lead to better predictability?

Craig Lipset (04:17)

Well, it's a great question, right? Certainly a lot of it is making sure that we have the right foundational understanding of the use cases, the challenges, and the types of stakeholders that would have to be involved. This is an industry where I would typically caution entrepreneurs and others: don't assume that if you build it, they will come. Don't assume that if you see a problem and think you've fixed it, that will be adequate. Whether it's companies as large as Apple deploying ResearchKit with the belief that it would fix what ails pharma, or entrepreneurs who see a challenge like “why do we still have informed consent on paper? I'll just put it on a digital platform,” there tend to be root causes beneath those that are the true underlying challenge, rather than what an observer might see when skimming the surface. So one foundational pillar in the process is making sure you're investing the right energy upfront to understand the use case and the real root cause of the underlying challenge. But as we think about predictability, it's really being able to pull apart what that means in terms of the type of solution I'm able to stand up. Which of those pillars am I going to focus on? Is this around predictable budget, predictable timelines, or predictable quality? You know, there's this old anecdote, and I think it's more than just urban legend. It's that story where I go back to my boss at the end of completing a trial and tell him, hey, great news, I finished the study six months early and $2 million under budget, and I get admonished for that. And why would I get admonished for that? Because I misused their time and their capital. If I had done better planning and been able to better predict and manage my study, I wouldn't be showing up at the end thinking I'm delivering good news when, in an executive's eyes, what I really did was tie up his capital, those $2 million that could have been reallocated elsewhere earlier, and tie up his resources by suggesting I needed all that manpower for months longer than I actually did. So for all of these reasons, I think this lean toward predictability means having an understanding of those three priorities, time, budget, and quality, and which of those levers you think you're going to be able to impact.

Nagaraja Srivatsan (06:51)

Yeah, I think you brought up a very good point about predictability being precise. Predictability is about not overachieving or underachieving; it's about being precise. But you talked about time and quality as components. What about productivity? Maybe they used AI and were more productive, which you did not assume, and therefore got better predictability, but also productivity. Touch a little bit on what happens there, because productivity would lead to that six-month gain, and so you have to have a feedback loop to tell what you're gaining. How do you go about incorporating that productivity measure into what you're describing?

Craig Lipset (07:29)

Productivity is a really interesting measure to go after. It certainly leans in when we're thinking about job aids, companions, and co-pilots, or when we're thinking about exciting use cases for agentic AI and being able to deploy a digital workforce to perform some of my clinical operations or data management tasks within a trial. What will that near-term future start to look like when a role that's in charge of oversight is no longer just overseeing a human workforce, but a blended workforce of digital agents along with humans, all checking in at different points and milestones along the day with their job? But it's interesting to ask: is productivity in and of itself the desired end goal, or is productivity a means to an end in terms of better managing cost and time, as you had indicated? Certainly, productivity is great for retention and for talent attraction, but are those KPIs that leaders are kept awake at night about or held accountable for by their bosses, because we all have a boss, or is it really just a means to an end for achieving time and cost savings? I like to think it's both, right? With better productivity, we can give our boss's boss the signal that we have more predictable and more accountable time and cost, and quite honestly, I think quality as well with some of these agentic approaches. But at the same time, we can do right by our workforce and reduce some of the burden we've been putting on people's shoulders for quite a while, certainly since the pandemic and our Operation Warp Speed-like mentality that we can just ask people to do more and more and more, which clearly hits a tipping point.

Nagaraja Srivatsan (09:21)

Craig, you touched upon a very interesting part, which I've been framing in the market as this notion of AI teammates. You talked about digital teammates and human teammates, and the role of management in managing this integrated workforce of people and AI together. Touch a little bit on what that looks like, both the AI teammate as a coworker and the AI teammate you have to manage as a leader. You brought up both perspectives, and this is the new norm. So how do you get to this new norm?

Craig Lipset (09:53)

Right. It is interesting to think about from all these different personas, Sri. I think you set that up really well. In some cases, I'm doing my job and AI is my companion, helping me and supporting me, and I think that's the way a lot of us in recent years have started to see and interact with platforms like co-pilots and others that help me do my job. Certainly on the other end of the continuum is that threat that my job will be impacted in a negative way, that I will be replaced by AI. I realize there are some very polarizing views there, some who feel that jobs will be placed at risk and some who are adamant that they won't. I tend to land a bit more in the middle. I think there are some jobs that will be placed at risk through automation or otherwise. I don't know that that's necessarily a bad thing, because most of us don't enjoy doing rote tasks that are so easily replaceable through automation. Most of us would rather do more interesting and challenging human tasks that stimulate our brains every day, especially if we've chosen to work in research and development. But as I've seen different demos and use cases developed around what a digital workforce could look like, it's been exciting and eye-opening to see demonstrations of what an agentic-enabled workforce starts to look like, how I can have my CRAs doing some tasks or data managers doing some tasks, with some of them handled not in just a robotic, repetitive way, but by a workforce that can make certain decisions and check in with me periodically, one that I can monitor and oversee through different dashboards. And for many of us who are not AI scientists, and I don't code and I don't build these tools myself, I tend to be more of a user, a lot of these demonstrations that humanize what a digital workforce looks like are a great way to tell that story and help our non-technical friends see and appreciate what this will look like in terms that feel familiar. I have lots of roles in this enterprise that do oversight. We are overseeing our CRAs, we're overseeing our contractors at CROs and other places. So humanizing and personifying an agentic workforce in a similar way, so that I can see these digital workers in action and see when they're checking in with me at different points, is a really interesting way for us to start to visualize what this is going to feel like.

Nagaraja Srivatsan (12:50)

Yeah, so Craig, I love this notion of humanizing, and it's something I did in the past. We actually named some of our bots. We had a tax bot we named Sammy, because, you know, Uncle Sam and you're doing taxes. And when we changed our processes, we would say, did you tell Sammy that we changed our processes? In the early stages, we actually put a chair at a desk to make it happen. That was early-stage adoption. Let's touch upon this humanization; it's a very interesting concept. How do you go about humanizing an agentic AI? What have you seen in the market that has appealed to you as, yeah, this is a co-worker I can work with? Everybody uses Copilot, but are there other things and techniques one can do to humanize that? Because I think that's a very key part you brought together.

Craig Lipset (13:39)

You know, for many of us, we're working with distributed teams. We're not overseeing the shop floor, walking down the manufacturing line and putting eyes on the workers performing their tasks. We're working in this remote, dashboard-enabled universe where we're really monitoring and keeping eyes on different sources of data and platforms that let me see study progress across the different tasks that have to be performed. That's how many of us are doing oversight today. Colleagues are checking in periodically, either on a scheduled or milestone basis, to let us know how the work is going, or if there are problems that need a decision escalated to me as a manager. So it's interesting to see how models using agents are replicating a lot of that feel and flow, right? Rather than positioning agentic AI as this army of machines out in the ether, coding themselves into doing different work here and there, it's rendering them in ways that look familiar to how we do oversight and collaboration with our human workforce, where I have similar types of dashboards to monitor progress, where I have different types of analytics tools letting me know if there's an issue out there, and where workers are checking in with me periodically, either on a milestone basis or when a problem needs to be escalated for me to help make a decision before they carry out their next task. It's been eye-opening for me to see this kind of lift and shift: take the same oversight interfaces that our managers and others are familiar with and just insert the digital workforce into much of how that renders. All of a sudden it starts to feel very familiar in terms of how my digital workers will fit in with my human workers.

Nagaraja Srivatsan (15:40)

I think that's a really fascinating place where you've gone. You're saying, hey, I'm used to managing my projects and teams in certain project management and work distribution tools; I should just incorporate those digital workforces and their outputs into that same structure, so as a leader it's familiar. I'm managing work and throughput and productivity and KPIs, as you talked about, but in a very familiar way. Very interesting. Have you seen any good constructs coming out in the marketplace that have excited you, ones that have actually mastered that humanization of the workforce, or is it still an early journey?

Craig Lipset (16:16)

You set that up really well, Sri, and I guess it shouldn't surprise me, in that change management in our industry always lands best if we can make the front end look familiar and accessible. So when we think about something like automating medical writing tasks with generative AI, just slipping a machine in and assuming the process is going to keep running is a very dangerous proposition. But take automation in that case, or generative AI, and put a front end on it that's familiar to the buyers and the operators, right? So that it's almost like a Lego piece that fits in, even if on the back end it's radically different in terms of how it does its work. It's got to fit into the existing machinery, or now you're trying to do two things at once: I'm developing a faster, more agile process, and I have to do all this radical change management to make it fit into my organization. And that's true whether we're thinking about generative use cases in functions like writing, or about some of these, I think, even more exciting examples with agentic workers. One example I saw recently: entrepreneurs from a venture studio called Team8 had done some interesting fact-finding and deep dives with different pharma, biotech, and CROs around using agentic AI here, and the company they spun up had a coffee-house name, Espresso Health. Their demonstration stood out to me as one example that fits this theme. Give me a way to see this that's like that Lego piece fitting into how I operate today. You can do magic on the back end, but make it familiar and accessible on the front end.

Nagaraja Srivatsan (18:17)

I think you touched upon two concepts, right? I love it. Humanize, and the way to humanize it is to put it into familiar situations. But you hit upon a key part, which is change management. Let's explore that a little. If you're a novice, how do you start embracing this? What tools do you go to, or where do you start? Because sometimes it's top-down, as you said: medical writing, do this, use that. Is that enough? Every one of us is very curious to improve. Are there certain 101-level things you recommend to people as they start this journey? I want you to talk about it from an individual change perspective first, what I can do to change myself, and then we will explore what happens from an organization standpoint to facilitate the individual to change. So I want to talk about both dimensions.

Craig Lipset (19:08)

You know, your question reminds me of an example earlier this year where there were headlines that J&J had run some 800 or 900 internal, ground-up experiments using AI, and that they had ultimately landed on five use cases to prioritize in the organization. I saw the articles around this and made a comment on LinkedIn at the time: wow, you ran 900 experiments, but you came up with five use cases that, honestly, you didn't need 900 experiments to land on. You could have asked just about anybody, including ChatGPT, and it probably would have given you those five use cases. But then somebody commented beneath it and said, maybe the 900 experiments were part of the change process to drive the meaningful adoption of those resulting five use cases. And whether that was by design or an unintended consequence, I really think that person's comment hit it right. Because rather than just having a top-down mandate, we contracted a big consulting firm, or we asked ChatGPT, or we surveyed experts inside and out, and here are five use cases, instead they had this almost grassroots engagement where hundreds and hundreds of colleagues were using AI to pursue different use cases. They were getting hands-on with these different tools and being expansive thinkers themselves. So I do think that's going to play out to be an interesting example, and it will be interesting to see to what extent it really helps them, not in developing better and smarter tools, which is almost a level playing field today, but in implementation and adoption and meaningful use. That's the hard part, and maybe those experiments are going to give them a bit of a leg up.

Nagaraja Srivatsan (21:03)

Craig, you're spot on in that observation, because even in my own journey, I can talk about AI, but once you start to use it and see what it can and cannot do, you become that much more confident. And one of the things, going back to your humanization point: as much as AI is learning from us, we're learning about AI. It's like wanting to drive a new car; you have to take the driving test. Just because it's proven and it's out there in the parking lot doesn't mean that everybody can drive it, right? We have to do what it takes to see the fit. Some people drive too fast, some too slow. How does it fit? Adjust the seats. So you're saying, hey, experiment with it. Do the test drive in your own situation. Maybe that's going to create that humanization and human ingenuity, because once you're using it, you realize there are unintended things you could do with it. And I really like that point about the 800: it's not about 800 experiments being brought down to five, it's about 800 people who are now on board and can do not just the five, but the 5,000 different things they're now capable of, because they have the opportunity. They learned the skill.

Craig Lipset (22:13)

They learned the skill and they were given permission in the organization. And then, to your point, there's the question of how we're using these tools on our own, outside of the day-to-day. Certainly we can't go taking our confidential study protocols and throwing them into public LLMs, but there are still great ways that all of us can use these tools today. It's interesting. I saw one social media post suggesting an AI challenge: try to use a different AI tool every week yourself, just to learn, whether it's to help you build presentations or to write and develop ideas or whatever that use case may be. For those types of opportunities, I've found a network of friends that I learn from, people like you, Sri, or Angela Radcliffe, or others who I know are always on the edge, usually a step ahead of me in trying a new tool. These are people who can be my guides in the process and help me see when there's a new tool I should be starting to try and experiment with. And certainly it's interesting to have those conversations at home with my college-age children: in what ways are they afraid to use some of these tools, and in what ways are they starting to experiment themselves and build comfort and confidence?

Nagaraja Srivatsan (23:40)

I think that self-discovery and journey is a very important part of this narrative, where you have to touch and explore. This is not like ERP, where people say, I'm going to use a manual and train you: click this button, click this button, click that button, and then things are going to happen. Here, you're giving inputs, you're getting outputs, you're reacting to it, and that is what makes you better, but it also makes the tooling, and the adoption of the tooling, much better. Tell me about the other side. Humans are very ingenious; you give them tools and they'll do whatever you need. As a leader or an organization, how do you first drive the positive part of the change, right? Give them the tools to enable them. But we're also in a regulated industry. What guardrails do we have to put in place to make sure we don't have bad actors across the organization?

Craig Lipset (24:31)

It's interesting. We think about all of our change curves, and they always end up looking the same, right? In terms of the distribution of folks in our org, some are hungry to use these tools; they're using them on their own, they're itching to start. Some will never trust them, or will always lean into fears of hallucinations and misuse, of how these tools can go wrong. And most of our organizations lie somewhere in the middle; they just need guardrails and permission to use them. For most organizations, those guardrails and permissions come with safe and secure implementations of tools that their own communities can use. We already see this today, whether it's in large pharmaceutical companies, universities, or even in regulatory authorities like the FDA, with their announcements of accelerating the rollout of AI-driven tools to help support accelerated review cycles. Now, of course, there are concerns people will have, which lean heavily around privacy and security, as well as misinformation with LLMs and models that may use more open internet data, and what types of hallucinations may come out. So what are the guardrails we put in place? Well, certainly making sure we have our own private or licensed instances that can provide a little more security around including our own documents inside these types of machines. But for most of us, ultimately the decision-making is still coming from a human today. We're not self-driving our cars; we're still assisted, a human at the wheel, a human with a foot on the brake pedal. So it's still the humans who are held accountable for the work product, for the decisions being made. That's really the stopgap we have to rely on today. Just like you could have misused Google and internet search and made bad decisions in the past, the same is true today. It's still the responsibility of our people and our organizations to hold themselves accountable for the decisions they're making.

Nagaraja Srivatsan (26:44)

And Craig, you talked about that, and a regulatory agency like the FDA just published how they're using their own internal private cloud, the government cloud as they call it, as infrastructure to look at all the different documents and put together a query interface to support better review processes. If they can do it, then I think anybody in the life sciences industry can do it in a secure framework, right? As you said, make sure you're using it in a protected environment, you have your own private instance, data doesn't go over there, and you put the right guardrails in place. Then, what you're saying is, once the right guardrails are in place, you have to create that experimentation culture for people to do the 800 experiments, knowing full well only five will be the ones you take on. How do you create that? It's the very antithesis of where we started with KPIs and dashboards and wasting human capital, because for a CFO that means I spent 800 experiments' worth of people-hours, which were wasted. The flip side is that those 800 experiments are now going to make my workforce incredibly productive in the coming years, so that I'll be producing better output and throughput. So how do you balance these?

Craig Lipset (27:59)

I mean, I would imagine those 800 experiments were probably not 800 wasted hours; they were change management training and agility training. For most of us, even in an early experiment with AI, we're learning and already finding incremental efficiencies, even if they just offset the additional time needed to learn how to safely navigate a new tool. But your observation about the FDA, I think, is really astute, because as drug developers, whether we're in academia or at pharmaceutical or biotechnology companies, we always want to be able to anticipate, right? You rarely want the regulators to have more tools, better tools, and better insight engines than you yourself have, because you want to know what the regulators are going to be seeing and making decisions from. So for organizations that are not using these tools, it must be terrifying today to know that when you drop in a submission, the FDA is going to be using these types of machines so their reviewers have faster access to different types of information for their decision-making, with that type of insight and analytics support at their fingertips. If your regulatory, quality, safety, and clinical people didn't have the same before clicking submit, I would be terrified today. If anything, it'll be interesting to see what type of catalyst that alone creates for companies planning regulatory submissions to make sure they're keeping ahead of the curve. Because the worst thing that could happen is that they submit something to the agency, and the agency's use of AI uncovers an issue they missed because they were not using those types of AI-driven tools.

Nagaraja Srivatsan (29:57)

That is almost an inflection point. I remember when the AERS 2 database came and the FDA had all the safety data and information; that just drove the build-out of safety data warehouses within sponsor companies, because you wanted access to every piece of data, analysis, and analytics. You didn't want somebody else to look at your data in a different way than you did. Craig, this has been just an amazing conversation; we could go on and on. But to close on a future perspective, where do you think this is all going? If we were sitting here in the next 12 months, what would you see? And in the next 36 months?

Craig Lipset (30:36)

I think there are a lot of remarkable opportunities at our feet. But Sri, because you and I are veterans and have a bit of gray hair, we know it's not unusual for there to be remarkable opportunities at the doorsteps of life sciences organizations. For years, I think our ecosystem has done a remarkable job of avoiding the opportunities at its feet until there's almost a catastrophic, cataclysmic reason to drive adoption, some sort of step-change event, like the pandemic driving risk-based monitoring approaches for studies that then seem able to stick and outlast the pandemic. So, can these AI opportunities transform and be remarkable? Absolutely. Can they drive better study design and better study conduct? In some cases there are truly transformational opportunities, using digital twins, synthetic data, and other approaches, not just to make studies run better, but to make studies themselves look radically different, especially at a time when regulators in, say, rare diseases are signaling incredible receptivity to alternative approaches and less dependence on traditional confirmatory clinical trials. Will we embrace it? Will we be able to adopt, adapt, and change? Will we commit to the hard part, which is not whether we can run cool experiments and build cool tools, but whether we can actually pull these through into meaningful change in our organizations? You know, I'm not sure, to be honest. But what I do think is that we're starting to see some interesting new stakeholders enter the world of clinical research, stakeholders that I think will have a sense of urgency and agility and could become pharma's best friends in terms of de-risking and helping to drive adoption. In this case, I'm speaking about rare disease patient-led organizations. These are organizations that have intellectual property on phase 1-ready drugs, nonprofit organizations holding the INDs on molecules and bringing them into the clinic themselves. They're not a threat to pharma; they're developing medicines in indications that have proven too small to be commercially interesting to pharma. But they're doing it with agility and urgency, and they're not beholden to any legacy process. I believe these types of organizations will become the tip of the spear, the sandbox for pharma to see how these new tools can be deployed, understand regulatory feedback, and then start to feel that the water is a little de-risked for them to step in themselves.

Nagaraja Srivatsan (33:35)

That's just an amazing view of where things are headed, and a very practical and pragmatic conversation. So Craig, again, thank you so much for your time and insight. I really enjoyed it, and thank you again.

Craig Lipset (33:47)

My pleasure Sri, thanks so much for having these conversations here.

Daniel Levine (33:53)

Well, Craig always has such great insights into clinical development. What did you think?

Nagaraja Srivatsan (33:58)

It was a really good framework which Craig shared with us. The first thing he said is that the AI effort is about driving predictability. Humans always look for predictability, and he wanted to make sure that the outcome of AI within clinical trials is predictable. The second concept, which was even more interesting, was how you humanize AI. In this new model, with a digital workforce and a human workforce working together, it's very important that you humanize AI. The third part he said is that as you start to humanize AI, you can then provide similar mechanisms to manage work across the human and AI workforce together: common dashboards and common frameworks. So it's a very good journey as one starts down the AI adoption path.

Daniel Levine (34:52)

It's interesting because when he was talking about predictability, you then asked him about the productivity that AI could bring. What did you think about the way he responded?

Nagaraja Srivatsan (35:02)

I think what he said is that productivity is one of the KPIs, but the main KPIs that are very important in clinical trials are quality, timeline, and cost. So anything you do from a productivity and a predictability standpoint has to impact quality, time, or cost. That's how he framed incorporating productivity within the larger clinical timeline.

Daniel Levine (35:26)

Well, he also talked about change management and the use of making new tools seem familiar and the importance of getting them to fit into existing ways people work. What did you make of that?

Nagaraja Srivatsan (35:38)

He had a very interesting take on change management, with three parts to it. One, how do individuals incorporate change? He talked about an experimentation culture, really looking to experiment and try tools out and get familiar, because that's what's going to make you better. Second, he talked about, if you are not very comfortable, creating a peer group or a network of people you can lean on, share stories with, and learn from. And the third part, he said, is that as organizations, you need to be measuring both the quality of the outcome and the journey people go through to get to that outcome. The more people who can learn and come along, the better the organization is going to be.

Daniel Levine (36:22)

And in that context, he was talking about experimentation and how hands-on use through implementation, adoption, and clinical use allows you to discover what the technology is capable of and to exploit it in unexpected ways. Has that been your experience?

Nagaraja Srivatsan (36:42)

This is really a personal journey where every day you're learning new things, what the tools are capable of, as well as their shortcomings. Without experimenting, it is very theoretical; when you start to experiment, it becomes very practical, what it can do, what it cannot do, and how you can then incorporate it within all the aspects of what you're doing.

Daniel Levine (37:06)

One of the interesting things he said was about the implications of the FDA using AI to uncover issues that drug developers may have missed when the agency goes to look at a package. This creates an interesting new type of risk for drug developers to contemplate. Have you heard companies think about this?

Nagaraja Srivatsan (37:28)

We finally have a burning platform. When the regulators start to use tools, the adoption of those tools starts to accelerate within sponsor organizations. This press release from the FDA on using a tool to help them with their submission review process, done in a secure way, gives other companies the comfort that, A, they can embark on the journey, and B, if they do it in the right way, they will be at the same level as the regulatory agency. Those who do not want to embark on this journey will surely be left behind, because they are not able to learn the tools' potential and shortcomings. That learning curve will become a competitive disadvantage.

Daniel Levine (38:14)

Well, Sri, it was a really thought-provoking conversation. Thanks as always.

Nagaraja Srivatsan (38:20)

Thank you, Danny. Appreciate it.

Daniel Levine (38:25)

Thanks again to our sponsor, Agilisium Labs.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast, exploring real-world AI applications in pharma, and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. He has been recognized twice by PharmaVOICE as one of the “Top 100 Most Inspiring People” in life sciences.

Our Speaker

Craig Lipset (he/him/his) is a recognized leader in clinical research innovation and digital health transformation. He is the Founder of Clinical Innovation Partners and Co-Chair of the Decentralized Trials & Research Alliance. Previously, Craig served as Head of Clinical Innovation at Pfizer and played key roles in founding TransCelerate Biopharma, Perceptive Informatics, and Adnexus Therapeutics. Craig has pioneered several industry firsts, including the first fully virtual clinical trial and the return of trial data to participants. He advises top tech and biopharma companies, academic institutions, and venture firms. He also serves on multiple boards, including MedStar Health Research Institute and the Foundation for Sarcoidosis Research. Named to the PharmaVOICE Red Jacket Hall of Fame, Craig has been recognized by The Medicine Maker, CenterWatch, and Pharmaceutical Executive for his contributions to innovation in life sciences.