Aired:
August 14, 2025
Category:
Podcast

Why AI’s Most Promising Near-Term Value Is in Clinical Operations

In This Episode

In this episode, host Nagaraja Srivatsan and co-host Daniel Levine dive into how the biopharma industry can bridge the gap between artificial intelligence and real-world decision-making. Their guest, Dr. Dimitris Agrafiotis, Director of Digital Analytics & AI at Arsenal Capital Partners and former Chief Digital Officer at Generate Biomedicines, shares his decades of experience leading data-driven transformation in pharma and biotech. With a background spanning technology, data science, automation, and informatics, Dimitris discusses how leaders can build a common language between AI specialists and business teams—enabling faster innovation, smarter R&D, and better patient outcomes.

Episode highlights

What You’ll Learn in This Episode:

  • Why biopharma companies struggle to adopt AI at scale
  • How to “translate” between data scientists, researchers, and executives
  • Real-world examples of AI transforming clinical research
  • The role of leadership in driving AI literacy across organizations
  • How to prepare for the next wave of AI-powered breakthroughs

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. Sri, we've got Dimitris Agrafiotis on the show today. For people not familiar with Dimitris, who is he?

Nagaraja Srivatsan (00:32)

Danny, Dimitris is Director of Digital Analytics and AI at Arsenal Capital Partners, a specialized private equity firm. He previously served as Chief Digital Officer at Generate Biomedicines, where he led multidisciplinary teams across technology, data sciences, automation, and digital transformation. He's worked in several large biopharma companies, and he's also held key positions in informatics and data strategies at companies like Covance.

Daniel Levine (00:59)

And what are you hoping to discuss with him today?

Nagaraja Srivatsan (01:01)

Dimitris has wonderful experience at the intersection of digital, technology, data science, and life sciences. He spans the entire R&D continuum across all of the different companies he's worked in, as well as across the venture-funded biotechs, biopharma, and services industries he's been a part of. I want to talk to him about how you unlock the potential of AI in clinical trials specifically, and some of the key challenges companies face in adopting AI.

Daniel Levine (01:29)

Before we begin, I want to remind our audience they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, hit the like button and let us know your thoughts in the comments section. With that, let's welcome Dimitris to the show.

Nagaraja Srivatsan (01:49)

Hi Dimitri, pleasure to have you on the podcast. How are you doing?

Dimitris Agrafiotis (01:52)

I'm doing well, it's a pleasure to be on this call.

Nagaraja Srivatsan (01:55)

It's really our privilege to have somebody like you to explore AI. So, Dimitri, why don't you give us a little bit of your take on what's the state of AI right now, specific to clinical trials, and where do you see it used the most?

Dimitris Agrafiotis (02:09)

In my opinion, there's a lot of what I call pilotitis. There are a lot of pilots being tried in the industry, a lot of people throwing things at the wall to see what sticks. And I think much of this is prompted by the enormous success of large language models in our personal lives, and of predictive analytics in general in our personal lives: the recommendations on Netflix, or the success of the ChatGPTs and related technologies. People have gotten a taste of how powerful AI can be in their personal lives, and now they're looking for applications in an industry setting, in their work setting. And if you look at the pharma industry, where I've spent most of my career, my entire career in fact, you have discovery, you have research, you have preclinical development, you have clinical development, you have manufacturing and commercialization. I think some of the most promising applications of these modern deep learning models, large language models, are in clinical development, because much of the work there is operational in nature. In my humble view, these AI technologies will be much more impactful in removing operational overhead than in discovering novel science.

Nagaraja Srivatsan (03:32)

That's a great segue, Dimitri. You've been visiting quite a number of product vendors and shows, not to name them, but in which area of clinical operations are you seeing a lot of use cases come out? Is it in regulatory? Is it in safety?

Dimitris Agrafiotis (03:51)

I see it in many areas, Sri. Now, whether you get the ROI from these areas, that's a very different question, but it's everywhere. The obvious one, given the success of large language models and conversational AI, is conversational agents, along with authoring generally speaking. So summarizing information, extracting information from the Internet or other sources, structured and unstructured, summarizing content, and having conversations essentially with AI bots is where a lot of the activity is happening. Think about where you have authoring: protocol authoring, regulatory filings, co-pilots or co-authors that help you assemble submission documents. And on the conversational side of this, I see a lot of activity in having conversational agents help triage inquiries, for example adverse events or safety calls or medical inquiries once the drug is on the market. Where you see most of the commercial, sort of consumer use of these models, you will see them applied first in a pharma setting as well.

Nagaraja Srivatsan (05:07)

Yeah, I think that's a great segue, right? So you said that the use cases would be where authoring is involved. So maybe that's a good use case: wherever you're creating content, see if you can have it AI-assisted or, as you said, a co-pilot. The clinical landscape is rife with authoring because we write a lot of documents and take document-centric views. So are you seeing several pilots in this area? Do you think there's any particular use case which has gone to scale? Or is it on the journey there?

Dimitris Agrafiotis (05:39)

Yeah, I will not mention examples of vendors, obviously, or homegrown solutions. There are a lot of them. There's definitely productization around authoring. One of our portfolio companies (I work for Arsenal, as you know), Certara, now offers a product called Certara Co-Author that does exactly that: it helps author regulatory documents and submission documents. So this is already reaching the market with relatively robust products. But there's a lot of activity. At one of my previous employers, again, let's leave this vague for now, we were experimenting with conversational agents for medical information or for safety. And the idea was to use these technologies in their proper setting and in a proper way. What do I mean by this? An AI engine's prediction generally comes with a confidence factor. So let's say that you want to process somebody reporting an adverse event, and you have a case processing workflow. An adverse event is reported, and you need to determine what to do with it. Now, there is a risk associated with every adverse event, and I think the most productive use cases of AI take a risk-based approach. Meaning, for example, when a case for an AE is processed, you can mathematically ascribe a likelihood that it might be benign, a non-severe adverse event, something that can be automatically classified very easily without much risk. If it is a severe adverse event and it's a complex case, then clearly a human needs to be involved, a medical professional. And you can think about workflows where you essentially bifurcate the cases, using more automation when the risk is low, and more human touch and human understanding when the risk is higher. And you can quantify those things.
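The risk-based bifurcation Dimitris describes could be sketched roughly like this. This is a minimal illustration, not any vendor's actual system: the `AdverseEvent` fields, thresholds, and routing labels are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AdverseEvent:
    description: str
    severity_score: float  # model-estimated probability the case is severe, 0..1
    confidence: float      # model's confidence in its own classification, 0..1

def route_case(event: AdverseEvent,
               max_severity: float = 0.2,
               min_confidence: float = 0.9) -> str:
    """Bifurcate the workflow: automate only when the estimated risk is low
    AND the model is confident; everything else goes to a medical professional."""
    if event.severity_score < max_severity and event.confidence >= min_confidence:
        return "automated"
    return "human_review"

# A benign, confidently classified case flows straight through;
# a severe or uncertain one is escalated to a human.
mild = AdverseEvent("mild headache, resolved", severity_score=0.05, confidence=0.97)
severe = AdverseEvent("hospitalization after dosing", severity_score=0.85, confidence=0.92)
print(route_case(mild))    # automated
print(route_case(severe))  # human_review
```

Note that the escalation path is the default return value: anything that fails either test, including a low-risk case the model is unsure about, falls through to human review.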

Nagaraja Srivatsan (07:45)

A very important point, which is that these tools give you the ability to do process re-engineering. Like you said, before, we'd process every case the same way; there was no risk factor. Case one was the same as case two, case three, case four, and then you triaged it. Now what you're saying is: let me pre-filter it, let me risk-filter it, take one stream to automation and another to a human. There will always be a human in the middle at the end, but it could be a lighter touch of the human rather than a heavier one. But process re-engineering, as you know, is a very difficult task, because you have to change SOPs, and quality will push back. So walk me through that scenario. How do you make that change happen? Because it's easier said than done, right? You say a process change is conceptually right, but it's procedurally difficult.

Dimitris Agrafiotis (08:35)

Yeah, change is difficult, particularly if you don't offer compelling arguments to the people who are supposed to do the work. This is a very complex problem. Change management is very complex for a variety of reasons, and the fault is on both sides. Meaning, I've seen it first hand: generally, heavily operational processes do not invite innovation. Individuals who are used to doing the same job over and over and over again are less receptive to change. It just comes with the job; it comes with the territory. Similarly, on the other side, I know many technologists who have no empathy for the end user, no appreciation of the user community. They cannot get into the shoes of their users, into their skins, and understand what's important to them, and how they can change their perspective and their views so that they become more receptive to the technology. So it requires sophistication in how you approach the user community, how you approach management. Everybody needs their own touch, Sri, and their own hooks. But change is a formidable adversary when it comes to technology adoption, and you need to approach it thoughtfully and cleverly.

Nagaraja Srivatsan (10:00)

No, I mean, let's explore that a little bit, because you've been a big proponent of making it easy to use. You're a very big proponent of "keep it simple, make it easy" from an adoption standpoint. Walk me through that, because I think that's so relevant to this change. Many a time, the technology is very complicated, or you're complicating the workflow because, as you said, you don't have the end user in mind. So walk me through: how would you make it easy? What are some tenets you advocate there?

Dimitris Agrafiotis (10:31)

As I said before, it is a foundational part of my being, I guess, and of my work throughout the last three-plus decades. In fact, many of my presentations, my scientific or technical presentations, end with a couple of slides around that point. One of them shows a nice iPad with a legend: "It's the apps, stupid." I'm paraphrasing, of course, James Carville, Bill Clinton's campaign advisor, who said "It's the economy, stupid." At the end of the day, what wins hearts and minds is the user experience. Okay? You can have the most sophisticated system on the backend, the most sophisticated data platform; if you cannot deliver it in a way that captures people's hearts and minds and makes their lives easier, it will all be for naught. Now, how do you build great user experiences? I think you need to have the spark yourself as an engineer, as a developer, or as a product designer. But you also need to have maniacal focus on your user and a very deep understanding of their phenotype or phenotypes. Okay? A researcher in pharma has a very different approach to innovation than somebody who collects tolls in a toll booth, just to give a dramatic example. There's nothing wrong with either of these jobs; I'm just telling you that the receptivity to innovation is very different, because one is living and breathing it, and the other is just exposed to it and has to work with it. So you have to categorize the personas, as we sometimes call it. You have to understand who your user is and what frustrates them. Okay? And when you understand this and morph it into elegant, workable software, you have to walk them by the hand and show them how their work is getting better, faster, and easier by adopting these tools. You also have to recognize, and I'm sure you do, given the line of work you're in, that sometimes automation comes at a professional risk to some people. That is a broader question, and I'm not sure this is the right setting.
But oftentimes you run into situations where the user may be a victim of this technology, and this should not be underestimated. So you have multiple audiences, and you have to make sure that you target each audience in the language that they understand, with the considerations that they need.

Nagaraja Srivatsan (13:04)

Absolutely. I think you hit upon it: you need to understand the phenotype or archetype of your users, make sure you're maniacally focused on what's good for them, give them a good experience, and make it easy such that they keep coming back, right? As you said, productivity and improvement. Very fair points, and very critical to that change. Now, going back to your process example: in all these streams with the adoption of AI, there's the human in the middle. And what kind of role does this human in the middle play? Because before, the human was just an adjudicator; now we learn from that decision, give feedback, and teach the AI back. And so the human role is becoming a lot more that of a teammate than of a judge and jury. Walk me through: how do you build systems toward these teammates? What has worked in your experience?

Dimitris Agrafiotis (14:03)

Well, this is still happening as we speak. My past experience is not of much value here, because this is an emerging discipline, essentially, where man and machine communicate in real time and one impacts the other. It's like the observer principle: the mere observation of an experiment changes its outcome. But back to the topic and the discussion here. First of all, Sri, one shouldn't underestimate this: sometimes we talk in very futuristic terms, and we're not there yet. Many of our needs are very earthly, very mundane. Many of the improvements, you can't even call them AI, practically; it's just automation, digital automation at the root of it. And yet the benefit is enormous. The practical benefit is enormous. So I think we're a little bit away from the stage where the interaction between humans and machines is really blended and one truly impacts the other in real time. I mean, there are some exceptions to this, but by and large, we're not there yet. Okay? I don't think we're there yet.

Nagaraja Srivatsan (15:19)

I think what you're saying is that it's an evolving space. And this goes back to your change management piece, because even with automation before, we did what was called end-to-end process automation: you clicked a button and it did the work. Now it is reinforced automation, which means you're learning as you go. Your adverse event example is a very good one: you're looking at the risk and processing the non-serious adverse events automatically, but by and large you'll find that it missed something, that there was a serious case in there, and you give feedback back to it so that the next time it's corrective in nature.

Dimitris Agrafiotis (15:52)

I'm sorry, can I elaborate on this point? There's this other element that is very unique about these AI technologies. I'm not sure how many people understand this, but they're stochastic in nature, particularly the generative pieces. So if you ask the same machine a day later, the same AI engine a day later, the exact same question, it's very likely you'll get a very different answer. In fact, the same AI engine today, given two successive prompts, will give you a different answer. And that is true particularly of generative approaches like LLMs, because of the very way they work: they sample, essentially, from the statistical distributions from which the training data was drawn, and they keep making novel associations. So because they're stochastic in nature and probabilistic in their outcomes, there's a degree of confidence in the outcomes. That's another very important quality that doesn't exist in regular digital systems: you can never have absolute confidence in the outcome. And there are cases in clinical development, when you deal with human lives, where this is not acceptable. So in those cases, the question is: where do you put the human in the loop, and how do you minimize waste? You can still take massive advantage of a stochastic, probabilistic system and save enormous amounts of time and effort, as you correctly said, because you use the human where they need to be involved.
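The stochasticity and the confidence point can be illustrated with a toy sketch. The output distribution and the 0.95 auto-accept threshold below are made-up values for illustration, not any real model's behavior.

```python
import random

# Toy output distribution a generative model might produce for one question.
answer_probs = {"mild": 0.5, "moderate": 0.3, "severe": 0.2}

def sample_answer(probs: dict, rng: random.Random) -> str:
    """Draw one answer from the distribution, the way generative decoding
    samples tokens: the exact same question can yield different answers."""
    r = rng.random()
    cumulative = 0.0
    for answer, p in probs.items():
        cumulative += p
        if r < cumulative:
            return answer
    return answer  # guard against floating-point rounding at the tail

rng = random.Random()  # unseeded on purpose: successive calls can differ
two_prompts = [sample_answer(answer_probs, rng) for _ in range(2)]

# Probabilistic outcomes let us gate on confidence: auto-accept only when
# the top probability clears a bar, otherwise route the case to a human.
best = max(answer_probs, key=answer_probs.get)
auto_accept = answer_probs[best] >= 0.95  # False here: a human reviews it
```

The two draws in `two_prompts` are the "two successive prompts" in miniature: both are valid samples from the same distribution, and nothing guarantees they match.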

Nagaraja Srivatsan (17:44)

Yeah, no, you're spot on. And it's an evolving field. So let me take us to the evolving field. As this market is evolving, I just blogged about multimodal AI: we're used to text-based LLMs, but we're now seeing a combination of text, voice, image, and video coming together from an experiential standpoint. It's early stage, and it does fit within the protocol authoring space and some of the other use cases we talked about. But what are you seeing about the potential of multimodal AI for clinical?

Dimitris Agrafiotis (18:20)

It's exactly the same potential that I see in my consumer life, Sri. Today you can generate video. In fact, just a day or two ago I was reading an article in the New York Times where they had 10 videos, some real, some fabricated, AI-generated, and they were asking the reader, in one of these games, to guess which ones were real. It's virtually impossible to tell them apart. So that time is already here, right? You can fabricate voice, you can fabricate speech, you can fabricate images. And we're getting to the point where these are indistinguishable from the real deal. Now, put aside all the risks that come with it, social risks and all kinds of other stuff. If it is used in a positive, productive way, think of how you can now communicate with users, or deliver training material. I mean, I don't know where to start, Sri. The way you train people, the way you communicate information generally: multimodal AI allows you to deliver the same content, the same information, to different people in a variety of different ways, many of which are a heck of a lot more effective than others, depending on the recipient.

Nagaraja Srivatsan (19:36)

Yeah, and you could almost contextualize it, as you said. That's what's going on in B2C personalized marketing, where they're personalizing content and video experiences toward the individual. And do you think that would start to get adopted in our...

Dimitris Agrafiotis (19:47)

That's exactly right, I think so. I just believe that the rate of adoption will not be as fast as people think. There's a very famous law, it's called Amara's Law, Sri. It says basically that we as humans tend to overestimate the impact of technology in the short term and underestimate it in the long term. I think our industry grossly, in my humble view again, grossly overestimates the impact of these technologies in the short term, particularly in clinical development, and grossly underestimates it in the long term. So the adoption will come slower than most people think. The incentives need to be aligned for this change to happen. The commercial models need to be defined for these changes to happen. And that will take more time. I think part of it has to do with the very highly regulated space that we're operating in, but I think it's a little more than that; that's oftentimes the easy excuse. There are other, deeper problems here, obstacles.

Nagaraja Srivatsan (21:09)

So, Dimitri, you've been here for 30 years, and you're seeing this potential of AI. Where do you stand from a productivity improvement standpoint? Is it 0.5x, 1x, 2x, 10x? Where are you seeing it? Maybe, as you said, in the next 12 months it may be 1x or 2x, but where do you see this potential from a transformation standpoint for all of us?

Dimitris Agrafiotis (21:35)

You know, if I were to give you a number, Sri, I would be hallucinating like the LLMs themselves. I cannot give you a number. I know intuitively that the productivity enhancements can be very significant. Okay? Intuitively, without any quantitative data, I expect at least 10x, probably more. Sri, you've been around for as long as I have, maybe a little less, you look younger. But think about all the manual processes involved in clinical development today. Just think data management, that alone. Monitoring. There's so much human waste... not human waste, that's the wrong expression. Waste. Waste of human time, thank you. There's so much waste of human time, and such unnecessary waste, that to me that's the lowest-hanging fruit. And 10x is probably an understatement there. Now, we have technologies today that can cut this waste very quickly. So one wonders why it hasn't happened yet.

Nagaraja Srivatsan (22:40)

And what is it, in your opinion? Is it that change management thing, the human adoption, the spirit of innovation?

Dimitris Agrafiotis (22:48)

I think it's a little more complicated. It's incentives. Oftentimes, human labor is used intentionally because it carries higher margins for some businesses. Let me put it this way. Say you run a business today where you charge your clients a hefty premium to perform fundamentally manual work that could be automated by at least 80% through computing. How incentivized are you to change that business model if it cannibalizes your company? Everybody can see the tsunami coming, but how many people can respond to this tsunami from within? So this is another interesting thought I've been having. This is not science, it's just intuition: oftentimes drastic changes come from outside the organizations that have a vested interest in the status quo. In other words, Uber or Lyft would never have been invented by a taxi company. It had to be an outsider who thinks very differently and is not confined by the constraints of the existing system.

Nagaraja Srivatsan (24:14)

I like your Uber example, and we'll explore one part of it. Uber existed because they built a business architecture. They did not have a map, that's Google. They didn't have a financial model, that was Stripe. They just put it all together within the context of that user experience you're talking about. We're in the era of agentic AI, where we have these different architectures; we don't need to be in each of the swim lanes of things to do. The disruptor can come from somebody who can bring an Uber for clinical trials. Walk me through: what do you think of this model which is evolving with agentic AI, and more so, what kind of business architectures would evolve to Uberify our clinical trial landscape?

Dimitris Agrafiotis (25:01)

That's an interesting question. Yes, listen, in the phase we're in today, yes, the Ubers of this world were catalyzed. They came about because there was a different business model leveraging a lot of different technologies that were already available. But let's not forget, Sri, that there was enormous innovation done in those foundational platforms to enable the Ubers to come and synthesize technologies and create new frameworks. So I personally think that the era of deep innovation, fundamental core innovation, is anything but over; this will continue to happen. The analogy I give is with APIs, application programming interfaces, for those of us with a bit of a technical background: you have modules, components that can communicate, that can be designed in isolation, that have a very well-defined interface to the external world, and that can be invoked and used in configurations that in many cases were never even envisioned when these components were put together. So we're now in an era where we can very quickly pull down components from gazillions of sources and reconfigure them in very unique ways. I don't think that today there is anybody who can stand completely on their own, except maybe Google, and even that's debatable. You just put together components from the marketplace and the public domain, and you can assemble a solution in no time. You can go from nothing to a multi-million-dollar business in three months or six months. Human history has never seen this before.

Nagaraja Srivatsan (26:43)

No, absolutely. And that's the art of the possible. Maybe the skill set of tomorrow is that recipe management: putting these components, APIs, and agents together to create that infrastructure. And what you're saying is that it's not 24 months from now; six months from now, 12 months from now, that innovation is constantly happening. What you said that's even more important is that the foundational layer has to catch up. If you have a bad CTMS, a bad data management tool, and a bad monitoring tool, whatever agents you build are only as good as the rate-limiting infrastructure. So getting good data infrastructure going will, as you said, accelerate or spiral this innovation, because then somebody smart will come and put these pieces and components together in a puzzle mode we never thought possible, but which now drives significant value in clinical.

Dimitris Agrafiotis (27:38)

Correct, Sri, but remember this as well. I've often pondered and worried about this; I mean, it's terrifying. The pace of change today is staggering, which means that for me to make an investment in a given piece of technology today, I'm making it knowing full well that six months to a year from now, it will be obsolete. So the way we invest in technologies changes. You know, back in the day, I wrote my first C++ code 30-some years ago, and it's used today in production systems, and it's flying. It's fantastic, okay? Because that core has stayed. Good luck today. Today you're building something, and the next day it is obsolete. So the way you invest in technologies changes completely, the bets you make change completely, the economics change completely. A lot of these things are transient; they exist today, they may not exist tomorrow. And what is very hard to do, Sri, and you know this from personal experience, is make educated bets. And what happens if your bet doesn't pan out, as is often the case?

Nagaraja Srivatsan (29:00)

And one last question, and then I'll pivot to your key takeaways. The skill sets we all got trained in, C++, fundamental architecture, all of that is still needed. But as you said, today you're talking about skill sets around making bets, looking at probabilistic things, running experiments which can fail or succeed; quite different, in my mind, from the type of skill sets we used to develop. What are the core skill sets which remain tried and tested, which I know you have a strong opinion on? And what are the new skill sets somebody has to learn and build for this changing future you just talked about?

Dimitris Agrafiotis (29:40)

I mean, there are business skills and there are technical skills. I am firmly of the opinion, and that will never change, I'll probably take it to my grave, that a fundamental understanding of how computers work remains very important. What worries me today is that there's a whole generation of engineers who use, you know, GitHub Copilot, who essentially use LLMs to help them program, but who have lost deep understanding. They don't care about efficiency; they just throw more CPUs at the problem or grab another library, and they're losing the fundamental understanding and background in algorithms and efficient computing. That, I guess, is the curse of affluence and abundance. I believe that skill is as relevant today as it was before, and I believe that skill is what will differentiate the lasting innovations from the fads, the temporary ones. Having said that, I still believe that the sources of innovation are abundant, right? There's just such a plethora of new ideas and new capabilities you're confronted with and exposed to that the task of assembling an effective, efficient, and helpful solution has become much, much easier. You can do it in a shorter amount of time and in great style. You can put together very sophisticated systems without sweating it as much as in the past. So in that sense, coming back to your question about skills, I think it's the business savvy of understanding what problem you're trying to solve, right? The ability to read between the lines, because what is asked for is not necessarily what is wanted, and not necessarily what is needed. So people need to read between the lines, take vague, amorphous user requirements, and morph them into elegant, workable products. That is a very creative endeavor. What other skills do you need? I mentioned before empathy for the end user. I think a real skill is trying to think like the providers of consumer apps and software, right? Because they go after you.
They go after you as an individual, as a consumer. And speaking about clinical systems, there is a big difference between consumer-oriented technologies and clinical technologies. The big difference is that in the clinical world, when the pharma is your buyer, you have a captive audience. The users have no choice. And that, I believe, the lack of choice, the lack of voice in what tools they use, is one of the main reasons why clinical software and clinical technologies have been so stagnant for so long. Because users don't have a say. Try that in the consumer world and you're out within three months; the company ceases to exist.

Nagaraja Srivatsan (33:05)

Yeah, that's a great point. Dimitri, I know we could keep chatting for another couple of hours on this. It's my favorite topic, and yours: driving innovation and change. What would you want the audience to have as key takeaways in this changing world of AI?

Dimitris Agrafiotis (33:22)

I think people need to remember, but I think they have the impressive performance of LLMs, right? The language that they build hide alarming truth, okay? Number one, sometimes you forget that particularly generative AI approaches basically, as I said before, draw, from the same probability distributions over the samples the training set was drawn,which is another way of saying that they're very good at interpolating and not extrapolating or generating brand new knowledge. The thing we also need to remember is that they need a lot of training data. That's the essence of the self-supervised data. But to feed these... models with three years of parameters, you need, you know, on average three data points, the data training data points per adjustable parameter of the model, right? You need a massive amounts of data. and this data needs to be a high quality. And there is, I see very little, very little attention on the quality of data today. Okay. It needs to be well structured, well organized, very well curated. It needs to be fit for purpose. And there isn't a lot of talk because that data wrangling is seen as a, not as sexy, I guess. I think I mentioned this before that we tend to forget human nature and the fact that at the end of the day, human convenience trumps everything. And there in lies the risk. If I can get to my answer easily, I'm not inclined to check the response that I'm getting. Okay. And, and this is the word that we're getting, we're getting into Sri, that it's so convenient for us to be, to do our work, that we actually are not incentivized to check, to do the validation in the back end. And the other thing, just to close this, we're clearly at the, not even the peak, close to the peak of the hype cycle. And sometimes we tend to go all in to this, assuming that the capabilities of AI will keep growing exponentially as they have done in the past. That may not be the case. 
Predictions are very hard to make, particularly about the future, as Niels Bohr put it. But we've been through two AI winters before, where the reality didn't meet the hype and the expectations. And even though it appears we're in a different phase now, that risk is still there.

Nagaraja Srivatsan (36:05)

These are great takeaways. I think the AI winter is a good one to always worry about. But on what you said about making LLMs easier to use, there's a new term coming up called AI obesity: because AI does the work for you, you're not training your mental muscles to think. A couple of research papers were just published showing that people who were using AI had less activity in the regions of the brain involved in decision making. So it's something to always watch out for. But Dimitris, thank you so much. I really enjoyed this session. We ranged across a lot of topics, and I thank you again for your time.

Dimitris Agrafiotis (36:48)

Thank you, Sri. Thanks for the opportunity. Great to talk to you.

Daniel Levine

Well, it was fascinating to listen to the two of you. What did you think?

Nagaraja Srivatsan (36:58)

It was a fascinating conversation. What I really liked about it was that it was very practical and pragmatic. Dimitris always has a good framework. He said, hey, AI is here today; it brings high throughput and productivity. But he also talked about the challenges of AI: thinking about the right use cases to build, the right type of diligence we need to do, and the right type of guardrails we have to put in place to make it successful. So I really liked the pragmatic nature of the conversation today.

Daniel Levine (37:30)

So he actually talked about operational efficiency rather than discovery being a big payoff. Is that how you see this?

Nagaraja Srivatsan (37:39)

Specific to generative AI, I would agree with Dimitris. There's a lot more scope in operational efficiencies. He talked about it: the clinical trial infrastructure has so many documents flowing back and forth between different people, and using AI to streamline that document management process and bring in more efficiencies is low-hanging fruit. He also talked about using AI for better decisioning, where we could re-orchestrate the workflow. He gave the example of safety, where you risk-analyze a case and then decide whether it has to go through a human-first workflow or an AI-first workflow. So there were a lot of good nuggets in what Dimitris said about how you could bring productivity improvements to what we're doing from a clinical and clinical trial perspective.

Daniel Levine (38:34)

You know, one of the things that really struck me is when he said that AI is stochastic in nature, particularly with regards to generative AI. If you give a prompt one day and you use the same prompt the next day, you'll get different responses. And he suggested this has implications for clinical development because it removes certainty. What does that say for the role of the human working with this technology?

Nagaraja Srivatsan (39:00)

I'll separate that question into two parts. The first is, yes, it is stochastic and probabilistic in nature, and therefore, if you ask it the same thing, you're not going to get the exact same answer every time. Now, you could put guardrails in place to make it more deterministic through better prompting, better guidelines, and so on. But you should be comfortable that when you ask for a summary of a particular document, you're going to get different language each time you ask, even though dimensionally it means the same thing. You should be comfortable working with that nuance rather than expecting black and white. If you're used to very deterministic answers, where every time the answer has to be right, where it has to be nine when you say three times three, then you need to put in the guardrails to make sure it becomes deterministic. But many of the use cases today are more probabilistic, and the role of the human reflects that. Today, you and I are communicating. I can bet my bottom dollar that every time I talk to you, you're not giving me the exact same answers, unless you're an actor working from a script. But if you give me different answers, I'm still able to interpret them, because I know what Danny is saying, and I can understand that, by and large, Danny isn't changing his position; he's just articulating it in a different way. It's the same when humans interact with AI. AI will articulate things in different ways, but we need to deal with what it is dimensionally telling us, not the specific words it uses.

Daniel Levine (40:33)

And he also said that people tend to overestimate the technology in the short term and underestimate it in the long term. Do you agree with that? And if so, why do you think that is?

Nagaraja Srivatsan (40:46)

I think it's the classic pattern of the hype cycle. Early on, we were very skeptical about gen AI when ChatGPT came in: it'll never work, and all of that. Now that people are starting to use it, they're saying it's a panacea that will solve world problems and world hunger. Of course, that's the height of the hype cycle. I think where it will land is in very specific use cases where it works, with the right guardrails, the right ROI, the right productivity measures. And then adoption becomes much more mainstream. I think we're in that cycle of hype, and it'll come down to a cycle of reality. We're in a good space, because we all know what we should be looking for, and we're putting the right constructs in place to build a better tomorrow.

Daniel Levine (41:31)

One of the things he said towards the end almost surprised me, because you think of the computer-age saying "garbage in, garbage out," and it takes massive amounts of data to train AI. But he said there's very little attention paid to the quality of data today. Is that correct? And why is that?

Nagaraja Srivatsan (41:52)

It is. Data quality has been a holy-grail problem in clinical trials, because we have different siloed systems. We don't have a uniform taxonomy. Every time we try to bring data standardization together, things start to move on. So data is, and he brought this term up, a data-wrangling problem. And truly, we have to wrangle data together to make sure we have the right infrastructure for decisioning and decision making. So he's absolutely right. Data is going to continue to be the problem for us for a long time: harmonizing the data, structuring the data, getting it right, using that right data for training, and then making the right decisions on top of it. I think he said something very clear: if you do not know how the outcome came about, you're just not going to trust the outcome. And so I think

all of us have to be chefs. We need to know how the dish is made, so that when we eat and taste it, we know exactly what went into it. I think that's a very critical learning he advocated: we need to know exactly how things are made so that we can have confidence in the output.

Daniel Levine (43:03)

It was a great discussion and Sri, thanks as always.

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com.

For Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical, and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast—exploring real-world AI applications in pharma—and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the "Top 100 Most Inspiring People" in life sciences.

Our Speaker

Dr. Dimitris Agrafiotis is a Digital, IT, Data Science, and AI executive with deep expertise in healthcare innovation. He has led global teams in information management, software engineering, and analytics for top biopharma organizations. A proven strategic leader, he aligns stakeholders to deliver complex solutions on time and within budget. As a software architect, he has designed enterprise-wide systems used by thousands worldwide. Dimitris is a widely published scientist with significant public sector advisory experience. He has pioneered advancements in AI and data integration for life sciences. An entrepreneur and innovator, he holds experience in intellectual property development. He is passionate about building empowered, collaborative organizations. His leadership fosters connectivity across functions, regions, and cultures. Dimitris continues to drive AI adoption to improve R&D and patient outcomes.