Scaling AI Across Discovery and Clinical Development
In This Episode
In this episode of the Life Sciences DNA Podcast, host Nagaraja Srivatsan speaks with Madhu Varadarajan, a biopharma digital and AI leader with experience across R&D organizations, about what it truly takes to scale AI beyond pilots in pharmaceutical research. The conversation cuts through the hype to focus on data foundations, governance, talent, and change management as the real determinants of success. Madhu shares a pragmatic perspective on how AI has evolved not as a sudden disruption but as an outcome of scale enabled by cloud, compute, and mature data architectures, and on why organizations must rethink processes end-to-end to unlock measurable business value.
- AI at Scale, Not AI as a Buzzword
AI is not new to pharma, but hyperscalers, cloud infrastructure, and compute have unlocked its ability to operate at scale. The real shift is moving from experimentation to enterprise-grade execution.
- From ‘Specify’ to ‘Verify’ in R&D
Madhu introduces a powerful framing: if a process can be specified, software suffices; if it must be verified, AI becomes essential. This distinction is especially relevant in clinical trial feasibility, site selection, and patient stratification.
- Data Is the Currency; Governance Is the Differentiator
Multimodal data (structured, unstructured, real-world, and clinical) fuels AI, but without strong governance, stewardship, and quality controls, it becomes a liability rather than an asset.
- Beyond Pilot Purgatory
Hundreds of disconnected pilots dilute value. Real ROI emerges when organizations commit to two or three high-impact, end-to-end use cases, backed by leadership support and clear business outcomes.
- Business-Led, Technology-Enabled AI
AI initiatives succeed when business teams own the process and outcomes, with technology acting as an enabler, not the driver. Adoption follows ownership.
- Talent Is a Hybrid Equation
Scaling AI requires a mix of internal domain experts, computational engineers, and selectively acquired AI talent, supported by continuous learning and hands-on experimentation.
- Agentic Architectures Are Emerging
The future points toward agentic systems with orchestration layers, controlled autonomy, and human-in-the-loop oversight, which is especially critical in GxP environments.
- Trust, Safety, and GMLP
As AI models learn and evolve, managing model drift, bias, and validation under Good ML Practice (GMLP) frameworks becomes a core operational requirement.
- Change Management Is Non-Negotiable
Successful AI transformation depends on bringing people along, through persona-based training, transparent guardrails, and shared accountability.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Madhu Varadarajan on the show today. Who is Madhu?
Nagaraja Srivatsan (00:30)
Danny, Madhu is a biopharma technology leader with experience at Amgen and Jazz Pharmaceuticals, known for operating at the intersection of science, business, and digital innovation. He specializes in turning complex R&D needs into scalable technology strategies that elevate data, analytics, and operational excellence across discovery and clinical development. Madhu partners closely with various teams across R&D and digital organizations to deliver solutions that genuinely advance research and improve decision making. His passion lies in enabling capabilities, people, process, technology, and data to accelerate innovation and deliver better outcomes for patients.
Daniel Levine (01:11)
What are you hoping to hear from him about today?
Nagaraja Srivatsan (01:13)
Madhu focuses on leveraging AI and digital to accelerate pharmaceutical innovation and support clinical research. I want to hear to what extent he's had success doing this. Where have there been challenges or resistance, and what has he been doing to overcome them?
Daniel Levine (01:29)
Well, before we begin, I want to remind our audience that they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. And don't forget to listen to us on the go by downloading an audio only version of the show from your preferred podcast platform. With that, let's welcome Madhu to the show.
Nagaraja Srivatsan (01:56)
Hi Madhu, welcome to the Life Sciences DNA podcast. How are you doing? It would be great for you to describe your journey in digital innovation and AI. Why don't you tell us about that journey and what got Madhu to where he is today?
Madhu Varadarajan (02:11)
Thank you, Sri. Appreciate the opportunity. And I just want to state that anything I express here is my opinion and not my employer's. But yeah, I've had a long journey in supporting biopharma in various leadership positions. And the last several years, I've been a business partner in R&D, as a digital business partner, in fact. Through that journey, we've evolved technologies over a period of time, as you would know: APIs, cloud first, mobile first, and everything. And now we are in AI, and naturally this has huge potential, and we look forward to leveraging it.
Nagaraja Srivatsan (02:50)
Wonderful. Specifically in the context of R&D, as you said, the journey has come from APIs to mobile to this. Walk me through: where are we today? Is AI completely different from what we're doing, or is it an evolution of what we've been seeing in the marketplace?
Madhu Varadarajan (03:06)
I've heard the AI evolution compared to the industrial revolution, but it is also interesting. I've been reading some articles about how, if you can specify, you use software; if you can verify, you use automation and AI. So that seems to be the real evolution, in fact. And what we're seeing right now is that a lot of the underpinnings of whatever we put in as technology capabilities, whether it's APIs or hyperscalers or cloud, are enabling the scale for AI. AI has been there for several decades, but it's the scale and computation that are enabling all this right now.
Nagaraja Srivatsan (03:50)
Wonderful. Why don't we walk through a couple of use cases to really look at what, in your opinion, is a high-impact digital or AI use case? I love the frame: ‘specify,’ it's software; ‘verify,’ it's AI. So why don't we pick a couple of cases where ‘verify’ applies and walk through that process?
Madhu Varadarajan (04:08)
This is based on my experience; I'll be alluding to it. But what we're seeing is that data is the currency for AI. That is clear. So the data foundations naturally are going to be the most critical, in terms of governance, quality, stewardship, and everything around data governance. With that as a basis, if you're looking at some use cases, for example in clinical, I could think about, let's say, study feasibility assessment or study startup, right? You have plenty of data around external sites and countries, and internally around how studies have been executed in the past. So given all this, naturally you have data that is structured, unstructured, multimodal as you would say, right? You have content from various public libraries that you could use for comparison and comparative intelligence. So naturally there's a lot of data. Now, if you think about how you can apply this to understand, say, the best model for study success, or how to get to the best patient strata, or even to accelerate by choosing the right sites and countries, that's a great example, honestly.
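To make the site-selection piece concrete, here is a minimal sketch of how historical site performance might be scored and ranked, assuming a pandas DataFrame with illustrative columns (enrollment_rate, startup_days, protocol_deviations); the fields, values, and weights are hypothetical, not a validated feasibility model.

```python
import pandas as pd

# Illustrative historical site-performance data; column names and values are assumptions.
sites = pd.DataFrame({
    "site_id": ["US-101", "DE-204", "JP-310"],
    "enrollment_rate": [3.2, 1.8, 2.5],        # patients randomized per month
    "startup_days": [92, 140, 110],             # contract signed to first patient in
    "protocol_deviations": [0.04, 0.11, 0.06],  # deviations per enrolled patient
})

metrics = ["enrollment_rate", "startup_days", "protocol_deviations"]
# Min-max normalize each metric to 0-1 so they can be combined on one scale.
norm = (sites[metrics] - sites[metrics].min()) / (sites[metrics].max() - sites[metrics].min())

# Hypothetical weights: reward fast enrollment, penalize slow startup and deviations.
sites["feasibility_score"] = (
    0.5 * norm["enrollment_rate"]
    - 0.3 * norm["startup_days"]
    - 0.2 * norm["protocol_deviations"]
)
print(sites.sort_values("feasibility_score", ascending=False))
```

In practice such a score would be only one input; the 'verify' step Madhu describes is checking the ranking against the multimodal evidence rather than trusting a single weighted formula.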
Nagaraja Srivatsan (05:22)
So let's double click on that a little bit. What kind of data, past and present, is available for R&D in the study setup and feasibility stage? Where does this data reside? Is it in transactional systems? Is it in unstructured systems? Why don't we explore that a little?
Madhu Varadarajan (05:42)
So I think we see it across the board, and that's why it's multimodal. And naturally, as AI is getting a lot better with multimodal processing, I think it's hugely beneficial. For example, we have clinical trial operational data. We have transactional data. We have site data. We have data captured from the clinical trials. You're also looking at licensing data and content from other sources. And I've seen models that many companies now are offering: I will let you use my computation capabilities if you can give me your anonymized data, as a great example. I think it was either Lilly or Pfizer, I don't know who, that had a recent announcement. So we have data aplenty, and it is in different versions and formats and modes, but how do we get it to actually work toward our outcome, which is how do I get the best site or set of sites to execute a trial? That is the part we have to build on. So I'll let you continue the question, but I think there are great thoughts around what do you buy, what do you build, because that's something we're all going to get into eventually.
Nagaraja Srivatsan (06:56)
That's a good segue, buy versus build, but let's explore. Data is the new oil, as you said. We're bringing data together, but data by itself, even if you structure it, is not useful if you don't have the right algorithms or the right output. So where do you see that verification part that you're talking about? How is that being built within R&D?
Madhu Varadarajan (07:16)
I think that is a fantastic introduction to data governance. On the commercial side of the equation, data has typically been the currency for a long time, right? Although it's been processed in different formats in the pre-AI era, if you will. But in this new era, obviously with AI coming in, we have data, but we don't know if it is the right quality. Think about real-world data as an example, right? One of the biggest struggles with real-world data is that it is plentiful by nature and highly variable, but there is no harmonization. And hence real-world data becomes a huge struggle or a challenge, but also an opportunity for us to use the data more intelligently. If you look at clinical trial data, it is still a lot more structured than real-world data. But still, depending on the versions, the formats, and how we evolved in terms of the standards, there are older versions that may not be compatible with the new ones. So naturally, when you think data governance, this is where that whole process plays really strongly. You put in quality rules, you have ownership around who's the data owner, who's the steward, and who are the folks who maintain this data. So there are a number of things you can do to make sure it is good quality. But just to quickly end on that note: you can also use AI to look at the data and say, can you give me some patterns around where the inconsistencies are, and hence use it as a capability to actually enrich the data, in my opinion.
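As a rough illustration of those two ideas, declarative quality rules owned by a steward plus pattern-based flagging of inconsistencies, here is a minimal sketch using pandas and scikit-learn; the field names, thresholds, and sample values are assumptions, not any organization's governance standard.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Illustrative clinical-operations records; field names and values are assumptions.
df = pd.DataFrame({
    "subject_id": ["S001", "S002", "S003", "S004"],
    "age": [54, 61, 47, 212],                 # 212 is an obvious entry error
    "visit_date": ["2024-01-10", "2024-01-12", "2024-13-01", "2024-02-02"],
    "lab_value": [5.1, 4.8, 5.3, 48.0],
})

# 1) Declarative quality rules: explicit checks a data steward owns and maintains.
rule_violations = pd.DataFrame({
    "age_out_of_range": ~df["age"].between(18, 100),
    "unparseable_visit_date": pd.to_datetime(df["visit_date"], errors="coerce").isna(),
})

# 2) Pattern-based flagging: an unsupervised model surfaces records that look unusual.
model = IsolationForest(contamination=0.25, random_state=0)
df["anomaly_flag"] = model.fit_predict(df[["age", "lab_value"]]) == -1

print(pd.concat([df, rule_violations], axis=1))
```

The rules catch what can be specified; the anomaly model is the 'verify' side, surfacing candidates for a human steward to review rather than auto-correcting anything.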
Nagaraja Srivatsan (09:00)
So, Madhu, this AI which you're talking about is your classic AI, right? Looking at patterns, machine learning, deep learning, really learning from data. Is there any particular set of tools you think is most useful in this area of classic machine learning, or any particular compute infrastructure, or any particular technology you've seen that works better for data governance and all of this area?
Madhu Varadarajan (09:24)
Data governance has been there for a long time, so naturally there are good tools in the market; Informatica is an example, right? So we have tools that play a really key role. However, adoption, and let me talk about the pre-AI era here, adoption, usage, how do you make it delightful so that users can actually use it, maintain it, and apply it to the business process? That has always been a challenge, but nevertheless there are tools available. But if you look at the newer era of AI/ML, obviously there are tools in the market that we can use, or this is where we've had to build bespoke, customized algorithms that can look at these data patterns and help us. Now it's also interesting, you'll see that a number of larger vendors in the market are coming with their own tools that allude to this kind of capability. So I've seen, not experienced, but I've seen some organizations, some product vendors, develop their own data-pattern detection in medical monitoring, as an example. Great example, right? You're capturing data and you need to assess where the inconsistencies are and how the data conforms to requirements. So there are different ways of approaching it, through a business-centric way or a really technical way, and I think they both have their places in the world.
Nagaraja Srivatsan (10:50)
Madhu, as you start building an organization to do this type of AI work, what kind of talent are you looking for? Is this existing talent, is it new talent, is it something you have to go out and acquire? How do you start building that talent pool for these solutions?
Madhu Varadarajan (11:10)
My thought obviously is that you can use a lot of existing experience to pivot into this new era. One of the challenges I've always heard about is, hey, is digital, formerly IT, going to feel left out, right? But again, if you look at the different layers of all these engines, AI, the agentic architecture that's coming up, or the newer models I've heard about, world models and whatnot, we are just starting on the journey and I'm already seeing so much evolution. You just cannot keep up, right? So you may have to buy certain talent or certain capabilities. For example, you can buy some of the brains around the LLMs, the NLP components, and whatnot. But internally, you have a lot of capabilities: for example, the orchestration layer, the information, the processes around the current systems and other business processes. Those are internal talent that can be pivoted, because with AI/ML we know very clearly that you cannot just plug it in and expect change. It has to be rewired. The business process has to be rewired, which means you are going to utilize existing processes and functions, but transform them.
Nagaraja Srivatsan (12:31)
It's a combination, right? What you're saying is that people in the organization who are domain-centric partner with AI-centric people to solve a problem together, so that they can learn from each other, one side on the domain and domain process, the other on AI and AI implementation. Is that where you were going with this?
Madhu Varadarajan (12:51)
I still did not fully answer your question; the second part of the answer is computational engineers, if you will, right? On the AI side, that's where we need a lot of expertise to be brought into the organization. We can start with some consulting and some supplementation, but we definitely have to have internal talent. So that is an area where I think we should start looking at acquiring. And I've even seen interns being used, post-grad interns coming into this with a lot of computational skills. Now they have to be a little bit oriented towards the biotechnology and biopharma concepts. Of course, there is talent that has the best of both, but it is still a very young and growing market, if you will.
Nagaraja Srivatsan (13:35)
Yeah. So Madhu, a lot of new things are going on, as you said: new talent, new computational metrics, all of that. How does one justify spend in this area? Are there ROI models that people are putting in place? How do you justify the spend, and then how do you measure the success?
Madhu Varadarajan (13:54)
It's an excellent question, because I was reading, I don't know if you saw, the latest McKinsey State of AI report, a very interesting summarization of what's happening. For what it's worth, I know it's survey-based, but still, it sheds a lot of light on how organizations are doing and thinking. And you may have even seen the information about what J&J did. So different companies are adopting different mechanisms. The last three years have been: let us learn, let us pilot, and let us see what works. But clearly what I'm seeing as a differentiator is the organizations that have a good strategic understanding, a good amount of leadership support, and a focus on the top use cases that drive the best value. There is the concept of pilot purgatory, as they say, right? You do hundreds and hundreds of pilots, it doesn't yield value, it stretches resources thin, and it goes off into purgatory. Whereas if you look at the top two or three use cases and say, this is the outcome, the outcome I would require as a scientist or, let us say, an executive in a clinical organization: I want to get through the process 30% faster. What that means is it puts the impetus on the organization to invest across all the processes and see how you can rewire them, whereas just looking at one or two spots doesn't yield you the value. So I think the justification really comes down to good support and a good strategy for getting to the use cases with the best ROI.
Nagaraja Srivatsan (15:29)
So that's an interesting thing, right? Because now it's no longer an IT project or a project in AI. It has to be a business-impacting project, which means business has to adopt this and use it to make that 30% work. How is business approaching this? Are they seeing it as a friend, or are they feeling nervous that AI is going to come and take things away? Because what you said is spot on: if you don't build a use case that is completely strategic and end-to-end, you will not get the benefits of AI. But to do that, you have to work very closely with the business side of it. So how is the interface happening between tech and business?
Madhu Varadarajan (16:07)
The interesting parallel I can draw back to the pre-AI era: there is a classic saying that IT- or technology-led projects are always doomed to fail. You have to have business leading it, process first, as the horse before the technology cart, as they say, right? So that doesn't change. However, in the era of AI/ML, you have tooling and capabilities where business can go experiment and learn a lot, but what would be missing are the layers around trust and safety. Especially if you're looking at a GxP area, I think that's going to be critical. The second thing is data privacy and data and AI/ML biases, because a vendor may be developing a product with a limited set of data that doesn't have the diversity we are expecting, let's say, in clinical trials. So there are a number of things we need to be cautious about. Although you will see business teams starting to experiment a lot, there will be a lot of boundaries we have to put around this, guardrails, as they say, to enable it to be successful. So I still think it's going to be a very strong partnership, business-led and technology-advised, but ultimately it is for the best outcome.
Nagaraja Srivatsan (17:30)
So, business-led, technology-supported is a fantastic metaphor. Enabled. Yeah, technology-enabled. But is there a big change management effort going on to get business moving? How do you enable business to own and drive this, as you said, work with these tools, get comfortable?
Madhu Varadarajan (17:34)
Yeah, I think it's still a work in progress, honestly. A few things we are looking at, and these are general observations, not specific to my employer: when we consider AI in a business process, consider the entire workflow. Don't do it in spots and pieces; rewire the entire thing. So transformation of the process is going to be critical, which means you need the business teams who are champions, who already own the process, to actually participate in this. You are essentially giving the business teams ownership and the employees skin in the game, right? The other thing I've seen is a lot of discussion around persona-based training, because that's going to be important. AI is so powerful; you can do a lot of things and go wrong. So if you do it in a targeted, persona-based way, like you would with a traditional system but in a slightly more sophisticated sense, I think the chances of success are far better.
Nagaraja Srivatsan (18:58)
Wonderful. As we start to move things forward, you said we've come a long way in this endeavor. What are some architectural patterns which you're seeing in the marketplace to bring all of these things together?
Madhu Varadarajan (19:11)
A few things. As we look forward, I think agentic is catching up pretty heavily, right? The digital workforce, as I've seen in your other podcasts with other esteemed colleagues. So I think the infrastructure in terms of hyperscaling, whether it's high-performance computing, container-based capabilities, or supercomputation, all of those are absolutely critical. Beyond that, if you go to the application layer, you need to look at investing in orchestration layers, for example. If it's an agentic architecture, you have to have an agentic orchestration layer, so that is going to be important. Model layers are already catching up, but that is going to be a critical component. We are seeing some of the cloud vendors already offer guardrails for using specific models; that way we are not bringing our own models, possibly, and starting to leak data. So there is a lot of information security, data privacy, and advisory capability that I think we should look at. Then you cannot forget the application layer. The CTMSs, the EDCs, they will continue to stay in one way or another. Now we need to figure out how we get to those MCP endpoints, as they call them, the Model Context Protocol, and how we enable these systems to interact with the agents. Which means, I was reading a really interesting article on how you have to give levels of autonomy to the agents to work. You may start very manual, but then go into very high levels of autonomy. So there is a lot around that. And last but not least, in a GxP sense, which we all operate in within biopharma, you have to look at the trust and safety layer. That's going to be critical as well.
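As a loose sketch of the orchestration-plus-autonomy idea, here is a small Python example in which an orchestrator auto-executes an agent's proposal only when the autonomy level granted by the organization covers it, and routes everything else to a human reviewer; the agents, actions, and levels are hypothetical, not a reference to any specific framework or MCP implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST_ONLY = 1          # agent proposes, human executes
    EXECUTE_WITH_REVIEW = 2   # agent executes after human approval
    FULLY_AUTONOMOUS = 3      # agent executes and logs for audit

@dataclass
class AgentAction:
    agent: str
    description: str
    required_autonomy: Autonomy  # minimum grant needed to run without a human

def orchestrate(action: AgentAction, granted: Autonomy) -> str:
    """Route a proposed action: run it if the granted autonomy covers it,
    otherwise send it to the human-in-the-loop queue."""
    if action.required_autonomy <= granted:
        return f"AUTO: {action.agent} -> {action.description} (logged for audit)"
    return f"HUMAN REVIEW: {action.agent} proposes '{action.description}'"

# Hypothetical actions in a study-startup workflow.
actions = [
    AgentAction("feasibility-agent", "draft country and site shortlist", Autonomy.SUGGEST_ONLY),
    AgentAction("startup-agent", "send site activation packets", Autonomy.EXECUTE_WITH_REVIEW),
    AgentAction("safety-agent", "update eligibility criteria", Autonomy.FULLY_AUTONOMOUS),
]
for a in actions:
    print(orchestrate(a, granted=Autonomy.EXECUTE_WITH_REVIEW))
```

The point is only that the autonomy ceiling lives in configuration rather than inside any one agent, which is what makes it auditable in a GxP setting.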
Nagaraja Srivatsan (20:57)
GxP is a very interesting part. Is regulatory GxP, 21 CFR Part 11, a challenge, or is it easy to adopt AI into that mix? What changes in GxP and 21 CFR Part 11 in the AI world?
Madhu Varadarajan (21:11)
Very interesting. If you replace the x with ML, you get GMLP, a new standard, or rather a practice: Good Machine Learning Practice. We've already dealt with this in the software-as-a-medical-device kind of areas. But in this case, an interesting phenomenon, along with everything else, is the drift, right? The model drift, as they say, which means you train the model on a certain set of data, and as more and more data comes in, your model is learning and trying to adapt and do better. But in that case it might drift from expectations, and naturally that's a no-no in a traditional GxP area, right? So there is a lot of validation, and I don't know exactly how we're going to do it, but it's going to be a very interesting opportunity: how do we automate the validation process when there is new data and new learning, and you set some floor and ceiling for the model to give you the best outcome? At what point do we say, okay, this is going to trigger a revalidation, versus it can still stay within the limits that we've defined? So a lot of learnings. At least to my knowledge it is very early days, but it's going to be exciting.
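One hedged illustration of that floor-and-ceiling idea: compute a drift statistic such as the population stability index (PSI) between the data the model was validated on and incoming production data, and flag revalidation when it crosses a configured ceiling. The 0.10 and 0.25 cut-offs below are common rules of thumb, not values mandated by GMLP or any regulator.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and new data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
validated_on = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was validated against
incoming = rng.normal(loc=0.4, scale=1.2, size=5_000)      # production data has shifted

score = psi(validated_on, incoming)
if score > 0.25:     # illustrative ceiling: trigger revalidation
    print(f"PSI={score:.3f}: drift exceeds the ceiling, revalidate the model")
elif score > 0.10:   # illustrative warning floor
    print(f"PSI={score:.3f}: drift approaching limits, monitor closely")
else:
    print(f"PSI={score:.3f}: within the defined limits")
```

In a regulated workflow the interesting part is less the statistic than the documented decision rule: which threshold, on which features, reviewed by whom, and what evidence gets archived when revalidation is triggered.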
Nagaraja Srivatsan (22:36)
No, it is very exciting. You're spot on with GxP going to GMLP; I love that. Really looking at how you're going to get AI working. And there are also other things, like biases and model drift and a whole bunch of things you have to continue to watch out for. Doesn't that put more pressure on IT and validation? Right now, when we do validation, they say it's 30% of the software build, right? As you said, a structured build. So now if you go from specify to verify, are we now thinking 50% test cases, 70% test cases? Because you're taking something which is non-deterministic and trying to apply a deterministic methodology, which is Part 11 and GxP.
Madhu Varadarajan (23:16)
Very well said. I think it starts with a good foundation. I've seen some surveys and articles where they say one of the reasons AI projects fail is a lack of adoption of good software principles. That doesn't go away; it is going to be critical. The second thing is the data set and the data biases we include, inadvertently or not: how do we manage that, and what data is being used for validating the model? I think that is going to be critical. The more burden or onus we put on the early stage, which is the foundation, the less we will struggle later. So I think it starts with those principles we adopt early on, good software and good engineering principles, and then we deal with the non-deterministic stuff at a much lower ratio, if you will, of the whole scenario.
Nagaraja Srivatsan (24:21)
That's wonderful. Seems like quite a bit of exciting things are going on. As you said, architecturally, things are coming along. Infrastructure is coming along. Teams and talent are coming along. So, as you start to look at the next 12 to 36 months, where do you see this thing going?
Madhu Varadarajan (24:39)
In my opinion, it's going to be a lot of risk mitigation: model drift risk, data risk, vendor lock-in risk, and all that. You will see a lot of the buy-versus-build question settling, because there's a lot of excitement now. There may be a lot of temptation to say, let us build, let us build, but there's a possibility things are going to evolve so rapidly that you just can't keep up. So there are going to be some interesting buy-versus-build decisions, but we have to build a lot, honestly, because several things are going to depend on our own expertise, proprietary information, and everything. Beyond that, I think agentic is going to catch up a lot. And it is very interesting when I read about the levels of autonomy you could give an agent; it is fascinating. But would you be able to trust it right at the outset? So I think there's going to be a bit of growth and looking at how things perform. As I said, I think it's the classic design, make, test, assess cycle; that's what it is going to be. A number of these architectures are coming in on a daily, rapid-fire basis, and we need to see how we can adapt them, and in some cases maybe move later versus being the bleeding-edge adopter, if you will. It's a very interesting era.
Nagaraja Srivatsan (26:00)
Yeah. Do you think that in the future, every transactional system will have an MCP server, and you'll have, as you said, an orchestration layer which then looks at the whole clinical data flow and proactively sets up these systems, versus humans setting them up? Is that in the short-term horizon or the long-term horizon?
Madhu Varadarajan (26:19)
I was thinking about that exactly. Would you ever trust an agent to make decisions that are so near and dear to patients, clinical outcomes, and whatnot? So I think there could be a very high threshold for human intervention at some point in time. There could be increasing levels of agentic action, if you will, because that's what an agent is: an agent observes, plans, and acts. But how much trust do you put in it? It is going to build up gradually until you come to the point where you can completely entrust the agent to do exactly what it can do.
Nagaraja Srivatsan (26:57)
And what would that process be, as you said? Is it more testing or more validation? You gave some very good ideas: start upfront with design, be more thorough with the data, make sure that everything comes together the right way, define the thresholds on model drift, and all of that. Is it that upfront analysis and getting comfortable which will get you to use more of the agents? Where do you see this moving?
Madhu Varadarajan (27:23)
I think certain things can be done upfront. The software buildup, the engineering buildup, all of that can be done upfront. Models, of course, the way we test them, seeing what their limits are in terms of responsiveness, that can be done as well. But where I see possible variation is the classic data-driven testing. You can never get enough production-like data, but there are synthetic data generators. I know of AI tools in the market that can actually do synthetic data generation. So give it a whole lot of test conditions, let it generate the data, and use that to test your agents. But will it ever be 100%? I doubt it. So I think it is basically about putting enough controls and thresholds in place. And also, there is a human UI aspect to the orchestration, the digital workforce management, right? In that, how do we see the analytics? Where are the agents? How are they performing? Where are they exceeding the thresholds? I think all those factors are going to play a huge role in how we instill or build confidence.
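A minimal sketch of that data-driven-testing point: generate synthetic, production-like records for defined test conditions and assert that the agent under test stays within a threshold. The generator, the `triage_query` agent stub, and the pass criterion are all hypothetical, standing in for whatever synthetic-data tool and agent an organization actually uses.

```python
import random

def generate_synthetic_queries(n: int, seed: int = 42) -> list[dict]:
    """Generate production-like data-quality queries for defined test conditions."""
    rng = random.Random(seed)
    conditions = ["missing_lab_value", "out_of_window_visit", "duplicate_subject"]
    return [{"query_id": i,
             "condition": rng.choice(conditions),
             "severity": rng.choice(["minor", "major"])} for i in range(n)]

def triage_query(query: dict) -> str:
    """Stand-in for the agent under test: major issues must go to a human."""
    return "escalate_to_human" if query["severity"] == "major" else "auto_resolve"

# Test harness: every major-severity query must be escalated, never auto-resolved.
queries = generate_synthetic_queries(1_000)
violations = [q for q in queries
              if q["severity"] == "major" and triage_query(q) != "escalate_to_human"]
print(f"{len(queries)} synthetic queries, {len(violations)} escalation violations, "
      f"pass rate {1 - len(violations) / len(queries):.1%}")
```

The operational questions Madhu raises sit around a harness like this: which thresholds count as acceptable, and how the orchestration UI surfaces the agents that exceed them.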
Nagaraja Srivatsan (28:31)
Madhu, this is wonderful. As we start to come to the end of it, what are some key takeaways for organizations that are trying to scale AI? And what do you think it takes to go beyond pilots and experimentation to scale?
Madhu Varadarajan (28:45)
The first one obviously is skill set. I think everybody should be playing with AI no matter what, whether it's generative AI, vibe coding, whatever you want. I'm trying to take some agentic AI classes as an example, right? So I think that's going to be important: get your hands and feet wet. There's no substitute for that, because this is beyond anything else we've seen, at least in our lifetime. The second thing, obviously, is that change management is going to be critical. The business, the digital teams, everybody will have to be brought along the journey rather than leaving people out, which is what causes the concerns and frustrations. So that is going to be important. The third thing is some of the typical foundations we provide from a digital capability perspective: the data governance, the application and other infrastructure, and then advising the business on the risks, whether it's model risk or data risk. I think those are going to be absolutely critical. Those, in my opinion, would be the best steps forward.
Nagaraja Srivatsan (29:49)
Yeah. Madhu, this has been fantastic. I really appreciate your advice and very valuable insights on what's going on in the marketplace. And I love it: you need to experiment and make sure that change management happens, and of course put the right guardrails in place from a data perspective and bring the business along. So great insights. Thank you, Madhu. Appreciate it.
Madhu Varadarajan (30:11)
Yeah, thank you. I really enjoyed this. It is a fantastic topic. It'll go on for a long time, but I loved it. Thank you again.
Daniel Levine (30:21)
It was an interesting conversation. So, what did you think?
Nagaraja Srivatsan (30:23)
I think Madhu is a very thoughtful person. He gave us a lot of insights on what exactly organizations should do in their AI journey. I loved how he ended it. You need to be making sure that you're playing with AI; that was the first thing. Second, he said that as you play with AI, make sure you have a business context for doing it. Third, he wanted to make sure that you bring in the change management aspects so that businesses can adopt AI. And then last, put the right governance in place so that AI is managed properly. So I think these are very sound guardrails for implementing AI at scale.
Daniel Levine (31:03)
You say implementing AI at scale. One of the first things he said was that what's really changed is not the advent of AI, which has been around for some time, but the ability to do it at scale. How has that changed and what's the significance of that?
Nagaraja Srivatsan (31:19)
As he said, hyperscalers and compute infrastructure have just gone through the roof. And with that comes power, which you can then deploy against large-scale problems. So where we are in this era is a totally different place from where we were before: we now have the compute, the algorithms, and the ability to really discern complex problems and then solve for them.
Daniel Levine (31:41)
You asked about data quality and data governance. This was kind of a recurring theme in the discussion. Madhu had talked about real-world data and the challenges with it, because it can be unstructured and there can be data quality concerns. There are a lot of compelling reasons to use real-world data, and there's increasing availability of it despite the challenges. Do you think these challenges get solved by vendors? Do they get solved by the end users themselves? How do they manage the risk that comes with that?
Nagaraja Srivatsan (32:14)
Madhu was very clear: you need to have strong data governance. It doesn't matter where the data comes from, whether it's external or internal; you need to own the data and the data governance. That's the first part. The second thing he said is that as you start to bring the data together, you need to test it against a set threshold to verify that you're comfortable with the quality of the data. And the third is to build algorithms on top of it to see how that data can be leveraged to make meaningful decisions and selections. So he was very thoughtful in that part. Real-world data has variability, and of course you need to be able to manage that, but by putting strong data governance and data context in place, I think you can manage it.
Daniel Levine (33:00)
He also talked about change management being critical and the need to have everyone brought along on the journey. One of the things he talked about was that business teams who are champions have to be given ownership of this and have skin in the game. What did you make of that?
Nagaraja Srivatsan (33:21)
That's so spot on. He talked about how you go from multiple pilots to scale: it's businesses adopting two or three AI use cases and looking at them end to end. Once you have end-to-end ownership from a business team, then you can really use that to deploy AI at scale and make sure that it's adopted. Change management then becomes easier, because the business team is the owner and they're looking at AI to make their lives better. He has flipped the model by saying that you need to bring business along upfront, take two or three use cases, not 20 or 30, and then deploy them end-to-end to reap the benefits of AI.
Daniel Levine (34:03)
I really enjoyed the conversation today and Sri, thanks as always.
Nagaraja Srivatsan (34:08)
Danny, appreciate it.
Daniel Levine (34:13)
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.
Thanks for joining us.