Why Biopharma Should Recognize Data Analytics and AI as Strategic Advantages
In This Episode
This episode of the Life Sciences DNA Podcast urges biopharma leaders to think differently—AI and analytics aren’t just digital tools; they’re strategic power plays. The conversation walks through how companies can embed AI into the business fabric to unlock long-term competitive edge.
- Encourages leadership to move beyond pilot projects and recognize AI as central to transformation, not just innovation.
- Illustrates how data is changing the game from molecule discovery to market deployment—unlocking speed, cost-savings, and insight.
- Outlines what it means to have AI at the core—where people, processes, and platforms are aligned for intelligence-led decision-making.
- Shares real-world stories from companies that have gone from experimentation to enterprise-wide results.
- Ends with a direct appeal to leaders: own the change, invest in scalable data foundations, and foster a future-forward culture.
Transcript
LSDNA Episode 4:
Nagaraja Srivatsan
founder of the consulting firm Vidya Seva
Daniel Levine (00:00.398)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients ranging from early stage biotechs to pharmaceutical giants to co-develop and incubate POCs, products and solutions that improve patient outcomes and accelerate the development of therapies to the market. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics
to help you turn your visionary ideas into realities, visit them at labs.agilisium.com.
You're tuned to Life Sciences DNA with Dr. Amar Drawid
Daniel Levine (01:08.526)
Amar, we've got Nagaraja Srivatsan on the show today. For listeners not familiar with him, who is he? Srivatsan is the founder of consulting firm Vidya Seva. He has more than 30 years of experience growing businesses in digital data analytics and IT. Today he's helping a lot of healthcare and consulting companies unlock the growth opportunities to scale their businesses. He has previously served as the chief digital officer
for R&DS technologies for IQVIA. Prior to that, he served as the chief growth officer for EXL, where he oversaw the sales and marketing, consulting, and strategy functions. He has also worked as a venture partner for Cognizant, where he incubated and grew innovative, digitally enabled ventures across the healthcare and life sciences value chain. Srivatsan did his undergraduate
work in electrical engineering at BITS in India and has a master's degree in electrical engineering and computer science from Northwestern. He's someone with an enormous amount of experience in data analytics applied to the life sciences. What are you hoping to hear from him today? Srivatsan has long been involved in data analytics, AI and their application to healthcare and life sciences.
So he has a lot of great insights into this evolution, where we are today and where we may be heading. So I'm hoping to get his perspective on the evolution of these, how it's changing the life sciences now, how the small health tech and consulting companies could be capitalizing on this potential, and what advice he has for biopharma companies on adopting data analytics and AI to their full potential.
Well, if you're all set, let's welcome Srivatsan to the show.
Daniel Levine (03:10.734)
Srivatsan, thanks for joining us. You've had a direct view into how data analytics and AI have been evolving and reshaping life sciences throughout your career. So how has the pace of innovation in this area accelerated, and where do you think we are in terms of the evolution of this technology and its impact and application in life sciences? Thank you very much, Amar. It's a very interesting question. If you really looked at
the early days of data and analytics, we spent a lot of time around how do we collect the data, standardize them. And then we dumped a whole bunch of data on people and said, okay, now I have the data, do something with it. The market evolved, people started to do reports, dashboards. But again, it was our way of saying, okay, I think you need this type of data and insight, this type of trend and pattern. Let me show it to you
and you use that. And again, it was measured in terms of metrics of usability, metrics of how often did you come back to that dashboard, what components, but it was still what I call a push model. As you start to look at where market is going, the evolution is to bring data, analytics, and information at the point of positioning.
So I could give you a spreadsheet, but if I can't give you that information, when you're making the decision, you're discerning data and trying to figure out what it is and then translating that into another place where you can actually make a decision. So data and analytics is really transforming itself into...
from a push model to a pull. You pull the information when you need it, where you need it, how you need it, but all of that to be there for you to enable decisioning. So the market has really evolved where we're no longer spending majority of the time in bringing data together and we can actually spend a lot of time in utilizing the data
Daniel Levine (05:29.038)
to do things better. And how have you seen the decisions now being changed, right? Like decisions being dependent on the analytics and how is that really changing the R&D as well as the commercial aspects of the life sciences now?
So if you really look at it, let's take both sides of the equation, R&D and commercial. Let's come to commercial. Commercial has always used data. We bought data from vendors like IQVIA, Symphony Health. We had all the scrip data. We have affiliation. We have other data sets which we bring together. And this is the only industry where incentive compensation,
territory management, salesforce effectiveness are all driven from data. So commercial was very much in the data analytics part of the ecosystem. But if you really thought about how the marketplace evolved, we spent a lot of time bringing data together and informing people. And that process of informing people was good, but it was like drinking from a fire hose.
You gave people, salespeople, two gigabytes of data, gave them everything they wanted, every insight they needed to be more and more effective. As you start to go more and more into the commercial side, you want to give the salesperson these nuggets of information before they go and visit a doctor's office. Be very specific, very targeted.
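To make the idea of a targeted pre-call "nugget" concrete, here is a minimal sketch: given model scores for candidate talking points for one HCP visit, surface only the top few. Everything in it (the function name, messages, and scores) is hypothetical and for illustration only, not something described in the episode.

```python
# Hypothetical sketch: surface only the top pre-call insights for one HCP visit.
# The messages and scores are made up; in practice they would come from
# upstream targeting models.

def top_pre_call_insights(candidates, n=3):
    """Return the n highest-scoring insights, most relevant first."""
    return sorted(candidates, key=lambda item: item["score"], reverse=True)[:n]

candidates = [
    {"message": "Recent formulary change in this territory", "score": 0.88},
    {"message": "New clinical data relevant to this specialty", "score": 0.81},
    {"message": "Competitor share shift versus last quarter", "score": 0.74},
    {"message": "Generic reminder about sample availability", "score": 0.31},
]

for insight in top_pre_call_insights(candidates):
    print(f'{insight["score"]:.2f}  {insight["message"]}')
```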
As a salesforce person comes in over the week, you want to give them specific feedback on how they did vis-a-vis their competition. So data analytics is changing to what is, A, right for the persona, but also helping them with the decisioning framework. Now let's take a look at R&D, and even in R&D, research and development are separate. In the research area, we're now starting to use data
Daniel Levine (07:39.438)
to find what is the next generation of biomarkers? How do we find what's the right companion diagnostics? We're looking at data and information to inform us on better trial design. Now, the previous model was I would give you a lot of data and information and you have to find the needle in the haystack. Right now, these are becoming very contextual, sometimes even conversational, saying, okay, I've seen
this type of pattern of molecule, can you tell me what are similar ones? The greatest example of that decisioning, Amar, was when COVID happened. We all knew COVID had a particular design pattern off of a virus. And very quickly, one had done the literature search to say, these are 10 potential target areas if you have to go after this particular viral vector. So again,
things are changing from providing you information to contextualizing and giving you that frame to make decisions. And there are other use cases too in the clinical side where you have the same thing when you're trying to do a clinical trial development and as you're going through each of the different processes of patient recruitment. Data and analytics before was, I'll tell you, oh, you could go after...
these 20 zip codes to go and recruit patients, to right now saying, these are the hospitals, this is the kind of players, and this is who you need to go to. Similarly in site selection, in feasibility, and then driving it through the clinical development and trial process. So data analytics is such a critical part. And more and more, I see that it's going from just giving you information,
to giving it to you in a consumable format for you to make effective decisions. Okay. Now, as these are, you know, changing things, we also see that there are these big promises about cutting costs, right? But by and large, the costs continue to rise and the development times haven't changed significantly for clinical development.
Daniel Levine (09:58.062)
What about AI? Is AI different and is AI going to be changing things quite a bit? Yeah, there's a two-part question that you had. And so let me answer the two parts. So the first is, you asked about the cost to make these decisions happen. And, you know, as you can imagine, data volumes are increasing.
More and more we're dealing with more varieties of data, more velocities of data. The five V's of data are completely applicable to every area of life sciences. And so as you deal with more data, of course, naturally, cost is going to go up because they come in different formats, different volumes, different structure. So there's hardware or cloud costs, there's compute costs, storage costs. The infrastructure cost is going up, but...
the value from that is much bigger because you're able to drive better and more effective decisions. And so as much as the cost is going up vis-a-vis previous years, the value delivered is so much more and different. And that's because of much more scalable platforms, more standardization of the data, more ways in which you can then bring this data to actually do
newer insights and newer decisions, the value is great. So yes, data costs, infrastructure costs, the operations costs are going up. But if you look at the outcome and impact, the cost is not going up linearly relative to the outcomes and the output you get. That's the first one. Yeah. And so what you're saying is that the cost will continue to go up,
but then the value we are actually getting is multiples of the cost, right? And the interesting point there is that these have now become part of doing the business, right? They have become that integral part of the business. So we can't really think about it like, okay, well, this is something separate that you bring in; it is part of it. Like, in order to do a life science business,
Daniel Levine (12:18.958)
whether it's R&D or commercial, you have to have data analytics, but it is also then driving the decision. So it is becoming integral in all parts of life sciences, right? Absolutely. Data operations, data insights, data analytics, the full continuum is now a cost of doing business. But for people who can do it best, it's a differentiator. And people who can enable their...
teams to capitalize on that data and make better and effective decisions, that makes it even more, as you said, a multiplying factor. And I think that therein lies the difference between the upper quartile and the lower quartile of people who can use this data and data analytics infrastructure to make effective decisions. Now, you asked a question before on AI. AI has a
definitely a big impact on how you can reduce the cost of these operations. You can use AI to improve your data ingestion, data quality. How do you storm and norm your data? How do you bring these things together? It also can help you with how do you consume the data? And as I said, when you start to move from this push model of dashboarding to a pull model of conversing with the data or even having data storytelling,
AI plays a very big part in providing that speed to the insight, speed to decision, speed to value. And so AI is tremendous, but with AI comes its counterpart. It has hallucination. It has things which you have to guard against. And therefore you have to have an operationalization infrastructure to make AI work. It's not like, okay, let me just download this,
do it and you get it. You need to have a robust operational infrastructure. But if you can adopt that correctly, sky's the limit on what you can do. Now, you talked about companies having access to the data, but there is a difference in how well the companies are using data to get the right insights and incorporating that into the decisions of their business.
Daniel Levine (14:41.486)
Can you give some... like what have you kind of seen as like some of the challenges around this and do you have any advice for the life science companies, the biotech and the pharma companies about how they can use the data for their maximum benefit?
Yeah. So unfortunately, whether you take R & D or commercial, we live in a Tower of Babel. Data is everywhere, but not all of it is communicated in the same language. It is stuck in transactional systems. When you export it, it is stuck in systems which don't have master data structures. The metadata management is not there. And so,
we have a problem of good quality of data. We also have a problem of how do we bring that, storm it, standardize it together. So a lot of effort goes into providing that single version of the truth so that then we can use that to make effective decisions. So I call it the bottom part of the iceberg and the top part of the iceberg. The top part of the iceberg is where you're making decisions, doing analytics. That's only the tip of the iceberg.
The bottom part of the iceberg is in data operations, ingestion, standardization, metadata structures, master data management. And this takes a lot of good domain thinking and good efforts to bring it together. Now, we've come a long way. Standards are evolving. If you go into the clinical, you have SDTM standards. If you go into other areas, you have OMOP,
you have FHIR. And so a lot of standards are evolving. And so as you start to bring these standardized data structures together, you can then start to make much more effective decisions. In commercial, we have a lot of different standards. Master data management is much more robust, whether it's a physician master or any other data related to commercial. As you start to bolster good infrastructures to standardize the data, map them,
Daniel Levine (16:58.19)
standardize them, you can then have much higher confidence in terms of the decisions you make. And so we have to start looking at this continuum or the iceberg in a full continuum because it's garbage in, garbage out. Bad data leads to bad decisions. And so one needs to really look at what decisions am I making based on that.
What are the data sets I need to use to do it? How good quality are they? Where am I getting it? And then operationalizing it such that it happens every time I start to make that decision. And so having that continuum of an infrastructure is very critical as you start to put a plan together, whether it's a clinical data warehouse or R&D data warehouse, whether it's a commercial data warehouse. And do you still see that even in 2024,
companies struggling with that? And I know more and more data is coming, but in terms of having that infrastructure in place to make sure that we are getting standardization of the data and then we are able to use that to answer the right business questions. How do you see the different companies right now? And do you see there are a lot of challenges that companies still face? So it's a continuum.
Some companies have invested heavily and are much further ahead than others. I look at the problem in two parts. One is, as you said, bringing in all these data components, standardizing it. It's a one-time deal. Many companies have put that together. But they also have infrastructure which is not agile and adaptable. Let's say you get a new data source. How quickly can you adapt and bring that data in the mix? How can you make that available
from a data as a service to people who consume it downstream so that they can use it for better decision making? I think that speed is still quite slow. It may take you anywhere from two to six weeks to bring a new data set together, validate that, incorporate it, and then roll it out. By that time the business user has even forgotten why they needed the new data set, because it's not agile enough in terms of the speed of positioning which they need.
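As a rough illustration of how that two-to-six-week onboarding lag can be compressed, the routine checks on a new data set can be automated before it is published "as a service" downstream. This is a minimal sketch using pandas; the column names and thresholds are assumptions made for the example, not a real schema.

```python
# Hypothetical onboarding check for a new data source before publishing it
# "as a service" to downstream consumers. Column names and thresholds are
# illustrative assumptions, not a real schema.
import pandas as pd

REQUIRED_COLUMNS = {"hcp_id", "product", "metric_value", "refresh_date"}
MAX_NULL_RATE = 0.05          # reject if more than 5% of key fields are null
MAX_STALENESS_DAYS = 7        # reject if the latest refresh is older than a week

def validate_new_source(df: pd.DataFrame) -> list:
    """Return a list of human-readable issues; an empty list means ready to publish."""
    issues = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing required columns: {sorted(missing)}")
        return issues  # schema failure, no point checking further
    null_rate = df[["hcp_id", "metric_value"]].isna().mean().max()
    if null_rate > MAX_NULL_RATE:
        issues.append(f"null rate {null_rate:.1%} exceeds {MAX_NULL_RATE:.0%}")
    staleness = (pd.Timestamp.today() - pd.to_datetime(df["refresh_date"]).max()).days
    if staleness > MAX_STALENESS_DAYS:
        issues.append(f"data is {staleness} days old")
    return issues
```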
Daniel Levine (19:25.838)
OK, and what are some of the other challenges that you've seen the companies facing? So one is in terms of like getting this up to speed in the right amount of time, but like in terms of, I would say, the analytics side, right? The insight side. How well do the companies do, or what are some of the challenges the companies face in terms of making sure that for their business questions they are getting the right insights or not? It's a great question.
So if you really look at the output side of it, the analytical insight sides, let's break it out into three segments of people broadly. The first, people who are report factory oriented. That means I'm producing a report, giving it to the people and them using it. And that could be done monthly, weekly, daily, whatever is the report frequency. And that...
is still going on from organization to organization. That's the prevailing default model. Everybody goes and builds a report in that BI tool of their choice. They put all of the different dashboards and then ship it out. Business intelligence, right? BI. Correct, business intelligence, absolutely. But as they do the business intelligence work, there is not good metering.
So we do what we call the report bulge. I have five reports. I don't discontinue one. I add five more reports. Then lo and behold, you look at it, you have 2000 reports. You're feeding the beast. Every day you're producing a new report and you don't know why, but that's what the business wants. You don't even know who it's for. And so there's lots of opportunities in transforming that report factory business intelligence infrastructure into a
metered, fit-for-purpose, very consumable manner, because it has to be consumer driven. There are only 10 reports most people use. This one report was done for a VP three years back and the VP no longer exists in your company. And so you're still holding onto that because you think that that's the right way. So I think the business intelligence team has to be very savvy and appropriate about what needs to be done rather than just producing
Daniel Levine (21:49.102)
business intelligence reports. That's one consumer. Now, if you will look at the second consumer who's coming in, I call them data scientists. And the data scientists want access to raw data because they're trying to build models. They say, go away, I'm gonna copy all this data, get it in my sandbox or in my personal laptop, run my models. And what they're doing in effect is creating very siloed versions of all of this. Because once they have done the model,
taking it from experimentation to scale is gonna take a lot of effort. So how do you really help democratize the data science process, but still give them the infrastructure, which is enterprise scale? That's a very big problem and challenge because, you know, you're just moving the problem from the ingestion side to the output side by allowing multiple different data silos to be created. The third part is,
how do you manifest these two things in the workflow? Because a data scientist produces a model and it says, you know, it classifies a particular problem. If you don't give that classification to the person who's making the decision, then again, going back to my speed to decisions, it doesn't help out. So how do you bring all of this business intelligence and the AI models and everything in the workflow so that when I am...
going to visit a doctor's office, I know the three things I have to do. What is the next best action? If I have to go and do things in clinical, I know exactly, if I'm visiting a site, what I need to be doing. So making it very contextual and relevant is the third set of users. And that's where you have to bring in business and business teams, as well as workflow tools, to work the data insight at the right place of positioning. That's a great point about how
we have to bring all of this together and the challenges in all of those. So, are there specific areas of opportunity that you see right now in like the entire life science value chain where you think data analytics can make a tremendous amount of difference? Yeah. I mean, as you mentioned before, data analytics is a foundational capability.
Daniel Levine (24:13.902)
and it has to work foundationally across all areas. Once you have the foundational infrastructure, I think AI is going to be a game changer on how you can get the insights from this data. That's one part, machine learning algorithms, AI algorithms to make sure you're building the right models to get the right information and insights out of this data.
Where I think there's going to be acceleration is in the democratization of this data, that I can access this data and insights through conversational interfaces, where I am now speaking in business speak, not in tech speak. I come in and say, tell me how I'm doing vis-a-vis my competition, not saying what is my LRx and TRx and what am I doing here and knowing that this is a table
and this is kind of the rows and columns which are there in the table and this is the kind of metadata structure. Sorry, not removing the business speak, but actually encouraging more business speak and removing the tech speak. Where I think the market is going is when I have people who can self-serve their needs to decisioning by conversing with data and getting the right data storytelling
as an input and output, or narratives from AI models to tell me what is a better starting point for what I have to do. That becomes a game changer to that decision infrastructure. I'm not relying on IT or somebody else to translate and explain what the data is saying. I am getting the explanation in the way I can discern so that I can make an effective decision. So it's really a game changer happening on both parts. One is the output being much more targeted with insight, and the input on how I converse with that process and consume that insight. Okay, gotcha, gotcha. Now, you have been advising a lot of health tech companies to work with biopharmas. So what is your advice for these small health tech companies, or even tech companies that are trying to get into health and are trying to work with the biopharma companies? So.
Daniel Levine (26:35.758)
What is your advice to them about how they should be engaging with the biopharma companies? So let's take general tech companies. Many people do cool tech, but it's very difficult to do tech in life sciences. Why? We have to speak the language. You can't come to somebody and talk about, oh, I have a third normal form database, or I have this cool random forest AI model, and you can use it.
It doesn't help. You have to translate it into, hey, you can use this to make better decisions on targeting. You can use this to help patient recruitment. And then you have to break it up into how these data sets and everything come together and why you have the best model to actually address their needs. So the first thing is you have to speak the domain. Absolutely. If you don't speak the domain,
please get help in getting the right domain experts because without that, you're not going to be able to even address a business problem. So you'll come in and you may have the coolest technology, but it won't help. The second part is we are in a regulated industry. And I always say this to tech companies, think that whatever you're doing is not technology, but you're going to be producing a drug or a pill, which your grandmother is going to take,
which means the scrutiny of what that decision and output should be is quite different from, okay, I just did a model and it helps better shopping behavior. It's a completely different norm of what you have to apply to in terms of rigor, validation, testing, conformance, because we are in a very regulated industry and we need all of these different aspects of rigor to be done.
We're not just doing this because it's a fun activity, but it's a life-impacting activity. And so that's the second part, which a lot of the tech companies need to know. And the third is...
Daniel Levine (28:44.75)
cool technology is only as good as how you can implement it. And in pharma we have a multitude of ecosystems where you cannot just say, I'm a silo and everybody will adopt my technology. You have to figure out a way of how you're a complementary effort to things which they're doing right now. So those would be my three advices: domain; looking at it from a very regulatory,
high-validation standpoint, validating the results, not the computer system validation, but making sure that what we're doing here has a higher bar to the right output versus anything else. And then, of course, last but not the least is to really look at it from a user adoption ecosystem perspective. And the health tech companies,
do you see them doing that to some extent, or how do you see them, like, as you watch these companies? How are they taking this advice and how are they then establishing themselves with the life science companies? So it's a very big challenge depending on who that health tech company is. Many of them want to be a platform. So they say, I'm a platform. You can use it for anything.
But a Swiss army knife doesn't help pharma. They want the right screwdriver, which will then solve that particular problem. They want the right, you know, corkscrew which will open up the bottle. So I tell the tech companies, you could be having 20 different features, but sometimes you may have to only talk about the one or two, which make the biggest impact for life sciences. And so think vertical, not horizontal. And that's a big change management for many of the tech companies.
Okay. And do you see, like, with all the experience and expertise that you have, the successful health tech companies, do you think, are the ones who will adapt to focusing more on specific business solutions versus, like, the platform companies? Is that, in general, how you see this? Yeah. I'm not saying platform companies won't succeed, but I would say that the...
Daniel Levine (31:08.398)
platform companies will all have to have a vertical story to how you can use that platform. So you're not selling an electric car or an engine. You have to sell what the driving experience is for the person to go from place A to place B. So it's very important in life sciences to really take your platform, but also spend that
time to understand the customer's needs and domain so that you can articulate the value of your platform from a domain perspective. Gotcha. Now, let me flip this now. When you're advising the life science companies, the biopharma companies, and then of course, when they are adopting data analytics, AI, a lot of times they are working with many vendors and trying to get the right vendors
who are specialists in these state-of-the-art technologies. What advice do you have for them about what to look for in vendors and what type of projects they should do? So, any advice you can offer them.
Yeah, so many of the life sciences companies have strong data teams and opinions about how they want to go about it. And so I'm going to answer the question in two parts: first, what is the right reference architecture of platforms to go after? And second, what are the right services partners to go after? Because those are two separate decisions. From a reference architecture standpoint,
many of us in life sciences are fast followers and we want technology to be proven in some other place before you can start to adopt it within your infrastructure. I would give them a framework of using a Horizon 1, 2, 3 model where in Horizon 1, they have their mature infrastructure and they can continue to drive that, but invest in Horizon 2 and more importantly, innovation
Daniel Levine (33:18.894)
tech in Horizon 3 to see how you could find what will work and not work in your infrastructure and quickly bring that on. We need to accelerate speed of innovation so that you could get the right type of reference stack going. I'm not saying you immediately keep changing your reference platform. That's why you have a Horizon 1, 2, 3 approach where you spend 60% on Horizon 1, 20 to 30% on Horizon 2, and 10% on Horizon 3 activity. So you need to put together a
good model on how you can play in Horizon 1, 2, and 3 from a tech architecture infrastructure standpoint because the marketplace is changing quite a bit. We just thought cloud adoption happened. Now that's table stakes. We just thought data warehouses, now it became data lakes, data lake houses, data mesh. Technologies are changing quite fast.
We just talked about bringing AI. Now you have generative AI, large language models, hallucination. And so you got to keep experimenting in the Horizon 3 and 2 so that you know exactly how to bring and build scalability in Horizon 1, which is your core infrastructure for your data information analytics. And for every CDIO I advise, I say you have to have your
hands in all three pockets. And Horizon 1, how do you optimize? Horizon 2, new capabilities which are adjacencies and Horizon 3, which are innovations. And you have to constantly be doing all three because you can't wait for somebody else to do innovation and then see how you can adapt and adopt it within your infrastructure. Now, if that's the problem, CDIOs are looking for partners who also think in their same model.
In their Horizon 1, they want vendors to really optimize what they're doing, provide them cost efficiencies, but more productivity efficiencies, so that they can make sure that they are continuing to optimize their Horizon 1 infrastructure. In Horizon 2, they want companies who understand the domain, challenging the norm and helping them be uber fast followers, so that once they've seen something work, they bring it in quickly and drive it.
Daniel Levine (35:41.55)
And then they also want companies who can help them with Horizon 3, setting up labs, setting up innovation platforms where they can experiment and find the next new thing which can go into becoming a Horizon 2 and then coming to Horizon 1. So what I would recommend, if I'm a vendor in this space, is to really look at different value propositions for each of the horizons, having a good model discussion with the customer. Now,
the customer is also buying these things differently. Previously, they bought resources, then they bought outcomes. Now they're buying squads, and each of the squads have different aspects. That's one. Second, each of the customers are also changing the way they are organized. Before it was application maintenance, application development, it was data operations and analytics and BI and AI. And now what we're seeing is
they're building a product mindset. They're saying, okay, I need AI, I need data operations and all of that, but those are all some of the parts. I need to bring all of them and say, I'm a pharmacovigilance innovation product. I'm a clinical development innovation product, a commercial. And they are creating this platform approach to products. They have a horizontal platform and an experience platform. And so how do people...
react and make sure that they are on top of all of this are also things that the vendors have to do. Because it's not the old model where I do data operations and somebody else does data insight. How do you think about yourself and change such that you can either lead your customer down that innovation journey or be a very good provider of services to help the customers realize their dream of innovation?
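For reference, the rough Horizon split mentioned earlier (about 60% on the mature core, 20 to 30% on adjacent capabilities, 10% on innovation) is easy to express as a simple allocation. The 30% figure is an arbitrary pick within the stated range, and the budget number below is made up purely for illustration.

```python
# Illustrative Horizon 1/2/3 budget split using the rough percentages from the
# conversation. The 30% for Horizon 2 and the total budget are made-up values.
HORIZON_SPLIT = {
    "Horizon 1 (optimize the core data/analytics stack)": 0.60,
    "Horizon 2 (adjacent capabilities, fast-follow)": 0.30,
    "Horizon 3 (labs, experiments, emerging tech)": 0.10,
}

def allocate(total_budget: float) -> dict:
    """Split a total budget across the three horizons."""
    return {horizon: total_budget * share for horizon, share in HORIZON_SPLIT.items()}

for horizon, amount in allocate(10_000_000).items():
    print(f"{horizon}: ${amount:,.0f}")
```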
That's a lot of great advice, Srivatsan. So now, I mean, we have seen, like in the last year, this rapid evolution of generative AI that is really changing the game. If you had the crystal ball, where would all this be, and where would analytics and AI be in, let's say, five years?
Daniel Levine (38:07.15)
And I mean, you know, we're starting to see where this will evolve, right? So let me start with this: effective AI and analytics is to help you speed decisioning. So you're going to see a lot of things which are going to happen at the point of decision. And whether that is through a conversation, through a ChatGPT interface, through a workflow, through a mobile phone, through alerts, notifications.
You're going to change the paradigm, in my mind, from a push model to really a pull model to help us make those decisions more effective. First point. The second part is it is going to be more business-oriented and conversational. I would say report generation is going to become table stakes and you will actually see fewer reports done because you're going to increase ad hoc
interactions, you're going to increase self-service, you're going to increase the speed to data, so you're going to see that. To enable this, everybody is going down this path of what I call a data as a service. And a data as a service is, I don't care where you're storming and norming and bringing your data, tell me, is this of high quality? When did I have the last refresh?
Can I then use this to make effective decisions? And if you look at it five years from now, we're basically subscribing to data pipelines with high quality so that I can then put these decision frameworks to make decisions effectively. I'm not questioning, like today's problem is, oh, my LRx data and TRx data don't work well, or whatever is happening in the marketplace. We're debating the quality of data. With the data as a service, you could say, I have three data
services, one is at high quality 90%, another one is at 80 and 70. And I'm going to use that to make decisions on what I have to do in which doctors to call, which marketing campaigns to do, which segments to go after. So I really see the evolution of data operations, data infrastructure into a much more streamlined pattern and the consumption of this
Daniel Levine (40:33.646)
in a much more democratized pattern. And so as you start to really look at the business interacting much more, you have to then build that scalable infrastructure to help them be successful. So Nagaraja Srivatsan, an industry veteran and the founder of the consulting firm Vidya Seva, thank you very much for your time today.
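To make the "data as a service" idea above a little more concrete: if each pipeline advertises a quality score and a last-refresh time, a consuming decision process can simply refuse feeds that fall below its own bar. A minimal sketch, with hypothetical pipeline names, scores, and thresholds:

```python
# Hypothetical consumer-side check for "data as a service": each pipeline
# publishes a quality score and refresh date; a decision process only uses
# feeds that clear its own bar. All names and numbers are illustrative.
from datetime import date

PIPELINES = {
    "rx_claims_feed":   {"quality": 0.90, "last_refresh": date(2024, 6, 3)},
    "affiliation_feed": {"quality": 0.80, "last_refresh": date(2024, 5, 30)},
    "social_signal":    {"quality": 0.70, "last_refresh": date(2024, 6, 1)},
}

def usable_feeds(min_quality: float, as_of: date, max_age_days: int = 14):
    """Return pipeline names that meet the quality and freshness bar."""
    return [
        name for name, meta in PIPELINES.items()
        if meta["quality"] >= min_quality
        and (as_of - meta["last_refresh"]).days <= max_age_days
    ]

# A targeting decision might demand 0.85+ quality; a broad segmentation pass
# might accept 0.70+.
print(usable_feeds(0.85, as_of=date(2024, 6, 5)))   # ['rx_claims_feed']
print(usable_feeds(0.70, as_of=date(2024, 6, 5)))   # all three feeds
```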
Well, Amar, what did you think? There was a lot of great advice that Srivatsan gave regarding the challenges that companies face, that biopharma companies face, and how they should be thinking about overcoming them and how they should be focusing on the decisions that they should be getting out of data analytics. And then they should be thinking about the different horizons and then how,
you know, like the infrastructure and then what's right now and what is the future, right? How they should be thinking about all of those three at the same time and getting the most out of data analytics to get the business value of those. Yeah, it was interesting to hear him talk about the evolution in this move away from the emphasis on gathering data to actually using it and going from a
a firehose to a consumable format where people can make informed decisions. What's the significance of this change? I think it's very significant in the sense that, see, I mean, one thing that people have to realize is that pharma data is extremely complex. It's huge. It's very dirty. And this is everywhere in research, development, commercial.
The data is large and dirty. And so a lot of times what happens is that just to get the data cleaned up, standardized and something that one can say, okay, well, everything has come together. That is a huge effort in itself. So a lot of times that just takes such a long time. And then by the time that is happening, the specific questions for which it was gathered, like that link sometimes it's not even there. And then the...
Daniel Levine (42:59.534)
a lot of the effort is focused on the data stuff rather than the insight generation stuff. So that is something that, as pharma has evolved, has started happening. But instead of just getting the data, now getting the data into the hands of the users, that is becoming more and more prevalent. And that is big because what do you need all the data for? You need the data so that the scientists who are doing the experiments know exactly what's the latest that's happened
in the science, or you have sales and marketers who are going in front of the healthcare providers. They need to know exactly what's out there with the physicians, or what are the perceptions about their brand, and so on. So they need to get that data in their hands when they're making the decisions. So it's a huge shift that has been happening in pharma.
Srivatsan also talked about data analytics and life sciences becoming a differentiator. In general, those of us on the outside don't have visibility into what life science companies are doing with data analytics. Do you think this is an underappreciated competitive advantage some life sciences companies have? And do these companies need to think about data analytics from a strategic point of view?
I think so because data analytics, I mean, I can tell you, you know, 20, 25 years ago, when I started in the industry, people really didn't care about data analytics. They, I mean, I know of several people in pharma who said, hey, we're in the drug business. We're not in the data business. So let us make the drug and let us sell the drug. That is the business, right? Now,
I have seen the change where no one is saying that anymore. People are talking about, okay, well, what can data bring us? And again, if we look at research where earlier on the drug targets were being identified more from the literature or more from doing the experiments, but then as genomics, proteomics, all the omics became big, that has now really become
Daniel Levine (45:15.854)
table stakes, everyone has to do the genomics experiments to really identify which are the right targets, or even for finding the right drugs, right? The screening of the compounds, or, you know, structure-based drug design. So these are the things where data analytics is used heavily. Those have now become an essential part of research.
On the clinical side, we are talking about getting a lot of data for predictions, sorry, for PKPD, like pharmacokinetics, et cetera, but also in clinical trials, biomarkers, right? All of that data now, we have started using that. Not to a full extent, but there's been a lot of strides. Same thing with medical affairs, manufacturing, and also in commercial, there has been a sea change where now, I mean, we saw in our first episode, we were talking to Jonathan about
how 20 years ago, just getting the data together was a big deal. But now the data is there and now we're getting the right insights to the sales and the marketers. So this has been a change definitely over the last couple of decades. One term he used that really stuck out to me was AI hallucination, things you need to guard against, and the role operational infrastructure can play in protecting against that. What did you think of that?
When people use generative AI these days, like ChatGPT and so on, and then you get some AI hallucination, some wrong answers, it's okay when you're playing around with ChatGPT. It is not okay in the pharmaceutical setting because you're asking very serious questions about some clinical data that came out, right? Or like determining what should be the decision for a patient based on the data, right? So your accuracy needs to be,
pretty much 100%, because one wrong thing that AI says can affect the lives of a lot of people, right? And on top of that, it is a very highly regulated industry and very compliance-driven, right? So that's why, to me, the hallucination, AI hallucination, that needs to be minimized as much as possible.
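One common guardrail pattern for the hallucination concern discussed here, sketched very simply: require the model to return the evidence it relied on, and verify that the quoted evidence actually appears in approved source text before the answer is surfaced. Real systems layer retrieval, fuzzier matching, and human review on top; the function below is only an illustration of the grounding check, not anyone's production method.

```python
# Simplified grounding check: an answer is only released if every piece of
# quoted evidence can be found verbatim in an approved source document.
# Real systems use retrieval plus fuzzier matching; this is an illustration.

def is_grounded(evidence_quotes: list, approved_sources: list) -> bool:
    """Return True only if every quote appears in at least one approved source."""
    normalized_sources = [src.lower() for src in approved_sources]
    return all(
        any(quote.lower() in src for src in normalized_sources)
        for quote in evidence_quotes
    )

sources = ["Adverse events were mild and transient in the phase 2 cohort."]
print(is_grounded(["mild and transient"], sources))          # True  -> can surface
print(is_grounded(["no adverse events occurred"], sources))  # False -> hold for review
```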
Daniel Levine (47:35.726)
And this is something that, as I'm talking to a lot of people in the industry, yes, I mean, is one of the number one things that people come up with. Like with AI, they do expect to have pretty much 100% accuracy and they do not want to deal with the hallucinations. And that's something they worry about. It's very seductive and it's very easy to just put your faith in the technology. But, you know, he talked about the iceberg, the decision making at the top
that's above the surface and all the issues of data quality and evolving standards that's below the surface. Decisions are only going to be as good as the data they're based on. Do companies take this aspect seriously enough? I think it depends on the data people in the company in specific areas. So in terms of like their appreciation, but also data literacy
of their stakeholders in the company. I've dealt with stakeholders like marketers and scientists who understand that it's the garbage in garbage out where you have to get the right data in for the model to give you the right output. If you train the model on wrong data, it is not going to correct the data and give you the best answer. So that's something that people don't really realize. They think that, oh, I have a data problem,
the model will fix it. No, it won't, because the model is getting trained on the bad data. So it's going to be a bad model. But I've also worked with people who don't understand that and they're like, okay, well, you know what? I need a solution for this. Here's some data lying around, do something with it and come up with something. That doesn't work. I mean, the data that you need needs to be very high quality. It needs to be ideally generated for the purpose for which it is used. Even that is very challenging because a lot of times
what I've seen the tendency of people who don't know much about data is that they think about data analytics after the data is generated because something was generated and now they have a problem and they're like, yeah, but that data was generated. Okay, well, why doesn't the data analyst team go take a look at it and come to me with something? So the planning needs to be there and then the data analytics folks need to be involved right from the beginning. And this is something that I've like in the multiple companies I've worked with,
Daniel Levine (50:00.558)
this is the education that I've had to do with them, saying, hey, this is how it works. So we're in it together. And it also kind of ties to what you were saying earlier, which is that data analytics is not something separate. It is part of the business. So even when you're thinking about it, it's not just a cost center, it is something that is bringing the value, right? So similarly, the data analytics people are not something that's outside, but they should be part of the business and need to be included
when you are thinking about a new decision strategy and a business strategy. Srivatsan offered some great insights and it was very instructive. So looking forward to our next one. Thank you, Danny. Take care.
Daniel Levine (50:45.902)
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com.
For Life Sciences DNA and Dr. Amar Drawid, I'm Daniel Levine. Thanks for joining us.