Moving Beyond the Hype of AI in Biopharma
In This Episode
In this episode of Life Sciences DNA, host Nagaraja Srivatsan and co-host Daniel Levine turn the spotlight on their fellow host, Amar Drawid, Vice President and Global Head of Data Analytics, Insights, and AI at BioNTech. With over 25 years of experience spanning bioinformatics, translational medicine, clinical development, and commercial strategy, Amar shares his perspective on how artificial intelligence, both classical and generative, is reshaping the biopharma landscape. He explains how organizations can move beyond the hype to build real business solutions, outlines the structured “SDLC-style” process his team uses to develop Gen AI applications, and highlights why change management, domain expertise, and stakeholder collaboration are critical to success.
What You’ll Learn in This Episode:
- The evolution of AI in pharma: where classical AI has matured (discovery, translational research) and where generative AI is unlocking new opportunities (commercial, medical affairs, clinical).
- How to build Gen AI solutions that work: combining technical expertise with deep domain knowledge through iterative development and stakeholder feedback.
- Why prompt engineering must be domain-driven: training AI systems to “think like an expert” in oncology, immunology, or market access.
- A practical framework for scaling Gen AI from proof-of-concept to enterprise platforms using 20–30 business iterations.
- The real ROI of AI: shifting from cost-cutting and productivity gains to enabling faster, better business decisions.
- Change management insights: how to take users on the AI journey, secure buy-in, and maintain momentum even when early outputs fall short.
- Looking ahead: why AI will soon be table stakes in pharma workflows, transforming everything from document drafting to strategic decision-making.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. Sri, we've got a familiar face today for listeners of the Life Sciences DNA podcast, our own Amar Drawid. For viewers not familiar with Amar, who is he?
Nagaraja Srivatsan (00:34)
Amar, of course, is one of the hosts of the Life Sciences DNA podcast. He's vice president and global head of data analytics, insights, and AI for BioNTech. He has more than 25 years of experience working with leading biopharmaceutical companies, doing bioinformatics research, translational medicine and clinical development, global commercial strategies, medical affairs, finance, and business consulting. He's been involved in the full life cycle of drugs and how companies are applying data analytics and AI through each step along the way.
Daniel Levine (01:06)
What are you hoping to discuss with him today?
Nagaraja Srivatsan (01:08)
Early on when this podcast was launched, I appeared as a guest and Amar interviewed me. I'm returning the favor today. Today I want to get his perspectives on the AI journey. How did he come along in that journey? What are some relevant use cases? And really what are the challenges in implementing those AI use cases in real life in large organizations?
Daniel Levine (01:31)
Well, before we begin, I want to remind our audience that they can stay up on the latest episodes of the Life Sciences DNA by hitting the subscribe button. If you enjoy this content, you should hit the like button and let us know your thoughts in the comments section. Also an audio only version can be found on most podcast platforms. With that, let's welcome Amar to the show.
Nagaraja Srivatsan (01:55)
Hey, Amar, so good to see you. Really excited to have you on the show. Amar, given your wide expertise around AI, it'll be great for you to tell us the journey till date on where you think AI has come along and how that journey was for you.
Amar Drawid (02:13)
So I'll focus on, of course, the AI and biopharma world where my career has been. And of course, the word AI, as we all know, there's a lot of hype involved with that. The definition keeps changing and it gets simplified. I've seen even simple regression, now people using the term AI for that. The way I think about it is that when we look at the evolution of AI in pharma, I try to look at it in the different areas that I've worked in. So, for example, when you look at research or discovery, where we're trying to identify new molecules and understand the biological underpinnings of diseases, there's been bioinformatics, cheminformatics. There has been a lot of machine learning, even for several years. Even when I started my career, over 25 years ago, we were doing advanced machine learning in that area. So that has been, I would say, the most advanced area using machine learning, and it's just getting better and better because now we have generative AI and a lot of the other stuff, right? So in research or discovery, whatever you call it, the AI has been there. It's been, I would say, probably the most state of the art that we can think about. Some of the others, like clinical development, I would say it's a bit different. It's behind. Yes, there's statistical analysis that gets done in clinical trials, and when it goes to biomarkers or translational medicine, yes, there has been more machine learning as time goes on. But in a lot of clinical data management or clinical operations, there's a lot of AI that can still be introduced. So I would say that's a bit behind in terms of AI. Commercial, where I've been spending a lot of time in - again, I would say this is just the beginning.
And commercial, of course, has always been behind in data analytics because the commercial data has been a very small amount, a very limited amount, very dirty, right? Commercial, I would say, has always been behind in terms of data analytics and also AI. And medical affairs, I would say, is even more behind. There just hasn't been that much effort. And even manufacturing and supply chain, I think, still have a long way to go, a long way to go in terms of automation, in terms of analytics, in terms of predictive analytics. So when I think about AI overall, I'm looking at not just the rules-based automation, but the machine learning-based predictive analytics, and then also the machine learning-based gen AI, generative AI. So those are the different elements, and especially gen AI, as you know, in the last couple of years has gone up quite a bit.
Nagaraja Srivatsan (04:58)
Amar, it's very fascinating how you've started to lay this motif out. What you've done is segmented the use of AI into classic AI, which is, as you started with, regression and others, then the evolved use of AI, using ML for prediction, and then the third is the generative AI part of it. And as you've started to apply your view of that, I would say that you started with your classic AI mindset and said, hey, discovery has used a lot more classic AI for the longest time. There are so many models on omics and others to make it happen. Versus some of the other areas, which are much more human-workflow in nature, and so they're more generative AI use cases, and they have not used classic AI. So why don't we explore: A, do you agree with this rationale around classic AI versus generative AI? Would you score each of these different segments differently on AI adoption, or are you applying the same considerations across each of them?
Amar Drawid (05:59)
So I would say classical AI is more the analytics AI, which is a lot of the prediction or design of experiments, et cetera, those aspects. So I believe that had been progressing very well. But it's generative AI where the big wave is, the wave we've been seeing over the last couple of years. And what has happened is that generative AI, because of the nature of it, has opened up a lot of new business questions and scientific questions that we were not asking before because there was no solution before. So the questions that we had, the scientific and business questions we had for classical AI, which is: I have this data, now give me some prediction, how is the system going to evolve, et cetera - those questions are still there. They still need to be solved. But what's happened now is that we have a lot of the data in documents, a lot of this unstructured data, and now all of a sudden, a lot of use cases can be formed to answer those. Classical AI wasn't really dealing with that. Yes, there was some natural language processing, but there were limitations to that. Now all of a sudden, we're in just a very different league.
Nagaraja Srivatsan (07:12)
Fascinating what you said: hey, classic AI was working on a much more structured problem set where the data sets are more formed and normed. And now generative AI is working on these documents where there's a huge variety of unstructured data. So if you were to pick each of these different functional areas - as you said, you're working in commercial and clinical, which are both document rich, both with lots of unstructured data - let's pick a segment and then walk me through what would be the classic applications of this gen AI. Let's pick medical affairs or commercial or clinical. And let's walk the audience through: how do you go about applying this gen AI to that particular workflow?
Amar Drawid (07:53)
Let's take an example of commercial. So in commercial, when we look at market research, when market research is done, there are a lot of insights that get generated. A lot of times, you just get one PowerPoint, and that's the end of it, right? So now with gen AI you can have those insights. You can ask questions about those insights. You can go much deeper into that. In general, we have a lot of brand strategy documents. We have a lot of different documents. We can now start asking insightful questions about those, which are helpful for marketers to understand more about the disease areas. But then also, when you have the sales reps who are going to the healthcare providers and they get asked questions, instead of memorizing everything, the sales reps can ask questions about insights from whatever the approved material is. So that's a lot of what I would call generating insights. So that's one aspect. There are a lot of documents even in market access, the global value dossier; it's always good to get insights when you don't have to go through hundreds of pages of documents. You could just ask questions. So that's one big piece: the insight generation.
Nagaraja Srivatsan (09:05)
So let's explore that a little bit, right? So what you're saying is that when there is a plethora of documents which are unstructured in nature, previously you would digest that and summarize it into a PowerPoint and you get the final insight. Now what you're saying is we could expose you to the raw data and you could build multiple different versions of the insight - not the insight created by the PowerPoint author, but the insights which you, who are part of the workflow, want to get created. So walk me through that; market research is a good example. How do you go about building this? Is this your conventional retrieval-augmented generation, where you're putting all of these documents into a RAG model to question, or are you starting to see much more sophistication in how you would go about doing it? So walk me through: how would you select this as a use case? How would you then go about building it? How do you go about then socializing the ROI behind it?
Amar Drawid (10:04)
For market research, for example, there are a lot of these transcripts of the interviews that you do with the HCPs. So what one can do, what we've been doing, is transcribing the audio into text, and that text then becomes available in what we call a knowledge hub that we built. And so in the knowledge hub, that's available. But the key here is: how do you then ask the questions and get good answers? And for the good answers here, yes, you have good LLMs that are coming up, and they are going to keep evolving as time goes on. But there are two key elements here. Of course, RAG is very important, where we are specifying different types of files, we're tagging them, we're making sure that the right files get the right importance when you're trying to get the knowledge, right? So RAG is one aspect of it, but the other aspect is also a lot about the prompt engineering, because we are building these systems now, let's say, focused on oncology or immunology, right? And there, the questions that we ask are regarding the patients, regarding the treatment paradigms, regarding the experience of the patients, or how many patients are in what segments, et cetera. So there, the way I think about it is that the gen AI needs to have a PhD in oncology or a PhD in immunology for it to answer those questions well. So we are doing a lot of prompt engineering, a lot of training around that. I mean, we have hundreds of questions with which we are training the systems, making sure that the gen AI is giving the right answers to those. Because whenever you build a system in gen AI, the first answers that it gives are very basic and very often don't make sense, right? So we need to teach it, right? So on the one hand, there's the knowledge. On the other hand, there's the way of working, which is the prompt engineering and so on. So we're using both of them quite a bit.
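The pipeline Amar describes - transcripts flowing into a tagged knowledge hub, with retrieval that gives tagged files the right importance - can be sketched roughly as follows. This is a toy illustration, not BioNTech's actual system: all class and function names are hypothetical, and a real deployment would use a vector store and an LLM rather than keyword overlap.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """One item in the knowledge hub, e.g. a transcribed HCP interview."""
    doc_id: str
    text: str
    tags: set = field(default_factory=set)  # e.g. {"oncology", "market_research"}

class KnowledgeHub:
    """Toy stand-in for a RAG retrieval layer: keyword-overlap scoring,
    boosted when a document's tags match the query context."""

    def __init__(self, tag_boost: float = 2.0):
        self.docs: list[Document] = []
        self.tag_boost = tag_boost

    def add(self, doc: Document) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str, context_tags: set, k: int = 2) -> list[Document]:
        q_words = set(query.lower().split())

        def score(doc: Document) -> float:
            overlap = len(q_words & set(doc.text.lower().split()))
            boost = self.tag_boost if doc.tags & context_tags else 1.0
            return overlap * boost

        ranked = sorted(self.docs, key=score, reverse=True)
        return [d for d in ranked if score(d) > 0][:k]

hub = KnowledgeHub()
hub.add(Document("t1", "HCP interview: patients on second line therapy report fatigue",
                 {"oncology", "market_research"}))
hub.add(Document("t2", "Supply chain review: cold storage capacity planning",
                 {"manufacturing"}))

hits = hub.retrieve("what do patients report on second line therapy", {"oncology"})
print([d.doc_id for d in hits])  # → ['t1']
```

The point of the sketch is the tagging step: the supply-chain document scores zero for an oncology query, while the tagged transcript is boosted, which is one way "the right files get the right importance."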
Nagaraja Srivatsan (12:14)
Let's explore that, right? That's fascinating. So one thing you said is, how do I bring all of this knowledge together in some form of model? That's your RAG, that's your different aspects. Let's explore the prompt side of it. You said it's a very critical skill, and most people try to use prompts like search, and of course they get very perfunctory, very basic answers. Tell me, how would that prompt journey be? Let's say I join your group; how would you train me in being the PhD in oncology in prompting? What do I have to go through? Are there tools? Are there courses you recommend that we build this infrastructure on?
Amar Drawid (12:52)
I would say a lot of times, see, for a lot of the prompt engineering it's the domain knowledge that is extremely important, right? There's a tech aspect, but a lot of this is about the domain knowledge. So how do we give the PhD to the gen AI? That's the question. So I would say what we're doing is we're picking up specific areas, for example, epidemiology. So in epidemiology, what are the questions that people usually ask? What is the incidence rate? What are the different segments in a specific patient pool? What are the treatments in those, right? So then what we're doing is systematically creating questions around those. Those are the questions that we ask. Now, we also have the documents that we are providing to it. And then we are checking how it's giving the answers. And wherever it's not giving the right answers, we are then adjusting that: when a question is asked this way, you have to focus this way, and so on. So that's how we're doing it. It's very much, very much domain focused. The idea is that right now we have more generic agents, but as time goes on, we will be evolving the system to build agents for a specific domain or subdomain. So for example, a marketing agent, a sales agent, a market access agent, right? Specific agents that will have more specialized prompt engineering around how they answer the questions.
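The evaluation loop Amar describes - a curated set of domain questions with expected answer content, run against the system, with failures flagged for prompt adjustment - might be sketched like this. The `ask` function is a placeholder for a real Gen AI call, and the golden-set checks are deliberately simplistic; the names and questions are illustrative only.

```python
# Toy harness for domain-driven prompt evaluation: run a golden set of
# epidemiology-style questions, check each answer for required content,
# and collect the failures that need prompt adjustment.

GOLDEN_SET = [
    # (question, phrases the answer must contain to count as correct)
    ("What is the incidence rate in this patient pool?", ["per 100,000"]),
    ("Which segments make up the patient pool?", ["newly diagnosed", "relapsed"]),
]

def ask(question: str) -> str:
    """Placeholder for the Gen AI system under test (LLM + RAG + prompt)."""
    canned = {
        "What is the incidence rate in this patient pool?":
            "Roughly 12 per 100,000 annually in this population.",
        "Which segments make up the patient pool?":
            "Mostly newly diagnosed patients.",  # misses the relapsed segment
    }
    return canned[question]

def evaluate(golden_set):
    """Return the questions whose answers are missing required content."""
    failures = []
    for question, required in golden_set:
        answer = ask(question).lower()
        missing = [p for p in required if p.lower() not in answer]
        if missing:
            failures.append((question, missing))
    return failures

for question, missing in evaluate(GOLDEN_SET):
    print(f"Needs prompt adjustment: {question!r} (missing {missing})")
```

In a real system, the string-containment check would be replaced by expert review or an LLM-based grader, but the loop shape is the same: domain experts author the questions, and each failing answer drives another round of prompt adjustment.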
Nagaraja Srivatsan (14:09)
Yeah. I think you said something very fascinating, right? You didn't go immediately and say, okay, I'm going to give this guy a PhD in prompt engineering. You just said, I'm going to give him or her a PhD in the expertise which they know. So, if you are a PhD in oncology and epidemiology, you're just saying, I'm not going to ask you any tough questions. I'm going to ask you the questions you ask on a daily basis of your human workers, of your current state of things. And that's actually a really good way of differentiating what you need to be doing from an AI standpoint and what you need to do from a domain standpoint, because that's in the capability of every one of us. If there's an expert in that particular role, then you ask the expert, saying, hey, list out the 50 things you would do, the questions you would ask, the kind of data you would look at. And so you're really democratizing that knowledge in terms of what they best know, and then complementing it with, okay, then how do you best access it? Is that a fair way to formulate the framework you articulated so that you can leverage the best of talent?
Amar Drawid (15:15)
We are definitely hiring the people who have the domain knowledge. And they are now imparting that domain knowledge to the gen AIs. So they are the ones who are teaching them. And we have a lot of different levels around that as well. Whenever we build a gen AI system, yes, the domain people need to work on it. My own team members work on them. Then we have what we call our key stakeholders, who are the champions of these gen AI things. These are business stakeholders. These are the people who are marketers, who are in precision health, et cetera. And they are the ones who train the system. That's the third level. And then we open it up to the general audience. So there's a lot of this systematic training of the gen AI that we're doing based on this. Now, the thing is, you have the business people. And business people, I'm using that term very loosely; the same principles apply when you're in clinical, right? Then you're talking about the clinical people, or researchers in discovery, right? These people with the domain expertise, they are the ones who are providing how things should be. But then there are also the technical people, the experts in gen AI, who need to implement it in the right way. So, of course, the biggest challenge here is that you need people with both the business expertise, the domain expertise, and the tech expertise together, and they need to work together. That is the biggest thing ever.
Nagaraja Srivatsan (16:43)
It's amazing. What you articulated is almost like an SDLC for building these types of applications, right? What you said is, hey, I bring in a first set of experts who articulate what questions they ask. Then you said, I'll take that and have my prompt engineers make the actual prompts work better. But then you said, now validate this with the second-tier experts to make sure that the results they get and the expected value are the same. If not, you go back and tweak either the questions or the process. And once you have run this process through a few iterations, like an SDLC or an agile SDLC, now you're ready to roll it out for public consumption. So you're almost putting human review across the board consistently, with a feedback loop to make this thing get better. Is that a fair way to think about how you build this SDLC for gen AI development?
Amar Drawid (17:43)
I would say so. There are so many iterations, as you're talking about, right? And I call these business iterations, right? So when you first build a gen AI solution, that to me is a technical solution. But what I'm interested in is building a business solution that our business people are going to use, and that just requires 20, 30 iterations. That's the only way it's going to get better. You also have to think about the change management, the adoption. It's very important that people actually use something. Right now, there's a lot of hype around gen AI. But also, people do believe that this is something that's definitely going to be a game changer, and I do believe that. But we need to do a really good job of rolling out these solutions so they are clearly adding value to people. They're clearly adding business value, right? And so that's why we have to be very careful about how we orchestrate all of this, how we keep people interested, how we take them on the journey as well, because they have to realize it's not just magic, that all of a sudden you put a lot in there and it's going to give you the best answers in marketing, in pharma markets. That's not going to happen.
Nagaraja Srivatsan (18:56)
You know, I was going to go down the change management route, right? If you tell somebody, hey, your job is to do 20, 30 iterations with AI and the first five are going to be garbage, people will say, I'm not doing it. So how do you get people on board? And you also said these are all vertical solutions and everybody has to be looking to the benefit of it. So it's a big change, right? Because most people in POCs would try it, see that the system doesn't work, and then move on, because it's additional work. What you're saying is you need people to be, A, empowered to do it, but B, when they're frustrated, to have the right safety infrastructure so that they can continue to iterate, because it's only with multiple iterations that this thing gets better. It'll get worse before it gets better. So how do you get people motivated? Are there some best practices on what works in an organization? Is it top down, sideways, bottom up? How do you try this?
Amar Drawid (19:51)
First of all, the important thing is to take people on the journey. And what I found is that, yeah, you could always say, this is magic that I'm going to just give you. But when you say that, if it's successful, yes, people do think, my God, that's great. But when it's a failure, people just lose interest in it. On the other hand, what I found is that when you take your stakeholders on the journey right from the beginning, if it's successful, yes, they love it anyway, but if it's not, they understand why it wasn't successful and they're willing to give you another chance. So to me, it's very important from the change management view that you actually have these people build it with you. So it is not my solution, it is their solution. That's the change that I try to have. And for that, of course, you have to be very careful about how you use their time. Their time is very valuable and you don't want to waste it, right? So a lot of what I'm doing is that, within my team, the data analytics and AI team, the first couple of iterations happen internally. I even hire people, even contractors, who are domain experts. They are the ones who go through the first few iterations. Then it's my team members who go through these, because these guys have a higher tolerance level, right? Even if we're getting crap out, they're going to work through that. It's only when it starts becoming decent that we expose it to our business stakeholders. Because then they're getting something out of it. What I want them to get is: yeah, this is not perfect, but it is on the way. And if I actually now put in a lot of effort, I will get the value. That's how I want them to think about it.
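The staged rollout Amar sketches - contractors and the internal team absorbing the rough early iterations, with business stakeholders only seeing the system once it's decent - amounts to gating each audience tier behind a quality bar. A minimal sketch of that idea, with tier names and thresholds that are purely illustrative:

```python
# Sketch of staged rollout: each audience tier is gated behind a quality
# threshold, so business stakeholders only see the system once it clears
# a bar that contractors and the internal team have iterated past.
# Thresholds and tier names are illustrative, not a prescribed standard.

TIERS = [
    ("domain contractors", 0.0),   # high tolerance: see every iteration
    ("analytics team", 0.3),
    ("business champions", 0.7),   # only exposed once output is "decent"
    ("general audience", 0.9),
]

def eligible_tiers(quality_score: float) -> list[str]:
    """Which audiences a solution at this quality level may be shown to."""
    return [name for name, threshold in TIERS if quality_score >= threshold]

# Early iterations stay inside the building team...
print(eligible_tiers(0.4))  # → ['domain contractors', 'analytics team']
# ...and only later iterations reach stakeholders and the wider audience.
print(eligible_tiers(0.95))
```

The design choice this encodes is the one from the conversation: stakeholders' time is expensive and their tolerance for bad output is low, so the gating protects their first impression while the high-tolerance tiers absorb the 20-30 business iterations.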
Nagaraja Srivatsan (21:36)
Amar, it's fascinating. When you try to get the business stakeholders on your bandwagon, many times they ask for ROI and say, hey, what is the ROI on this? And as you said, sometimes you have to go backward before you go forward. So how do you get them on board? That seems like a very critical step to get them on the journey. It's easier said than done, I'm sure. What are some carrots or sticks which you use to make them come on the journey?
Amar Drawid (22:02)
So interestingly, see, I've been in the industry for 25 years. I would say for the first 23 years, it was very hard to get people on the bandwagon for analytics. In the last couple of years, ever since gen AI has come in, it's the reverse problem. There are way too many stakeholders coming to us saying, hey, I have this idea. It's because ChatGPT got into so many people's hands so quickly, and so many people were able to try so many things, that people have a decent idea about what can be done. So it's more about how do we package that now and how do we provide that. Of course, we have to frame that in terms of business impact. And one of the things I keep telling them is that, yes, you can use ChatGPT. You can ask, give me some ideas for my kid's birthday party, and it will give you some ideas. Now, yeah, 10% of that could be wrong. Doesn't matter, because the other 90% is there. And so when you're using ChatGPT or Claude or whatever in your personal life, you're using it more when you don't know something about a subject area, and you're asking some kind of basic question and getting some kind of answer which is good enough. The problem in a corporate setting is that the people who are using these systems are themselves experts in that area. And now they want even higher-level insights from them. So the window for mistakes is very small in the corporate world. And that's why what I was talking about - the prompt engineering, the RAG, all of that - that's what you need to provide to gen AI, because the expectation is much higher. And I don't think people realize that a lot of times, but that concept is important. And that's why having those conversations with these people is very important in terms of the ROI. It's more about, is this helping you in decision-making, right?
Instead of really figuring out the exact ROI - okay, you're going to make, say, a hundred million dollars or something like that - I think a lot of these, especially the insight generation ones, are more about enabling better business decisions. There are a couple of examples which I see clearly where you can very easily show ROI. Apart from those, it's more about enhancing productivity but also making better-informed decisions. I don't want to just jump on the productivity bandwagon. I think it's also about this: we have too much information that a human being cannot process. Can we use these systems to get those insights from multiple sources in one place?
Nagaraja Srivatsan (24:54)
You did give a different ROI, which is better decisioning, and also helping the human capital make the right decision. And that takes a different skill from the human, right? Initially, we just do work, and sometimes it's the rote work and grunt work. Now what you're saying is, no, no, no, that's all fine, but you now need to be putting your career on the type of decisions you make. And that's a little scary, because before, I did the work and somebody else - the powers that be - made the decision. Now you're empowering that decision-making down to even the lowest and lower ranks. How do you coach and train people on this? How do you get them comfortable? How do you get them down the path of using this tech?
Amar Drawid (25:39)
There is always pressure in most companies to have enough people to do the work. And so what we're saying is that, yes, if you have this, then instead of getting more people on the team, you can get the work done with a lean team. And that's a focus a lot of the companies have. So we do that. I don't think right now I'm seeing a lot of people getting really scared about losing their jobs, at this point at least, because this is so new. And I've got to tell the truth, Srivatsan. I mean, it's been, what, almost two, three years. At least in commercial, there haven't been these fantastic solutions that are really solving a business problem. I haven't seen those gen AI solutions. I've always been trying to look for those, because I would love to just internalize those solutions. I'm not seeing those. Okay? So what I'm seeing is that there's a lot of hype. People want to do a lot of stuff. There are a lot of people who are trying to do stuff, but the results haven't been fantastic. So yeah, there hasn't been a really great example where, okay, we got the solution, so a lot of these other people are going to be redundant because of it. That hasn't happened. I think it will happen to some extent at one point, but I think it's more about, let's just get richer insights and do things faster. That's my main thing.
Nagaraja Srivatsan (26:55)
And there's a whole body of knowledge right now saying that that incremental look at workflow and productivity did not hit the mark with gen AI; you were spot on. And they said you have to reconfigure workflow and redo and rethink and reimagine how work needs to be done in a new AI environment. So more and more, as I look out at the marketplace, re-engineering workflow is much more important than just augmenting a workflow. And that's where the POCs are right now. The POCs are more about augmenting and removing rote and grunt work. But in any transformation, as you can imagine, now that you have a new tool, you need to really reimagine how work gets done. So it's just a completely different landscape. As we start to wrap up: as you said, you haven't seen big-bang success yet in commercial, but you see a lot of promise. You see, of course, change and change management. Where do you see this going in the next five years, or even three years? Where do you see this market going in the next three years?
Amar Drawid (27:56)
So first of all, I don't know how much progress there's going to be in the AI technology. I don't know if we're going to start plateauing at this point in terms of what gen AI can do, versus whether it's going to keep going into, whatever, the sentient being or whatever people talk about. I don't want to comment on that, right? The way I think is that in the next three to five years, these will get incorporated into most of the pharma workflows. They will become basically table stakes. Some examples, right? A lot of the document writing - I don't think three to five years down the road anyone will be writing documents starting from scratch. That's not going to happen. There are going to be first drafts that are done by gen AI. There will, of course, be review; there will be changes by humans. Are these going to be completely automated workflows? I don't think so, especially because in pharma, patients' lives are on the line, right? So you have to be very careful at the end. But I do think that we will be able to write a lot of these documents in probably 20% of the time, and probably spending 20% of the budget, compared to what we're spending right now. That includes clinical documents, documents related to the global value dossiers or market access, but also a lot of the promotional material. I would say you don't need to go and spend millions of dollars with the ad agency to get all the creative stuff. We've started doing that, and we get so much variety of different promotional material very quickly. Yes, we do now need to do the MLR, right? The medical legal review. We can do it, right? But I think a lot of these things which are new right now, that people are starting right now, should be table stakes in the next three to five years. Beyond that, what we can do, I don't know.
I would love for this to have really big breakthroughs, especially in research, like getting new molecules that can become really strong drugs. That would be, I would say, a big game changer, but we have to see. I mean, as we both know, we talked to a lot of the people who are trying to do that in these biotech companies, right? And I do hope some of them will have fantastic success.
Nagaraja Srivatsan (30:25)
Fantastic. As we come to the conclusion of this podcast, what are the key takeaways people can take from this session?
Amar Drawid (30:35)
I would say two takeaways, right? One is don't fall for the hype, and don't just rush to do something for the sake of doing something. We have seen a lot of these potential magic bullets in our careers, right? I started my career with the silver bullet of bioinformatics and genomics, where we thought that once we sequenced the entire genome, we would be creating drugs right and left and we'd cure all the diseases. That was 25 years ago, right? Was there improvement? Of course there was a lot of improvement, but it didn't solve all the problems. Same thing here, right? Let's have the right expectations. Is it going to change the way we work? Yes, it is. But is it going to solve all the world's problems? It's not, right? So given that, what is very important to me is always keeping the business in focus. We're in the pharma industry. Whether you're working in research, development, commercial, medical, or manufacturing, it doesn't really matter. Think about the scientific question or business question that you're trying to solve, and then think about how you can use gen AI or AI for that. So start with the problem and then try to find a solution, rather than the other way around. And even in that, you have to think about what is feasible versus not. So one is you're coming from the problem point of view, and people have a good sense of which problems are the difficult ones to solve, or whether solving a problem has a bigger impact or not. The business people usually have a good idea about that. The feasibility, though, they usually don't have a good sense of. This is where the tech people need to do a very good job of saying what is really feasible and what is not.
So for example, some of the insight generation work is feasible right now, but again, as we talked about, you have to have domain knowledge. Some of the content generation is becoming even easier now, so we are making a lot of progress there. You have to marry the potential impact and the feasibility together, and we need to think in terms of not just the POC; we have to go beyond the POC at some point. Yes, we're doing POCs sometimes, but we are already thinking about the full platform, because what we're finding is that, yes, there are different types of use cases, but a lot of times the outputs you need are similar. Sometimes it's text, sometimes it's slides, right? So you don't need to develop the POCs in a vacuum. You can have one platform where you start doing these, right? But again, start with the POCs, get the experience, and then start getting to the platform. This is something that's here to stay. This is something that needs to be, as you said, part of the business workflow. That's the only way of saying it was successful. And in the end, I think it's going to be more about the business people: are they happy with it? Are they using it? What are the business decisions in which it was used? That's how we measure it. That's how we move forward with it.
Nagaraja Srivatsan (33:47)
No, no, this has been a fascinating discussion, Amar. Thank you so much. I really appreciate your insights and wonderful to have you here. Thank you.
I think it was a very fascinating discussion. I like the way Amar articulated almost like an SDLC for how you would go about building these gen AI platforms within organizations, how he brings in the domain expertise of experts, complementing them with the right technology expertise, and making sure that you are creating a good environment of active input and feedback. It's really fascinating to hear that journey, with specific and tangible use cases.
Daniel Levine (34:31)
Amar said the state of the art of AI today is found in research. He sees things like clinical data management, commercial, and even manufacturing as having a long way to go. It's interesting in some regards, because I would think the implementation challenges there are much smaller. Did that surprise you?
Nagaraja Srivatsan (34:55)
It was a nuanced answer, and that's where we went in the podcast. Amar started with a definition of AI, which is a broad one that includes classic AI, machine learning, and generative AI. As we started to unpack that, he first gave us a scoreboard on where classic AI is being used across the board. And he's spot on: it's being used quite a bit in discovery, while other parts of the organization are lagging behind. But later on in our conversation, we hit upon the use cases for generative AI. He talked about how large document sets with unstructured data make a very good use case for gen AI, and there we explored the potential of gen AI both in the clinical context and in the commercial and medical affairs context. So I think there's a natural separation between the classic definition of AI and what we're trying to do from a gen AI perspective.
Daniel Levine (35:54)
He also distinguished between AI as a technical solution and the business solution and the number of iterations necessary to get that business solution right. Is that on par with what you've seen?
Nagaraja Srivatsan (36:08)
I think so. I think he articulated a very good way of bringing people and experts along. First, there are lots of experts in the organization. He said it's not like using ChatGPT to find out the 10 things you need to do for your kid's birthday, right? Here it is really about experts wanting to make sure that they are getting the right answers and the right expertise to enhance their work. So the first part, he said, is pick the experts and keep things natural to what they do. How do they ask questions? How do they bring together their teams? How do they really engage with systems? Start with that. Then he talked about how technology can provide the answers. He also said something very interesting: when you create a center of excellence for generative AI, like what he has done, he goes and hires experts into that team so that they can do the first iteration of activities to get the right answers. And therefore, when they come to the real experts, the experts are not wasting their time but enhancing the work. Even with all of that incubation, he said that it's iterative. It's not a one-and-done deal. You give feedback, you gain feedback. Sometimes it can be 20 to 30 iterations, but it's a very good and informed process, because human and AI are working collaboratively, not humans against AI.
Daniel Levine (37:29)
We talk a lot about change management on this show. He talked about the need to be careful about rolling out AI and bringing the user along. And the two of you talked about taking people on a journey, the idea of making the stakeholders own what's being built, and understanding, when things don't go as expected, how to be tolerant of that. What did you think of the way he phrased all that?
Nagaraja Srivatsan (37:54)
It was a very critical part. You have to bring people along on the journey. He said three things, which are very important. First, you cannot drop AI on somebody and say, use it, because then they will lose it. They won't use it. Second, bringing them along on the journey means that it has to be their ideas, their thought process, and their inputs. The more you make it about them and their journey, the better it's going to be. And last, he said that as you start to iterate, you have to allow a certain risk tolerance and say AI may fail, AI may not do all the jobs, but we're going to come out of this whole effort with a better mindset, because we're both going to have learned what works and what doesn't work. I think that's a very critical part of the experimental, learning-oriented organizational thinking you need to make this change management of AI work.
Daniel Levine (38:46)
Yeah, there's still at times a disconnect between what people expect AI to do and what it can do and is doing. In some regards that came up when you talked about ROI, and he talked about it not being a matter of improving productivity so much as leading people to better decisions. At the same time, he talked about the way people get lost in the AI and forget that there's a problem to solve, and about staying focused on the business problem or the scientific question. Is there an adjustment that needs to be made in the way people think about AI as a tool in the life sciences?
Nagaraja Srivatsan (39:34)
I think you hit upon two topics which are very important. One is there's an intrinsic ROI in terms of productivity and the like, but there's another type of ROI, which is helping you make better decisions. How you articulate the value of that is very important in the ROI discussion. We talked a lot about how you enable people to make better decisions and get them skilled in making them. That's the first part. The second part, as you said, is how you deal with change, bring people along, and make this about them and their journey. And I think that's a very critical part of it, because that's how you make AI better and make yourself much better. So I think we really touched upon some key aspects of how you can make this AI journey work.
Daniel Levine (40:20)
Well, it was fun to turn the tables on Amar today, so, Sri, thanks as always. Thanks again to our sponsor Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.