Navigating the Challenges of Integrating AI in Clinical Development
In This Episode
This episode of the Life Sciences DNA Podcast, powered by Agilisium, cuts through the hype and dives into the real-world friction of embedding AI into clinical development. It’s a candid, behind-the-scenes look at the tensions between innovation and reality—from outdated infrastructure to regulatory hesitation—and how forward-thinking teams are overcoming them.
- It’s not that people don’t want AI—it’s that they’re not always ready for it. Outdated systems, patchy data, and unclear ownership often stall even the best-intentioned AI initiatives before they begin.
- There’s no universal playbook yet. Many companies are stuck asking, “Will regulators accept this AI-driven insight?” The episode explores how pioneers are working with—not around—regulatory bodies to carve a path forward.
- AI is only as good as the data it learns from. Building trust means putting guardrails in place—auditable models, robust governance, and explainability that stakeholders can follow.
- One team can’t do it alone. Successful AI adoption happens when clinical researchers, data scientists, tech teams, and compliance officers come together around shared goals.
- This isn’t about one-off pilots. It’s about designing adaptable, future-ready AI architectures that can flex across different trials, geographies, and therapeutic areas.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Mano Das on the show today. Who is Mano?
Nagaraja Srivatsan (00:29)
Mano is an experienced pharmaceutical executive with a focus on R&D and clinical operations. He currently serves as director and lead IT product manager for trial execution at Bristol Myers Squibb, a role he has held since August 2016. Mano has led significant transformations in clinical trial management, including implementation of global startup solutions. He's an exciting person to talk to and a well-regarded voice in the AI circuit.
Daniel Levine (00:57)
And what are you hoping to hear from Mano today?
Nagaraja Srivatsan (01:00)
Mano and his organization have gone through the AI journey, and I would call them mature implementers of AI. So I'm looking forward to learning how that journey evolved, and also what they're doing, with specific examples and use cases.
Daniel Levine (01:17)
Before we begin, I want to remind our audience they can stay up on the latest episodes of the Life Sciences DNA podcast by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Mano to the show.
Nagaraja Srivatsan (01:37)
Mano, it's great to have you here. I wanted to really get your thoughts on the state of AI today, and specifically in clinical development. It would be wonderful to get your perspective.
Manoranjan Das (01:49)
Sure, Sri. Thank you for the opportunity. As we have all experienced, AI and GenAI are hitting us from everywhere, in our personal lives as well as in clinical operations and development, where we work. In my space, for the work that I do day to day, what I'm seeing is that a lot of experiments and pilots are being done with several new technologies, whether it is AI, GenAI, or, most recently, agentic workflows.
There are some which have scaled, and by scale I mean they're touching, let's say, one therapeutic area or more, or 50% of the clinical trials in a company. But there's more to come. We started the journey probably only 18 months ago. So there's a lot of promise, a little bit of skepticism, but the future is looking great.
Nagaraja Srivatsan (02:37)
That's wonderful. Mano, as you said, it's been an 18-month journey. Maybe you can provide some illustrative examples, maybe pick one or two areas and kind of tell us what was the state of affairs before and then what did you do from an AI perspective to make it better?
Manoranjan Das (02:54)
When we think about clinical development, in my mind I divide it typically into two halves. One is trial execution or operations, and the other is trial data, or data management and biostats. So maybe I'll pick an example from each. In the trial operations area, one of the things that we deal with is a lot of operational data and documents. This data comes from documents in the trial master file.
And one of the examples that I found when I entered this area, with the need to optimize, was that there were up to 18 eyes, or nine people, looking at a document at different stages for quality control, finalization, and inspection readiness. A lot of man hours, a lot of time, human error, peer reviews, and in general, not an optimized process.
With some of the AI capabilities, the goal was to automate. We deployed a variety of AI technologies, including robotic process automation just to mimic clicks, and AI models to read a document, figure out what kind of document it is, and finally make a judgment on whether it is a quality document and, if it's deficient, why. It was a highly successful use case. It did not work for the entire set of documents, but it worked for over 50% of the trial documents that go into a trial master file, and it delivered a huge cost, acceleration, and quality benefit to the organization.
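As an illustration only, here is a minimal Python sketch of the kind of triage flow Mano describes, where an RPA layer feeds documents to a classifier and a quality model, and anything outside the well-performing classes falls back to people. The document types, model calls, and thresholds are hypothetical stand-ins, not the actual system.

```python
from dataclasses import dataclass, field

# Document classes where the models were found to perform well (illustrative names only).
SUPPORTED_TYPES = {"monitoring_report", "protocol_amendment", "lab_certificate"}

@dataclass
class QCRecommendation:
    doc_id: str
    doc_type: str
    in_scope: bool                       # falls within the classes the bot handles well
    quality_pass: bool | None = None     # None when the bot makes no call
    deficiencies: list[str] = field(default_factory=list)

def classify_doc_type(text: str) -> tuple[str, float]:
    """Hypothetical classification model: returns (doc_type, confidence)."""
    return "monitoring_report", 0.94     # placeholder output

def score_quality(text: str, doc_type: str) -> tuple[bool, list[str]]:
    """Hypothetical QC model: returns (passes, deficiencies)."""
    return False, ["visit date missing on page 2"]

def review_document(doc_id: str, text: str) -> QCRecommendation:
    doc_type, confidence = classify_doc_type(text)
    # Out-of-scope or low-confidence documents (e.g. handwritten signatures) get no bot call.
    if doc_type not in SUPPORTED_TYPES or confidence < 0.80:
        return QCRecommendation(doc_id, doc_type, in_scope=False)
    passes, issues = score_quality(text, doc_type)
    # The bot only recommends; a human still makes the final, GxP-relevant decision.
    return QCRecommendation(doc_id, doc_type, in_scope=True,
                            quality_pass=passes, deficiencies=issues)

print(review_document("DOC-001", "...extracted document text..."))
```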
Nagaraja Srivatsan (04:27)
That's a fantastic example, because as you know, the trial master file is kind of the storage for all clinical documents. And over time, it's like a closet: things get dropped there and you don't know exactly what is in it. So I think this is a very good way of making sure that the right documents are getting there with the right classification, but also, as you said, making sure that the document and the intent of the document are correct. It's a wonderful example. Tell me, as you went through this journey, it must have been quite a change from previous processes to current ones. Walk me through what that was like from a people, process, and technology perspective.
Manoranjan Das (05:07)
Yes, it was quite a journey, and everything didn't happen in one big bang; we went through different phases and refinements of people, process, and technology. On the people side of things, initially there was skepticism about the capability: can a bot actually do what a human is intended to do? What accuracy will it have? What hallucinations or mistakes will it make?
But over a period of time, when the bot did its job and was peer reviewed by a human, we could prove that it operated at parity with or better than a human. We eventually got there, and when I say eventually, it was after months and months of testing, retesting, and reproving. On the process aspect, we all know about GxP validation processes, and most companies probably don't have a well-defined process for validating AI technology under GxP. So it's difficult to get past those barriers. One workaround or alternative is that in a decision-making step, a human is always in the loop. That was the process compromise we made: although the performance of the bot was amazing, the bot would only recommend, not take action.
That was the compromise that everybody was happy with, and it's still working well. From a technology perspective, things are not perfect either. Some examples of initial failures: what if the bot is looking at the same thing that a human has already looked at? How do we avoid that? How do we make sure the bot has access to the right documents, simple things like that? The password for the bot expired; did the human manager of the bot take care of that? Or some more complex examples: it is exceedingly difficult, as you can imagine, to identify a human signature. It is very easy to read printed documents, but when it comes to human signatures or scanned documents, the bot doesn't do a great job; at least right now we don't have a model for that. With some of these limitations, we focused on the art of the possible: these are the classifications or types of documents where this works well, so let's move forward with that, within the boundaries of what people and process would also allow.
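To make the "recommend, don't act" compromise concrete, here is a small hedged sketch of the pattern: the bot records a recommendation with an audit trail, and nothing changes state until a named human signs off. The record fields and statuses are assumptions for illustration, not the real workflow or schema.

```python
from datetime import datetime, timezone

def bot_recommend(doc_id: str, verdict: str, reasons: list[str]) -> dict:
    """The bot writes a recommendation plus an audit trail; it never finalizes."""
    return {
        "doc_id": doc_id,
        "recommended_verdict": verdict,        # e.g. "quality_pass" or "deficient"
        "reasons": reasons,
        "recommended_by": "tmf-qc-bot",
        "recommended_at": datetime.now(timezone.utc).isoformat(),
        "status": "PENDING_HUMAN_REVIEW",
    }

def human_decide(record: dict, reviewer: str, accept: bool) -> dict:
    """Only a named human reviewer can move the record to a final state."""
    record["status"] = "FINALIZED" if accept else "REJECTED_BY_REVIEWER"
    record["decided_by"] = reviewer
    record["decided_at"] = datetime.now(timezone.utc).isoformat()
    return record

rec = bot_recommend("DOC-001", "deficient", ["missing signature page"])
print(human_decide(rec, reviewer="jane.doe", accept=True))
```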
Nagaraja Srivatsan (07:23)
Now that part about classification, or a Pareto analysis of where it works and where it doesn't, is an amazing lesson learned. I've learned it many times: you don't have to make everything work 100%. You can apply the 80-20 rule, take the 80% it can actually do completely, and route the 20% it cannot do, or where it fails, to a human. It's a kind of re-engineering of the process workflow. Did you see something similar happen with your processes?
Manoranjan Das (07:51)
A little bit. So for example, we had an initial hypothesis on which classifications or types of documents a bot could handle well. We assumed, incorrectly, that it would only be effective for English documents. What we found is that it performs equally well across all languages, for example. We also assumed that it would only work for well-defined PDF documents and probably wouldn't work at all with scanned documents, even with some of the OCR technologies, et cetera. The scanning works really well; it is with signatures that it falls short. We had also placed some thresholds: to your point, if it exceeded 80% accuracy, we would take it.
And when I say accuracy, I mean correctly saying this is a pass, the document looks good, the document is not good, or I can't make a decision, let me pass it on to a human. I think we exceeded 90%, but within the classifications where it worked well. So with new technology, it's always going to be about learning. Don't go in with a fixed mindset that this is what will work; be open to changing your mind and your assumptions as you get in. I gave you only two or three examples of where this happened, but there are many others where our assumptions were wrong. It made us humble. Assume nothing.
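The 80% and 90% figures above suggest a simple per-classification stage gate. The sketch below, with made-up data and labels, shows one way such a gate could be computed: measure bot-versus-human agreement per document class and only enable the classes that clear the threshold.

```python
from collections import defaultdict

def per_class_accuracy(results):
    """results: iterable of (doc_class, bot_verdict, human_verdict)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for doc_class, bot, human in results:
        totals[doc_class] += 1
        if bot == human:
            hits[doc_class] += 1
    return {c: hits[c] / totals[c] for c in totals}

def classes_to_enable(results, threshold=0.80):
    """Only classifications that clear the stage-gate threshold go live."""
    return sorted(c for c, acc in per_class_accuracy(results).items() if acc >= threshold)

# Illustrative labelled sample: signatures are hard, printed reports are easy.
sample = [
    ("site_signature_log", "pass", "fail"),
    ("site_signature_log", "cannot_decide", "pass"),
    ("monitoring_report", "pass", "pass"),
    ("monitoring_report", "fail", "fail"),
    ("monitoring_report", "pass", "pass"),
]
print(per_class_accuracy(sample))            # {'site_signature_log': 0.0, 'monitoring_report': 1.0}
print(classes_to_enable(sample, 0.80))       # ['monitoring_report']
```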
Nagaraja Srivatsan (09:25)
Yeah, I think that growth mindset, and more importantly the experimental mindset you talked about, is a very critical component of what's going on with AI, because you have to keep experimenting. And I like the way you said humility, because you learn from its outputs and it learns from ours, which is quite different from a deterministic system, let's say a CTMS or an eTMF, where it either works or it doesn't. Here it's almost a symbiotic relationship of experimentation, where you're learning how to use the tool and it's learning how to work well with the human. Tell me a little bit about that change management process. How did you get that experimentation mindset going within the organization, and how did that happen?
Manoranjan Das (10:10)
Initially, as I said, at the beginning of our journey, and I think I didn't mention this part, we had a lot of use cases for automation and bots even within the trial master file space. This one bubbled to the top as high value and is an example of a success story. We have at least three or four examples of experiments that did not give the right output.
In all cases where we started, there was skepticism, especially from the end users. The only way to change the minds of the end users, or the leaders of those departments, was with facts and data. So we had stage gates in our experiment, with particular thresholds to prove based on data. In a couple of examples we met those, but then we were asked to provide even more data, as you can imagine when you're running a bot for the first time: prove it over the next three months of data. With all those stringent needs for evidence, because you're working in a GxP-compliant, very inspection-heavy area, we were able to change the mindset of the leaders. But the leaders are not the workers. Among the people who do this job day to day, there was some initial resistance: if the bot is going to do 50% of my quality review work, am I at risk of losing my job? And that is a battle that no technology or technology solution can win. It is for the leadership to instill confidence in the organization that we want to give you time back to do more high-value work. As the organization and the end users saw that the time they got back was being diverted to other, higher-value work on critical studies, confidence grew that people were not losing jobs. Some sense of fear still exists, but this one, having been proven over several releases and several quarters of work, remains a success story. There are some others where that same initial wave of skepticism does exist. So in my case, at least, the two critical factors were proving it with facts and data, and making sure the heads of department were aligned that this was something they wanted to execute.
Nagaraja Srivatsan (12:31)
That's a great part of change management: you have to align the heads of the departments to give that confidence. I also like the phrase you used, that we're taking away the mundane work and giving you time back for higher-value work. But again, everybody has a little bit of skepticism about what that means, so it's a change management exercise. It seems like you're managing that, but it's ongoing, because it's not one and done as the bots keep coming. You said it's been an 18-month journey and you now have organizational support. Tell me, what kind of stage gates or metrics does one look at to say this is a successful effort?
Manoranjan Das (13:12)
What we did, and I don't know if this is a rubric for success because we were also experimenting, was set an initial goal or stage gate to cross some thresholds: 80% coverage and accuracy, for a very limited set of document types, and English only. We set those boundaries intentionally. Once we went past that, the next stage was: can we now do this for a higher number of classifications? Our next stage gate in this example was 10% of all documents in the TMF, and that by itself is a big number, about 10,000 documents every month, at the same or better accuracy than a human reviewer, which in this case was 90%. We met that 10% and 90% accuracy as well. And beyond that, we were able to go from 10% to 50%, so that's the scale we've achieved right now. Then, once you prove it in a particular area, documents and quality review, there are other ambitions and offshoots of these use cases. For example, if we can quality review a document once it is in the TMF, why not consider indexing the document as it comes in?
Today a human needs to read the document and figure out the metadata about it. Let the document just come in and, maybe theoretically, you can fill in all the metadata attributes automatically, plus a few other use cases like that indexing one. So we are on a journey where we had to go through four or five stage gates of consistent success. And when I say consistent success: with other bots, what you have seen is that sometimes you have to stop at stage one or stage two, and that is it. We have many, many experiments that have failed and not reached scale, more than have reached scale. That humility also has to be there. Sometimes I've seen people and leaders make this their baby: I have invested six months of my time and effort and my team's effort, and I need to find a way to make this succeed. That mindset doesn't work here, because there are many, many problems to solve. It's okay if the bot doesn't do a good job today on these things. There are so many evolving technologies and solutions coming that the problems of the past will be overcome very, very quickly.
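The indexing offshoot Mano mentions could look roughly like this: a model proposes the TMF metadata for an incoming document and a human confirms it before filing. The attribute names and the extract_metadata call are hypothetical, used only to show the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class ProposedIndex:
    doc_id: str
    doc_type: str
    study_id: str
    site_id: str
    document_date: str
    confidence: float
    needs_confirmation: bool = True   # a human still confirms before filing in the TMF

def extract_metadata(doc_id: str, text: str) -> ProposedIndex:
    """Hypothetical extraction model producing TMF index attributes."""
    return ProposedIndex(
        doc_id=doc_id,
        doc_type="monitoring_visit_report",
        study_id="STUDY-123",
        site_id="SITE-045",
        document_date="2024-05-01",
        confidence=0.91,
    )

print(extract_metadata("DOC-778", "...incoming document text..."))
```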
Nagaraja Srivatsan (15:43)
No, I think that experimentation mindset, and accepting that it's okay to fail, is a very big lesson not many people take on. The ability for somebody to say failure is accepted is itself a big part of an organization's culture, because most people want only success metrics. If you go and say, hey, I did 10 bots, two were very successful and eight were not, it requires a good culture to encourage that it's okay to fail, because failing fast is actually going to help you succeed sooner than lots of drawn-out effort. So how did you build that fail-fast culture? That must have taken some change from classic IT teams that don't believe in fail fast and want to make sure everything is successful. Tell me a little bit about that journey.
Manoranjan Das (16:31)
I will not take any credit for that culture, because the culture came down from a few of our forward-thinking leaders, both in technology and in the business, who set up what we loosely call our innovation agenda. It is something that is funded, not hugely, but there is some limited funding, and some individuals or team members, a cross-functional team, are put together to drive that innovation agenda. The objective of that innovation team is to look at the strategic objectives of the company, like accelerating clinical trials, cost savings, or transforming an area of the business, and then look at where tech can help. And it's not only tech. Sometimes we say we can't get there fast enough ourselves, so let's go to this partner or vendor we saw last month at a conference and have them try it out. Then that steering committee or innovation team proposes experiments in different areas, gets approvals, and works to report back on those experiments and their successes or failures. I would say we have been very fortunate to have had two or three leaders who were willing not only to talk the talk, but to actually put resources and funding behind it. It has been a win-win journey for them as well, because they have seen benefit in their areas of the business. We are here talking about some of the successes, so hopefully it all worked out in the end.
Nagaraja Srivatsan (18:11)
That's fantastic. Mano, you had talked about two parts of the journey. One was this whole trial documentation part; the second was clinical conduct and data. Why don't we switch to that: what were the before and after situations, and how did AI and its application help?
Manoranjan Das (18:27)
In the data management world, one of the big, time-sensitive, and effort-intensive pieces of work is discovering data queries and protocol deviations by looking at the data. I'm sure in a large majority of companies there's a big workforce, whether internal staff or external outsourced consultants, doing just that. The goal of the project or assignment we were given was not necessarily to automate everything away, which is what comes to mind when you think AI, but to get speed: speed in clearing out data queries so that we could get to database lock faster. With that objective in mind, we looked at several initiatives or ideas to get there. Some were tech, some were just process. I'll highlight a couple in the interest of time. The tech idea was: let's look at all the different criteria or conditions at the study level through which a human finds or initiates data queries. We had to almost learn what the human does so that we could replicate it in some automated fashion, and that was for both data queries and protocol deviations. We were able to do that with some success. But as part of replicating the human intelligence, a couple of learnings and interesting findings were that maybe the process itself is not as efficient. Meaning, how can you rely on a set of rules, or what we call data review plans, and then review exactly according to that plan? The plan has to be flexible enough to look at outliers, not just rely on itself. The second thing was that the typical way you get data queries is to look at data listings and, based on the listings, look at outliers so you can identify a protocol deviation or a query.
A different way would be: can you chat with the data instead of looking at listings? So one of the outcomes was a chat interface that could look at the data and just answer questions: show me all the patients who have this or that. Overall, that idea resulted in a large amount of automation, so very little had to be done manually. We did still need a human in the loop, but the identification of protocol deviations and data queries worked well in most of the use cases. From a process perspective, another way to achieve database lock quickly addressed the simple, common-sense observation that people procrastinate. Whenever the last patient's last visit happens and we have a deadline for database lock, it's: okay, let's now make sure we have all the queries and that our sites are responding to them, and there's just a big rush at the end of a trial. The process change was something a fifth grader could come up with: at a particular cadence in a trial, let's say every quarter, we identify the patients who have gone through the entire trial and completed their last visit, and we look at locking the database, or freezing the data, for those patients. We call those patient groups, and we said, let's clean patient groups in batches rather than towards the end of a trial. There was technology behind it, but it was just common-sense process engineering.
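The "patient groups in batches" idea is mostly scheduling logic. Here is a minimal, assumption-laden sketch: at each review cycle, select patients who have completed their last visit and are not yet frozen, and queue their data for query resolution and a soft freeze. Field names and dates are illustrative.

```python
from datetime import date

patients = [
    {"patient_id": "P-001", "last_visit_completed": date(2024, 3, 10), "data_frozen": False},
    {"patient_id": "P-002", "last_visit_completed": None,              "data_frozen": False},
    {"patient_id": "P-003", "last_visit_completed": date(2024, 6, 2),  "data_frozen": False},
]

def next_cleaning_batch(patients, cutoff: date):
    """Patients who finished all visits by the cutoff and aren't yet frozen."""
    return [
        p["patient_id"]
        for p in patients
        if p["last_visit_completed"] is not None
        and p["last_visit_completed"] <= cutoff
        and not p["data_frozen"]
    ]

# Quarterly cycle: clean and freeze whoever has completed by the end of Q2.
print(next_cleaning_batch(patients, cutoff=date(2024, 6, 30)))  # ['P-001', 'P-003']
```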
Nagaraja Srivatsan (22:07)
Those are the simple things, right? Many a time we try to overcomplicate things, but it's the simple stuff that can minimize or remove impediments to success. What you did was amazing, because you're not waiting for a large volume at the end, and you're not adding latency: if the patient has finished this month, let us finish off all their data before we go to the next one. So a cohort-based approach was very smart. Now, as you went through this, give me a little bit of those stage gate measures you talked about in the previous example. What was that? Was it a 50% threshold, an 80% threshold? Where did the stage gates happen? And also talk to me about people, process, and technology: what changed for people and process? You've talked a little bit about the technology side, which was very good.
Manoranjan Das (22:55)
Unlike the operations example from before, where we had metrics and stage gates, this project felt more random and unplanned. I didn't know where we were going for the most part, so I'll explain that with a few examples. To begin with, it was unclear whether the solution should be built in-house with knowledgeable resources or fully outsourced to a vendor company that probably has deep expertise in this area; we had a lot of difficulty making up our minds. Eventually, through competing experiments, we got to some initial results, and it was quite a big decision, because, to be frank, one of the solutions was better but more expensive and limited in scope. The other solution, which was initially less effective, with lower accuracy and lesser results, was wider. It could not only identify PDs and data queries, it could also expand how people worked: they could chat with the data rather than look at a listing. The power of that
improved end-user experience, and the potential to expand that foundational platform if you build it, is how we initially made the decision on whether to invest in the idea or not. And that brings one big idea to the forefront: every company and team has limited resources, so you have to choose where you want to go. This is an example of where we chose a less expensive but probably also less reliable solution at the beginning. In the end we got there, but it was a conscious, informed decision. Then, once we had a solution, the next challenge was adoption, because different study teams had different levels of acceptance of the solution. Everybody is used to a particular way of working; the inertia of "this works for me, my timelines are at risk, I don't want to go first, do it for the next study" is what everybody told us. Nobody wanted to be the first pilot study or the first set of pilot studies. We got there with some sponsorship, but that phase of trying something new was extremely difficult. And this is an example of something that is constantly evolving as well. We have piloted it on probably about 25% of our book of work, or clinical trials. For scale, we have a rubric that it should at least touch half of our clinical trials. So this is still a pilot and not yet fully done, unlike the other example. Maybe I'll have more to share with you, with a rubric and what worked well, when we are at scale.
Nagaraja Srivatsan (25:48)
Absolutely, I'd love to have you talk about that in subsequent episodes. Mano, as we start to look ahead, you've really painted a journey: 18 months ago there was not as much automation, and you've gone through people, process, technology, stage gates, and change management. What do the next 18 months look like? As you put your hat on for the future, is it the same playbook? Is there a new playbook to evolve? Where are you seeing the technology, as well as efforts in the space, evolving?
Manoranjan Das (26:19)
The future looks even more exciting. And I'm still learning; it's almost like a child with a toy. I'm learning a little bit about the potential of agentic workflows now, which is not just making recommendations and assisting somebody; as I understand it, it will actually learn on its own and take action for you. I don't know where this will go eventually, but I can chart at least a dozen ideas on where to try this. So that's the promise. And the tech, and this is just one of the techs, is evolving so rapidly that every six months it seems like there's a "why don't you try this?" So when you talk of 18 months, I'm sure we'll have a lot more tech to play with. With respect to the people aspect of AI adoption, some skepticism, healthy skepticism, does exist: things like ethics, bias, and regulations, and there are smart people working on figuring out how to right-size those. One other thing that I missed, something that has worked really well and should continue to work, is this: how literate is your entire workforce? And when I say entire workforce, I'm not talking about the techies or some small number of technology enthusiasts who are looking at all the innovation, but the entire workforce, including the clinical development end users. For the GenAI aspect at least, hopefully everybody in the company, and in the real world, is encouraged to try things on their own. Some simple examples: hopefully everybody's using ChatGPT-type technologies, similar to how we were using Google.
I, for example, in my personal life, can tell you that if I have to refine text, plan my vacation, or refine an essay my daughter is putting together for her college applications, I tend to use all these tools. At a minimum, we should encourage everybody to try these new tools. There will always be a subset of super users looking at the more techie stuff, actually coding and programming, although that is getting automated as well. Having that culture of "tech can solve the problem for you" is something I want to see in the next 18 months, so that the ideas come from everywhere. I see a workplace where you have hundreds of ideas and people upvote or downvote them. That's how we get to ground-up ideation and innovation. Right now it feels a little more top down: this is my strategy to solve this problem.
Nagaraja Srivatsan (29:00)
That's an amazing future, where you're trying to democratize ideas and bring in ground-up ideas. And I agree with you: everybody having that vernacular of AI and using it will lead to much more new thinking and innovation. Lastly, as you said, agentic workflows mean taking big tasks and breaking them up into small tasks that can help you. And the more people think about what those small tasks could be to enable them, the better it will be. This has been a very good thought exercise. As we come to the end of our session, what are some key takeaways you want the audience of this podcast to take away?
Manoranjan Das (29:37)
Irrespective of where you sit, hopefully you can take some time out of your life to actually try these tools, if you haven't done so already. And if you are an enthusiast and innovator, which I know some of you are, hopefully you're taking it to the next level and asking ChatGPT to write the code and deploy the website. Whatever level you're comfortable with, I hope everybody is trying these tools out. I hope you find, now more than ever, that instead of trying to improve your process incrementally, let's do this a little bit better, some of these tools are going to encourage you to rethink the whole process: with these tools, can I do this entirely differently? As examples of the so-called crazy ideas of the future: maybe I can enroll all my patients in a week? Can I get a contract done in one day? And many, many others. They may seem far-fetched and crazy right now, but believe me, when we talk in 18 months, some of these ideas will have been done. So hopefully I can encourage you to look at ways, through technology and your innovative ideas, to get there quickly. As you all know, we are all working to get clinical trials done and find innovative medicines for patients. So faster is better.
Nagaraja Srivatsan (30:57)
Thank you so much. This has been a very illuminating conversation Mano, and really appreciate your flexibility and your time. This was a wonderful experience. Thank you.
Manoranjan Das (31:06)
Thank you, Sri, for the opportunity. Take care.
Nagaraja Srivatsan (31:09)
Thank you
Daniel Levine (31:10)
Well, Sri, that was a great conversation. What did you think?
Nagaraja Srivatsan (31:14)
I think it was exciting. Mano talked us through that 18-month journey and told us how the AI efforts within his organization evolved. He really homed in on the people, process, and technology aspects of AI adoption.
Daniel Levine (31:30)
It's interesting to me as someone who's not intimately involved in implementing these technologies. You asked about the need to re-engineer processes based on what a bot can actually do in practice. Is there generally this type of flexibility of mindset among the people implementing this technology, and are they able to adjust their assumptions and expectations?
Nagaraja Srivatsan (31:54)
Yeah, that's a great question. If you really look at it, previous iterations of technology gave us incremental process improvement. We would take a process and think about how to incrementally improve it or provide productivity to our teams. Now, with AI, bots, and several other evolutions of this technology, we can have a dramatic change in productivity. It's not simple productivity improvement; it could be on the order of 100 or 200 percent, if not more. To do that, you have to really rethink how you're doing your processes, because you're not trying to incrementally fix your process. You're really looking at your process and saying, what can I do at a macro level to completely change it? So rethinking process is a very important component of how you go down the AI transformation journey.
Daniel Levine (32:48)
Mano also talked about the need to win minds. It was interesting to hear him say that, on the one hand, he had to use data to convince department heads, and on the other, he needed to overcome the resistance of employees who feared what they were implementing would replace them. How big a challenge is it to meet the comfort levels of both of these groups?
Nagaraja Srivatsan (33:10)
Danny, one of the critical parts is change management, and change management works at multiple levels. The first is organizational change management, which Mano talked about: the innovation culture, the fail-fast culture, the ability to experiment. Fantastic ideas, because without that you're not setting yourself up for success. The second is, even with that experimentation culture, how do you convince somebody what good looks like? That's where he talked about the stage gates and the process, and about sharing the success of the bot or the AI very transparently. The third is convincing people that this is actually beneficial for them: it is not taking away their jobs, but removing mundane activities and leaving them to do more strategic work. So change management has multiple dimensions: start at the top, making sure you set the tone for the culture; take a very data-driven approach to defining success; and finally, make sure you're convincing every one of the stakeholders of what is in it for them.
Daniel Levine (34:28)
One of the other interesting things Mano discussed was that in designing solutions, there's a need to take a step back and consider some of the faults that existed in the human processes. He said they needed to think about things like procrastination and inefficiencies in their existing process. What did you think about that?
Nagaraja Srivatsan (34:47)
I think that goes back to rethinking how you do your process. If you're only looking at incremental change, then you're really only looking to automate the work as it's done today. The brilliant thing he said was: why don't we look at the work overall, and at whether it is being done at the right time? It was a very good way for all of us to think about our processes and rethink how work is being done and when work is being done. That's a very important part.
And then to be able to make sure you can optimize on both dimensions: what work needs to be done and when that work needs to be done. The last part is who does that work, whether it's AI or humans. So it was a very good framework of what, when, and who.
Daniel Levine (35:34)
A great conversation that really gave some insights into the nuts and bolts of actually implementing this type of technology. Sri, thanks so much.
Nagaraja Srivatsan (35:47)
Thank you
Daniel Levine (35:47)
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.
Thanks for joining us.