Aired:
November 26, 2025
Category:
Podcast

Agents of Change: Creating Regulatory Documents with AI

In This Episode

In this episode of Life Sciences DNA, Srivatsan Nagaraja speaks with Anita Modi, Founder and CEO of Peer AI, about how agentic AI is reshaping the creation of regulatory and clinical documents across the life sciences industry. The conversation explores how AI-supported writing workflows can improve consistency, reduce manual burden, and strengthen quality while keeping humans in the loop. It also highlights the cultural, operational, and technological shifts needed to bring AI-driven documentation into everyday practice.

Episode highlights
  • Reimagining Document Creation with Agentic AI
    This episode explains how agentic AI systems break down writing tasks into modular components, enabling faster creation of submission ready documents while ensuring writers stay in full control.
  • People, Process, and Technology Alignment
    Anita explores why the most successful implementations require medical writers, AI engineers, and product teams to build together, ensuring technology reflects real workflows.
  • Quality, Evaluation, and Human Oversight
    The episode outlines practical quality metrics for regulated content, including accuracy, consistency, completeness, and clarity, plus the essential role of human review.
  • Driving Adoption in a Regulated Environment
    She describes common fears and resistance among writers and how transparency, training, and change management help organizations build trust in AI systems.
  • A Connected Future for Regulatory Workflows
    She envisions a future where documents, data, and decisions are seamlessly integrated, enabling AI to assist across the entire lifecycle of regulatory submissions.

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. Well, Sri, we've got Anita Modi on the show today. Who is Anita?

Nagaraja Srivatsan (00:29)

Anita is the co-founder and CEO of Peer AI. Before launching the company, she held several senior leadership roles at Science 37, where she led transformations in quality, product strategy, and regulatory innovation. She also co-founded Dash Genomics and was part of the founding team of Genos. She combines her background in science and entrepreneurship with a deep interest in improving how medicines are developed, and we're really excited to have her on the show.

Daniel Levine (00:56)

And what is Peer AI? What problem are they trying to solve in drug development?

Nagaraja Srivatsan (01:01)

Peer AI builds agentic AI systems for life sciences companies to transform how regulatory and clinical documents are created. It's a platform which uses specialized AI agents and human oversight to dramatically accelerate clinical study reports, INDs, protocols, and submission documents, cutting time from weeks to days while ensuring regulatory compliance.

Daniel Levine (01:25)

What are you hoping to talk to Anita about today?

Nagaraja Srivatsan (01:28)

I'd like to really get her thoughts on how one can leverage AI from a people, process, and technology perspective. We'll explore how AI teams and human experts come together to help accelerate the documentation process. Ultimately, I want to know how tech companies are coming into highly regulated industries like life sciences, making a difference, and driving transformation.

Daniel Levine (01:59)

Before we begin, I want to remind our audience that they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. And don't forget to listen to us on the go by downloading our audio only version of the show from your preferred podcast platform. With that, let's welcome Anita to the show.

Nagaraja Srivatsan (02:26)

Hi Anita, welcome to our podcast. It's really exciting to have you here. Maybe it'll be good for you to tell our audience, you know, what got you to this place and your journey to get here.

Anita Modi (02:38)

Yeah, well, Sri, thank you so much for having me. So again, I'm Anita Modi, CEO and founder here at Peer AI. In terms of my journey, I have been in this space for almost two decades now, specifically building technology for clinical research and drug development. Prior to starting Peer, I served as the chief quality officer at a company called Science 37 that operated clinical trials in a decentralized manner. Being in the thick of how you operate clinical trials, all the way from designing them to recruiting patients to execution, I saw so much of how the drug development process is done today, and as the chief quality officer, also where so many quality issues come from. You know, I was working with incredibly talented, experienced, brilliant people, and quality issues were often coming up because of the system they were working in, right? Clicking through different technology platforms, manual data entry, fragmented systems. And so for me, where my AI journey really started was in 2023, playing around with many of these tools that were now available and just knowing from my experience that AI, and specifically agentic AI, could fundamentally change how so many of these experts do this work today, and really accelerate that work.

Nagaraja Srivatsan (04:05)

No, that's fantastic. Anita, I want to probe an area you talked about. Let's start with that. You talked about human variability, right? Lots of manual work, lots of things that humans are doing, which caused you, as a chief quality officer, quality problems. So talk me through what kinds of human variability you saw in this submission process, and what are you designing to reduce that variability?

Anita Modi (04:32)

I think in this space, the submission is the end product. There's a lot of judgment that goes into it, right? You're obviously working off data sets, but putting together the story of how to represent that is very strategic. So you have teams working through the data that comes out, learnings from regulatory bodies in the past, and their own strategy as well, to ultimately craft that story. It's like any sort of writing that happens across industries, right? You have to think about the right way to portray it. So for us, the way we think about it is: how do we still enable the voice of the writer and bring in AI in a way that can be a peer to them? That's really where the name Peer AI came from. How do you leverage agents and AI to do some of the repetitive work, support that, and bring a level of intelligence, but leave the control to the human in this process? That's not going away; that's not going to be replaced. But it can be amplified, and I think we're already seeing that today.

Nagaraja Srivatsan (05:36)

So you brought up two great parts, and I'll explore both of them in different ways. There's this thing you described, augmented AI, where you're having AI do all of the mundane tasks and freeing people to do more of the strategic work. So tell me, how did you do a process decomposition? Because it's not about taking X amount of people's time; it's really looking at the process and decomposing it: what's fairly easy and what is not. So walk me through the journey of before and after. What is the current process in this document submission, where are the key heat maps of challenges, and how did you go about solving for that?

Anita Modi (06:17)

Great question. So maybe let's step back a little bit. Let's go back to the world of documentation, right? You can almost think of documentation as what gates the progression of a drug. As a drug progresses ultimately to the commercial stage, at every step regulatory decisions are based off of that documentation. Every approved drug has almost 200,000 pages of documentation. Delays from regulatory bodies can be almost 400-plus days on average. So it really is the backbone of drug development. And to your point, today it is highly manual, right? You have writers who are sitting with Microsoft Word on one monitor, looking through PDFs of data on another, and putting together really technical, really complex documents that are then sent around for more opinions, often through email or multiple different systems. So it is a very fragmented system. And so for us, as we dove in, step one was exactly what you said: how do we understand what the full process is today? And recognizing we are on the frontier of a new technology, you also have to think about the process and the people and how that all comes together. So one of the things we did from the start was to launch our company with a whole team of AI engineers, but also a whole team of medical writers. And I think that has been really critical to exactly what you said: to define the areas where there can be real value, where the time to value is fast, and also how to build the product in a way that suits the way the work is done, putting control back into the hands of the medical writer. I can share some examples, but I think building the company that way has really put us in a position to understand that full process, where there's the most value, where to start, and how to continue to grow in terms of our big vision.

Nagaraja Srivatsan (08:12)

Absolutely. I think you brought up a good point: you need the technical expertise, but on its own that's a hammer chasing a nail, and you have the people expertise, but they don't know what to use the hammer for, so you have to bring those two groups together. Tell me, many times companies face the same thing. They have domain players and tech players, and the techies say, use AI, and the domain players say, I don't know about it. So how did you bridge these two camps? Because they're not cut from the same cloth.

Anita Modi (08:39)

Yeah, it's a really good question. I'll say a couple of things. One, it's putting together a culture very intentionally and holding to it. So we've created almost these artifacts to bring the teams together: everything from a standup where groups work together, to a lot of shadowing, to teams sitting next to each other. You know, it's creating that culture where information is shared. One of our values is actually "be the expert," and we encourage everyone to bring their perspective. And the narrative is: there is no right answer, and we're figuring out this new feature together. I think you have to have a culture of almost psychological safety where you can share those views and bring the best of that together. We've also hired for specific characteristics. So the medical writers on our team, we put them through, not an AI test, but: are they able to articulate the pain points, or what makes a document good, and a lot of the logic that ultimately goes into product building? And how much can our engineers understand other problem areas? We throw them into the thick of writing a document early on to actually ask: do you, as an engineer, even understand the full workflow? It's never perfect, because you are putting these domains together, but I think intentionally creating this culture of collaboration, and the recognition that we're one team together, has really helped us build a product and iterate very, very quickly in this space.

Nagaraja Srivatsan (10:07)

So Anita, I'm going to ask a slightly controversial question. I know it's not all sunny and wonderful. Tell me a couple of instances where that didn't work and how the team resolved it. Because, as you said, the engineers need to have that document perspective. I love that you're asking them to write documentation, which is so important so that they can actually put themselves in the writer's shoes, and vice versa for the AI side. But it's not all going to be perfect. So walk me through a couple of areas where there was friction, because that's what many of our audience members face. They don't get the perfect pairing, but then they work through the issues. So tell me about some places where issues happened and how you worked through them.

Anita Modi (10:50)

It comes down to a lot of direct conversations. You know, some of the things, I'm just laughing thinking back: we've tried to put some designs and Figmas in front of the writers and they're just like, that's not going to work. Like, we appreciate this, but it's almost too innovative, or it's requiring us to change too much. And that's okay, right? I think we're on the sort of crawl-walk journey to adopting AI, and we encourage them to say, this is a safe place where you should bring that up, because at least it's internal, and we're going to share this with customers next. So we'd rather get that feedback here and today, to understand where we can push this as well. A lot of it will be at the design stage or an early stage, with our engineering and product team asking how much we can push this forward, and knowing where the boundary is, at least today, to keep being innovative but within the realms of comfort when you're bringing in a new technology.

Nagaraja Srivatsan (11:41)

Yeah. So, you know, I think you have a good model where the technology and the AI evaluation are done by the domain people, and there's also a feedback loop. And as we know in AI now, evaluation is very critical, and you need that human in the loop to make sure of it. So one, do you have an evaluation framework, and what is it? And two, how did you institutionalize it to make sure that every output of the AI is actually evaluated correctly, and that it's not giving you rogue answers?

Anita Modi (12:20)

Yeah, great question. So when I started Peer, I interviewed dozens of medical writers and I asked, how do you know that this document is good? What does good mean to you? And I'll just say, it was really hard to get an answer. A lot of people said, I know it when I see it. So we did dozens of interviews, started to take notes, and actually created our own framework, which boils down to things like data accuracy, which you mentioned, consistency within a document, completeness of the document, meaning you have all the information it needs, and readability. So we said, okay, these are the fundamental things we hear. For example, when someone is evaluating a draft they get from a consultant or a CRO, or one that's written by a junior writer, this is what they're looking for. So we've done a couple of different things. One, we've had customers just grade us, right? We say, grade us on this framework and tell us, one, how do we compare in general to your benchmark? And two, how do we compare against the status quo of how you're doing this work today, which is not perfect either, right? So how do we compare from a quality perspective to the ideal? The second is that we have our own evaluation internally. We have independent groups of medical writers who essentially grade on this and a few other metrics on a periodic basis. So we internally can also see, are we improving and how are we comparing? And I think we're now far enough in our journey that I'm actually presenting a case study next week with one of our customers that shows, document over document, that we can now measure with them not just speed, which I think everyone talks about with AI, but quality improvement too. And in this space, as you know, quality is time, right? The faster you get to something good, the faster you're able to submit and get to the next milestone. So it's incredibly important to hold that.

Nagaraja Srivatsan (14:05)

That's a really good framework, right? I always say speed with quality, because especially in regulated industries like ours, rework happens because you did it very fast, and then it got messed up, and you had to redo it. So anything to reduce the rework would be good. Again, exploring this whole thing: you're doing it yourself, you have a culture, you could get the teams working, and if they didn't work, you could crack the whip. As you go and talk to medical writers, what kind of resistance are you finding? What are your top five objections? Like, "this will never work in my company because..." What objections do you face every time you meet your customers?

Anita Modi (14:53)

There's a couple that come to mind. One is just disbelief, right? This idea of, this work is so complex, how can AI even support it? Just not believing it. And for that reason, we actually recommend all companies, whether it's with us or anyone, to just start with some baseline of AI literacy. Because I think demystifying AI, and understanding it's not a magic wand but can be helpful with some things, does a lot to bring people on that journey. And so when we face that kind of resistance, that's what we do. We do a workshop, we do a live session, and we say, what are some things you generally do, and we'll just show you live how AI can help you with those things. Not everything; it's not a magic wand, but there are certain things it's really, really good at. So we try to show that, and we certainly see those aha moments in our demos and working sessions and so on. I think that's one. The second is really managing change and new burden for companies. And so our message is: we're in a world now where AI feels inevitable for this work. It's coming in some period of time because the value is really starting to show. And so the question is, how will companies bring that in? Are they going to go to a vendor that's sort of bolting it on, or someone AI-native? And our view is you want to bring in a solution that is easy for your company to adopt, that doesn't create new burden, where you don't now have to create a team that manages templates or configurations. You should be thinking about easy. And that's helped with a lot of the resistance: like, hey, we'll plug into your data sources, we'll take your data as it is, we're compatible with Word because that's not going away anytime soon. Little things like that emphasize that you don't have to change your whole workflow. That also gets at some of the resistance, because it's a lot of change, and wherever we can minimize that, it's supportive.
And then, I guess my last point is, I do think there is value in a peer-to-peer guide at this point. That was why we brought in medical writers from the beginning. So our medical writers lead our training, they lead our onboarding, they lead our support, and they speak the language of our customers in a way that can address many of their concerns head-on, level-set expectations, and support this crawl-walk road to adoption too.

Nagaraja Srivatsan (17:20)

So it's a great framework, right? You're bringing experts, peers they know, getting them comfortable, showing them what good looks like, and walking them through the journey. But what you hit upon is that it's change management, right? And you were a quality officer, and there are a lot of quality processes within pharma that say: no, this cannot be done; this is not in my SOP; oh, this is not GxP compliant; no, the regulators would come after it. And I know I'm sharing just a small universe of the questions you're facing. But a two-part question: how are you helping take your customers down that change management journey? And second, what kind of objections from a regulatory, SOP, and quality standpoint are you hearing, and how are you making customers comfortable that they are GxP compliant, that they won't get into trouble in a regulatory audit, and that they're in good shape?

Anita Modi (18:20)

Yeah, so a couple of comments there. First question: how do we help support this? I will say the companies where we've been the most successful are where we have both top-down support, to bring in innovation and look not just at technology but, again, at process and people and how this work more broadly could be done, and bottom-up support to say: we're open. We're open to this. We don't want to change everything, but we're open to understanding and learning and working with you. For us at this stage of the company, and I'll say rather this stage of the industry, every engagement is a partnership, and I approach it that way, with transparency in these conversations. And I think that's really what it is. It is a collaboration. It is understanding more about their culture. We bring in our best practices. We have programs to say, this is how we recommend you start and then scale, looking at their portfolio as well. We support with all those things like SOPs and whatnot; we have SOPs that we can reference. So we try to bring in all of our learnings and share all that we know, recognizing we're on this frontier, and we'll share what we've seen with other successful deployments. Your second question was about concerns around quality and regulators. I think this is where it comes back to building with the right team of experts. The founding team that we've assembled here has built Part 11-compliant technology, been audited by the FDA, and builds with compliance and security in mind from day one. We have experts in cybersecurity, experts in quality, and we bring all of those folks to the table. And it's really about being up to date on what the regulations are today, how we support them from a technology standpoint, and also what we see in the industry. So to your point about the FDA, I actually think it's amazing and encouraging that the FDA is embracing the use of AI.
They're using Elsa and their own internal tools, and I think they've issued very thoughtful AI guidance with a lot of transparency. They released their checklist last week, and tools like that even help us support our customers. So I think there are really great tailwinds from the regulatory side too, which have created this environment for continued exploration and support within the regulations and compliance frameworks that exist today.

Nagaraja Srivatsan (20:47)

So let's envisage a world today where you have your AI peers helping this process. Walk me through the journey today, what that looks like with humans and AI peers, and then fast forward 18 months, and fast forward 36 months. What do you think that journey will be?

Anita Modi (21:09)

The core of how we build at Peer is recognizing that AI is really powerful. So much intelligence brings speed, but it has to be kept almost on a short leash, right? And so for us, this loop between human verification and AI automation, maintaining that, is our view on how you scale. So today we've really built our product around that, to say: this is the workflow of how you author, and AI can support a lot of it. We have agents that do data ingestion, agents that author, agents that QC, but the work is really controlled by the humans at all these checkpoints, to basically inject that subject matter expertise. So that's really how we've started, with this view that agentic AI is the future and has to be done hand in hand with the human. As we look ahead, our view is: we know documentation drives every step, and we're starting here, but our vision is to ultimately connect documentation, data, and decision making. As we're writing more documents, and if you think of documentation as contextualizing the underlying data, we're actually getting so much smarter and able to start helping our customers move from reactive workflows to proactive intelligence. It's early days, but we're even analyzing a lot of these regulatory body patterns to identify potential issues early on. We've caught some of these flags with customers, to basically auto-generate responses or prepare for questions, or to start seeing cross-portfolio risk detection, or just moving to more proactive submission strategies, optimizing that and thinking about sequencing and timings. All of that comes into place once you essentially have more of a workflow there. Early days of that, but I think that's our path, from documentation into ultimately this next level of intelligence. I think that's what we'll start to see in the next few years.
And the exciting thing is, you know, that's how you truly start to accelerate more and more of this life cycle and ultimately just get treatments to patients faster.

Nagaraja Srivatsan (23:15)

So it's a fascinating journey. Again, a lot of times people ask: are you a maker, making your own LLM? Are you a shaper, taking a model and shaping it? Or are you a taker, just taking what's available and configuring it in the workflow? So where are you on that journey of maker, shaper, taker today? What do you take and what do you make? And as you go through this intelligence transformation you're talking about, where will you likely be?

Anita Modi (23:46)

It's a good question. I think we will continue to move along that spectrum as we drive more and more intelligence. Today, a lot of our focus is: let's take advantage of the tools that are out there and really be thoughtful about the subject matter expertise and how you build that layer into the application, both in terms of, again, the intelligence of what makes a document good, which does not exist anywhere on the internet; it is in the heads of our medical writers and the heads of our customers, and we've captured that, and that's what's really driven our quality. And a very thoughtful interface and experience that lets medical writers work very seamlessly with AI in a way that doesn't put burden on the writer, where they have to learn prompts, but offers a very natural way to engage. And then as we shift to more intelligence, I think we move further along your spectrum in terms of how we bring that in.

Nagaraja Srivatsan (24:39)

As you shape toward the future you described, and I love that, intelligence connecting documents, data, and decisions: are you building a kind of operating platform for that decisioning? Are you going to be in the document workflow space? Because when you go into documents, data, and decisions, you can go multiple different ways. So how are you seeing yourself? What role are you playing? Are you going to be a document-centric innovator or a data-centric innovator? Where is your journey going?

Anita Modi (25:12)

I think of it in a different way. I think the future is agentic AI, and for us, it's to be the agentic backbone for drug development and bring in that intelligence. I think we're in the middle of a major transformation in how so much of this work is done, with incredible tailwinds from regulators and incredible desire from customers to change and look for ways to optimize. This workflow, in our view, will change. How can we bring more and more agentic capabilities to that process?

Nagaraja Srivatsan (25:43)

So I want to pivot. You're a startup growing very aggressively. What does the talent market look like, and how are you getting the right talent for your organization?

Anita Modi (25:57)

Yeah, Sri, great question. I mean, we're building a pretty diverse team here in terms of expertise, right? We're bringing in a lot of AI engineers and then these subject matter experts. The reason they choose Peer is a few things. You know, they believe in the vision. I think we're very well set up here to capture that, and we're already seeing signs of it. And I think it's exciting to be in a place where you're actively reducing cycle time in the drug development process and playing a role in getting therapies to patients faster. So I think that's the number one reason folks join Peer. And second is the culture. You know, everyone here is part of that change. One thing that's interesting, Sri: every six months I do something called a stay interview. So as opposed to an exit interview when people leave, I actually go to every employee and ask them why they stay. Why are they at Peer? They're incredibly brilliant; they could go anywhere. Why are they here, and what can I continue to do to be the best employer for them? And what I constantly hear is the excitement of being on the ground, building something novel, and being on the front lines of changing the space with our customers.

Nagaraja Srivatsan (27:10)

And as you bring in your medical writers and this AI talent, as you said, very brilliant people, how are you meshing their different aspirations? One wants to be cool and innovative; one wants to make sure it's standardized and regulatory. Is that a cultural clash, or are you seeing two sides of the same coin? How are you figuring that out?

Anita Modi (27:31)

I don't see that with the folks we have. I think they all want to be innovative, and they all want to hold to the regulatory bar in terms of quality. That's actually a shared view across both parties. The questions and discussions are often more about how. I don't think there's disagreement on where we ultimately want to be and what we're pushing toward; it's often the question of how we do that. And a lot of that is design and novel work, right? We do so much whiteboarding, and that's where the expertise from both parties, and others, where you have cybersecurity, quality, all these other players at the table too, really comes out.

Nagaraja Srivatsan (28:11)

So you said you do a lot of whiteboarding and so on. Are your teams co-located? Are there lots of remote teams? How do you make co-located and remote teams work together to get that quality going?

Anita Modi (28:22)

That's a great question. So our technology team is mostly centralized here in San Francisco and our experts are remote. We bring the team together fairly often in person and we have explored lots of tools to help us do that when we're not co-located.

Nagaraja Srivatsan (28:38)

Right. And as you know, innovative companies like yours are bringing tech and domain experts together and solving different business problems. And one of the key lessons is that that's the same journey many of our sponsors have to go through: they have to bring innovators and technology together. Talk a little bit about the fact that you're an innovative technology play. There are lots of incumbents within the sponsor, lots of processes within the sponsor, a lot of QA folks within the sponsor. How do you make this work? Because in your company you control the controllables, but at the sponsors you don't. So how are you making this mesh work across the board?

Anita Modi (29:17)

Great question. You know, I think ultimately, where we are today, it is really important to find your champion at our customers. When we think of our ideal customer, a lot of it is, again, their culture: where they are on top-down support, where they are on bottom-up support, and knowing there's a champion. Because you're right, there is a lot to work through, and it's not for the faint of heart. It's helpful that we have a team that's been doing this for a long time and can help support some of those processes. But it is helpful to have a champion who understands those internal dynamics and can help work through them.

Nagaraja Srivatsan (29:55)

And as you said, the whole drug life cycle involves a lot of different documents. You've started with medical writing, the last mile before regulatory submissions. Are there other use cases? Is this domain big enough for you to conquer the world, or do you need other document-centric domains? Walk me through how you're looking at the problem space and which areas of high document automation potential you should be thinking about and going after.

Anita Modi (30:25)

Documentation today, if you look at all the documents associated with drug development, is a $15 billion market. And looking at how the industry has continued to grow, it's expected to be $19 billion in the next few years. So it's a large market, again, heavily manual, heavily outsourced, and it hasn't changed in decades. When we started Peer, we specifically started in authoring preclinical, CMC, and clinical documents, the bulk of where there's high volume, high need, high complexity, and we're continuing to go deeper in those with our customers. Now we're actually looking at expanding into medical affairs and commercial as well. One, to make it easier for our customers to have one partner to work with. And second, given the reality of how data and content continue to travel. So your protocol feeds into your CSR, your CSR feeds into your data, and it continues on ultimately throughout the life cycle of a drug. And I think that's critical as we think about continuing to build intelligence. There are so many signals from different documents along this process, and they're incredibly important to bring together as you start to get into more proactive workflows and intelligence as well. And as I mentioned, I think documentation is where there's such a high need today. There's so much value. And our view is there's continued opportunity as we drive more and more of the decision making as well.

Nagaraja Srivatsan (31:50)

You hit upon one thing: this industry outsources a lot of the documentation process. So tell me, since you have multiple channels to work with, are you going after the outsourcing channel partners as well as the direct sponsors? Are you going directly to the sponsors? How are you going after the market?

Anita Modi (32:09)

We're ultimately going where documentation is done. We have been working with a mix of primarily biotech and larger pharma, but also some CROs as well. Again, it's really looking at where we can add value given, again, how this work is done today and how this work can be done at the end of the day.

Nagaraja Srivatsan (32:31)

Okay, coming to the last few questions: what would be your key leave-behinds for the audience? What do you want them to take away from this podcast?

Anita Modi (32:45)

Yeah, a couple of different things. You know, I think we're all in this journey of bringing AI into so many of our processes. My two pieces of advice: you can step back and think about change as this triangle of process, people, and technology, and I think it's really important to be strategic in how you bring those together. So one, make sure you know what the real pain points are. We dig into this, and we often want to make sure it's not just an exercise to check the box with AI, to say you're using it, but what's the real problem? What's driving the need? And think about the human workflow from day one. How does AI work hand in hand with those expert teams? Where are those control points? Whether it's working with us on documentation or beyond, how will you have experts check and verify and validate? And I think it's really important, as you're shaping that process, to bring teams along. We often say this industry moves at the speed of trust, not the speed of technology. And it's important to build that trust early on, whether it's exposing teams to general-purpose AI tools or bringing in courses on AI literacy. I would say education and demystifying is the most important thing organizations can do today.

Nagaraja Srivatsan (34:02)

And this has been fantastic, lots of good nuggets on what needs to be done in a very fast-growing area, what I call the last bastion for automation, because we're in a very document-centric world. Everybody writes documents, and to be able to disrupt and take that on is significant. So I really appreciate your time today. Thank you so much for coming and sharing your expertise. I really enjoyed the conversation.

Anita Modi (34:27)

Yeah, thank you so much for having me Sri.

Nagaraja Srivatsan (34:29)

Thank you so much.

Daniel Levine (34:32)

Well, Sri, that was a great conversation. What did you think?

Nagaraja Srivatsan (34:35)

I think it was really good. As innovative startups come in, I wanted to explore two things. How are they bringing the right culture to bring tech and domain people together? That also gives us a framework for sponsors and other companies: when you're adopting tech, how are you bringing domain expertise and technology innovators together to make sure you're solving the problem together in the same way?

Daniel Levine (35:03)

You talked about document quality. Speed is one thing, but quality is a lot tougher to measure. The elements may be easy to define, how difficult is that to translate quality into an algorithm?

Nagaraja Srivatsan (35:17)

It was very fascinating: in her own journey, when she went about asking people to define quality, she got 15 versions of what quality looks like. But what was really good was to standardize that into the four pivots she talked about. What is the data accuracy? Good documentation has to have sound data. What is the consistency of the document and the document flow? Completeness: how complete is that document? And how readable is it? I think as you start to break abstract concepts like quality into very measurable KPIs and outcomes, you can get to a much more standard view of what good quality looks like in documentation.

Daniel Levine (35:59)

You asked about medical writer resistance. And one of the things Anita talked about was demystifying AI for people and showing them what AI can do. What did you think of the approach she takes?

Nagaraja Srivatsan (36:11)

I think she said three things which I really liked. One was that if you come top down, you're coming with a good sponsor, which means they're driving the adoption down. But what she said is that even if you brought it down, AI education and people understanding what it does and doesn't do is very important. And so I really like the concept of creating discovery or other workshops to get the sponsor or the customer comfortable with the art and the potential of AI. I think that's a very good change management route, because you're not coming in and throwing AI in somebody's face, but really helping them play with it and understand its benefits and its limitations.

Daniel Levine (36:55)

The other thing she talked about was having medical writers lead the training for medical writers. These are people who understand the particular audience and speak the same language. Are there lessons there in terms of change management others can take?

Nagaraja Srivatsan (37:11)

So I think people listen to peers, pun intended; people listen to people who have done this before, from that experience standpoint. Framing context, and framing how that context is similar, is so important. So I think it's very critical. When we look at the evolution of AI adoption, there is a very significant body of work on AI evaluation, and it finds that the best AI evaluators are what we call the super users, the domain experts who really understand what good looks like and what doesn't. So having a super user from a peer company come and tell you what good looks like, and making sure they connect, is a great way to drive change.

Daniel Levine (37:59)

She also talked about agentic AI being the future, but the need to do that hand in hand with human involvement. How difficult is it to strike that balance? And do you think as comfort grows with AI, there's a risk that people will start to get lax about the human role?

Nagaraja Srivatsan (38:19)

Yeah, there's this thing called AI slop. I don't know if you've heard of it: people get so used to AI that they stop being thorough about qualifying what it produces. They just say AI must be right. So there's always that downside. And when things get very easy, we fall into a muscle memory where we do not question them. I think we should guard against that. Human in the middle is important, but the human should not be biased and take it easy, because their job is to evaluate correctly and to throw the red flag if the AI is not doing what it's supposed to do.

Daniel Levine (38:54)

Well, it was a great conversation. Sri, thanks as always.

Nagaraja Srivatsan (38:59)

Thank you, Danny. Really appreciate it.

Daniel Levine (39:04)

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.

Thanks for joining us.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical, and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast, exploring real-world AI applications in pharma, and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the "Top 100 Most Inspiring People" in life sciences.

Our Speaker

Anita Modi holds an MBA from Harvard Business School and an AB in Molecular Biology from Princeton University. A life sciences technology leader, she has worked across product strategy, transformation, quality, and compliance to modernize how biopharma operates. As Co-Founder and CEO of Peer AI, she focuses on bringing AI-powered solutions to regulated medical writing and enabling faster, more efficient drug development across the industry.