Aired:
February 12, 2026
Category:
Podcast

Scaling Practical AI Across Commercial Teams

In This Episode

In this episode of the Life Sciences DNA Podcast, host Nagaraja Srivatsan speaks with Drew McCormick, Head of Data and Analytics at Eversana, about the evolving role of AI and advanced analytics in life sciences. The conversation explores how organizations can transition from fragmented data initiatives to integrated, value-driven AI strategies that support R&D and commercial decision-making. Drew shares practical insights on building scalable data foundations, aligning AI initiatives with business outcomes, and navigating the cultural and operational shifts required for sustainable transformation. The discussion also highlights the importance of governance, cross-functional collaboration, and leadership commitment in turning AI ambition into measurable enterprise impact.

Episode highlights
  • From Data to Decisions –
    Why AI success in life sciences depends on building strong, scalable data foundations rather than isolated use cases.
  • Enterprise AI Alignment –
    Connecting AI initiatives directly to R&D and commercial business outcomes to ensure measurable value.
  • Governance as an Enabler –
    The role of governance, operating models, and cross-functional ownership in accelerating responsible AI adoption.
  • Scaling Beyond Pilots –
    What it takes to move from experimentation to enterprise-wide AI impact, including talent and cultural shifts.
  • Leadership in the Age of AI –
    Why executive sponsorship and strategic clarity are essential to embedding AI into the fabric of the organization.

Transcript

Daniel Levine

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Drew McCormick on the show today. Who is Drew? 

Nagaraja Srivatsan

Drew leads Eversana's data and analytics organization, overseeing the company's data strategy and data-driven insights across its commercialization solutions. He brings more than a decade of experience in investing and operating in the healthcare information technology and digital life sciences industries. Drew offers a unique perspective on innovative technologies and disruptive delivery models aimed at driving actionable, measurable impact. His work includes leveraging real-world data across multiple verticals, such as commercial analytics, advanced data sciences, field force effectiveness, and marketing analytics. 

Daniel Levine

And what is Eversana? 

Nagaraja Srivatsan

Eversana is an independent provider of commercialization and related services to the life sciences industry, helping pharma and med tech companies bring therapies to market and improve patient access, adherence, and outcomes through integrated commercial, market access, data, and patient services. Its offerings span consulting, pricing, and market access, in addition to field sales, patient services, specialty pharmacy, and real-world data and analytics. 

Daniel Levine

And what are you hoping to talk to Drew about today? 

Nagaraja Srivatsan

I'd like to really hear how his organization has gone down the AI journey. He comes from a data and analytics background. And so it'll be really interesting to see how he's applying conventional or classic AI as well as gen AI to several of the workflow areas in his organization. 

Daniel Levine

Before we begin, I want to remind our audience that they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. And don't forget to listen to us on the go by downloading our audio only version of the show from your preferred podcast platform. With that, let's welcome Drew to the show.

Nagaraja Srivatsan (02:22)

Hi, Drew. Welcome to the Life Sciences DNA podcast. It's wonderful to have you here. Drew, why don't you tell us a little bit about your journey into your current role and what you're doing at Eversana?

Drew McCormick (02:36)

Yeah, thanks for having me. My background is a little different. I previously came from the finance side, always working within healthcare. I started in investment banking on the sell side, made my way over to the buy side, working across healthcare portfolios as well as generalist investing. After a few years there, I realized that I wanted to see how the work actually gets done and made my way over to Eversana, one of the portfolio companies. I've held a number of roles in the organization, from integration to general operations all the way to being a P&L leader across patient services, data and analytics, as well as our agency business. So I've seen a little bit of everything here in what is a very diverse business at Eversana.

Nagaraja Srivatsan (03:16)

It seems like it's a smorgasbord, a very interesting set of things to do, patient services and others. But tell us a little bit about your AI journey. How are you applying AI to each of those different places? And why don't you tell us how you got started down that journey?

Drew McCormick (03:35)

The beginning of AI for Eversana was via our data analytics business. I mean, before the boom of generative AI, you know, traditional AI, if I call it that, has been around for decades, right? Especially within predictive modeling and machine learning tools, and in the healthcare space, that sort of modeling is very valuable for what I sometimes call the needle-in-the-haystack problem: finding unidentified patients, or predicting what will happen next based on historical insights and the like. After that, we were able to build on what was a very strong data foundation. I think that's really critical. You can't just throw AI tools at something without having that strong data foundation. That let us start embracing AI across the rest of our diverse business, from our agency using it to power marketing campaigns and our strategy work, to our patient services business to help out the call centers as well as - think about the large, onerous process that is benefits verification in the American healthcare system - all the way to our field force business, using AI and predictive analytics to help a rep when they are sitting in the doctor's office, or when a nurse is alongside a patient, to say, this patient exhibits XYZ criteria, this is probably where the pain points will be. Moving beyond what are hunches today and people's institutional knowledge, to have a little bit more data to guide those decisions. Within a traditional services business, that's the name of the game here. How can you make these folks, make our team, two times as effective, more knowledgeable about the space, especially for a company like Eversana where we don't have a therapeutic. We work across all therapeutic areas and all devices and all drugs, you name it.

Nagaraja Srivatsan (05:16)

That's fascinating. So what you started off with is classic AI: a strong data foundation, ML models, predictive action, next best action for each of the different stakeholders, and you've gone down that pathway. Tell us, in that journey, how was it to bring that forward from a development standpoint, but more importantly, how was the adoption from an implementation standpoint? When each of the teams suddenly got a next best action from an AI, were they receptive to it, or did they challenge it? Why don't you walk us through that journey?

Drew McCormick (05:49)

If you've seen anything in the news about why AI programs seem to fail despite all the investment going into gen AI, it largely comes down to change management. You can have all these tools and models and new tiles within the instances you're working in day in and day out, but if there's no clear adoption, none of this matters. I break that change management process down in a couple of ways. Step one is making sure you're creating, like any product, something that folks want to use. If you're working with field force reps, you need to be developing these models alongside them. If I tell you that the next best action is to do X and, in practice, that isn't actionable, nor is it something that a doctor wants to hear, that's effectively a useless suggestion. So a lot of folks have general QC folks or testing folks for their AI tools from a tech perspective. We've had a lot of what I'd call product quality control folks, people who are taking the outputs of what might be a gen AI suggestion or a traditional AI suggestion and making sure it makes sense in that context. This has two real benefits. The first is that the feedback loop is informed by the people on the front lines. So they really have skin in the game. They feel bought in. Like anything, you have to win hearts and minds with these tools, because this is more than just adding a new feature into a website. It's really a different way of working. So we found that that kind of product testing loop is the best way to really drive adoption. Because the first time you give it to someone, you'll have someone on the far end of the bell curve who is an early adopter using it, and they're off and running. The other 80% of folks is where both the value and the opportunity are. And those sorts of adoption cycles are what we really focus on here. And making sure we have the right people in place who are there every day because, like anything, you've got to say something three times before it really gets through. You've got to go through a full cycle before it makes sense. So that's where we really see a lot of the value for legacy services businesses adopting these net new tools.

Nagaraja Srivatsan (07:52)

That's pretty good, because that's very much in line with best practices which say you need to have real people actually validate AI versus it being done purely from a tech perspective. In your narrative, you said there were two things. One was this whole change adoption and bringing these types of functional people to the fore. Was that another turn of the turnstile, or were there other things you were doing from a change management perspective?

Drew McCormick (08:16)

I would group it as the change management in that feedback loop I mentioned. And on the other hand, it isn't just saying, hey, we came up with something in a black box and now let's help you adopt it. Actually, the way these ideas get submitted is, I feel, just as important, because I, as a leader in the organization, could say, this is where I think the value is, and I could be completely wrong. We've set up what is a bit of a skunkworks hackathon, call it what you want, where folks within the organization - and we're talking about over 150 submissions in roughly a three-week period when we opened this up - came to us with their ideas: I could use this, have we thought about this, or I know the tools generally can do X, what if we apply it to Y? The value of that is that it doesn't feel like an edict. It doesn't feel like a top-down remit. Instead, it's: this is what you asked for, and we gave it to you. That creates a really good virtuous loop, because then folks know they can ask for something else, and there's someone on the other end both listening and deploying. I think one of the struggles organizations have felt is that they have just democratized access to the tools. That is great, again, for that 20% of early adopters, but it's really important that you have technical folks sitting alongside everyday users, hearing what their problems are and interpreting that with the tools. At other companies, people call these deployed engineers, forward deployed engineers, call it what you want. It's of that ilk, and we've seen a lot of success there.

Nagaraja Srivatsan (09:44)

Yeah. It's fascinating that you started by saying, hey, I get ideas from the rank and file and I prioritize them. And this almost leads to better change management and adoption, because my ideas are getting listened to. That's one school. But as you can imagine, there are whole schools of philosophy which say that democratization of ideas doesn't yield the final ROI. ROI comes from the top down. You pick three areas of focus and see where the most impact is. And of course, you bring your people and teams along, but then you're not doing 300 things. You do three things, but you do them very well. What are your thoughts on that? Is there a shift between this hackathon-ish POC culture and a more robust, authorized, let's-benefit-the-business-and-drive-AI-deeper culture? Where are you seeing that pendulum swing?

Drew McCormick (10:31)

In my opinion, they're not mutually exclusive. You can have these submissions from the rank and file, as you put it, and have those bubble up to that 150-plus I mentioned, while having pillars, metrics, criteria, you name it, that say, how do we go from 150 to three? As long as you have those north stars that everything ultimately fits into, you can have both. You start with a view of your business. Where is there the most waste? Where is the most opportunity? Where is an unmet need that you feel can drive that ROI, as you mentioned, from that top-down perspective? You then take all of these ideas as forms of solving those problems. I'll give you a good example. I mentioned benefits verification before. Let's say one of the things for our patient services business, our ROI, our north star, is to minimize the turnaround time and the amount of effort that goes into benefit verification for rare and orphan disease patients, patients who need access to therapy more quickly. So that turnaround time has ROI not just monetarily, but from a patient satisfaction perspective. Then when you get all of these submissions, the ones that fit that mold are the ones that say, hey, I had an idea: when I'm clicking between this tool and this tool, it's too onerous, I'm not getting the right information, I wish there was a way to summarize this EOB form and put that into the system. You're able to have the two marry, and you're actually getting information for your product team about what should be built to tackle something as amorphous as minimizing turnaround times. Does that make sense?

Nagaraja Srivatsan (12:08)

Absolutely. And you're picking up the classic approach of doing a set of microservices or impacts and then bringing them together. And that leads us to where the world is evolving from an agentic perspective, because that's kind of how you're building fit-for-purpose agents, which you then cobble together as you start to bring in different parts of the workflow. Where do you see this going - you started with classic AI, which of course rests on a sound data foundation, and you went into changing workflows and behaviors in patient services and benefit verification. Where is that AI journey going for you? Are you still in the classic AI place? Are you going more into gen AI? Are you seeing a combination of both?

Drew McCormick (12:54)

A combination. I would say that it is certainly trendy to try to throw gen AI at everything, but there are certain situations, as my data scientists would reveal, where the traditional approach not only works better, but is cheaper and easier to implement. What we're starting to find is that if you start with traditional AI and then bring in these new gen AI tools - think standard chatbot-style ecosystems that folks can work in at an enterprise level - the next frontier is really twofold. A, it is starting to establish a data ecosystem that is amenable to these tools. What I mean by that, because that sounds very academic, is that you have to really start setting things up within your data warehousing infrastructure, within the presentation layer of the data sets you're using, to make it so that these models can actually do their work. At least in my opinion, people think it's a panacea: throw all this data into the gen AI tool and you'll get out the right answers. Like anything, you have to keep refining and setting things up to make them interact. I'll give you a good example. We have many disparate data sets and disparate businesses: patient services as a business unit, an agency, a field force, a lot of different groups here. I find that the next horizon will be allowing these agentic workflows, partners, team members, call them whatever you want, the future of agentic innovation, to bridge those gaps. So moving beyond the silos of both data and businesses to really be someone that knows what happens within an organization. I was speaking with someone at one of the manufacturers who is trying to deploy agents to be their institutional knowledge. I mean, think about that within these large organizations. There's one person who's been there for 20 years who knows how these four things connect together. What if you could summarize that for an agent that you could ask questions of, that could help you understand the connection between your data analytics business and your patient services business to find opportunities that weren't even revealed to you previously?

Nagaraja Srivatsan (15:01)

You're exploring a very important theme, which is knowledge and democratizing knowledge, right? Taking human variability and the super user who is very knowledgeable, as you said, that expert, and then codifying or institutionalizing or democratizing that knowledge. And you said one of your sponsors is doing it. Is that a theme which you're trying to bring to the fore? Because you would have very similar pockets of expertise buried in patient services, benefit verification, field force, and the agency. Is that a theme which you're prioritizing?

Drew McCormick (15:35)

Certainly, and especially outside of healthcare, I think this is something that is already relatively well-trodden on the CPG side and in other industries. For anyone on my compliance team listening, I want to give the critical caveat here, which is that data governance and clear guardrails are, I think, the biggest limiter, an important limiter. It's not the technology. We have the technology, in a vacuum, right now with these gen AI tools to do exactly what I just said. The really critical piece is that these are highly regulated pieces of business. Think about sales and marketing strategies for different products. Think about PHI for patient services engagements. It is critical, and it's really the biggest thing that most organizations are tackling and spending real time on, that you establish really robust guardrails, because the tools will do what you ask of them. And you have to make sure that you're not empowering these tools to do things that ought not be done, just like we've always done with humans in the loop, who have restrictions on what data sets they have access to and what information they can see. So I think that's the biggest lift right now that most organizations are tackling. And when I talk about that data cleanliness, it comes down to those guardrails and making sure you have the right protections in place.

Nagaraja Srivatsan (16:56)

Let's explore that a little bit, because it's such an important topic and worth a little more depth. As you can imagine, everybody has access to data, and if you don't put the right governance in place, then everybody has access to data which they should not have. So having the right data infrastructure is very critical, but enforcing that is so complex with LLMs, because there's no structured row-and-column database where you could say, okay, you can only go after this row and this column. Here it's blobs of data, information which LLMs connect, and stuff like that. So walk me through: how are you thinking about this problem? How are you putting guardrails around this, and what are some best practices you can share with us?

Drew McCormick (17:40)

A few things. Well, our clients are demanding this. It's not even up for discussion, right? Every large pharma organization, and everyone who is thinking the same thing we are, is putting in really ironclad restrictions over how these tools can be used, which is forcing services providers like ourselves to not only meet those requirements but go a number of steps beyond, so that we know, wherever this goes, that we're comfortable with the situation we're setting up. Eversana has really taken a multifaceted approach. We've established an AI risk council, a mix of our legal leadership, our tech leadership, our compliance teams, and the like, to make sure that when you're starting with, as I mentioned, those pillars, those areas we want to go after, we're setting really clear guardrails, not only on the nitty gritty of how these datasets connect and are laid out in a database format, as you mentioned, but so that theoretically and contractually we're starting from a place of comfort. There are certain areas you obviously could make more effective with agentic AI that really are non-starters for various compliance reasons. So we make sure we don't go down those paths, for ourselves, for our patients, and for our customers as well. Beyond that, it then becomes what we've really said internally: pharmatizing AI. The value we're able to bring next is all of our subject matter experts. So if you start with compliance and legal and IT, you then move to the institutional knowledge of folks who have worked at health systems and pharma companies and on the services side. They're the next layer, the next hurdle we put in place for any of these tools: evaluating what comes out of that AI council and saying how, in their experience, they've seen it work from a pure services perspective. And then that feeds into a development life cycle, right? So as we talked about, getting all of these ideas, making sure they meet our pillars and criteria of where we see the ROI, going through that robust governance cycle, and then getting feedback from tens of folks who are senior leaders in our organization on how they've seen it work. It's created a really good life cycle where, A, you're winning hearts and minds, because there are lots of folks in the channel seeing how things are happening, seeing where we're focusing our efforts and, candidly, putting our capital, while also ensuring that once we ultimately bring this to a customer, not only can we say that we stand behind it, but we have answers to their compliance questions before they even ask them.

Nagaraja Srivatsan (20:12)

Yeah. You hit upon a very pertinent and relevant topic. We call it AI evaluation, right? Basically, you're having somebody, and this tiered guardrail system, make sure that you're evaluating the output of the AI. You have your legal and compliance framework, which is the first evaluation, but then you've also put AI evaluation among your subject matter and domain experts to verify what its outputs are. By the time you get to the final usage, it has been double-vetted, from a compliance and a domain perspective, as it scales. And on the topic of AI evaluation, are you thinking of each of your projects with an evaluation framework, or are you putting it much more into this kind of council and compliance model?

Drew McCormick (20:59)

It is a fascinating question, and it's almost philosophical. What I mean by that is you're not really, in this day and age, approving use cases. Any good internal legal team would say, okay, what's the use case? What are the data sets? What are you going to put in? With these new tools, you're not approving X to go to Y like you would with a data pipeline or something to that effect. You're almost approving a way of working. You're almost allowing yourself to say, I'm okay to operate within this certain arena. I'll give you an example. If you're trying to get approval for a new, again, I'll keep sticking to the chatbots that are so popular on the gen AI side, if you're trying to get approval to utilize that within a professional services or consulting business, you're not getting approval for, I'm going to use this for a market landscape assessment. You're getting approval that I'm going to use this for this type of client across a litany of potential questions and use cases that today I couldn't even tell you to get approval for, because who knows? The client might ask, what's happening with this switching dynamic? And the next thing you know, you're running an agentic tool on your datasets to evaluate that question. What it means for us is that by starting at that almost theoretical level of the council stage, we have shifted from use case approval, where there are infinite use cases for these tools within a single sector, to something much more broad-based: I'm using this data set, that's how I think about it; I have this type of person working on it, from a role and expertise perspective, to create this type of deliverable, a PowerPoint, an Excel, a call to a customer, you name it. And once you have those guardrails, everything can flow below them. So it actually required us internally to change the way we think about approvals, mainly because the tools don't fit within such a narrow purview anymore.

Nagaraja Srivatsan (22:53)

It's absolutely wonderful that you put together that process: you get the role approved, you then get the deliverables approved, and then you're getting the workflow and what it can do approved, which is very fair. Let me just dive a little deeper, because a lot of people are not looking at the first instance of how AI gets deployed, but at what happens with the model and model drift. What if the AI learns from the feedback - you use it in your professional services, hey, I wanted to do this PowerPoint on switching - and then it starts to drift, and by the time you know it, you're not getting the right outputs? So tell me, how do you extend your guardrails to what happens post-facto? The launch is great because you've checked the boxes, done the testing, got your guardrails in play, but what are your ongoing, real-time guardrails as it starts to function?

Drew McCormick (23:46)

Yeah, you've heard the term human in the loop, and I think it's a necessity, right? If I go back to my banking days, when you're an analyst or an associate, you come up with your analysis and you share it, and then a VP looks at it, and then a principal, and then the MD, and ultimately the client. There are multiple levels of review. If you take that example and you say, I'm going to utilize the generative AI tool to make myself, the associate, the analyst, you name it, more effective, there still is that next person checking it, saying, that doesn't seem right to me, something seems off. And then the next person. So by having those standard approval processes, keeping the organizational structure in place, and making the individual more effective, we aren't really changing what was previously the same sort of human in the loop: the subject matter expert who has seen these markets for decades and gets to be that protector. Candidly, my personal belief is, and I've used this before, that it is not a panacea. There is not a world whereby these agentic workflows - you have 10 agents, you've seen these agent gardens that all work with each other - just spit out an answer. Even on the drug discovery side of the business, there still is a scientist evaluating and seeing what happens. But that testing cycle is massively expedited. So we feel that it's very similar, especially in a professional services business, where now you can do even more tests. You can concept more rapidly. You can have more things to review and pick amongst, with much less labor going into that, while still allowing that iterative cycle of people taking a review, seeing if it makes sense, and having that discussion with a client. So, not to be too buzzwordy, but that human-in-the-loop review cycle, and ensuring that these tools are a means to the end of the work we're doing today, is the way I sleep at night.

Nagaraja Srivatsan (25:37)

All of us go down the human-in-the-loop path, and that's absolutely fair, right? You check the checker, make sure this happens. One of the things, and I'm just exploring a bit of an edge case, is that as humans start to get comfortable with the output, there's an inherent thing we call in the market AI laziness, where you start to trust the model more and more, and the "check the checker" becomes a biased check because you start trusting it, and then the guardrails start to come down. And I'm not saying anybody has solved for it, but that's a constant thing I think about: as we start to evolve the human-in-the-loop mechanism, you need to make sure that you're guarding against what I call AI laziness, which is that you start to trust and get yourself biased. I just wanted to explore that a little bit. What are your thoughts on that? Does that make sense, or have you guys found a basis on which you have solved for it?

Drew McCormick (26:30)

I'd be lying to you if I said I have a perfect answer and it's been solved for. I mean, you're seeing it currently in the education world, in higher education, where folks create their essay using ChatGPT, send it across, and, no, this reference might not even be real. The way that we have solved for it within our organization, especially if you think about something like our complete commercialization offering, is a few things. If you take my previous example of the human in the loop, that's one guardrail. The next is the data that we actually allow to power our models. One of the differences is that we partner with folks like Google, who obviously are putting nearly trillions into this space, right? I mean, it's a seemingly fake number of CapEx when you look at the market there. We then work alongside them to make sure that it's narrow data sets powering a lot of our models, so that we aren't just grabbing from diverse edge cases. It's not going to pull from something random, et cetera. You're building a robust data model that you're getting your insights from. So that narrows the aperture. Ultimately, it then comes back to what I would just call good business. In our complete commercialization offering, we have a team of consultants who help with the market landscape and the assessment of a drug and do the forecasting using epidemiological approaches, which can obviously benefit from gen AI use cases. Alongside that is my data analytics team that's utilizing claims data, open claims data, closed claims data, you name it, to identify where the prescribers are, where the high patient populations are, what type of specialists are actually driving the right referrals. And if both of those are powered by gen AI and you get to the end and they triangulate completely, or they don't triangulate and they have completely disparate answers, you have a good check amongst yourselves to make sure that you aren't allowing that laziness. Now, we could get in trouble at the end there if you then have an AI agent evaluating whether those two are consistent, but we make sure that isn't the case. So by having multiple teams who come to the table with that agentic support to do that assessment, we ultimately bring it to the client. That's our way of making sure we're protecting against that laziness. And it never hurts to have really experienced chief commercialization officers and others sitting on the other end knowing they're going to take a look at this - you'd better have crossed your T's and dotted your I's.

Nagaraja Srivatsan (28:53)

That is fantastic. I know we're evolving this journey, and we could continue down this path. But let's take a bit of a step back and say, as you start to look at the next two years, for what kinds of use cases are you thinking of deploying AI, whether it's classic AI or generative AI?

Drew McCormick (29:13)

Across our business, the areas that get me most excited, if you think about where the growth is within life sciences, are rare and orphan disease use cases, oncology use cases, and the like. So if you take those two, to narrow the discussion a little bit, the data analytics, AI, and large language model use cases that support predictive analytics are still nascent in a way. Think about how we can use these models to determine connections we didn't know existed before between different diagnoses, to get three months sooner to identifying a patient before they even know to take the test that will reveal a biomarker showing they're at risk for some rare disease. That is going to be one of the biggest areas where we see a lot of the investment going. We do work with the NIH in this space in particular, because they see that that's one of the areas of opportunity. I fundamentally believe that those models will go from today, where we build the model, share it with a client, and try to democratize it within the healthcare system, to being really at the point of care, especially as we get better within our EHRs and different sorts of workflows for the doctor. Ambient tools are all the rage right now just because of the administrative burden they solve for, and I think helping with differential diagnoses and things of that nature is the next thing that gets me very excited: connecting the services we do directly to the doctor with these agentic tools. On the patient services side, so much of that is done by call center systems, so to speak. I'm a patient. I have a child with a rare orphan disease. And I rely on my person at that patient services hub to be my answer on a late Friday night when I need my therapeutic and I need someone to help me. I see a lot of opportunity, again on the predictive side, to utilize these models to know beforehand that there's going to be a risk, to proactively outreach, to help collect the data more properly: every two months this needs a refill, and this kind of drives, you should be looking at this. That is next best action as we perceive it today. But so much of what we do is linear in a sense, if X then Y, almost rules-based tools set up as business requirements engines. We're moving to something much more like: take a bunch of disparate data and start giving me insights as soon as I'm speaking with this patient. Move more quickly for them to overcome the hurdles within the healthcare system. And then finally, it is on our distribution side. I think everyone has talked about how, when we're shipping medications and trying to work alongside different areas, general supply chain processes are vastly inefficient and can really benefit from a lot of these tools. Happy to dive into any...

Nagaraja Srivatsan (31:56)

These are very good and broad use cases. But let's say you have all these six, seven, eight, ten use cases. Drew, tell me, how are you building an AI organization? Many people are struggling. As you said, from a business standpoint, there's no dearth of ideas. There's a plethora. You do a hackathon, you get 150. You have business prioritization, no problem. You have these eight areas you're doing it in. And then they turn around and say, Drew, build it for me. What kind of team - where do you get this talent? How do you put this all together to get the right next best action going? Walk me through that.

Drew McCormick (32:34)

I'll give you an addition to your question that you didn't even ask: the talent you have today might not be the right talent tomorrow, because the tools are moving so quickly. So what we have focused on is having a bit of a skunkworks, having a team that is untethered from specific client work, untethered from a specific business unit, untethered from the day-to-day, what business books call that whirlwind that takes over your day so you can't get to the bigger problems, and keeping them in a vacuum, away from the work that distracts, so you have that untethered group doing this work. So what we've set up is a real development engine that stands separate from a lot of the business units, with champions in each of the business units. I call them power users. And if you have that connection, you can have a team, much like a product development team, that is able to ingest requirements, ingest business use cases and needs, and spit out a tool. Think about that; but instead, you're working alongside the power user who has that subject matter expertise of, say, patient services, working alongside a team of, and I think this gets to your question, solution architects, front-end developers, backend developers, prompt engineers, the new term of art and a kind of talent that's increasingly important, and folks who understand how to create these agentic workflows. You pair that with someone who's giving you real-life use cases they need to solve for, and we've found that works very well. I go back to a comment from before about democratizing the tools: asking someone who has worked as a health systems access expert for 25 years to also be a generative AI agentic workflow expert is not fair. So what you do is give them this team of engineers who can take what they're saying, interpret it, and then really make that into the tool they need. So the short answer is, we have found that by keeping them untethered from the day-to-day, the client deliverables, the fire drills that occupy our lives, and having them work on much more of a mid- to long-term focus around these kinds of ecosystems and pure platform plays, we've seen a lot more success. You have a steady release schedule. You have a clear area where I can add resources to turn up the dial and start developing more. And you can really guide it to go after the biggest problems. Whereas if you pepper them throughout an organization, you lose that centralized benefit, you lose that visibility. And candidly, at least in my experience, you get a lot of distractions, which is the nature of any client services business.

Nagaraja Srivatsan (35:13)

Yeah. No, it's absolutely true if you sequester them correctly and give them the right inputs. What is fascinating, as you have said, is that in AI evaluation it is the domain users who matter. Are these domain users on loan to this group, or do you go and tap them as needed? Because, going back to your point about lack of distraction, many best practices say you co-mingle these two teams, tech and domain, and one is evaluating the output of the other, and that's kind of how you make magic happen. I don't know if that's how you're organized, or slightly differently.

Drew McCormick (35:50)

There are three versions, I would say. One is the "do it as well as your day job." The other is, let's carve out time from your day job. And the final, the opposite end of the spectrum, is you are fully dedicated to this net new skunkworks team. On the first example, sometimes it is hard to find hours in the day, and it's hard to find people who have the subject matter expertise when we don't have the luxury of pulling them out of the work. So it's about finding time within the day, and that obviously goes the slowest. Other things come up - that power user is critical, but can sometimes be the bottleneck. The middle version is something that Google and others have instituted: roughly 20% of your work week is for innovation, for you to think about what you want to work on, obviously within the strategic vision of the organization - figure it out, go and do a cool science project. My data scientists have that luxury on Fridays. We have a number of what I just call side projects that we're working on constantly, and this would be one of those. So you do have partial dedication focused on that, untethered from the business in a lot of ways. And then the final, something we've done for those really big, hairy, audacious goals, is to fully bring that person into the development team for a period of time. Think of a three- to six-month loan of sorts, whereby they are in the room day to day on the standups, in the development calls, sitting alongside the engineers, testing the tools, and that is their full-time job. That final scenario is the most ideal. It is also the most unrealistic with the demands of running and growing a business. But for the areas where we know we don't have the luxury of waiting, it's an investment worth making.

Nagaraja Srivatsan (37:39)

Drew, we could continue this fascinating conversation for a long time. But as we start to come to the end of the podcast, maybe you could share some key takeaways. You're so experienced in bringing this across a broad portfolio of use cases. So if there were three to five takeaways, or even one or two, for the audience as they go through this AI journey and are just starting out, what would you want them to follow from your experience?

Drew McCormick (38:11)

You have to have a champion. There is always something else that is going to come up. And as much as there is a lot of excitement, every single earnings call any of us listens to talks about agentic tools, talks about gen AI. I mean, it's the joke at conferences: how long can you go before you talk about gen AI? You have to have a champion who is senior, at the top of the organization, who is espousing the benefits and really in the deep end alongside those teams. This isn't something that can be done purely grassroots, because it requires the confluence of governance and IT and business users. So that would be the first thing: we found that having that champion, sometimes a champion who is, think startup, just single-minded on these opportunities, goes a very long way. The second is that hearts and minds and employee engagement to get people bought in are just as important as building the tool. Think about how many organizations have a tech team that said, why am I spending all this money on this SaaS tool? I can build this myself. Why would we do that? We can do it ourselves. That's a good input to deciding where you want to work, but it can't be the only criterion. You have to make sure that you're speaking with the business users, that you're solving a tangible problem, and that you're really alongside them in their day to day to ensure that you're actually solving something that needs to be solved. It goes back to your wonderful point on ROI, top-down. And then lastly, I think it's about investing in education. You cannot read enough about this space. You will not be able to keep up. You can try to play with all the tools and do everything you want, but really make sure you're investing in everyone understanding how these tools are changing day in and day out. How did I use it last week to do my job in X role, in a way that other people can mimic? Having that sort of communication, especially in the largely remote world we live in now, is increasingly important. You can't sit over someone's shoulder and see, that was a cool way you used that tool. So setting up the governance structure whereby people can speak to other folks on the front lines about what they're using is critical. So once you get the ROI right, once you get your governance councils right, once you pick the tools you want to work with and which of these large language models you want to get behind, that champion, those hearts and minds, and that education, I think, are the final mile that will result in these things being either a success or a failure.

Nagaraja Srivatsan (40:38)

A very fascinating conversation. I really appreciate all your thoughts. As I said, we could continue to talk about this for another couple of hours; this is fantastic. Thank you so much for sharing your experience with us.

Drew McCormick (40:53)

Thank you for having me, I really appreciate the time today.

Daniel Levine

Well, Sri, that was a great conversation. It was so interesting to hear from someone on the commercial side. What did you think? 

Nagaraja Srivatsan

I think Drew was very articulate about the experience of how you go about bringing in AI. What I really liked was the framework he talked about. When he starts to solicit ideas for AI, he is very democratized about it, getting ideas through hackathons. But then he has a very strong model for aligning that to the business and business constructs. What was exciting was how he evaluates the output of AI initiatives. They have a governance council where they're really looking at how AI can be governed, what he calls guardrails. And then he brings in domain and subject matter experts to make sure that whatever they're doing is vetted by a human in the loop. And finally, he has multiple different stakeholders who are verifying the output of that. 

Daniel Levine

What I really liked was the solid foundation and the approach he was taking to making AI implementable, but also real and delivering value. We've spent a lot of time on this show talking about the application of AI to discovery and development, less so on the commercial side. How do you think the challenges or opportunities compare on the commercial side versus R&D? 

Nagaraja Srivatsan

What is fascinating is the type of workflows he's trying to influence. A patient advocacy or benefit verification workflow is a little different from enabling the field force workflow, which is a little different from real-world data and analytics and the discovery of new biomarkers. But as you take each of these different workflows, I think there are very practical approaches to really looking at where the pain points are and how AI can deliver better solutions to that particular problem. What I really liked about the approach Drew was taking was that he took each of those functional areas of process and asked, what would be the right next best actions that would improve the productivity of these individuals? But he's not just thinking out in a vacuum and saying, okay, I'm a cool tech guy and I think this would be good. He brings along the stakeholders, whether it's the field force team, the benefit verification team, or the patient advocacy team, into the decision-making process to make that next best action work better. So it was really interesting. And we're seeing this across multiple different AI initiatives: you have to bring in the techie guys with the domain guys, working together, to make sure that they make AI work properly for each of these different business areas.

Daniel Levine

He also talked about this not just being a new tool, but a different way of working. This was in the context of adoption. How important is it to recognize this when thinking about change management? 

Nagaraja Srivatsan

He said that many of the problem statements which AI is addressing are not actually technology related, but change management related. And he talked about how the change management challenge comes when you start to throw something over the wall. He did not want to fight the not-invented-here syndrome. He actually brought in the whole adoption mindset, giving the person who's going to be using it a say in what he or she wants to get out of it. Then he slowly built up the constructs of bringing the team along, aligning them to a north star of what productivity, improvements, KPIs, or ROIs they need to drive, and then making sure that they're constantly deploying it, but also measuring the output of those efforts.

Daniel Levine

Yeah, interestingly enough, he also talked about compliance issues and not technology being the rate limiter, having to make sure that tools are doing things that they should do and not things that shouldn't be done, and making sure you have appropriate guardrails in place. How challenging is that? What did you think of the approach he's taking there?

Nagaraja Srivatsan

We are all in a regulated environment. When it comes to patient compliance, you have HIPAA and GDPR and other aspects of it. You have privacy. And if you just approach this as a technology problem, you will crash and burn. That's what he said. You have to think about the guardrails from a compliance and regulatory guidelines perspective. When you put that framework in place and then start to evaluate the tools, the data, and the output, then you put the right safety considerations in place so that the tools are not behaving outside the guidelines or compliance they need to follow. So it's really a question of the cart before the horse: what's the horse, what's the cart? And here he's being very explicit. Compliance and regulatory lead the way on what can be done. Then you do the tooling, then you bring in the domain experts, and then you start to drive the solution. I think that's the right sequence, and it's a great primer for any regulated industry. 

Daniel Levine

It was a very interesting conversation to listen to and I really enjoyed it. Sri, thanks as always. 

Nagaraja Srivatsan

Thank you, Danny. Appreciate it.

Daniel Levine

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.

Thanks for joining us. 

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast—exploring real-world AI applications in pharma—and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the "Top 100 Most Inspiring People" in life sciences.

Our Speaker

Drew leads a team of AI/ML data scientists, engineers, medical professionals, and product managers who leverage unique real-world datasets to develop patient analytics in support of commercial, medical, and market access customers across life sciences.