Building the AI Playbook for Clinical Trials
In This Episode
In this edition of Life Sciences DNA, host Nagaraja Srivatsan sits down with Stephen Pyke, Chief Data and Digital Officer at Parexel. With more than 25 years of leadership experience across Pfizer, GSK, and now one of the world's largest CROs, Stephen shares his perspective on how AI adoption in clinical trials is evolving. Stephen explains why organizations must view AI not as a pure technology play, but as a change management journey. From productivity tools and workflow automation to the coming wave of agentic AI, he lays out a pragmatic roadmap for integrating AI into clinical development while safeguarding quality, compliance, and patient safety.
- From Productivity to Workflows: How AI can be scaled from simple personal productivity tools to embedded clinical workflows that save time and reduce errors.
- Redefining ROI in AI: Why AI’s value shouldn’t just be measured in minutes saved, but in improved quality, faster decision-making, and reduced rework across clinical trials.
- Driving Change Management: Lessons from getting half of Parexel's 20,000-strong workforce to embrace AI, using peer coaching and "power users" to accelerate adoption.
- Agentic AI and Governance: How the next wave of semi-autonomous AI will reshape clinical operations and why context of use, guardrails, and responsible AI frameworks are critical.
- Preparing for the Future: Why every role in clinical development will soon touch AI daily, and what organizations must do to get their people AI-ready.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Stephen Pyke on the show today. For audience members who don't know Stephen, who is he?
Nagaraja Srivatsan (00:30)
Danny, Stephen is the Chief Data and Digital Officer at Parexel. He leads and directs the company's patient data and AI strategies. He's responsible for the design and operational execution of all facets of Parexel's clinical data approach. Before joining Parexel, he held several leadership positions at companies like Pfizer and GSK. Among other things, he serves on the executive committee of the Clinical Trials Transformation Initiative and chairs the AI/ML committee of the Association of Clinical Research Organizations (ACRO). He holds a master's in statistics from Imperial College London and a bachelor's in mathematics from the University of York. It's a pleasure to have Stephen on the show.
Daniel Levine (01:11)
And what is Parexel?
Nagaraja Srivatsan (01:13)
Parexel is one of the world's largest clinical research organizations, or CROs, and works across the clinical development process with several biopharmaceutical companies. Their goal is to expedite drug development and manage trials globally.
Daniel Levine (01:27)
What are you hoping to hear from Stephen today?
Nagaraja Srivatsan (01:30)
Danny, Stephen is an expert in how AI is adopted across the clinical trial continuum. He is really well-read and has implemented several AI use cases in getting organizations like Parexel on the AI journey, and he's also a very articulate speaker on how to adopt AI responsibly in the marketplace. So I'm really looking forward to a good conversation around understanding the AI roadmap and template he used, but also a framework that every one of us can use to implement AI within the clinical trial continuum.
Daniel Levine (02:08)
Before we begin, I want to remind our audience that they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comment section. And don't forget to listen to us on the go by downloading an audio only version of the show from your preferred podcast platform. With that, let's welcome Stephen to the show.
Nagaraja Srivatsan (02:35)
Hi, Stephen. Welcome to the show. Really excited to have you here. It would be wonderful if you set the context and start to give us your opinion on where the state of AI is right now, specific to our clinical trial marketplace.
Stephen Pyke (02:51)
Yeah, well, Srivatsan, I'm delighted to be with you here today. And obviously this is a topic that's near and dear to both of our hearts and one that's critical for our sector at this stage. In terms of where we are today, I was reading a report just in the last couple of days discussing exactly that question. And I think there's a sense in which, if one just follows this topic at a distance, you'd think we'd already changed the world, that everything had already been transformed by AI, and that it was really just a matter of weeks, maybe even days, before people were seeing their lives transformed. Well, of course, you and I both know that's a long way from being true. In fact, in our sector, life sciences, highly regulated, with patient safety uppermost in our minds, we've been taking a much steadier, some would say cautious, approach. And so we've not advanced as quickly as casual onlookers will have seen in some other sectors. That said, I think we are starting to see genuine impacts in clinical development, really important AI-based solutions beginning to emerge. We've been on the hype cycle. There's been a lot of talk within the sector. There have been a lot of proposed solutions. But I think what's beginning to shift now, and you see this most obviously with the commercial vendors, the clinical tech vendors, is that they're starting to deliver solutions and we're starting to adopt them. I'm sure we'll talk over the next 20 or 30 minutes about some of the use cases that we've implemented, some we've built ourselves, some that others are building. But my answer to your question would be: we're definitely on a moving train. You can see clear impacts in our sector, but I would say it's still relatively early in the journey, at least as far as life sciences clinical development goes.
Nagaraja Srivatsan (04:53)
Yeah, and that's spot on with the observations of you and other experts in the marketplace. But as we go down the journey, I like the analogy you used: we're on a moving train, the train has left the station. Where is the first stop on this journey? What are people starting to do? What are some low-hanging fruits for quick wins? And how are people thinking about the process of selecting the right ones?
Stephen Pyke (05:20)
I suppose if you're going to describe the journey, the first place you have to begin is the personal productivity tools. ChatGPT really kick-started the current wave of AI implementations. Of course, I'm bound to say AI has been around for 75, 80 years; it's not a new technology. But as far as most of us are concerned, generative AI really changed the game because it democratized access. It made building AI tools that much quicker and simpler. And first among them were typically the sort of productivity assistants: ChatGPT itself, but a lot of enterprises, my own included, have developed safe and secure versions of it. And then they start to develop in other directions, beyond just "tell me the answer to this question," slightly more specialized, but nonetheless geared towards personal productivity. Within my organization, we've got something like two dozen specialized personal productivity tools now. So quite a range already. I think that was step one on the journey. The next step, I guess, has to be where we see AI embedded into workflows. So you've got the idea, for example, that you have highly repeatable tasks. And I'll give an example: designing a protocol, finessing a protocol. We know that plenty of people have done studies like this in the past, same disease area, similar population, different drugs maybe. And when we're designing our protocol, or working with a partner organization to help them design theirs, what we're really trying to do is say, well, this is what we think we want to do. How does that compare? How does it contrast with what's gone before? Are we confident that we've set this up optimally for success? When studies were designed similarly or differently to this, what were the outcomes, and how does that help us think about the design? And a lot of that can be enabled by AI: rapid data synthesis and digestion, surfacing insights, making recommendations, making observations, helping us really home in on the key questions that need to be answered. Solutions like that are definitely now beginning to be well embedded in our processes, and there are many others I could talk about. So personal productivity is where it begins, automations come next, and we'll maybe talk about what comes beyond that later on.
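To make the protocol-comparison idea concrete, here is a minimal sketch, assuming an OpenAI-style Python client, of how a draft protocol could be matched against similar past studies and then compared by an LLM. The model names, corpus fields (id, synopsis, outcome), and prompt are hypothetical illustrations, not Parexel's actual implementation.

```python
# Illustrative sketch: compare a draft protocol against similar past studies.
# Model names, data fields, and the protocol corpus are hypothetical.
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    """Embed a protocol synopsis as a vector."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def most_similar(draft: str, corpus: list[dict], k: int = 3) -> list[dict]:
    """Rank past protocols by cosine similarity to the draft synopsis."""
    d = embed(draft)
    scored = []
    for p in corpus:
        v = embed(p["synopsis"])
        score = float(d @ v / (np.linalg.norm(d) * np.linalg.norm(v)))
        scored.append((score, p))
    return [p for _, p in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]

def compare(draft: str, neighbors: list[dict]) -> str:
    """Ask the model how the draft compares with the nearest past studies."""
    context = "\n\n".join(f"{p['id']}: {p['synopsis']} Outcome: {p['outcome']}"
                          for p in neighbors)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Draft protocol:\n{draft}\n\nSimilar past "
                              f"studies:\n{context}\n\nHow does the draft "
                              f"compare, and what design risks should we "
                              f"examine?"}],
    )
    return resp.choices[0].message.content
```

In practice the corpus, similarity search, and prompt would be tuned to an organization's own protocol library; the sketch only shows the shape of the retrieval-plus-synthesis step Stephen describes.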
Nagaraja Srivatsan (07:52)
Yeah, absolutely. I want to explore personal productivity, because among the people listening to this, some may be on the journey of having those two dozen productivity tools, and some are just boarding the train and using ChatGPT to summarize or write a better email. Maybe you could give your top applications among those two dozen and what goes on there. And then I want to come back to workflows, because I have some questions on that. But maybe we could just start with: what are the most obvious things you should be doing?
Stephen Pyke (08:23)
If you write code, and we've got a lot of programmers in our business, SAS programmers but other types too, I think it's well understood that thoughtful choice of generative AI solutions can support better coding, more efficient building of code. And indeed we've got that. But there's one that perhaps wouldn't have been so obvious, and in our experience it's at least as valuable, and that's QCing your code. We're an enterprise organization. We need code written to a certain standard, with certain quality criteria that guide us. We built a tool that says: I'm going to read the code you've written, and I'm going to confirm or challenge you on whether it's meeting all the quality criteria. Similarly, we can talk about monitors and the work they do writing visit reports. Again, QCing that content: is it answering all the questions that you went on your visit to address? Are you writing in a way that is stylistically the way we'd expect it to be written? So this kind of QC-based assistance is turning out to be very valuable. Which sort of takes us into some of the stuff you've mentioned as well: I want to summarize a document, I want to take notes from a meeting. Of course, all of that is there as well, and very valuable. What I would say, though, since we're talking about personal productivity, is that one of the things we've discovered, and I'm certain we're not unique in this regard, is that capturing productivity gains for individuals on an ROI basis, what's the cost of the investment, what are we going to get back, is proving quite tricky. To give a rather glib illustration of why that might be: all right, so I'm a monitor and I've saved 15 minutes. What do I do? I go and grab a cup of coffee. And no disrespect to monitors, I'm sure they work their socks off; I use it just to make a point, which is: how do you have the discipline to recapture short amounts of time saved? Now, that's very different from: we've made AI part of the workflow, the process, and we expect you to follow it. There we can build in the labor saving; we can build in the speed-up as part of the process redesign. I think that's been a really interesting, rather obvious with hindsight, but really interesting observation as we look at some of these types of solutions in play.
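A code-QC assistant of the kind Stephen describes can be sketched very simply. The following is a minimal illustration, not Parexel's tool: the model name, quality criteria, and prompt are assumptions, using an OpenAI-style Python client.

```python
# Minimal sketch of an LLM-based code QC assistant (illustrative only).
# The quality criteria and model name are placeholders, not an enterprise standard.
from openai import OpenAI

QUALITY_CRITERIA = """\
1. Every dataset read is followed by a record-count check.
2. No hard-coded file paths; use macro variables or parameters.
3. Each derived variable has a comment explaining its derivation.
"""

def qc_review(source_code: str) -> str:
    """Ask the model to confirm or challenge the code against the criteria."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a code QC reviewer. For each criterion, "
                        "state PASS or FAIL with a one-line justification."},
            {"role": "user",
             "content": f"Quality criteria:\n{QUALITY_CRITERIA}\n"
                        f"Code to review:\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("analysis_program.sas") as f:  # hypothetical SAS program
        print(qc_review(f.read()))
```

The same confirm-or-challenge pattern extends to monitoring visit reports: swap the criteria for the questions a visit was meant to answer and the expected reporting style.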
Nagaraja Srivatsan (10:55)
There are so many aspects to explore. I'll go down that ROI path a little bit. Let's say we are writing a better email. The obvious ROI is that I now write in two minutes what would have taken 20 minutes for a well-thought-out email. But the value is not the saving from 20 minutes to two; it's the clarity of the communication, which means that if I send you that email, we're not going back and forth three or four times, which then saves you, the more expert reader, your time, because you have a very crisp articulation of what's needed. We've always tended to say: my productivity, I saved 15 minutes, I cost $100 an hour, so that's the productivity gain for that individual. But there are also second-order consequences in better quality. And I like your QC ROI, because what is the cost of quality? When you ask people, they'll say, hey, there's a cost of quality, but that's just time: either you're working 50-hour weeks or 70-hour weeks depending on the cost of quality, or you're putting systems in. But lowering the cost of quality means you're getting fewer CAPAs and fewer findings. So I think you're bringing up an interesting point, that we should be thinking of ROI intrinsically versus just personal productivity. What are your thoughts on that?
Stephen Pyke (12:17)
I'm really glad you picked up on this, because for me it opens up a really broad and important question. There are far more opportunities to invest in AI than a company like mine has the capacity to get its arms around. We're not a tech company; we don't have legions of software developers. So what do we do about that? We make choices. Well, on what basis do we make those choices? Part of the equation is the benefit of that investment once made. And that raises a really interesting question: how do we measure benefit? What benefits are we talking about? Well, speed-up. Our customers care about that for sure. We're a CRO; we deliver studies. The most valuable thing we can give our customers is time back. Arguably, by the way, the most valuable thing for patients is that we move more rapidly as well. So speed absolutely should be part of the reckoning when we're making choices about which AI solutions to go after. But there are others: quality, and first-time quality in particular. The ability to get a high level of first-time quality, reducing or removing rework, is clearly valuable in itself. It's driving the effectiveness and the efficiency of our staff, and from an internal resourcing capacity standpoint, that's got to be valuable. So: speed, and the ability to get to first-time quality quickly. And then, of course, the one that many people jump to quickly, which is cost: does it mean we can deliver more cheaply, at a lower cost? Impossible to avoid. And I think, therefore, when we're making choices, we've got to be able to translate between those benefit types, which are not intrinsically the same. Sometimes they converge, i.e. you get both or neither, but sometimes you don't. And so some subtlety is needed here.
Nagaraja Srivatsan (14:13)
It's also about the culture of the company, being able to take some risks, with not everything driven by hard ROI but some of the soft ROIs too. I call it cultural, and I want to touch on the culture, because, as you said, when ChatGPT came out, people used it Sunday night at their house; come Monday morning, they weren't able to use it at work. But you've gone down this path: you've picked two dozen productivity tools, you've picked processes and so on. There must have been a huge change management challenge, because just taking the horse to the pond is not going to get it to drink the water. So what did you do? How did you get a large workforce motivated to use it? There must have been skeptics saying, hey, this doesn't work for me. Walk me through that journey, because it's a very important part that everybody's going through.
Stephen Pyke (15:01)
You're absolutely right. I talk to customers on a regular basis, as you can imagine, and one of the things I've started to emphasize is this notion that we've got to get away from AI as a pure tech play. For sure, technology is at the heart of what we're talking about here. But as you rightly said, in the end, if we want to get value, it's about change management. And actually the lessons of change management are lessons we've learned over many years, through many iterations, with many past technologies. We've just got to remember what those lessons are. Now in our case, talking about AI today, mine is an organization of 20,000 people. There's a Parexel AI assistant, that's what we call it; that's our general productivity suite of tools. About half of our staff have become regular users, just instinctively, intuitively. We rolled it out, we've given them a few hints and tips, we've given them some nuggets of video, do this, do that, but basically it's been self-adoption. What that means is half the organization hasn't. They've used it once, maybe twice, and then they backed away. And as we've dug into this, one of the things we've realized is that it's not an unwillingness to use it. It's more a case of: I don't know how; someone needs to show me how; if someone shows me how, I'll gladly use it, because I want to be more productive. And so we've started to develop this program, I guess you could call it power users, or peer coaching, however you want to describe it. A project leader who has instinctively adopted the Parexel AI assistant will tell their colleagues, show them how they use it. And we're finding that's actually beginning to break down barriers in the slightly more uncertain, I don't want to say reluctant, just uncertain, community within our organization. And I think that sense of peer support and coaching, a friend you're happy to have a virtual cup of coffee with, who will just say, here's what I do when I'm manipulating these spreadsheets, here's how I use this tool, is really, really powerful. And of course, not threatening in any way.
Nagaraja Srivatsan (17:18)
No, absolutely. In fact, I did something very similar to what you described just today. I had a contract document and an amendment, and we have our own internal tool. I asked it: please be my in-house counsel, review these two documents and the amendment, and give me the terms I need to make sure I discuss with my customer or vendor. And it put that together. For the first time, I did this on purpose with my whole team watching, writing the prompt, doing the upload and everything. Now everybody's saying, next time we have a contractual document, we're going to use this as a template. So I get it, you have to lead by example. And I like those "change champions." In the old-school e-clinical setup, we had this concept of power users, super users we would call them, who would be both process savvy and tech savvy; you'd use them, showcase them, and nurture that. And you bring up a very good point: it's the old playbook for new technology, but it is the old playbook. You don't need to reinvent what you do for AI. It's going back to that old playbook of who's your champion, who's comfortable with tech, and then deploying them into the mix.
Stephen Pyke (18:35)
Once you recognize this for what it is, it's just another change program. I mean, there are some technical development difficulties and issues, and I think we're somewhat familiar with those. But once you realize what it is, a change program, you start thinking about training and support for staff. You start thinking about the processes: are these still the right processes or workflows when AI is part of the story, or do they need to evolve in some way? And we even get into roles: where are the boundaries and handoffs where the AI is starting to interject into established role patterns, and can they be redefined? I'll give you an example. CRAs and data managers have been working together closely for years cleaning data. And we've had various attempts to centralize statistical monitoring and data cleaning: what can you do at the site, and what can you do centrally? My assessment, and look, I'm not an expert, but my assessment would be that it's been partly but not wholly successful. Well, I think AI is the sort of technology to give us the next leap forward in redefining what a data manager does and what a CRA does: really helping the CRA focus on the high-value, site-specific work, and getting more of the data management and data cleaning done by data managers using many of the new AI tools that are becoming available. And by the way, we haven't talked about the commercial clin-tech tools, but many of them are now available to buy off the shelf, as it were, and very good ones. We brand it as integrated data delivery in my organization, but the opportunity is very clear: process, roles, and technology all evolving together. I think then you've got a change program worthy of the name.
Nagaraja Srivatsan (20:31)
No, absolutely. So on to that second use case: you've got the train moving, with half of your organization using it, and you've started down the workflow path, which is the most important part. We just talked about the process part, the handoffs, the human in the middle, all of this. How did you go about picking? These things need a somewhat harder ROI, getting teams together, and sometimes you have to reinvent the process, because you shouldn't just automate the age-old human process. Walk me through how you started that journey. Which one or two did you pick? Some may have been successful, some may not have been. Share what that journey was like.
Stephen Pyke (21:10)
Let me disclose right at the beginning of this piece: we haven't moved that far down that part of the journey. I think this recognition that what we're talking about is the need for integrated change has come quite recently. We were, I would acknowledge, slower than I would have liked to recognize that, but we are beginning. I think integrated data delivery is the one standout example I would point to at the moment. But having said that, having recognized, perhaps later than we should have, change for what it is, I can say that we've now taken on board an external partner who is very experienced in helping organizations through change. We're starting to pull together training and awareness programs for all our staff, raising the level of literacy across the organization, making people feel comfortable and confident to embrace AI. In other words, we're beginning to do, in a number of different spaces, all the things that you might hope we would be doing. But I'm not going to sit here and say that we've solved it and done it many times already, because we simply haven't. We're getting going with that stuff.
Nagaraja Srivatsan (22:25)
You did pick up data management and centralized monitoring as a critical part, and that's a big part of the CRO process anyway. In what kinds of areas did you go big bang, or did you pick a few areas, like you said, data review, and say, I can apply something there to complement?
Stephen Pyke (22:41)
I mean, look, our approach always is to start small. You've got a business that is on the hook to deliver for its customers. We're a CRO, which means we don't have lots of fat; we're a fairly lean, tight organization, and I think most CROs are. So there just isn't the capacity to take on huge experiments. My observation on integrated data delivery would be the same as my observation on all of our AI endeavors: start small, convince yourself it's doable technically, but also begin to think about the value for the business. Proof of concept. It's a fairly traditional pathway. Then pull together a business case, and then step into full development and get the tools in place. And in this case, I think we've been fortunate in that there was already energy to take another look at data management and CRAs, and we were able to slipstream in with that. But the more general point you make here is that we recognize these things need to be done together. And again, this is where POCs can be helpful. You iron out the wrinkles, you find the problems, and you figure out how to move the various facets of the Rubik's Cube to get the nice, neat picture that you need at the end.
Nagaraja Srivatsan (24:03)
And it's fascinating, right? There are different facets of the Rubik's Cube, because when you come to look at the new process and solution, as you said, the clinical tech vendors are bringing in AI. There are the LLMs and their evolution to GPT-5 and others. There are the reasoning models coming up. There are these new tech companies saying, I'll own this part of the process. And as somebody trying to put this mosaic together, it's like: do I take a process-based view and pick the best tool? Do I go tool-based and extend it to people? Do I deploy something standard so that it's fungible across the board? You must be weighing a lot of these trade-offs. Or am I over-engineering the trade-offs, and it's simple: you just pick off the shelf, deploy, and things start to work?
Stephen Pyke (24:54)
No, I'd say it's far from simple. There's what we're trying to bring together: do we build it, do we partner, do we buy it? And I think there's a fairly simple, sensible answer to that question, but that's certainly part of it. If you think a commercial vendor is going to bring it forward, particularly if it's linked to a core technology, why wouldn't you just buy it? We're not a tech company. Why would we try to do better at what they do for a living? But equally, even if you have to build it, and there's going to be plenty of stuff we have to build, I think we're going to move faster and be more often successful if we partner with someone who has been in the life sciences sector, who may have delivered similar types of solutions for some of our competitors or for some of our customers. Choosing a partner who can help you build, and having a really clear view on what makes sense to build versus what makes sense to buy, that's certainly part of it. You've then got the whole funnel of opportunities. Again, just to disclose some of the errors we made in our journey: we invited staff to submit ideas, with more or less complete freedom to do so, anything where you see an opportunity. We were literally drowning in ideas. And all credit to the folks in the organization: they wanted to see improvements, they wanted to help us get better. But we were overwhelmed by things to consider and couldn't systematically work through them all. So then we stepped back and said, okay, maybe we can be a bit more strategic, maybe we can have more of a leadership lens on this, leaders helping us by using their judgment to say: these look the most promising, these look the most likely, and if you could do these, these would probably be the most valuable. That gives us a more manageable number to focus on. And then you've got to think about organizational readiness for change. We know there are certain parts of the business that have already gone through a lot of change recently; we don't want to hammer them. You've got to think about the types of role: how much risk is associated with this role, doing this activity, in this context? At this stage in our journey, we're going to focus on low risk rather than high risk. A whole bunch of factors start to come into play, some rather obvious, some perhaps a little less obvious. But as you rightly say, we've got to align all of those and make the right choices, and it's not trivial.
Nagaraja Srivatsan (27:23)
It's not. And I think the journey you describe, the epiphany from these hackathon and POC approaches to strategic intent and deployment, is where the market is going. Because initially, to get people to adopt personal productivity tools, everyone tried the hackathon approach. But when you start to weigh ROI, strategic priorities, change readiness, all of the things you talked about, you have to align to the strategic priorities and put governance in play. Otherwise, this will just fizzle out, because, as you know, some people will use it and some won't. Now, as we pivot: you talked about getting on the train, the first station being productivity, and we're at the second station of workflow. Where is the journey going? Where is this train headed?
Stephen Pyke (28:10)
You, like everyone else, will have heard of agentic AI, and I think that is the coming wave. I don't think there's much question about that. For those who don't know, by agentic AI I think we simply mean AI capable of taking on more complex tasks, particularly those that require choices or decisions to be made within the AI itself: AI that is at least semi-autonomous. And even as I say that, I hope we all pause for a moment, remembering that we are in the business of supporting clinical trials, thinking about patient safety, in a highly regulated context. The idea of autonomy should certainly give us pause, even if it's limited. We need to be really careful about where and how we build agentic AI solutions. It can be done, but we need to be thoughtful about it. And then there are, I think, some less obvious issues we might need to be careful about too. The one that comes to mind for me is data security and privacy. Perhaps not unique to us, but certainly just as relevant. Say we have an agentic AI that's booking your travel; that technology exists today. It's going into your calendar, it's looking at dates, it's interacting with flight agents, it's making payments perhaps. In doing all this, sending information back and forth, who's getting access to that information? Now, that's one small slice of an example of what I might be talking about here. But as soon as you start thinking about it in a clinical trial context, you can see immediately that security and privacy become very significant issues that we need to be careful with. Again, I'm not sitting here saying it will never happen. I'm just saying there's a new dimension of complexity, and we need to think carefully about how we're going to manage that.
Nagaraja Srivatsan (30:09)
Let's explore that, because, Stephen, we've talked about this in the past. One issue is data privacy. The other big sword hanging over us is regulatory. We always say, my God, the regulatory agencies will never accept this. And you and I have been part of several of these guidance efforts. Where do you see the regulatory environment? Is it favorable to an agentic AI model? Or is it going to be, my God, you can't touch this with a barge pole, you'd better do something else? Where do you see this journey from a regulatory standpoint?
Stephen Pyke (30:41)
I think the answer to that question is in large part the answer to a question that is just as relevant to any AI solution; I don't think it's unique to agentic AI. Maybe answering it is more difficult for agentic AI than for the simpler assistants we're more familiar with. But I'd say the regulatory framework is becoming much clearer, notwithstanding the fact that, as yet, we still don't have final regulatory guidance from any of the agencies. Notwithstanding that, I think we've heard enough from them. We've seen papers, we've heard them ask questions, they've pointed to the issues they're concerned about. And in particular, the FDA draft guidance most recently issued provides a framework for risk: understanding where risk arises, and then the potential impact of that risk on whatever process it sits within. I think that framework helps us understand the rules of the game. And then it's up to us as developers and users of AI to convince ourselves first, and then the regulators as appropriate, that we can provide documentary evidence and assurance that we're meeting their expectations in respect of the things we know they're concerned about.
Nagaraja Srivatsan (31:57)
Yeah. I think where you're heading is where I am, which is that the FDA put forward this whole notion of context of use to assess risk. And context of use is so important, even in the change management part you talked about. Context of use is very important for ROI. It's such a critical framework that I think it can be adapted even beyond regulatory settings: I'm putting in agentic AI; what is the context of use? What are my guardrails? Am I putting a human in the middle? What kinds of things would happen if it went rogue? What are my quality checks feeding back into it? Do I have another agent watching the agent? How do we put the controls in play before the regulators even come in, so we feel comfortable that the context of use is contained, that the agent does what it's supposed to do rather than becoming uncontained? Is that a good way to think about it, and is that what leads into the AI governance setups people are starting to stand up in the marketplace right now?
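The "agent watching the agent" pattern sketched here can be illustrated in a few lines. This is a hedged sketch under stated assumptions, not a production design: the model name, context-of-use text, and PASS/FAIL protocol are hypothetical, using an OpenAI-style Python client.

```python
# Illustrative sketch of the "agent watching the agent" guardrail pattern:
# a reviewer agent checks the worker agent's output against a stated context
# of use before anything proceeds; failures escalate to a human.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical context of use for a low-risk documentation task.
CONTEXT_OF_USE = ("Summarize site monitoring visit reports. Never output "
                  "patient identifiers. Never recommend clinical decisions.")

def worker(task: str) -> str:
    """Worker agent: performs the task within the stated context of use."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": CONTEXT_OF_USE},
                  {"role": "user", "content": task}])
    return resp.choices[0].message.content

def reviewer(output: str) -> bool:
    """Second agent: does the output stay within the context of use?"""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Context of use: {CONTEXT_OF_USE}\n"
                              f"Output to check:\n{output}\n"
                              f"Answer strictly PASS or FAIL."}])
    return resp.choices[0].message.content.strip().upper().startswith("PASS")

def run(task: str) -> str:
    """Only release output that passes the independent check; else escalate."""
    out = worker(task)
    if not reviewer(out):
        return "Escalated to human review: output failed context-of-use check."
    return out
```

The design choice is the point: the check is made by a component independent of the worker, the context of use is written down explicitly, and the failure path leads to a human rather than to silent retry.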
Stephen Pyke (33:00)
It does. In a way, I think what the FDA have done in their document, and actually they've borrowed from some work that was done earlier by other organizations, but I think it's a good borrowing, is to recognize something about context of use that ought to be intuitively obvious to all of us. Is it safe to use ChatGPT? Well, it depends. It depends what you want to do with it. What's your context of use? It's sort of obvious. So, absolutely. If we're talking about governance here, one of the things that every organization, I think, began its journey with, and I'm sure we certainly did, is: how do we set the rules of the road? How do we make sure we've got guardrails, not only for users but also for the developers themselves, so that we're developing the right kind of solution with the right kind of performance expectations, recognizing the context of use? And once we've got solutions starting to be embedded, that embedding process is itself being monitored and governed: we're checking that things work the way we expect them to, and we've got appropriately senior-level oversight of everything we're doing. What's pleasing to me, and I've acknowledged one or two early misses, is that one of the things we got very right, and Srivatsan, I know you and I were part of doing this in one instance, working together, was getting that safe rules-of-the-road framework, responsible AI was the phrase we were using a lot at the time, established and out into the public domain: a public statement of what we expect of others, but also of ourselves. I think that was a real success, and the work we did with ACRO, and that my own organization, Parexel, has done in that space, and others too, was incredibly important and is still incredibly valuable today.
Nagaraja Srivatsan (34:56)
No, absolutely. And I think you've laid that out: you go down this path of the train, personal productivity, AI workflows, agentic AI, but at every stop, as you proceed, you set your context of use. And once you've done that, you still need that responsible AI framework to evaluate the context of use and ask: is this still being responsible? Is it still doing what it should do? I think that gives us a very good roadmap for how to go about putting things in play. I know we could continue to talk for a long time, Stephen, since this is a passionate topic for both of us. Before we wrap up, are there any key takeaways you want to share with the audience on how they should approach this journey? We've all had quite a bit of experience with the right way and the wrong way, so any words of wisdom would be wonderful.
Stephen Pyke (35:51)
Words of wisdom, I'll do my best. A couple of things; I don't want to repeat myself too much, but a couple of things I haven't mentioned yet. One emerged from a survey that my company did at the beginning of this year. We surveyed participants in the life sciences, senior leaders and frontline staff, and we asked them a bunch of questions about the evolving clinical development environment. Among the questions was one that asked: as you think about the things that are changing, the things that are coming, and coming fast, what is uppermost in your mind as you think about the challenges you're going to need to engage with? Well, no surprise, number one was AI. But what was particularly interesting was when we then asked: okay, how prepared are your staff for these coming changes? More than half of leaders felt their staff weren't AI-ready. There's an important message there for every one of us, which is to remember that if you feel that you are, either because of your role or because of your personal interest, at the leading edge of what's going on in AI, there's an awful lot of people who aren't, and they need help to come on that journey. That's going to be important because, however we may feel about the pace of AI, and it's not been as quick as some of us expected, it's still coming quite fast, and it will, in my view, take us to a place where we see AI everywhere. Every role will touch AI every day. You won't be able to do your job without touching AI. How do we help 20,000 people at Parexel, how do we help the many thousands of people in our sector, be ready for that wave of change that's coming? I think there's a really critical question embedded there that we need to sit back and ask ourselves. One little snippet from that survey, because I think it's a really nice one, relates to what we were talking about earlier. When asked about modes of training and support, by and large, leaders said online is fine; by and large, staff said, I want someone to hold my hand. I'm not saying one's right and one's wrong, but it was the only question in our survey where leaders and staff were so far apart, and I think that in itself is really interesting. So it's coming, and it's coming fast. Bear that in mind.
Nagaraja Srivatsan (38:22)
And I think you're spot on about organizational preparedness and coming up with the right framework to bring everybody along. Stephen, it's always fascinating. As I said, we could keep talking, but you've really given us a good roadmap: how to get on the train, what the journey looks like, the governance, the guardrails, and the most important aspect, the human element, bringing everybody along on the journey. Thank you so much. Really appreciate your time today, and I'm looking forward to many more conversations together.
Stephen Pyke (38:53)
It's been an absolute pleasure. Thank you for the opportunity.
Daniel Levine
It was really interesting to listen to Stephen talk about what is often referred to as the hype cycle and say that we haven't advanced as quickly as some other sectors. But he did say we're seeing genuine benefits take hold, with vendors shipping solutions and companies adopting them. What do you think?
Nagaraja Srivatsan (39:16)
I think the clinical trial space has always been quite cautious about how it adopts newer technologies. But with that said, I love the framework he talked about. His company started by first using AI for personal productivity, getting people comfortable using it for their own productivity improvements. Then he moved into the second stage of using it in workflows, really looking at clinical workflows from a strategic standpoint. And he's now setting himself up to go down the agentic AI path. So I think it's a very good playbook for how you can go after the use cases in a crawl, walk, run approach.
Daniel Levine (40:05)
One of the challenges he noted was measuring ROI, particularly capturing short amounts of time that are saved. How should we be thinking about ROI in the context of AI?
Nagaraja Srivatsan (40:16)
We had a very good discussion around ROI. The old-school view of ROI is that if I use AI to save you time, then that is the only measure of AI's productivity improvement. What we explored is that that can be one part of it, but there's more: because you're more productive, the cost of quality comes down, which means you're not doing rework. Or the speed we talked about: the speed of decisioning, the speed of connection, the speed of getting information moving all improve. All of these things together actually lead to better outcomes in the clinical trial process, because we're being more efficient in how we communicate, more efficient in how we work, and more efficient in reducing waste in the form of quality misses. So I think it's a really interesting way to look at ROI: not just from a personal perspective, but from a 360-degree impact perspective.
Daniel Levine (41:16)
One of the other things he said is that we need to get away from AI as a pure tech play; if you want to get value, you have to talk about change management. His company actually brought in an external partner to help with change. I think companies are very focused on the technology in terms of what it can do, but do they understand the change management aspect that comes with AI, and how should they approach that?
Nagaraja Srivatsan (41:40)
I love the way he framed it, like half full and half empty. He said that half of his teams were AI users and half were not. You could call it a success because half are using it, but he also said the other half has to be brought along. Change management is no different from what was done before. You've got to bring in change agents; we talked about that, the super-user communities, or mentors, or early adopters of the technology, so that they can become the mentors and teachers who bring the other people along. He also said that people really are not against this technology. They just need to be led; they need to be taught. One of the interesting points from the survey they did is that people want to be led and taught not virtually, but in person. If you really think about it, having a change process for how these things get adopted is really important to the success of any AI initiative.
Daniel Levine (42:38)
And when you asked him about where we're heading, he did talk about agentic AI and having these semi-autonomous AI programs running. What are some of the concerns that companies need to think about? He talked about this in the context of embracing it with some caution, but is it going to be regulatory issues that people have to worry about, data security and privacy, or something else?
Nagaraja Srivatsan (43:04)
There's a general framework we talked about which is very important, whether it's agentic AI or AI more broadly, and that's context of use. A context-of-use framework helps you ask: what am I doing with this agentic AI? What is the context? And what guardrails do I need to be responsible, so the AI is doing what it's supposed to do and not doing things it shouldn't? It's almost like you have to articulate: this is what I'm doing with my agent; this is how I am governing it; this is how I'm making sure I'm doing quality control; and this is what I do if it doesn't behave as intended, so that it can be brought back in line. Setting up that whole process is a critical component of how you can bring about the adoption of concepts like agentic AI.
Daniel Levine (43:57)
Well, a great conversation that gave us a lot to think about. Thank you as always.
Nagaraja Srivatsan (44:02)
No, thank you, Danny. Appreciate it.
Daniel Levine (44:07)
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.
Thanks for joining us.