Aired:
May 29, 2025
Category:
Podcast

Improving Clinical Trial Designs with AI

In This Episode

This episode of the Life Sciences DNA Podcast, powered by Agilisium, zooms in on how AI is quietly transforming the DNA of clinical trials. It’s not about flash—it’s about removing friction, improving design choices, and setting trials up for success before the first patient ever enrolls.

Episode highlights
  • No more guesswork. AI taps into past trial data to help design protocols that are leaner, more realistic, and aligned to patient needs—cutting down costly amendments.
  • AI isn’t just crunching numbers; it helps teams spot the right patients faster, ensuring trials aren’t just filled, but meaningful.
  • Before going live, AI models can simulate trial outcomes—helping teams anticipate roadblocks and refine their strategy proactively.
  • Fewer surprises = fewer setbacks. AI flags where patients might disengage or where protocols might fail, so teams can adapt early.
  • Well-structured, data-rich designs lead to clearer submissions. AI helps ensure trials meet both clinical rigor and regulatory expectations.

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. Amar, good to see you.

Amar Drawid (00:27)

Good to see you, Danny.

Daniel Levine (00:29)

We've got Scott Chetham on the show today. Who is Scott?

Amar Drawid (00:33)

Scott is the co-founder and CEO of Faro Health. He's led clinical operations at Verily Life Sciences and served as the chief technology officer and vice president of clinical operations at Intersection Medical. He's also been a venture partner at Versant Ventures, and he earned a PhD in health and medical physics and a bachelor's degree in medical engineering from Queensland University of Technology.

Daniel Levine (01:00)

And what is Faro Health?

Amar Drawid (01:01)

Faro is focused on leveraging AI to improve the efficiency of clinical trial design, with the goal of reducing the time and cost of developing new therapies.

Daniel Levine (01:11)

And what are you hoping to hear from Scott today?

Amar Drawid (01:13)

I would like to understand what aspects of clinical trial design they have been able to automate, what the challenges have been, and how this has translated into real-world time and cost savings. I would also like to hear how receptive clinical operations teams and medical writers have been to integrating this type of AI system into their processes.

Daniel Levine (01:33)

Before we begin, I want to remind our audience they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Scott to the show.

Amar Drawid (01:53)

Scott, thanks for joining us. Today we're going to discuss drug development, clinical operations, and how Faro Health is using AI to reduce the time and cost of drug development. As we all know, drug development continues to be expensive and fraught with failure. We've seen many technologies come along with the promise of cutting costs, accelerating development times, and reducing failure rates, but costs still continue to grow. So what's the case for AI changing that?

Scott Chetham (02:21)

As you know, the workflows in clinical development are very set because it's a regulated field. We all have to do the same things, but how we do them is a little bit different. And they're deeply entrenched, very manual, and siloed, and a lot of those silos require really deep specialty training. So what AI can really do here is a few things. One is we can now automate really complicated tasks, because they're done by very expensive people and it's not the best use of their time. At one point in my career, I had purview over more than 50 programs. I used to spend almost every night between 6 and 9 PM redlining documents across programs, trying to take the knowledge you pick up in one program and spread it to another, because, as you know, Microsoft Word doesn't translate those key learnings very effectively across groups. I also think AI can unlock these key pieces of knowledge that sit in people's heads but keep getting lost, and surface them at the right time, while people are actually making those decisions. But I think the really exciting thing is it can fit into people's existing workflow, because changing that is really hard. As I'm sure you know, we're pretty entrenched in the way we like to do things.

Amar Drawid (03:43)

Okay. And as we think about AI coming in, there are different types of AI, right? There's automation, which is rules-based, there's machine learning, and now there's generative AI. So do you see all these different types of AI making a difference in clinical development?

Scott Chetham (04:00)

I think they're tools, and each one has its own place, sometimes even within the same workflow. I think machine learning, to be able to surface these key things and key pieces as decisions are being made, is critical. But the thing I'm most excited about, and sometimes, at the same time, almost the most concerned about, is generative AI. I mean, we use it, but I caution people all the time: it's a really phenomenal piece of technology, but you can't replace your critical thinking with it. It can do fluent things really well. We're showing it can produce first drafts of clinical trial protocols in about 25 minutes; it takes humans months. But you can't replace the critical thinking that drives those decision processes. I think that's the balancing act we're going to have to work out as a field as this new technology comes in: how do we adopt it in a fashion that amplifies and accelerates what we're doing, but doesn't expose new risks?

Amar Drawid (05:07)

Yes, absolutely. That's definitely where we're seeing a lot of these regulations coming from the FDA and from the Europeans as well, right? And we're now also moving toward an era of precision medicine. So to what extent might we improve clinical development by doing a better job around patient selection and matching clinical trial participants to the right drug?

Scott Chetham (05:27)

I'm sure you've heard this and  please answer. I mean, a lot of what we do is are we collecting data from the right patient at the right time, you know, and using the right molecule in them? And what we've seen is I think there's  another dimension to that, when we talk about feasibility. I think when we look at it with traditional feasibility is where are these patients? But I think there's more to it as what journey are these patients on? So what is their current standard of care? And then the thing is, and then I think the other piece of feasibility is around, can patients actually do what we're asking them to do? And I think this is the point where, you know, you can spend a little bit of time,  is you, some of the things that we do, but I think it's important is because you could do this by hand, by the way, it just would take you months, but AI can kind of do it in minutes, seconds. We have this thing called the schedule of activities, which you're going to be very, very familiar with, which is for people who are not in a trial or a clinical study, there's this table. And in some ways, I think of it as the heart of the trial and it's what is happening to a particular patient on a given day. So what are all the activities and things  and data elements we're going to collect from the patient through their journey in this time? What does that mean? So the question is we get this table and we give it to investigators. We give it to IRBs. We give it to health authorities to review. We sit back and think, what does this mean for a patient? How many hours is this in total? Is this, you know, what does a day look like? Is this a 14 hour day? Is this, is in some early stage research, you can have almost back to back 14 hour days. And it's really, really common in early phase research. Is that practical? So when you go beyond feasibility, the question I like to like ask teams is, could you do this? If this was a family member, could you do this? And so a particular example we had was a, a  pediatric rare disease study. Because  it was early stage, they needed to collect the pharmacokinetics. what is happening to this molecule in the body? And so it's early, so they need to characterize that. But the original PK plan or the data collection plan meant for two to 10 year olds, you would have to have these children in a clinic with supervised parents. This is, by the way, extremely common, three to four days in a row because you have to have them sitting around between giving them a drug dose and then taking blood every three, four hours because you're trying to characterize what's happening. What's the half life of this drug? kind of surface, this is where AI, if you were to look, calculate all these visit times, and by the way, this has implication for hospital staffing because now you're crossing shifts, training requirements. And it has big implications to who could even participate in this. How many family members can take that many days off work and actually be with their child through that experience. And so what was they're able to see when this data was basically shown to them in real time, when they were designing this trial, they weren't actually, this isn't going to work. So they went back to this case to the FDA and took the data to them and said, this is not, it's going to be impossible to get anyone to enroll in this trial. 
Could we instead do an innovative design and say, actually, we need this data, but let's do it in a way that no one child has to do multiple days in a row or 14 hour days. And when everyone was shown the data, the regulators actually granted them some leniency in what was traditional to be able to come up with something that was more meaningful. Traditionally, you would have waited for an amendment when you had zero enrollment. And so ...what AI can do, and this is just one example of surface thing insights early, is when you're making these really early decisions, right when the team's making them, they actually see the impact and easy to understand metrics. And it's pulled from kind of like past and this is all done by AI because it can basically link the information between an activity to different time sources of how long does this take, by the way, which is linked to this code. And so it can reach into systems and go, by the way, and it costs this amount of money and it can assemble all this for you. So you don't have to do these painful analyses that take time and are often too late in the process to have an impact on what I would call a good design. So I know it's a little roundabout way of answering feasibility, but I think we have to expand our definition of it. It's not just on a site. Are there patients at sites? Are there patients that can do this at that site and in the site even do it because the inverse of that is looking at the staffing demands, as I said, and the technical training required. When you surface that to people as well, it has big budget implications. so it's, it's ... This is where I think AI can, it shines a little bit is it can do this very manual busy work behind the scenes or to just help people make decisions because right now all of that just gets lost in red lines and word documents. People make it have a thought or someone goes you should do X, Y and Z based on my past experience and then it cleans up and it's all forgotten. So I think that's the exciting - I think these are the exciting things that can fit in the workflow today and just really help us make better decisions.
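To make the burden math Scott describes concrete, here is a minimal sketch of the kind of schedule-of-activities calculation involved, assuming a toy three-day PK schedule. The activity names, durations, wait times, and costs are illustrative inventions, not Faro's data model.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    minutes: int      # typical duration per administration
    cost_usd: float   # typical per-procedure cost

# Illustrative reference data; a real system would pull durations and
# costs from linked sources (e.g., billing codes), not hard-code them.
REFERENCE = {
    "physical_exam": Activity("physical_exam", 20, 150.0),
    "pk_blood_draw": Activity("pk_blood_draw", 10, 85.0),
    "chemistry_panel": Activity("chemistry_panel", 15, 120.0),
}

# A toy schedule of activities: visit day -> activities performed.
SOA = {
    1: ["physical_exam"] + ["pk_blood_draw"] * 6,
    2: ["pk_blood_draw"] * 6,
    3: ["chemistry_panel"] + ["pk_blood_draw"] * 6,
}

PK_WAIT_MINUTES = 180  # roughly 3 hours between serial PK draws

def day_burden(activities: list[str]) -> tuple[float, float]:
    """Return (hours on site, cost in USD) for one visit day."""
    procedure_minutes = sum(REFERENCE[a].minutes for a in activities)
    pk_draws = activities.count("pk_blood_draw")
    wait_minutes = max(pk_draws - 1, 0) * PK_WAIT_MINUTES
    cost = sum(REFERENCE[a].cost_usd for a in activities)
    return (procedure_minutes + wait_minutes) / 60, cost

for day, acts in SOA.items():
    hours, cost = day_burden(acts)
    print(f"Day {day}: {hours:.1f} h on site, ~${cost:,.0f} per patient")
```

Even this toy version surfaces the problem Scott recounts: serial PK sampling turns each visit into a roughly 16-hour day, three days in a row.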

Amar Drawid (11:03)

Yes, yes. And I remember that when designing clinical trials, I don't think this was a big factor we considered, right? But it is important when actually running the trials. So, Scott, before we go into Faro Health, I wanted to ask about your background. You've led clinical operations. How did you end up launching an AI company?

Scott Chetham (11:25)

It's a bit of a roundabout step, I would agree. So here's what actually happened. At a prior role, you get to the point in development operations where you start to run more and more programs. I think I got to the point I had 54. And I was working nearly 12, 14 hour days at this point. And a lot of it, as I said, was very much becoming what I would call as the central knowledge bank for that program area. You become the inherent expert and you're trying to keep all your teams aligned. As I said, I'd be constantly redlining documents and then trying to take the information and learnings from other programs and cross feed them across that while managing an increasingly growing team. And it's just getting harder and harder. I'll just be blunt. Clinical operations and development operations is just, we have to collect more data than we ever did before. We have to collect them from more sources in a highly constrained medical system that is at capacity, and we're asking them to do more. So frankly, I was getting burnt out. It's like, is this what I want to do with the rest of my life? So I actually took a sabbatical and actually went surfing in Costa Rica. And I had a very great, nice employee, that employer that I basically gave me fun employment for a year. So I went to them and we're still very, very close. It was like, I need to take another career path. And so we worked together to come out with a transition plan. And I even hired my replacements and kind of helped that through. But I had this time to surf. And at some point I was like, well, what do I want to do with the rest of my life? And I started creating a list of all these pain points and things that I think all our colleagues are just struggling with. And that's how the genesis of the company kind of came about. And from there, it was like, well, if this is the problem, how do I solve it? And obviously AI, the only way we would be able to do this is really with AI. And that was the genesis of the company. So to go around and find a technical person to kind of, I should say, someone to lead it, so I basically started assembling a team. And so that's the journey. So I'm trying to build what I wish I always had.

Amar Drawid (13:35)

Okay, gotcha. But this is a great approach, in the sense that there's a problem and you're bringing technology to solve it, rather than having the technology and then looking for a problem to solve, right? The focus is: hey, this is the problem we want to solve. That's fantastic. So, all right, what problem is Faro trying to solve?

Scott Chetham (13:57)

It's twofold. It's this knowledge walking out the door, because I think we underestimate in this industry, how much inherent human knowledge sits in people's heads, but these programs run for 12 years, often 10 to 12 years. And the things that you learn in the earlier phases that are actually critical sometimes to later phase of why did you do that? Like, why is that set up that way? Why is this thing you're collecting collected that way? And sometimes it's even down to intricacies of the inclusion exclusion criteria, like the population dynamics. You lose that. And unfortunately I've had programs fail because we've lost knowledge and then you have to recapture it and do subpopulation analysis to re-understand the thing that you actually learned five years ago. And sadly, I think it's a lot more common than we necessarily give it credit for is loss of knowledge. And then as I would say, programs expand over time because often, as you know, we start in a more narrow indication and then we try to expand it. Cause that's a very clever way to kind of get an important molecule into the market. But also, you know, from someone who just does the work, so much of like my time is spent on what I would call very important but very, very time consuming tasks and everything from budget create, like site budget creation, not that I've personally done a site budget in 20 years, but they take about four days of work. That's ripe for automation now. In fact, it's one of the things we do can do, but also like protocol writing. And I think this is one of the things when - when AI is applied appropriately, so different tools, machine learning as  you said at the beginning, machine learning for being able to pick up these patterns and say for some people, but generative AI to be able to then take a design and then translate it into words. And the other sets too, I'm going to add a third one just randomly. I think the thing is in my opinion, really well designed programs to prove a molecule is safe and effective has a precise definition of what the experiment. And I think today we write them in these 200 page long form protocols. You can actually represent, and this is one thing Faro does really well is by translating, instead of doing it that way and designing in a digital environment that can actually have precision enables you to have a "write" in one place, very precisely, definition of something. And then you can read from it and translate it with generative AI or other transformations to automate inherently complicated and long processes. So for example, I'm sure you've seen this many times, chemistry panels and everything. I'm measuring a chemistry panel. What did you mean? What analytes are in the chemistry panel? Like what machine are you running it on? Like what are going to be the normal ranges? Are you running it as central lab or local lab? Who's reviewing the safety data? All of that actually can be in a digital environment. You can do that in a few clicks and be done. You don't have to write pages and pages to describe it. And then you can use an LLM to write it for you. And so we kind of inverted the experience, but once you've done that, you can then apply AI in really exciting ways to start to automate things like site budgets. You can automate EDC programming. We're partnered with Veeva on that. So take something that's six weeks, compress it down to a tiny fraction of that amount of time. And I think this is the exciting journey we're on now. 
And I think it's about partnering with companies as well, because we can't do everything. So part of our approach is an open platform that people, data scientists, and AI teams can reach into and build from. I'd love to hear your opinion on this, but I think one of the inherent problems with some of the previous technologies is that they were too closed off, too walled off. I think we need the opposite approach: if it's your data, you should be able to use it any way you like, and you should be able to reach into our system and build on it if you want. But I'd love to hear whether that's the way you think about it.

Amar Drawid (18:22)

Yeah. I mean, the way I think about generative AI in particular, insight generation is one of the key things, right? There is so much knowledge, so much data. And what you said about the stuff in people's heads: how do we translate that? Yes, there is knowledge you can capture in these RAG systems, retrieval-augmented generation, the corpus that you have, but that doesn't capture the knowledge that people have. So how do we translate it? What we're trying to do is capture it using prompts, questions and answers, because we want to teach the LLM how to think. That's one approach we're trying right now. It's a tough question, and it's what I think about when we're deploying a new GenAI system. People ask, okay, can it do this basic task? And I'm not interested in that. What I'm interested in is: if I want it to work as an analyst, I'm going to give it the knowledge that an analyst has, and then can it do a good job as an analyst, rather than just a basic task? That's a challenge I think we're all trying to solve.

Scott Chetham (19:39)

It's a great point. I mean, that's actually how we in some ways approach protocol authoring is that there's a lot of information on protocol authoring out there and you can teach, you can provide by RAG huge amounts of data. We, you know, we have a lot of proprietary knowledge graphs we put in there to know that A is equal, you know, links to B and C and D to give it some ability to really reason like will understand concepts. But the trick is to get them to write protocols is what's the context? What was the actual intent? Because it's hard. They don't have critical thinking to the point that, I'm doing this molecule because - it's going to have to ask you all sorts of questions on the safety profile. It's going to have to ask you. You're going to be at a thousand prompts. And I think this is such a great point you brought up is it's marrying those with, in our case, we married it with this vision, this designer to take the key concepts because combining them together, actually get a way to prompt. Inherently, the designer is an LLM prompt engine. and, but what we've found is in other people can take that, customers can take that information and they're doing all sorts of really interesting transformations now. We work with two of the top fives industry wide and they leverage that digital definition now in ways that we actually weren't quite predicting. Because you can, you know, from trial simulation, or if we were to do, you know, we run scenarios, we can run scenarios and things like that, but they're taking it even further and going, actually, if we had a design and we did this and we have this past bank of designs, now we have a digital representation from us. We, if we did install that things, then you can actually start to run historical comparisons. And as you said, do really exciting things because you're teaching, they can teach a machine now because they have a precise standardized definition. I think that's kind of what in some ways we inherently ended up building was this thing that highly structures trials with a lot of knowledge graphs behind it that says this thing links to this thing, links to this thing. And this is actually what happens when this happens. So if you told this, this happens. And that has turned out to be an interesting thing to have solved from an indirect, very indirect way.

Amar Drawid (21:59)

Okay. So, trying to understand that more deeply, in terms of designing and writing these protocols: are you taking a lot of templates? The way I think about it is, say you're running a protocol for an oncology trial. There are things that could be specific to that particular type of cancer, things specific to oncology in general, but also some routine things, like collecting the usual LDL cholesterol biomarker data. So how do you structure all of this? And to what extent can you automate the writing of the protocol?

Scott Chetham (22:35)

We're at almost 100% of it now. You will have to upload some bits, but let me get into the specifics, because I think that's the important thing: this is where a study designer really helps. If you break a protocol up into key concepts, you might have your objectives and endpoints, your inclusion and exclusion criteria, the schema with the cohorts of what's happening, and then the schedule of activities. Take the schedule of activities, for example. What we have are banks of oncology designs, standard of care and other things you can reference, or your own template, say for a particular solid tumor trial. It looks a lot like Word, but take an example: I'm going to put in a physical exam; it's in almost everything. We type "physical exam" in Word, and the problem is: what did you mean? A physical exam is actually different in different countries. I'm Australian, and I've worked in medicine in different countries; it has a slightly different definition. In the US, it doesn't include a neurological exam. So what systems are you including? What a designer does is carry all that deep, inherent information. Now, ours will not design the SoA, the schedule of activities, completely for you; it can suggest what you should do, and then you refine it. But to write the protocol, our system goes in and says, okay, I know I'm collecting these tests. Say a physical exam, and in this particular case it includes height and weight, plus head, neck, shoulders, a skin exam, all types of things. It knows what the systems are, so it knows what it has to write. It knows for vital signs, okay, I'm collecting vital signs, and vital signs consist of X, Y, and Z. If I have a chemistry panel, it knows I'm collecting the chemistry panel at these time points, and these are the analytes, and it builds the table up. So it works from these simple building blocks by breaking the problem into two parts: basic design with assistance as one system, and a separate system of agents that then writes from this precise, very detailed definition, which sits in a different model, to produce the long-form version. It's broken into the steps of the journey people already follow. You start at a high level; people sometimes start at a concept-sheet stage with just objectives, endpoints, the key things to measure, and the design. We help you build up from that in the designer. Once it reaches a certain point, the key components of the study, what we call the synopsis, the LLM can come in and write the rest of it from that. It does that, because it knows the intent of the design, by leveraging the RAG system, as you said, of all the past oncology studies, to write the long-form version. Then it uses a series of other agents, based on training, to go through and check: okay, we had this precise intent here, and this is what was written. Are they actually aligned?

And then another agent goes through independently and says, well, in the history of everything done in oncology protocols, you're missing this; it's in other protocols. You still have to make the decisions. It won't inject concepts, but it will give you a checklist and comments: by the way, you've missed this key concept, because it's in 50 other trials. So it matches the workflow of today.
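A minimal sketch of that checklist idea: compare the concepts present in a draft against how often each concept appears in a bank of past protocols, and flag likely omissions for a human to decide on. The concept names, data, and threshold here are illustrative.

```python
from collections import Counter

# Toy "bank" of past protocols, each reduced to its set of concepts.
historical_protocols = [
    {"physical_exam", "vital_signs", "chemistry_panel", "adverse_event_reporting"},
    {"physical_exam", "vital_signs", "ecg", "adverse_event_reporting"},
    {"physical_exam", "vital_signs", "chemistry_panel", "ecg", "adverse_event_reporting"},
]

draft_concepts = {"physical_exam", "chemistry_panel"}

def missing_concepts(draft: set[str], bank: list[set[str]],
                     threshold: float = 0.66) -> list[str]:
    """Flag concepts found in >= threshold of past protocols but absent
    from the draft. The human decides whether each flag matters."""
    counts = Counter(c for protocol in bank for c in protocol)
    n = len(bank)
    return sorted(c for c, k in counts.items()
                  if k / n >= threshold and c not in draft)

for concept in missing_concepts(draft_concepts, historical_protocols):
    print(f"Checklist: draft is missing '{concept}', present in most past protocols.")
```

As Scott notes, the point is that the check only comments; it never injects content into the draft on its own.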

Amar Drawid (26:52)

Okay. But then do you have different agents looking at these different things? How have you built that agent structure?

Scott Chetham (27:02)

It's agentic AI and this is where I sound like I know what I'm talking about. I'm not there. I'm the dev guy. I'm a clinical development guy. I'm not necessarily computer scientists or a machine learning AI person. I'm just learning as I go. But yeah, it's agentic AI. And so there's different agents for different things who are very good at it. We've found, or the team has found that you need quite a narrow context for these things to perform really quite well. And that you actually have to, so there's, I wouldn't say it's a different agent, but you even have to break down sections in a protocol, like the safety section into one specific context and then build. And then you need something then that holistically can look at it. So, and if you try to get it all in one, it just, it loses context. So I think that's been a lot of the key understanding. And I think the other key understanding as you said, is it's teaching, it's teaching machines to understand the problem. And I think that's, that's what we brought in with our, with the study designer is to make that thing work we had to build an ontology that could represent break protocols down in concepts and how those concepts deeply relate to every other concept within the domain. So it's  exciting, I mean, and we've done it, but it was not a trivial amount of work.

Amar Drawid (28:31)

Not at all, not at all. And what I've seen with training a lot of these GenAI systems is that the technical side is okay, but it's the domain knowledge that's going to make or break it, right? To me, it's the people who understand the design who, in the end, are driving it and making it successful. The technology can take you only to a certain extent; beyond that, it's all about the domain knowledge.

Scott Chetham (29:00)

Exactly. And you kind of asked this at the beginning, but I'll reframe it because it's important: fitting into people's workflow really matters. These are really great tools to accelerate, so 25 minutes from a design to a first draft is much better than a couple of months; that's what's achievable today. But it's only because, as I said, you've got this very staged, workflow-based process. Our protocols are developed in a thought process that works, and we mirror it: a clinical scientist does this, teams collaborate on these key concepts, and those key concepts are usually handed to medical writers, who in some ways are very highly trained and knowledgeable project managers who have to meet with the team and extract key information. We just make sure that key information is captured back in the design at that time. Then there's an exciting observation that's starting to emerge, because this is still early: when medical writers review the first draft of the protocol and something's not right, it's because it was never defined. So you pick gaps up earlier. I actually think it's amplifying humans' capabilities to do better work, because now you realize, oh, the design is missing this key concept. So I think this is one of those times where one plus one can actually equal three. I don't see it replacing people; I think it will make people a lot better at what we're doing.

Amar Drawid (30:44)

Yeah, so you talked about medical writers, who are now dealing with this content generation. But I have seen some resistance from medical writers to adopting these kinds of tools, because there is that fear: is this going to replace me, right? So yes, there's the technical and business element, but there's also the human element. Have you dealt with that, or have your customers had to deal with this human element?

Scott Chetham (31:11)

Yeah, I would say the hardest thing, and this is what I've learned as a CEO, is behavioral change. I think it's just inherent. Part of our business is actually change management best practices; we've got a consulting arm that does that now, and it came out of necessity. I'd say it follows roughly an 80-20 rule: about 80% of people have been very excited about the technology, but you've got a smaller pocket who are resistant. The form in which we deliver our technology, the actual deliverable, the way we output the design and the generated documents, has changed based on how users want it. While the design experience is all done through a browser, it's a more encapsulated experience, and we found medical writers want to work in Microsoft Word and don't want to leave it; if you try to break that experience, it doesn't go very well. So a while ago we changed that part of how we operate and created a Word add-in that ties directly back into our system. It sits in whatever template your company uses, so you don't have to waste time uploading it; our agent reads the template and then guides you through creating the document, section by section, without your having to change anything about the way you work. We had to do that because, I realized, the tools in Microsoft Word for medical writing, how writers lay things out, the look and the styling, are critically important to them, and most companies have a style guide too. It's too hard to take that away and introduce this at the same time. To your point, we had to change how we operate to match how people operate.

Amar Drawid (33:20)

Okay. So Faro claims to eliminate bias and hallucinations in protocol designs, right? And this is the interesting thing: on the one side, we want the large language models to be creative, but on the other hand, you have this clinical trial design where you don't want it to make any mistakes. So how do you strike the right balance?

Scott Chetham (33:45)

That's a great question. I think there's not, I don't know if anyone's achieved it yet, to be honest. I would say from our perspective for  just the generative AI writing, we have a very rigid enforcement. They said a lot of agents that go back to the design and the intent. And if it's not, you get a series of checklists and then it will rewrite certain sections because sometimes the first draft, you know, let's say you were doing a, you're measuring something. It might accidentally add something because it was in the bank of oncology things that it was trained on, but that's where our agent will go through and go back to the designer and go, no, that's not there. Go rewrite it. And it loops until it sorts that out. So the reason this works is because we have a source of truth of what the intent, like the key critical thinking piece that we extracted from people to be able to do this. I think the next question is we expand. We spoke at the beginning about one thing we also do where we surface insights back to people. We're on a journey to do, I would say, over time to add in next year, would say, ... some guidance on to designers and say, well, by the way, when we've seen this X, Y and Z, there's been an amendment. So you might want to think differently about that. I think that that's coming from us, but I think how we - there is still unknown. We are still working through how to reference that in a way, because that's, this is the time in our journey now where you could get hallucinations. So we're leaning toward referencing it and then guiding people to check the reference. Like we've still written these protocols and then let them go and check that that was real. Because I think to your point, we want more stuff, but the technology is still in flux.

Amar Drawid (35:38)

Yeah. And have you seen that employing multiple agents cuts down a lot on hallucinations, because they're checking the first agent's work? What's your observation there?

Scott Chetham (35:51)

Talking to our team, I think having very narrow contexts has helped very much. Constraining one agent to, say, extract the dosing schedule from a long document is a much better approach than trying to have one agent extract all the concepts. Then you augment that with the knowledge graph of how the dosing schema fits with the rest of it. Our experience is that a lot of agents with very narrow contexts is much better than one agent with a wide context. We're still working out, from an optimization and cost perspective, because this all costs money to run, where that boundary is. I know our team is running tests on that; right now we have quite narrow contexts on things.
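A rough sketch of pairing one narrow extraction agent with a knowledge graph, as described: the agent does a single job, and the graph supplies the related concepts that downstream agents must reconcile. The graph contents and `call_llm` are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    raise NotImplementedError("stand-in for a real model call")

# Tiny illustrative knowledge graph: concept -> related concepts the
# extracted dosing schema must stay consistent with.
KNOWLEDGE_GRAPH = {
    "dosing_schedule": ["pk_sampling", "visit_schedule", "drug_supply"],
    "pk_sampling": ["schedule_of_activities"],
}

def extract_dosing(document: str) -> dict:
    # Narrow context: this agent does one job on one concept only.
    schedule = call_llm(f"Extract only the dosing schedule from:\n{document}")
    # The graph tells downstream agents which concepts must be reconciled
    # against the extracted schedule before the draft is accepted.
    return {
        "dosing_schedule": schedule,
        "check_consistency_with": KNOWLEDGE_GRAPH["dosing_schedule"],
    }
```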

Amar Drawid (36:49)

Okay. All right. And I know that as we design these trials, sometimes we're even thinking of getting market access input on which endpoints we need in the clinical trial design to help with access and reimbursement. Is that something that's getting incorporated, or something you're thinking about going forward?

Scott Chetham (37:17)

I'd love to. I think ... yeah, I think when we're working with people, we have our long wish list. Yep, we have them on the wish list. I think the tough thing about being in my role is like, what do you tackle? On the feature requests, what are the ones you tackle first? I think one of the first ones we tackled is, and we have a paper with Merck Pharmaceuticals out on this. It's about from about seven or eight months ago where we worked with them and what we do is what we call in it, we help them understand what is core information for teams designing studies. What's core, what's non-core. And by core, mean, what supports your regulatory endpoints. What do you need for commercial? What do you need for regulatory? What do you need for other things, payer in different jurisdictions? And we were able to break that up in a way teams could understand it. And here's the fascinating thing. And it's worth, it's, actually worth reading for people who run programs. They saved $130 million across six programs using this. And we were quite, we partnered quite, closely with Merck. They've been a great partner for us. And how it was is the teams were able to use this data to identify, they were able to see what patient burden is, site burden, cost of the trial in advance as they were ideating on we do this scenario? Should we do this? Should we collect it at this time point? And they could just quickly click and play with different things and see the implications in real time. And what happened is, is the thing that actually changed behavior the most was when it was phenomenal to see, cause I got to be in the room for this, for these six teams. Cause when we launched it, they all came in, it was all done in three hours, each team, they came in with what they wanted to go forward with to governance. And they looked at the patient journey a lot of the time and went, was this the right thing for the patient? And do we really need this right now? And there was a lot of debate, but what they walked out with was different to what they walked in with. And that's where the $130 million was across those six programs - it was the teams deciding this is not the right time to collect that data point. It's not necessarily the only only how we get used but it's one way that you can actually use this technology and that is I mean you can go and read that I'm I'm one of the authors on it. I mean it was really nice to kind of start to see validation of using a tool like this.

Amar Drawid (39:45)

So how do you quantify the $130 million in savings you talked about? Is that time saved? How do you tell people what the ROI is?

Scott Chetham (39:58)

Yeah, that one's actually really easy to understand. They walked in with a design that was collecting this many procedures or assessments, over this time period, in this many patients. That's what they wanted to take forward. If you don't collect that information across that many patients, it happens to equate to about $130 million. It's a big number. Particularly in some late-stage trials, not collecting a simple thing, like an extra physical exam or an extra chemistry panel on a given day, is a lot of money at a large N. Even small changes can have big effects in late-stage programs. We'll have to wait and see how much faster some of those programs enroll now that the design is leaner and much more patient-friendly; that's an ongoing thing we're waiting on results for, but we have to believe that with less burdensome trials for patients and sites, everyone wins. So yeah, it was a very simple calculation: they came in with the design they were going to go forward with, they left with a different one, and that's the one that went forward.

Amar Drawid (41:01)

Gotcha. All right. We've talked a lot about protocol design. What are some of the other drug development processes you're developing these solutions for?

Scott Chetham (41:12)

The way to think about us is that if you design digitally, you can reuse it. Of the things we spoke about today, the next one coming out, currently in pre-release with a select number of customers, is protocol authoring; that will be commercially available for everybody, probably Q4, earlier if people really want it. Then there's automation of EDC programming, taking what can be about a six-week build down to a small fraction of that time. For the electronic data capture system, all the web forms for collecting the information have to be built every time. And what's really challenging about EDC programming is that the protocol often doesn't get locked until the last minute, because there's feedback from health authorities like the FDA, and then it's a mad scramble to get the final version in, because what you built off a draft can be very different from the one that gets locked. It's a very expensive time not to be enrolling, so taking that way down is a huge cost saving. Site budgets again: you can't start a site until you create a site budget, so automate that process. It's almost everything on the value chain that starts from "I have a design, I'm done." For people who have metadata repositories, we can generate things like the TFL shells, the tables and listing shells, for you. So we're on that journey now to partner with companies; everyone will get it, but we'd like to partner with a company to solve the problem together along this journey of automating complex things. And, as I said, we're pretty flexible. People take the digital output of a protocol and are doing their own things with it now as well, and we encourage that. Sometimes I don't know what they're doing; we get told six months later and it's like, wow, that's a great idea, never thought about it. And we're fine with that; it's why we did it. To put it in a summary: clinical development is just brutally hard, and we're now being asked to do more with even less than before. What I think is really exciting about this time, with all the different flavors of AI, is that it's a way we can help our colleagues spend their time on things that really matter, and automate what is, frankly, just really laborious crunching work.

Amar Drawid (44:04)

Yes, absolutely. Scott Chetham, co-founder and CEO of Faro Health. Scott, thank you very much for your time today.

Scott Chetham (44:13)

Thanks for having me.

Daniel Levine

Amar, what did you think?

Amar Drawid (44:19)

I think it was a fascinating conversation about how much work Scott and his company have done in breaking down every aspect of clinical trial design and trying to automate it and make it faster. So it was a fascinating conversation.

Daniel Levine (44:35)

I think it's striking, because one of the reasons he's able to think so granularly about the process is that he spent most of his career deep in the weeds of clinical trials. I'm wondering to what extent you think that lived experience shapes what he's doing.

Amar Drawid (44:55)

As I mentioned to him, this is a problem looking for a solution, not a solution looking for a problem. And he has lived through that problem. He has suffered for years, as he said, through this problem of clinical trial design. It's complex, it's very detailed, it's time consuming, it's energy consuming, and he is trying to automate it. These are exactly the kinds of things we should be automating, so that, as he said, humans can focus on the more creative aspects, on making sure everything is right, on newer ideas that might make the designs much better, rather than on run-of-the-mill design elements. If there's a specific design element, there are a hundred trials that have already tackled it. Why do we have to think about it from scratch for the hundred-and-first time? Let's automate that based on what has worked and what has not worked. So he's trying to solve that problem, and the solution he's designed is very much built to solve that specific problem. I think it's a great idea, and I really like it. And as I've said before, it's not the technical piece that makes these solutions a success; it's the business solution, the solution with the domain knowledge. Is it doing the job? So I think they're doing a great thing here.

Daniel Levine (46:23)

There was an interesting point in the conversation where he actually asked you for your thoughts and you talked about the desire for insight generation and capturing the knowledge that people have, not just working through the data. I'm wondering where you think we're actually at in that regard.

Amar Drawid (46:46)

I think, Danny, we're just at the beginning of it. As I said to him, you can capture the factual knowledge in these RAG systems now, but what matters is how you think about it. How do you get the actual brain, right? When we think about it, there's the memory part, and then there's the application and the thinking part. That's the one we still need to teach these systems. And it's very specific to specific areas. In clinical, as Scott was talking about, clinical operations and clinical design, the knowledge is very specific but also goes very deep. On the commercial side, you have knowledge about sales and marketing that is, again, very specific but goes very deep. So you can't have one solution that works across the board. You really have to spend time going deep into how to think about each specific problem. Right now there's prompt engineering; you can give it prompts, questions and answers. But I don't think that's the final solution we'll still be working with even five years down the road. I think we need better systems to capture this way of thinking in these specific areas.

Daniel Levine (48:12)

It's very seductive to think of getting AI to do these very complicated tasks. In reality, I think what we've seen a lot of times is that the real value in it, at least today, is the amount of time and labor savings it can produce. When you think of something like writing clinical trial protocols, though, a lot of the costs can actually come after the fact because somebody screwed up and the protocol isn't working. We have to have a clinical trial amendment, which is, you know, protocol amendments are very costly. It can be as much as $500,000 plus on a phase three study. So I'm wondering if you, to what extent AI is not only going to accelerate the process, but is it going to flag these types of potential problems for humans? Does it change the roles for humans in the process?

Amar Drawid (49:10)

I think it could certainly flag potential problems with clinical trial design. Scott gave a great example about how much time patients have to spend in trials. And not only that, right? Sometimes I've seen a lot of different samples being collected. Is it even healthy for a patient to have that many biopsies, or that many samples taken? The people designing these trials do look at those things, but maybe not as consistently as you'd like. With AI, the advantage is that you can train the system to look at this and look at that, and then do it consistently and calculate these times. AI is very good at that. So there's definitely an element where AI can make things better. Now, where can AI make things worse? If we run it by itself. I do not think we should be doing that at all. These clinical trial protocols are extremely important; a mistake in protocol design can even cause patient deaths, right? So having AI produce a first draft is definitely something we should be doing, but after that, it should be humans reviewing every word of it and making sure it's perfect. We have to have that. Let's use AI where it can benefit us, and let's stop there. Let's not overdo it.

Daniel Levine (50:38)

It brings us to that other point where you were talking about the adoption issue and Scott talked about the fear that people may have about being replaced by AI and, resistance to adoption because of that. Scott talked about this cultural aspect being the most challenging. I'm wondering as someone who's been in the trenches, do you see that as the most challenging aspect of implementing AI today?

Amar Drawid (51:02)

I do see that to a decent degree. And as Scott said, some people are excited about it and some are not, so you're going to see a spectrum. But a lot of the time, what happens is you show people what it can do. And see, protocol writing is an extremely tedious process, so if this is going to free the writers from that, then yes, I think they will adopt it. Now, is it going to replace them? It won't, but on the other hand, where before you needed five medical writers, maybe you don't need five anymore; maybe you need two or three. And we always sugarcoat these AI things: okay, well, there'll be other jobs. But the jobs will change. What I'm seeing now is that the lower-level roles, the ones doing more of the grunt work, the day-to-day tasks, are where AI will play more of a role in replacing people, rather than the more senior roles, where it's the thinking and the knowledge of those people that's important. I think the senior roles are going to benefit; it's more on the junior side that we have to be careful. And one more thing we have to think about: the people who reached the senior roles got there by going through those junior roles, learning by doing the day-to-day things, right? If AI is going to replace more of that, are we going to have the next generation of those senior people, the ones who learned this knowledge by doing it? That's an open question we do have to think about. Short term there's a benefit, but long term, how do we continue to develop this profession and its knowledge? I don't think people are thinking about that yet; we're a bit far away from it. Right now it's, okay, can we automate the first thing? That's where we're at.

Daniel Levine (53:05)

It was a great conversation and a lot to think about. Amar, until next time.

Amar Drawid (53:10)

Thank you, Danny.

Daniel Levine (53:11)

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny at levinemediagroup.com.

For Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.

Our Host

Dr. Amar Drawid is an industry veteran who has worked in data science leadership with top biopharmaceutical companies. He explores the evolving use of AI and data science with innovators working to reshape all aspects of the biopharmaceutical industry, from the way new therapeutics are discovered to how they are marketed.

Our Speaker

Dr. Scott Chetham is the co-founder and CEO of Faro Health, a San Diego–based company revolutionizing clinical trial protocol design through AI. With a Ph.D. in Health/Medical Physics, he previously led clinical operations at Verily Life Sciences and oversaw clinical affairs at ImpediMed. His mission is to modernize trial design by unlocking structured data and embedding real-time analytics to streamline development decisions.