Aired:
February 26, 2026
Category:
Podcast

Scaling AI-Driven Drug Discovery Through Biomimicry and Plasma Proteomics

In This Episode

In this episode of the Life Sciences DNA Podcast, host Nagaraja Srivatsan speaks with Scotch McClure, CEO of Maxwell Biosciences, about how artificial intelligence is fundamentally transforming drug discovery through plasma proteomics, biomimicry, and AI-guided experimentation. The discussion explores how AI can move beyond literature analysis to become an active orchestration layer across scientific discovery — integrating structured experimental data, unstructured scientific knowledge, and real-world feedback loops to accelerate therapeutic innovation. McClure shares Maxwell Biosciences’ journey from mapping human plasma peptides to developing immune-inspired small-molecule therapeutics designed to combat resistant pathogens safely and efficiently. The conversation highlights a new paradigm where AI not only analyzes science but continuously learns from experimentation, failures, and human expertise.

Episode highlights

AI-First Drug Discovery: From Big Data to Targeted Molecules

Traditional discovery approaches struggle with biological complexity. Maxwell Biosciences narrowed the problem by focusing on plasma peptides — a small but biologically significant subset of the human proteome — enabling AI models to identify meaningful therapeutic targets faster.

From Literature to In-Silico Validation

AI enables researchers to synthesize knowledge from thousands of publications, refine targets, and validate hypotheses through computational experimentation before moving into laboratory and preclinical studies — dramatically accelerating discovery timelines.

Biomimicry: Designing Drugs Inspired by Human Biology

Rather than mimicking molecules from external organisms, the company focuses on peptides already active within the human body. This biomimetic strategy improves safety profiles and reduces adverse effects compared to traditional therapeutic approaches.

Failure Data as a Competitive Advantage

A major breakthrough discussed in the episode is the value of failed experiments. By systematically capturing and feeding failure data into AI models, teams create learning systems that guide future experimentation and avoid unproductive pathways.

AI as the Scientific Orchestrator

Modern discovery requires collaboration across statisticians, AI engineers, clinicians, and experimental scientists. McClure describes a “generalist hub” model — increasingly powered by AI agents — that coordinates specialized expertise and continuously aligns research with organizational goals.

Structured + Unstructured Data Convergence

The discovery process combines:

  • Scientific literature and publications (unstructured data)
  • Experimental and clinical outputs (structured data)
  • Research discussions and institutional knowledge

Together, these inputs create a continuously learning system that improves hypothesis generation and decision-making.

Reinforcement Learning Through Real-World Experiments

Experimental results — including animal studies — feed back into AI models, creating iterative learning loops. This closed-feedback architecture enables faster refinement of therapeutic candidates and improved prediction accuracy.
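As a toy illustration of such a loop, the sketch below scores candidates, "tests" the top one, and folds the observed result back into the score. The candidate names, the scoring update rule, and the simulated lab outcomes are all invented for illustration, not Maxwell's actual system:

```python
# Toy closed-loop sketch: predicted scores are revised toward
# observed experimental outcomes, one candidate per round.
# All names, numbers, and the update rule are illustrative.

def pick_next(scores, tested):
    """Choose the highest-scoring candidate not yet tested."""
    untested = {c: s for c, s in scores.items() if c not in tested}
    return max(untested, key=untested.get)

def update(scores, candidate, observed, lr=0.5):
    """Blend the observed outcome into the predicted score."""
    scores[candidate] += lr * (observed - scores[candidate])

# Hypothetical predicted efficacy scores in [0, 1].
scores = {"cmpd-A": 0.9, "cmpd-B": 0.6, "cmpd-C": 0.4}
# Hypothetical wet-lab outcomes (in reality, from experiments).
lab_results = {"cmpd-A": 0.2, "cmpd-B": 0.8, "cmpd-C": 0.5}

tested = []
for _ in range(2):  # two rounds of the experiment loop
    candidate = pick_next(scores, tested)
    update(scores, candidate, lab_results[candidate])
    tested.append(candidate)

print(tested)  # ['cmpd-A', 'cmpd-B']
print(round(scores["cmpd-A"], 2))  # 0.55: downgraded after a poor result
```

In practice the "lab results" side of this loop is the slow, human-coordinated part; the value comes from making sure every result, positive or negative, flows back into the model.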

Governance, Safety, and AI Guardrails

AI-driven discovery requires strong governance frameworks. Maxwell Biosciences embeds safety constraints, medical advisory oversight, and predefined ethical guardrails into its AI systems to ensure responsible scientific progress.

The Future: Health as a Service

McClure envisions a future where biomimetic therapeutics enable proactive health management rather than reactive disease treatment — potentially transforming healthcare into a continuous, preventive model powered by AI-guided biology.

Transcript

Daniel Levine (00:00):

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. Sri, we've got Scotch McClure on the show today. Who is Scotch?

Nagaraja Srivatsan (00:29):

Scotch McClure is the CEO of Maxwell Biosciences. He's an engineer with over 20 years of experience, and his background is quite diverse: it includes leading engineering and scientific research teams, as well as working closely with military intelligence. He brings together commercialization experience in high-tech, hardware, and software, and he's a pioneer today in the field of plasma proteomics. He brings in big data, biomimetic drug design, and knowledge management to bring new drugs to market.

Daniel Levine (01:01):

And what is Maxwell Biosciences?

Nagaraja Srivatsan (01:03):

Maxwell Biosciences is a global health technology company pioneering a new category of immune-inspired small molecules. These molecules are designed to mimic the body's natural defenses to combat pathogens without harming healthy cells. Maxwell's AI-first platform enables rapid innovation against the rising threat of resistant pathogens, offering scalable, shelf-stable, and microbiome-resilient solutions for a healthier planet.

Daniel Levine (01:32):

And what are you hoping to talk to Scotch about today?

Nagaraja Srivatsan (01:35):

I'd really like to talk about the role AI plays in drug discovery: how you can bring in AI to understand the wide variety of knowledge that exists in publications, and then bring that together into manageable targets, which can then be tested in silico. And then, of course, how to build an AI model that gets reinforcement learning and feedback from those experiments to get us to new drugs much faster.

Daniel Levine (02:07):

Before we begin, I want to remind our audience that they can stay up on the latest episodes of the Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. And don't forget to listen to us on the go by downloading an audio-only version of the show from your preferred podcast platform. With that, let's welcome Scotch to the show.

Nagaraja Srivatsan (02:35):

Hi, Scotch. Welcome to the show. Really appreciate you being on with us. Why don't you give us a little bit of a background on your history and how you got here?

Scotch McClure (02:45):

Sure. Yeah. First, Sri, I just want to start with gratitude. Thanks for having us on the show. We're big fans of the show. So my background is in artificial intelligence, going back over 10 years. My background includes US military work, where I was involved in the beginnings of the World Wide Web with the military. And then from there, I moved into the dot-com world in the commercial sector, sold a software company and a hardware company, and partnered with Google on a commercial real estate AI company that was sold to a billionaire. And that was 10 years ago. That was in 2015. And in January of 2016, I started this company, Maxwell Biosciences, with the idea that we would use AI to map the proteome of human plasma, which is the peptides of the plasma, only the small peptides, not the larger proteins. So it turns out that there are about 100,000 proteins being produced at any given time in the body.

(04:10):

So 100,000 different ones at any given time. And that's extremely multifactorial. And so you probably understand a little bit about factorial math where it's essentially like an infinite number of combinations based on when you're sleeping or if you're hungry or if you're satiated or like you just ate, or you ate something that you didn't like, or you ate something that you really did like or you're in love or like all of these different things, that's all proteins, right? So that's the endocrine system. And so we said, "That's a little bit too much big data for us. How do we pull that down and look at a much smaller section of the proteome?" And so we said, "First, let's look at only the peptides, not the larger proteins, and then let's only look at the peptides that are circulating in the plasma, not the peptides that are inside of the cell." So it turns out that 97% of the peptides that your body produces are actually produced only and designed only to be inside the cell.

(05:18):

So the only time that you would find them in the plasma would be, for instance, like during apoptosis or some sort of traumatic stress or something where your body is - the cells are blowing up and they're leaking into the plasma and your body very quickly gets rid of those with enzymes and things like that. So that allows us to only focus on 3% of those peptides, which was a much smaller section, still a massive number.

Nagaraja Srivatsan (05:44):

Massive amount.

Scotch McClure (05:44):

Yeah. But it allowed us to work in a resource constrained environment, which was only my funding at the time. I was the only one funding the company for the first year or so. We brought in a bunch of interns and said, "Okay, let's map these." And what we found was that almost all of them have no meaning whatsoever, as far as science has researched. So we are on the very leading edge of science now. So it was only one out of 10 of the ... So we did essentially sort of fancy statistical correlation based on longitudinal studies that the US federal government had done over the past 50 years. So they've been taking blood samples from people over the past 50 years. And so then we were able to do statistical correlation to longevity based on the peptides that they found in those people's plasma.

(06:45):

And so of those, the ones that were very highly statistically correlated, only one in 10 had any studies associated with them. The rest of them were literally just alphanumerics named after the gene that expresses them. Now that was 10 years ago; I don't know what the proportions are now. We decided to focus on one of them, which we found was the number three correlated. So there were two above it with higher correlation to long-term health span, but no one had ever studied those before 10 years ago. Number three was LL37, which is expressed by the CAMP gene: CAMP, or Cathelicidin Antimicrobial Peptide. And it had literally thousands of studies associated with it. And so we said, okay, now we've got a lot of meaning associated with this peptide. Let's find out what it does. And we just had a real gold mine there, because it is associated with apoptosis, stem cell recruitment, anti-inflammatory cytokine release, inflammatory cytokine release.
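The ranking step described here, correlating measured plasma peptide levels against long-term outcomes across subjects and sorting by correlation strength, can be sketched minimally. The peptide names other than LL37, and all of the numbers, are invented for illustration:

```python
# Toy sketch of correlating plasma peptide levels with longevity
# across longitudinal samples, then ranking peptides by |r|.
# All subjects, levels, and lifespans are made-up numbers.
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject data: peptide levels and lifespan (years).
lifespan = [68, 74, 81, 90, 62]
peptide_levels = {
    "LL37":   [2.1, 2.6, 3.0, 3.8, 1.9],  # tracks lifespan closely
    "PEP-X1": [5.0, 4.9, 5.1, 5.0, 5.2],  # roughly flat
}

# Rank peptides by absolute correlation with lifespan.
ranked = sorted(peptide_levels,
                key=lambda p: abs(pearson(peptide_levels[p], lifespan)),
                reverse=True)
print(ranked[0])  # LL37
```

The real exercise of course involves decades of samples and thousands of peptides, but the shape of the computation, correlate then rank then investigate the top hits, is the same.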

(07:55):

So it balances both. It is absolutely required for all tissue repair in the body. It's expressed in all of the lymphatic fluids. It's in your sweat, in your tears, in all of the mucosa inside of your body, the internal topical epithelial tissues where all of the mucus goes all the way down through the body. It's expressed in every single tissue in the body and highly regulated locally by dose because lower doses, lower excretion in that tissue end up creating lower inflammatory states and higher doses within that tissue create an inflammatory state. And so your body is using that as the lever to either inflame or anti-inflame.

Nagaraja Srivatsan (08:48):

It is very interesting because you're actually taking science, statistics, and of course big data and then bringing it up into a kind of a discerning market. Walk me through that journey, because you said you brought in a bunch of interns, but you also needed a bunch of scientists who would then take blood samples and process that, and you would need high-throughput machines to give you the structure. And then of course the whole data lake or warehouse where you're looking at this, and then the computational process to work on it. So walk me through that journey, because it's fascinating that you've taken a problem and broken it up into different parts.

Scotch McClure (09:28):

You always say, begin with the goal in mind because there is no faster way to spend many millions of dollars than to just pursue science and technology, which is most of what NIH funding is for, is they're just funding science, right? And that was not our goal. I began the company in that date on January 2016 with a vision statement. I hired consultants to come in to help me reduce paragraphs down to one sentence. And that sentence was to create health for the world safely and affordably, because obviously the whole world shares this one problem where, for instance, we do not have the tools necessary to extend human lifespan to where we would like it to be. So like in the hundreds of years, right? But we're getting there. And I think within our lifetime, we will. We will have this. In fact, I think it's inevitable at this point because of the combination of AI, the study of quantum biology, and then also the biomimicry, and I'll get into that.

(10:42):

So through our AI work, essentially we kind of went old school. And so we discovered this for ourselves, we discovered this LL37, and there were already thousands of studies out there that the NIH had funded because it's the main peptide of the human innate immune system. And it's actually a relatively small peptide, which lends itself to biomimicry, to creating a small molecule to mimic the functions and the architecture of that compound. And so in the process of studying it, part of the data that we collected was who are the top-rated published professors for that peptide? And we reached out to them. One of them was Dr. Annelise Barron; at the time she was at Stanford University. When she had discovered this peptide and created a biomimic for it, she was at Northwestern University. So she introduced us to another group of folks that had gotten a grant from DARPA, currently the Department of War.

(12:01):

At the time, it was the Department of Defense, and their grant was to create a synthetic immune system based on mimicry of this type of peptide inside of the body. And so I got in touch with them. I brought them onto our scientific advisory board and I told them, "Hey, we want to create a small molecule mimic of this peptide." And we ended up buying patents--licensing patents from one of the teams and actually buying six patents from Annelise Barron, who was by then at Stanford. They were actually owned by Northwestern University, the Department of Energy in the United States, as well as the NIH. All three of them claimed to be the main owner of the patents. And so none of them would agree with anybody else on terms. So we actually had to bring in a lawyer and litigate against them, which they told us we had to do.

(12:59):

They said, "The only way you're going to be able to do this is to litigate." And so then we had to litigate with them. And so then they settled and said, "We'll assign the rights back to the original inventors." And so then we purchased the patents from the original inventors, because the government just can't get out of its own way. They literally weren't able to. So they said we had to litigate, it went back to the original inventors, and we bought it from the original inventors. That was about eight years ago now. And then we started raising money to create a drug, and now we're going into human trials this year. So it's been a long journey. I didn't have any gray in my beard at the time. So it's been a very significant portion of my life that's been invested in this, but the result is going to be absolutely incredible because the biomimetics that we have now have been shown ... We've got like seven US military agreements.

(13:53):

We're about to land a $300 million grant from the Indian government. We've had successful non-human primate studies with India's ICMR. We're going to be publishing a series of articles in collaboration with ICMR. So the ICMR scientists are going to be on that publication showing that the compound that we tested was absolutely safe in the non-human primates. These are rhesus macaque monkeys, about 50 pounds, and there were 12 monkeys in each arm of the study. On one side, one arm of the study, you had MRSA bacteria, which is multi-drug resistant MRSA. It's pretty much impossible to use an antibiotic to fight that strain. And then on the other side, we had multi-drug resistant Candida albicans fungi. And the ICMR did this inside of a biocontainment facility. So imagine 24 monkeys, big monkeys, 50-pound monkeys inside of a biocontainment facility. And so they swabbed up inside of the sinus cavity.

(15:03):

This is a nasal infection, a sinus infection, inside the monkeys. They swabbed in there, they tested the microbiome, and then they infected with these horrible pathogens that would be deadly to a human. And we suppressed the immune system of these monkeys to make sure their own immune system could not fight it off and then hoped that they wouldn't die too quickly. So then we infected the monkeys. We gave them time to develop an infection, waited to show that they were very sick and then started treating them, and they were cured in three days, absolutely cured in three days. And so we know that they were cured because we did something called shotgun analysis, which essentially looks at all of the nucleic acids that are discovered inside of the test sample. And so we're able to identify what pathogens as well as commensal microbiome was in there by the DNA, by the nucleic acids.

(16:15):

And we showed that the DNA of the pathogens that we put into the sinus cavity went from highly infective amounts of colony forming units to nothing. It's gone. We completely removed it. And so this is a first for mankind. It's a really big deal and it's extremely important for India, because about a third of India's GDP comes from the manufacture and commercialization of generic antibiotics, and antibiotics are failing globally because of antibiotic resistance, which means a third of India's GDP potentially could fail within the next few years. So this is an existential issue for Modi and for the government. And so this is why ... So they said, "Well, we can't give you, a US company, a grant. It needs to go to an Indian company." And so we collaborated with an Indian entrepreneur, and so this grant is going to go to an Indian company that is licensing our technology, potentially allowing the Indian companies that are producing antibiotics now to license our compounds. And it just happens, like a miracle, that it works on their same capital equipment to produce the compounds.

(17:42):

So it's very similar to a peptide and most antibiotics are peptides.

Nagaraja Srivatsan (17:47):

Yeah. So walk me through the actual process of discovery. What you described is the preclinical primate experiment, which is fantastic. But to get there, you started, as you said, longitudinally, right? You looked at all the different patients over 50 years, and you knew which peptide had the most publications, which is the 3% rule. And then you started to apply AI to this dataset. Walk me through that process of data sieving. How did you bring in that data, which was massive? What kind of algorithms did you keep honing to get to that right peptide structure?

Scotch McClure (18:30):

First, what we needed to do was understand what makes an antimicrobial peptide toxic, right, because we did not want to mimic the toxic aspects of antimicrobial peptides. And all animals have antimicrobial peptides. In fact, all organisms have some kind of innate immune system, and it's generally antimicrobial peptides: plants, lizards, snakes, mammals, et cetera. And so we collected 160,000 antimicrobial peptides into probably the world's largest antimicrobial peptide database. Obviously, that data was not normalized in any way, right? It came from so many different sources. And so then we had to normalize. When we normalized the data, we had to cut out a lot, and it went down to about 11,000 peptides with very normalized datasets. So then we did an analysis on that, and our machine learning algorithm was able to predict with 90% accuracy from this training set.

(19:45):

So we didn't know exactly what structures were actually causing the toxicity, but the machine learning model was able to predict it. And so then we started doing trial and error in silico on different biomimetics to see which ones would be toxic. And it turns out that the original ones that Dr. Annelise Barron, who is currently at Stanford, came up with intuitively are the ones that were the best candidates. So those biomimetics are the ones that we're moving forward with, or ones very similar to them. It's not exactly the ones that she's published in academia; we've made some slight tweaks to them, but those are the ones that we're heading into human trials with.
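As a very rough sketch of this kind of workflow, the toy model below predicts a "toxic"/"safe" label from two simple sequence-derived features (mean Kyte-Doolittle hydropathy and net charge) using a nearest-centroid rule. The training labels and the model choice are illustrative assumptions; they are not Maxwell's actual features, data, or algorithm:

```python
# Toy toxicity classifier over peptide sequences: featurize each
# sequence, average the features per label, and classify new
# sequences by the nearest centroid. Labels are invented.

# Kyte-Doolittle hydropathy scale (a real published scale).
HYDROPATHY = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}
CHARGE = {"R": 1, "K": 1, "H": 0.5, "D": -1, "E": -1}

def features(seq):
    """Mean hydropathy and net charge for a peptide sequence."""
    hyd = sum(HYDROPATHY[a] for a in seq) / len(seq)
    chg = sum(CHARGE.get(a, 0) for a in seq)
    return hyd, chg

def train_centroids(dataset):
    """Average feature vector per label (a nearest-centroid 'model')."""
    sums = {}
    for seq, label in dataset:
        h, c = features(seq)
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += h; s[1] += c; s[2] += 1
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(model, seq):
    h, c = features(seq)
    return min(model, key=lambda l: (h - model[l][0]) ** 2
                                    + (c - model[l][1]) ** 2)

# Toy training set: hydrophobic peptides labelled "toxic",
# charged/hydrophilic ones "safe" (purely illustrative labels).
train = [
    ("LLILLLKK", "toxic"), ("IVVLLFAV", "toxic"),
    ("KRKRDEKK", "safe"), ("DEKRHSNE", "safe"),
]
model = train_centroids(train)
print(predict(model, "LLVILFIK"))  # toxic
print(predict(model, "KKDERRHN"))  # safe
```

A production model would use far richer structural features and a real learning algorithm, but the shape is the same: normalized sequences in, a toxicity prediction out, then in silico trial-and-error against that predictor.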

Nagaraja Srivatsan (20:36):

And I think that's where a lot of AI discovery is going, right? People are reading literature, like you said, thousands of publications, cleaning out what would be the actual molecule, then using the molecule to do in silico trials to verify. And this is the classic new way of doing discovery. The old way was I wouldn't have that. I would take literature, do in silico, figure out what it is, and that's too much. Now you're taking literature, honing it down into a few targets or molecules, then doing the in silico to validate it, going back and refining that, and now you're at a much faster pace versus what you would do before, right? Just walk me through, is that fair that this is kind of the new age of how you do AI discovery and...

Scotch McClure (21:24):

So I would say our main advantage ... I think everyone has access to AI, and there are so many peptides to mimic that we're not going to be able to do them all. But the key aspect is to mimic peptides that are currently active in the human body. So take Wegovy and the GLP-1 agonists that are so popular today from Eli Lilly and Novo Nordisk, making them billions of dollars: they have very harmful side effects. So there are people going blind, and there's all kinds of liver damage, all of this. It's because it's coming from a lizard, right? They're mimicking a peptide from a Gila monster. It's a peptide that is found in Gila monster venom, a paralyzing venom. And so what they could have done instead is mimic peptides inside of the body that express a signal that says that you're full, that you don't need to eat anymore.

(22:29):

And so then they wouldn't have these gastrointestinal paralysis side effects. So the key there is starting with which one of the peptides do you want to mimic? And if you're going to go into humans, it makes sense to mimic a peptide that is inside of a human already because you have a much better probability of being safe. And so that's, I think, one of our main strengths and why we have no side effects seen in any of our studies at all, including non-human primates, which is an excellent model for humans.

Nagaraja Srivatsan (23:02):

And Scotch, would you say that, given that you have that database of 11,000 peptides, as you went from 160,000 down to normalized datasets, you could now pivot to other disease states? Because now, as you said, you have that human data, and you could go and make a biomimic for GLP-1 or other areas. And so the vector of your growth is not just discovering this antibiotic alternative; it could go to other disease and therapeutic areas.

Scotch McClure (23:32):

Absolutely. Yeah. We're already exploring that right now. We're looking at mimics of brain-derived neurotrophic factor, BDNF. We're looking at multiple other peptides right now, and we're actually planning for the collapse of health insurance, the health insurance model as it stands today. We believe that the future will be something like health as a service, like a SaaS model, software as a service, where people are essentially paying something like a subscription to a company to keep them healthy. We're assuming that the biomimetic is going to be successful because, looking at the data, it looks highly probable that this is going to be very successful. It's going to replace antibiotics, antifungals, antiviral therapeutics. It may even make vaccines unnecessary, because you don't need a potentially harmful vaccine if you have a very safe antiviral therapeutic. So you would only need a vaccine for something where you have a very high risk of catching a virus that would kill you so quickly that you wouldn't have time to get the therapeutic, right?

(24:48):

And that's very, very rare, super rare. Like that's the type of thing that the military has contracts with us for right now. So they're super rare. It's never going to be a commercial target for any company, because it kills people so quickly it's never going to be a big population. But for instance, like the common cold, like coronavirus, rhinovirus, influenza, those things, we have compounds that are effective against all influenza, all coronavirus. So we believe that those will be essentially trivialized, that we're moving into an age that is past the age of disease, where now you're not ... This model of allopathic medicine, where you pay to fight a disease, is going to go away, because diseases will essentially be a thing of the past and we'll be in an era where you'll be essentially paying for what kind of function you want. So if you can dial up and down the endocrine system, if you can dial up and down what you can essentially control, like, okay, do I want to divert energy to intellectual performance or do I want to shunt energy over into physical performance today?

(26:12):

Or is today just going to be a relaxing day? Is today just a, I just want to go down into, I want to turn off my fight or flight side and I want to balance my nervous system now over to just total rest. And we kind of do that today with alcohol or things like that where we try to relax ourselves. In the future and

Nagaraja Srivatsan (26:36):

Or meditation, meditation and

Scotch McClure (26:40):

Or Prana and all of that, going into Ayurvedic medicine, this is a practice that goes back many thousands of years, right? But we'll be able to do that pharmaceutically as well. So for instance, people that have PTSD and they're not able to rebalance their immune system, we'll be able to do that pharmaceutically as well. And I'm a big believer in Ayurvedic medicine. I practice mindfulness and all of that because it's actually better to not do it pharmaceutically if you're able to do it. It's really if you have some sort of disorder that you would need to do this pharmaceutically.

Nagaraja Srivatsan (27:16):

No, absolutely. Scotch, let me pivot a little bit, because what you're describing is the new age organization of how you do AI discovery in a different way, but you're bringing in different teams: a scientific team for the in silico work, a curation team that handles literature search, the AI team, bringing all of these together. Walk me through that. How does somebody put a company together where you're bringing in diverse skillsets? And the second part: when you bring diverse skillsets together, there's harmony if you can have a symphony, but there's also discord, because all of these people are looking at it from their own perspective. So how do you bring that team together? It'd be really fascinating to know how you put that together.

Scotch McClure (27:58):

I've seen it done not just at our company, but also at other companies. And the winning formula that I've seen is that you have a sort of super generalist hub, which is generally the CEO or the chief technology officer, and that person is essentially having to learn a lot about what everyone is doing and they're coordinating everyone. In the future, and maybe even in the very short-term future, I believe that will be an AI agent that'll be doing that because I think things are moving so fast now that no human can keep up. And so you'll have a sort of generalist hub agent and we can say that's human or AI, it doesn't matter. And then you have specialized agents that are the spokes to that hub. And so for drug discovery AI in particular, you need essentially like a statistician, somebody that is using R and Python and statistical tools to feed statistical structured data into an AI system and providing a knowledge graph on top that is being fed out of a machine learning system.

(29:25):

So multiple different types of AI. So you've got machine learning that's feeding structured data into an LLM, which is the general hub. And then you've got human input from various real-world experiments, like animal experiments, that is structured data being fed in as well. So obviously you have to have humans involved in order to facilitate and coordinate experimentation that has been done in the real world. And a lot of these things are--there's no automated way to do it. You're not using a robot or anything else because you're creating biomimetics for peptides that have never been mimicked before. And so there's literally no scientist you can go to and say, "You're the expert on this. How do we do it?" You just have to get a lot of scientists together and have a debate. You have to have a scientific advisory board and you say, "This is why I think we should do it."

(30:23):

And then you give it your best shot. And we've failed many times, of course, over the past 10 years. And you just have to iterate through those failures and learn from them. And what we found is that the failure data is absolutely gold. You want that failure data in your model. You want the meta-tagging on that data to say, "This didn't work. We don't know exactly why." But once you get enough of the failure data, the machine actually starts producing wisdom, essentially, where it says, "Oh, don't do that, because it's probably this." And so then we get really good suggestions coming out of our AI system. So we have an AI system that actually runs most of the company at this point. We're feeding all of our email, calendar, everything into the system. It's the same system that we're using for drug discovery, because we're finding that the conversations between scientists actually have a lot of value.
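The "failure data is gold" practice amounts to storing failed experiments as tagged, structured records and screening new proposals against them. A minimal sketch, with hypothetical field names, compound names, and tags:

```python
# Minimal sketch of capturing failed experiments as tagged records
# so later proposals can be screened against prior failures.
# All compound names, tags, and notes are hypothetical.

failures = []  # the failure-data store

def log_failure(compound, tags, note):
    """Record a failed experiment with meta-tags explaining context."""
    failures.append({"compound": compound,
                     "tags": set(tags),
                     "note": note})

def screen(proposal_tags):
    """Return prior failures sharing any tag with a proposed experiment."""
    tags = set(proposal_tags)
    return [f for f in failures if f["tags"] & tags]

log_failure("mimic-07", ["high-dose", "oral"], "degraded in gut")
log_failure("mimic-12", ["hemolysis"], "lysed red blood cells in vitro")

# A new oral-dosing proposal surfaces the relevant prior failure.
hits = screen(["oral", "low-dose"])
print([f["compound"] for f in hits])  # ['mimic-07']
```

The point is less the lookup than the discipline: every dead end becomes a queryable record (and, fed into a model, training signal) instead of tribal knowledge that leaves with the scientist.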

(31:20):

Their intuitive understanding of the drug is expressed in these conversations, and it's being fed into our model as well.

Nagaraja Srivatsan (31:29):

Well, wonderful. I mean, you're just describing the holy grail of, hey, I have unstructured data, which is in literature and knowledge. You're bringing in a knowledge graph to figure out exactly what is going on. Then you have structured data, which is experiments. You're now parlaying structured and unstructured data together. Then you're putting all of this through in silico to verify what's working and what's not working, because that's just a hypothesis. You're testing it and validating it. And then you're feeding that back into the model to say, "Don't go down this path, go down this path." And the feedback loop is then going to make it much better, to almost gamify which areas they should be going after. And this is a generic process. Tomorrow you can say, "I just need a coronavirus peptide." You look at the literature, find out what exactly the peptide is, build your AI model and experiment, do the in silico, then come back, feed it back, and say, "Don't go down this pathway, go down this one."

(32:30):

Am I describing kind of what you're going through in your company, or am I over describing-

Scotch McClure (32:35):

No, no, that's absolutely right. From today's current standard of technology, that is what an engineer would look at and say, "These are the quantitative, measurable states for how we progressed to this point." That is not how we'll be progressing into the future, though. With the rapid change in how AI is progressing, you actually can't do it the way that we did it, because you would be too far behind everyone else. You essentially have to be educating an AI. You have to be training an AI with all of your data, including conversations, emails, everything. And then you have to slowly give up control of the scientific process to the AI, still coordinating and still being the conductor. And that means creating what we call a canon. So we have created a canon within the organization that essentially aligns our AI to what we want to do within the organization.

(33:47):

So it starts with the vision statement, right? It starts with whatever it is that you want to do. For us, it's to create health for the world safely and affordably. For someone else, it might be curing cancer, or stopping cancer globally. That's a vision statement. It's not really measurably possible to complete, because people are getting cancer constantly, they're getting infected constantly. And so that's an endless loop. Then you have to have a mission statement to say, "Okay, what do I want to do by a certain time?" And that gives the machine a sense of timescale, of how to budget time and resources, right? And then a list of do-not-dos, right? Things that we're not willing to accept. So these are the guardrails, like: we are not willing to cut corners on safety.

(34:43):

And so then you have to get very specific in that prompt about what that means. What does cutting corners on safety mean? And then: we are willing to test in animals, and we go through the list of what we are willing to do in animals in order to preserve humans and not take too much of a risk of harming humans. Which gets really tricky, because machines are very literal. And so you have to instruct the machine that there's no way to get a hundred percent chance that you're not going to harm somebody, or a 0% chance that you will harm someone. And so you have to get into these weird conversations with the AI, like: what is the acceptable chance that you could harm someone? And what we have decided is that you kind of want to skirt that argument. You want to go around it, and you say: only move forward with molecules that are effective and safe in non-human primates, and then iterate on that molecule going forward to see if you can use it for many other disease states, so that you have an acceptable level of safety after phase one safety trials in humans.
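The "canon" McClure describes, a vision statement, time-bound goals, and explicit do-not-dos acting as guardrails, could be encoded as structured configuration that a pipeline checks proposals against. A minimal sketch, in which all field names, dates, and the tag-matching logic are illustrative assumptions rather than the company's actual implementation:

```python
# Hypothetical sketch of a "canon": vision, time-bound goals, and
# do-not-do guardrails for an AI-assisted discovery pipeline.
# All content and names here are illustrative, not a real configuration.
canon = {
    "vision": "Create health for the world safely and affordably",
    "goals": [
        {"objective": "molecule effective and safe in non-human primates",
         "deadline": "2026-12-31"},
    ],
    "do_not_do": [
        "cut corners on safety",
        "advance molecules without non-human-primate safety data",
    ],
}


def proposal_allowed(proposal_tags: set, canon: dict) -> bool:
    """Reject any proposed action that matches a do-not-do guardrail."""
    return not any(rule in proposal_tags for rule in canon["do_not_do"])


print(proposal_allowed({"cut corners on safety"}, canon))   # -> False
print(proposal_allowed({"run in-silico screen"}, canon))    # -> True
```

The design choice mirrors what he says about literal machines: rather than asking the AI to reason about acceptable harm probabilities, hard guardrails simply block whole classes of actions outright.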

(36:06):

And so that's what we're focusing on right now: use the same molecule as a nasal spray, a subcutaneous injection, an intravenous, and all of that. And then sort of start from scratch on the other molecules through animal trials and all of that.

Nagaraja Srivatsan (36:23):

Yeah. So last question. You said slowly the human is giving a little bit of the guardrails over to the AI. Of course, that raises the question of AI governance, the ethics behind it. You did talk about the guardrails in terms of safety and others, but what is your governance setup? Is it you and the CTO? Is it external boards? Who checks the checker? So walk me through that.

Scotch McClure (36:51):

So we have a medical advisory board of top medical experts in the field of head infections, which is what we're focusing on right now: sinus, ears, eyes. And so they review what we're doing. We have a team of R&D professionals with 20 to 30 years of experience each. So they have a lot of intuitive knowledge of what works in a human trial and what doesn't, which is why we're trying to collect that: our emails and even text messages are being fed into this AI system to capture the intuitive wisdom that they have gained over time, because they're constantly referring to lessons that they've learned over their decades of work in human trials. So we're collecting that and adding it all in. And we actually had to come up with a new type of database, which we call a structured heuristic understanding RAG database.

(37:59):

So SHU, a SHU RAG database. We just submitted a patent application for that, because we were getting so much unstructured data into the system from all of the chats and emails and everything that it was becoming very expensive to work with. And so we had to structure it based on a heuristic, which you'll soon be able to read in the patent, but we're open sourcing it. And so the basic fabric of that architecture is now at openshu.ai: the word open, then SHU, which stands for structured heuristic understanding, then .ai. And that whole GitHub repository just got published today, coincidentally with your podcast. And we're about to officially launch it with Zayed University and Talouf in Abu Dhabi. I'm in the UAE right now. I just came from a meeting with them. That's why I'm in the suit and tie.

Nagaraja Srivatsan (39:05):

No, perfect. This is fantastic. I know we could go on for hours. You're at the bleeding edge, more than the cutting edge, of where science and AI come together. Really appreciate the conversation. I think, in summary, you're in a fascinating space where intelligence, or knowledge, is coming from published articles, but then you're validating that very quickly through scientific means to be very targeted. And then you're constantly learning so that you can pivot and make it applicable to other areas, but you're doing that in a very safe and structured manner. So it's fascinating. I think these 10 years have taught you a lot, and I think in the next 10 years you're going to bring a lot of new drugs to market. So congratulations, and thank you for being on the podcast.

Scotch McClure (39:59):

Really appreciate it. Thank you, Sri, and thank your team for me as well. I really enjoyed it. It's quite an honor and a privilege to be on your show. Thank you for your time.

Daniel Levine (40:12):

Well, Sri, that was quite a conversation. What did you think?

Nagaraja Srivatsan (40:15):

I think it was a really exciting conversation with a visionary CEO who's really bringing together where AI discovery is going in the future. It's the classic intersection between what you're doing with unstructured information, or knowledge, and then bringing that unstructured knowledge into science to make sure that you're discerning what works, and then giving feedback back to the AI to make sure that you're being more targeted and more demonstrative about what the outcomes are. So it was a really good conversation.

Daniel Levine (40:48):

You did take it from the abstract to the specific case with their lead peptide candidate and how it went from all these massive amounts of data to a single peptide. Pretty typical of what we're seeing in AI today. Is it unusual, the approach they're taking?

Nagaraja Srivatsan (41:07):

No, I think he talked about this: statistically, there are 100,000 or so peptides, and 97% are within the cell. He was looking at the 3% that are outside, in the plasma, which narrows it to a finite part. But initially they found that the 3% could be anything, right? It could relate to any symptom. And so they had to go to the literature to find out what matters the most. And they found this particular peptide, which has been most widely published as something that is in the plasma. And then they started to do more and more research around what it is being used for. And they found a good partner from Stanford who had the patents on some of the mechanisms of action for these peptides. And then they were off to the races to prove what value this particular thing would have, to either boost it up or down, in primates.

(42:01):

And they're seeing some very good results. Of course, you've got to get it into humans and at scale, but it's really fascinating. How discovery is going to be done in the future is very similar to this. You have knowledge, you have science, and you have AI. All of these things have to come together.

Daniel Levine (42:18):

One of the things he said that really stood out to me was that they found the failure data to be absolutely gold, and he talked about the need for having enough failure data. What did you make of that?

Nagaraja Srivatsan (42:31):

I think that's the best part of AI. We always talk about the fallacy of the positive, right? When things work, it's great, but we don't learn. When things fail, we learn. I like what Thomas Edison said: a thousand failures led to the invention of the light bulb. And it is that, because without that experimentation, you're not giving AI good models of pathways to go down, but more importantly, pathways not to go down. And that's part of science and experimentation. So it's really fascinating that they're institutionalizing that; creating a database of those errors, a failure repository, is phenomenal.

Daniel Levine (43:12):

It's interesting too, because you had asked about the challenge of bringing together people with different skillsets. And he talked about the use of a generalist hub for this, but he sees this transitioning to AI playing that role with generalist agents and specialist agents with human input facilitating and coordinating experimentation. He talked about the need to slowly give up control to the AI. What did you make of that?

Nagaraja Srivatsan (43:41):

I think the first part of what he said is that you need an orchestrator. You have subject matter experts who are scientists, who are knowledge people, who are AI people, and each of them talks their own lingua franca. So how do you bring that together? And he said that the CEO and the CTO are that glue, bringing in the knowledge of what the mission is, and then rallying everybody towards that mission while making sure that you have the right governance. But as he said, the combinatorics around this are going to be infinite possibilities. And so he's looking at an AI assist to make that general governance process work, where AI can discern a lot of the data and information, from a knowledge-source standpoint, from a scientific standpoint, from an output standpoint, including the chatter between these different teams, to come out with a set of guidelines saying, "Okay, this is where your vision is, and these are the possibilities which can help you."

(44:37):

And now humans step in and say, "Okay, I want to go down this path versus that path." So I think it's really a good model of how things will evolve: you're starting with generalist human governance, but you're then going to take components of that and have AI assist. I don't think it's going to be fully driven and empowered by AI alone; you're always going to have AI assisting the human to do better.

Daniel Levine (45:02):

Well, it was another great conversation, Sri. Thanks as always. Thank you. Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bimonthly podcast produced by the Levine Media Group with production support from FullView Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. For Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical, and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast—exploring real-world AI applications in pharma—and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the “Top 100 Most Inspiring People” in life sciences.

Our Speaker

Scotch McClure is the CEO of Maxwell Biosciences, a global health technology company pioneering immune-inspired small molecules designed to mimic the body’s natural defenses. With over two decades of experience spanning engineering, artificial intelligence, commercialization, and scientific research, McClure focuses on applying AI and big data to accelerate therapeutic innovation and combat antimicrobial resistance.