Aired:
February 27, 2025
Category:
Podcast

Integrating AI into Precision Medicine

In This Episode

This episode of the Life Sciences DNA Podcast, powered by Agilisium, zooms in on how AI is making precision medicine actually work at scale. With so many data sources—genomics, labs, EHRs—AI is the engine making it all talk, learn, and guide decisions in real time.

Episode highlights
  • Breaks down the sheer data complexity behind truly personalized treatment—and why it takes more than just doctors and databases.
  • Explains how AI stitches together fragmented data points to give a full picture of each patient’s health profile.
  • Shows how AI tools are helping clinicians predict which treatments will work best—before the first dose is given.
  • Talks about how automation helps scale this approach so more patients benefit—not just those in academic trials.
  • Highlights how these systems are being used in real clinics today—bringing research breakthroughs to the bedside.

Transcript

Daniel Levine (00:00)

The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com.

We've got Toby Guennel on the show today. For our audience members who may not know Toby, who is he?

Nagaraja Srivatsan (00:32)

Danny, it's great to have Toby here. Toby is the Senior Vice President of Product Innovation and co-founder of QuartzBio. He has a strong background in biostatistics and data science, and his work focuses on improving the efficiency and effectiveness of clinical trials through innovative data management and analysis techniques. He's known for his expertise in leveraging AI and advanced analytics to enhance decision-making in the drug development marketplace.

Daniel Levine (00:59)

And what are you hoping to discuss with Toby today?

Nagaraja Srivatsan (01:02)

I really like Toby's take on this transition into the use of AI. He's a real thought leader who has used and adopted AI within the product as well as the operations team. And we're going to have a good discussion around where AI is and where the technology is heading from here.

Daniel Levine (01:21)

Before we begin, I want to remind our audience they can stay up on the latest episodes of the Life Sciences DNA by hitting the subscribe button. If you enjoy this content, be sure to hit the like button and let us know your thoughts in the comments section. With that, let's welcome Toby to the show.

Nagaraja Srivatsan (01:42)

Hey, Toby, it's great to have you here on the AI podcast. Really looking forward to discussing AI in drug development. Toby, why don't you start by describing what is going on in the AI landscape, specifically in drug development.

Tobias Guennel (01:59)

Hey Sri, thanks for having me. I'm looking forward to the discussion today. Yeah, if you look at AI, it's been a couple of really exciting years. There's been tons of development, from simple chatbots to now really being able to drive innovation through generative AI capabilities. And all of those capabilities are now starting to flow into precision medicine and clinical trials in general.

So it's a really exciting time to be part of the precision medicine ecosystem and figuring out how to best integrate AI into that landscape. At QuartzBio, we focus on how we can pioneer the integration of AI into precision medicine. Over the last decade, we have developed an intelligence platform that serves as the ideal foundation for building advanced AI solutions.

It connects data across the entire precision medicine value chain, from sample collection to biomarker analysis to clinical and cell and gene therapy data. This comprehensive integration ultimately gives you a 360-degree view across all sample and biomarker data. The goal is to give stakeholders real-time visibility into clinical trial operations, insights into what is happening, and what could potentially be used to enhance their research efforts. The way we thought about AI enablement for our platform is to build on the solid foundation the platform provides and supercharge our stakeholders to leverage the latest in generative AI, reducing the time spent on mundane, repetitive day-to-day tasks and ultimately proactively serving up information and insights through our platform.

Nagaraja Srivatsan (03:50)

Toby, so many exciting activities in this journey. You talked about everything from data collection to standardization to AI chatbots to driving productivity improvements. If you were to pick one or two use cases on the application of AI, let's use the chatbot feature. Can you describe in a little more detail what kind of processes you are trying to automate? How is the sausage made? How do you make this work within the constructs of your particular product?

Tobias Guennel (04:25)

Yeah, absolutely. If we take clinical trial operations as a starting point, I'll focus on two use cases. One is focused more on operational aspects; the other on insight generation from biological data. Maybe starting with operational insights: what happens in clinical trials? There are a lot of moving pieces and a lot of stakeholders involved, starting from the collection of biospecimens at a clinical site. These biospecimens move from the site to a central lab or a testing lab. The central labs process the biospecimens and potentially send them to storage locations or to testing labs, where data is generated. There are fairly complex collection schemes implemented in precision medicine clinical trials for biomarker-driven drug development, and a lot of data ultimately moves along what we think of as a sample data value chain. Now, in the past there have been a lot of manual steps. Different stakeholders and data owners have to go through the data generated on these biospecimens, integrate it into their systems, and then transfer that data to different stakeholders along the value chain. If you envision how AI could help here, it is to move the data into the right systems, as autonomously as possible; to raise data issues where discrepancies are discovered across different systems; and to provide insights into whether operations, especially biospecimen operations, are progressing as intended through the clinical protocol, or whether there are issues, such as certain sites with under-collections or consent issues. So if you think about all the different components that go into biomarker-driven drug development, there are a lot of opportunities where AI can really help.
And you can imagine AI agents that leverage these capabilities to autonomously provide information to different stakeholders and raise issues for review by key stakeholders.
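The cross-system discrepancy detection described here can be illustrated with a toy reconciliation between a site's shipping manifest and a central lab's inventory. The sample IDs, status values, and rules below are invented for illustration; they are not QuartzBio's data model.

```python
# Toy reconciliation: flag biospecimens whose status disagrees across systems.
# All IDs and statuses are made up for this sketch.
site_manifest = {"S-001": "shipped", "S-002": "shipped", "S-003": "collected"}
lab_inventory = {"S-001": "received", "S-002": "missing", "S-004": "received"}

def find_discrepancies(manifest: dict, inventory: dict) -> list[str]:
    issues = []
    # Anything the site shipped should show up as received at the lab.
    for sample, status in manifest.items():
        if status == "shipped" and inventory.get(sample) != "received":
            issues.append(f"{sample}: shipped by site but not received by lab")
    # Anything at the lab should exist in the site's manifest.
    for sample in inventory.keys() - manifest.keys():
        issues.append(f"{sample}: at lab but absent from site manifest")
    return issues

for issue in find_discrepancies(site_manifest, lab_inventory):
    print(issue)
```

An agent doing this autonomously would run such checks continuously as transfers land, then surface the flagged items for human review rather than acting on them directly.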

Nagaraja Srivatsan (06:42)

That's fantastic. I think you hit upon a key area: AI agents. So Toby, are you building one big AI agent for these processes, or are you building multiple agents?

Tobias Guennel (06:57)

Yeah, it's a great question. We really thought about it in different ways, right? Can we build one big agent that can do everything, or is there a need for multiple agents that are really good at a very specific task? Where we landed is we built what we think of as a precision AI agent network. There's one orchestration agent, which we call our virtual assistant, that works with a variety of different agents (data management agents, navigator agents, intelligence agents) and orchestrates the tasks that need to be performed as part of clinical trial operations or the insight generation value chain. And we thought about what we need to be able to support the various tasks. The reason for building an expandable agent network is that when there are new needs, or when a particular task can be more optimally addressed by new developments, such as improved large language models, you can swap out different components without having to re-architect the entire network. So that's how we approached it, and where we landed. And so far it looks really promising.
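The design Toby describes, one orchestrator delegating to narrowly scoped, swappable agents, could be sketched roughly as below. The class names, task categories, and routing logic are hypothetical illustrations of the pattern, not QuartzBio's actual implementation.

```python
# Minimal sketch of an orchestrator routing tasks to specialized agents.
# In a real system each agent would wrap an LLM call; here they are stubs.

class Agent:
    """A narrowly scoped agent that handles one category of task."""
    def handle(self, task: str) -> str:
        raise NotImplementedError

class DataManagementAgent(Agent):
    def handle(self, task: str) -> str:
        return f"[data-mgmt] reconciled: {task}"

class IntelligenceAgent(Agent):
    def handle(self, task: str) -> str:
        return f"[intelligence] insight for: {task}"

class Orchestrator:
    """Routes each task to the agent registered for its category.

    Because agents are registered by name, one can be swapped for a
    better implementation without re-architecting the network."""
    def __init__(self):
        self.registry: dict[str, Agent] = {}

    def register(self, category: str, agent: Agent) -> None:
        self.registry[category] = agent

    def dispatch(self, category: str, task: str) -> str:
        return self.registry[category].handle(task)

orchestrator = Orchestrator()
orchestrator.register("data", DataManagementAgent())
orchestrator.register("insight", IntelligenceAgent())
print(orchestrator.dispatch("data", "manifest vs. inventory discrepancy"))
```

The expandability he mentions falls out of the registry: a new use case means registering a new agent under a new category, and an improved model means re-registering one entry.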

Nagaraja Srivatsan (08:17)

Yeah. So, Toby, you hit upon a very interesting part, which is developing agents that are fit for purpose, doing things which are very specific. You talked about a data management agent and other parts of your agent architecture. As you start to put these different components together and orchestrate them, what kind of challenges are you facing? How easy was it to design? How easy was it to orchestrate? Can you tell us a little more about the mechanics of how you did it?

Tobias Guennel (08:45)

I think what I'm going to talk about is really applicable to how you generally bring generative AI solutions from prototype or POC into production. Nowadays there are many tools that allow you to do quick prototypes and figure out whether something is feasible. But bringing it into production is a whole other skill, where you have to think about aspects that may not be applicable if you're just doing a quick prototype.

I'm going to hit on four key points that we addressed when we brought our solution to production. The first is that you have to have a solid data foundation. The garbage-in, garbage-out principle applies to AI; maybe a little less than to other solutions, but it still applies for sure. Having a robust data foundation is crucial for the success of any AI initiative: looking at the right data, developing a ground-truth dataset that allows you to really measure the performance of your solution, and having connectivity across different data sources, which often is the first challenge. The good part for us was that this was the easiest challenge so far, since we had already built our intelligence platform to produce high-quality data assets. So that hurdle was lower for us.
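A ground-truth dataset of the kind mentioned above is typically used to score an AI component against known-correct answers before and after any change. A minimal evaluation loop might look like the following; the query/answer pairs and the stand-in "model" are invented for illustration.

```python
# Toy evaluation harness: score a model's answers against a ground-truth set.
ground_truth = [
    {"query": "samples collected at site 12", "expected": "48"},
    {"query": "consent status for subject 007", "expected": "withdrawn"},
]

def mock_model(query: str) -> str:
    # Stand-in for a real LLM call; answers from a fixed lookup here.
    answers = {
        "samples collected at site 12": "48",
        "consent status for subject 007": "withdrawn",
    }
    return answers.get(query, "unknown")

def accuracy(model, dataset) -> float:
    """Fraction of ground-truth cases the model answers exactly."""
    hits = sum(model(case["query"]) == case["expected"] for case in dataset)
    return hits / len(dataset)

print(f"accuracy: {accuracy(mock_model, ground_truth):.0%}")  # accuracy: 100%
```

In practice the comparison is rarely exact string match (judged or fuzzy scoring is common), but the shape is the same: a fixed dataset, a scoring function, and a number you can track across model swaps.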

The second part is that you have to have well-defined, real-world use cases. The key point is not to find a problem that fits your AI solution, but to find an AI solution that effectively solves your problem. So identifying and defining the real-world use cases that can benefit from generative AI is essential, and that involves understanding the specific needs and pain points of your stakeholders and designing your AI solution to address these effectively.

And when you look at the AI landscape, you could get the impression that AI can already solve the world's problems. The reality is maybe a little less dreamy, even though it's still super exciting. I think having the right definition of the use cases, and also of the key performance indicators you want to hit with your solution, is really important and gives you a real, measurable impact to go after.

The third piece is security. I think that's on everybody's mind, especially in the life science industry. We operate under highly regulated requirements, and you have to have a strong security framework. You've got to ensure data privacy and compliance with regulations. You have to safeguard against the new security threats that come with large language model applications.

And some of the data breaches that have been reported recently are prime examples of why you have to be really careful about how you implement AI solutions in sensitive environments. The way we thought about it: at the end of the day, we have a very well-defined security framework as part of our platform. We have a well-defined software development lifecycle that was built for operating in the regulated environment that life science presents. So we fit our AI solution into that software development lifecycle, including all of the various validation steps that have to be done. And we put it into a well-controlled virtual private cloud, where everything stays in that secured environment, really adhering to industry best practices for data integrity and security.

And then the last piece we thought about is building a scalable and future-proof implementation. I think that's a key piece for everybody working with AI solutions nowadays. The reason for having a scalable and future-proof implementation is that relying on a specific technique or a specific large language model will ultimately limit you as we go into the future. The AI industry and the general landscape are evolving really rapidly. There's news almost daily about better models, better AI frameworks, better reasoning capabilities, and being able to build a framework that allows you to plug and play new techniques and new models is really key to keeping up and improving your solutions as the technology evolves.
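The "plug and play" future-proofing described above usually comes down to a thin abstraction layer over model providers, so that swapping models is a configuration change rather than a rewrite. Here is a hedged sketch of that idea; the provider names and interface are assumptions for illustration, and real providers would wrap vendor SDK calls.

```python
from typing import Callable

# Each provider is just a callable taking a prompt and returning text.
# These stubs stand in for real LLM API calls.

def provider_a(prompt: str) -> str:
    return f"(model-A) {prompt[:24]}..."

def provider_b(prompt: str) -> str:
    return f"(model-B) {prompt[:24]}..."

PROVIDERS: dict[str, Callable[[str], str]] = {
    "model-a": provider_a,
    "model-b": provider_b,
}

def complete(prompt: str, model: str = "model-a") -> str:
    """All application code calls this; the model choice is one parameter.

    Adding a newly released model means adding one registry entry,
    not re-architecting the callers."""
    return PROVIDERS[model](prompt)

print(complete("Summarize site 12 biospecimen status", model="model-b"))
```

Combined with the ground-truth harness discussed earlier, this lets a team benchmark a candidate model against the incumbent before flipping the default.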

Nagaraja Srivatsan (13:19)

Toby, that was fantastic. Four key mile markers for anybody going down this pathway: a solid data foundation, real-world use cases, security, and last but not least, a scalable and future-proof implementation. As you went through this journey, I'm sure everything was not perfect, and you would have faced a lot of road bumps and challenges: people challenges, technology challenges, process challenges. Can you touch upon some of the challenges you went through in your AI journey, and maybe some mitigating strategies for how you overcame them?

Tobias Guennel (13:56)

Yeah, if we go down the people-process-technology chain: we started building our AI engineering team really early on. We were looking at different solutions and how they could enhance our platform, and in order to do so, we had a core team focused on rapidly prototyping and trying out various solutions to see how they could ultimately support our use cases. We engaged with different R&D partners to supplement our team, but having a really strong core team that understands the use cases deeply and can lead the effort and the charge is key. The process piece, I think, is similar to how any new platform or platform component development has to fit into your roadmap. You have to fit it into the right roadmap. You have to think about how you can be agile in implementing rapidly evolving AI capabilities while still adhering to best practices, especially in life sciences, for validation and providing a validated platform.

So there are definitely challenges. It's not always easy. You could basically swap out your models almost every week for a new model, so you really have to think about what the process looks like for building version-controlled solutions that allow you to keep innovating while still providing a validated platform. From a technology perspective, we looked at a very broad range of solutions: large language models, agent frameworks, even cloud providers, and how we wanted to build a production-ready platform and production-ready AI enablement within it. That can be challenging, right? There's a lot out there. There's a lot of noise, and a lot of different technologies that promise to do a lot of different things. So you have to be very focused: How can I quickly prototype? What are the key metrics I want to use to evaluate new technologies? How can I review the potential security implications of what is being implemented within the platform? You really have to think through not just whether the technology is the newest, latest, and greatest, but also the implications of everything that comes with implementing it in the platform.

Nagaraja Srivatsan (16:37)

That's super. I mean, it's not technology for technology's sake; the implementability of that technology is a very critical component. And on the people side of things, how did you bring your people along? I know you built the AI team from scratch, but was there change management or resistance? Were you able to upskill some of your talent? Walk me through a little of how you saw the people challenge, especially when you're trying to build a new area.

Tobias Guennel (17:04)

I think continued education is key. We encourage and expect that our teams, and that is not just the AI engineering team but our teams in general, stay up to date with the latest and greatest in technology developments. That's specifically true for AI technologies. When we embarked on this journey, maybe 18 or 20 months ago at this point, the AI landscape looked very different, right? People were just figuring out how to even host large language models. Nowadays, it's not really how you host them but how you choose among the 20, 30, 40 large language models that are available to implement in your platform, right? So continuing education, and giving teams opportunities to evaluate new technologies as they come to market, really is key. I think that also applies to giving teams the ability to do prototyping, right? Not being focused only on getting something to production, but also doing some prototyping continuously on the side to see the opportunities for improving the AI components, even if it's not in the next release but the following one, right? Being able to prototype and identify potential use cases that you may not have prioritized initially, but that will come down the roadmap. Identifying the right AI solutions is key, and you need to give teams the ability to prototype.

Nagaraja Srivatsan (18:44)

That's fantastic. What you're saying is: give them more hands-on ability to fail fast, try, and experiment. Those are all very much part of a growth-mindset culture. And Toby, how do you instill that? Because not everybody just comes in and does their nine-to-five job. How do you get them to be continuously learning, training, and experimenting? It would be fascinating to know what kind of culture you have set up for your team.

Tobias Guennel (19:11)

Yeah, I think at QuartzBio in general, we see ourselves as a cutting-edge leader in providing software-as-a-service solutions for precision medicine. Precision medicine is at the forefront of drug development. The biotechnology and pharma companies pushing in that direction of precision medicine and biomarker-driven drug development are cutting-edge leaders in that realm, and cell and gene therapy is the next frontier coming right after.

So in order to support cutting-edge pharma and biotech companies, our company needs to stay cutting edge as well. Our mindset is to be at the leading and bleeding edge of technology, continuing to push forward the use cases we're already supporting and identifying new use cases that we want to support for emerging areas in life sciences.

That's how we approach everything at QuartzBio. We are an innovation leader, we expect our teams to be innovation leaders, and we expect our teams to think about what we can do better, faster, and more efficiently in order to push the envelope.

Nagaraja Srivatsan (20:18)

That's wonderful, that you're creating an environment for innovation and having everybody become an innovation leader. Toby, as you said, in 18 months a lot has changed in AI, with agent architectures and new LLMs coming up. You talked about going from deploying one LLM to now selecting among 30 to 40. Where do you see the future? What would happen if you were here in the next two, three, five years? What is the art of the possible?

Tobias Guennel (20:46)

Yeah, great question. If I knew what it would look like in three years, I'd probably be playing the lottery pretty soon. But I think overall, AI already has had and will continue to have a significant impact across industries, and that is true for the life science industry as well. AI techniques will scale in different ways: from building higher-performance models with better reasoning capabilities, to smaller models that match the performance metrics of the larger legacy models but with significant cost and latency advantages, to an evolution of autonomous AI agents that will be able to tackle tasks of increasing complexity and effectively collaborate with each other.

I think the concept of agents autonomously working together can take away the, say, 80% of time that data scientists, translational researchers, and operations leads often have to spend on non-critical tasks in order to get to the insights they actually need and want to act on. Spending far less time on these mundane day-to-day tasks and being able to consume and act on insights is really what that agentic framework will ultimately allow you to do. So as AI continues to evolve, I anticipate even greater automation and efficiency, not just on the data management side, but also in insight generation that proactively serves up ready-to-act-on insights, significantly reducing manual effort and increasing efficiency, both operationally and in terms of faster access to insights. Now, as AI scales, you can really see how an agent network could grow: a framework with agents that can handle various types of tasks, tasks of different complexity, tasks that may not even exist right now. As new use cases come up, you can expand agent capabilities to support them.
It really is promising and exciting, and it will only get better as the reasoning capabilities of AI models expand, and as you can build models that are very task-specific to support those agents, leveraging different techniques to make them so. Those capabilities to build out that agent network, make it more efficient, and make it as autonomous as possible are ultimately going to improve what we are all trying to get to, right? Patient outcomes, and making precision medicine more efficient, effective, and impactful.

Nagaraja Srivatsan (23:45)

That's a wonderful vision of the future. You started by saying it is going to be multiple different fit-for-purpose agents. What you highlighted was that each of those agents now has the ability to reason, to be more contextual, and to evolve as it goes, getting 80% of those mundane tasks automated and making the process very collaborative. So I think that's a wonderful vision. What do you think would be the key roadblocks to realizing it? What are the risk factors? What do you think could potentially stop us from realizing that vision?

Tobias Guennel (24:27)

I think what comes to mind as one of the primary roadblocks is building the trust that AI and AI agents can, even semi-autonomously, perform those tasks. Now, how do you build that trust? Obviously by showing performance metrics: building performance metrics that are measurable and tangible, so that stakeholders can look at them and say, I believe what the AI agent is giving me, right? It's not a hallucination, it didn't do X, Y, Z wrong, it didn't skip the tasks I wanted it to do or do them the wrong way. I think that is a key piece. The other part is that while AI and AI agents should be and will at some point be able to do various tasks semi-autonomously, you do want human oversight, with humans in the loop to make sure that critical decisions are still reviewed, even when the AI can give a very clear indication and call to action and the right information to make informed decisions. So you need the right workflows, where humans are still in the loop, able to provide feedback, and can make the key decisions.

But they need the right information to make those key decisions. I think that's another aspect that will ultimately build trust in those AI capabilities. The other pieces, I think, are cost efficiency and user experience, as with any product component. Those are probably the smaller roadblocks to overcome, and quite a bit of advancement has already been made over the last 12 to 18 months.
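The human-in-the-loop workflow described in this exchange boils down to a gate: the agent proposes, and anything critical or low-confidence waits for an explicit human decision. A rough sketch follows, with invented field names and an invented confidence threshold; real systems would route on criticality and policy, not a single number.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    action: str
    confidence: float  # agent's self-reported confidence, 0..1
    rationale: str     # references explaining why the agent proposed this

@dataclass
class ReviewQueue:
    """Low-confidence proposals wait for a human decision; the rest proceed."""
    threshold: float = 0.9
    pending: list[Proposal] = field(default_factory=list)

    def submit(self, p: Proposal) -> str:
        if p.confidence >= self.threshold:
            return f"auto-applied: {p.action}"
        self.pending.append(p)  # held until a human reviews the rationale
        return f"queued for human review: {p.action}"

queue = ReviewQueue()
print(queue.submit(Proposal("flag duplicate sample ID", 0.97, "IDs match")))
print(queue.submit(Proposal("mark consent withdrawn", 0.60, "ambiguous form")))
```

Carrying the rationale with each proposal is what makes the review meaningful: the human sees why the agent decided what it did, which is exactly the reference trail discussed later in the conversation.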

Nagaraja Srivatsan (26:10)

So, Toby, let me explore a slightly controversial part. There is a notion somebody has coined called "AI obesity": when the human in the loop depends too much on AI, you tend to become complacent and trust everything that's going on, and it therefore makes you lazy. So what are your thoughts? You've said the human absolutely has to be in the loop, and you have to get AI validated, but as we start to build that trust, which is your point one, we tend to become a little more lax and not be the right human in the loop. What are your thoughts, and how do you prevent AI obesity from happening?

Tobias Guennel (26:49)

Yeah, I think with any technology, the technology is only as good as the people using it and the information it generates. So you have to start at the beginning: when you build AI solutions, you need to make sure they're well-built, that they're well-validated, and that the accuracy of task performance is there. I think that's the starting point. That's critical, and it's part of any robust software development lifecycle that supports life science implementations.

Now, the second piece, I think, is awareness of the information: providing the right context for the information that is given to the human who makes the decision is also key. There needs to be context, and information provided in a way that's easily consumable and understandable, giving those making the decision the right information to look at in an easy way, right? It can't be five pages of "this is everything I did," but there has to be enough information to make an informed decision. And potentially, and this is something where AI has evolved quite a bit over the last years, providing references as to why a decision is being made, right? Here are the reasons we arrived at a particular decision, so the end users can follow the thought trail of how it actually got to that point.

The other part is awareness more generally. I think this is true for any new technology, and especially something as disruptive as generative AI: you have to educate end users as to what the AI solutions can do for them, right? Here are the limitations; here's what you need to look out for.
And that needs to ultimately flow into the operational processes of how a company, for example a biotech company, leverages AI solutions in a safe and secure way and integrates them into its operations.

Nagaraja Srivatsan (29:03)

Great points. I know the 30 minutes went by very quickly. Toby, if you were to leave us with some key takeaways, what would they be? What are some of the key takeaways from this conversation, and how could the larger AI community benefit from this dialogue?

Tobias Guennel (29:19)

Yeah, I think the biggest takeaway is that AI will revolutionize precision medicine. I think there's no way around it, and embracing it rather than pushing it away is what I think of as one of the big takeaways. Now, in order to do so, you really want to think about a strong data foundation to feed the AI enablement of your processes, your technologies, and ultimately your people. And involving experts in the field who are aware of life science regulations and compliance requirements, and who can help guide how you address your specific pain points, use cases, and challenges with the right AI solution, is something I would definitely recommend. So, you know, collaboration with different experts and different teams in order to build that trust, internally and externally, I think is key.

Nagaraja Srivatsan (30:21)

This has been wonderful. Thank you so much for your time, Toby, and really appreciate you taking the time to be with me. And I think we have some really good insights from you, so really appreciate that. Thank you.

Tobias Guennel (30:32)

Thank you Sri. Appreciate it.

Daniel Levine (30:35)

Well, Sri, what did you think?

Nagaraja Srivatsan (30:38)

Danny, Toby was very good at articulating the journey of AI. As he talked about, you start with an agentic architecture of fit-for-purpose agents. He then talked about how you can orchestrate them to work together. But more importantly, how do you get them working to deliver good, proper business outcomes? So it was a very consistent theme and message on how you put these things together.

Daniel Levine (31:06)

How widely used are agents today, and do you have any sense of the impact they're having?

Nagaraja Srivatsan (31:12)

I think agents are being used quite a bit today. As Toby was saying, previously we would just use a large language model. Now we have a combination of large language models and small language models, and we're building what we call fit-for-purpose agents. I think it's a very good, methodical way of building AI solutions. What I also liked was Toby's four keys for how to go about deploying AI. He emphasized having a strong data foundation, because garbage in is garbage out. Then making sure you're doing real-world use cases, which is very important so that you're not building a hammer in search of a nail, but building the right use cases. Then security, making sure you're guarding against risks. And finally, looking at how you make it fit for purpose and scale it for the future, future-proofing it. I think these four are very good insights as you go down the AI journey, where you're not chasing a shiny object but using AI to solve real-world business problems.

Daniel Levine (32:21)

Yeah, I think for most people today, their experience with AI has been that they make an input and get an output. I think of agents doing work almost in the background, seamless and less visible. What might this suggest about how our relationship with AI may change over time?

Nagaraja Srivatsan (32:41)

Yeah, I believe we should have a buddy relationship with AI, not one where it replaces somebody. So you work back and forth with your buddy: you're giving information, taking information, and making informed decisions together. As Toby elaborated, the human in the loop is very important, which means you're working in a collaborative manner. It's not that AI makes all the decisions and we take them, or that the human makes all the decisions; you're working in a very collaborative way. And I think that's how this will have to evolve. AI, and especially agentic AI, is a little different from enterprise or transactional systems, which were deterministic: you give an input and they give you a fixed answer. With AI and agents, you need to make sure it's a collaborative process where you and the agent are working together.

Daniel Levine (33:36)

You asked Toby about change management and the people challenge. I'm wondering about this on the implementation side. How big a challenge is this for the company that's a trial sponsor, integrating something like agents into the way they're working?

Nagaraja Srivatsan (33:53)

I think it goes back to: how do you prove trust? Because nobody is going to just implement AI agents in a validated clinical trial marketplace. So you have to demonstrate that each of the agents delivers the functionality it claims to, building the trust, and making sure they are open about their outputs so they can be adopted within the clinical trial process. But with that said, as Toby and other market leaders are showing, there's a lot of research going on in this space and a lot of implementation. We've gone from pilots to real implementation use cases. So this is real. It is no longer, as Toby said, just trying a pilot and hypothesizing; it is about using it and making it production-ready.

Daniel Levine (34:44)

Well, it was a great conversation and exciting to see this in the area of precision medicine where AI can do so much to really make this a reality. Sri, until next time.

Nagaraja Srivatsan (34:57)

Thank you so much and really looking forward to our follow-up episodes on AI. Thank you.

Daniel Levine (35:04)

Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine. Thanks for joining us.

Our Host

Senior executive with over 30 years of experience driving digital transformation, AI, and analytics across global life sciences and healthcare. As CEO of endpoint Clinical, and former SVP & Chief Digital Officer at IQVIA R&D Solutions, Nagaraja champions data-driven modernization and eClinical innovation. He hosts the Life Sciences DNA podcast, exploring real-world AI applications in pharma, and previously launched strategic growth initiatives at EXL, Cognizant, and IQVIA. Recognized twice by PharmaVOICE as one of the "Top 100 Most Inspiring People" in life sciences.

Our Speaker

Dr. Tobias Günnel is a translational informatics expert and Senior Vice President of Product & Chief Architect at QuartzBio, a division of Precision for Medicine, Inc. He leads innovation in SaaS platforms for clinical sample inventory and biomarker data management, pioneering scalable solutions that bridge laboratory workflows with clinical development and translational science.