Building AI-Ready Biopharma Organizations
In This Episode
In this episode of the Life Sciences DNA Podcast, hosts Nagaraja Srivatsan and Daniel Levine sit down with Scott Cenci, Chief Information and Data Officer at Acadia Pharmaceuticals, to explore how AI is being scaled across the biopharma value chain—from early development through clinical operations and commercial execution. Scott shares a pragmatic, experience-driven view of AI adoption, shaped by decades of leadership across large pharma and biotech. The conversation moves beyond hype to examine how organizations can responsibly introduce generative AI and emerging agentic capabilities while maintaining quality, compliance, and patient safety. From medical writing and biostatistics to enterprise data strategy and ROI measurement, this episode offers a candid look at what it really takes to move AI from pilots to production in a regulated environment.
- Where AI lands first: Why clinical development (especially medical writing) is a practical starting point for GenAI in biopharma.
- Human-in-the-loop by design: How roles shift from “writer” to “editor” while keeping accountability and quality intact.
- Adoption is the real work: Why change management, training, and early adopters matter more than model performance alone.
- SaaS vs custom AI bets: How to decide when to lean on platform AI vs build custom solutions—and what tends to fail.
- Enterprise data + access control: How to connect structured + unstructured data while enforcing role-based insights.
- ROI without the hype: How to think about productivity, cycle time, quality, and patient impact—before hard ROI shows up.
- What’s next: LLMs everywhere in 12 months, agentic AI in 24, and “managing AI agents” as a new way of working.
Transcript
Daniel Levine
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com. We've got Scott Cenci on the show today. Who is Scott?
Nagaraja Srivatsan
Scott Cenci is the Chief Information and Data Officer at Acadia Pharmaceuticals, where he leads enterprise-wide digital transformation initiatives focused on harnessing data, technology, and artificial intelligence to accelerate innovation in neuroscience and rare disease therapies. He has over 30 years of experience in the pharmaceutical and biotechnology industries. He's had senior leadership roles at GenMab, Biogen, Zoetis, and Pfizer, driving large-scale digital strategies that supported rapid organizational growth and operational excellence.
Daniel Levine
What is Acadia?
Nagaraja Srivatsan
Acadia is focused on developing therapies for neurological and rare diseases. It developed and commercialized the first and only FDA-approved drug to treat hallucinations and delusions associated with Parkinson's disease psychosis, and the first and only approved drug in the United States and Canada for the treatment of the rare disease Rett syndrome. Its clinical-stage development efforts are focused on Prader-Willi syndrome, Alzheimer's disease psychosis, and multiple other programs targeting neurological diseases.
Daniel Levine
What are you hoping to talk to Scott about today?
Nagaraja Srivatsan
Scott has a wealth of information in deploying and implementing AI. He's really experienced in implementing it across the continuum of drug development from what I call molecule to market. More specifically, we're going to talk about the advent of Gen AI and its impact on several functional areas within his organization.
Daniel Levine
Well, before we begin, I want to remind our audience that they can stay up on the latest episodes of Life Sciences DNA by hitting the subscribe button. If you enjoy the content, be sure to hit the like button and let us know your thoughts in the comments section. And don't forget to listen to us on the go by downloading an audio only version of the show from your preferred podcast platform. With that, let's welcome Scott to the show.
Nagaraja Srivatsan
Scott, welcome to the Life Sciences DNA Podcast. It's great to have you here. It would be great if you could walk us through your journey with AI, how you've been adopting it, and what got you here.
Scott Cenci
Yeah, thanks, Sri. First, thanks for inviting me to come have a chat with you and Danny. I really look forward to this. It's been quite a journey: three decades across life sciences, technology, and data. I spent my time in large pharma in the early days, moving into animal health and then into large and small-to-mid biotech. I would say the AI part of my journey started a few companies ago, where we started to build out a data science practice and help the commercial business evolve its strategy through more predictive models, and then moved on at a previous company, a small-to-mid-sized biotech, where we took a much broader approach to using AI across the company. I joined in late 2019, and this really exploded in late 2022 with the release of ChatGPT on GPT-3.5. With the experiences I've had over the past three decades in life sciences across data, information, technology, security, and AI, I see this as a pivotal moment, not just for Acadia, but for the entire biopharma industry.
Nagaraja Srivatsan
That's a fantastic experience. You've done classic AI and now, with ChatGPT, generative AI. Scott, as you said, you started with commercial, with classic AI and predictive models. With the advent of GenAI, is there a particular segment of the life sciences continuum you started to focus on, saying this could be low-hanging fruit for you to start to apply AI?
Scott Cenci
It's interesting, Sri. I think there's opportunity across the board, quite honestly, from enabling functions through research and development through our commercial business. So yes, there are classical AI opportunities within commercial that we can talk a bit about. I also think there's an opportunity to leverage language models, particularly within SaaS platforms, as we see vendors coming up with solutions that leverage LLMs within their platforms, particularly in the CRM. There are some exciting opportunities that I see coming short term right now. But R&D is also very ripe for the use of AI. Certainly, classical AI has been used in R&D for some time now, but we do see opportunities now to start leveraging LLMs from early-phase research onward, including discovery of molecules using AI. We see biotech moving to techbio; there are a number of companies that we're both well aware of solely using AI to discover new medicines. For Acadia, our focus is now really pivoting more towards the development side, clinical operations, clinical development opportunities, all the way from early preclinical through the phase one to phase three journey, and ultimately to regulatory submission.
Nagaraja Srivatsan
That's wonderful. I think you brought up three motifs, including classic AI. We'll come to that. But why don't we double-click a little bit on clinical development, where AI is front and center. The clinical development process has been very human-centric, with lots of work done either by clin-ops or CROs in that marketplace. Where within that continuum are you seeing opportunities to bring new use cases, and are there areas you tend to focus on first rather than later? Why don't you walk us through your thoughts there?
Scott Cenci
The biopharma use case we've focused on has been medical writing. We see that LLMs are quite good at generating content, summarizing documents, and so forth. And when we look across the clinical environment, the massive amount of data and information that our medical writers, data management team, and biostatisticians need to deal with is quite incredible. And it certainly adds to the complexity and lead times when we think about the cycle times across trials. So for us, and even in previous companies, we've tended to focus more on the medical writers as one of the early use case opportunities. And there we've certainly done a number of experiments in previous companies, both custom as well as platform, and we see early and promising results. None of the solutions will get you to 100%. This is in no way a replacement for medical writers. In fact, they play such a critical role. But what I do see is our medical writers perhaps shifting more to medical editors, where the drafting of these documents, which are quite voluminous and time-consuming, can be done to 40%, 50%, 60% completion, maybe higher over time. And then the medical writers really become editors. They fill in the gaps. They question things that may not look right, dig deeper, do the analysis. Not just at Acadia, but even in previous companies, we want the human always in the loop. We want the human at the center of it. The human is always going to be accountable for the results of whatever tools they're using. Today, we're talking about AI. In the past, there have been many technologies that we've used as colleagues. We can't just point to the technology and say, well, that number was wrong because the Excel formula was wrong. Well, you're accountable at the end of the day for that Excel formula. We treat AI in very similar fashion.
It's used to augment and to help, but ultimately we want humans in the loop, and the human is the accountable party, the decision maker, and ultimately responsible for the quality of our output.
Nagaraja Srivatsan
Now, Scott, that's a really good area you're going into, right? As you started deploying AI, it's not an IT solution, as you said. You need to bring the business in together. Walk us through that journey, because it's not easy when you have all these LLMs and their potential. As you said, it's only doing 50 to 60%. Your stakeholders, and you've been in IT, want 100%, everything working, and all of that. So this must have been quite a change to bring them along: what is in it for them, why 50 is the new good, how they can keep working with AI to build up their journey from a writer to an editor. Walk me through that. There's a whole concept of change and business stakeholdership involved in this one.
Scott Cenci
Sri, I love where you're going with the question, because I view AI, this journey that we're on today, especially as we think about LLMs, as having much broader impact. We've moved from the classical era, which was sort of restricted to those programmers, data scientists, and data engineers leveraging AI, to the use of LLMs, which has very broad impact. You still need to bring the users along the journey, our colleagues, our peers. We can't take for granted that your and my interactions with these tools are the same as the medical writer's, the field colleague's, the finance person's, et cetera, within your company. In a previous company, I actually invested in a change management and communication practice within my organization because I felt it was such an important part of establishing the culture and the mindset: to create a safe environment for use of these tools and experimentation, and to put the governance in place, because people in different parts of the organization were saying, don't use AI, we're too concerned, there are too many regulations and rules, there's too much legal risk. You have to make sure people have that opportunity to experiment and fail, but fail in a safe way. I think you have to do all of these things. You also have to provide the education resources, because we really want our colleagues to ultimately become experts in the use of LLMs as part of their daily job. And to do that, colleagues ask for training. And so that brings up the question: do you move from just communicating and building excitement and energy to, how do I educate folks in the use of the tools? What's been great about the LLMs is I think you first need some level of face-to-face, hands-on, interactive session. We have the opportunity to bring our colleagues together in January of 2026. During that, we have different selections of breakout sessions, some on leadership, some on decision making, and so forth. One of those six breakouts is on AI.
It was interesting to see that more than half of our entire colleague population wanted to take that session. We more than filled the three sessions that we had planned, with more than 300 employees. And the reason is, they see it; they're either a bit skeptical or tentative, or excited but confused. And they want someone, in a classroom kind of setting, to show them what it's all about. Once you do that, once they get that experience, what I found is it's then up to those early adopters to start to share those use cases. So you move from the classroom to having those leading employees that are starting to experiment, starting to use it, seeing the value, share with their friends and colleagues, if you will. And then people realize that the tools themselves, if you look at the frontier models like ChatGPT and Claude and others, Copilot, will train you. You can ask it how to create a prompt, how do I improve this prompt, how do I add my data into this prompt, et cetera. They will train you on how to use them. And then the cycle starts to move. The flywheel moves much, much faster.
Nagaraja Srivatsan
Now that's a fascinating journey, where you go from a classroom to early adopters to then getting that spiral going. And it's classic change management, where you have to get the proof points going. But tell me, what kind of talent are you hiring into an IT organization? Are you hiring propeller heads and data scientists? And how do they work with these business stakeholders? That dynamic is not the classic IT. As you said, classic AI was data scientists doing it: they worked on a model and then told the business, this is what the model predicts. Here, it's not that; the model has to be adopted by these other stakeholders. What are you building in IT? What do you think you're getting from the business side, and how are you making these teams work collaboratively?
Scott Cenci
Actually, I'll take a step back. You bring up my role at Acadia, which is a great role, and one of the reasons I was so excited to join Catherine Owen Adams and the leadership team here at Acadia. Catherine had a vision that it's not just the technology; it's the data and the technology that have to come together. My role here as Chief Information and Data Officer is both hiring the technical capabilities, Sri, as you mentioned, but also those that really understand the data, the business process, how the business works. It's the combination of both that I think is quite powerful. One of the challenges I think companies may struggle with is that when you have multiple players accountable for these different roles, you really need a unified approach; otherwise, it starts to break down. The data has to be set up, established, and cleansed in a way that it's easily accessible to machines, not just humans. And you need the technology capabilities, the compute, the storage, the network, in order to pull these things off. So you've got the data, the technology, and the people all coming together in a triad in order for those things to be successful. To dig further into your question around the types of people we're hiring: as you know, it's a very competitive marketplace for AI talent today, right? And if you say, well, I want AI talent with 10 or 20 years' experience, those people don't even exist, especially when we think about LLMs and the explosion of ChatGPT in 2022. So most folks in fact have only a year, two, or three years of experience. So there you look for people that are continuous learners, people that have an agile mindset, can quickly pivot as these technologies evolve quite rapidly, and are willing to roll up their sleeves, dig in, and get hands-on. I think the days of just managing vendors and partners and so forth are long gone.
You need to be deeper into the technologies and really experiment and understand them yourself, and then, where appropriate, partner with third parties, particularly those SaaS providers that are starting to push the envelope in how they use AI within their platforms, so that we can get greater value from their solutions. When it comes to deep AI expertise, I think the best bet is really to partner with the technology companies that are able to attract, hire, and retain that deep technical experience, where you need more custom solutions, architectural types of things, and may not have the talent in-house. Partnership is key, both with the large SaaS providers and with some niche technical companies that can really help you advance your capabilities, combined with your internal talent.
Nagaraja Srivatsan
Now, let me explore the SaaS angle a little bit. You brought it up. SaaS players like CRM vendors are upping their AI game and bringing in lots of features. The LLM providers are upping their game and giving you a lot of features. You have custom shops saying, I could do this. In this kind of environment, how do you make the right selection? What do you bet on? Because that's where a lot of people are saying, hey, I'm getting AI from this SaaS vendor and this SaaS vendor and this SaaS vendor. Which AI model works? And a lot of times it's fit for purpose in one place versus being cross-functional. And as you said, data is important; you put that together. Technology and architecture are important. But how do you make the right decision? Because there's a plethora of new solutions addressing similar problems.
Scott Cenci
This is a great point, and it's not an easy time for CIOs, CDOs, and the like, right, in terms of how you make the right bets at the end of the day. I've been a cloud-first, SaaS-first technology leader for quite some time. And I think even in this era of AI, it's not a time necessarily to jump ship and say we're going to custom-develop everything. Some of the data points that I've seen in reports, and from peers that I've talked to, are that where they've done custom development, those solutions tended to fail more often than not, because there were so many parameters that had to be thought through. If I say I'm going to replace our CRM solution, I'm going to take on the company that's been doing it. Many pharma companies are on that same platform. That vendor has been in existence for 20-plus years creating that platform. If I think I could just create that tomorrow through an LLM, it's unlikely that I could create the full set of capabilities. Then you get into the question of, do I start to API into all of these platforms? There's quite a bit of work there. Say I choose not to go with Microsoft and Copilot; I'm going to use Claude, or I'm going to use ChatGPT natively. Now I've got to interface into all of the Outlook and Office capabilities. That becomes quite a bit of work, and I'm not sure it's worth the effort. Where vendors are either there or quickly catching up, and we believe in their story and where they're going with embedding AI capabilities within their platform, we think that that's the better choice. The key is, when you want to use your enterprise-wide LLM, which is something we've done in previous companies and are going to do at Acadia (we want our colleagues to have access to an enterprise LLM), how do you expose the data appropriately so that people get the full benefit of that LLM?
That's where I think the challenge lies: making sure that the data sitting within these various solutions ultimately gets pulled back into an enterprise data lake or data warehouse capability, so that the LLMs can easily access it. Otherwise, you really can't do cross-platform knowledge and insight gathering.
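The pattern described here, pulling platform data into one enterprise store so an LLM can reason across it, is essentially retrieval-augmented generation. Below is a minimal, hypothetical sketch: a toy in-memory "lake" with keyword scoring stands in for a real vector store, and the assembled prompt stands in for an actual LLM call. The record contents and source-system names (`crm`, `ctms`, `dms`) are invented for illustration.

```python
# Sketch: cross-platform retrieval over an enterprise data lake.
# Records and scoring are hypothetical; a production system would use
# embeddings plus a vector store, and send the prompt to an LLM API.

LAKE = [
    {"source": "crm",  "text": "HCP engagement dropped 12 percent in Q3 for neurology accounts"},
    {"source": "ctms", "text": "Phase 3 enrollment for Rett syndrome study ahead of plan"},
    {"source": "dms",  "text": "Draft clinical study report section 11 pending review"},
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank lake records by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        LAKE,
        key=lambda r: len(terms & set(r["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt an enterprise LLM would receive."""
    context = "\n".join(f"[{r['source']}] {r['text']}" for r in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How is Rett syndrome enrollment going?")
```

The point of the sketch is the shape of the flow: platform data lands in one queryable place, retrieval selects the relevant slices across source systems, and the model answers against that cross-platform context rather than any single silo.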
Nagaraja Srivatsan
You're saying, let me get my data correct, fair enough: enterprise-scale data warehouses, data lakes, whatever you're doing. What role are you trying to have the LLM play? Are you having it talk to your data now? What kind of use cases are you building where you have a sound data infrastructure, structured data, and now you have the world of LLMs? How are you merging these two in this new normal?
Scott Cenci
So certainly, on the SaaS side, the vendors are doing their jobs to make sure that as they're using AI, LLMs, and so forth against those data sets, they're doing the work around the guardrails and all the details associated with that. I believe the power that any pharma company has is in the data that it generates itself as well as the data that it's purchasing. When we think about the power of an LLM, a large language model by nature, its genius is baked into the vast amount of information that it has. It can generate insights much more easily than you and I can as single human beings, with our limited capacity in terms of how much information we can hold in our brains at any given time. So to me, the power is in finding ways to connect that enterprise data and allow those LLMs to generate the insights. Then, how do you restrict the insights to only those who should see them? Data by nature has to be separated in terms of role and what you have access to, and so forth. I think being able to provide the insights only to those colleagues who should have access to that type of data is where we would want to put the filter, versus trying to filter the data going into the LLM. I almost think of it as the role of our CEO. The role of our CEO is to have access to all of our data, right? Generally speaking, there's very little restriction on what the most senior person in our company can see of our data. How do we then use an LLM in a way that gives that line of sight across all that data, but is able to process it in real time and generate those insights? I think that could be very, very powerful for companies as we move forward in this AI era.
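The distinction drawn here, filtering the generated insights by role rather than filtering the data going into the model, can be sketched in a few lines. The roles, tags, and insight strings below are all hypothetical; a real deployment would map roles to an entitlement system and tag insights by data provenance.

```python
# Sketch: filter LLM-generated insights by role AFTER generation,
# rather than restricting which data the model can see.
# Roles, tags, and insight text are illustrative only.

ROLE_SCOPES = {
    "ceo":          {"finance", "clinical", "commercial"},  # sees everything
    "clinical_ops": {"clinical"},
    "field_rep":    {"commercial"},
}

INSIGHTS = [
    {"tags": {"clinical"},   "text": "Site 14 screening failures are trending up."},
    {"tags": {"finance"},    "text": "Q4 R&D spend is 8 percent over forecast."},
    {"tags": {"commercial"}, "text": "Message B outperforms message A with neurologists."},
]

def insights_for(role: str) -> list:
    """Return only insights whose tags fall entirely within the role's scope."""
    scope = ROLE_SCOPES.get(role, set())
    return [i["text"] for i in INSIGHTS if i["tags"] <= scope]
```

Used this way, the model reasons over the full data estate (the CEO's line of sight), and access control is enforced on the output side, with an unknown role receiving nothing by default.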
Nagaraja Srivatsan
And that's a fantastic journey. You build the data infrastructure, give people clean access, then build role-based access controls to that data. But what you're saying is that the role of the LLM is to give you that deep insight, those deep connections, on top of that. Scott, as you start to look at it, this is a very structured data model. As you know, in clinical development, or anywhere, we have hordes of documents, unstructured data, all across your infrastructure. What are your thoughts there? Because the CEO, and I love that narrative, is not just looking at structured data coming from your SaaS platforms and others; they're trying to correlate regulatory-ready documents with the structured data that's coming in. Are you seeing a world where these things come together? How are you thinking about that for the future? I would love to get your perspective on that.
Scott Cenci
By nature, you know, LLMs are multimodal today, right? And so being able to have transcriptions of meeting minutes, videos, social media, all of these things play a role. As a leader in the organization, you're absolutely right: it's not just the structured data sitting in a database. It's interactions with customers. It's things that you're reading on social. It's great podcasts like this one that are sharing information. How do we bring those insights together, not just within our company, but even insights that are sitting out across the internet? Being able to put those things together to generate insights that perhaps we wouldn't have had just by looking internally at our own data or at a database, to me, that is where the power lies. I don't say this is an easy problem to solve. I'm not sure any company has really fully harnessed all of its data sets, because of data siloing, different data formats, different platforms and tools, what's restricted and not restricted, and so forth. There are complexities there. But I do think that's the north star we have to be thinking about. I think that's really where the power of a large language model within a company can add new insights that perhaps wouldn't have been seen using traditional methods.
Nagaraja Srivatsan
Scott, I'm going to pivot the conversation a little bit into this dreaded word called ROI, and prioritization, which we all live with. And the reason I say that is there are different ways in which people approach this problem. We've been exploring this on the podcast. Some people did hackathons and democratized this, but that didn't get to the true ROI. Some people were much more laser-focused on the top three priorities. I would love to get your insight on how you're thinking about 2026, because every dollar has to be justified against the return on investment. So how are you aligning with your management, and how is that ROI discussion progressing?
Scott Cenci
Yeah, I mean, you're playing the role of the CFO, Sri. I look at this in many different ways. If tomorrow you said to me we couldn't invest a dollar unless I can guarantee you an ROI, I think most companies wouldn't be able to even experiment in AI, right? It would be very difficult in the beginning to come up with guarantees, especially when we think about colleague productivity as an example. I've made this statement: I think every colleague should have access to an LLM, and I still believe wholeheartedly that to be true. I use it not just daily; I would say many times a day I'm in an LLM as part of my job, doing research, personal things, health questions, you name it. I'm using LLMs very frequently throughout the day and seeing that value. I know that can help my colleagues in the organization also see that value. How you put an ROI on some of these things can be really, really challenging. It's interesting that with AI, or any tech investment, we want to put an ROI on everything that we do. When's the last time we did an ROI on Excel? When's the last time we did an ROI on Outlook, or an iPhone, or the browser, you name it? All of these different technologies, a new database that we're putting in, we're a little bit less critical of; we know they're needed for colleague productivity, but we don't try to measure it down to the minute or to the dollar, right? I think we're still a little early in the adoption curve of the technology, where experimentation and the lack of very defined ROIs is still appropriate. I say that in the context of the small and mid-sized companies that may not have been investing over the last three years. If you look at large pharma, which jumped in, in many cases with large volumes of dollars and resources, they're probably a little ahead of that curve now, where ROI is becoming really, really important.
They're beyond the experimentation phase. Their senior leadership is saying, I want a return, otherwise we will not do AI in this particular area. I think it's where you are on your maturity curve as to how quickly that ROI comes into play. Across the industry, though, what I would say is we've seen a lot of experimentation. If I think back to my previous company, we probably started those experiments early on in 2023, POCs, pilots, and so forth. Then in 2024, we started to do some implementations. As I've mentioned, much of the custom stuff just didn't really pan out. And so here you are in 2025, for a company like that, saying, hey, if I'm going to invest, I want ROI, right? Other companies may have started their process in 2024 or maybe even late 2024 into early 2025, so they're still early on that journey of experimentation. And I think the company itself has to go through that curve: building the excitement and energy around what your north star is going to be in terms of the use of AI and how it transforms your company; the change management, the communication, the training, the education across your colleague base; getting enough early adopters to do those POCs and pilots; and then getting to a point where we say, okay, we believe enough in this that we're now going to make investments in key areas of our business processes. And there we're going to look for return on investment. If you look at things like clinical development and so forth, it's rather obvious: you've conducted many studies over the history of your company. Hopefully you have some decent records, I'm sure they do, down to the day, maybe even the hour, of how long it took to close out a phase three and submit to a regulatory authority, et cetera.
Now you can apply AI to that process, sometimes even in parallel to the manual process, and then measure it and say, okay, was I able to reduce cycle time in any way through that process? Did I improve quality, patient safety, et cetera? What are the things that I want to measure that are impactful to my business? And then you could say, yes, we were able to reduce something by 5, 10, 20%. And then you can calculate: was the dollar investment worth the business benefit?
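The cycle-time comparison described above reduces to simple arithmetic once you pick a value per day saved. A sketch, with all figures purely illustrative (not Acadia's numbers):

```python
# Sketch: was the dollar investment worth the cycle-time reduction?
# All inputs are illustrative placeholders.

def roi(baseline_days: float, ai_days: float,
        value_per_day: float, investment: float) -> float:
    """Return (benefit - cost) / cost for a cycle-time improvement."""
    benefit = (baseline_days - ai_days) * value_per_day
    return (benefit - investment) / investment

# e.g. a report cycle cut from 60 to 48 days, each day valued at $25k,
# against a $150k tooling investment: 12 days * $25k = $300k benefit.
r = roi(60, 48, 25_000, 150_000)
```

Quality and patient-safety gains don't fold into a single number this way, which is why the measures chosen (cycle time, error rates, rework) matter as much as the formula.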
Nagaraja Srivatsan
Scott, you bring up two great points. One is the socialization of the tool in the hands of people. And the second, which I really am a big proponent of, is the experimentation culture, which is not very normal. As you said, you have to experiment to learn and go from that classroom to being an adopter on the journey. Very importantly, you hit upon a few other ROIs. ROI is not just dollars; it's offense and defense, which is better quality, better timelines, and so on. As you look at your 2026-27 plan, are there any big buckets or pockets where you're saying, as with medical writing, this is ripe for transformation? Medical writing is a classic LLM case: you change medical writers to editors. That's a great narrative. You take a functional role and redefine the to-be role. And I think that's a very good framework, because then you can sell the concept easily, right? Because now you're telling people what better looks like. Are there other such pockets you're seeing in clinical development that go from a “before” to an “after” like the medical writer?
Scott Cenci
That's interesting in terms of the role shift, because I do think that all of us as colleagues, over time, and maybe we'll get into predicting the future, what things look like in one to two or three years, et cetera, will need to shift how we operate. I've even had to shift how I operate as a leader. Where's my focus? Where am I spending my time? What am I researching? What am I experimenting with personally in order to bring greater value to the organization? I think all of us will start to operate a little bit differently. So the medical writing one is just an example, right? How does the role of the data manager change? How does a study monitor change their role, et cetera? We can go through biostatistics, another great example. Our biostatisticians essentially create code to run statistical analyses. That code generation is ultimately shifting to large language models. In fact, what we're seeing as best in class is really where you're using multiple different models to act as different virtual developers, if you will. Perhaps one's doing the coding; perhaps a different model is doing the quality check, the testing. Maybe one's looking at the user interface for a particular application: does the UI flow nicely, et cetera? Documentation as well: some things need to be validated and require further documentation. So I see that in biostatistics, the development of the code they need in order to run those statistical analyses will be done through LLMs going forward. Their role, where they spend their time, and what they do is going to shift. Now it's going to be more prompting for the types of applications they need to create, versus actually hard-coding and developing those solutions. To me, the opportunities don't just end in R&D. I see lots of opportunities within commercial. We've got some very bright colleagues generating ideas by the day.
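The "multiple models as virtual developers" pattern, one model drafting code and another reviewing it, can be sketched as a simple loop. The `coder` and `reviewer` functions below are toy stand-ins for calls to two different LLMs; the approval protocol and the docstring check are invented for illustration.

```python
# Sketch: two models in different roles, one drafts statistical code,
# one reviews it, iterating until the reviewer approves.
# `coder` and `reviewer` stand in for calls to two different LLMs.

from typing import Callable

def generate_with_review(task: str,
                         coder: Callable[[str], str],
                         reviewer: Callable[[str], str],
                         max_rounds: int = 3) -> str:
    """Loop draft -> review, feeding feedback back into the prompt."""
    prompt = task
    draft = ""
    for _ in range(max_rounds):
        draft = coder(prompt)
        verdict = reviewer(draft)
        if verdict == "APPROVED":
            return draft
        prompt = f"{task}\nReviewer feedback: {verdict}"
    return draft  # best effort after max_rounds

# Toy stand-ins: the 'reviewer' rejects drafts missing a docstring.
def toy_coder(prompt: str) -> str:
    if "docstring" in prompt:
        return 'def mean(xs):\n    """Arithmetic mean."""\n    return sum(xs) / len(xs)'
    return "def mean(xs): return sum(xs) / len(xs)"

def toy_reviewer(draft: str) -> str:
    return "APPROVED" if '"""' in draft else "Add a docstring."

code = generate_with_review("Write mean()", toy_coder, toy_reviewer)
```

The same loop extends naturally to the other virtual roles mentioned (a tester model, a documentation model) by adding further review stages before approval.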
Probably a day or a week doesn't go by where somebody doesn't flag me in a hallway, or on a visit to one of our offices in another location, and say, hey, I've got this idea. One of the things I see us experimenting with early this coming year, in 2026, is a virtual coach in the commercial business. Think about launching a new product, a new indication, a new formulation, or engaging different customer types. Traditionally, folks within our commercial business would need field coaching, right? And that's usually a supervisor, a peer, et cetera. We think there's an opportunity to use large language model capability as a virtual coach, where you can train and educate that model, give it the guardrails, and allow our field colleagues to interact with it. That's an exciting opportunity in the commercial business where our field colleagues could get quite a bit of benefit, because not only can the virtual coach help with the messaging, you can also use it to educate them on the types of practices and physicians they're going to engage with, how those physicians might respond to different messages, or the questions they may be asked. So I think that's quite an exciting opportunity that we'll start to look into in '26.
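The multi-model "virtual developers" pattern Scott describes for biostatistics code generation can be sketched as a simple coder/reviewer loop. This is purely illustrative: the model calls below are stubbed-out functions rather than real LLM API calls, and every name is hypothetical.

```python
# Hypothetical sketch of the multi-model "virtual developers" pattern:
# one model drafts code, a second model reviews it, and the loop
# repeats until the reviewer approves. Each stub would, in a real
# system, prompt a different model.

def coder_model(task, feedback=None):
    # Stub: a real system would call a code-generation model here.
    draft = f"def analyze():  # implements: {task}\n    pass"
    if feedback:
        draft += f"\n# revised per review: {feedback}"
    return draft

def reviewer_model(code):
    # Stub: a real system would ask a second model to test/critique.
    if "pass" in code and "revised" not in code:
        return "add a real implementation"  # request changes
    return None  # approve

def generate_with_review(task, max_rounds=3):
    """Alternate drafting and review until the reviewer approves."""
    feedback = None
    for _ in range(max_rounds):
        code = coder_model(task, feedback)
        feedback = reviewer_model(code)
        if feedback is None:
            return code  # reviewer approved
    return code  # fall back to the last draft if rounds run out

result = generate_with_review("summary statistics for trial endpoints")
```

The point of the structure is separation of duties: the reviewing model never edits the draft directly, it only returns feedback, which mirrors the human coder/QC split Scott describes.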
Nagaraja Srivatsan
Scott, on the virtual coach, taking what you said a step further, there are a couple of startups that are looking at training the field through a virtual coach to build that muscle memory, giving reps real-time feedback: this is your current go-to way of approaching the physician, and you should shift it this way, which could get you better results and retrain your muscle memory. So that's a great concept. Once you build a virtual coach, you then build virtual training, and then feed what that training means in real time back to each of those roles. Scott, we could keep talking down this path, but I want to pivot to two things. One, I definitely want to get your horizon view: where is all of this going in the next two to three years, and where do you see the impact? And before we go down that question, I want to focus on what kind of culture one should create to be successful, not just for today but for tomorrow. So maybe you could give me the horizon, and then, if that's where the market is going, what kind of infrastructure or culture you need to create. I'm happy to get your thoughts on that.
Scott Cenci
Sure. We'll start with, I'll put on my cap, take out my crystal ball, and we'll talk about the future. It's interesting. When I think back in my career, we talked about a three- or five-year technology roadmap. It was always challenging, but you could roughly predict where you would be making investments in key platforms maybe five years out. Maybe an ERP gets replaced in 10 years, or something along that longer horizon as part of a long-range forecast for a company. With AI, I don't know; even for the experts, it's hard to see beyond 12 months. Maybe Sam Altman, Satya Nadella, or Elon Musk could see 18. It's really, really difficult because it's moving so quickly. It's amazing. I think the latest stat I saw was that ChatGPT has almost 900 million users, right? When we think about the population of 8 billion people on the planet, roughly a tenth of the planet is using ChatGPT, and that's only one of the frontier models; other people are using other models. So the speed of change is beyond imagination. So where do I see things over the next 12 to 24 months, maybe even stretched out to 36? I think in 12 months, everyone in the corporate world will be using an LLM. I think any of the holdouts, if you will, will be adopting in 2026. For those that choose to stay on the sidelines, it will be increasingly difficult to do your job without the use of one or more LLM capabilities. About two years out, we'll start to see more and more usage of agentic AI. There's certainly a cautious tone within our industry around agentic AI, because it feels like you're starting to lose a bit of control. By nature, if AI is making decisions and can pivot and take actions on your behalf, you're releasing some of the control you have over what it's doing. And so I see the industry probably easing itself into agentic AI over the next 12 to 24 months.
And then if we stretch out to 24 to 36 months, I believe the way we do our jobs in the future will be that each colleague, even individual contributors, will be managing agents. Imagine a virtual workforce where part of your role is to manage virtual agents that are doing work on your behalf. It's almost like having a few PhD interns going out and doing the analysis you need and coming back each day with a report of updated information. I see that world evolving. There will be companies, of course, that do that a little faster, but I'm giving you roughly where I think the industry in general might be over the next 12, 24, and 36 months.
Nagaraja Srivatsan
No, that's absolutely fascinating, and it's the right way to think about it. I talk about this concept called AI teammates: you have to start working, like you said, with both real and virtual teammates. I'm not waiting for three years; we're trying to do that in the marketplace now because it's a part of the culture. So maybe you could talk about, if this is the vision, what kind of team, talent, and culture you need to build to make it happen?
Scott Cenci
One of the things I want to comment on is that when I share that timeline, it's about when the average employee across our industry is doing it. Of course, companies right now, including yours, are working on agents: how do we start to deploy agents, and where do we deploy them? But there are a lot of things we have to take into consideration before we just throw agents into running our clinical trials. We don't just go from point A to point B. It starts with technology companies coming up with the capabilities. They start to introduce and pilot them within industry. You need to think about the guardrails, the regulations, all the issues, and the colleagues, getting them trained up to speed. Then it's adopted and you see the business value. That's why I say, Sri, the timeline is probably a bit more extended than just companies starting to create agentic capabilities. So if I think about who we need to hire, it almost brings up the question of the students graduating high school and going into college these days, or those just graduating college; it's such a difficult time in many ways. I do get the question quite a bit: what should I study? What should I focus on? Or, now that I've graduated, how do I pivot, because the role I thought I was going to play is being disrupted by AI? So again, I go back to continuous learners as probably one of the most important things. We all need to be, myself included, constantly educating ourselves and learning. The data and technology landscape is evolving so quickly that none of us can sit idle for a few years, months, or weeks for that matter. So continuous learning is certainly one. I think people that really understand business processes are quite helpful; we can't apply technology to a process we don't understand. But I also look for openness to change, right?
So people that are agile, that will pivot, that are willing to challenge the status quo are equally important. Just because you understand the current process, the way we conduct a phase three trial at company A, you still have to have a willingness and a desire to positively disrupt it in a way that makes it much more efficient: perhaps faster, perhaps requiring fewer human resources, perhaps bringing the cost down. At the end of the day, the business we're in is to serve patients. Any way that we can safely bring new medicines to patients faster and more cost-effectively, we should be 100% focused on those opportunities. I look for people that share that common interest.
Nagaraja Srivatsan
Yeah, you're spot on about continuous learning. We all have to have that growth and change mindset, which is so critical. You also need to stay an expert yourself, to make sure you're verifying what AI tells you, because it doesn't tell you the right thing every time. So you need the capability to check the checker, as they call it. And Scott, one last thing. As you start to paint the future, there is this term "AI slop" coming in, wherein we trust what AI is doing so much that we become sloppy, and the muscle memory builds where we stop checking the checker and stop doing all of that human-in-the-loop verification. So any guardrails on how you would avoid AI slop seeping in? Because as you said, this is an exciting journey, but as we get more and more reliant, how do we make sure we have the right guardrails: what you said, the expertise in business process, the ability to challenge, all of that? Any thoughts on that?
Scott Cenci
Yeah, it's interesting. If you go back to the medical writing example, is there a fear that the medical writers will just say, okay, let me push the button and whatever comes out, we accept it, and lose sight of the data analyses, tracking down information, investigating how something needs to be written, and making sure the messaging is accurate, comes across appropriately, and is easy to read? If we ever get to a society where it's push the button, close your eyes, and it's done for you, like the microwave oven heating up your dinner, that is a challenge for society that we have to think about. That's not just a life science problem; I think that's a much broader issue. I don't fear it in the short term by any means, because I don't think these models are giving 100% accurate outputs that we can just submit to a regulatory authority. There's still quite a bit of work for our experts, our human resources, to verify, fill in the gaps, modify, correct, et cetera. The longer-term challenge, Sri, that we have to think about as a society is how we ensure that not only can I calculate six times six on a calculator, but I actually remember how to do that calculation with handwritten mathematics, right? That's a challenge we need to continue to tackle, always making sure that the human in the loop is really verifying the output of the AI models. That's where we need to stay squarely planted for at least the short to mid term, until such a time that we really don't need to worry about how to do the math.
Nagaraja Srivatsan
Scott, this has been a fascinating conversation. If there were a couple of takeaways from this conversation you want to leave with the audience, I'd appreciate that.
Scott Cenci
Look, again, thank you so much for the opportunity to dialogue on this. It's one of those topics that we're all going to be talking about for years to come, because the technology is evolving so quickly. I'm super excited for our industry. I think AI really can be transformative. Molecules take such a long time from discovery all the way through development, out to being commercialized and into the hands of patients. I see AI having a role it can play across that continuum. We're super excited at Acadia, investing in this area, and we think we can bring greater value to our patients through the use of the technology, AI, the data, and other capabilities that will present themselves in the future.
Nagaraja Srivatsan
Scott, thank you so much. I think we're all here because we want to bring new medicines to market and help our patients with life-saving treatments. So, really appreciate all your insight. I loved the banter, and it's so wonderful to have somebody like you on our show. Thank you.
Scott Cenci
Great to be here. Thank you.
Daniel Levine
Let me welcome Arun Kumar, Chief Technology Officer of Agilisium, who's joining us for the post-interview discussion. Arun, what did you think?
Arun Kumar
A phenomenal discussion. Great insights from Scott, a leader who has been three decades in the data and AI space, who has seen AI mature over that time and its adoption through GenAI and agentic AI. Great thought process and thought leadership, especially on the three major areas he touched on. The first, which he hit very hard, was change management: how do you bring people along with all these changes and make sure AI, GenAI, and agentic capabilities are an enabler for the day-in, day-out work that they do? The second is talent: who should be considered for this, who is a quick or continuous learner, and who understands the business processes. He also made the point that applying technology without knowing the business processes is going to be a disaster, and I agree with what Scott was saying. The third is that at the leadership level Scott holds, they always look for ROI. He put it very well: how do you put an ROI on Excel being used, or Word documents and PowerPoint being used, or Outlook being used? Our thinking is still evolving on how to measure ROI for the investments being made in AI and GenAI, and the idea of having a human do the manual process, having AI run in parallel, and then evaluating the ROI is a great thought that came out of it. Great discussion and great thought leadership from Scott, I would say.
Daniel Levine
Sri, did you want to add to that?
Nagaraja Srivatsan
The role change was a very critical part. He said that people, specifically in medical writing, would go from writers to editors. It's this narrative that every job function would change in what it becomes. He also talked about people becoming managers of AI and virtual agents as we look into the future. And the third most important part is the notion that people, to be successful in this area, have to be continuous learners and experimenters, trying and adopting these things. So I would say those were some very critical takeaways from this conversation.
Daniel Levine
You talk about role change, and it's interesting with regard to change management. One of the things he talked about was taking people, whether they're skeptical or excited, giving them that exposure, and then relying on the early adopters to drive and accelerate adoption throughout the organization. What did you think about that?
Nagaraja Srivatsan
It was a very well-done process. First, he said you get people to know about the platform or GenAI through what he calls classroom sessions. But very quickly, you pivot from classroom sessions to experimentation, and through experimentation, leaders emerge: people who are early adopters and who are doing it. How do you nurture and encourage them to be evangelists so that you can spread that across the different teams and structures? He's following a very classic change management approach: finding and identifying who the change agents will be, then supporting and enabling them to be successful as they scale these efforts across the organization.
Daniel Levine
One of the interesting points he got into when you were asking him about the use of SaaS vendors is the challenge of connecting the enterprise data and getting the full benefit of LLMs, but also ensuring that data is only available to those who should have access to it. I'm wondering if either of you can weigh in on the types of challenges that that represents.
Nagaraja Srivatsan
The first part of it is role-based access, right? Life sciences has always been based on having access only to the information you are authorized to see, so how you enforce that is a very critical part of it. The second part is, as you start to build the infrastructure, what he's saying is that a custom implementation across multiple silos may not give you the efficiency of scale that adopting AI within the SaaS platforms he has already implemented would. So it's a really good way of asking, where do I get the biggest bang for the buck? If I'm using a system or platform, can I use the features and AI therein to make it work better? That also helps him with access control, because all of these SaaS platforms have very strong roles, permissions, and access controls built in. That's where he was leading the conversation.
Daniel Levine
Arun, your thoughts?
Arun Kumar
I completely agree on the angle of roles and so on. Each and every persona is different, and access to the data is very critical: who has access to it, and what sort of data they can access. So controlled access provisioning across the datasets, based on persona, is critical, and then making sure you understand the implications of using an LLM on top of that data is also very critical, that's what I would say.
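The persona-based access control both speakers describe can be sketched as a filter applied before any document ever reaches an LLM prompt. This is a minimal illustration, with all names and data invented:

```python
from dataclasses import dataclass

# Hypothetical sketch of role-based filtering applied ahead of an
# LLM: unauthorized documents never enter the prompt context.

@dataclass
class Document:
    doc_id: str
    text: str
    roles: set  # roles allowed to see this document

@dataclass
class User:
    name: str
    role: str

def retrieve_for_user(corpus, user, query):
    """Return only documents the user's role is authorized to see.

    The role filter runs *before* search, so a later LLM call can
    only ever be grounded in documents the persona may access.
    """
    visible = [d for d in corpus if user.role in d.roles]
    # Trivial keyword match stands in for a real vector search.
    return [d for d in visible if query.lower() in d.text.lower()]

corpus = [
    Document("d1", "Phase 3 trial enrollment figures", {"clinical", "exec"}),
    Document("d2", "Field sales messaging guide", {"commercial"}),
]

analyst = User("Ana", "commercial")
hits = retrieve_for_user(corpus, analyst, "messaging")
```

Placing the filter on the retrieval side, rather than asking the model to withhold restricted content, is the design choice that matches the "enforce access before the LLM" point made above.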
Daniel Levine
One other point you discussed was the ROI challenge and what you referred to as experiment fatigue. We're at a point where this is being broadly adopted throughout the industry and it's really going to become a requirement. How much of this requires a certain amount of faith and commitment while, at the same time, keeping the CFO satisfied that the company is getting its return on investment? What is the implementation challenge these companies are going to face here?
Nagaraja Srivatsan
So I think he talked about it in two parts. One part is democratizing access to these tools. That, he said, should be like Excel, Word, or PowerPoint, where you give people access to experiment. That needs to be done, and it should be broad, and the ROI is simply productivity and adoption; nobody asks for ROI on Excel, so why would you ask for it here? That's the first part. The second part, he said, is that he's looking at the full continuum of the clinical development process and asking which tasks can improve quality, reduce cycle time, and help accelerate clinical trial development. Those are very good lenses and ROI measures to apply to different use cases, to make sure you're putting the money behind the use cases that can drive better clinical timeline acceleration, drive better quality, and improve what you're doing from a clinical development standpoint. So he has a very good way of democratizing the ROI and access while also prioritizing across what matters within the clinical process. Arun, I'll give you the last word.
Arun Kumar
So ROI is very critical at the leadership level that Scott holds, right? He clearly touched on socializing the tools in the hands of the people and making sure they're comfortable using them. One of the use cases he touched on was the virtual coach for the commercial organization, helping the field team see a better way to interact with HCPs. So ROI is critical, but so is democratization of the tools in the hands of the people and making sure they're comfortable using them, and then evolving toward which business processes the LLM can actually help optimize or improve; that's where I think he was going.
Daniel Levine
Well, it was a very rich conversation with a lot to think about there, but Sri, Arun, thanks so much for your time today.
Nagaraja Srivatsan
Thank you.
Arun Kumar
Thank you so much.
Daniel Levine
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. Life Sciences DNA, I'm Daniel Levine.
Thanks for joining us.