Ensuring Successful Implementation of AI
In This Episode
The 'Life Sciences DNA' Podcast, sponsored by Agilisium Labs, delves into groundbreaking innovations at the intersection of technology and life sciences. In this episode, host Dr. Amar Drawid is joined by Dr. Anastasia Christiansen, a renowned data science and AI leader, to explore how AI is transforming the pharmaceutical industry.
- Explore the integration of AI in pharma and the importance of organizational models that foster cross-functional collaboration.
- Discover why co-creation among teams is crucial for accelerating AI solution development and unlocking new therapeutic advancements.
- Learn about the challenges of integrating AI within large pharmaceutical organizations and the potential downsides of siloed AI teams.
- Gain insights into determining the optimal structure for AI and data teams to maximize collaboration and innovation in drug development.
Transcript
Daniel Levine (00:00)
The Life Sciences DNA podcast is sponsored by Agilisium Labs, a collaborative space where Agilisium works with its clients to co-develop and incubate POCs, products, and solutions. To learn how Agilisium Labs can use the power of its generative AI for life sciences analytics, visit them at labs.agilisium.com.
Amar, we've got Anastasia Christiansen on the show today. Who is Anastasia? Dr. Anastasia Christiansen is a visionary data science and AI senior executive with over three decades of experience driving data science, AI, and information technology innovation in the pharmaceutical industry, across four major pharma companies. She was most recently an SVP and global head of AI, data, and analytics at Pfizer.
Prior to that, she was a vice president of IT at Johnson & Johnson, leading the information and technology needs of all therapy areas, including clinical development, pharmaceutical development, digital health, and innovation. In her more than 30 years in pharma, she actually started as a wet lab scientist, leading drug discovery projects before moving into data science and IT, and has worked in a variety of IT and informatics positions at companies including AstraZeneca, Bristol-Myers Squibb, and Leidos Biomed. Anastasia is now a senior consultant and strategic advisor with a wealth of experience applying AI within large pharmaceutical companies. And what are you hoping to hear from Anastasia today? So I would like to hear her thoughts on the pros and cons of different data analytics organizational models in pharma companies, and also some of the successful use cases of AI
and how pharma companies are adopting the use of this technology. Before we begin, I want to remind our audience members that if they want to keep up on the latest episodes of Life Sciences DNA, they should hit the subscribe button. If you're enjoying this content, hit the like button and let us know your thoughts in the comments section. With that, let's welcome Anastasia to the show.
Anastasia, thanks for joining us. You've been intimately involved in the use of AI at some large and well-established companies. But first of all, I would like to get your view on the definition of AI. All right, I'll start first by thanking you for inviting me to join you here. I look forward to an exciting conversation. So as far as definitions of AI, there are a number of official and unofficial definitions of AI, but more or less,
they are all consistent in acknowledging that artificial intelligence, or AI, is a broad term that refers to a variety of technologies that enable machines to perform tasks that typically require human intelligence, or that mimic human intelligence. It's a combination of computer science, math, and cognitive sciences. And it uses algorithms to mimic human behavior.
And so we think of AI as a disruptive technology, changing the way an organization works. But I'd like to start by getting your view from the inside of how a company might evolve its structure to integrate AI and data analytics across different departments. You have headed a lot of these analytics teams, the AI teams. How do you go about it? What are some of the different
ideas you've seen in a company as they implement AI? So I guess the main thing, and as you said, I have seen a lot of different technologies being implemented. In fact, I started off as a scientist in drug discovery and development and moved to data science and technology out of a sheer passion for bringing the experience and the knowledge that I have from
being in the labs, or directing the work in the labs, to using the technology most effectively. There is no one size fits all. And it really is a matter of co-creation, I strongly feel, in that you need to co-create. As far as models for how you organize this kind of technological work, I don't think there's
any one method that works best. And in the end, it's not really the organizational structure that gets things done. I personally believe in a hybrid model, or a hub and spoke model, where you have a central group as the hub, and the spokes are embedded, or rather integrated, into the functions and organizations that you're supporting.
And then you ensure that you are co-creating and working together. As you're talking about the hub and spoke model, as we know in pharmaceuticals, you have a lot of different domains. You have research, and there's biology and chemistry in there. You have development, manufacturing, medical affairs, and so on. So you think one central function can really serve all these different domains in this hub and spoke model?
Well, remember, the spokes are embedded inside of the individual organizations. And then it's a matter of whether they're dotted line or solid line. And I'll say again that organizational lines aren't what gets the work done. It's the partnership that happens. So yes, one central organization will accelerate and facilitate the embedded organizations, the organizations that are integrated into the individual
functions that you mentioned: your manufacturing, research, development, clinical, commercial, et cetera. So the spokes are the ones that are integrated, and the hub is the one that partners with them, scales solutions, and brings knowledge, oftentimes from one organization to the other, because different organizations aren't always as connected as we'd like to think they are. So the hub serves as that glue
that brings the spokes together, that integrates, that has a center of excellence from a technology and a data perspective, and supports the integrated groups that are in individual functions to be most effective. Gotcha. Now, I've also seen in some of the big pharma companies that the AI teams are just separate in research versus development.
What do you think are some of the downsides of having those kinds of models in big pharma? Yeah, so when you have individual groups working on their own in silos, if you don't have that connection to a central organization, that's when you end up reinventing the wheel. You end up with inefficient solutions because you're not scaling from one to another.
I've been in organizations long enough, and in this kind of central organization enough, that I've seen the immediate benefit of taking something from research, modifying it, scaling it, and using it in commercial, or vice versa. Same with manufacturing. You focus on something very specific to manufacturing, you learn from that, and then you take the main components of it
and you can implement them elsewhere. And what you can do is then develop accelerators of sorts that can be enhanced as you take something that you developed for one organization and tweak it, tailor it for another organization. So it goes faster. You learn along the way. And you avoid the siloed thinking and reinventing the wheel.
But then when you think about the hub and spoke model, where you do have this centralized AI team, one of the reasons I've seen that companies are not able to manage that very well is that there is so much to manage. There are so many different areas to cover. And then a lot of times the central team needs to make choices, right? About what to cover and what not. And then sometimes...
the domains get frustrated that they're not getting enough support, or the central team is not close enough. Based on your extensive experience, what is it that that central team needs to do to make this a successful model? So again, this will be my view based on my experience in four major pharma companies, right? The first thing is to think about it as co-creation. So you're creating things
together, you're learning from one another. So when you're co-creating, you also need to prioritize, obviously, for the reasons that you said. You can't be everywhere at the same time. So you need a network, a council, whatever you want to call it, a group that brings together the leaders. Let's say if we're doing it for AI, you need the AI leaders from the individual functions
coming together with the leader of the central group and defining strategy, defining priorities, working together on understanding where one solution is going to help another, where one solution from one function is going to help another solution in another function, and deciding on priorities. Oftentimes what happens is, money talks. One organization has the money, and they might have the loudest voice.
And so, you know, people will start there. But if you have a co-created enterprise strategy, that then brings together where you are going to start and where you are going to end, not necessarily from a functional perspective, but from a capability perspective. And if multiple functions benefit from one capability, that's going to be prioritized higher
than a capability that only one function is going to benefit from. Now, for that one function, it might be the most important thing that they are looking for, and they might need it right away. So that then becomes a conversation about how much the central function can scale and to what extent you just allow that function to
proceed. It's kind of a question of who leads and who follows, right? So maybe that function will lead, because they have the need, they have the knowledge, and they have urgency. So they're going to get started, but the central function will be involved in some minimal way, at least, to learn from it and then take that and run with it later, either scaling it or bringing it to other functions as well. OK.
There's no real one way of doing it. It has to be a partnership, prioritizing together and being ready to, I guess, be agile with decisions. Sure. And when we are talking about, say, the AI team, there's also data, and in data you have, you know, even data quality, data governance, data strategy, data platforms.
Related to them, you have your descriptive analytics and your predictive analytics. Now, a lot of the predictive and prescriptive analytics you can put as part of AI. So even for the AI team itself, what do you think should be part of the AI team? Should there be things that should be separate, in a data team, or how do you think about all of that? Personally, I think that if your data foundations are off, or if your data are siloed,
you're not going to have good, strong AI capabilities. So I see the two as being very tightly connected. You need the data to build the models to be able to make use of AI capabilities. So for me, they do need to be very intimately connected, interrelated. With data comes governance and quality.
And governance: there's no doubt that there has to be governance around data. Who can use the data, and therefore who can use the model? What confidentialities are in place, and so on, so that you can put the right restrictions, if you will, on the models based on any restrictions that are in the data. When it comes to quality,
it's an interesting question, and I've been grappling with that a little bit, because of course you need quality data in order to have good, robust models. But we also know that ChatGPT, as an example, was built on all the data that's available on the internet, and we know that not all of it is quality data. So we also need to be careful about spending an inordinate amount of time cleaning the data and having the data be just so, recognizing that if you don't have some variability in the quality of the data, the model is going to be limited as well, because the model is not going to be able to learn what is good quality data and what is not. So there are some advantages to not looking to
clean the data 100%, but to have some variability in there and allow the model to learn. What you need to be vigilant about is bias in the data. So you need to be clear about what biases there are in the data. And there are methods now, in fact AI-driven methods, that can help with that, that can help
with reviewing data quality, with cleaning data, with filling gaps, with flagging where there's bias in the data, so that the human user can make a decision about what to do about it. But as you're saying, right, the exact organizational model doesn't matter. It's basically that the data team needs to be working very closely with the AI analytics team, and then they need to be working closely with the
business teams. That needs to happen whichever way the organization aligns. Yes. I would say maybe I would hedge less: it helps to have a central data group, but really it has to be a central data group, because that's what's going to de-silo your data. That's what's going to make sure that the foundations are common and that, you know, one builds on the other.
So it absolutely helps to have a central data group. As for the governance aspect of that, there's no way a central group can govern all of the data. So that needs to be done in partnership with the individual functions and the data owners, stewards, whatever you want to call them, within those functions, to make sure that we are applying the right standards and the right access privileges. And so we talked about the hub and spoke model and some of the benefits and challenges, but I just want to focus on that a bit more. Are there any other kinds of challenges that you see? I mean, we talked about what you overcame with co-creation, which is very important, but as you have set up these kinds of teams, what are the challenges that you have faced, and how have you overcome those?
Yeah, some of the challenges, okay, let me see if I can. So first of all is getting enough people in the organization who are both technical and understand the science, or the function that they're supporting, kind of being double-hatted. But of course, you know, enter AI.
So now you need people that wear three hats. They understand the domain that they're working with. They understand the technology very well. And they understand the math, so to speak, right? The math and the computer science of building models. So finding people that have all three skills is not easy.
It's not impossible. There are plenty of folks who do have that, but it's about finding enough of them and bringing enough in. And then the question is, what's enough? Because you're not going to have a department that's fully staffed with people like that. So you also need the really deep technologists, and the really deep mathematicians, and some really deep scientists, if we're talking about the science domain, or commercial folks, marketing folks, if we're talking about commercial and marketing,
in that kind of central organization. But perhaps even more important than that is having a really tight working relationship with the embedded group, which probably would be much stronger on the domain than on the technology. So having that very tight coupling is important. So skills is absolutely the number one
hurdle that we sometimes have to get over. The other thing is, of course, communication. Because people get working, and they forget to communicate with one another. It's just standard, right? So the communication between the various groups that need to be involved. And then the other thing I'd say is the technology. The technology is continually evolving, and even before AI got so popular, it
had been evolving very, very quickly. The technology stack that you use. I'll say we started with trying to get rid of data silos by building data warehouses and data marts, and then moving from that to data lakes. And then it was data lakes, data swamps, and data lakehouses. And now we talk about data fabric.
I'm going to say the technology, but it's also the approach that is continually evolving. And so you need people who are evolving with it, who understand the fit-for-purpose solution, and who are able to design. You have the architecture group that's able to design something that will evolve with the evolution of the technology, so that you don't find yourself five years down the line
starting from scratch. Having that agile, evolutionary mindset about what we're building on the technology stack is critical. That's a tough task, though, I mean, to be able to do all of this, right? It is, which is why you have roles and responsibilities, right? So in your central group, you can't just have
the doers; you need to have the strategists as well. You need to have the people who are going to stay on top of what the technology is, or rather how the technology is evolving, how the tech stack is evolving, how AI capabilities are evolving, how models are learning. So you'll have those experts who are constantly at the forefront,
not necessarily trying to implement everything new, but rather learning what's new and being able to connect with the doers on particular projects to have these meaningful conversations: there's this new capability now, there's this new technology. How does this fit in? When do we integrate that? Let's have a conversation with the people who are going to
use this as end users, right? And when's the right time to move? And if you have really good, strong architects, they will have architected any solution in a way that it can evolve. And that's what's important: being able to build something that's modular enough that you can then evolve it, plug and play, so to speak. Okay. So it's important to have the modular solutions, because then
you can plug and play, but you can also use them in a lot of different use cases across the pharma value chain. Absolutely, and it's really important for your models as well, right? To have plug-and-play components to the models, so you don't have to keep reinventing your code, writing new code. Yeah, absolutely. So this brings me to the question that I want to ask you about: creating the center of excellence for AI, which is the hub there, right?
So what are your thoughts about that? You've already shared some of them, which is that you need to have three types of people there, right? The technology specialists, the math wizards to do the models, and some of these scientists or business folks. So how do you arrange that center of excellence? Org structure is probably less important than the operating model, so I think
you can have more than one, well, not in one organization; within one organization you want one org structure. When you compare different organizations, you can have variations on that org structure. But the operating model really needs to be around developing something, testing it, and then scaling it. So it's kind of a design-make-test model that we might use in the sciences. When you're designing, you have to move in an agile way. The design needs to keep evolving as new technology comes in, as new requirements come in. So you evolve the design. The test is where
you build your model, for example, and you look at whether there are any biases, whether it addresses the need that you have, whether you have all the data that you need, whether you have any biases in those data, and so on. And only when that is tried and true, tested, do you look at scaling. And when you're scaling, think of it as a circle, right? So when you're scaling, you're also looking at: has anything changed? Do we need to change anything in
the final tool? Do we need to test something new to bring into the final solution that you have? So you need to constantly be going around that cycle and moving forward in that cycle. The danger is that you can get stuck in the cycle and not move forward. It's definitely a cycle that keeps moving forward. And if you're doing that appropriately, and you keep monitoring the technology externally, you keep monitoring the requirements, you keep monitoring the results that you're getting, and you're evolving, you almost never have to start from scratch, right? But if you build something and say, okay, here it is, guys, have it, we're going to move to another project, then, depending on what technology you've used, and if it's AI, probably in three months it'll be out of date, right? In the past, it might have taken a year before things were out of date.
And with a hub organization, you have to be careful that you don't run out of resources. Because how much do you staff something that has now delivered the end product, but that you need to stay on top of? So it's that AIOps, if you will, right? AI/ML ops that you need to have.
And how do you staff that appropriately so that you can keep evolving rather than doing the stop and go? Gotcha. OK. And, sorry, so that you have enough resources to do the next project, because you need to keep picking up the next project. So in terms of ROI, what I mean is, what is the return on investment, and how are companies measuring return on investment? What are the specific metrics that they're using?
What are your thoughts here? As you know, enthusiasm around AI grew exponentially at the end of 2022 and into 2023. But we have experience from using AI before that. We've been using AI for, well, the term was first coined in the 1940s, and we've been using it a little bit at a time since then. How did we measure the return on investment
in the earlier use cases? It was mostly based on innovation and operations, acceleration of operations. The use cases that we've developed in the past year and a half to two years now, many of them are in the operations space. What I mean by that is accelerating a lot of the work that we do in drug discovery and development,
accelerating our manufacturing processes, accelerating our clinical trial processes, accelerating writing documents, and so on. So we start by measuring the core of what it is enabling. And usually it's speed, and then the cost of doing this. And ultimately it's the value that the business gets from it. I don't know that
we've reached the point yet where we can speak a lot about the business value. But that's ultimately the goal. And the business value will end up being, honestly, in the pharmaceutical industry, getting medicines to patients faster. So when we reach the point where we have reduced the time from
early discovery through to patient significantly, and keep in mind that that time has been getting shorter and shorter, when we get to the point where we can say it is now half what it used to be, say, in 2025 versus 2022, or more than that, then that's
the value, and you then need to look at what all the components were that contributed to it. So ultimately it's speed to patient and, of course, cost of getting from the very early steps through to, you know, getting something to the patients. A lot of the benefit that we're seeing right now, and I'm not going to say value yet, but benefit, is in the operations, in accelerating operations and,
again, designing trials faster, getting the clinical trial up and running faster, protocols faster, writing clinical study reports faster. Also in the preclinical space, doing many of the operational activities faster, same with manufacturing and so on. The innovation piece, which is what's probably going to take us forward in leaps and bounds, is still coming. What I mean by still coming: we're using it to define or identify new molecules for therapeutic intervention. But you need to test those, and you test them in clinical trials. So that still takes time.
And so I would estimate that we probably have a couple more years before we see the first drug in patients, or being delivered to the market, to the patient, that started with an AI capability designing it, or that had AI being used throughout its entire process, as opposed to what we do now, where we're using AI at different steps, but not necessarily from start to finish. Yes, that'll be pretty interesting. And as you said early on, right now people talk a lot about AI. But I mean, I've been using AI my entire career. Machine learning to me is AI. And that's something in research we've been using for the last two, three decades.
And of course, what we've seen are some of these examples where analytics of this genomics data, proteomics data, using machine learning, has produced some new drugs which have gone all the way. But as you're saying, AI is used in only one specific part, not across the entire pipeline there, right? Yeah, and so one can argue that we are using AI in every step of the drug discovery and development process. That's correct, right? It's a correct statement. What we're doing is using it in step one, step two, step three, without my naming which steps; we're using AI in multiple steps. What we're not yet doing is revolutionizing the steps that we normally take by saying we're skipping three or four steps because we're using AI, or using AI to help us leapfrog over some of the steps. We're not at that point yet where we're holistically using AI across the entire process and eliminating some steps because AI is enabling us to do that. We'll get there. We're just not there yet. Gotcha. Now, you mentioned some of these examples, right? Of getting operational gains or so. Are there one or two examples that really stand out at this point? Like stand out in terms of
really something with AI and machine learning that has really changed the way drug discovery or manufacturing has been done over the last, let's say, decade or two? Yeah, so I will say pharmaceutical development and the manufacturing process, because it is kind of a stochastic process where we know the ends, and, while
complicated in its own right, it's not as complicated as biology is. That's an area that I think has had the most progress and has traditionally started using technologies, AI being one of them, before any of the other steps. And you can see the end results and the benefits faster. So I will say that
during the pandemic, and at more than one company. I worked for one company during the pandemic developing a vaccine, and I joined another company after its vaccine was developed. And I can say that in both cases, and probably for everybody else that was developing a vaccine, that's probably the best example that most people will relate to, where AI was used at different steps of the process. And I know in the company where I was, we used it in the manufacturing process,
building digital twins of the manufacturing process to accelerate developing and delivering the vaccine. So there were multiple steps where AI was used in the process of developing the vaccine. And we saw the benefit, right? We saw the benefit, which was not only because of AI, but AI was a big component. There were also other
components: we were working in a very agile way, and for many of the steps we were putting more people on it in order to go faster, and so on. Not taking shortcuts, but just accelerating, because we could put more people on it, because that was the most important thing that we could do. When you are juggling a bigger portfolio, you have to prioritize, and you often have to,
you know, prioritize one program over another, and you might not have as many resources, so things might go a little bit slower. So while that process of developing the vaccine identified opportunities for us to accelerate, which we have implemented, or the organizations have implemented going forward, there are also some steps that
couldn't necessarily be maintained unless we were putting in an AI capability that we would then use going forward. So now, we talked about the benefits and some of these examples, but when you take these AI solutions from pilot projects and scale them company-wide, what are some of the challenges that you've seen? Not everything can be scaled as well as you
might think. And we've seen examples. So some things will surprise you. What I mean by that, as an example, is taking something that you were doing in research and now scaling it to be used in other functions or in other processes. So what is it that we see with scaling? We see the complexity
and the variability in how different parts of the organization work. And so the learning there is that sometimes it's not as easy to scale a solution as you might think, but you have to look at what is scalable and what is not. So one size fits all
does not always work, right? And that's part of the reason why I mentioned that cycle of design-make-test, right? When you're scaling is when you're actually testing and running it; scaling is kind of the last step before you go back to design. And you can't take shortcuts on that, but you can learn along the way, and
in the design is where you can then look at it: another organization has a different process, so can the same make cycle, if you will, work? Or do we have to modify the design and the make cycle? So I don't know if I'm answering your question, but effectively, scaling is essential. We have to look at being able to scale. It's not always easy.
But when it works, when you're able to scale, you see orders of magnitude of benefit. Absolutely. Now, these days, a lot of tech bio companies are coming up with AI. So what advice would you give the traditional pharma companies as they're engaging with these new tech bio companies for their AI initiatives? Yeah. So I think
that what I mentioned earlier about taking one step at a time, and looking to use AI to accelerate one step at a time, works. It's the approach that we're taking. It's a different approach, though, than the one the tech bio companies are taking. With tech bio companies, because they're starting as digital natives and they're starting without having the legacy process, if you will, in place, they're
reimagining how you would do the entire process. So I would say: take note, partner. And there'll be mutual learning on both sides. Because the tech bio companies, while they are digital natives and they can think outside the box, they might also think outside the box in a way that really isn't feasible, isn't possible.
And you don't want them to find out too late. So that's where the partnership is essential. And you've seen some big partnerships, actually, that have happened, between Recursion, for example, and Genentech, I think, and maybe Amgen as well, I can't remember. So that's just one example. There are multiple other examples where tech bio companies are, you know, Nvidia working with pharmaceutical companies and so on.
I think the partnership is essential. Learning from one another is essential. And there was a recent article that I read, I think it might have been a BCG article, that basically said you have to be careful about how you build your AI center of excellence, or how you build your AI groups, so that you meld
the scientific, pharmaceutical knowledge with the technology knowledge. And you don't pluck someone out of a completely technical organization, plug them in, and assume that they're going to change everything now. Because there's going to be a learning curve, because there are differences in the way scientists work, the way clinicians work, the way health care works. And you have to understand the nuances. You have to understand
the technology, not be limited by it, but you have to understand it in order to apply it appropriately. So you've got to give it some time: if you are hiring a technologist to jump into the pharmaceutical industry, give them the time to learn and adapt, and vice versa. If you're taking someone out of the pharmaceutical industry into a tech organization, you've got to make sure that they're learning
the opportunities that are there and aren't restricted by their history. Gotcha. So you've talked a lot about what companies need to do to implement AI, right? Just trying to summarize all of that, what are the lessons, at a high level, that you would give to companies as they are integrating AI into their approach? Maybe you did talk about
tech and traditional pharma merging, and making sure that it's merging the right way. Organizational boundaries don't matter as long as you're putting the right functions together, right? And doing the co-development. Are there any other big lessons that you would like to give the audience about how to get AI into pharma? Yeah, I would also say,
be careful about treating it as a shiny object where you're just looking for somewhere to apply it. Shiny object, apply it to anything. Instead, think of where the biggest bottlenecks are that you have, that we have. What are the biggest bottlenecks, and how can you apply AI to resolve those bottlenecks? If you're looking at AI as a silver bullet that's gonna solve
every single problem, you're going to get mired in the detail and you're going to overwhelm the organization and the experts that you have, because you're pulling them in too many directions. So I think the most important thing that we can do, whether it be in the pharmaceutical industry or, you know, biotechs, is identify the
two or three or five, however many, not 25. What are the biggest bottlenecks, the biggest problems you have that, if you solve them, are gonna be life-changing, so to speak, or are gonna accelerate what you're doing? Think of those few and then focus on those. Focus on those and work through in a
collaborative way, as I said, co-creating, bringing the technologists and the scientists together to understand one another and work through in an iterative way to resolve it. And some problems are going to take longer than others. You might need to have the patience to resolve it and not go for the silver bullet necessarily, but actually make sure that you are not
boiling the ocean, that you're not trying to solve the problem completely all at once, but you're doing it in an iterative fashion. As long as you're seeing progress, you iterate and you keep moving forward. Thanks for that insight. Dr. Anastasia Christiansen, senior consultant and strategic advisor. Anastasia, thank you for your time and insights today. Amar, thank you. I really appreciate it. This was fun.
Well, Amar, what did you think? It was a fascinating discussion, and she gave a lot of guidance around how pharma companies should set up the right organizational model, how they should work with the tech bio companies, and also how they should have the right operating model as well. Anastasia talked about the need for
collaboration across teams. Do companies generally operate with a central data group, or do they tend to have focused projects and focused teams? Well, I've seen it both ways. What I've seen so far is that in the small and the mid-sized companies, you usually have one centralized group that can cater to the different domains. But then as
the companies become bigger, it becomes a bit hard to manage. That's something that I did ask her about. In the big pharma companies, because these are huge companies, it's really hard to know exactly what's going on in the different groups. That's where I've seen it a bit harder to manage, although some companies are trying that. So in the big pharma companies, I've seen both models.
There's a human resource issue here. She talked about the need for people with scientific, technological, and mathematical expertise. How do you see the integration of AI reshaping the demand for talent within biopharma? Yeah, see, those people, I call them unicorns, because they need to have all three of these aspects to their...
Yes, there are some of them, but it's hard to find them, and it's hard to convince them that they should work at our company for a reasonable price. So yeah, it is a challenge. How to best use them, how to make use of them, is also a big question.
Also, one challenging thing I've seen, especially with those who have really great mathematical and technological knowledge, is why they should work for pharma. A lot of them prefer to work for the big IT companies, or they tend to work for the finance companies. Pharma is something they don't necessarily want to work in. So that is a big challenge. I think, yeah, so this is something, it has been...
I think the pharma companies are doing a good job of attracting the talent now, but I think it's an uphill battle at this point. Although, I mean, a pharma company doesn't need thousands of these people either. They need a handful of these people, based on the size of the company, who can then actually shape the way things should be. And you also have to look at exactly what type of skills you need, because it's hard to get people with
all three skills at the same time. So if you're looking for two specific skills, then you really have to map those people into the right place, so that they are enjoying the job, they're learning from it, and you're getting value from them as well. She talked about the benefits she's seen so far, which have largely been speed. She didn't want to use the word value, though, and she sees the innovation piece as being something that's still several years off. Is that true for the industry more broadly?
I would say that what she was referring to when she talked about this innovation is creating these new molecules, new small and large molecules, using AI technology and driving them through. And yes, it's early days for those, and we haven't really seen great examples of success there. We've had some, you know, we've had a lot of progress in using,
let's say, the traditional machine learning methods in analyzing a lot of experiments and getting a lot of insights. And even with generative AI now, we're seeing a lot of insights, even a lot of content creation, which again is going to speed things up. It's going to get the compounds, the drugs, on the market faster. So the operational efficiency she was talking about, right? This is all going to relate,
to translate into that. Now, creating these new molecules is something where we need to see how it's going to play out. When you apply AI in early discovery, like genomics and proteomics, yes, you do see some novel molecules that you can identify. But this new idea of just creating new molecules that don't exist, that is something that may drive
the best innovation, that may drive the science quite a bit. The last thing she said was that companies really shouldn't think about AI as a silver bullet, but instead they should think about their biggest bottlenecks and how they can apply AI. Is that a more practical way for companies to think about integrating AI? Absolutely. I mean, there is no silver bullet. We have seen it even in my career, like,
I've seen a lot of new things that come up and are hailed as silver bullets. And of course they have their uses, and they have come and stayed. Bioinformatics was a great one at the beginning of my career, when it was thought of as a silver bullet. And of course it did open up some new targets, a lot of new targets actually that we didn't think of before. So it did.
And because of that, we now have more drugs than we would have otherwise, but it's not like we have come up with the cure for all diseases because of it. So these things make progress, and a lot of these technologies will, you know, bring us to the next level of scientific enlightenment. This is another step. Yes, it is a good step. It is a big step. But I like the practical approach that she talked about.
Right now, I do believe there's way too much hype, and people, especially people who don't understand AI, think that it is going to solve everything. So what usually happens is that when people start understanding what it can actually do, they really start bringing their expectations down. So yes, we need to focus on specific areas, because if you try to apply it across the board, you cannot. No one has the resources to apply it across the board. There are so many use cases where it can be applied,
but you have to be very careful. You have to pick the ones where you have the biggest pain points, apply it there, and see if it's working, right? Because there's no guarantee that it's going to work in a specific use case. So you have to do that, and then you learn from that and have some practical expectations about the success of AI. Well, a lot to think about there, but another great conversation. Amar, until next time. Thank you, Danny.
Thanks again to our sponsor, Agilisium Labs. Life Sciences DNA is a bi-monthly podcast produced by the Levine Media Group with production support from Fullview Media. Be sure to follow us on your preferred podcast platform. Music for this podcast is provided courtesy of the Jonah Levine Collective. We'd love to hear from you. Pop us a note at danny@levinemediagroup.com. For Life Sciences DNA and Dr. Amar Drawid,
I'm Daniel Levine. Thanks for joining us.