Pharma Talks: Building Trust in Agentic AI in Pharma 

A look at how agentic AI can be deployed safely within the highly regulated pharmaceutical industry, ensuring outputs are based solely on trusted, client-approved data.

In a special episode of Pharma Talks, host Nataliya Andreychyk explored one of the most pressing topics in the industry today: trust in AI — specifically, agentic AI in the pharmaceutical industry. Joining her was Anna Shavurska, an AI solutions architect with over a decade of experience in technology.

Anna brought deep technical expertise to the conversation. With 14 years at Viseven, she has worked across front-end, back-end, and AWS environments, and has contributed to building the core of key products such as eWizard and eVa.

Nataliya highlighted the significance of Anna’s role not only as an expert but also as a woman shaping advanced AI technologies. She noted that having a “female touch” in building such systems is both powerful and necessary, especially in a field where women remain underrepresented.

Can Pharma Truly Trust Agentic AI?

A central question guided the discussion: can companies truly trust agentic AI in pharma with their data and processes?

Anna acknowledged that this concern is not only valid but widespread. She noted that many pharma clients are cautious about how AI systems use and process their data. However, she explained that modern architectures are designed specifically to address these risks.

At the core of Viseven’s approach is Retrieval-Augmented Generation (RAG). As Anna explained, this method ensures that AI systems rely only on approved, client-provided data stored in secure knowledge bases.

General-purpose LLMs typically draw on broad training data, which is not safe for pharma clients. Instead, Viseven's solution retrieves information exclusively from trusted internal sources.

She further clarified that these systems do not learn from client data. The data remains secure, compliant, and isolated — meaning it is not used to train underlying models. When Nataliya asked whether this implies a kind of “short memory,” Anna confirmed that it does.
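The RAG pattern Anna describes can be sketched in a few lines: retrieve only from a client-approved knowledge base, then constrain the model to answer from those sources alone. The knowledge base contents, the overlap-based scoring, and the prompt wording below are illustrative assumptions, not Viseven's actual implementation.

```python
# Minimal RAG sketch: answers are grounded in approved documents only.
# Document IDs, texts, and the naive term-overlap ranking are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# A tiny stand-in for a secure, client-approved knowledge base.
KNOWLEDGE_BASE = [
    Document("label-001", "Drug X is indicated for adults with type 2 diabetes."),
    Document("msl-014", "Drug X should not be combined with sulfonylureas."),
]

def retrieve(query: str, top_k: int = 1) -> list[Document]:
    """Rank approved documents by term overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Constrain the model to the retrieved, approved sources only."""
    sources = retrieve(query)
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in sources)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("What is Drug X indicated for?")
print(prompt)
```

Because the prompt carries the document IDs alongside the text, the same mechanism also supports the "short memory" Anna confirmed: nothing from the client's data ever flows back into model training.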

Eliminating Hallucinations Through Traceability

Another major concern in pharma AI adoption is the risk of hallucinations — when AI generates incorrect or misleading information.

Anna explained that while no system is entirely risk-free, the use of high-quality, validated data significantly reduces these risks. More importantly, their approach ensures full traceability.

She noted that every AI-generated output is supported by references, with the system providing “complete linkage of all documents used to generate every sentence.” This level of transparency is critical in regulated industries like pharma, where every claim must be verifiable.
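One way to picture this "complete linkage" is a data model in which every generated sentence carries the IDs of the documents that support it, making unreferenced claims easy to flag. The classes and field names below are an illustrative assumption, not the platform's actual schema.

```python
# Sketch of per-sentence traceability: each sentence records its sources,
# so any unsupported claim can be caught before review. Hypothetical schema.
from dataclasses import dataclass, field

@dataclass
class TracedSentence:
    text: str
    source_ids: list[str] = field(default_factory=list)

@dataclass
class TracedOutput:
    sentences: list[TracedSentence]

    def unsupported(self) -> list[str]:
        """Return any sentences that lack a supporting reference."""
        return [s.text for s in self.sentences if not s.source_ids]

output = TracedOutput([
    TracedSentence("Drug X is indicated for adults.", ["label-001"]),
    TracedSentence("Do not combine with sulfonylureas.", ["msl-014"]),
])

print(output.unsupported())
```

In a regulated review workflow, an empty `unsupported()` list is what lets every claim be traced back to a validated source.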

From Content Chaos to Scalable Creativity

The conversation then moved to the topic of content creation, particularly the challenges of generating consistent, compliant materials across channels.

Nataliya shared her own experience with inconsistency when using tools like ChatGPT, noting that manual corrections are often required. Anna responded by explaining how structured and compliant AI systems solve this issue.

She described how eVa AI Agent uses predefined “prompt cards” and “hot actions” to guide users through the content creation process. Rather than starting from scratch, users are prompted to follow specific AI-driven workflows.

Additionally, the system relies on master templates — HTML-coded structures with approved design elements and content blocks. This ensures that outputs are not only visually consistent but also compliant with regulatory standards.
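A master template of this kind can be thought of as pre-approved HTML with named content slots: generated copy can only land inside compliant markup. The template structure, class names, and slot names below are illustrative assumptions, not eWizard's actual markup.

```python
# Toy sketch of a master template: approved HTML structure with named slots.
# Class names and slots are hypothetical examples.
MASTER_TEMPLATE = (
    '<div class="email">'
    '<h1 class="brand-header">{headline}</h1>'
    '<p class="body-copy">{body}</p>'
    '<p class="safety-info">{safety}</p>'
    "</div>"
)

def render(headline: str, body: str, safety: str) -> str:
    """Insert generated content into the approved structure only."""
    return MASTER_TEMPLATE.format(headline=headline, body=body, safety=safety)

html = render(
    headline="New data for Drug X",
    body="Key efficacy results from the latest study.",
    safety="See full prescribing information.",
)
print(html)
```

Because the surrounding structure is fixed, every rendered output stays visually consistent and keeps required elements, such as safety information, in place.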

The platform also enables companies to reuse existing materials. Given the vast amount of content pharma companies already produce, Anna emphasized the value of giving these assets “another life” by adapting them to different channels and formats.

Simplifying Prompting with Predefined Workflows

Prompting remains one of the biggest barriers to effective AI use. As Nataliya noted, crafting the “perfect prompt” can be overwhelming, especially when best practices involve long and complex instructions.

Anna explained that this challenge is addressed through prompt libraries tailored to each client’s needs. These libraries include pre-configured prompts aligned with specific workflows — whether generating emails, adapting existing content, or building campaigns from scratch.

Different organizations can configure their own scenarios, ensuring that the AI supports their unique processes. Users can either select prompts from the library or rely on guided “next steps” suggested by the system.
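A prompt library like the one Anna describes can be sketched as pre-configured prompt templates keyed by workflow, so users fill in parameters rather than writing prompts from scratch. The workflow names, template wording, and parameters below are hypothetical examples, not the actual library contents.

```python
# Sketch of a client-configurable prompt library: pre-built "prompt cards"
# keyed by workflow. All names and wording here are illustrative.
PROMPT_LIBRARY = {
    "email_generation": (
        "Write a compliant HCP email about {product}, using only the "
        "approved claims provided. Tone: professional, non-promotional."
    ),
    "content_adaptation": (
        "Adapt approved asset {asset_id} for the {channel} channel, "
        "preserving all claims and references exactly."
    ),
}

def get_prompt(workflow: str, **params: str) -> str:
    """Fill a pre-configured prompt card instead of authoring one manually."""
    template = PROMPT_LIBRARY[workflow]
    return template.format(**params)

msg = get_prompt("email_generation", product="Drug X")
print(msg)
```

Each organization would populate its own library, so the same mechanism supports emails, adaptations, or full campaigns without users ever crafting a "perfect prompt" themselves.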

This structured approach removes the burden from users, making agentic AI more accessible and practical in everyday pharma content operations.


Why Women in AI Matter More Than Ever

Beyond technology, the episode also addressed a broader and equally important topic: the role of women in AI.

Nataliya pointed out that, despite increasing visibility, women are still underrepresented in the field. She emphasized the importance of bringing diverse perspectives into AI development. Anna agreed, adding that the issue goes beyond representation — it directly impacts how AI systems behave.

AI is not just an algorithm. It is a system designed by humans and trained on human data.

She highlighted real-world examples where AI systems trained on historical data reproduced gender biases, such as favoring male candidates in hiring processes. While the data itself may reflect historical realities, relying on it without diverse input can reinforce existing inequalities.

If only men develop these systems, and only male-centered data is used, the outcomes will reflect that. This is why a “woman’s touch” is essential to make AI systems more inclusive and balanced.

A Personal Journey into Tech

The episode concluded on a more personal note, with Nataliya asking Anna about her journey into technology.

Anna shared that her path began during her university years, supported by encouragement from her father. He told her that if she completed her studies, she would become an excellent engineer within a few years. Once she started, she realized how much she enjoyed the field. “It’s its own world,” she said, reflecting on her passion for technology.

Nataliya closed the conversation by praising Anna as “an amazing engineer” who is making a real difference in the industry.

Building Trust — Technically and Humanly

The discussion made one thing clear: trust in pharma AI tools is not built through promises alone, but through architecture, transparency, and thoughtful design. From secure data handling and traceable outputs to structured workflows and inclusive development, the foundations of trustworthy AI in life sciences are both technical and human.

As pharma continues its digital transformation, voices like Anna’s, and platforms like eVa, are helping bridge the gap between innovation and trust.