Q&A: Ensuring Defensibility in GenAI Workflows

October 30, 2025

By:

Cassie Blum
Erica Jordan


Summary: Cassie Blum, Senior Director of AI & Analytics at Lighthouse, answers frequently asked questions about GenAI defensibility—from human-in-the-loop validation to rigorous process documentation—and explains how legal teams can confidently put AI to work without compromising accountability or accuracy.

As generative AI continues to make inroads in eDiscovery, legal teams are asking a critical question: Are AI outputs defensible?  

Concerns about accountability, accuracy, and alignment with legal standards are natural. After all, defensibility is non-negotiable in litigation and regulatory matters, and it doesn’t happen by chance.  

To shed light on this, I sat down with Cassie Blum, Senior Director of AI & Analytics at Lighthouse, to answer the most common questions we hear from legal teams about the defensibility of GenAI in eDiscovery.  

Erica: What does defensibility mean for GenAI in eDiscovery? 

Cassie: Defensibility means AI-generated output can be explained, justified, and upheld under legal scrutiny. It’s about ensuring that AI results aren’t arbitrary by providing transparency, accountability, and reliability.

How are people involved in GenAI workflows in eDiscovery?  

Human oversight is key. In defensible GenAI workflows, prompt engineers manage AI instructions in consultation with counsel and subject matter experts, while attorneys review results on sample populations to confirm accuracy and alignment with legal objectives. This iterative, human-in-the-loop feedback process ensures that GenAI results reflect both technical rigor and legal judgment.

What role should attorneys play in GenAI workflows? 

Attorneys play a critical role in the governance and oversight of GenAI workflows, providing context and guidance during workflow and prompt formulation and validating the resulting AI outputs. They do not need to create or refine prompts themselves, but they confirm that both the GenAI processes and the outputs meet legal standards and reflect their professional judgment. This ensures the results are defensible and aligned with case-specific priorities.

How does GenAI defensibility compare to predictive AI approaches? 

The defensibility of predictive AI workflows, such as responsive or privilege identification, is more straightforward because they largely follow established and accepted TAR workflows, relying on models trained with attorney coding and validated through sample review. Generative AI, while transformative, requires a similar discipline of rigorous process documentation and human validation: outputs are iteratively reviewed and validated by attorneys to ensure accuracy and alignment with legal guidance, while expert prompt engineers manage the AI instructions.

How are quality and accuracy maintained in GenAI workflows? 

A sample-based approach can be used to provide both qualitative and quantitative metrics on AI performance. Iterative review with attorney feedback ensures that outputs are consistent, reliable, and defensible. Documenting prompts and their versions adds further transparency, as do the rationales, considerations, and citations that ground results.
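To make the quantitative side concrete, here is a minimal sketch of what sample-based scoring can look like. This is a hypothetical illustration, not Lighthouse’s actual tooling; the function and variable names (sample_validation, ai_labels, attorney_labels) are invented, and it assumes simple binary labels. AI decisions on a random document sample are scored against attorney coding to produce agreement, precision, and recall figures.

```python
# A minimal sketch (not Lighthouse's actual tooling) of sample-based scoring.
# All names here are hypothetical, and labels are assumed to be binary
# (True = the document is tagged for the issue).
import random

def sample_validation(ai_labels: dict, attorney_labels: dict,
                      sample_size: int, seed: int = 42) -> dict:
    """Score AI labels against attorney coding on a random document sample."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    doc_ids = rng.sample(sorted(attorney_labels), sample_size)
    tp = sum(1 for d in doc_ids if ai_labels[d] and attorney_labels[d])
    fp = sum(1 for d in doc_ids if ai_labels[d] and not attorney_labels[d])
    fn = sum(1 for d in doc_ids if not ai_labels[d] and attorney_labels[d])
    return {
        "agreement": sum(ai_labels[d] == attorney_labels[d] for d in doc_ids) / sample_size,
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # how often AI tags are correct
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # how much the AI misses
    }

# Toy example: 1,000 documents, with simulated attorney coding that agrees
# with the AI roughly 90% of the time.
ai = {i: random.random() < 0.3 for i in range(1000)}
attorney = {i: ai[i] if random.random() < 0.9 else not ai[i] for i in range(1000)}
print(sample_validation(ai, attorney, sample_size=200))
```

In practice, the qualitative half of the review, attorneys reading sampled documents alongside the AI’s rationales and citations, carries as much weight as these numbers.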

Has there been pushback on this methodology?

No. This human-in-the-loop methodology, combining expert prompt engineering and iterative validation, ensures transparency, accountability, and alignment with legal standards. This structured process addresses the core concerns around defensibility and provides the confidence that legal teams, courts, and regulators need in GenAI-assisted workflows.

What are some examples of how you are defensibly incorporating GenAI into client workflows?  

Two straightforward examples come to mind; both illustrate how human oversight and iterative validation make GenAI outputs defensible.

  • Issue Coding: AI classifies documents by issue. Outputs are reviewed by attorneys on sample sets, and prompts are refined as needed. 
  • Privilege Logging: AI drafts initial log descriptions based on the privilege protocol and formatting requirements provided by counsel. Sample sets are reviewed, and prompts are refined as needed, with attorneys performing an overall review of the output to confirm alignment with case strategy and privilege doctrines.

My team and I have also been successfully supporting clients who want to use GenAI for responsiveness. It’s worth noting that responsiveness identification can be defensible but is less straightforward in the absence of settled case law.

What is the key takeaway for legal teams considering GenAI? 

For best results, consider working with a vendor that specializes in AI-powered eDiscovery and brings depth and breadth of consulting expertise: workflow development, process documentation, statistical modeling and validation, and a linguistics-based approach to both prompt and search development. We are constantly thinking about how best to use GenAI to support eDiscovery, including its defensibility, in different contexts. The best practices we bring to the table are a result of this experience and help clients confidently deploy GenAI.

Conclusion 

Generative AI offers tremendous potential to streamline eDiscovery, but only when implemented with accountability and oversight. By combining expert prompt engineering, human-in-the-loop review, and iterative validation, legal teams can confidently use AI without compromising defensibility. 

Defensible AI isn’t about replacing attorneys; it’s about amplifying their expertise, ensuring every AI output is accurate, reliable, and aligned with legal objectives. With the right methodology, generative AI becomes not just a tool, but a trusted partner in eDiscovery workflows. 

Have questions about the defensibility of GenAI solutions? Check out the Lighthouse AI page or connect with one of our experts.

About the Author

Cassie Blum

Cassie is a Senior Director for Review Consulting and has over 15 years of eDiscovery experience spanning the EDRM as an Attorney, Project Manager, Review Managing Attorney, and Consultant. She has extensive experience supporting global financial services, technology, and pharmaceutical industry clients with workflow consultation and review management, including early case assessment, complex data repository management, multi-district litigation, and second requests. In her current role, Cassie oversees a team of consultants who advise on review-related workflows, leveraging best practices, leading technologies, and Lighthouse offerings to reduce cost and streamline review. Cassie also advises on and develops Lighthouse process standards and best practices, along with steering new product development. She received her J.D. from Saint Louis University School of Law and is licensed to practice in Missouri.

About the Author

Erica Jordan

Erica Jordan is Solutions Marketing Director for Lighthouse's AI-powered eDiscovery solutions. With nearly a decade of B2B product marketing experience—much of it in the fast-paced world of PNW startups—she is known for translating complex ideas into clear, compelling narratives that resonate with legal teams. Passionate about innovation, Erica helps clients harness emerging technologies to solve complex challenges and achieve measurable results.