Lighthouse Blog

Read the latest insights from industry experts on the rapidly evolving legal and technology landscapes, covering topics such as strategic and technology-driven approaches to eDiscovery, innovation in artificial intelligence and analytics, modern data challenges, and more.

Blog

Modern Attachments and Ephemeral Data: The Challenging Duo of Second Requests

Antitrust
eDiscovery and Review
Information Governance
Blog

The Data Behind Enterprise AI Adoption

AI and Analytics
Information Governance
Blog

Top Traits of an Innovative eDiscovery Project Manager (In-House Counsel Edition)

eDiscovery and Review
Legal Operations
Blog

To Find Effective AI Solutions, Look for Four Qualities in an eDiscovery Partner

AI and Analytics
eDiscovery and Review
Blog

Five Considerations for a Better Negotiated ESI Protocol with Modern Data

Chat and Collaboration Data
eDiscovery and Review
Information Governance
Blog

Does It Actually Work? How to Measure the Efficacy of Modern AI

The first step to moving beyond the AI hype is also the most important. That’s when you ask: How can AI actually make my work better? It’s the great ROI question. Technology solutions are only as good as the benefits they provide. So it’s critical to consider an AI solution’s efficacy—its ability to deliver the benefits you expect of it—before bringing it on board.

To help you do that, let’s walk through what efficacy means in general, and then look at what it means for the two types of modern AI.

Efficacy varies depending on the solution

You can measure efficacy in whatever terms matter most to you. For simplicity’s sake, let’s focus on quality, speed, and cost.

When you’re looking to improve efficacy in those ways, it’s important to remember that not all AI is the same. You need to choose technology suited for your task. The two types of AI that use large language models (LLMs) are predictive AI and generative AI (for a detailed breakdown, see our previous article on LLMs and the types of AI). Because they perform different functions, they impact quality, speed, and cost in different ways.

Measuring the efficacy of predictive AI

Predictive AI predicts things, such as the likelihood that a document is responsive, privileged, etc. Here’s how it works, using privilege as an example:

1. Attorneys review and code a sample set of documents.
2. Those docs are fed to the AI model to train it—essentially, teaching it what does and doesn’t count as privilege for this matter.
3. The classifier then analyzes the rest of the dataset and assigns a percentage to each document: The higher the percentage, the more likely the document is to be privileged.

The training period is a critical part of the efficacy equation. It requires an initial investment in eyes-on review, but it sets the AI up to help you reduce eyes-on review down the line. The value is clearest in large matters: Having attorneys review 4,000 documents during the training period is more than worth it when AI removes more than 100,000 from privilege review.

With that in mind, here’s how you could measure the efficacy of a predictive AI priv classifier.

Quality: Does AI make effective predictions? AI privilege classifiers can be very effective at identifying privilege, including catching documents that other methods miss. A client in one real-life matter used our classifier in combination with search terms—and our classifier found 1,600 privileged docs that weren’t caught by search terms. Without the classifier, the client would have faced painful disclosures and clawbacks.

Speed: Does predictive AI help you move faster? AI can accelerate review in multiple ways. Some legal teams use the percentages assigned by their AI priv classifier to prioritize review, starting with the most likely docs and reviewing the rest in descending order. Some use the percentages to cull the review population, removing docs below a certain percentage and reviewing only those docs that meet a certain threshold of likelihood. One of our clients often does both. For 1L review, they prioritize docs that score in the middle. Docs with extremely high or low percentages are culled: The most likely docs go straight to 2L review, while the least likely docs go straight to production. By using this method during a high-stakes Second Request, the client was able to remove 200,000 documents from privilege review. (A sketch of this score-based routing appears at the end of this section.)

Cost: Does predictive AI save you money? Improving speed and quality can also improve your bottom line. During the Second Request mentioned above, our client saved 8,000 hours of attorney time and more than $1M during privilege review.
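To make that score-based routing concrete, here is a minimal sketch in Python. It assumes a trained classifier has already assigned each document a privilege score between 0 and 1; the tier thresholds and document IDs are invented for illustration, not prescriptions.

```python
# Minimal sketch of score-based review routing. Assumes a trained
# privilege classifier has already scored each document from 0.0 to
# 1.0. Thresholds are illustrative only; real cutoffs should be
# validated with sampling before use.

CULL_LOW = 0.05   # below this, docs go straight to production
SEND_HIGH = 0.95  # above this, docs skip 1L and go to 2L review

def route_document(doc_id: str, priv_score: float) -> str:
    """Assign a document to a review tier based on its privilege score."""
    if priv_score < CULL_LOW:
        return "production"   # highly unlikely to be privileged
    if priv_score > SEND_HIGH:
        return "2L_review"    # almost certainly privileged
    return "1L_review"        # middle scores get an eyes-on first pass

# Example: a toy set of (doc_id, score) pairs
docs = [("DOC-001", 0.02), ("DOC-002", 0.50), ("DOC-003", 0.98)]

for doc_id, score in docs:
    print(doc_id, "->", route_document(doc_id, score))
# DOC-001 -> production
# DOC-002 -> 1L_review
# DOC-003 -> 2L_review
```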
Measuring the efficacy of generative AI

Generative AI (or “gen AI”) generates content, such as responses to questions or summaries of source content. Use cases for gen AI in eDiscovery vary widely—and so does the efficacy.

For our first gen AI solution, we picked a use case where efficacy is straightforward: privilege logs. In this case, we’re not giving gen AI open-ended questions or a sprawling canvas. We’re asking it to draft something very specific, for a specific purpose. That makes the quality and value of its output easy to measure.

This is another case where AI’s performance is tied to a training period, which makes efficacy more significant in larger matters. After analysts train the AI on a few thousand priv logs, the model can generate tens of thousands on its own.

Given all that, here’s how you might measure efficacy for gen AI.

Quality: Does gen AI faithfully generate what you’re asking it to? This is often tricky, as discussed in an earlier blog post about AI and accuracy in eDiscovery. Depending on the prompt or situation, gen AI can do what you ask it to without sticking to the facts. So for gen AI to deliver on quality and defensibility, you need a use case that affords:

- Control: AI analytics experts should be deeply involved, writing prompts and setting boundaries for the AI-generated content to ensure it fits the problem you’re solving. Control is critical to drive quality.
- Validation: Attorneys should review and be able to edit all content generated by AI. Validation is critical to measure quality.

Our gen AI priv log solution meets these criteria. AI experts guide the AI as it generates content, and attorneys approve or edit every log the AI generates. As a result, the solution reliably hits the mark. In fact, outside counsel has rated our AI-generated log lines better than log lines by first-level contract attorneys.

Speed: Does gen AI help you move faster? If someone (or something) writes content for you, it’s usually going to save you time. But as I said above, you shouldn’t accept whatever AI generates for you. Consider it a first draft—one that a person needs to review before calling it final. Still, reviewing content is a lot faster than drafting it, so our priv log solution and other gen AI models can definitely save you time.

Cost: Does gen AI save you money? Giving AI credit for cost savings can be hard with many use cases. If you use gen AI as a conversational search engine or case-strategy collaborator, how do you calculate its value in dollars and cents? But with priv logs, the financial ROI is easy to track: What do you spend on priv logs with gen AI vs. without? Many clients have found that using our gen AI for the first draft is cheaper than using attorneys.
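The control-and-validation pattern described under Quality can be expressed as a simple workflow: the model drafts, an attorney approves or edits, and every edit is recorded so quality can be measured over time. Here is a minimal sketch, where draft_log_line() is a hypothetical stand-in for a real gen AI call, not Lighthouse’s actual implementation.

```python
# Minimal sketch of a draft-then-validate privilege-log workflow.
# draft_log_line() is a hypothetical placeholder for a real gen AI
# call; an actual solution would plug in its own model and prompts.

from dataclasses import dataclass

def draft_log_line(doc_id: str) -> str:
    """Placeholder for a gen AI call that drafts a priv log description."""
    return f"Email reflecting legal advice of counsel re {doc_id}."

@dataclass
class LogLine:
    doc_id: str
    draft: str
    final: str
    edited: bool

def validate(doc_id: str, draft: str, attorney_edit: str | None) -> LogLine:
    """Attorney approves the draft as-is or supplies an edited version."""
    if attorney_edit is None:
        return LogLine(doc_id, draft, draft, edited=False)
    return LogLine(doc_id, draft, attorney_edit, edited=True)

# Example: two docs, one approved unchanged, one lightly edited
lines = [
    validate("DOC-101", draft_log_line("DOC-101"), None),
    validate("DOC-102", draft_log_line("DOC-102"),
             "Memo prepared at direction of counsel re DOC-102."),
]

# The edit rate is one concrete quality signal for the validation step
edit_rate = sum(l.edited for l in lines) / len(lines)
print(f"Edit rate: {edit_rate:.0%}")  # Edit rate: 50%
```

Tracking the edit rate gives you a measurable quality signal: the lower it trends, the more of the drafting work the model is genuinely doing.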
Where can AI be effective for you?

This post started with one question—How can AI make your work better?—but you can’t answer it without also asking where. Where are you thinking about applying AI? Where could your team benefit the most?

So much about efficacy depends on the use case. It determines which type of AI can deliver what you need. It dictates what to expect in terms of quality, speed, and cost, including how easy it is to measure those benefits and whether you can expect much benefit at all.

If you’re struggling to figure out what benefits matter most to you and how AI might deliver on them, sign up below to receive our simple guide to thinking about AI. It walks through seven dimensions of AI that are changing eDiscovery, including sections on efficacy, ethics, job impacts, and more. Each section includes a brief questionnaire to help you clarify where you stand—and what you stand to gain.
AI and Analytics
eDiscovery and Review
Blog

From Data to Decisions, AI is Improving Accuracy for eDiscovery

You’ve heard the claims that AI can increase the accuracy of analytical tasks during eDiscovery. They’re true when the AI in question is being developed responsibly through proper scoping, iterative testing, and validation. As we have known for over a decade in legal tech circles, the computational power of AI (and machine learning in particular) is perfectly suited to the large data volumes at play in many matters and the types of classification assessments required for document review.

But how much of a difference can AI make? And what impact do large language models (LLMs) have in the equation beyond traditional approaches to machine learning? How do these boosts in accuracy help legal teams meet deadlines, preserve budget, and achieve other goals?

To answer these questions, we’ll look at several examples of privilege review from real-world matters. Priv is far from the only area where AI can make a difference, but for this article it’ll help to keep a tight focus. Also, we’ve been enhancing privilege review with AI since 2019, so when it comes to accuracy—we have plenty of proof.

What accuracy means for the two primary types of AI

Before we explore examples, let’s review the two relevant categories of AI and what they do in an eDiscovery context.

Predictive AI leverages historical data to predict outcomes on new data. For eDiscovery, we use predictive AI to provide a metric on the likelihood that a document falls under a certain classification (responsive, privileged, etc.) based on a previously coded training set of documents.

Generative AI creates novel content based directly on input data. For eDiscovery, one example could be leveraging generative AI to develop summaries of documents of interest and answers to questions we may have about the facts present in those documents.

In today’s context, both types of AI are built with LLMs, which learn from vast stores of information how to navigate the nuances and peculiarities of language as people actually write and speak it. (In a previous post, we share more information about LLMs and the two types of AI.)

Because these two types of AI are focused on different goals and have different outputs, predictive and generative AI also have different definitions of accuracy.

Accuracy for predictive AI is tied to a traditional sense of the truth: How well can the model predict what is true about a given document?

Accuracy for generative AI is more fluid: A generative AI model is accurate when it faithfully meets the requirements of whatever prompt it was given. If you ask it to tell you what happened in a matter, it may make up facts and still be “accurate” to the prompt. Whether the response is true and grounded in the facts of the matter depends on the prompt, tuning mechanisms, and validation.

All that said, both types of AI have use cases that allow legal teams to measure their accuracy and impact.
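For predictive AI, that measurement usually means comparing the model’s calls against attorney coding on a validation sample, using standard precision and recall calculations. A minimal sketch follows; the coding data is invented purely for illustration.

```python
# Minimal sketch: measuring a privilege classifier against attorney
# coding on a validation sample. The coding data here is invented
# purely for illustration.

# 1 = attorney coded the doc privileged, 0 = not privileged
attorney_coding = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
# Model's calls on the same docs (scores thresholded at some cutoff)
model_calls     = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]

true_pos  = sum(a == 1 and m == 1 for a, m in zip(attorney_coding, model_calls))
false_pos = sum(a == 0 and m == 1 for a, m in zip(attorney_coding, model_calls))
false_neg = sum(a == 1 and m == 0 for a, m in zip(attorney_coding, model_calls))

# Precision: of the docs the model flagged privileged, how many were?
precision = true_pos / (true_pos + false_pos)
# Recall: of the truly privileged docs, how many did the model catch?
recall = true_pos / (true_pos + false_neg)

print(f"Precision: {precision:.0%}, Recall: {recall:.0%}")
# Precision: 80%, Recall: 80%
```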
Priv classifiers prove to be more accurate than search terms

Our first example comes from a quick-turn government investigation of a large healthcare company. For this matter, we worked with counsel to train an AI model to identify privilege and ran it in conjunction with privilege search terms. The privilege terms came back with 250K potentially privileged documents, but the AI model found that more than half of them (145K) were unlikely to be privileged. Attorneys reviewed a sample of the disputed docs and agreed with the AI. That gave counsel the confidence they needed to remove all 145K from privilege review—and save their client significant time and money.

We saw similar results in another fast-paced matter. Search terms identified 90K potentially privileged documents. Outside counsel wanted to reduce that number to save time, and our AI privilege model did just that. Read the full story on AI and privilege review for details.

Let’s return to our definition of accuracy for predictive AI: How well did the model predict what was true about the documents? Very well, and more accurately than search terms. Now what about generative AI?

Generative AI can draft more accurate priv logs than people

We have begun to use generative AI to draft privilege log descriptions. That’s an area where defining accuracy is clear-cut: How well does the log explain why the doc is privileged?

During the pilot phase of our AI priv log work, we partnered with a law firm to answer that very question. With permission from their client, the firm took privilege logs from a real matter and sent the corresponding documents through our AI solution. Counsel then compared the log lines created by our AI model against the original logs from the matter. They found that the AI log lines were 12% more accurate than those drafted by third-party contract reviewers. They also judged the AI log lines to be more detailed and less repetitious.

We have evidence from live matters as well. During one with a massive dataset and urgent timeline, outside counsel used our generative AI to create privilege logs and asked reviewers to QC them. During QC, half the log lines sailed through with zero edits, while the other half were adjusted only slightly. You can see what else AI achieved in the full case study about this matter.

More accurate review = more efficient review (with less risk)

Those accuracy numbers sound good—but what exactly do they mean for legal teams? What material benefits do you get from improving accuracy? Several, including:

- Better use of attorney and reviewer time. With AI accurately identifying priv and non-priv documents, attorneys spend less time reviewing no-brainers and more time on documents that require more nuanced analysis. In cases where every document will be reviewed regardless, you can optimize review time (and costs) by sending highly unlikely docs to lower-cost contract resources and reserving your higher-priced review teams for close calls.
- Opportunities for culling. Attorneys can choose a cutoff at a recall level that makes sense for the matter (even 100%) and automatically remove all documents under that threshold from review, sending them straight to production. This is a crisp, no-fuss way to avoid spending time and resources on documents highly unlikely to be privileged. (A sketch of choosing such a cutoff follows this list.)
- Lower risk of inadvertently producing privileged documents. Pretty simple: The better your system is for classifying privilege, the less likely you are to let privileged info slip through review.
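Here is a minimal sketch of that cutoff selection, assuming you have classifier scores plus attorney coding for a validation sample; the numbers are invented for illustration. The idea: pick the highest score threshold that still catches the target share of known privileged documents.

```python
# Minimal sketch: choosing a score cutoff that preserves a target
# recall on a coded validation sample. Scores and coding are invented
# for illustration only.

def recall_safe_cutoff(scored_sample: list[tuple[float, bool]],
                       target_recall: float = 1.0) -> float:
    """Return the highest cutoff whose recall on known privileged
    docs is still >= target_recall. Docs scoring below the cutoff
    can be culled from privilege review."""
    priv_scores = sorted(s for s, is_priv in scored_sample if is_priv)
    n_priv = len(priv_scores)
    # Number of privileged docs we are allowed to miss
    allowed_misses = int(n_priv * (1.0 - target_recall))
    # Cutoff sits at the lowest-scoring privileged doc we must keep
    return priv_scores[allowed_misses]

# (score, attorney_coded_privileged) pairs from a validation sample
sample = [(0.97, True), (0.91, True), (0.62, True), (0.40, False),
          (0.33, True), (0.12, False), (0.08, False), (0.03, False)]

cutoff = recall_safe_cutoff(sample, target_recall=1.0)
print(f"Cull everything scoring below {cutoff}")  # below 0.33
```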
What does accuracy mean to you?

I hope this post helps clarify how exactly AI can improve accuracy during eDiscovery and what other benefits that can lead to. Now it’s time to consider what all this means to you, your team, and your work. How important is accuracy to you? How do you measure it? Where would it help to improve accuracy, and what would you get out of that?

To help you think it through, we assembled a user-friendly guide that covers accuracy and six other dimensions of AI that are changing the way people think about eDiscovery today. The guide includes brief definitions and examples, along with key questions like the ones above to help you craft an informed, personal point of view on AI’s potential.
AI and Analytics
eDiscovery and Review
Blog

Navigating the New Reality of HSR Second Requests

Antitrust
eDiscovery and Review