Ten Questions We’re Asked About Lighthouse AI Search (and Our Answers)

August 18, 2025

By:

Dana Feeney


Summary: In our latest webinar, we showed how Lighthouse AI Search turns massive datasets into fast, accurate answers. Here are the top 10 audience questions—and expert responses—you won’t want to miss.

We recently hosted the webinar, The Evolution of Answers: Introducing Lighthouse AI Search, where we showed how teams can move from million-doc datasets to grounded answers in a matter of minutes, with a high degree of accuracy, and without Boolean gymnastics. Along with a live demo of AI Search, our expert presenters led a lively round of Q&A—and here, we’re sharing 10 of those audience questions about AI Search functionality, accuracy, and security, plus presenter responses.

Question 1: How long does a search take?

On average, we see two to three minutes. Results progress through three phases—matching documents to your question, organizing those matches into answer groups, and generating summaries—so you can watch progress in real time and review results as they land.

Question 2: Can we run multiple searches at once?

Yes. There’s no practical cap; teams can run large numbers of concurrent questions (into the thousands) and review results as they complete.

Question 3: Is our data safe (and kept private)?

Yes. AI Search operates in a closed environment and only references the dataset you load for that matter. In accordance with guardrails we’ve put in place, it does not search the open internet, nor does it share data across cases. If an answer isn’t in your dataset, it won’t go elsewhere.

Question 4: How do you minimize hallucinations?

Every answer is grounded in your indexed documents and returned with exemplar source documents and highlights. As noted in question three, AI Search is confined to sourcing answers from your document set, so the risk of hallucination is very low. You can click through and verify the supporting evidence immediately, keeping a human in the loop for defensibility.

Question 5: Can we scope a search to a subset (e.g., board-level comms)?

Scoped searching is on the roadmap and launching soon. Today, you can target subsets by how you phrase the question (e.g., “What was the board discussing about X?”) to focus answers on specific people, timeframes, or topics.

Question 6: Do “bad” questions waste time or budget?

No. We don’t charge per search, so you can iterate freely. There are no “bad” questions because even those that don’t yield fruitful answers can give you clues about where you need to go next in your search. If a question doesn’t land, rephrase and try again—or lean on our expert search consultants to help craft prompts for your use case.

Question 7: Will it return every relevant document?

AI Search is a question-and-answer tool—not a linear review engine. It returns synthesized answers with exemplar documents that support them. From there, you can move items into downstream Relativity workflows (batching, review, seed sets for TAR, adding families/threads/near-dupes) as needed.

Question 8: Does it only work in Relativity?

Today, yes. AI Search is integrated with Relativity Server, including coding panels and one-click access to the Relativity viewer. Integration with RelOne is in development.

Question 9: Is there lead time beyond loading data?

A little. After loading data to a Relativity workspace, select the saved search you want indexed and build the AI Search index. It’s quick. We recommend standard reductions (dedupe, email threading, junk removal) but not keyword culling, since culling can hide important concepts you don’t yet know to look for.

Question 10: Can we export results easily?

Yes. While AI Search returns exemplars (not “all docs”), you can mass-code those documents and push them into your Relativity workflows for export, batching to reviewers, or use as seed sets, then expand to related families, threads, and near-dupes.

Watch the full webinar, including the business case for AI Search and comprehensive demo, and learn more about Lighthouse AI Search. Have a question that isn’t covered here? Contact us and drop your question in the message; we’ll be happy to get back to you directly.

About the Author

Dana Feeney

In her role as a Solutions Marketing Director at Lighthouse, Dana brings more than a decade of experience in writing about, planning for, and marketing eDiscovery technology and services. She uses her background in technical documentation, project management, and product-market analysis to communicate the value of solutions and services while ensuring the best message reaches the right audience. Dana is also proudly involved in Lighthouse ERGs, serving as co-chairperson of the Women: Leadership Exploration and Development (WLEAD) group. She holds a Bachelor of Arts in English with a Marketing Minor from La Salle University.