AI and Analytics

Browse our library for the latest on how advancements in AI and analytics are being used to solve modern data challenges in eDiscovery and information governance.

August 3, 2023
Case Study

Unprecedented Review Accuracy and Efficiency in Federal Criminal Investigation

A global transportation company was under investigation for possible violations of the Foreign Corrupt Practices Act (FCPA) in India. The company's legal counsel needed to quickly produce responsive documents and find key documents to prepare their defense.

Key Results

4M total documents reduced to 250K through 2 rounds of responsive review, with precision and recall of 85% or higher.
810 key documents quickly delivered to outside counsel, saving them hours of review and gaining more time for case strategy.

A Complex Dataset Requiring Nuanced Approaches

The company collected 2M documents from executives in India and the U.S. Information in the documents was extremely sensitive, making it critical to produce only those documents related to the India market. This would be impossible for most TAR tools, which rely on machine learning and therefore can't reliably distinguish conversations about the company's business in India from discussions solely pertaining to U.S. business. Finding key documents to prepare a defense was challenging as well. The company wanted to learn whether vendors and other third parties had bribed officials in violation of the FCPA, but references to any such violations were sure to be obscure rather than overt.

Zeroing In on the Right Conversations

Lighthouse used a hybrid approach, supplementing machine learning models with powerful linguistic modeling. First, our linguistic experts created a model to remove documents that merely referred to India but didn't pertain to business in that market, so that the machine learning TAR wouldn't pull them into the responsive set. Then our responsive review team developed geographic filters based on documents confirmed as India-specific and used those filters to train the machine learning model. The TAR model created an initial responsive set, which our linguists refined even further with an additional model based on nuances of English used in communications across different regions of India. By the end, our hybrid approach had reduced the corpus by 97%, with an 87% precision rate and 85% recall. Once this first phase of review was successfully completed, Lighthouse dove into an additional 2M documents collected from custodians located in India.

Finding Key Documents Among Obfuscated Communications

To help inform a defense, our search specialists focused on language that bad actors outside the company might have used to obfuscate bribery. The team used advanced search techniques to examine how often, and in what context, certain verb-noun pairs indicating an "exchange" were used (for instance, commonly used innocent pairings like "give a hand" vs. rarer pairs like "give reward"). The team could then focus on the documents containing language indicating an attempt to conceal or imply an exchange (a sketch of this pair analysis follows this case study).

$1.7M Saved, 810 Key Documents Found to Support Defense

Lighthouse performed responsive review on two datasets of 2M documents each, reducing them to less than 250K and saving the client more than $1.7M. Out of the 237K responsive documents, Lighthouse uncovered 810 hot docs spanning 7 themes of interest. The work was completed in just 3 weeks and enabled outside counsel to provide the best defense to the company.
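The case study does not disclose Lighthouse's actual tooling, so what follows is only a minimal sketch of how rare verb-noun "exchange" pairs might be surfaced. It assumes spaCy with the en_core_web_sm model installed; the verb list and rarity threshold are illustrative placeholders, not values from the engagement.

```python
# Hypothetical sketch: flag documents containing rare verb-noun "exchange" pairs.
# EXCHANGE_VERBS and max_corpus_count are assumptions, not facts from the case study.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
EXCHANGE_VERBS = {"give", "offer", "pay", "provide", "send"}

def exchange_pairs(text):
    """Yield (verb_lemma, object_lemma) pairs such as ('give', 'reward')."""
    for tok in nlp(text):
        if tok.dep_ == "dobj" and tok.head.lemma_ in EXCHANGE_VERBS:
            yield (tok.head.lemma_, tok.lemma_)

def flag_rare_pairs(documents, max_corpus_count=2):
    """Count pair frequency across the corpus, then flag docs containing rare pairs."""
    counts = Counter(p for text in documents for p in exchange_pairs(text))
    flagged = []
    for i, text in enumerate(documents):
        rare = [p for p in set(exchange_pairs(text)) if counts[p] <= max_corpus_count]
        if rare:
            flagged.append((i, rare))  # e.g., (42, [("give", "reward")])
    return flagged
```

The intuition matches the description above: innocent pairings like "give a hand" recur throughout a corpus, while a pair like "give reward" is rare enough to merit eyes-on review.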
August 1, 2023
Case Study

Connecting Matters for Better, Faster eDiscovery

A healthcare provider needed help simplifying ESI hosting for a complex series of 14 related matters across 9 states (and growing). Lighthouse went above and beyond—providing a unified workflow from hosting to review.

Key Actions

Quickly migrated 11M documents from existing Relativity and non-Relativity databases into a single repository, supported by AI
Created one sophisticated workflow—from ESI storage to managed review—for over 14 matters across 9 states (and any other matters that arise in the future)
Leveraged advanced technology to facilitate data re-use, data reduction, and review efficiency

Key Results

Avoided duplicate collections, hosting, and review of 1.2M documents
Instantaneously provided production sets to all 14 matters, giving local counsel time to focus on unique matter documents before production
Set case teams up for success in future matters with a readymade data repository, workflow, and trained review team—exponentially increasing the client's ROI

Data Everywhere and No One to Turn To

A large healthcare provider was facing a growing number of separate but related litigations. With 14 ongoing matters in 9 different jurisdictions, the company's data was spread out across multiple ESI vendors and a variety of review databases. The hosting costs of this data sprawl were threatening to explode the company's overall budget. And with each case team and vendor taking their own approach to case strategy and review, in-house counsel was busy herding cats rather than managing overall litigation strategy. They came to Lighthouse seeking a way to consolidate their overall eDiscovery approach to these matters.

A Streamlined Solution for Multiple Matters, from Hosting Through Review

Lighthouse seamlessly integrated all related matters into an advanced document repository. Backed by AI, this repository connected insights across matters and maximized work product reuse. Using this repository as a base, our experts built a sophisticated eDiscovery workflow for all 14 individual matters. Each process in every individual matter—from hosting to document review—was purposefully designed around insights and data from all other related matters. The result of this holistic approach was more efficient, consistent, and accurate eDiscovery across every matter—at a much lower cost than could ever have been achieved with a traditional siloed approach. Here's how we did it:

Faster, More Versatile Migration Capabilities

With our advanced technology and unique migration expertise, Lighthouse quickly migrated 11M documents from existing databases—including Relativity and non-Relativity—into an advanced AI-backed document repository. At the outset, the team worked closely with the client to understand the scope, types of data, and future needs, so that the migration flowed quickly and efficiently. This approach meant that the client only had to process data once, rather than paying to process and re-process data with every matter. Individual case teams also immediately reaped the benefit of data and insights from every related matter, including matters that had already been successfully litigated. This helped counsel anticipate issues in their own matters while re-using review work product for greater efficiency and consistency—ultimately saving costs and improving matter outcomes.

One Hash for Unprecedented Cross-Matter Deduplication and Efficiency

Unlike other data storage repositories, the Lighthouse AI-backed repository adds a hash system unique to Lighthouse. This technology normalizes documents before adding a hash value, extending our deduplication power and allowing us to identify duplicate documents beyond what is possible with traditional deduplication technology (a simplified sketch of this normalize-then-hash idea appears at the end of this case study). Our unique AI hash system also enabled faster insights into opposing party productions. The Lighthouse team used the system to compare newly received productions in one matter against documents previously received in other matters. Where matches were found, any issue coding one case team applied to a document was carried over and applied to new matching documents. This helped facilitate case team collaboration and a consistent legal strategy across matters.

Broad Bench of Data Experts

Rather than paying separate vendors for expertise in individual matters, in-house counsel and local case teams leaned on Lighthouse's unified bench of subject matter experts—including ESI processing and hosting, advanced analytics, and review specialists. These experts worked together as a dedicated client service team, providing a uniquely holistic view of the entire array of related matters. However, individual specialists tagged in to perform work only when their expertise was needed, ensuring that the company didn't rack up expensive invoices for consulting services it didn't need or use. When our experts were called in to help, they identified areas for greater efficiency and cross-matter consistency that would have been impossible if the client had remained with a siloed approach to each matter. For example, before review began, Lighthouse review experts counseled individual case teams to implement a coding layout for each jurisdiction that facilitated work product reuse and consistency across matters. As new related matters come up, our experts will bring their deep institutional knowledge to continue to drive these types of efficiency and consistency gains.

A Strategic Approach Leads to Faster Reviews and Productions

Once data was migrated into the document repository, Lighthouse review experts designed one strategic review plan for all 14 matters that lowered costs and maximized data reuse and cross-matter insights. As part of this plan, Lighthouse created one national review database and separate jurisdiction-specific review databases. Then, Lighthouse experts used advanced AI and review technology to isolate a core set of 150K documents within the 11M housed in the repository that were most likely to be responsive across all jurisdictions. This core set was published to the national review database and fully reviewed by an experienced Lighthouse review team trained by our review managers to categorize each document for both national and jurisdictional responsiveness. After review, Lighthouse copied this strategic production set to each jurisdictional database. This approach kept hosting costs drastically lower for each individual matter, while providing all local case teams with an immediate first production, well ahead of production deadlines.
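Lighthouse's hash system is proprietary and its normalization rules are not described here, so the sketch below only illustrates the general normalize-then-hash idea referenced in this case study; the specific normalization steps are assumptions.

```python
# Illustrative normalize-then-hash deduplication; the normalization rules here
# are placeholders, not Lighthouse's actual (proprietary) system.
import hashlib
import re

def normalize(text: str) -> str:
    """Reduce formatting noise so near-identical copies hash identically."""
    text = text.lower()
    text = re.sub(r"\s+", " ", text)    # collapse whitespace and line breaks
    text = re.sub(r"[^\w ]", "", text)  # drop punctuation artifacts
    return text.strip()

def content_hash(text: str) -> str:
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

def deduplicate(docs: dict) -> dict:
    """Group document IDs by normalized content hash; each group is one unique doc."""
    groups = {}
    for doc_id, text in docs.items():
        groups.setdefault(content_hash(text), []).append(doc_id)
    return groups

# The two variants below differ only in case and spacing, so they land in the
# same group, unlike a raw byte-level hash (e.g., MD5) of the original files.
print(deduplicate({"DOC-A": "Board approved the deal.", "DOC-B": "board  approved the deal."}))
```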
July 1, 2023
Case Study

Simplifying Complex Multi-District Document Review

A large healthcare provider faced a series of related matters requiring document review. Lighthouse designed and executed a single review workflow that provided accurate, consistent, and efficient productions.

Lighthouse Managed Review Results

Efficient, compliant productions across 14 matters in 9 states (and counting)
Nuanced document review performed by one experienced review team, eliminating the need to train multiple review teams
Case teams avoided re-reviewing 150K core documents by reusing 100K high-quality review decisions and redactions

A Perfect Storm of Review Complexities

A large healthcare provider was facing 14 related matters across 9 states. The initial corpus of documents numbered 11M, with each jurisdiction adding more. While the matters shared a core set of relevant issues, each had its own unique relevancy scope and was being handled by different outside counsel and eDiscovery teams. The corpus was also littered with personally identifiable information (PII) that required identification and redaction by review teams before production.

Combining Expertise and Tech to Drive Efficiency

The company turned to Lighthouse because of our extensive experience working on complex document review. Our review managers developed a sophisticated workflow that leveraged advanced technology to reduce the number of documents requiring review and re-review across jurisdictions.

Custom Workflow Enables Work Product Reuse

To lower costs and maximize consistency across matters, Lighthouse created an overall document repository and review database, as well as separate jurisdictional databases. The team migrated all 11M documents into the document repository and used advanced AI and review technology to isolate a core set of documents that were most likely to be responsive across all jurisdictions. Our review managers worked efficiently with all outside counsel teams to validate this core set. They also suggested and implemented a coding layout for each jurisdiction to facilitate work product reuse and consistency across matters.

One Skilled Review Team and Review Process for All Matters

Our combination of managed review, advanced technology, and a custom data re-use workflow resulted in a single document set that met all jurisdiction-specific production requirements. These documents were duplicated across all databases for immediate production in multiple matters. To get to this caliber of review, our review managers used technology to reduce the number of documents needing eyes-on review to 90K and trained an experienced review team on both universal and jurisdictional responsiveness. Technology was also used to expedite PII redaction and propagate coding to the core set of 150K documents.

Unprecedented Review Time and Cost Savings

With Lighthouse's review approach, each case team had more freedom in how they structured their post-production workflows. Our approach also provided stricter control of data and enabled more accurate and predictable billing for the client. Further, all 14 matters now had an initial production ready at the push of a button. In addition to lowering costs, this gave local counsel additional time to assess case strategy, with the first production available in advance of agreed-upon deadlines.

Instantaneous Initial Production for Multiple Matters

Beyond the stellar review outcomes achieved across each matter, Lighthouse's strategic workflow and use of technology also saved the client an impressive $650K—a welcome surprise to the client, who was prepared to pay more for such a complex litigation series. As new related matters arise, the client can engage a trained and experienced review team ready to hit the ground running.
March 15, 2022
Case Study

Lighthouse Streamlines a Complicated False Claims Investigation

Over the course of five months, Lighthouse delivered approximately 4,500 documents for review—out of a 2.3 million-document review set—for a Fortune 100 health insurance provider.

The Challenge

Complex internal False Claims Act investigation
2.3M total documents for review
Five-month timeline and tight budget

Lighthouse Key Actions

Provided curated weekly deliveries of the most important, inclusive documents for review—with no redundant or duplicative versions
Compiled summary reports of each delivery (including highlights of high-priority information) to expedite counsel review
Out of 2.3M documents, identified and delivered just the 4,500 documents counsel needed to review in order to conduct a comprehensive legal analysis

Key Results for Counsel

Immediately gained a grasp on the relevant facts and timelines hidden within a massive review set—without wasting time reviewing irrelevant information
Quickly developed a deeper understanding of the underlying risks and nuances of the investigation, through consistent and iterative communication with Lighthouse search experts
Confidently completed the investigation on time and within budget—even after large volumes of new data were added mid-investigation

A Challenging Internal Investigation into False Claims Act Violations

A Fortune 100 health insurance provider was pursuing an internal investigation involving potentially improper diagnosis practices undertaken by a wholly owned provider group. The scope of the investigation included analysis of reimbursements processed across 20+ disease categories, potentially triggering False Claims Act violations. With 2.3M documents to review, it was unclear how the internal investigation could be completed within a constrained budget and timeline. Counsel reached out to Lighthouse for help.

Lighthouse Hands Counsel the Keys to a Focused, Efficient Investigation

A small team of Lighthouse information retrieval, legal, data science, and linguistic experts immediately began working with counsel to understand the specific allegations at issue, as well as catalogue the various sources of data that needed to be investigated. The team then designed and executed a battery of complex searches tailored to find instances of fraud or wrongdoing related to the allegations at hand. By staying in close communication with counsel, the Lighthouse team ensured that new search requirements and data sources were quickly integrated into the workstream to support fact development. On a weekly basis, Lighthouse delivered a streamlined set of documents responding to counsel's evolving theory of the case. These deliveries also included a detailed breakdown of the categories of documents identified each week, descriptions of relevant internal processes and policies, and flags for high-priority documents of particular interest to counsel. Each delivery was distilled down to only the most inclusive, non-redundant versions of relevant documents. In addition to keeping pace with ongoing requests and deliverables, the Lighthouse team also re-executed previous searches to address waves of new data rolling in midway through the engagement.

A Faster and More Comprehensive Investigation Resolution

Over the course of five months, Lighthouse delivered approximately 4,500 documents for review—out of the 2.3 million-document review set. The Lighthouse deliveries encompassed everything counsel needed to know in order to resolve their investigation—and nothing more. The team accomplished this precision through deep subject matter expertise surrounding the allegations and underlying issues at play, consistent and effective communication with counsel, expert topic-based searching, and proprietary data analytics to remove unnecessary duplicative content. By the end of their short engagement with Lighthouse, counsel had developed a comprehensive understanding of the pertinent risk areas and confidently completed their investigation—on time and within budget.
August 15, 2022
Case Study

Lighthouse Uncovers Key Evidence in Fast-Paced Employee Fraud Investigation

KDI
Lighthouse experts uncovered key evidence in just two weeks, eliminating 97% of the document set.

The Challenge

Complex internal investigation into potential employee fraud
627K total documents
Two-week timeline

Key Results for Counsel

Confidently completed a complex fraud investigation in just two weeks—without fear of missing critical information
Significantly mitigated risk to the company through the identification of previously unknown internal control gaps

Lighthouse Key Actions

Executed 22 strategic searches, based on expert analysis, to identify all relevant evidence of employee fraud and misconduct
Uncovered hidden information, previously unknown to counsel, that revealed additional acts of fraud, embezzlement, and misconduct by targeted employees—as well as potentially problematic internal control gaps
Out of 627K documents, identified and delivered just the 16K documents counsel needed to review in order to conduct a comprehensive fact investigation

A Complex Employee Fraud Investigation

The audit division of a health insurance provider was pursuing an internal investigation involving potentially concealed employee conflicts of interest with external vendors. The allegations involved possible defrauding of the parent organization through noncompliant contract and billing practices, as well as embezzlement of membership incentives for personal use and gain. With approximately 627K documents to review on an exceptionally tight two-week timeline, it was unclear how a comprehensive internal investigation could be completed with proper due diligence. Counsel reached out to Lighthouse for help.

Lighthouse Experts Quickly Uncover Key Evidence

A small team of Lighthouse information retrieval, legal, data science, and linguistic experts immediately began working with counsel to understand the specific allegations at issue. As part of this work, the Lighthouse team catalogued the various sources of data that needed to be investigated. Based on counsel's theory of the case, the team devised eight main search themes that would enable them to find instances of fraud or wrongdoing related to the allegations at hand. Over the course of the short two-week engagement, the Lighthouse team completed 22 discrete searches with corresponding deliveries based on expert analysis of the eight priority search themes. Each delivery was distilled down to include only the most inclusive, non-redundant versions of relevant documents, so counsel wasn't bogged down reviewing a slew of duplicative or irrelevant documents. Over the course of searching, Lighthouse experts quickly uncovered key information that was previously unknown to counsel. This information revealed a picture of internal control gaps used to circumvent company policies, leading to problematic vendor contract arrangements and suspect billing practices. Separately, the Lighthouse team also uncovered details of relevant personal circumstances of targeted employees. This new information shed light on the potential motivation for bad acts, including substantial personal debt, resentment of parent company controls, and personal relationships with superiors in the management reporting structure.

Significant Risk Mitigation and Faster Investigation Resolution with Lighthouse

In just two weeks, Lighthouse delivered a targeted set of approximately 16K documents, out of a total 627K in the review set. The Lighthouse deliveries represented everything counsel needed to know about the possible fraudulent employee activity—including concealed information that would have posed significant risk to the company had it been left undiscovered. The team accomplished this precision through deep subject matter expertise regarding the fraud allegations, comprehensive metadata analysis and emotional content detection, consistent and effective communication with counsel, expert topic-based searching, and exhaustive content deduplication. With Lighthouse's partnership, counsel quickly gained a thorough understanding of the internal controls, potential fraud, and embezzlement issues at play—ultimately enabling them to significantly mitigate risk and complete their investigation in just two weeks.
December 30, 2021
Case Study

The Benefits of Best-in-Class Technology on a High-Stakes Matter

By partnering with Lighthouse, clients reduce their data and save millions of dollars while ensuring quality and security.

What They Needed

Lighthouse was brought in by the Department of Justice (DOJ) of a large western US state that had to produce data for a high-stakes, multi-million-dollar breach of contract matter. The client was dissatisfied with its current eDiscovery panel and was looking for a new provider who could help centralize eDiscovery with document review, use advanced technologies to reduce data, and ensure quality and security.

How We Did It

To kick things off, Lighthouse and the client team met to discuss the key goals and expected outcomes of the case. It became very clear that the client wanted to reduce data in a defensible way, so our team of legal and technology experts got to work. At the start of the matter, our team collected and processed more than 3.5TB of client source data (roughly 9M documents), as well as 98K documents produced by opposing counsel and 135K documents produced by 22 third parties. In addition, we collected approximately two dozen mobile devices, and we advised and assisted outside counsel on a declaration defending the process for collection and production of mobile devices.

Next, we brought best-in-class technology to bear. Our search consulting team applied our early case assessment (ECA) tool to the data after processing, and less than 14% of the original corpus (1.2M documents) was promoted from the ECA database. Within the ECA environment, we assisted the client with culling and search term iteration, and helped the client develop and sample search terms for use during negotiations with opposing counsel. After agreeing upon and validating search terms with opposing counsel, the result set was promoted from ECA for review. Within the review environment, we instituted a technology assisted review (TAR) workflow to reduce the overall review population to 420K documents (a 65% reduction after applying ECA) and prepared defensibility reports for opposing counsel (a simplified sketch of a TAR scoring workflow follows this case study). Finally, we used our thread suppression technology to suppress duplicative emails.

We then developed a custom automated workflow to incorporate confidential de-designation decisions from 16 co-defendants on individual documents and reproduced them. An additional 155K documents were loaded directly to review without culling. For review of the remaining ~500K records, we implemented our managed review solution—managing a review team (provided by our trusted review partner) through a very successful first-pass review, privilege review, and privilege log creation process.

The Results

Ultimately, the client produced 260K documents in this matter and saved significant time and money. Lighthouse reduced the original corpus by more than 95% through best-in-class technology and our legal, review, and technology experts. Because of the service quality, support, breadth of capabilities, and expertise exhibited during the matter, the client has since migrated several active matters from other providers to Lighthouse.
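The case study names a TAR workflow without detailing it, so here is only a simplified sketch of the general pattern (train a classifier on attorney-reviewed seed documents, score the population, and cull below a cutoff), assuming scikit-learn; the toy data and the 0.5 cutoff are illustrative, not values from this matter.

```python
# Hedged sketch of a TAR-style scoring-and-cutoff pass, not the actual workflow.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorney-reviewed seed set: 1 = responsive, 0 = not responsive (toy data).
seed_texts = ["contract breach damages", "lunch menu", "termination clause dispute", "holiday party"]
seed_labels = np.array([1, 0, 1, 0])
population = ["breach of the supply contract", "cafeteria schedule", "dispute over clause 7"]

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(seed_texts), seed_labels)

scores = model.predict_proba(vec.transform(population))[:, 1]
# In practice the cutoff is chosen and validated against a control set to
# certify recall defensibly; 0.5 here is purely illustrative.
for doc, score in zip(population, scores):
    print(f"{score:.2f}  {'REVIEW' if score >= 0.5 else 'CULL'}  {doc}")
```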
February 1, 2023
Case Study

Lighthouse Uses AI to Complete a Seamless, Customized Data Migration

Lighthouse's proprietary AI technology solves a unique data deduplication challenge while migrating over 25 terabytes for an extensive healthcare system.

Key Results

In 5 months, Lighthouse migrated four databases—with 25 TB of data—all while keeping the databases active for review and production on current matters.
Leveraging our AI technology, Lighthouse created an innovative solution for a large volume of Lotus Notes files originally processed as HTML files by a legacy processing tool. This solution ensured that any new Lotus Notes files would deduplicate against the migrated data, regardless of the file type or the tool used for processing.

A Challenging Data Deduplication Problem

A large healthcare system had been hosting its data (over 25 TB across four databases) on another vendor's platform for nearly a decade. The company knew it was time to modernize its eDiscovery program with Lighthouse. To do so, all 25 TB would need to be migrated to Lighthouse for hosting and future processing. However, in addition to data migration, the company also had a unique deduplication challenge stemming from the previous vendor's original processing tool. The company's data had originally been processed with the vendor's legacy processing tool—which processed Lotus Notes data as HTML files, rather than the more modern EML version. Because these files were stored in HTML format, whenever duplicate Lotus Notes files were added to the database and processed with a more modern tool, the resulting EML files would not deduplicate against the older HTML files in the databases. With over half its data consisting of Lotus Notes files processed by the older tool in HTML format, the company was concerned that this issue would significantly increase review cost and slow down review. Thus, in addition to the overall migration, the company came to Lighthouse with an unfortunate Catch-22: in order to modernize its processing and eDiscovery capabilities, it was losing the ability to deduplicate a majority of its data with each new ingestion.

Lighthouse Migration Expertise

Because of the volume of new clients moving to Lighthouse for eDiscovery support, Lighthouse has developed an entire practice group dedicated to data migration. This group is adept at creating customized solutions to the unique challenges that often arise when migrating data out of legacy systems. The team works closely with each client to understand the scope, types of data, challenges, and future needs so that the data migration process is seamless and efficient. The Lighthouse migration team quickly got to work gathering information from the healthcare company, paying particular attention to the Lotus Notes deduplication issue. Once all relevant information was gathered, Lighthouse worked with stakeholders from the organization to form a comprehensive migration plan that minimized workflow disruption and included a detailed schedule and workflow for future data. In the process, Lighthouse also developed a custom solution for the Lotus Notes issue using our proprietary AI technology.

An Innovative Solution: Lighthouse AI

Lighthouse's advanced AI technology can create a unique hash value for all data, no matter how it was originally processed. The Lighthouse migration team leveraged this technology to create a unique hash value for the Lotus Notes files that were originally processed as HTML files. That hash value could then be matched against any new Lotus Notes files added to the database by the company, even when those files were processed as EML files (a rough sketch of the general idea follows this case study). With this proprietary workflow, the healthcare company was able to seamlessly move to Lighthouse's eDiscovery platform, which was better equipped to serve its eDiscovery needs—without losing the ability to deduplicate its data.

Set Up for Success

In just five months, Lighthouse completed a seamless migration of the healthcare company's data by creating a custom migration plan that minimized blackouts and kept all databases up and running. Importantly, Lighthouse also leveraged its proprietary AI to create an innovative solution to a complex problem, ensuring continued deduplication capability and reduced discovery costs.
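Lighthouse's solution is proprietary, so the sketch below only illustrates the underlying idea: hash a normalized representation of a message so the same email matches whether it is stored as a legacy HTML export or a modern EML file. The header fields used and the normalization rules are assumptions.

```python
# Rough illustration: compute a format-agnostic hash for a message so HTML and
# EML representations of the same email deduplicate against each other.
import hashlib
import re
from email import message_from_string
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect the text content of an HTML export, ignoring markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def _normalize(text: str) -> str:
    return re.sub(r"\s+", " ", text).strip().lower()

def _key_hash(headers: dict, body: str) -> str:
    key = "|".join(_normalize(headers.get(h, "")) for h in ("from", "to", "subject"))
    return hashlib.sha256((key + "|" + _normalize(body)).encode("utf-8")).hexdigest()

def hash_eml(raw: str) -> str:
    msg = message_from_string(raw)
    body = msg.get_payload() if isinstance(msg.get_payload(), str) else ""
    return _key_hash({h: msg.get(h, "") for h in ("from", "to", "subject")}, body)

def hash_html(raw_html: str, headers: dict) -> str:
    parser = _TextExtractor()
    parser.feed(raw_html)
    return _key_hash(headers, "".join(parser.chunks))
```

Because both paths reduce the message to the same normalized key before hashing, an EML ingested today produces the same hash as its decade-old HTML counterpart.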
February 1, 2023
Case Study

Global Law Firm Partners with Lighthouse to Save Millions During Government Investigation

Lighthouse partners with a global law firm to meet a 60-day production deadline for an 11.5 million-document population, saving the firm millions.

What They Needed

A global law firm was representing a large analytics company being investigated by the Federal Trade Commission (FTC) for antitrust activity. The company faced an extremely aggressive production deadline—approximately 60 days to collect, review, and produce responsive documents from an initial data population of roughly 11.5M documents.

How We Did It

The firm partnered with Lighthouse to create a workflow that executed multiple work streams simultaneously (collections, processing, TAR, privilege review, and logging) to ensure the company could meet the production deadline. Lighthouse expert teams managed the entire process, implementing daily standup calls and facilitating communication between all stakeholders to ensure that each workflow was executed correctly and on time. Lighthouse clients that leverage our AI technology to its full potential can realize even more cost savings and efficiency. In this case, for example, the firm could have removed close to 420K documents from privilege review that our AI accurately deemed (as verified in the QC process) highly unlikely or unlikely to be privileged. The Lighthouse team also provided strategic and defensible review methods to attack data volume and increase overall efficiency throughout the project. This included technology assisted review (TAR) and email thread suppression in combination with our proprietary AI technology and privilege log application.

The work streams that Lighthouse designed and executed to reduce the time, burden, and expense of review included:

Lighthouse Forensic Collection: Lighthouse's dedicated forensic team implemented a workflow to perform all initial collections, as well as all refresh collections, across M365 mailboxes, Teams data, OneDrive, and SharePoint.

TAR 1.0: Lighthouse implemented predictive coding via a TAR 1.0 workflow to systematically find and remove non-relevant documents in a defensible manner. Non-relevant documents that fell below the cutoff score were removed from the review population to reduce privilege review.

Non-TAR Review: Lighthouse experts conducted a detailed file analysis of documents that could not be scored via the TAR model, removing non-responsive documents from eyes-on responsiveness review.

Email Threading: Once TAR 1.0 reached stability and a cutoff score was achieved, Lighthouse applied email thread suppression to the documents above the cutoff score to further decrease privilege review and the production set overall (a minimal sketch of thread suppression follows this case study).

Managing Teams data: The Lighthouse team leveraged our proprietary chat tool to deduplicate Microsoft Teams data. Using the tool, the team stitched Teams messages back together in a format that allowed outside counsel to easily see the conversation in totality (e.g., who was part of the thread, who entered or left the chat room, who said what, and at what time). The tool then integrated and threaded chat messages with search and filtering capabilities for review directly in Relativity.

Privilege Review: Even as collections, TAR 1.0, email threading, and document review workflows were ongoing, the Lighthouse advanced analytics team leveraged technology in combination with their expertise to drastically reduce the privilege review set and guard against inadvertent production of privileged documents:

Lighthouse Strategic Privilege Reduction: Lighthouse data reduction experts worked with outside counsel to analyze the data and identify large categories of documents that could be safely removed from privilege review, such as two large tranches of calendar items that had been pulled into the privilege review. Lighthouse also ran a separate header-only privilege screen across the dataset and located a pattern in the privilege hits, which outside counsel confirmed were not privileged and removed from privilege review.

AI-enabled Privilege QC: To minimize risk and increase the efficiency of privilege review, Lighthouse deployed our advanced AI technology, which uses multiple algorithms to analyze the text and metadata of documents, enabling highly accurate privilege predictions. First, it analyzed the entire review workspace and identified additional privileged documents that were not picked up by the conventional privilege screen approach. Then, the tool was used in privilege review QC workflows, where it helped reviewers overturn first- and second-level privilege calls.

Privilege logging application: Lighthouse also leveraged our privilege logging application to automate privilege log generation, saving outside counsel significant time and driving consistent work product in creating their privilege log.

The Results

Lighthouse forensic collection gathered roughly 11.5M documents from more than 600 unique datasets and over 90 custodians, spanning M365 mailboxes, Teams data, OneDrive, and SharePoint sources. Lighthouse's TAR 1.0 workflow then dramatically reduced the document population for privilege review, ultimately removing over 6M documents in full families from review and delivering a savings of nearly $6.2M. The Lighthouse team's detailed file analysis of the non-TAR universe resulted in an additional 640K files removed from responsiveness review—close to a 90% reduction in the non-TAR review volume, delivering a savings of roughly $640K. Our email thread suppression process then removed another 1.1M documents from review (for a savings of $1.1M), while the Lighthouse proprietary chat tool removed over 63K Teams items and generated over 200K coherent transcript families from 1.3M individual messages.
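Email thread suppression is a standard eDiscovery technique; the sketch below shows only its core idea, assuming plain-text bodies and naive subject-based threading. Production tools use far more robust thread segmentation and metadata than this.

```python
# Minimal sketch of email thread suppression. A message is "inclusive" if no
# other message in its thread already contains its text (e.g., quoted in a
# later reply-all); only inclusive messages need eyes-on review.
import re
from collections import defaultdict

def _norm(text: str) -> str:
    return re.sub(r"\s+", " ", text).strip().lower()

def _thread_key(subject: str) -> str:
    # Strip reply/forward prefixes so "RE: RE: Deal" joins the "deal" thread.
    return re.sub(r"^\s*((re|fw|fwd)\s*:\s*)+", "", subject, flags=re.I).strip().lower()

def inclusive_messages(messages):
    """messages: list of (subject, body) tuples; returns indexes needing review."""
    threads = defaultdict(list)
    for idx, (subject, body) in enumerate(messages):
        threads[_thread_key(subject)].append((idx, _norm(body)))
    keep = []
    for msgs in threads.values():
        for idx, body in msgs:
            # Suppress when a longer thread member already contains this text;
            # for exact duplicates, keep only the first occurrence.
            contained = any(
                body and body in other_body and (len(other_body) > len(body) or other < idx)
                for other, other_body in msgs if other != idx
            )
            if not contained:
                keep.append(idx)
    return sorted(keep)

msgs = [("Deal terms", "price is 5m"),
        ("RE: Deal terms", "agreed > price is 5m")]
print(inclusive_messages(msgs))  # [1]: the reply quotes the original in full
```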
December 15, 2023
eBook

AI is All the Rage—But What’s the ROI in eDiscovery?

October 27, 2023
eBook

AI for eDiscovery: Terminology to Know

September 21, 2023
Whitepaper

Analyzing the Real-World Applications and Value of AI for eDiscovery

September 6, 2023
eBook

How AI Advancements Can Revolutionize Document Review

September 15, 2021
Whitepaper

TAR + Advanced AI: The Future Is Now

AI & Analytics
May 18, 2022
eBook

Purchasing AI for eDiscovery - New, Now, and Next

AI & Analytics
June 16, 2022
eBook

eDiscovery Advancements Meet the Unique Challenges of Second Requests

September 29, 2023
Podcast

Generative AI and Healthcare: A New Legal Landscape

Lighthouse welcomes Ty Dedmon, Partner and lead of Bradley's healthcare litigation team, to assess how generative AI is impacting litigation and what we can do to minimize the risk.

Although the novel and often comical uses of generative AI have captured recent headlines—think philosophical conversations with a chatbot or essays written in seconds using AI—big changes are happening across sectors of the economy thanks to the adoption of new tools and programs, including the legal and healthcare spaces. Recent case law and legislation highlight the new landscape emerging in healthcare litigation, with potential long-term implications. Lighthouse welcomes Ty Dedmon, Partner at Bradley who leads their healthcare litigation team, to assess how generative AI is impacting litigation and what we can do to prepare, and to share advice on leveraging AI innovation while minimizing risk.

This episode's sighting of radical brilliance: "Top AI companies agree to work together toward transparency and safety," Kevin Collier, NBC News, July 21, 2023.

Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
September 29, 2023
Podcast

What You Need to Know About the New FTC and DOJ HSR Changes

Brian Rafkin, counsel in Akin's antitrust and competition practice, joins to examine the HSR rules and share advice for utilizing AI and workflows to manage increased scrutiny.

Continuing a more aggressive posture toward corporate mergers, the Department of Justice and Federal Trade Commission recently announced new HSR rules that dramatically change and expand the amount and type of information that needs to be submitted with HSR filings. How will this impact future M&A activity and Second Requests? Brian Rafkin, counsel in Akin's antitrust and competition practice, joins the podcast to examine the new HSR rules and their potential implications. He also shares best practices for utilizing technology and workflows to manage increased scrutiny and pressure on deals.

This episode's sighting of radical brilliance: "United States takes on Google in biggest tech monopoly trial of 21st century," Dara Kerr, NPR, September 12, 2023.

Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
March 29, 2023
Podcast

Optimizing Review with Your Legal Team, AI, and a Tech-Forward Mindset

KDI
Lighthouse's Mary Newman, Executive Director of Managed Review, joins the podcast to explore how adopting a technology-forward mindset can provide better results for document review teams.

To keep up with the big data challenges in modern review, adopting a technology-enabled approach is critical. Modern technology like AI can help case teams defensibly cull datasets and gain unprecedented early insight into their data. But if downstream document review teams are unable to optimize technology within their workflows and review tasks, many of the early benefits gained by technology can quickly be lost. Lighthouse's Mary Newman, Executive Director of Managed Review, joins the podcast to explore how document review teams that adopt a technology-forward mindset can provide better review results now and in the future.

This episode's sighting of radical brilliance: "An A.I. Pioneer on What We Should Really Fear," New York Times, December 21, 2022.

If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
December 15, 2022
Podcast

Review Analytics for a New Era

AI & Analytics
Law & Candor welcomes Kara Ricupero, Associate General Counsel at eBay, for a conversation about how analytics and reimagining review can help solve data challenges and advance business imperatives.

In episode two, we introduce our new co-host Paige Hunt, Vice President of Global Discovery Solutions at Lighthouse, who will be joining Bill Mariano as our guide through the legal technology revolution. In their first Sighting of Radical Brilliance together, they chat about an article in Wired that explores the rise of the AI meme machine, DALL-E Mini. Then, Paige and Bill interview Kara Ricupero, Associate General Counsel and Head of Global Information Governance, eDiscovery, and Legal Analytics at eBay. They explore how a dynamic combination of new technology and human expertise is helping to usher in new approaches to review and analytics that can help tackle modern data challenges. Other questions they dive into include:

How did you identify the kind of advanced technology needed for modern data challenges?
Partnering with the right people and experts across the business to utilize technology and insights seems to be a big part of the equation. How did you work with other stakeholders to leverage analytics?
With new analytics and intelligence, has it changed how you approach review on matters or other processes?
How do you think utilizing analytics will evolve as data and review continue to change? What kinds of problems do you think it can help solve?

If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, listen and rate the show wherever you get your podcasts, and join in the conversation on Twitter.
December 15, 2022
Podcast

Investigative Power: Utilizing Self Service Solutions for Internal Investigations

Self Service
Our hosts chat with Justin Van Alstyne, Senior Corporate Counsel at T-Mobile, about best practices for handling internal investigations, including the self service tools that have been most effective.

Paige and Bill start the show with new and exciting research from MIT Sloan on artificial intelligence and machine learning. Next, their interview with Justin Van Alstyne, Senior Corporate Counsel, Discovery and Information Governance at T-Mobile. They dive into internal investigations, including how a simple, on-demand software solution can offer the scalability and flexibility teams need to manage investigations with varying amounts of data. Some other questions they explore are:

How we collaborate and work has changed immensely over the past few years, and that evolution doesn't appear to be slowing down. How have new tools and data sources complicated conducting internal investigations?
With organizations encountering investigations of different sizes and degrees, what workflows or approaches have you found are most flexible to respond to this variability?
Along with process, technology is another key part of the equation. When choosing the right technology for internal investigations, what are some of your high-priority considerations? Are there any features that are must-haves?
For people contemplating deploying a self service solution, what advice do you give to ensure your team has the right level of expertise and technology to handle their internal investigations at scale?

If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.
March 25, 2022
Podcast

Legal’s Balancing Act: Risk, Innovation, and Advancing Strategic Priorities

Megan Ferraro, Associate General Counsel, eDiscovery & Information Governance at Meta, joins Law & Candor to discuss the pivotal role legal is playing in helping innovation thrive while managing risk.

Co-hosts Bill Mariano and Rob Hellewell start the show with Sightings of Radical Brilliance. In this episode, they review an article in Reuters exploring lawyer attrition and the "great resignation." Next, their interview with Megan Ferraro, Associate General Counsel, eDiscovery & Information Governance, Meta. They discuss the delicate balance that must be struck between risk and innovation and explore some of the following questions:

How did the legal function evolve to play a bigger role in corporate strategy and innovation?
What are the broader trends in the ways legal teams are supporting innovation?
With businesses growing, adding new technology, and pivoting strategy quickly, what are the most critical risk challenges legal teams face today?
How can legal best work with other functions in an organization to ensure strategic priorities are advanced—through new deals or technology, for example—while also balancing the risk factors?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.

Related Links

Blog post: Analytics and Predictive Coding Technology for Corporate Attorneys: Six Use Cases
Podcast: Innovating the Legal Operations Model
Blog post: What Skills Do Lawyers Need to Excel in a New Era of Business?
Blog post: Purchasing AI for eDiscovery: Tips and Best Practices
Article: To stem lawyer attrition, law firms must look beyond cash - report
November 16, 2021
Podcast

Staying Ahead of the AI Curve

Our hosts and Harsha Kurpad of Latham & Watkins discuss how to stay apprised of changes in AI technology in the eDiscovery space and practical applications for more advanced analytics tools.

Co-hosts Bill Mariano and Rob Hellewell start the show with Sightings of Radical Brilliance. In this episode, they review a recent New York Times article by Cade Metz that explores how new organizations are using AI to find bias in AI. Next, they bring on Harsha Kurpad of Latham & Watkins, who answers the following questions around staying ahead of AI innovation in legal technology:

What are some current barriers to adopting AI?
How do you stay apprised of new AI technology, tools, and solutions?
What are new data challenges that are leading to a greater adoption of AI or requiring the use of more sophisticated tools?
How are government entities like the FTC and DOJ changing how AI is being used and what is required during investigations?
What are some best practices for training algorithms and staying on top of new approaches to training?
What are some of the risks in not adopting AI, or in not staying apprised of changes to the tools, platforms, and how they are being used?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.

Related Links

White Paper: The Challenge with Big Data
Blog Post: What Attorneys Should Know About Advanced AI in eDiscovery: A Brief Discussion
Podcast: AI and Analytics for Corporations: Common Use Cases
Blog Post: What is the Future of TAR in eDiscovery? (Spoiler Alert – It Involves Advanced AI and Expert Services)
March 31, 2022
Podcast

Closing the Deal: Deploying the Right AI Tool for HSR Second Requests

Gina Willis of Lighthouse joins the podcast to explore some of the modern challenges of HSR Second Requests and how a combination of expertise and AI technology can lead to faster and better results.

Bill Mariano and Rob Hellewell kick off this episode with another segment of Sightings of Radical Brilliance, where they discuss JPMorgan becoming the first bank to have a presence in the metaverse. Next, our hosts chat with Gina Willis, Analytics Consultant at Lighthouse, about how the right AI tool and expertise can help with HSR Second Requests. They also dive into the following key questions:

What are some of the contemporary challenges with Second Requests?
What AI tools are helping with some of these modern challenges?
For Second Requests, what interaction and feedback between attorneys and AI algorithms is optimal to ensure substantial compliance is reached efficiently?
Are there some best practices for improving this relationship—deploying the AI better or optimizing algorithms?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.

Related Links

Blog post: Deploying Modern Analytics for Today's Critical Data Challenges in eDiscovery
Blog post: Biden Administration Executive Order on Promoting Competition: What Does it Mean and How to Prepare
Article: JPMorgan bets metaverse is a $1 trillion yearly opportunity as it becomes first bank to open in virtual world
November 16, 2021
Podcast

Finding Lingua Franca: The Power of AI and Linguistics for Legal Technology

In this episode, Amanda Jones of Lighthouse illuminates common challenges and pitfalls that can arise with modern language in eDiscovery.

In the very first episode of season eight, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. They start off with some exciting news about Lighthouse and the recent acquisition of H5. They then dive into Sightings of Radical Brilliance, the part of the show highlighting the latest news of noteworthy innovation and acts of sheer genius. In this episode, they discuss an article in the AP that investigates how AI-powered tech landed a man in jail with scant evidence. Bill and Rob discuss the case and the AI technology involved, and what questions this raises regarding scientifically validating AI and its use as evidence in criminal cases. Bill and Rob are then joined by Amanda Jones of Lighthouse to discuss common challenges and pitfalls that can arise with modern language in eDiscovery, and the interplay between AI and linguistics. Some key questions they explore include:

What is linguistic modeling?
What are the critical challenges with modern language and eDiscovery today?
How is linguistics informing and impacting AI in eDiscovery?
What are best practices for implementing AI solutions and tools?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
November 16, 2021
Podcast

eDiscovery Review: Family Vs. Four Corner

Pooja Lalwani of Lighthouse and our hosts discuss these two ediscovery review methodologies, and walk through the advantages and disadvantages of both and which better supports AI technology.

Bill Mariano and Rob Hellewell kick off this episode with another segment of Sightings of Radical Brilliance, where they discuss Dalvin Brown's piece in the Washington Post about how AI was used to recreate actor Val Kilmer's voice. Bill and Rob consider this great scientific achievement along with the potentially nefarious ways it can be used. Next, our hosts chat with Pooja Lalwani of Lighthouse about two key approaches to ediscovery review: family and four corner. Pooja helps break down the benefits and drawbacks of each through questions such as: What are some of the key differences between both approaches? With modern communication platforms and data creating a more dynamic and complex review process, what are some of the considerations for when and how to deploy family and four corner review? Which review methodology is better suited to supporting TAR and AI tools? How do these review methodologies either help classify privilege more efficiently or potentially create limitations? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
November 16, 2021
Podcast

Achieving Cross-Matter Review Discipline, Cost Control, and Efficiency

Bill and Rob bring on Jason Rylander of Axinn to discuss techniques for unifying matter data across an organization's portfolio and how it can save significant time and money on document review.

Join co-hosts Bill Mariano and Rob Hellewell as they discuss a law firm that only works on artificial intelligence and whether this is an emerging trend for the industry. Next, they're joined by Jason Rylander of Axinn to discuss the antitrust landscape, benefits of cross-matter review, and techniques for unifying matter data across an organization's portfolio. Jason and our hosts walk through key questions, including: With a new administration and the continued disruption from COVID, has there been an increase in the volume of antitrust matters, investigations, and litigation? What are some of the challenges or disadvantages of doing the traditional single-matter document review? What are some strategies for identifying work product or data that can be reused or repurposed? What are some best practices when connecting matters? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
March 23, 2021
Podcast

AI and Analytics for Corporations: Common Use Cases

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss the growing use of emotion recognition in tech in China and how this could lead to some challenges in the legal space down the road. In this episode, Bill and Rob are joined by Moira Errick of Bausch Health. The three of them discuss common AI and analytics use cases for corporations via the following questions: What types of AI and analytics tools are you using, and for what use cases? What is ICR, and how have you been leveraging it internally? What additional use cases are you hoping to use AI and analytics for in the future? What are some best practices to keep in mind when leveraging AI and analytics tools? What recommendations do you have for those trying to get their team on board? What advice would you give to other women in the ediscovery industry looking to move their careers forward? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
December 3, 2020
Podcast

Cross-Border Data Transfers and the EU-US Data Privacy Tug of War

In the second episode of season six, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they review a recent trends analysis article written by Lighthouse's very own John Shaw for The Lawyer that dives into new sources of evidentiary data in employment disputes. Next, they bring on Melina Efstathiou of Eversheds Sutherland, who answers questions around cross-border data transfers and the EU-US data privacy challenges outlined below: What does the surprise decision to invalidate the EU-US Privacy Shield mean for ediscovery? How does this impact other data transfer mechanisms? What are some of the implications that Brexit could have? Are there any key tips for preparing for the future of cross-border ediscovery? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links
Blog Post: Worldwide Data Privacy Update
Blog Post: Three Steps to Tackling Data Privacy Compliance Post GDPR
Blog Post: The U.S. Privacy Shield Is No Longer Valid – What Does that Mean for Companies that Transfer Data from the EU into the US?
December 3, 2020
Podcast

AI, Analytics, and the Benefits of Transparency

In the final episode of season six, co-hosts Bill Mariano and Rob Hellewell review an article covering key privacy and security features on iOS 14 and highlight the top features to be aware of. The co-hosts then bring on Forbes Senior Contributor David Teich to discuss AI, analytics, and the benefits of transparency via the following questions: Why is it important to be transparent in the legal realm? How does this come into play with bias? What about AI and jury selection? How do analytics come into play as a result of providing transparency? The season ends with key takeaways from the guest speaker section. Subscribe to the show here, rate us on Apple and Stitcher, connect with us on Twitter, and discover more about our speakers and the show here.

Related Links
Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data
Blog Post: The Sinister Six… Challenges of Working with Large Data Sets
Blog Post: Advanced Analytics – The Key to Mitigating Big Data Risks
Podcast Episode: Tackling Big Data Challenges
Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
September 22, 2020
Podcast

Leveraging AI and Analytics to Detect Privilege

AI and analytics are picking up momentum in the ediscovery space, with new tools that can help ediscovery professionals see trends and patterns in their data as well as identify inefficiencies and opportunities.

Co-hosts Bill Mariano and Rob Hellewell kick off episode 3 of season 5 with another riveting Sightings of Radical Brilliance segment, where they discuss transforming risks into benefits through artificial intelligence and data privacy. Bill and Rob interview CJ Mahoney of Cleary Gottlieb, who discusses some new AI and analytics practices around privilege review. In this segment, CJ uncovers the answers to the following questions: Why the uptick in the adoption of AI and analytics in the industry? Why did it take so long for folks to adopt? How can one leverage AI to detect privilege? What benefits and learnings can one apply to future work? What are some recommendations for those looking to leverage AI and analytics in similar ways? The show concludes with key takeaways from the guest speaker segment. Subscribe to Law & Candor here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links
Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data
Podcast Episode: Tackling Big Data Challenges
Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
September 22, 2020
Podcast

Facilitating a Smooth and Successful Large Review Project with Advanced Analytics

Large dataset projects are being addressed with the broadening use of advanced analytics. However, this is introducing another level of complexity into what is already a complicated and potentially stressful process.

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss how law firms are managing the hurdles of remote work, specifically comprehensive security measures, and driving efficiency. In this episode, Bill and Rob are joined by Adam Strayer of Paul Weiss. The three discuss facilitating successful large review projects with advanced analytics and other tools via the following questions: Why has there been an increase in the use of advanced analytics on larger matters across the industry? What are some of the key tools and strategies that drive the most value? What are the most effective and efficient workflows regarding advanced analytics? How does one combine the expertise and talents from each team involved (client, counsel, and service provider(s)) in an organized manner? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links
Podcast Episode: New Efficiency Gains in TAR 2.0 and CMML Revealed
Case Study: Drug Store Giant Sees Significant Data Reduction
September 22, 2020
Podcast

Scaling Your eDiscovery Program: Self Service to Full Service

Being able to scale an ediscovery program from a self-service to a full-service model for particular matters can save both time and money, thus allowing for a more efficient ediscovery program overall.

In the second episode of season five, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss Solos Health Analytics's new technology (FeverGuard), fever-detection software designed to stop the spread of COVID-19, and the PII challenges it could raise. Next, they bring on Claire Caruso of Lighthouse. Together, the three of them talk through how to scale ediscovery programs from self-service to full-service and back through the following questions: When would one need to transition from self service to full service, and back to self service? What are the benefits of making these moves? What are some of the key things to look out for? What are some recommendations for folks looking to optimize their structure? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links
Blog Post: How to Bring eDiscovery In House from Seasoned Self-Service Adopters
Podcast Episode: The Future of On-Demand SaaS Software for Small Matters – A Self-Service Model Story
Blog Post: Overcoming Top Objections for Moving to a Self-Service eDiscovery Model
Blog Post: Building a Business Case for Upgrading Your eDiscovery Self-Service Practices in Six Simple Steps
Podcast Episode: Moving to the Cloud Part 1: A Corporate Journey
Podcast Episode: Moving to the Cloud Part 2: A Law Firm Journey
September 22, 2020
Podcast

Effective Strategies for Managing DSARs

Since the introduction of the GDPR, organizations with a European presence have seen a rise in the number of Data Subject Access Requests (DSARs). These matters are time-consuming and costly.

In the fourth episode of season five, co-hosts Bill Mariano and Rob Hellewell discuss how Relativity is using its technology to help medical researchers comb through COVID-19 journal articles to help battle the virus. Bill and Rob then introduce their guest speaker, Nicki Woodfall of Travers Smith, who uncovers effective strategies for managing DSARs. Nicki answers the following questions in this episode: Why has there been a recent uptick in DSARs over the past few years? What are the top challenges when it comes to managing DSARs? What are key ways to overcome these common challenges? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links
Blog Post: How GDPR and DSARs are Driving a New, Proactive Approach to eDiscovery
Case Study: Penningtons Manches Cooper Takes Control of their eDiscovery Process with Lighthouse Spectra
June 23, 2020
Podcast

Take the Mystery out of Machine Learning: Success Stories from Real-Life Examples and How Data Scientists Impact eDiscovery

In the final episode of season three, co-hosts Bill Mariano and Rob Hellewell discuss a coronavirus tracing app and the privacy concerns that may come about from a legal perspective. Bill and Rob bring on Sara Lockman of Walmart to discuss the mysteries behind machine learning. Together they cover what machine learning is, the benefits, success stories, and more by uncovering answers to the following questions: What is machine learning? What are the benefits of machine learning? What are some challenges to be aware of when implementing machine learning? What are some best practices to put in place when using machine learning? Are there any major differences between implementing machine learning on investigations versus litigation? What are some of the practical applications you have seen used in the context of cases? How do you convince the non-believers? The season ends with key takeaways from the guest speaker section. Connect with us on Twitter and discover more about our speakers and the show here.

Related Links
Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data
Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
Podcast Episode: Tackling Big Data Challenges
Podcast Episode: New Efficiency Gains in TAR 2.0 and CMML Revealed
March 24, 2020
Podcast

The Future of On-Demand SaaS Software for Small Matters – A Self-Service Model Story

Co-hosts Bill Mariano and Rob Hellewell kick things off with another riveting Sightings of Radical Brilliance segment, where they uncover how real-time translation tools are breaking down barriers and what this means for the future of the legal space. Next, Bill and Rob set the stage for the final recorded guest speaker segment of the live Law & Candor show during Legaltech. For this session, they were accompanied by TracyAnn Eggen of Dignity Health and Steve Clark of Dentons, who discuss the future of on-demand SaaS software for small matters from both a corporate and a law firm perspective. In this segment, TracyAnn and Steve uncover the answers to the following questions: What triggered the move to a SaaS model? How did you get wide-scale adoption? What are some best practices for implementation? The show concludes with key takeaways from the guest speaker segment. Join the conversation on Twitter and discover more about our speakers and the show here.

Related Links
Blog Post: Overcoming Top Objections for Moving to a Self-Service eDiscovery Model
Blog Post: Building a Business Case for Upgrading Your eDiscovery Self-Service Practices in Six Simple Steps
Blog Post: Top Four Considerations for Law Firms When Choosing a SaaS eDiscovery Solution
Podcast Episode: Moving to the Cloud Part 1: A Corporate Journey
Podcast Episode: Moving to the Cloud Part 2: A Law Firm Journey
March 24, 2020
Podcast

Tackling Big Data Challenges

Big data challenges and key ways to overcome them with AI, analytics, and data re-use are uncovered in this podcast episode.

In the very first episode of season three, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this first episode, they dive into a recent story around the Astros cheating scandal and their illegal use of technology to observe and relay the signs given by the opposing catcher to the pitcher, a practice known as sign-stealing. Before our co-hosts jump directly into the guest speaker segment of today's episode, they set the stage for the first three episodes of season 3, which are recordings from the first-ever live Law & Candor show during Legaltech this past January. All three live segments are trickled out over the next three episodes. The guest speaker segment for episode one highlights Josh Kreamer of AstraZeneca. Josh, Bill, and Rob discuss ever-evolving technology and data sources, and how it is now more challenging than ever to combat the cost and complexities associated with legal data. They tackle these key questions, and Josh provides answers to the following: What are some of the biggest data challenges in the industry today? What are some key solutions to these challenges? How do you implement these solutions? How do you get buy-in from your team and get them excited to move forward with implementation? In conclusion, Rob shares top takeaways from episode one. If you enjoyed the show, join in the conversation on Twitter and discover more about our speakers and the show here.

Related Links
Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
March 24, 2020
Podcast

eDiscovery Shark Tank - What’s Worth Your Investment in 2020?

In the final episode of season three, co-hosts Bill Mariano and Rob Hellewell discuss the New York SHIELD Act and its impact on data and security requirements within the space in the Sightings of Radical Brilliance segment. Bill and Rob shake things up a bit in the final guest speaker segment of the season by conducting an eDiscovery Shark Tank-style episode, where they bring on Chris Dahl of Lighthouse to share the most forward-thinking and innovative solutions to industry challenges that are worth folks' 2020 investment. Chris covers the following key questions: What are some of the key innovations in the legal space today? What innovations around SaaS are worth investment? How is the SaaS paradigm impacted from a global perspective? What about big data analytics? When it comes to collaboration, chat, and social, what solutions are there? What about continuous program updates: what can folks be looking for? The season ends with key takeaways from the guest speaker section. Connect with us on Twitter and discover more about our speakers and the show here.

Related Links
Blog Post: Best Practices for Embracing the SaaS eDiscovery Revolution
Podcast Episode: Microsoft Office 365 Part 1: Microsoft's Influence on the Next Evolution of eDiscovery
Podcast Episode: Microsoft Office 365 Part 2: How to Leverage all the Tools in the Toolbox
March 24, 2020
Podcast

New Efficiency Gains in TAR 2.0 and CMML Revealed

In the fourth episode of season three, co-hosts Bill Mariano and Rob Hellewell discuss the innovation behind family tracking apps and how one app helped capture a criminal in this episode's Sightings of Radical Brilliance segment. Bill and Rob then introduce their guest speaker, Nordo Nissi of Goulston & Storrs, and together they dive into new and uncovered efficiency gains around TAR 2.0 and CMML. They ask Nordo the following questions: What are TAR 2.0 and CMML? What are some efficiency gains you have seen around these workflows? What are some of the hidden efficiencies you have seen? What are some techniques to get to those? In the end, our co-hosts wrap up the episode with a few key takeaways. Follow us on Twitter and discover more about our speakers and the show here.

Related Links
Case Study: Drug Store Giant Sees Significant Data Reduction
December 4, 2019
Podcast

The Privilege in Leveraging Privilege Review Tools

In the second episode of season two, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss AI and how it comes into play in the game of poker, as well as what that means for the industry. Next, they introduce their guest speaker for episode two, Joanna Harrison, Solutions Architect at Lighthouse, to discuss the privileges of using privilege review tools in ediscovery. Together, they uncover the answers to the questions below: Why is privilege a priority? Why are the current methods by which privilege gets identified for review inefficient? Why is privilege review so important for folks in the ediscovery space? What kinds of tools are out there to assist with privilege review? What about privilege logs? What are some key tips or tricks for setting up privilege workflows? Finally, our co-hosts wrap up the episode with a few key takeaways. Join in the conversation on Twitter and discover more about our speakers and the show here.

Related Links
Blog Post: Finding the Needle Faster – Speeding up the Second Request Process
Case Study: Drug Store Giant Sees Significant Data Reduction
Case Study: When the Government Investigates
September 16, 2019
Podcast

The Future is Now – AI and Analytics are Here to Stay

In the début episode, co-hosts Bill Mariano and Rob Hellewell introduce themselves and the premise of Law & Candor, a podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob introduce the first segment of the podcast, Sightings of Radical Brilliance, which, as the name implies, is the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this episode, they dive into a recent story around Elon Musk's brain-to-computer interface and what this means for the legal space. In the next segment, the guest speaker segment, our co-hosts are joined by Karl Sobylak, Senior Product Manager at Lighthouse, to uncover answers to the following questions around AI and analytics: Why do data science and analytics seem to be making great progress in so many industries aside from the law? How will AI and analytics be incorporated in the day-to-day life of a lawyer? What about the fear that AI and analytics will replace lawyers: is this true? What about the potential for AI and machine learning to be more limited in the law than they are for other industries: is that true? What's the hardest part about applying data science to the law, and how would this work for a corporate legal department? In conclusion, our speakers share three top takeaways and preview the next episode. Enjoy the show? Join in on the conversation on Twitter and discover more about our speakers and the show here.
June 6, 2024
Blog

Does It Actually Work? How to Measure the Efficacy of Modern AI

The first step to moving beyond the AI hype is also the most important. That's when you ask: How can AI actually make my work better? It's the great ROI question. Technology solutions are only as good as the benefits they provide. So it's critical to consider an AI solution's efficacy—its ability to deliver the benefits you expect of it—before bringing it on board.

To help you do that, let's walk through what efficacy means in general, and then look at what it means for the two types of modern AI.

Efficacy varies depending on the solution

You can measure efficacy in whatever terms matter most to you. For simplicity's sake, let's focus on quality, speed, and cost.

When you're looking to improve efficacy in those ways, it's important to remember that not all AI is the same. You need to choose technology suited for your task. The two types of AI that use large language models (LLMs) are predictive AI and generative AI (for a detailed breakdown, see our previous article on LLMs and the types of AI). Because they perform different functions, they impact quality, speed, and cost in different ways.

Measuring the efficacy of predictive AI

Predictive AI predicts things, such as the likelihood that a document is responsive, privileged, etc. Here's how it works, using privilege as an example. Attorneys review and code a sample set of documents. Those docs are fed to the AI model to train it—essentially, teaching it what does and doesn't count as privilege for this matter. Then, the classifier analyzes the rest of the dataset and assigns a percentage to each document: The higher the percentage, the more likely the document is to be privileged.

The training period is a critical part of the efficacy equation. It requires an initial investment in eyes-on review, but it sets the AI up to help you reduce eyes-on review down the line. The value is clearest in large matters: Having attorneys review 4,000 documents during the training period is more than worth it when AI removes more than 100,000 from privilege review.

With that in mind, here's how you could measure the efficacy of a predictive AI priv classifier.

Quality: Does AI make effective predictions? AI privilege classifiers can be very effective at identifying privilege, including catching documents that other methods miss. A client in one real-life matter used our classifier in combination with search terms—and our classifier found 1,600 privileged docs that weren't caught by search terms. Without the classifier, the client would have faced painful disclosures and clawbacks.

Speed: Does predictive AI help you move faster? AI can accelerate review in multiple ways. Some legal teams use the percentages assigned by their AI priv classifier to prioritize review, starting with the most likely docs and reviewing the rest in descending order. Some use the percentages to cull the review population, removing docs below a certain percentage and reviewing only those docs that meet a certain threshold of likelihood. One of our clients often does both. For 1L review, they prioritize docs that score in the middle. Docs with extremely high or low percentages are culled: The most likely docs go straight to 2L review, while the least likely docs go straight to production. By using this method during a high-stakes Second Request, the client was able to remove 200,000 documents from privilege review.

Cost: Does predictive AI save you money? Improving speed and quality can also improve your bottom line. During the Second Request mentioned above, our client saved 8,000 hours of attorney time and more than $1M during privilege review.
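To make the score-based routing described above concrete, here is a minimal Python sketch. It is illustrative only, not Lighthouse's implementation: the cutoff values and the route_document function are hypothetical, and real cutoffs would be validated by sampling each band with attorney reviewers.

```python
# Illustrative sketch: routing documents by a privilege classifier's score.
# The thresholds are hypothetical; real cutoffs are chosen by validating
# samples of each band with attorney reviewers.

HIGH_CUTOFF = 0.85  # scores above this: very likely privileged
LOW_CUTOFF = 0.15   # scores below this: very unlikely privileged

def route_document(doc_id: str, priv_score: float) -> str:
    """Assign a review path based on the model's privilege likelihood."""
    if priv_score >= HIGH_CUTOFF:
        return "2L_review"      # most likely docs go straight to 2L review
    if priv_score <= LOW_CUTOFF:
        return "production"     # least likely docs are culled from priv review
    return "1L_review"          # ambiguous middle band gets eyes-on 1L review

# Example scores as a trained classifier might produce them
scored_docs = {"DOC-001": 0.97, "DOC-002": 0.42, "DOC-003": 0.03}
for doc, score in scored_docs.items():
    print(doc, "->", route_document(doc, score))
```

The design point is the middle band: that is where the model is least certain, so that is where eyes-on review adds the most value.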
Measuring the efficacy of generative AI

Generative AI (or "gen AI") generates content, such as responses to questions or summaries of source content. Use cases for gen AI in eDiscovery vary widely—and so does the efficacy.

For our first gen AI solution, we picked a use case where efficacy is straightforward: privilege logs. In this case, we're not giving gen AI open-ended questions or a sprawling canvas. We're asking it to draft something very specific, for a specific purpose. That makes the quality and value of its output easy to measure.

This is another case where AI's performance is tied to a training period, which makes efficacy more significant in larger matters. After analysts train the AI on a few thousand priv logs, the model can generate tens of thousands on its own.

Given all that, here's how you might measure efficacy for gen AI.

Quality: Does gen AI faithfully generate what you're asking it to? This is often tricky, as discussed in an earlier blog post about AI and accuracy in eDiscovery. Depending on the prompt or situation, gen AI can do what you ask it to without sticking to the facts. So for gen AI to deliver on quality and defensibility, you need a use case that affords: Control—AI analytics experts should be deeply involved, writing prompts and setting boundaries for the AI-generated content to ensure it fits the problem you're solving for. Control is critical to drive quality. Validation—Attorneys should review and be able to edit all content generated by AI. Validation is critical to measure quality.

Our gen AI priv log solution meets these criteria. AI experts guide the AI as it generates content, and attorneys approve or edit every log the AI generates. As a result, the solution reliably hits the mark. In fact, outside counsel has rated our AI-generated log lines better than log lines by first-level contract attorneys.

Speed: Does gen AI help you move faster? If someone (or something) writes content for you, it's usually going to save you time. But as I said above, you shouldn't accept whatever AI generates for you. Consider it a first draft—one that a person needs to review before calling it final. But reviewing content is a lot faster than drafting it, so our priv log solution and other gen AI models can definitely save you time.

Cost: Does gen AI save you money? Giving AI credit for cost savings can be hard with many use cases. If you use gen AI as a conversational search engine or case-strategy collaborator, how do you calculate its value in dollars and cents? But with priv logs, the financial ROI is easy to track: What do you spend on priv logs with gen AI vs. without? Many clients have found that using our gen AI for the first draft is cheaper than using attorneys.
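As a rough illustration of the control-and-validation pattern described above, here is a hedged Python sketch of how a tightly scoped priv log prompt might be assembled. The build_priv_log_prompt function, its fields, and the llm_client call are hypothetical, not Lighthouse's actual prompts or tooling; the point is that an expert-written template bounds what the model may generate, and an attorney edits the result before it is final.

```python
# Hypothetical sketch of a constrained prompt for drafting one priv log line.
# The template and field names are illustrative, not a real product's prompts.

def build_priv_log_prompt(author: str, recipients: list[str],
                          doc_date: str, priv_basis: str) -> str:
    """Assemble a tightly scoped prompt from reviewed document metadata."""
    return (
        "Draft a one-sentence privilege log description.\n"
        "Use ONLY the facts below; do not add names, dates, or subject matter.\n"
        f"Author: {author}\n"
        f"Recipients: {', '.join(recipients)}\n"
        f"Date: {doc_date}\n"
        f"Privilege basis: {priv_basis}\n"
    )

prompt = build_priv_log_prompt(
    author="J. Doe (in-house counsel)",
    recipients=["A. Smith (CFO)"],
    doc_date="2021-03-04",
    priv_basis="attorney-client communication seeking legal advice",
)
print(prompt)
# draft = llm_client.complete(prompt)  # hypothetical LLM call
# The draft is then reviewed and edited by an attorney before it is final.
```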
Where can AI be effective for you?

This post started with one question—How can AI make your work better?—but you can't answer it without also asking where. Where are you thinking about applying AI? Where could your team benefit the most?

So much about efficacy depends on the use case. It determines which type of AI can deliver what you need. It dictates what to expect in terms of quality, speed, and cost, including how easy it is to measure those benefits and whether you can expect much benefit at all.

If you're struggling to figure out what benefits matter most to you and how AI might deliver on them, sign up to receive our simple guide to thinking about AI below. It walks through seven dimensions of AI that are changing eDiscovery, including sections on efficacy, ethics, job impacts, and more. Each section includes a brief questionnaire to help you clarify where you stand—and what you stand to gain.
May 17, 2024
Blog

From Data to Decisions, AI is Improving Accuracy for eDiscovery

Learn through real scenarios how predictive and generative AI are helping to improve accuracy in eDiscovery.

You've heard the claims that AI can increase the accuracy of analytical tasks during eDiscovery. They're true when the AI in question is being developed responsibly through proper scoping, iterative testing, and validation. As we have known for over a decade in legal tech circles, the computational power of AI (and machine learning in particular) is perfectly suited to the large data volumes at play for many matters and the types of classification assessments required for document review.

But how much of a difference can AI make? And what impact do large language models (LLMs) have in the equation beyond traditional approaches to machine learning? How do these boosts in accuracy help legal teams meet deadlines, preserve budget, and achieve other goals?

To answer these questions, we'll look at several examples of privilege review from real-world matters. Priv is far from the only area where AI can make a difference, but for this article it'll help to keep a tight focus. Also, we've been enhancing privilege review with AI since 2019, so when it comes to accuracy, we have plenty of proof.

What accuracy means for the two primary types of AI

Before we explore examples, let's review the two relevant categories of AI and what they do in an eDiscovery context. Predictive AI leverages historical data to predict outcomes on new data. For eDiscovery, we leverage predictive AI to provide us with a metric on the likelihood a document falls under a certain classification (responsive, privileged, etc.) based on a previously coded training set of documents. Generative AI creates novel content based directly on input data. For eDiscovery, one example could be leveraging generative AI to develop summaries of documents of interest and answers to questions we may have about the facts present in these documents. In today's context, both types of AI are built with LLMs, which learn from vast stores of information how to navigate the nuances and peculiarities of language as people actually write and speak it. (In a previous post, we share more information about LLMs and the two types of AI.)

Because each of these types of AI is focused on different goals and has different outputs, predictive and generative AI also have different definitions of accuracy. Accuracy for predictive AI is tied to a traditional sense of the truth: How well can the model predict what is true about a given document? Accuracy for generative AI is more fluid: A generative AI model is accurate when it faithfully meets the requirements of whatever prompt it was given. If you ask it to tell you what happened in a matter based on the facts at hand, it may make up facts in order to be accurate to the prompt. Whether the response is true or is based on the facts of the matter depends on the prompt, tuning mechanisms, and validation.

All that said, both types of AI have use cases that allow legal teams to measure their accuracy and impact.
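For readers who want the predictive side made concrete, here is a short, hedged Python sketch of precision and recall, two standard ways to quantify the kind of predictive accuracy discussed in this post. The numbers are invented for illustration and do not come from any matter described here.

```python
# Illustrative precision/recall arithmetic with invented numbers.
# precision = of the docs the model flagged as privileged, how many truly are?
# recall    = of the truly privileged docs, how many did the model flag?

true_positives = 850    # flagged privileged and actually privileged
false_positives = 150   # flagged privileged but not privileged
false_negatives = 100   # privileged but missed by the model

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.0%}")  # 85%
print(f"recall:    {recall:.0%}")     # ~89%
```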
Priv classifiers prove to be more accurate than search terms

Our first example comes from a quick-turn government investigation of a large healthcare company. For this matter, we worked with counsel to train an AI model to identify privilege and ran it in conjunction with privilege search terms. The privilege terms came back with 250K potentially privileged documents, but the AI model found that more than half of them (145K) were unlikely to be privileged. Attorneys reviewed a sample of the disputed docs and agreed with the AI. That gave counsel the confidence they needed to remove all 145K from privilege review—and save their client significant time and money.

We saw similar results in another fast-paced matter. Search terms identified 90K potentially privileged documents. Outside counsel wanted to reduce that number to save time, and our AI privilege model did just that. Read the full story on AI and privilege review for details.

Let's return to our definition of accuracy for predictive AI: How well did the model predict what was true about the documents? Very well, and more accurately than search terms. Now what about generative AI?

Generative AI can draft more accurate priv logs than people

We have begun to use generative AI to draft privilege log descriptions. That's an area where defining accuracy is clear-cut: How well does the log explain why the doc is privileged? During the pilot phase of our AI priv log work, we partnered with a law firm to answer that very question. With permission from their client, the firm took privilege logs from a real matter and sent the corresponding documents through our AI solution. Counsel then compared the log lines created by our AI model against the original logs from the matter. They found that the AI log lines were 12% more accurate than those drafted by third-party contract reviewers. They also judged the AI log lines to be more detailed and less repetitious.

We have evidence from live matters as well. During one with a massive dataset and urgent timeline, outside counsel used our generative AI to create privilege logs and asked reviewers to QC them. During QC, half the log lines sailed through with zero edits, while the other half were adjusted only slightly. You can see what else AI achieved in the full case study about this matter.

More accurate review = more efficient review (with less risk)

Those accuracy numbers sound good—but what exactly do they mean for legal teams? What material benefits do you get from improving accuracy? Several, including:

Better use of attorney and reviewer time. With AI accurately identifying priv and non-priv documents, attorneys spend less time reviewing no-brainers and more time on documents that require more nuanced analysis. In cases where every document will be reviewed regardless, you can optimize review time (and costs) by sending highly unlikely docs to lower-cost contract resources and reserving your higher-priced review teams for close calls.

Opportunities for culling. Attorneys can choose a cutoff at a recall that makes sense for the matter (including even 100%) and automatically remove all documents under that threshold from review, sending them straight into production. This is a crisp, no-fuss way to avoid spending time and resources on documents highly unlikely to be privileged.

Lower risk of inadvertently producing privileged documents. Pretty simple: The better your system is for classifying privilege, the less likely you are to let privileged info slip through review.

What does accuracy mean to you?

I hope this post helps clarify how exactly AI can improve accuracy during eDiscovery and what other benefits that can lead to.
Now it’s time to consider what all this means to you, your team, and your work. How important is accuracy to you? How do you measure it? Where would it help to improve accuracy, and what would you get out of that? To help you think it through, we assembled a user-friendly guide that covers accuracy and six other dimensions of AI that change the way people think about eDiscovery today. The guide includes brief definitions and examples, along with key questions like the ones above to help you craft an informed, personal point of view on AI’s potential.
October 3, 2023
Blog

Law & Candor Season 12: Five Views of Innovation and Risk Impacting AI, eDiscovery, and Legal

In a year of unprecedented advancement in AI capabilities and economic uncertainty, legal teams and attorneys have been given both a compelling look into what the future of their work may look like and a sharp picture of today's challenges. With a critical eye on how to manage and capitalize on these dueling perspectives that define legal's current landscape, the guests on the new season of Law & Candor offer insights on a range of issues, including generative AI, new M&A guidelines and HSR rules, collaboration data, strategic partnerships, and the future of the industry. Listen for news, AI and technology updates, and best practices from leaders confronting these challenges and charting new paths forward.

Episode 1: The Power of Three: Maximizing Success with Law Firms, Corporate Counsel, and Legal Technology
Episode 2: What You Need to Know About the New FTC and DOJ HSR Changes
Episode 3: Why Your eDiscovery Program and Technology Need Scalability
Episode 4: Generative AI and Healthcare: A New Legal Landscape
Episode 5: The Great Link Debate and the Future of Cloud Collaboration

To keep up with news and updates on the podcast, follow Lighthouse on LinkedIn and Twitter. And check out previous episodes of Law & Candor at lighthouseglobal.com/law-and-candor-podcast. For questions regarding this podcast and its content, please reach out to us at info@lighthouseglobal.com.
September 8, 2023
Blog

Why Legal Teams Need to Reduce Repeated Document Review

Similar matters often pull in the same documents for review during eDiscovery. Many legal teams default to manually reviewing these documents for each matter, but this is quickly becoming untenable.

Legal teams can reduce the burden of repeated review through the application of advanced technology and proactive review strategies. They may encounter barriers, from limitations of their current tools to concerns about defensibility. But legal teams can take small steps now that overcome these barriers and prepare them to meet the time, budget, and other pressures they face today.

Repeated review exacerbates today's challenges

We're approaching a time when legal teams simply can't afford to review the same documents multiple times across matters. The size of modern datasets requires teams to reduce eyes-on review wherever possible. Meanwhile, repeated review of the same documents opens the door to inconsistency, error, and risk.

When teams succeed in reducing repeated review, they turn their most common pain points into new sources of value. They get more out of their review spend, help review teams work faster, and achieve the accuracy that they expect and the courts demand.

In an earlier post, we dig more deeply into when repeated review happens, what it costs, and how technology can support a different approach. If you're eager to explore solutions, that's a great place to start.

But many legal professionals are unable to think about solutions yet. They face a range of internal and external barriers that make it hard to move or even see past the status quo of repeated review. If that's the boat you're in, keep reading.

Changing your approach may appear daunting

Legal teams often have solid reasons for persisting with repeated review. These include:

Feasibility concerns: Every matter is unique, and teams may assume this means nothing of value carries over from one matter to another. Attorneys may distrust the decisions or data practices associated with prior matters. They prefer starting over from scratch, even if it means repeating work.

Technology barriers: Legacy tools and software lack the advanced AI necessary to save work product and apply learnings from matter to matter, but adopting new technology takes time and money that legal teams are wary of spending. Companies who use multiple vendors and eDiscovery review teams may store their data in multiple places, making it hard to reuse past work product.

It's true that no two matters are exactly alike, adopting new technology can be challenging, and it's hard to trust the reliability of something you've never used before. But this doesn't mean that repeated review is still the best option. As shown above, the costs are simply too high.

So, what do we do about these barriers? How are teams supposed to move past them? The answer: One step at a time.

Explore what's possible by starting small

These barriers are most formidable when you imagine rethinking your entire review approach. The idea of looking for potential document overlap across a huge portfolio, or finding and implementing a whole new technology suite, may be too overwhelming to put into action.

So don't think of it that way. Take small steps that explore the potential for reducing repeated review and chip away at the barriers holding you back. Instead of "all or nothing," think "test and learn."

Look ahead to future matters, perform hybrid QC on past decisions

To explore the feasibility of reducing repeated review, look at one matter with an eye on overlap. Does it share fundamental topics or custodians with any recent or future matters? Is it likely to have spin-off litigations, such as cases in other jurisdictions or a civil suit that follows a federal one?

To build trust in decisions made during prior matters, try performing QC with attorneys and technology working in tandem. This can provide a quick and informative assessment of past decisions and calibrate your parameters for review going forward.

Find a technology partner who meets you where you are

If your team lacks the technology to reuse work product, the right partner can right-size a solution for your needs and appetite. The hybrid QC example above applies here too. Many legal teams find that QC is an ideal venue for assessing the performance of advanced AI and getting a taste for how it works, because it's focused, confined, and accompanied by human reviewers. From there, your team might expand to using advanced AI on a single matter, and eventually, on multiple matters. In all cases, your partner can do the heavy lifting of operating the technology, while explaining each step along the way, with enough detail that you can articulate its use and merits in court (or can "tag in" to present that explanation for you). "The right partner" in this context is someone with the data science expertise to apply the technology in the ways you need, along with the legal experience to speak to your questions and need for defensibility.

Likewise, when data or case work are spread across multiple teams and locations, a savvy partner can still find ways to avoid duplicate work. This story about coordinating review across 9 jurisdictions is a great example.

Take your time—but do take action

The beauty of starting small is how it respects both the need to improve and the difficulty of making improvements. Changing something as intricate and important as your document review strategy won't happen overnight. That's okay. Take your time. But don't take repeated review as a given. It threatens timelines, budgets, and quality. And it's not your only option.

For more on the subject, including specific scenarios where teams can reduce repeated review, see our in-depth primer.
August 16, 2023
Blog

3 Reasons Traditional Document Review Isn’t Flexible Enough for Your Needs

Modern data volumes and complexity have ushered in a new era of document review. The traditional approach, in which paid attorneys manually review all or most documents in a corpus, fails to meet the intense needs of legal teams today. Specifically, legal teams need to:

• Scale their document review capability to cover massive datasets
• Rapidly build case strategy from key information hidden in those datasets
• Manage the risk inherent in having sensitive and regulated information dispersed across those datasets

Advanced review technology—including AI-powered search tools and analytics—enables teams to meet those needs, while simultaneously controlling costs and maintaining the highest standards of quality and defensibility. Rather than ceding decisions to a computer, reviewers are empowered to make faster decisions with fewer impediments. (For a breakdown of how technology sets reviewers up for success, see our recent blog post.) In a nutshell, legal teams that use advanced technology can be more flexible, tackling large datasets with fewer resources, and addressing strategy and risk earlier in the process.

A flexible approach to scale: refining the responsive set

Responsiveness is the center of all matters. With datasets swelling to millions of documents, legal teams must reduce the responsive set defensibly, cost-efficiently, and in a way they can trust.

With traditional eyes-on review, the only way to attempt this is to put more people or hours on the job. And this approach requires reviewers to make every coding decision, which is often mentally taxing and prone to error.

Advanced review technology is purpose-built to scale for large datasets and provide a more nuanced assessment of responsiveness. Namely, it assigns a probability score—say, a given document is 90% or 45% likely to be responsive—that you can use to guide the review team. Often this means reviewers start with the highest-probability docs and then proceed through the rest, eventually making their way through the whole corpus. But legal teams have a lot of flexibility beyond that. Combining machine learning with rules-based linguistic models can make responsive sets vastly more precise, decreasing both risk and downstream review costs.

Using this approach, machine learning is leveraged for what it does best—identifying clearly responsive and clearly non-responsive materials. For documents that fall in the middle of machine learning's scoring band—those the model is least certain about—linguistic models built by experts target responsive language found in documents reviewed by humans, and then expand out to find documents with similar language markers. This approach allows legal teams to harness the strengths of both computational scalability and human reasoning to drive superior review outcomes.

A flexible approach to strategy: finding key documents faster

Only about 1% to 1.5% of a responsive set consists of key documents that are central to case planning and strategy. The earlier legal teams get their hands on those documents, the sooner they can start on that invaluable work.

Whereas it takes months to find key documents with traditional review, advanced technology shortens the process to mere weeks. This is because key document identification utilizes complex search strings that include key language in context. For example, "Find documents with phrase A, in the vicinity of phrases B, C, and D, but not in documents that have attributes E and F," and so on. A small team of linguistic experts drafts these searches and refines them as they go, based on feedback from counsel; a sketch of what such a search might look like follows below.
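As a rough illustration (not Lighthouse's actual tooling), the Python sketch below shows how a rules-based proximity search of this kind might be expressed. The phrases, window size, and exclusions are hypothetical placeholders.

```python
# Illustrative sketch of a rules-based proximity search, in the spirit of
# "phrase A near phrases B, C, and D, but not with attributes E and F."
import re

def near(text: str, anchor: str, cues: list[str], window: int = 200) -> bool:
    """True if any cue phrase appears within `window` characters of the anchor."""
    for m in re.finditer(re.escape(anchor), text, re.IGNORECASE):
        lo, hi = max(0, m.start() - window), m.end() + window
        neighborhood = text[lo:hi]
        if any(re.search(re.escape(c), neighborhood, re.IGNORECASE) for c in cues):
            return True
    return False

def matches(doc_text: str) -> bool:
    # Hypothetical anchor, cue phrases, and exclusions for illustration only
    hit = near(doc_text, "pricing approval", ["discount", "rebate", "margin"])
    excluded = re.search(r"newsletter|press release", doc_text, re.IGNORECASE)
    return hit and not excluded

print(matches("Re: pricing approval for Q3 rebate program"))  # True
```

In practice, searches like this are drafted, tested against reviewed documents, and iteratively refined, which is why expert feedback loops matter as much as the syntax itself.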
A small team of linguistic experts drafts these searches and refines them as they go, based on feedback from counsel. In one recent matter, this approach to key document identification proved 8 times faster than manual review, and more than 90% of the documents it identified had been missed or discarded by the manual review team.

The speed and iterative nature of this process are what enable legal teams to be more flexible with case strategy. First, they have more time to choose and change course. Second, they can guide the search team as their strategy evolves, ensuring they end up with exactly the documents they need to make the strongest case.

A flexible approach to risk: assessing privilege and PII sooner and more cost-effectively

Reviewing for privilege is a notoriously slow and expensive part of eDiscovery. When following a traditional approach, you can't even start this chore until after the responsive set is established.

With advanced technology, you can review for privilege, PII, and other classifications at the same time that the responsive set is being built. This shortens your overall timeframe and gives you more flexibility to prepare for litigation.

Legal teams can even be flexible with their privilege review budget. As with responsiveness, advanced technology will rate how likely a document is to be privileged. Legal teams can choose to send extremely high- and low-scoring documents to less-expensive review teams, since those documents have the least ambiguity. Documents that score in the middle have the most ambiguity, so they can be reviewed by premium reviewers. (A minimal sketch of this score-based routing appears at the end of this post.)

It's all about options in the end

Broadly speaking, the main benefit of supporting document review with advanced technology is that it gives you a choice. Legal teams have the option to start key tasks sooner, calibrate the amount and level of eyes-on review, and strategize how they use their review budgets. With linear review, those options aren't available. Legal teams that give themselves these options, by taking advantage of supportive technology, are better able to scale, strategize, and manage risk in the modern era of document review.
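As referenced above, here is a minimal sketch of routing documents to review tiers by model score. The thresholds and field names are illustrative assumptions; in practice, cutoffs are validated statistically rather than hard-coded.

```python
def route_by_score(docs, low=0.15, high=0.85):
    """Send unambiguous (very high or very low scoring) documents to the
    standard tier and ambiguous middle-band documents to premium reviewers."""
    tiers = {"standard": [], "premium": []}
    for doc in docs:
        score = doc["privilege_score"]  # assumed model output in [0, 1]
        tier = "premium" if low < score < high else "standard"
        tiers[tier].append(doc)
    return tiers

scored = [{"id": 1, "privilege_score": 0.97},
          {"id": 2, "privilege_score": 0.52},
          {"id": 3, "privilege_score": 0.03}]
print(route_by_score(scored))  # doc 2 lands in the premium (ambiguous) tier
```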
July 10, 2023
Blog

To Reduce Risk and Increase Efficiency in Investigations and Litigation, Data is Key

Handling large volumes of data during an investigation or litigation can be anxiety-inducing for legal teams. Corporate datasets can become a minefield of sensitive, privileged, and proprietary information that legal teams must identify as quickly as possible in order to mitigate risk. Ironically, corporate data also provides a key to speeding up and improving this process. By reusing metadata and work product from past matters in combination with advanced analytics, organizations can significantly reduce risk and increase efficiency during the review process.

In a recent episode of Law & Candor, I discussed the complex nature of corporate data and ways in which the work done on past matters—coupled with analytics and advanced review tools—can be reused and leveraged to reduce risk and increase efficiency for current and future matters. Here are my key takeaways from the conversation.

From burden to asset: leveraging data and analytics to gain the advantage

The evolution of analytical tools and technologies continues to change the data landscape for litigation and investigations. In complex matters especially—think multi-district litigation, second requests, and large multi-year projects with multiple review streams—the technology and analytics that can now be applied to find responsive data not only help streamline the review process but can extend corporate knowledge beyond a single matter for a larger purpose. Companies can now use their data to their advantage, transforming it from a liability into an asset.

Prior to standardization around threading and TAR and CAL workflows, repository models were the norm. Reuse of issue coding was the best way to gain efficiency, but each matter still began with a clean slate. Now, with more sophisticated analytics, it's not just coding and work product that can be reused. The full analysis that went into making coding decisions can be applied to other matters, so that the knowledge gained from a review and from the data itself is not lost as new matters come along. This results in greater overall efficiencies—not to mention major cost savings—over time.

Enhanced tools and analytics reduce the risk of PII, privilege, and other sensitive data exposure

With today's data volumes, the more traditional methods used in review, such as search terms and regular expressions (regex), can often result in high recall with low precision. That is, such a wide net is cast that a lot of insignificant data is captured, while data that does matter can be missed. Analytical modeling can help avoid that pitfall by leveraging prior work product and coding to reduce the size of the data population from the outset—sometimes by as much as 90%—and to help find information that more traditional tools often miss.

This is especially impactful when it comes to PII, PHI, and privileged or other sensitive data that may be in the population, because the risk of exposure is significantly reduced as accuracy increases.
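A small illustration of the recall-versus-precision problem: a standard regex for U.S. Social Security numbers happily matches any nine digits in the same shape. The pattern and samples below are illustrative only.

```python
import re

# A typical SSN pattern: high recall, low precision. Any 3-2-4 digit string
# matches, including ticket numbers and other non-PII identifiers.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

samples = [
    "Employee SSN: 123-45-6789",              # true hit
    "See support ticket 555-12-3456 re: NDA"  # false positive
]
for text in samples:
    print(bool(SSN.search(text)), "-", text)
```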
Upfront costs may seem like a barrier, but downstream cost savings in review make up for it

When technology and data analytics are used to reduce data volume from the beginning, efficiencies are gained throughout the entire review process; there are exponential gains moving forward in terms of both speed and cost. Unfortunately, the upfront costs may seem steep to the uninitiated, which is likely the main barrier to wider adoption of many advanced technologies. The initial outlay before a project even begins can be perceived as a challenge for eDiscovery cost centers. Also, it can be very difficult for any company to keep up with the rapid evolution of both the complex data landscape and the analytics tools available to address it—the options can seem overwhelming. Finding the right technology partner with both expertise and experience in the appropriate analytics tools and workflows is crucial for making the transition to a more effective approach. A good partner should be able to understand the needs of your company and provide the necessary statistics to support and justify a change. A proof-of-concept exercise is a way to provide compelling evidence that any up-front expenditure will more than justify a revised workflow that will exponentially reduce the costs of linear document review.

How to get started

Seeing is believing, as they say, and the best way to demonstrate that something works is to see it in action. A proof-of-concept exercise with a real use case—run side-by-side with the existing process—is an effective way to highlight the efficiencies gained by applying the appropriate analytics tools in the right places. A good consulting partner, especially one familiar with the company's data landscape, should be able to design such a test to show that the downstream cost savings will justify the up-front spend, not just for a single matter but for other matters as well. (A back-of-the-envelope version of this comparison is sketched at the end of this post.)

Cross-matter analysis and analytics: the new frontier

TAR and CAL workflows, which are finally finding wider use, should be the first line of exploration for companies not yet well-versed in how these workflows can optimize efficiency. But that is just the beginning. Advanced analytics tools add an additional level of robustness that can put those workflows into overdrive. Cross-matter analysis and analytics, for example, can address important questions: How can companies use the knowledge and work product gleaned from prior matters and apply them to current and future matters? How can such knowledge be pooled and leveraged, in conjunction with AI or other machine learning tools, to create models that will be applicable to future efforts?

Marrying the old-school data repository concept with new analytics tools is opening a new world of possibilities that we're just beginning to explore. It's a new frontier, and the most intrepid explorers will be the ones who reap the greatest benefits. For more information on data reuse and other review strategies, check out our review solutions page.
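For illustration, here is a back-of-the-envelope cost comparison of the kind a proof of concept might quantify. Every figure below is a hypothetical assumption, not a benchmark.

```python
# Hypothetical inputs: 2M documents, an assumed per-document review rate,
# an assumed 90% volume reduction from analytics, and an assumed tool cost.
docs = 2_000_000
rate_per_doc = 1.10
cull_ratio = 0.90
analytics_cost = 250_000

linear = docs * rate_per_doc
assisted = analytics_cost + docs * (1 - cull_ratio) * rate_per_doc
print(f"Linear review:   ${linear:>12,.0f}")
print(f"Assisted review: ${assisted:>12,.0f}")
print(f"Savings:         ${linear - assisted:>12,.0f}")
```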
March 26, 2021
Blog

Legal Tech Innovation: Learning to Thrive in an Evolving Legal Landscape

The March sessions of Legalweek took place recently, and as with the February sessions, the virtual event struck a chord that reverberated deep from within the heart of a (hopefully) receding pandemic. However, the discussions this time around focused much less on the logistics of working in a virtual environment and much more on getting back to the business of law. One theme in particular stood out from those discussions: the idea that legal professionals will need to have a grasp on the technology that is driving our new world forward, post-pandemic.

In other words, the days when attorneys somewhat-braggingly painted a picture of themselves as Luddites holed up in cobwebbed libraries are quickly coming to an end. We live in an increasingly digital world—one where our professional communications are taking place almost exclusively on digital platforms. That means each of us (and our organizations and law firms) is generating more data than we know what to do with. That trend will only grow in the future, and attorneys who are unwilling to accept that fact may find themselves entombed within those dusty libraries.

Fortunately, despite our reputation for being slow to adapt, legal professionals are actually an innovative, flexible bunch. Whether a matter requires us to develop expertise in a specific area of the medical field, learn more about a niche topic in the construction industry, or delve into some esoteric insurance provision, we dive in and become lay experts so that we can effectively advocate for our clients and companies. Thus, there is no doubt that we can and will evolve in a post-pandemic world. However, if anyone out there is still on the fence, below are four key reasons why attorneys will need to become tech savvy, or at least knowledgeable enough to understand when to call in technical expertise.

1. Technological Competence is Imposed by Ethics and Evidence Rules

First and foremost, attorneys have an ethical duty (under ABA Model Rule 1.1) to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." Thirty-seven states have adopted this language within their own attorney ethics rules. Thus, just as we have a duty to continue our legal education each year to stay abreast of changes in law, we also have an ethical duty to continue to educate ourselves on the technology that is relevant to our practice.

We also have a duty to preserve and produce relevant electronically stored information (ESI) (under both the Federal Rules of Civil Procedure (FRCP) and the ABA model ethics rules)[1] during civil litigation. To do so, attorneys must understand (or work with someone who understands) where their client's or company's relevant ESI evidence is, how to preserve it, how to collect it, and how to produce it. This means preserving and producing not only the documents themselves but also the metadata (i.e., the information about the data itself, including when it was generated and edited, who created it, etc.). This overall process grows more complicated with each passing year, as companies migrate to the unlimited storage opportunities of the Cloud and employees increasingly communicate through cloud-based collaboration platforms.
Working within the Cloud has a myriad of benefits, but it can make it more difficult for attorneys to understand where their client's or company's relevant information might be stored, as well as harder to ensure metadata is preserved correctly.

Together, these rules and obligations mean that whether we are practicing law within a firm or as in-house counsel at an organization, we have a duty to understand the basics of the technology our clients are using to communicate—so that, at the very least, we will know when to call in technical experts to meet the ethical and legal obligations we owe to those we counsel.

2. Data Protection and Data Privacy Are Becoming Increasingly Important

The data privacy landscape is becoming a tapestry of conflicting laws and regulations that companies are currently navigating as best they can. Within the United States alone, a multitude of state and local laws regulating personal data came into effect or were introduced in 2020. For companies that have a global footprint, the worldwide data protection landscape is even more complicated—from the invalidation of the EU-US Privacy Shield to new laws and modifications of data protection laws across the Americas and Asia-Pacific countries. It will not be long before most companies, no matter their location, will need to ensure that they are abiding by multiple jurisdictions' data privacy laws.

This means that attorneys who represent those companies will need to understand not only where personal data is located within the company, but also how the company is processing that data, how (and if) that data is being transmitted across borders, when (and if) it needs to be deleted, the process for effectively deleting it, and so on. To do so, attorneys must also have at least some understanding of the technology platforms their companies and clients are using, as well as how data is stored and transferred within those platforms, to ensure they are not inadvertently running afoul of data privacy laws.

As for data protection, attorneys need to understand how to proactively protect and safeguard their clients' data. There have been multiple high-profile data breaches in the last few months, and law firms and companies that routinely house personal data are often the targets of those breaches. Protecting client data requires attorneys to have at least a basic understanding of where client data is and how to protect it properly, including knowing when and how to hire experts who can best offer the right level of protection.

3. Internal Compliance is Becoming More Technologically Complicated

There has been a lot of interest recently in using artificial intelligence (AI) and analytics technology to monitor internal compliance within companies. This is in part due to the massive amount of data that compliance teams now need to comb through to detect inappropriate or illegal employee conduct. From monitoring departing employees to ensure they aren't walking out the door with valuable trade secret information, to monitoring digital interactions to ensure a safe work environment for all employees, companies are looking to leverage advances in technology to more quickly and accurately spot irregularities and anomalies within company data that may indicate employee malfeasance.

Not only will this type of monitoring require an understanding of analytics and AI technology, but it will also require grasping the intricacies of the company's data infrastructure.
Compliance and legal teams will need to understand the technology platforms in place within their organization, where employees are creating data within those platforms, and how employees interact with each other within them.

4. The Ability to Explain Technology Makes Us Better Advocates

Finally, it is important to note that the ability to understand and explain the technology we are using makes us better and more effective advocates. For example, within the eDiscovery space, it can be incredibly important for our clients' budgets and case outcomes to attain court acceptance of AI and machine-learning technology that can drastically limit the volume of data requiring expensive and tedious human review. To do so, attorneys often must first be able to get buy-in from their own clients, who may not be well versed in eDiscovery technology. Once clients are on board, attorneys must then educate courts and opposing counsel about the technology in order to gain approval and acceptance.

In other words, to prove that the methods we want to use (whether those methods relate to document preservation and collection, data protection, compliance workflows, or eDiscovery reviews) are defensible and repeatable, attorneys must be able to explain the technology behind those methods. And as in all areas of law, the most successful attorneys are the ones who can take a very complicated, technical subject and break it down in a way that clients, opposing counsel, judges, and juries can understand (or, alternatively, are knowledgeable enough about the technology to know when it is necessary to bring experts in to help make their case).

Best Practices for Staying Abreast of Technology

• Reach out to technology providers to ask for training and tips when needed. When evaluating providers, look for those that offer ongoing training and support.
• For attorneys working as in-house counsel, work to build healthy partnerships with compliance, IT, and data privacy teams. Being able to ask questions and learn from each other will help head off technology issues for your company.
• For attorneys working within law firms, work to understand your clients' data infrastructure or layout. This may mean talking to their IT, legal, and compliance teams so that you can ensure you are up to date on changes and processes that affect your ability to advocate effectively for your client.
• Look for CLEs, trainings, and vendor offerings that are specific to the technology you and your clients use regularly. Remember that cloud-based technology, in particular, changes and updates often. It is important to stay on top of the most recent changes to ensure you can effectively advocate for your clients.
• Recognize when you need help. Attorneys don't need to be technological wizards in order to practice law; however, you will need to know when to call in experts, and that will require a baseline understanding of the technology at issue.

To discuss this topic more, feel free to connect with me at smoran@lighthouseglobal.com.

[1] ABA Model Rule 3.4, FRCP 37(e), and FRCP 26
April 27, 2021
Blog

Legal Tech Innovation: Gaining Trust in New Technology and Processes

Legalweek's April conference took place recently, and as with the sessions earlier this year, the April thought leadership panels touched on many of the struggles we are all facing in the legal technology space. But where the February sessions focused on the post-pandemic future of legal technology and the March sessions focused on getting back to the business of law, the April sessions weaved in a more nuanced theme: obtaining organizational buy-in from stakeholders around legal technology and processes.

The need for stakeholder buy-in for any type of legal technology change is imperative. Without it, organizations and law firms stop evolving and become stagnant as more agile competitors onboard better, more efficient processes, tools, and teams. But perhaps more importantly, being unable to obtain stakeholder involvement and approval can also end up leaving companies and law firms open to risk.

For an example of the ramifications of failing to obtain the necessary buy-in, let's take a look at a legal technology process that many organizations and law firms have been struggling to implement recently: defensible disposal of legacy data. Without an effective defensible data disposal process and policy, data volumes can balloon out of control—especially in a Cloud environment—meaning that organizations and law firms will needlessly waste money storing obsolete data that should have been disposed of previously. But it also can increase risk in several ways. For starters, legacy data may contain personally identifiable information (PII) that organizations may be legally required to dispose of after a specified time period, pursuant to sectoral or jurisdictional data privacy laws. Even if personal data does not fall within the purview of a disposal requirement, keeping it for longer than it is needed for business purposes can still pose a risk should the company or firm holding it suffer a data breach or ransomware attack. Additionally, even obsolete non-personal data can cause confusion, disruption, and increased cost and risk if it winds up subject to a legal hold or swept up in an internal investigation. But despite all this, implementing an effective defensible data disposal program is a challenge for many because it often requires sweeping organizational buy-in, from the highest C-Suite executive to the lowliest employee with access to a company-sponsored collaboration platform.

So how can legal teams get the buy-in necessary to implement new legal technology and processes that enable organizations and law firms to compete and evolve? It is tempting to think that buy-in starts with learning to control stakeholders. But attempting to control other teams and individuals will only lead to misalignment, tension, and failed implementation. Instead, gaining stakeholder buy-in actually starts with trust. Stakeholders must trust that whatever you are proposing to implement (whether that is a new technology, a new policy, or a new workflow) will be beneficial to them, to their team, and to the organization as a whole, and that implementation is actually feasible. Below I have outlined a few tips for gaining stakeholder trust and buy-in for new legal technology and processes.

Identify all the necessary stakeholders.
Whether you want to onboard a new legal technology or implement a new legal data policy, like an updated document retention schedule, you will need to understand who the decision makers are, as well as identify anyone who will be affected by the new tools, processes, or workflows.

Prepare, Prepare, Prepare. Once you have identified the stakeholders and all those affected by the planned change, you can start preparing to gain their trust. This means doing all the necessary research and legwork up front so that you are well informed and have a fully developed, practical plan in place to present to those stakeholders. For instance, if you are seeking to onboard advanced AI technology to help streamline your eDiscovery program, you can prepare to gain trust by first talking to peers in the industry, as well as legal technology providers, to find the best technology and pricing options. Once you've selected an option, choose a test case and run a proof of concept to validate the tool's effectiveness on your own data.

Run the numbers. Once you've done the research and are satisfied that the new technology or workflow will be a good fit for your organization, quantify that fit by focusing on the bottom line. How much money will this save your organization or law firm? How much risk can it eliminate, and how can you quantify that risk? How will this new process or tool improve efficiency, and how much money will that efficiency save? What is at stake if this new technology or process is not implemented, and how can you quantify that? What is your plan for how this new tool or process will be funded by the organization or law firm?

Stop, Collaborate, and Listen. Once you have identified all relevant stakeholders and collected the data, it is time to gather everyone together to present your research (either individually or via cross-organizational working groups or teams). Note that the order in which you present data to stakeholders will depend on your organization or law firm. For some, it may be best to get management and executives on board first to help drive change further downstream. In others, it may be more impactful to get lower-level teams on board before presenting to final decision makers. Whichever order you choose, it is imperative to listen and accept feedback once you've made your pitch. Remember, this process will be iterative. It will require you to be flexible and possibly deviate from your original plan. It may also necessitate going back to the drawing board completely and selecting a different workflow or tool that works better for other groups. It may end up changing your desired implementation timeline. But the key to gaining trust from stakeholders is to get them involved early and listen to their feedback regarding planning, onboarding, and implementation.

Retain Trust. Congratulations! Once all stakeholders have come to a consensus and you have achieved buy-in from all necessary decision makers, you are ready to implement and onboard. But that is not the end of this process. After implementation, you will need to protect the trust you have worked so hard to earn. You can do this by ensuring that everyone has the necessary training to effectively use the tool or abide by the new workflow or process. Nothing erodes trust more than incorrect (or non-existent) utilization.
Whether you're seeking to onboard a new eDiscovery platform or rolling out a new legal hold technology, people who are affected by the change will need to understand how to use the technology and/or comply with the program. Set up training programs, and then have avenues of ongoing support where people can ask questions and continue to train should they need it.

I hope these tips come in handy when you are looking for buy-in from stakeholders around legal technology and processes. To discuss this topic more, feel free to connect with me at smoran@lighthouseglobal.com.
December 20, 2022
Blog

Why You Need a Specialized Key Document Search Team in Multi-District Litigation

Few things are more ominous to a company's in-house counsel than the prospect of facing thousands of individual lawsuits across 30-40 jurisdictions, alongside various other companies, in a multi-district litigation (MDL) proceeding. In-house teams can, of course, lean on the expertise of external law firms that have strong backgrounds in MDLs. However, even for experienced law firms, coordinating an individual company's legal defense with other law firms and in-house counsel within a joint defense group (JDG) can be a Sisyphean task. But this coordination is integral to achieving the best possible outcome for each company, especially when it comes to identifying and sharing the documents that will drive the JDG's litigation strategies.

An MDL can involve millions of documents, emanating from multiple companies and their subsidiaries. Buried somewhere within that complicated web of data is a small number of key documents that tell the story of what actually happened—the documents that explain the "who, what, where, and when" of the litigation. Identifying those documents is critical so that JDG counsel can understand the role each company played (or didn't play) in the plaintiffs' allegations, and then craft and prepare their defense accordingly. And the faster those documents are identified and shared across a JDG, the better and more effective that defense strategy and preparation will be. In short: a strong and coordinated key document search strategy, specific to the unique ecosystem of an MDL, is crucial for an effective defense.

Ineffective search strategies leave litigators out at sea

Unfortunately, outdated or ineffective search methodologies are often still the norm rather than the exception. The two most common strategies were created to find key documents in smaller, insular litigation proceedings involving one company. They are also relics of a time when the average data volumes involved in litigation were much smaller. Those two strategies are: one, relying on linear document review teams to surface key documents as they review documents one by one in preparation for production; and two, relying on attorneys from the JDG's counsel teams to arbitrarily search datasets using whatever search terms they think may be effective. Let's take a deeper look at each of these methodologies and why they are both ineffective and expensive:

Relying on linear review teams to find key documents. Traditional linear review teams are often made up of dozens or even hundreds of contract attorneys with no coordination around key document searches and little or no day-to-day communication with JDG counsel. Each attorney reviewer may also see only a tiny fraction of the entire dataset and have a skewed view of which documents are truly important to the JDG's strategy. The results are often both overinclusive (with thousands of routine documents labeled "key" or "hot" that JDG counsel must wade through) and underinclusive (with truly important documents left unflagged and unnoticed by review teams). This search method is also painfully slow. Key documents are only incidentally surfaced by the review team if they notice them while performing their primary responsibility—responsive review.

Relying on attorneys from JDG counsel teams. Relying on individual attorneys from the JDG's outside counsel to perform keyword searches to find key documents is also ineffective and wastefully expensive.
Without a very specific, coordinated search plan, attorneys are left running whatever searches each thinks might be effective. This strategy inevitably risks the plaintiffs finding critical documents first, leaving defense deposition witnesses unprepared and susceptible to ambush. This search methodology is also a dysfunctional use of attorney time and legal spend. Merits counsel's value is their legal analytical skillset—i.e., their ability to craft the best litigation strategy with the evidence at hand. Most attorneys are not technologists or linguistic experts. Asking highly skilled attorneys to craft the most effective technological and linguistic data search is a bit like asking an award-winning sushi chef to jump aboard a fishing vessel, navigate to the best fishing spot, select the best bait, and reel in the fish the chef will ultimately serve. Both jobs require a highly specialized skillset, and both are essential to the end goal of delighting a client with an excellent meal. But paying the chef to perform the fisherman's job would be ineffective and a waste of the chef's skillset and time.

Both of these search strategies are also reactive rather than proactive, which drives up legal costs, wastes valuable resources, and worsens outcomes for each company in a JDG.

A better approach to MDL preparation and strategy

Fortunately, there is a more proactive, cost-efficient, holistic, and effective way to identify the key documents in an MDL environment. It involves engaging a small team of highly trained linguists and technology search experts who can leverage purpose-built technology to find the best documents for preparing effective litigation strategies across the entire MDL data landscape. A specialized team with this makeup provides a number of key advantages:

Precise searches and results—Linguistic experts can carefully craft narrow searches that consider the nuance of human language to more effectively find key documents. A specialized search team can also employ thematic search strategies across every jurisdiction. This provides counsel with a critical high-level overview of the evidence that lies within the data for each litigation, enabling each company to make better, more informed decisions much earlier in the process.

Quick access to key documents—Technology experts leveraging advanced AI and analytics can ensure potentially damaging documents bubble up to the surface—even in the absence of specific requests from JDG counsel. Compare this to waiting for those documents to be found by contract attorneys as they review an endless stream of documents, one by one, during the linear review process.

A flexible offensive and defensive litigation strategy—A team of this size and composition can react more nimbly, circulate information faster, and respond more quickly to changes in litigation strategy. For example, once counsel has an overview of the important facts, the search team can begin to narrow their focus to arm counsel with the data needed for both offensive and defensive litigation strategies. The team will be incredibly adept at analyzing incoming data provided by opposing counsel—flagging any gaps and raising potential deposition targets. Defensively, they can be used by counsel to get ahead of any potentially damaging evidence and identify every document that bolsters potential defense arguments.
An expert partner throughout the process—A centralized search team is able to act as a coordinated "search desk" for all involved counsel, as well as a repository and "source of truth" for institutional knowledge across every jurisdiction. As litigation progresses, the search team becomes the right hand of counsel—using their knowledge and expertise to prepare deposition and witness preparation binders and performing ad-hoc searches for counsel. Once a matter goes to trial in one jurisdiction, the search team can use the information gleaned from that proceeding to inform their searches and strategy for the next case.

Conclusion

Facing a complex MDL is an undoubtedly daunting process for any company. But successfully navigating this challenge will be downright impossible if counsel is unable to quickly find and understand the key facts and issues that lie buried within massive volumes of data. Traditional key document search methodologies are no longer effective at providing that information to counsel. For a better outcome, companies should look for small, specialized search teams made up of linguistic and technology experts. These teams will be able to build a scalable and effective search strategy tailor-made for the unique data ecosystem of a large MDL—thereby proactively providing counsel with the evidence needed to achieve the best possible outcome for each company.
September 16, 2021
Blog

What is the Future of TAR in eDiscovery? (Spoiler Alert – It Involves Advanced AI and Expert Services)

Since the dawn of modern litigation, attorneys have grappled with finding the most efficient and strategic method of producing discovery. However, the shift to computers and electronically stored information (ESI) within organizations since the 1990s exponentially complicated that process. Rather than sifting through filing cabinets and boxes, litigation teams suddenly found themselves looking to technology to help them review and produce large volumes of ESI pulled from email accounts, hard drives, and, more recently, cloud storage. In effect, because technology changed the way people communicated, the legal industry was forced to change its discovery process.

The Rise of TAR

Due to growing data volumes in the mid-2000s, the process of large teams of attorneys looking at electronic documents one by one was becoming infeasible. Forward-thinking attorneys again looked to technology to help make the process more practical and efficient—specifically, to a subset of artificial intelligence (AI) technology called "machine learning" that could help predict the responsiveness of documents. This process of using machine learning to score a dataset according to the likelihood of responsiveness, in order to minimize the amount of human review, became known as technology assisted review (TAR).

TAR proved invaluable because machine learning algorithms' classification of documents enabled attorneys to prioritize important documents for human review and, in some cases, avoid reviewing large portions of documents. With the original form of TAR, a small number of highly trained subject matter experts review and code a randomly selected group of documents, which is then used to train the computer. Once trained, the computer can score all the documents in the dataset according to the likelihood of responsiveness. Using statistical measures, a cutoff point is determined, below which the remaining documents do not require human review because they are deemed statistically non-responsive to the discovery request. (A minimal sketch of this workflow appears below.)

Eventually, a second iteration of TAR was developed. Known as TAR 2.0, this second iteration is based on the same supervised machine learning technology as the original TAR (now known as TAR 1.0)—but rather than the one-time training of TAR 1.0, TAR 2.0 uses a process that continuously learns from reviewer decisions. This eliminates the need for highly trained subject matter experts to train the system with a control set of documents at the outset of the matter. TAR 2.0 workflows can help sort and prioritize documents as reviewers code, constantly funneling the most responsive to the top for review.
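Here is a minimal sketch of that TAR 1.0-style workflow using scikit-learn. The model choice, features, tiny seed set, and cutoff are all illustrative assumptions; in practice, the cutoff is validated statistically on sampled data rather than hard-coded.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed documents coded by subject matter experts (1 = responsive).
seed_docs = ["pricing discussion with the regulator", "lunch plans for friday"]
seed_labels = [1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the rest of the corpus by likelihood of responsiveness.
corpus = ["notes from the regulator meeting", "fantasy football picks"]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
CUTOFF = 0.5  # illustrative; set via statistical validation in real matters
for doc, score in zip(corpus, scores):
    decision = "human review" if score >= CUTOFF else "no review"
    print(f"{score:.2f}  {decision}:  {doc}")
```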
Modern Data Challenges

While both TAR 1.0 and TAR 2.0 are still widely used in eDiscovery today, the data landscape looks drastically different than it did when TAR first made its debut. Smartphones, social media applications, ephemeral messaging systems, and cloud-based collaboration platforms, for example, did not exist twenty years ago but are all commonly used within organizations for communication today. This new technology generates vast amounts of complicated data that, in turn, must be collected and analyzed during litigations and investigations.

Aside from the new variety of data, the volume and velocity of modern data are also significantly different than they were twenty years ago. For instance, the amount of data generated, captured, copied, and consumed worldwide in 2010 was just two zettabytes. By 2020, that volume had grown to 64.2 zettabytes.[1]

Despite this modern data revolution, litigation teams are still using the same machine learning technology to perform TAR as they did when it was first introduced over a decade ago—and that technology was already more than a decade old back then. TAR as it currently stands is not built for big data—the extremely large, varied, and complex modern datasets that attorneys must increasingly deal with when handling discovery requests. These simple AI systems cannot scale the way more advanced forms of AI can in order to tackle large datasets. They also lack the ability to take context, metadata, and modern language into account when making coding predictions. The snail's pace of the evolution of TAR technology, in the face of the lightning-fast evolution of modern data, is quickly becoming a problem.

The Future of TAR

The solution to the challenge of modern data lies in updating TAR workflows to include a variety of more advanced AI technologies, together with bringing on technology experts and linguists to help wield them. To begin with, for TAR to remain effective in a modern data environment, it is necessary to incorporate tools that leverage more advanced subsets of AI, such as deep learning and natural language processing (NLP), into the TAR process. In contrast to simple machine learning (which can only analyze the text of a document), newer tools leveraging more advanced AI can analyze metadata, context, and even the sentiment of the language used within a document. Additionally, bringing in linguists and experienced technologists to expertly handle massive data volumes allows attorneys to focus on the actual substantive legal issues at hand, rather than attempting to become an eDiscovery Frankenstein (i.e., a lawyer, a data scientist, a technology expert, and a linguistic expert all rolled into one).

This combination of advanced AI technology and expert service will enable litigation teams to reinvent data review to make it more feasible, effective, and manageable in a modern era. For example, because more advanced AI is capable of handling large data volumes and looking at documents from multiple dimensions, technology experts and attorneys can start working together to put a system in place to recycle data and past attorney work product from previous eDiscovery reviews. This type of "data reuse" can be especially helpful in tackling the traditionally more expensive and time-consuming aspects of eDiscovery reviews, like privilege and sensitive information identification, and can also help remove large swaths of ROT (redundant, obsolete, or trivial data). When technology experts can leverage past data to train a more advanced AI tool, legal teams can immediately reduce the need for human review in the current case. In this way, the combination of advanced AI and expert service can reduce the endless "reinventing the wheel" that historically happens on each new matter.

Conclusion

The same cycle that brought technology into the discovery process is again prompting a new change in eDiscovery. The way people communicate, and the systems used to facilitate that communication at work, are changing, and current TAR technology is not equipped to handle that change effectively.
It's time to start incorporating more modern AI technology and expert services into TAR workflows to make eDiscovery feasible in a modern era.

To learn more about the advantages of leveraging advanced AI within TAR workflows, please download our white paper, "TAR + Advanced AI: The Future is Now." And to discuss this topic more, feel free to connect with me at smoran@lighthouseglobal.com.

[1] "Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025," https://www.statista.com/statistics/871513/worldwide-data-created/
November 11, 2019
Blog

Top Four Considerations for Law Firms When Choosing a SaaS eDiscovery Solution

"The world's most valuable resource is no longer oil, but data." That's what The Economist said in a fascinating opinion piece in 2017 that really stuck with me. This bold statement now seems more prescient than ever, as digital data continues to explode in volume and the advent of the cloud significantly expands where that valuable data, or electronically stored information (ESI), lives. So how has the legal world, and particularly all of us in the eDiscovery realm, fared?

While this data revolution developed, lawyers, as per usual, have been a bit slow to adapt. As we've grappled with how to manage the explosion of data, and as cloud storage has gone mainstream, new and advanced SaaS solutions tied to cloud-based technology have taken the rest of the world by storm. It was only a matter of time before corporate clients would get on board and make the move to the cloud, as is evident from the large majority of corporations that have transitioned to Office 365.

So as clients focus more and more on controlling their budgets and demanding more eDiscovery efficiency, shifting to modernized, cloud-based SaaS technology seems like a no-brainer for law firms. What's not to love about immediately eliminating the inefficiencies and manual tasks that accompany traditional eDiscovery workflows, and creating satisfied clients in the process?

In my previous two blogs, I discussed the top reasons why SaaS, self-service eDiscovery is exactly the right solution and the way of the future for law firms, as well as best practices for embracing the data revolution. In this blog, I wrap up my SaaS exploration and present the top things to consider when choosing the best and most versatile SaaS solution for your eDiscovery program.

1. Quick to Onboard - Software in the eDiscovery space has had a notoriously rocky road as far as simplicity and user-friendliness. Many iterations of self-service, on-prem software are too complicated and require training fit for advanced users only. Another missing piece of the puzzle has often been the lack of clear and consumable metrics on areas like billing, usage, ingestion, and processing stats, which are key to helping users make better decisions. With the new generation of eDiscovery technology, lawyers and litigation support professionals would most benefit from choosing a SaaS, self-service tool that's quick and easy to understand. Look for a tool where cases with multiple users can be easily managed across matters and locations, and where you can create, upload, and process matters quickly, all with a customizable reporting dashboard.

2. Access to Industry-Leading Tools - One of the biggest issues we've seen as eDiscovery software has evolved is the need for users of on-prem software to purchase, install, and maintain multiple tools and systems in order to have a comprehensive internal workflow that spans the EDRM. This is not only expensive, but time consuming and risky, considering the security implications that come with holding client data on your own servers. With SaaS, it's critical to choose a platform that provides access to all of the industry-leading tools, from processing to analytics to production, in one comprehensive tool that is purchased, maintained, and upgraded by the solution provider. Users will immediately see direct cost savings from not having to manage multiple systems themselves when they adopt this type of end-to-end SaaS solution.
3. Full-Service Support - Another important consideration when selecting a self-service SaaS tool is to choose a flexible solution provider who can scale up if your matter changes and you end up needing full-service support. While having a self-service tool allows for complete independence in key areas like processing and production, what happens when your matter gets much bigger than anticipated and the data is too unwieldy to handle in-house, or if your internal team simply needs to shift their focus to something else? In this case, it's critical to partner with a solution provider who has solid and experienced client support teams that can jump in any time you need help in your self-service journey.

4. Secure Infrastructure - Last but not least, in this age of data breaches, with cybersecurity at the top of the list of concerns for law firms and corporate clients alike, make sure you fully vet any SaaS tool you're considering by thoroughly researching the solution provider's back-end infrastructure. Look for vendors who have a scalable architecture for data processing and automation that you'll be able to take full advantage of, while eliminating the overhead that comes with infrastructure development and management on your end. That infrastructure should come with the peace of mind of security certifications such as SOC 2 and ISO 27001. You can also eliminate the concern that often comes with the security of a public cloud by choosing a solution provider that hosts data within their own private cloud or within their own data centers.

Ultimately, as the global economy continues to shift away from traditional commodities and lands squarely on data as its main driver, there's a world of opportunity ahead for the legal world and eDiscovery. With data already moved to the cloud for most companies, and their focus shifted to reducing expenses and risk, eDiscovery SaaS for law firms is a perfect fit.
October 12, 2021
Blog

What Attorneys Should Know About Advanced AI in eDiscovery: A Brief Discussion

What does artificial intelligence (AI) mean to you? In the non-legal space, AI has taken a prominent role, influencing almost every facet of our day-to-day life—from how we socialize, to our medical care, to how we eat, to what we wear, and even how we choose our partners.

In the eDiscovery space, AI has played a much more discreet but nonetheless important role. Its limited adoption so far is due, in part, to the fact that the legal industry tends to be much more risk-averse than other industries. The innate trust we have placed in more advanced forms of AI technology in the non-legal world to help guide our decision making has not carried over to eDiscovery—partly because attorneys often feel that they don't have the requisite technological expertise to explain the results to opposing counsel or judges. The result: most attorneys performing eDiscovery tasks are either not using AI technology at all or are using AI technology that is generations older than the technology currently being used in other industries. All this despite the fact that attorneys facing discovery requests today must regularly analyze mountains of complicated data under tight deadlines.

One of the most prominent roles AI currently plays in eDiscovery is within technology assisted review (TAR). TAR uses "supervised" machine learning algorithms to classify documents for responsiveness based on human input. This classification allows attorneys to prioritize the most important documents for human review and, often, reduce the number of documents that need to be reviewed by humans. TAR has proven especially helpful in HSR Second Requests and other matters with demanding deadlines. However, the simple machine learning technology behind TAR is already decades old and has not been updated, even as AI technology has significantly advanced. This older AI technology is quickly becoming incapable of handling modern datasets, which are infinitely more voluminous and complicated than they were even five years ago.

Because the legal industry is slower to adopt more advanced AI technology, many attorneys have a muddled view of what advanced AI technology exists, how it works, and how that technology can assist attorneys in eDiscovery today. That confusion becomes a significant detriment to modern attorneys, who must become more comfortable with adopting and utilizing the more advanced AI tools available today if they are to stand a chance of overcoming the increasingly complicated data challenges in eDiscovery. The confusion around AI can also lead to a vicious cycle that further slows down technology adoption in the legal space: attorneys who lack confidence in their ability to understand available AI technology resist adoption of that technology; that lack of adoption then puts them even further behind the technology learning curve as the technology continues to evolve.

This is where legal technology companies with dedicated technology services can help. A good legal technology company will have staff on hand whose entire job is to evaluate new technology and test its application and accuracy on modern datasets. Thus, an attorney who has no interest in becoming a technology expert just needs to be proficient enough to know the type of tools that might fit their needs—the right technology vendor can do the rest. Technology experts can also step in to provide detailed explanations of how the technology works to stakeholders, as well as verify the outcome for skeptical opposing counsel and judges.
Moreover, a good technology provider can also supply expert resources to perform much of the day-to-day utilization of the tool. In essence, a good legal technology vendor can become a trusted part of any attorney team—allowing attorneys to remain focused on the substantive legal issues they are facing.

With that in mind, it's important to "demystify" some common AI concepts used within the eDiscovery space and explain the benefits more advanced forms of AI technology can provide within eDiscovery. Once comfortable with the information provided here, readers can take a deeper dive into the advantages of leveraging advanced AI within TAR workflows in our full white paper, "TAR + Advanced AI: The Future is Now." Armed with this information, attorneys can begin a more thoughtful conversation with stakeholders and legal technology companies regarding how to move forward with more advanced AI technology within their own practice.

Demystifying AI Jargon in eDiscovery

At its most basic, AI refers to the science of making intelligent machines—ones that can perform tasks traditionally performed by human beings. AI is therefore a broad field that encompasses many subfields and branches. The most relevant to eDiscovery are machine learning, deep learning, and natural language processing (NLP). As noted above, the technology behind legacy TAR workflows is supervised machine learning, which uses human input to mimic the way humans learn, through algorithms that are trained to make classifications and predictions. In contrast, deep learning eliminates some of that human training by automating the feature extraction process, which enables it to tackle larger datasets. NLP is a separate branch of machine learning that can understand text in context (in effect, it can better understand language the way humans understand it).

The difference between the AI technology in legacy TAR workflows and more advanced AI tools lies in the fact that advanced AI tools use a combination of AI subsets and branches (machine learning, deep learning, and NLP), rather than just the supervised machine learning used in TAR.

Understanding the Benefits of Advanced AI

This combination of AI subsets and branches used in advanced AI tools provides additional capabilities that are increasingly necessary to tackle modern datasets. These tools not only utilize the statistical prediction that supervised machine learning produces (which enables traditional TAR workflows), but also include the language and contextual understanding that deep learning and NLP provide. Deep learning and NLP technology also enable more advanced tools to look at all angles of a document (including metadata, data source, recipients, etc.) when making a prediction, rather than relying solely on text. Taking all context into consideration is increasingly important, especially when making privilege predictions, where a document flagged for privilege triggers expensive attorney review. For example, with traditional TAR, the word "judge" in the phrase "I don't think the judge will like this!" on an email thread between two attorneys and in the phrase "Don't judge me!" on a chat thread with 60 people regarding a fantasy football league will be classified the same way—because, statistically, there is not much difference between how the word "judge" is placed within both sentences. However, newer tools that combine supervised machine learning with deep learning and NLP can learn the context of when the word "judge" is used as a noun (i.e., an adjudicator in a court of law) within an email thread with a small number of recipients versus when the word is being used as a verb on an informal chat thread with many recipients. The context of the data source, and how words are used, matters, and an advanced AI tool that leverages a combination of technologies can better understand that context.
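To see the noun-versus-verb distinction in action, here is a minimal sketch using spaCy's part-of-speech tagger. This is an off-the-shelf illustration of the linguistic point, not the tooling described in this post, and it assumes the small English model has been downloaded.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
for text in ["I don't think the judge will like this!", "Don't judge me!"]:
    doc = nlp(text)
    # Report the part of speech assigned to each occurrence of "judge".
    tags = [(token.text, token.pos_) for token in doc if token.lower_ == "judge"]
    print(tags, "<-", text)
# A pure keyword search scores both occurrences identically; a model that
# sees part of speech and context can treat them differently.
```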
Using Advanced AI with TAR

One common misconception regarding newer, more advanced AI tools is that old workflows and models must go out the window. This is simply not true. While there may be some changes to review workflows due to the added efficiency generated by advanced AI tools (the ability to conduct privilege analysis simultaneously with responsiveness analysis, for example), attorneys can still use the traditional TAR 1.0 and TAR 2.0 workflows they are familiar with in combination with more advanced AI tools. Attorneys can still direct subject matter experts or reviewers to code documents, and the AI tool will learn from those decisions and predict responsiveness, privilege, and other classifications.

The difference will be in the results. A more advanced AI tool's predictions regarding privilege and responsiveness will be more accurate due to its ability to take nuance and context into consideration, leading to lower review costs and more accurate productions.

Conclusion

Many attorneys are still hesitant to move away from the older AI eDiscovery tools they have used for the last decade. But today's larger, more complicated datasets require more advanced AI tools. Attorneys who fear broadening their technology toolbox to include more advanced AI may find themselves struggling to stay within eDiscovery budgets, spending more time on finding and less time strategizing—and possibly even falling behind on their discovery obligations.

But this fear and hesitancy can be overcome with education, transparency, and support from legal technology companies. Attorneys should look for the right technology partner who not only offers access to more advanced AI tools, but also provides implementation support and expert advisory services to help explain the technology and results to other stakeholders, opposing counsel, and judges.

To learn more about the advantages of leveraging advanced AI within TAR workflows, download our white paper, "TAR + Advanced AI: The Future is Now." And to discuss this topic more, feel free to connect with me at smoran@lighthouseglobal.com.
October 19, 2022
Blog

To Reinvigorate Your Approach to Big Data, Catch the Advanced AI Wave

Emerging challenges with big data—large sets of structured or unstructured data that require specialized tools to decipher—have been well documented, with one widely cited estimate putting worldwide data creation at around 175 zettabytes annually by 2025. However, these challenges present an opportunity for innovation. Over the past few years, we've seen a renaissance in AI products and solutions to help address and evolve past these issues. From smaller players creating bespoke algorithms to bigger technology companies developing solutions with broader applications, there are substantial opportunities to harness AI and rethink how to manage data.

Microsoft's recent announcement of Syntex highlights the immense possibilities for, and investment in, leveraging AI to manage content and augment human expertise and knowledge. The new feature in Microsoft 365 promises advanced AI and automation to classify and extract information, process content, and help enforce security and compliance policies. But what do new solutions like this mean for eDiscovery and the legal industry? There are three key AI benefits reshaping the industry you should know about:

1. Meeting the challenges of cloud and big data
2. Transforming data strategies and workflows
3. Accelerating through automation

Meeting the challenges of cloud and big data

Anyone close to a recent litigation or investigation has witnessed the challenge posed by today's explosion of data—not just its volume, but its variety, speed, and uncertainty. To meet this challenge, traditional approaches to eDiscovery need to be updated with more advanced analytics so teams can first make sense of data and then strategize from there. Alongside the need to analyze post-export documents, it's also clear that proactively managing an organization's data is increasingly essential. Organizations across all industries must comply with an increasingly complex web of data privacy and retention regulations. To do so, it is imperative that they understand what data they are storing, map how that data flows throughout the organization, and have rules in place to govern the classification, deletion, retention, and protection of data that falls within regulated categories. However, the rise of new collaboration platforms, cloud storage, and hybrid working has introduced new levels of data complexity and put pressure on information governance and compliance practices—making older, manual information governance workflows impractical. Leveraging automation and analytics driven by AI moves teams from a reactive to a proactive posture. For example, teams can use advanced AI to automate classification: the system reads documents entering the organization's digital ecosystem, classifies them, and labels them according to the sensitivity or retention categories the organization has implemented—all organized under a taxonomy that can be searched later. This not only helps an organization better manage data and risk upfront—creating a more complete picture of the organization's data landscape—but also informs better and more efficient strategies downstream.
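As a rough illustration of that classify-and-label pattern, here is a minimal sketch. The rules, category names, and retention values are hypothetical—this is not Syntex's or any vendor's API:

```python
# Hypothetical sketch of automated classification and retention labeling.
# The patterns and policy values below are illustrative assumptions only.
import re
from dataclasses import dataclass, field

# Detected category -> (label, assumed retention rule)
POLICY = {
    "pii": ("sensitive-pii", "dispose-per-privacy-policy"),
    "contract": ("contract", "retain-10-years"),
}

@dataclass
class Document:
    doc_id: str
    text: str
    labels: list = field(default_factory=list)

def classify_and_label(doc: Document) -> Document:
    """Read an incoming document, classify it, and attach policy labels."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", doc.text):               # crude SSN pattern
        doc.labels.append(POLICY["pii"])
    if re.search(r"\b(term sheet|governing law|indemnif)", doc.text, re.I):
        doc.labels.append(POLICY["contract"])
    return doc

doc = classify_and_label(Document("001", "SSN 123-45-6789; governing law: Delaware"))
print(doc.labels)   # both labels attach; the document is now searchable by label
```

The point is the shape of the pipeline—classify on entry, label under a searchable taxonomy—rather than the toy rules themselves.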
Transforming data strategies and workflows

New AI capabilities give legal and data governance teams the freedom to think more holistically about their data and develop strategies and workflows that address their most pressing challenges. For eDiscovery, this does not necessarily mean discarding legacy workflows (such as those built on TAR) that have proven valuable, but rather augmenting them with advanced AI, such as natural language processing or deep learning, which can handle greater data complexity and provide insights for approaching a matter with greater agility. But the rise of big data means that legal teams need to start thinking about the eDiscovery process more expansively. An effective eDiscovery program needs to start long before data collection for a specific matter or investigation and should contemplate the entire data life cycle. Otherwise, you will waste substantial time, money, and resources trying to search and export insurmountable volumes of data for review. You will also find yourself increasingly at risk of court sanctions and prolonged eDiscovery battles if your team is unprepared or ill-equipped to find and properly export, review, and produce the requested data within the required timeline. For compliance and information governance teams, this proactive approach to data has even greater implications, since the data they're handling is not restricted to specific matters. In both cases, AI can be leveraged to classify, organize, and analyze data as it emerges—which not only keeps it under control but also gives quicker access to vital information when teams need it during a matter.

Advanced AI can be applied to analyze and organize data created and held by specific custodians who are likely to be pulled into litigation or investigations, giving eDiscovery teams an advantage when starting a matter. Similarly, sensitive or proprietary information can be collected, organized, and searched far more seamlessly so teams don't waste time or resources when a matter emerges. This allows more time for case development and better strategic decisions early on.

Accelerating through automation

Data growth shows no signs of slowing, underscoring the need for data governance systems that are scalable and automated. Without them, organizations run the risk of expending valuable resources on continually updating programs to keep pace with data volumes and reanalyzing their key information. The best solutions allow experts in your organization to refine and adjust data retention policies and automation as the organization's data evolves and regulations change. In today's cloud-based world, automation is a necessity. For example, a patchwork of global and local data privacy regulations (GDPR, California's CCPA, etc.) includes restrictions related to the timely disposal of personal information after the business use for that data has ended. However, those restrictions often conflict with, or are triggered alongside, industry regulations that require companies to keep certain types of documents and data for specific periods of time. When you factor in the dynamic, voluminous, and complex cloud-based data infrastructure that most companies now work within, it becomes obvious why a manual, employee-based approach to categorizing data for retention and disposal is no longer sustainable. AI automation can identify personal information as it enters the company's system, immediately classify it as sensitive data, and label it with specific retention rules. This type of automation not only keeps organizations compliant, it also enables legal and data governance teams to support their organization's growth—whether through new products, services, or acquisitions—while keeping data risk at bay.
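To make the disposal-versus-retention conflict concrete, here is a toy reconciliation rule. All dates, windows, and parameter names are assumptions for illustration, not legal guidance or a real product's logic:

```python
# Toy reconciliation of a privacy disposal window with an overriding
# industry retention period -- all values below are assumed examples.
from datetime import date, timedelta
from typing import Optional

def disposal_date(business_use_ended: date,
                  privacy_window_days: int,
                  industry_hold_until: Optional[date]) -> date:
    """Dispose after the privacy window, but never before an industry hold lapses."""
    earliest = business_use_ended + timedelta(days=privacy_window_days)
    return max(earliest, industry_hold_until) if industry_hold_until else earliest

# A record whose business use ended mid-2023 but sits under a hold until 2027:
print(disposal_date(date(2023, 6, 30), 90, date(2027, 1, 1)))   # 2027-01-01
```

Automating this kind of rule per record is exactly the work that becomes unsustainable when done manually at cloud-data volumes.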
Conclusion

Advancements in AI are providing more precise and sophisticated solutions for the unremitting growth in data—if you know how to use them. For legal, data governance, and compliance teams, there are substantial opportunities to harness the robust creativity in AI to better manage, understand, and deploy data. Rather than being inhibited by endless data volumes and inflexible systems, these teams can use AI to put their expertise to work and ultimately do better at the work that matters.
October 15, 2019
Blog

Three Reasons Why Law Firms Should Adopt SaaS for eDiscovery

Lawyers, and the legal field in general, are not exactly known for their willingness to embrace new technology and change the tried-and-true, traditional ways they've always practiced law. But as technology has become the norm across most industries, there's no time like the present for lawyers and litigation support professionals to take a second look at the best and newest eDiscovery technology—technology that can transform their business and, in turn, create happier clients who are laser focused on reducing costs and increasing efficiency.

Like cassette tapes and the beloved Walkman, the old model of managing your own IT infrastructure and utilizing on-prem review platforms is becoming a thing of the past. This calls to mind other eDiscovery relics we knew and loved… not to call anyone out, but dare I mention Concordance or Clearwell, in case you're still using them?

In a changing technology landscape where most clients are moving (or have moved) their data to the cloud, it's a perfect match to also modernize your law firm's eDiscovery program and adopt a self-service model that works seamlessly with data stored in the cloud and delivers less risk and more benefits for both you and your clients. Just what are those benefits? Here are three reasons why adopting new technology and moving to a SaaS eDiscovery solution will bring added efficiency, more billable hours, and happier clients.

Eliminate the Risk and Expense of Managing Your Own IT Infrastructure - For law firms, managing an IT infrastructure and maintaining servers for the purpose of hosting client data is expensive and involves a large amount of risk. Electronic data has become overwhelmingly voluminous, and data types have become far more complex than when law firms first got into this business and were primarily dealing with email. Think of mobile devices, chat data, ephemeral communication, etc. as just the tip of the iceberg. With cybersecurity a top concern for corporations, it's fair to say that law firms probably never meant to take on the risk that comes with managing a complex IT infrastructure for their clients. With a modern, self-service SaaS solution at their fingertips, law firms can lower costs and transfer the risk of hosting client data to the SaaS solution provider.

Using SaaS Review Platforms Improves Client Services - Not only will a SaaS solution relieve the security and risk burden, it will improve client services—a win-win for the firm and the client. Although on-prem review platforms are what law firms have typically used, a SaaS platform reduces costs and improves efficiency. With an on-prem solution, license fees and infrastructure maintenance fees generally create out-of-pocket costs with no cost recovery mechanism. Moving to a SaaS solution introduces new ways to recover costs and makes solving substantive client concerns the primary job, rather than wrestling with the inefficiencies of maintaining an on-prem solution. To make the process of implementing a SaaS solution much easier, it's important to note at this stage that building a business case and getting senior management on board with upgrading to a SaaS solution is critical.
That way, all parties understand the benefits to both the firm and its clients and are on the same page about making the change.

Upgrading to SaaS Allows Firms to Provide the Latest Technology to Clients - Wouldn't it be amazing if you could easily and quickly upgrade your eDiscovery technology and always provide the latest and greatest to clients? Moving to a SaaS platform immediately provides this benefit, as the service provider maintains the infrastructure and makes technology upgrades behind the scenes for you. If you're feeling nervous that some portions of internal work will disappear with this eDiscovery model, in fact the opposite is true. This isn't a threat to the traditional litigation support model. It will instead allow for a greater focus on more valuable and strategic work, while a solid partnership is established with the trusted service provider who runs the infrastructure of the SaaS platform and works alongside you.

If your primary goal is to create efficiency, lower costs, and ultimately make your clients happy, now's the time to take your eDiscovery program to the next level and adopt a self-service SaaS solution. You'll have a modernized eDiscovery platform that allows independent access and control to process, review, and produce data, while removing the risk and cost that comes with managing an IT infrastructure.
June 20, 2023
Blog

Three Ways to Use eDiscovery Technology to Reduce Repeated Review

Minimizing Re-Review
By now, legal teams facing discovery are aware of many of the common technology and technology-enabled workflows used to increase the efficiency of document review on a single matter. But as data volumes grow and legal budgets shrink, legal teams must begin to think beyond a "matter-by-matter" approach. They must start applying technology more innovatively to create efficiencies across matters and minimize the burden of reviewing the same documents again and again. Fortunately, many common technology-enabled review workflows (e.g., technology assisted review (TAR), advanced search guided by linguistic experts, and AI-powered review analytics) can help teams apply work product and insights from past matters to current and future matters. This not only saves time but also increases consistency and lowers the risk of inadvertent disclosures and cumbersome clawbacks.

The opportunity to reduce repeated review is quite large, both because the problem is rampant and because the technology that can help solve it is underutilized. A 2022 survey by the ABA showed that "predictive coding" is the least common application of eDiscovery software, used by only one in five law firms. In fact, 73% of respondents said they don't know what predictive coding is (we explain it below). As document review continues to grow in complexity, and budget and other constraints apply pressure from other directions, more organizations should consider taking advantage of everything that technology has to offer.

Repeated review is a large and familiar burden

Repeated review is baked into the status quo. Matters spanning multiple jurisdictions, civil litigations tied to government investigations, and matters involving the same or related IP are just a few examples in which the same documents could come up for review multiple times. Instead of looking across matters holistically, legal teams often feel obligated to roll up their sleeves, lower their heads, and review the same documents all over again—even when relevancy overlaps and for categories of information that remain relatively static across matters (privilege, trade secret, personally identifiable information (PII), etc.). This has obvious consequences for time and cost. The time invested in reviewing documents for privilege in a current matter, for example, becomes time saved on future matters involving those same documents. Risk is a factor as well. A document classified as privileged, or as containing PII or another sensitive category, in one matter should be classified the same way in the next one. But without a record of past matters, attorneys start over from scratch each time, which opens the door to inconsistency. And while it's certainly possible to undo the mistake of producing sensitive documents, it can be quite time-consuming and expensive.

Rejecting the status quo

While the burden and risks associated with repeated review are felt every day, few legal teams and professionals are searching for a solution. Those willing to look beyond the status quo, however, will see that repeated review isn't actually necessary, at least not to the degree that it's done today. We also find that the keys to reducing repeated review lie in technology that many teams already use or have access to.

Reusing work product from TAR and CAL workflows

TAR 1.0, TAR 2.0, and Continuous Active Learning (CAL) workflows use machine learning technology to search and classify documents based on human input and their own ability to learn and recognize patterns.
This is called predictive coding, and it's most often used to prioritize responsive documents for human review. The parameters for responsiveness change with the topics of each matter, so it's not always possible to reuse those classifications on other matters. But TAR and CAL tools can also be effective at making classifications around privilege, PII, and junk documents, which are not redefined from matter to matter. If a document was junk last time (say, company logos attached to emails, blank attachments, etc.), it's going to be junk this time too. Reusing these classifications made by technology on one matter can therefore save legal teams even more time in the future.

Refining review with linguistic experts

Linguistic experts add an extra layer of nuance to document review technology that makes it more precise and effective at classification. They develop complex criteria, based on intricate rules of syntax and language, to search and identify documents in a more targeted way than TAR and CAL tools. They can also help reduce repeated review by conducting bespoke searches informed by past matters. This process is more hands-on than using TAR and CAL tools; human linguists take lessons learned from one matter and incorporate them into their work on a related matter. It's also more refined, so it can help in ways that TAR and CAL tools can't. Litigation related to off-label drug use offers a good example. A company might have multiple matters tied to different drugs, making relevance unique for each matter. In this scenario, linguistic experts can identify linguistic markers that show how sales reps communicated with healthcare providers within that company. Then, when the next off-label document review project begins, documents with those identifiers can be segregated for faster review. In this way, work from linguistic experts on one matter can improve efficiency and minimize first-level review work on new matters.

Apply learnings across matters using AI

Review tools built on AI can reduce repeated review by classifying documents based on how they were classified before. AI tools can act as a "central mind" across matters, using past decisions on company data to make highly precise classifications on new matters. The more matters the AI is used on, the more precise its classifications become. The beauty here is that it applies to any amount of overlap across matters: the AI will recognize any documents that it has reviewed previously and will resurface their past classifications. Some AI tools can even retain the decision on past documents and associate it with a unique hash value, so that they can tell reviewers how the same or similar documents were coded in previous matters—without the concern of over-retaining documents from past matters.
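As a simple illustration of that hash-keyed reuse (a hypothetical sketch, not any platform's actual implementation), past coding decisions can be stored against a content hash rather than against the document itself:

```python
# Hypothetical sketch: store coding decisions keyed by a content hash so a
# decision can be resurfaced on a later matter without retaining the document.
import hashlib

past_decisions: dict[str, str] = {}            # sha256 hex digest -> decision

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def record_decision(text: str, decision: str) -> None:
    past_decisions[content_hash(text)] = decision

def prior_decision(text: str) -> str | None:
    return past_decisions.get(content_hash(text))

record_decision("<corporate logo attachment bytes>", "junk")   # coded on matter 1
print(prior_decision("<corporate logo attachment bytes>"))     # matter 2: "junk"
```

Because only the digest and the decision are kept, identical documents surface their prior coding on the next matter while the underlying content need not be retained.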
Curious to challenge your status quo?

TAR, AI, and other solutions can be invaluable parts of a legal team's effort to curb repeated review—but they're not the only part. In fact, the most important factor is a team's mindset. It takes forethought and commitment to depart from the status quo, especially when it involves unfamiliar tools or strategies. The benefits can be profound, and the road to achieving them may be more accessible than you think. Find tips for starting small, as well as more information about how and why to address the burden of repeated review, in our deep dive on the subject.
November 20, 2020
Blog

The Sinister Six…Challenges of Working with Large Data Sets

Collectively, we have sent an average of 306.4 billion emails each day in 2020. Add to that 23 billion text messages and other messaging apps, and you get roughly 41 million messages sent every minute[1]. Not surprisingly, there have been at least one or two articles written about expanding data volumes and the corresponding impact on discovery. I've also seen the occasional post discussing how the methods by which we communicate are changing and how "apps that weren't built with discovery in mind" are now complicating our daily lives. I figured there is room for at least one more big data post. Here I'll outline some of the specific challenges we'll continue to face in our "new normal," all while teasing what I'm sure will be a much more interesting post that gets into the solutions that will address these challenges. Without further delay, here are six challenges we face when working with large data sets, and some insights into how we can address them through data re-use, AI, and big data analytics:

Sensitive PII / SHI - The combination of expanding data volumes, data sources, and increasing regulation covering the transmission and production of sensitive personally identifiable information (PII) and sensitive health information (SHI) presents several unique challenges. Organizations must be able to quickly respond to Data Subject Access Requests (DSARs), which requires that they be able to efficiently locate and identify data sources that contain this information. When responding to regulatory activity or producing in the course of litigation, redaction of this content is often required. For example, DOJ second requests require the redaction of non-responsive sensitive PII and/or SHI prior to production. For years, we have relied on solutions based on regular expressions (RegEx) to identify this content; while useful, these solutions provide somewhat limited accuracy (see the simple sketch below). With improvements in AI and big data analytics come new approaches to identifying sensitive content, both at the source and further downstream during the discovery process. These improvements will establish a foundation for increased accuracy, as well as the potential for proactively identifying sensitive information as opposed to looking for it reactively.
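The RegEx approach can be sketched in a few lines, and the sketch also shows where its accuracy breaks down. The pattern below is an illustrative assumption; real pipelines layer many patterns plus validation:

```python
# Minimal RegEx-based PII detection, and why its accuracy is limited: the same
# pattern that catches a formatted SSN also fires on lookalike strings
# (false positives) and misses unformatted variants (false negatives).
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

print(bool(SSN.search("Applicant SSN: 123-45-6789")))   # True  (correct hit)
print(bool(SSN.search("Invoice no. 123-45-6789")))      # True  (false positive)
print(bool(SSN.search("SSN 123456789, no dashes")))     # False (false negative)
# AI/analytics approaches add context (surrounding words, document type, data
# source) to reduce exactly these two error modes.
```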
As we’ve seen the rise in predictive analytics, and, for those that have adopted it, a substantial rise in efficiency and positive impact on discovery costs, the identification of privileged content has remained largely an effort centered on search terms and manual review. This has started to change in recent years as solutions become available that promise a similar output to TAR-based responsiveness workflows. The challenge with privilege is that the identification process relies more heavily on “who” is communicating than “what” is being communicated. The primary TAR solutions on the market are text-based classification engines that focus on the substantive portion of conversations (i.e. the “what” portion of the above statement). Improvments in big data analytics mean we can evaluate document properties beyond text to ensure the “who” component is weighted appropriately in the predictive engine. This, combined with the potential for data re-use supported through big data solutions, promises to substantially increase our ability to accurately identify privileged, and not privileged, content.Responsiveness - Predictive coding and continuous active learning are going to be major innovations in the electronic discovery industry…would have been a catchy lead-in five years ago. They’re here, they have been here, and adoption continues to increase, yet it’s still not at the point where it should be, in my opinion. TAR-based solutions are amazing for their capacity to streamline review and to materially impact the manual effort required to parse data sets. Traditionally, however, existing solutions leverage a single algorithm that evaluates only the text of documents. Additionally, for the most part, we re-create the wheel on every matter. We create a new classifier, review documents, train the algorithm, rinse, and repeat. Inherent in this process is the requirement that we evaluate a broad data set - so even items that have a slim to no chance of being relevant are included as part of the process. But there’s more we can be doing on that front. Increases in AI and big data capabilities mean that we have access to more tools than we did five years ago. These solutions are foundational for enabling a world in which we continue to leverage learning from previous matters on each new future matter. Because we now have the ability to evaluate a document comprehensively, we can predict with high accuracy populations that should be subject to TAR-based workflows and those that should simply be sampled and set aside.Key Docs - Variations of the following phrase have been uttered time and again by numerous people (most often those paying discovery bills or allocating resources to the cause), “I’m going to spend a huge amount of time and money to parse through millions of documents to find the 10-20 that I need to make my case.” They’re not wrong. The challenge here is that what is deemed “key” or “hot” in one matter for an organization may not be similar to that which falls into the same category on another. Current TAR-based solutions that focus exclusively on text lay the foundation for honing in on key documents across engagements involving similar subject matter. 
Big data solutions, on the other hand, offer the capacity to learn over time and to develop classifiers, based on more than just text, that can be repurposed at the organizational and, potentially, industry level.

Risk - Whether related to sensitive, proprietary, or privileged information, every discovery effort utilizes risk-mitigation strategies in some capacity. This quite obviously extends to source data, with increasing emphasis on comprehensive records management, data loss prevention, and threat management strategies. Improvements in our ability to accurately identify and classify these categories during discovery can have a positive impact on left-side EDRM functional areas as well. Organizations are challenged not only with identifying this content through the course of discovery, but also with understanding where it resides at the source and ensuring that they have appropriate mechanisms to identify, collect, and secure it. Advances in AI and big data analytics will enable more comprehensive discovery programs that leverage the identification of these data types downstream to improve upstream processes.

As I alluded to above, these big data challenges can be addressed with the use of AI, analytics, data re-use, and more. Now that I have summarized some of the challenges many of you are already tasked with handling day-to-day, you can learn more about actual solutions to them. Check out my colleague's write-up on how AI and analytics can help you gain a holistic view of your data. To discuss this topic more or to ask questions, feel free to reach out to me at NSchreiner@lighthouseglobal.com.

[1] Metrics courtesy of Statista
December 17, 2020
Blog

TAR Protocols 101: Avoiding Common TAR Process Issues

A recent conversation with a colleague on Lighthouse's Focus Discovery team resonated with me – we got to chatting about TAR protocols and the evolution of TAR, analytics, and AI. It was only five years ago that people were skeptical of TAR technology, and all the discussions revolved around understanding TAR and AI technology. That has shifted to needing to understand how to evaluate the process of your own team or of opposing counsel's production. Although an understanding of TAR technology can help in that task, it is not enough on its own to evaluate items like the parity of sample document types, the impact of using production data versus one's own data, and the type of seed documents. That discussion prompted me to grab one of our experts, Tobin Dietrich, to discuss the cliff notes of how one should evaluate a TAR protocol. It is not uncommon for lawyers to receive a technology assisted review methodology from producing counsel – especially in government matters, but also in civil matters. In the vein of the typical law school course, this blog will teach you how to issue-spot if one of those methodologies comes across your desk. Once you've spotted the issues, bringing in the experts is the right next step.

Issue 1: Clear explanation of technology and process. If the party cannot name the TAR tool or algorithm they used, that is a sign there is an issue. Similarly, if they cannot clearly describe their analytics or AI process, this is a sign they do not understand what they did. Given that the technology was trained by this process, this lack of understanding is an indicator that the output may be flawed.

Issue 2: Document selection – how and why. In the early days of TAR, training documents were selected fairly randomly. We have evolved to a place where people are being choosy about what documents they use for training. This is generally a positive thing but does require you to think about what may be over- or under-represented in the opposing party's choice of documents. More specifically, this comes up in three ways:

Number of documents used for training. A TAR system needs to understand what responsive and non-responsive look like, so it needs to see many examples in each category to approach certainty in its categorization. Using too small a sample (e.g., 100 or 200 documents) risks causing the TAR system to categorize incorrectly. Although a system can technically build a predictive model from a single document, it will only effectively locate documents that are very similar to the starting document. A typical document corpus is not so uniform that a single-document predictive model can be relied upon.

Types of seed documents. It is important to use a variety of documents in the training. The goal is to have the inputs represent the conceptual variety in the broader document corpus. Using another party's production documents, for example, can be very misleading for the system: the vocabulary used by other parties is different, the people are different, and the concepts discussed are very different. This can lead to incorrect categorization of documents. Production data, specifically, can also add confusion through the presence of Bates or confidentiality stamps. If the types of seed/training documents used do not mirror the types of documents expected from the document corpus, you should be suspicious.

Parity of seed document samples.
Although you do not need anything approaching perfect parity between responsive and non-responsive documents, using 10x the number of non-responsive versus responsive documents can be problematic. This kind of disparity can distort the TAR model, and it can also exacerbate either of the above issues – the number or type of seed documents.

Issue 3: How is performance measured? People throw around common TAR metrics like recall and precision without clarifying what they are referring to. You should always be able to tell what population of documents these statistics relate to. Also, don't skip over precision. People often present recall as sufficient, but precision can provide important insight into the quality of model training as well (see the short worked example below).

By starting with these three areas, you should be able to flag some of the more common issues in TAR processes and either avoid them or ask for them to be remedied.
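To make Issue 3 concrete, here is a short worked example with invented numbers:

```python
# Worked example with invented numbers: a model flags 120,000 documents out of
# a corpus in which 100,000 are actually responsive.
true_positives  = 85_000   # flagged and actually responsive
false_negatives = 15_000   # responsive but not flagged
false_positives = 35_000   # flagged but not responsive

recall    = true_positives / (true_positives + false_negatives)   # 0.85
precision = true_positives / (true_positives + false_positives)   # ~0.71
print(f"recall={recall:.0%}, precision={precision:.0%}")          # recall=85%, precision=71%
```

Note how both numbers only make sense relative to a stated population: the same model reported against a different corpus, or against a sample, would yield different figures, which is why you should always ask what population the statistics describe.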
February 5, 2021
Blog

TAR 2.0 and the Case for More Widespread Use of TAR Workflows

Cut-off scores, seed sets, training rounds, confidence levels – to the inexperienced, technology assisted review (TAR) can sound like a foreign language and seem just as daunting. Even legal professionals who have experience with the traditional TAR 1.0 model may find the process too rigid to be useful for anything other than large data volumes with pressing deadlines (such as HSR Second Requests). However, TAR 2.0 models are not limited by the inflexible workflow imposed by the traditional model and require less upfront time investment to realize substantial benefits. In fact, TAR 2.0 workflows can be extremely flexible and helpful for myriad smaller matters and non-traditional projects, including everything from initial case assessment and key document review to internal investigations and compliance reviews.

A Brief History of TAR

To understand the various ways that TAR 2.0 can be leveraged, it helps to understand the evolution of the TAR model, including typical objections and drawbacks. Frequently referred to as predictive coding, TAR 1.0 was the first iteration of these processes. It follows a more structured workflow and is what many people think of when they think of TAR. First, a small team of subject-matter experts trains the system by reviewing control and training sets, tagging documents based on their experience with and knowledge of the matter. The control set provides an initial estimate of overall richness and establishes the baseline against which the iterative training rounds are measured. Through the training rounds, the machine develops the classification model. Once the model reaches stability, scores are applied to all the documents based on the likelihood of relevance, with higher scores indicating a higher likelihood of relevance. Using statistical measures, a cutoff score is determined and validated, above which the desired measure of relevant documents will be included. The remaining documents below that score are deemed not relevant and will not require any additional review.
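As a rough illustration of how a cutoff score works, here is a simplified sketch with assumed inputs; real TAR validation involves richer statistics than this:

```python
# Simplified cutoff selection from a validated sample of (score, is_relevant)
# pairs -- illustrative only, not how any particular platform validates.
def choose_cutoff(scored_sample, target_recall=0.80):
    """Return the lowest score that still captures target_recall of the
    relevant documents in the sample."""
    relevant_scores = sorted((s for s, rel in scored_sample if rel), reverse=True)
    k = max(1, int(len(relevant_scores) * target_recall))
    return relevant_scores[k - 1]           # score of the k-th highest relevant doc

sample = [(0.95, True), (0.90, True), (0.70, True), (0.40, True),
          (0.60, False), (0.20, False)]
print(choose_cutoff(sample, 0.75))          # 0.70: docs scoring >= 0.70 capture 3 of 4
```

Everything below the chosen score is set aside, which is where TAR 1.0's review savings come from.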
Although the TAR 1.0 process can ultimately result in a large reduction in the number of documents requiring review, some elements of the workflow can be substantial drawbacks for certain projects. The classification model is most effectively developed from accurate and consistent coding decisions throughout the training rounds, so the subject-matter experts conducting the review are typically experienced attorneys who know the case well. These attorneys will likely have to review and code at least a few thousand documents, which can be expensive and time consuming. This training must also be completed before other portions of the document review, such as privilege or issue coding, can begin. Furthermore, if more documents are added to the review set after the model reaches stability (think a refresh collection or a late-identified custodian), the team will need to resume the training rounds to bring the model back to stability for the newly introduced documents. For these reasons, the traditional TAR 1.0 model is somewhat inflexible and best suited for matters where the data is available upfront and not expected to change over time (i.e., no rolling collections), so that the large number of documents excised from the more costly document review portion of the project offsets the upfront effort spent training the model.

TAR 2.0, also referred to as continuous active learning (CAL), is a newer workflow (although it has been around for a number of years now) that provides more flexibility. Using CAL, the machine also learns as the documents are being reviewed; however, the initial classification model can be built with just a handful of coded documents. This means the review can begin as soon as any data is loaded into the database, and it can be done by a traditional document review team right from the outset (i.e., there is no highly specialized "training" period). As documents are reviewed, the classification model is continuously updated, as are the scores assigned to each document. Documents can be added to the dataset on a rolling basis without having to restart any portion of the project; the new documents are simply incorporated into the developing model. These differences make TAR 2.0 well suited to a wider variety of cases and workflows than the traditional TAR 1.0 model.

TAR 2.0 Workflow Examples

One of the most common TAR 2.0 workflows is a "prioritization review," wherein the highest-scoring documents are pushed to the front of the review. As the documents are reviewed, the model is updated and the documents are rescored. This continuous loop allows the most up-to-date model to identify which documents should be reviewed next, making for an efficient review process with several benefits. The team reviews the most likely relevant, and perhaps most important, documents first, which can be especially helpful when there are short timeframes within which to begin producing documents. While all documents can certainly be reviewed, this workflow also provides the means to establish a cutoff point (similar to TAR 1.0) beyond which no further review is necessary. In many cases, when the review reaches a point where few relevant documents are found relative to the number of documents being reviewed, this point of diminishing returns signals the opportunity to cease further review. The prioritization review can also be very effective with incoming productions, allowing the system to identify the most relevant or useful documents.
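The prioritization loop can be sketched in a few lines. This is a toy version assuming scikit-learn is available, not any review platform's actual engine:

```python
# Toy continuous-active-learning prioritization loop, assuming scikit-learn.
# A real review platform's internals are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_prioritization(docs, reviewer, seed_labels, batch_size=10, rounds=5):
    """docs: list of document texts; reviewer: callable returning 0/1 for a doc;
    seed_labels: {index: 0/1} initial coding (must include both classes)."""
    X = TfidfVectorizer().fit_transform(docs)
    coded = dict(seed_labels)
    for _ in range(rounds):
        model = LogisticRegression().fit(X[list(coded)], list(coded.values()))
        scores = model.predict_proba(X)[:, 1]               # P(responsive)
        todo = [i for i in range(len(docs)) if i not in coded]
        if not todo:
            break
        # push the highest-scoring unreviewed documents to the front of review
        batch = sorted(todo, key=lambda i: scores[i], reverse=True)[:batch_size]
        for i in batch:
            coded[i] = reviewer(docs[i])                    # human coding decision
    return coded                                            # model retrains each loop
```

The key property is in the loop itself: every batch of human decisions immediately reshapes the scores, so new documents added to `docs` simply join the next rescoring rather than forcing a restart.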
An alternative TAR 2.0 workflow is the "coverage" or "diverse" review model. In this model, rather than reviewing the highest-scoring documents first, the review team focuses on documents in the middle-scoring range. The point of a diverse review model is to focus on what the machine doesn't know yet; reviewing the middle range of documents further trains the system. In this way, a coverage review provides the team with a wide variety of documents within the dataset. When using this workflow for production reviews, the goal is to end up with the documents separated between those likely relevant and those likely not relevant. This is similar to the TAR 1.0 workflow, as the desired outcome is to identify the relevant document set as quickly and directly as possible without reviewing all of the documents. To illustrate, a model will typically begin with a bell-shaped curve of document scores across the scoring spectrum; this workflow seeks to end with two distinct sets, one relevant and the other not relevant.

These workflows can be extremely useful for initial case assessments, compliance reviews, and internal investigations, where the end goal of the review is not to quickly find and produce every relevant document. Rather, the review in these types of cases is focused on gathering as much relevant information as possible or finding a story within the dataset. These reviews are generally more fluid and can change significantly as the review team finds more information within the data. New information found by the review team may lead to more data collections or a change in custodians, which can significantly change the dataset over time (something TAR 2.0 can handle but TAR 1.0 cannot). And because the machine provides updated scoring as the team investigates and codes more documents, it can even provide the team with new investigational avenues and leads. A TAR 2.0 workflow works well because it gives the review team the freedom to investigate and gain knowledge about a wide variety of issues within the documents, while still ultimately resulting in data reduction.

Conclusion

The above workflow examples illustrate that TAR does not have to be the rigid, complicated, and daunting workflow feared by many. Rather, TAR can be a highly adaptable and simple way to gain efficiency, improve end results, and reduce the volume of documents reviewed across a variety of use cases. It is my hope that I have at least piqued your interest in the TAR 2.0 workflow enough that you'll think about how it might benefit you when the next document review project lands on your desk. If you're interested in discussing the topic further, please feel free to reach out to me at DBruno@lighthouseglobal.com.
January 20, 2021
Blog

Self-Service eDiscovery for Corporations: Three Tips for a Successful Implementation

Given the proliferation of data and the evolving variety of data sources, in-house counsel teams are beginning to exhaust resources managing increasingly complex case data. Self-service eDiscovery legal technology offers a compelling solution. Consider the inefficiencies in-house counsel face today – waiting for vendors to load data or provide platform access, scrambling to keep up with advancing technologies, and managing data security risks – it's a lot. The average in-house counsel team isn't just dealing with these inefficiencies on large litigations; they're encountering these issues in even the smallest compliance and internal investigations matters. Self-service solutions offer an opportunity to streamline eDiscovery programs, allowing in-house legal teams to get back to the business of case management and legal counseling. It's understandable that we're witnessing more and more companies moving to this model. So, once your organization has decided it is ready to step into the future and take advantage of the benefits self-service eDiscovery solutions have to offer, what's next? Below, I've outlined three best practices for implementing a self-service eDiscovery solution within your organization. While any organizational change can seem daunting at the outset, keeping these tips in mind will help your company move seamlessly to a self-service model.

1. Define how you will leverage your self-service eDiscovery solution to scale with ease.

One of the key benefits of a quality self-service solution is that it puts your organization back in the eDiscovery driver's seat. You decide which cases you will handle internally, with the advantage of having access to an array of eDiscovery expertise and matter management services when needed, even if that need arises in the middle of an ongoing matter. Cloud-based self-service solutions can readily handle any amount of data, and a quality provider will be able to seamlessly scale up from self-service to full-service without any interruption to case teams. Having a plan in place regarding how and when you will leverage each of these benefits (i.e., self-service vs. full-service) will help you manage internal resources and implement a pricing model that fits your organization's needs.

2. Select a pricing model that works for your organization.

Every organization's eDiscovery business is different, and self-service pricing models should reflect that. After determining how your organization will ideally leverage a self-service platform, decide what pricing model works best for that type of utilization. Solution providers should be able to offer a variety of licensing options, from an a la carte approach to subscription and transaction models. Prior to communicating with your potential solution provider, define how you plan to leverage a self-service solution to meet your needs. Then consider the type of support you require to balance your caseload with team resources, and prepare to talk to providers about whether they can accommodate that pricing. Once you have onboarded a self-service solution, be sure to continue to evaluate your pricing model, as the way you use the solution may change over time.
3. Discuss moving to a self-service model with your IT and data security teams.

Another benefit of moving to a self-service model is eliminating the burden of application and infrastructure management. Your in-house teams will be able to move from maintaining (and paying for) a myriad of eDiscovery technologies to a single platform providing all of the capabilities you need without the IT overhead. In effect, moving to a self-service solution gives your team access to industry-leading eDiscovery technology while removing the cost and hassle of licensing and infrastructure upkeep. A self-service model also allows you to transfer some of your organization's data security risk to the solution provider. You gain peace of mind knowing your eDiscovery data and the supporting tech are administered by a dedicated IT and security team in a state-of-the-art IT environment with best-in-class security certifications. Finally, to ensure your organization realizes the full benefit of moving to a self-service solution, it's imperative that your IT team has a seat at the table when selecting a platform. They can help ensure that whatever service is selected can be fully and seamlessly integrated into your organization's systems.

Keeping these tips in mind as your organization begins its self-service journey will help you realize the benefits that a quality self-service eDiscovery platform can provide. For more in-depth guidance on migrating to self-service platforms, see Brooks Thompson's blog posts discussing tips for overcoming self-service objections and building a self-service business case.
August 19, 2021
Blog

Overcoming eDiscovery Trepidation - Part I: The Challenge

In this two-part series, I interview Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP about his thoughts on the state of eDiscovery within law firms today, including lessons learned and best practices to help attorneys overcome their trepidation of electronic discovery and build a better litigation practice. This first blog focuses on the history of eDiscovery and the understandable reasons that attorneys may still try to avoid it, often to the detriment of their clients and their overall practice.

Introduction

The term "eDiscovery" (i.e., electronic discovery) was coined circa 2000 and received significant consideration by The Sedona Conference and others well in advance of November 2006. That's when the U.S. Supreme Court amended the Federal Rules of Civil Procedure to include electronically stored information (ESI), which was widely recognized as categorically different from data printed on paper. The amendments specifically mandated that electronic communications (like email and chat) be preserved in anticipation of litigation and produced when relevant. In doing so, they codified concepts explored in Judge Shira Scheindlin's groundbreaking Zubulake v. UBS Warburg decisions. By 2012, exploding data volumes had led the technologists assisting attorneys to employ various forms of artificial intelligence (AI), allowing data analysis to be accomplished in blocks of time still affordable to litigants, and the use of predictive coding and other forms of technology-assisted review (TAR) of ESI became recognized in U.S. courts. By 2013, updates to the American Bar Association (ABA) Model Rules of Professional Conduct officially required attorneys to stay current on "the benefits and risks" of developing technologies. By 2015, the FRCP was amended again to help limit eDiscovery scope to what is relevant to the claims and defenses asserted by the parties and "proportional to the needs of the case," as well as to normalize judicial treatment of spoliation and related sanctions associated with ESI evidence. In the same year, California issued a formal ethics opinion obligating attorneys practicing in California to stay current with ever-changing eDiscovery technologies and workflows in order to comply with their ethical obligation to provide legal services competently.

In the 15 years since those first FRCP amendments designed to deal with the unique characteristics of ESI, we've seen revolutionary changes in the way people communicate electronically within organizations, as well as explosive growth in the volume and variety of data types as we have entered the era of Big Data. From the rise of email, social media, and chat as dominant forms of interpersonal communication, to organizations moving their data to the cloud, to an explosion of ever-changing new data sources (smart devices, iPhones, collaboration tools, etc.) – the volume and variety of data makes understanding eDiscovery's role in litigation more important than ever. And yet, despite more than 20 years of exposure, the challenges of eDiscovery (including managing new data forms, understanding eDiscovery technology, and adhering to federal and state eDiscovery standards) continue to generate angst for most practitioners.

So why, in 2021, are smart, sophisticated lawyers still uncomfortable addressing eDiscovery demands and responding to them? To find out, I went to one of the leading experts in eDiscovery today, Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP.
Mr. Calhoun has over 40 years of experience in litigation and counseling, and he currently serves as Chair of the firm's Electronic Discovery, Information Management & Compliance Practice. Over the years he has found creative solutions to eDiscovery challenges, like having a court enter a case management order requiring all 42 parties in a complex construction defect case to use a single technology provider, which dropped the technology costs to less than 2.5% of what they would have been had each party employed its own vendor. In another case (which did not involve privileged communications), he was able to use predictive coding to rank 600,000 documents and place them into tranches from which samples were drawn to determine which tranches could be produced without further review. It was ultimately determined that about 35,000 documents would not have to be reviewed after eyes had been put on fewer than 10,000 of the original 600,000. I sat down with Mr. Calhoun to discuss his practice and his views of the legal and eDiscovery industries, and to try to get to the bottom of how attorneys can master the challenges posed by eDiscovery without having to devote the time needed to become an expert in the field.

Let's get right down to it. With all the helpful eDiscovery technology that has evolved in the market over the last 10 years, why do you think eDiscovery still poses such a challenge for attorneys today?

Well, right off the bat, I think you're missing the mark a bit by focusing your inquiry solely on eDiscovery technology. The issue for many attorneys facing an eDiscovery challenge today is not "what is the best eDiscovery technology?" – because many attorneys don't believe any eDiscovery technology is the best "solution." Many believe it is the problem. No technology, regardless of its efficacy, can provide value if it is not used. The issue is more fundamental. It's not about the technology; it is about the fear of the technology, the fear of not being able to use it as effectively as competitors, and the fear of incurring unnecessary costs while blowing budgets and alienating clients. Practitioners fear eDiscovery will become a time and money drain, and attorneys fear that those issues can ultimately cost them clients. Technology may, in fact, be able to solve many of their problems – but most attorneys are not living and breathing eDiscovery on a day-to-day basis (and, frankly, don't want to). For a variety of reasons, most attorneys don't or can't make time to research and learn about new technologies, even when they're faced with a discovery challenge. Even attorneys who do have the inclination and aptitude to deal with the mathematics and statistical requirements of a well-planned workflow, who understand how databases work, and who are unfazed by algorithms and other forms of AI, often don't make the time to evaluate new technology because their plates are already full providing other services their clients need. And most attorneys became lawyers because they had little interest in mathematics, statistics, and other sciences, so they don't believe they have the aptitude necessary to deal with eDiscovery (which isn't really true). This means that when they're facing gigabytes or even terabytes of data that have to be analyzed in a matter of weeks, they often panic. Many lawyers look for a way to make the problem go away. Sometimes they agree with opposing counsel not to exchange electronic data; other times they try to bury the problem with a settlement.
Neither approach serves the client, who is entitled to an expeditious, cost-effective, and just resolution of the litigation.

Can you talk more about the service clients are entitled to, from an eDiscovery perspective? By that, I mean – can you explain the legal rules, regulations, and obligations that are implicated by eDiscovery, and how those may impact an attorney facing an electronic discovery request?

Sure. Under Rule 1 of the FRCP and the laws of most, if not all, states, clients are entitled to a just resolution of the litigation. And ignoring most of the electronic evidence about a dispute because a lawyer finds dealing with it problematic rarely affords a client a just result. In many cases, the price the client pays for counsel's ignorance is a surcharge to terminate the litigation. And counsel's desire to avoid the challenge of eDiscovery very often amounts to a breach of the ethical duty to provide competent legal services. The ABA Model Rules (as well as the ethical rules and opinions in the majority of states) also address the issue. The Model Rules offer a practitioner three alternatives when undertaking to represent a client in a case that involves ESI (which almost every case does). To meet his or her ethical obligation to provide competent legal services, the practitioner can: (1) become an expert in eDiscovery matters; (2) team up with an attorney or consultant who has the expertise; or (3) decline the engagement. Because comparatively few attorneys have the aptitude to become eDiscovery experts, and no one who wants to practice law can do so by turning down virtually all potential engagements, the only practical solution for most practitioners is finding an eDiscovery buddy. In the end, I think attorneys are just looking for ways to make their lives (and thereby their clients' lives) easier, and they see eDiscovery as threatening to make their lives much harder. Fortunately, that doesn't have to be the case.

So, it sounds like you're saying that despite the fact that it may cost them clients, there are sophisticated attorneys out there who are still eschewing legal technology and responding to discovery requests the way they did when most discovery requests involved paper documents?

Absolutely there are. And I can empathize with their thought process, which is usually something along the lines of "I don't understand eDiscovery technology and I'm facing a tight discovery deadline. I do know how to create PDFs from scanned copies of paper documents and redact them, if necessary. I'm just going to use the method I know and trust." While this is an understandable way to think, it immediately imposes on clients the cost of inefficient litigation and of settlements or judgments that could have been reduced or avoided if only the evidence had been gathered. Ultimately, when clients recognize that their counsel's fear of eDiscovery is imposing a cost on them, that attorney will lose the client. In other words, counsel who refuse to delve into ESI because it is hard are like the person who lost car keys in a dark alley but insists on looking only under the streetlight, because it is easier and safer than searching the alley.

That's such a great analogy. Do you have any real-world examples that may help folks understand the plight of an attorney who is basically trying to ignore ESI?

Sure. Here's a great example: Years ago, my good friend and partner told me he would retire without ever having to learn about eDiscovery.
My partner is a very successful attorney with a great aptitude for putting clients at ease. But about a week after expressing that thought, he came to me with 13 five-inch three-ring binders. He wanted help finding contract paralegals or attorneys to prepare a privilege log listing all the documents in the binders. An arbitrator had ordered that if he did not have a privilege log done in a week, his expert would not be able to testify. His "solution" was to rent or buy a bunch of dictating machines, have the reviewers dictate the information about the documents, and pay word processors overtime to transcribe the dictation into a privilege log. I asked what was in the binders. Every document was an email thread, and many had families. My partner had received the data as a load file, but he had the duplications department print the contents rather than put them into a review platform. Fortunately, the CD on which the data was delivered was still in the file.

I can tell this story now because he has since turned into quite the eDiscovery evangelist, but that is exactly the type of situation I'm referring to: smart, sophisticated attorneys who are just trying to meet a deadline and stay within budget will do whatever it takes to get the documents or other deliverable (e.g., a privilege log) out the door. And without the proper training, unfortunately, the solution is to throw more bodies at the problem – which invariably ends up being more costly than using technology properly.

Can you dive a bit deeper there? Explain how performing discovery the old-fashioned way on a small case like that would cost more money than performing it with dedicated eDiscovery technology.

Well, let me finish my story and then we'll compare the cost of using 20th and 21st century technologies to accomplish the same task. As I said, when I agreed to help my partner meet his deadline, I discovered all the notebooks were filled with printed copies of email threads and attachments. My partner had received a load file of less than 2 GB and gave it to the duplications department with instructions to print the data so he could read it. We gave the disk to an eDiscovery provider, and they created a spreadsheet using the email header metadata to populate the log information: who the record was from, who it was to, who was copied (whether in the clear or blind), when it was created, what subject was addressed, and so on. A column was added for the privilege(s) associated with the documents. Those before a certain date were attorney-client only. Those after litigation became foreseeable were attorney-client and work product. That made populating the privilege column a snap once the documents were chronologically arranged. The cost to generate the spreadsheet was a few hundred dollars. Three in-house paralegals were able to QC, proofread, and finalize the log in less than three days for a cost of about $2,000.

Had we done it the old-fashioned way, my partner was looking at having 25 or 30 people dictating for five days. If the reviewers were all outsourced, the cost would have been $12,000 to $15,000. He planned to use a mix of in-house and contract personnel, so the cost would have been 30% to 50% higher. The transcription process would have added another $10,000. The cost of copying the resulting privilege log – which would have run about 500 pages at 10 entries per page – for the four parties and the arbitrator would have been about $300.
So even 10 years ago, the cost of doing things the old-fashioned way would have been about $35,000. The technology-assisted solution was about $2,500.

Stay tuned for the second blog in this series, where we delve deeper into how attorneys can save their clients money, achieve better outcomes, and gain more repeat business once they overcome common misconceptions around eDiscovery technology and costs. If you would like to discuss this topic further, please reach out to Casey at cvanveen@lighthouseglobal.com and Gordon at Gordon.Calhoun@lewisbrisbois.com.
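For readers curious about the mechanics behind the metadata-driven log described in the story above, here is a minimal sketch in Python. It assumes the load file has already been exported to a CSV of email header fields; the file names, column names, and the foreseeability cutoff date are all hypothetical placeholders, not details from the actual matter.

```python
import csv
from datetime import date

# Hypothetical cutoff: emails sent after this date also carry work-product
# protection because litigation had become foreseeable (illustrative only).
LITIGATION_FORESEEABLE = date(2008, 6, 1)

def privilege_basis(sent: date) -> str:
    """Assign the privilege designation based on the email's sent date."""
    if sent >= LITIGATION_FORESEEABLE:
        return "Attorney-Client; Work Product"
    return "Attorney-Client"

# Assumed input: a CSV with ISO-format dates and standard header fields.
with open("email_metadata.csv", newline="") as src:
    rows = list(csv.DictReader(src))  # fields: From, To, CC, BCC, Sent, Subject

# Arrange chronologically, as in the story, so the privilege column
# can be populated in a single pass.
rows.sort(key=lambda r: r["Sent"])

with open("privilege_log.csv", "w", newline="") as out:
    fields = ["From", "To", "CC", "BCC", "Sent", "Subject", "Privilege"]
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for r in rows:
        sent = date.fromisoformat(r["Sent"][:10])
        writer.writerow({**{k: r[k] for k in fields[:-1]},
                         "Privilege": privilege_basis(sent)})
```

As in the story, a generated log like this is only a starting point: paralegals still QC and proofread every entry before it goes out the door.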
May 21, 2021
Blog

Self-Service eDiscovery: Top 3 Technical Pitfalls to Avoid

Whether it's called DIY eDiscovery, SaaS eDiscovery, or self-service eDiscovery, one thing is clear—everyone in the legal world is interested in putting today's technologies to work for them to get more done with less. It's a smart move, given that many legal teams are facing an imbalance between needs and resources: as in-house legal budgets are being slashed, actual workloads are increasing.

Now more than ever, legal teams need to ensure they're choosing and using the right tools to effectively manage dynamic caseloads—a future-ready solution capable of supporting a broad range of case types at scale. Given the variety of options on the market, it's understandable there's some uncertainty about what to pursue, let alone what to avoid. Below, I have outlined guidance to help your legal team navigate the top three pitfalls encountered when seeking a self-service eDiscovery solution.

1. Easy vs. Powerful

There are a lot of eDiscovery solutions out there making bold promises, but many still force users to choose between ease of use and full functionality. A platform may be simple to learn and navigate, yet fail to offer advanced features like AI-driven analysis and search.

Think of it like the early days of cell phones, when we were forced to choose between a classic brick-style device and a new-to-market smartphone. Older phones were easy to use, offering familiar capabilities like calling and texting, while newer smartphones provided impressive, previously unknown functionality but came with a learning curve. With the advancement of technology, today's device buyers can truly have it all—a feature-rich mobile phone delivered through an intuitive user experience.

The same is true for dynamic eDiscovery solutions. You shouldn't have to choose between power and simplicity. Any solution your team considers should deliver best-in-class technology through one simple, single-pane interface.

2. Short-Term Thinking vs. Long-Term Gains

As organizations move to the seemingly unlimited data storage capacities of cloud-based platforms and tools, legal teams are facing a landslide of data. Even the smallest internal investigation may now involve hundreds of thousands of documents. And with remote work the new global norm, this trend will only continue. Legal teams require eDiscovery tools capable of scaling to meet any data demand at every stage of the eDiscovery process.

When evaluating an eDiscovery solution, keep the future in mind. The solution you select should be capable of managing even the most complex case using AI and advanced analytics—intelligent functionality that will allow your team to efficiently cull data and gain insights across a wide variety of cases. Newer AI technology can aggregate data collected in the past and analyze how it was used and coded in previous matters—information that can help your team make data-driven decisions about which custodians and data sources contain relevant information before collection. It also offers the ability to re-use past attorney work product, allowing you to save valuable time by immediately identifying junk data, attorney-client privilege, and other sensitive information.

3. Innovation vs. Upkeep

Thanks to the DIY eDiscovery revolution, your organization no longer has to devote budget and IT resources to maintaining myriad hardware and software licenses or building a data security program to support that technology.
Seek a trusted solution provider that can take on that burden, with development and security programs (and the requisite certifications and attestations to prove it). This should include routine technology assessment and testing, delivered in a way that doesn't disrupt your ongoing work.

As you're asked to do more with less, the right cloud-based eDiscovery platform can ensure your team is able to meet the challenge. By avoiding the above pitfalls, you'll end up with a solution that can stand up to today's most complex caseloads, with powerful features designed to improve workflow efficiency, provide valuable insights, and support more effective eDiscovery outcomes.

If you're interested in moving to a DIY eDiscovery solution, check out my previous blog series on self-service eDiscovery for corporations, including how to select a self-service eDiscovery platform, tips for self-service eDiscovery implementation, and how self-service eDiscovery can make in-house counsel's life easier.
January 25, 2021
Blog

Self-Service eDiscovery for Corporations: Four Considerations For Selecting the Solution That’s Right for You

Let's begin by setting the stage. You've evaluated the ways a self-service eDiscovery solution could benefit your organization and determined the approach will help you boost workflow efficiency, free up internal resources, and reduce eDiscovery practice and technology costs. You've also researched how to ideally implement a solution and armed yourself with strategies to build a business case and overcome stakeholder objections that may arise.

You're now ready to move on to the next step in your organization's self-service eDiscovery journey: selecting the right solution provider. When it comes to selecting a solution provider, one size does not fit all. Every organization has different eDiscovery needs—including yours—and those needs evolve. From how attorneys and eDiscovery teams are structured within the organization and their approach to investigations and litigations, to the types of data sources implicated in those matters and how those matters are budgeted—there's a lot to consider.

The self-service solution you choose should be able to adapt to your changing needs and grow with your organization. Below, I've outlined four key considerations that will help you select a fitting self-service solution for your organization.

1. Is the solution capable of scaling to handle any matter?

It's important to select a self-service eDiscovery solution capable of efficiently handling any investigation or litigation that comes your way. A cloud-based solution can easily and swiftly scale to handle any data volume.

You'll also want to ensure your solution can handle the types of data your organization routinely encounters. For example, collecting, processing, and reviewing data generated by collaboration applications like Microsoft Teams may require special tools or workflows. The same can be said for data generated by chat messages or cell phones. Before selecting a self-service solution, you'll benefit from outlining the types of data your organization must handle and asking potential solution providers how their platform supports each.

Additionally, you may be interested in the ability to move to a full-service model with your provider, should the need arise. With scalable service, your team will have access to reliable support if a matter becomes too challenging to manage in-house. With a scalable solution bolstered by a flexible service model, your organization can bring on help as needed, without disruption.

2. Does the solution drive data reduction and review efficiency across the EDRM?

Organizational data volumes are increasing year after year—meaning even small, discrete internal investigations can quickly balloon into hundreds of thousands of documents. Collecting, processing, analyzing, and producing large amounts of data can be costly, complicated, and time-consuming, and may expose your organization to legal risk if the right tools and workflows are not in place.

Look for a self-service solution capable of managing data at scale, with the ability to actively help your organization reduce its data footprint. This means choosing a provider that can offer expert guidance around data reduction techniques and tools.
Ask potential solution providers whether they have resources to address the cost burden of data and mitigate risk through strategies like defensible data collections, effective search term selection, and crafting early case assessment (ECA) and technology-assisted review (TAR) workflows.

The provider should also be able to deliver technology engineered to reduce data resource draw, like processing that allows faster access to data, tools to cut down on hosted review data volume, and AI and analytics that provide the ability to re-use attorney work product across multiple matters. In short, seek a self-service solution that gives your organization the ability to defensibly and efficiently reduce the amount of costly human review across your organization's portfolio.

3. Will the solution's pricing model align with your organization's changing needs?

Your organization's budget requirements are unique and will likely change over time. Look for a solution provider that can change along with them and offer a variety of pricing models to fit your budgetary requirements. Ask prospective providers if they are able to design pricing around your organization's expectations for utilization. Modern pricing models can be flexible yet predictable, preventing unexpected charges or overages and ultimately aligning with your organization's financial needs.

4. Is the solution's roadmap designed to take your organization into the future?

When selecting a self-service solution it's easy to focus on your current needs, but it's equally important to consider what a provider has planned for the future. If a vendor is not forward-thinking, an organization may find itself forced to use outdated technology that can't take on new security challenges or process and review emerging data sources.

Pursue a provider that demonstrates the ability to anticipate market trends and design solutions to address them. Ask potential providers to articulate where they see the market moving and what plans they have in place to update their technology and services accordingly. It is also helpful to ask whether a provider's roadmap aligns with your organization's direction. For example, if you know your company is planning a systematic change, like moving to a bring-your-own-device (BYOD) policy or migrating to the cloud, you'll want to confirm the self-service solution can support that change.

Asking these types of questions before selecting a provider helps ensure the solution you choose can grow with both your organization and the eDiscovery industry as a whole. With awareness and understanding of the true potential offered by a self-service solution, you can ultimately choose a provider that will help you level up your organization's eDiscovery program.
January 13, 2022
Blog

Purchasing AI for eDiscovery: Tips and Best Practices

eDiscovery is undergoing a fundamental sea change, including how we think about data governance and the EDRM. Linear review and older analytic tools are quickly becoming outdated and unable to handle modern datasets, i.e., eDiscovery datasets that are not only more voluminous than ever before, but also more complicated, emanating from an ever-evolving list of new data sources and steeped in a variety of text-based and non-text-based content (foreign languages, slang, emojis, video, etc.).

Fortunately, technological advancements in AI have led to a new class of eDiscovery tools purpose-built to handle "big data." These tools can more accurately identify and classify responsive, privileged, and sensitive information, parse multiple formats, and even provide attorneys with data insights gleaned from an organization's entire legal portfolio.

This is great news for legal practitioners faced with reviewing and analyzing these more challenging datasets. However, evaluating and selecting the right AI technology can present its own hurdles and complexities. The purchasing process can raise questions like: Is all AI the same? If not, what is the difference between AI-based tools? What features are right for my organization or firm? And once I've found a tool I like, how do I make the case for purchasing it to my firm or organization?

These are all tough questions, and they can lead you down a rabbit hole of research and never-ending discussions with technology and eDiscovery vendors. However, the right preparation can make a world of difference. The steps below will help you simplify the process, obtain answers to your fundamental questions, and ultimately select the right technology to overcome your eDiscovery challenges and level up your eDiscovery program.

1. Familiarize Yourself with Subsets of AI in eDiscovery

Newer AI technology is significantly better at tackling today's modern eDiscovery datasets than legacy technology. It can also provide legal teams with previously unheard-of data insights, improving efficiency and accuracy while enabling more data-driven strategic decisions. However, not all technology is the same—even if technology providers tend to refer to it all as "AI." There are many different subsets of AI technology, and each may have vastly different capabilities and benefits. It's important to understand which subsets of AI can provide the benefits you're looking for, and how those subsets can work together. For example, Natural Language Processing (NLP) enables an AI-based tool to understand text much the way humans understand it—providing much more accurate classification results—while tools that pair deep learning with NLP can handle large and complex datasets more efficiently and accurately. Other subsets of AI give tools the ability to re-use data across matters as well as across entire legal portfolios. Learning about each subset and the capabilities and benefits it can provide before talking to eDiscovery vendors will give you the knowledge base necessary to narrow down the tools that will meet your specific needs.

2. Learn How to Measure AI ROI

As a partner to human reviewers, advanced AI tools can provide a powerful return on investment (ROI).
Understanding how to measure this ROI will enable you to ask the right questions during the purchasing process and ensure you select a tool that aligns with your organization or law firm's priorities. For example, if your team struggles with review accuracy using your current tools and workflows, you'll want to verify that the tool you purchase is quantifiably more accurate at classifying documents for responsiveness, privilege, sensitive information, etc. The same is true for other ROI metrics that matter to your team, such as lower overall eDiscovery spend or increased review efficiency (a short worked example appears at the end of this post).

These metrics will also help you build a strong business case for purchasing your chosen tool, as well as give you a verifiable way to confirm the tool is performing as expected after purchase.

3. Come Prepared with a List of Questions

It's easy to get swept up in conversations about tools and solutions that end without the metrics you need. A simple way to control the conversation and ensure you walk away with the information you need is to prepare a thorough list of questions that reflect your priorities. Also be sure to have a method for recording each vendor's responses. A list of standard questions will keep conversations productive and provide a way to easily compare and contrast the technology you're evaluating. Ask for quantifiable metrics and examples to back up responses, as well as references from clients. This will help you verify that vendor claims are backed by data and evidence.

4. Know the Pitfalls of AI Adoption—and How to Avoid Them

It won't matter how well you understand AI capabilities, whether you've asked the right questions, or whether you know how to measure ROI if you don't know how to avoid common AI pitfalls. Even the best technology will fail to return the desired results if it's not implemented properly. For example, some workflows work best with advanced AI, while others may fail to return the best possible results. Knowing this ahead of time will help you get your team on board early, ensure a smooth implementation, and unlock the full potential of the technology.

These tips will help you better prepare for the AI purchasing process. For more information, download our guide to buying AI. This comprehensive guide offers a deep dive into tips and tactics that will help you fully evaluate potential eDiscovery AI tools and select the best tool for your needs. The guide can also be used to reevaluate your current AI and analytics tools to confirm you're using the best available technology to meet today's eDiscovery challenges.
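To make the idea of quantifiable ROI concrete, here is a minimal sketch in Python of two metrics worth requesting from any vendor: classification accuracy (precision and recall against an attorney-coded validation sample) and estimated review savings. Every number below is a hypothetical placeholder, not a benchmark for any particular tool.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard definitions: precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical validation sample scored against attorney coding decisions.
tp, fp, fn = 850, 150, 120  # true positives, false positives, false negatives
precision, recall = precision_recall(tp, fp, fn)

# Hypothetical review-efficiency ROI: documents the tool defensibly culls
# from eyes-on review, multiplied by an assumed blended review cost.
corpus = 400_000      # documents collected
surfaced = 60_000     # documents the tool routes to human review
per_doc_cost = 1.00   # assumed review cost per document, in dollars

savings = (corpus - surfaced) * per_doc_cost
print(f"precision={precision:.1%}  recall={recall:.1%}  est. savings=${savings:,.0f}")
```

Metrics like these, recorded consistently across vendor conversations, double as the baseline for verifying performance after purchase.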
December 2, 2020
Blog

Preparing for Big Data Battles: How to Win Over AI and Analytics Naysayers

Artificial intelligence (AI), advanced analytics, and machine learning are no longer new to the eDiscovery field. While the legal industry admittedly trends toward caution in its embrace of new technology, the ever-growing surge of data is forcing most legal professionals to accept that basic machine learning and AI are becoming necessary eDiscovery tools.

However, the constant evolution and improvement of legal tech offer an excellent opportunity to the forward-thinking eDiscovery professional who seeks to triumph over the growing inefficiencies and ballooning costs of older technology and workflow models. Below, we'll provide you with arguments to pull from your quiver when you need to convince the Luddites that leveraging the most advanced AI and analytics solutions can give your organization or law firm a competitive and financial advantage, while also reducing risk.

Argument 1: "We already use analytical and AI technology like Technology Assisted Review (TAR) when necessary. Why bring on another AI/analytical tool?"

Solutions like TAR and other in-case analytical tools remain worthwhile for specific use cases (for example, standalone cases with massive amounts of data, short deadlines, and static data sets). However, more advanced analytical technology can now provide incredible insight into a wider variety of cases, or even across multiple matters. For example, newer solutions can analyze previous attorney work product across a company's entire legal portfolio, giving legal teams unprecedented insight into institutional challenges like identifying attorney-client privilege, trade secret information, and irrelevant junk data that gets pulled into cases and re-reviewed time and time again. This gives legal teams the ability to make better decisions about how to review documents on new matters.

Additionally, new technology has become more powerful, with the ability to run multiple algorithms and search within metadata, where older tools could only use single algorithms to search text alone. This means newer tools are more effective and efficient at identifying critical information such as privileged communications, confidential information, or protected personal information. In short, printing out roadmap directions was advanced and useful in its time, but we've all moved on to more efficient and reliable ways of finding our way.

Argument 2: "I don't understand this technology, so I won't use it."

This is one of the easiest arguments to overcome. A good eDiscovery solution provider can offer a myriad of options to help users understand and leverage the advances in analytics and AI to achieve the best possible results. Whether you want to take a hands-off approach and have a team of experts show you what is possible ("Here are a million documents. Show me all the documents that are very likely to be privileged by next week"), or you want to really dive into the technology yourself ("Show me how to use this tool so that I can delve into the privilege rate of every custodian across multiple matters in order to effectuate a better overall privilege review strategy"), a quality solution provider should be able to accommodate. Look for providers that offer training and can clearly explain how these new technologies work and how they will improve legal outcomes. Your provider should have a dedicated team of analytics experts with the credentials and hands-on experience to quell any technology fears.
Argument 3: "This technology will be too expensive."

Again, this should be a simple argument to overcome. The efficiencies that effective use of AI and analytics achieves can far outweigh the cost of using it. Look for a solution provider that offers a variety of predictable pricing structures, like per-GB pricing, flat fees, fees by case, fees across multiple cases, or subscription-based fees. Before presenting your desired solution to stakeholders, draft your battle plan by preparing a comparison of your favored pricing structure against the cost of performing a linear review under a traditional pricing structure (say, $1 per document), as in the sketch below. Also, be sure to identify and outline any efficiencies a more advanced analytical tool can provide in future cases (for example, the ability to analyze and re-use past attorney work product). Finally, when battling risk-averse stakeholders, come armed with a cost/benefit analysis outlining all the ways newer AI can mitigate risk, such as by enabling more accurate and consistent work product, case over case.
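Here is a minimal sketch, in Python, of the kind of side-by-side comparison described above. All of the figures (document count, per-document rate, subscription fee, and cull rate) are hypothetical placeholders to be replaced with quotes from your own providers.

```python
# Hypothetical matter profile; replace every figure with your own numbers.
docs = 250_000          # documents hosted for review
linear_rate = 1.00      # traditional linear review, dollars per document
subscription = 60_000   # assumed annual subscription fee for the AI platform
culled = 0.70           # assumed share of documents the AI defensibly culls

linear_cost = docs * linear_rate
ai_cost = subscription + docs * (1 - culled) * linear_rate

print(f"Linear review: ${linear_cost:,.0f}")   # $250,000
print(f"AI-assisted:   ${ai_cost:,.0f}")       # $135,000
print(f"Difference:    ${linear_cost - ai_cost:,.0f}")
```

A one-page table built from this arithmetic, plus the projected work-product re-use on future matters, is often the most persuasive exhibit in a budget conversation.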
August 22, 2021
Blog

Privilege Mishaps and eDiscovery: Lessons Learned

Discovery in litigation or investigations invariably leads to concerns over the protection of privileged information. With today's often massive data volumes, locating the emails and documents that contain legal advice or other confidential attorney-client communications can be a needle-in-a-haystack exercise. There is little choice but to put forth best efforts to find the needles.

Whether it's dealing with thousands (or millions) of documents, emails, and other forms of electronically stored information, or just a scant few, identifying privileged information and creating a sufficient privilege log can be a challenge. Getting it right can be a headache — and an expensive one at that. But get it wrong and your side can face time-consuming and costly motion practice that doesn't leave you on the winning end. A look at past mishaps may be enlightening as a reminder that this is a nuanced process that deserves serious attention.

Which Entity Is Entitled to Privilege Protection?

A strong grasp of what constitutes privileged content in a matter is important. Just as important? Knowing who the client is. It may seem obvious, but history suggests that sometimes it is not, especially when an in-house legal department or multiple entities are involved.

Consider the case of Estate of Paterno v. NCAA. The court rejected Penn State's claim of privilege over documents Louis Freeh's law firm generated during an internal investigation. Why? Because Penn State wasn't the firm's client. The firm's engagement letter said it was retained to represent the Special Investigations Task Force that Penn State formed after Pennsylvania charged Jerry Sandusky with several sex offenses. There was a distinct difference between the university and its task force.

The lesson? Don't overlook the most fundamental question of who the client is when considering privilege.

The Trap of Over-Designating Documents

What kind of content involving the defined client(s) is privileged? This can be a tough question. You can't claim privilege for every email or document a lawyer's name is on, especially for in-house counsel, who usually serve both a business and a legal function. A communication must be confidential and relate to legal advice in order to be considered privileged.

Take Anderson v. Trustees of Dartmouth College as an example. In this matter, a student who had been expelled in a disciplinary action filed suit against Dartmouth. Dissatisfied with what discovery revealed, the student (representing himself) filed a motion to compel, asking the judge to conduct an in camera review of what Dartmouth had claimed was privileged information. The court found that much of the information being withheld for privilege did not, in fact, constitute legal advice, and that Dartmouth's privilege claim exceeded privilege's intended purpose.

Dartmouth made a few other unfortunate mistakes. It labeled entire email threads privileged instead of redacting the specific parts identified as privileged. It also labeled every forwarded or cc'ed email bearing in-house counsel's name as privileged, without showing the attorney was acting as a legal advisor. To be sure, identifying only the potentially privileged parts of an email thread — which could include any number of direct, forwarded, and cc'd recipients and an abundance of inclusive yet non-privileged content — is not an easy task, but unfortunately, it is a necessary one.

The lesson?
If the goal of discovery is to get to the truth — foundational in American jurisprudence — courts are likely to construe privilege somewhat narrowly and allow more rather than fewer documents to see the light of day. In-house legal departments must be especially careful in their designations given the flow and volume of communications related to both business and legal matters, where the distinction is sometimes difficult to make.

Be Ready to Back Up Your Claim

No matter how careful you are during the discovery process, the other party might challenge your claim of privilege on some documents. "Because we said so" (a.k.a. ipse dixit) is not a compelling argument. In LPD New York, LLC v. adidas America, Inc., adidas claimed certain documents were privileged. When challenged, adidas' response was to say LPD's position wasn't supported by law. The court said: not good enough. Adidas had the burden to prove the attorney-client privilege applied and to respond to LPD's position in a meaningful way.

The lesson? For businesses, be prepared to show that an in-house lawyer was acting in his or her capacity as a legal advisor before claiming privilege.

A Protection Not Used Often Enough: Rule 502(d)

Mistakes do happen, however, and sometimes the other party receives information it shouldn't through inadvertent disclosure. With the added protection of a FRE 502(d) order, legal teams are in a strong position to protect privileged information and will be in good shape to get that information back. Former United States Magistrate Judge Andrew Peck, renowned in eDiscovery circles, is a well-known advocate of this order.

The rule says, "A federal court may order that the privilege or protection is not waived by disclosure connected with the litigation pending before the court — in which event the disclosure is also not a waiver in any other federal or state proceeding."

Without a 502(d) order in place, a mistake usually means having to go back and forth with your opponent, arguing the elements under 502(b). If you're trying to claw back information, you have to spend time and money proving the disclosure was inadvertent, that you took reasonable steps to prevent disclosure, and that you promptly took steps to rectify the error. It's an argument you might not win.

Apple Inc. v. Qualcomm Incorporated is a good example. In 2018, Apple lost its attempt to claw back certain documents it mistakenly handed over to Qualcomm during discovery in a patent lawsuit. The judge found Apple didn't meet the requirements of 502(b). Had Apple established a 502(d) order to begin with, 502(b) might not have come into play at all.

The lesson? Consider Judge Peck's admonition and get a 502(d) order to protect against an inadvertent privilege waiver.

Advances in Privilege Identification

Luckily, gone are the days when millions of documents had to be reviewed by hand (or eyes) alone. Technological tools and machine learning algorithms can take carefully constructed privilege instructions and find potentially privileged information with a high degree of accuracy, reducing the effort lawyers must expend to make final privilege calls.

Although automation doesn't eliminate the need for eyes on the review process, the benefits of machine learning and advanced technology tools are invaluable during a high-stakes process that needs timely, accurate results. Buyer beware, however: such methods require expertise to implement and rigorous attention to quality control and testing.
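As an illustration of the kind of machine-assisted first pass described above, here is a minimal sketch using scikit-learn (an assumption made for illustration; it is not the tooling any particular provider uses). A model trained on attorney-labeled examples scores unreviewed documents so that lawyers can make the final privilege calls:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Attorney-labeled seed set: 1 = potentially privileged, 0 = not privileged.
# Two toy examples stand in for thousands of coded documents.
train_texts = [
    "Per outside counsel's advice on the indemnity clause, we should...",
    "Reminder: the team lunch is moved to noon on Friday.",
]
train_labels = [1, 0]

# TF-IDF features feed a simple linear classifier; production workflows add
# metadata signals (counsel domains, recipients) and rigorous QC sampling.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Rank unreviewed documents by predicted probability of privilege so that
# attorneys make the final calls, highest-risk documents first.
unreviewed = ["Forwarding the legal opinion on the merger for your comments."]
scores = model.predict_proba(unreviewed)[:, 1]
print(scores)
```

The value of such a model is in prioritization and consistency, not in replacing attorney judgment; that is precisely why the quality control and testing noted above are non-negotiable.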
When you're able to accurately identify privileged information while reducing the stress of creating a privilege log that will hold up in court, you lessen the risk of a challenge. And if a challenge should come, you have the data to back up your claims.
September 1, 2021
Blog

Overcoming eDiscovery Trepidation - Part II: A Better Outcome

In this two-part series, I interview Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP about his thoughts on the state of eDiscovery within law firms today, including lessons learned and best practices to help attorneys overcome their trepidation of electronic discovery and build a better litigation practice. This second blog focuses on how attorneys within law firms can save their clients money, achieve better outcomes, and gain more repeat business once they overcome common misconceptions around eDiscovery.

You mentioned earlier that attorneys who try to shoehorn volumes of electronic data into older workflows developed for paper discovery will likely lose clients. Can you explain how?

Sure. My point was that non-technological workflows often pop into the minds of attorneys because they are familiar, comfortable approaches to responding to document requests. Because they are familiar and can be initiated immediately, there is a great temptation to jump in and avoid planning and employing the expertise essential for an optimal workflow. Unfortunately, jumping in without much planning produces a result that is often unnecessarily costly to clients, particularly if the attorneys employ in-house resources (which are usually several times more costly than outsourced staff). In-house resources often regard document review and analysis as an undesirable assignment and have competing demands on their time from other projects and cases. This can result in unexpected delays in project completion and poor work product (in part because quality degrades when people are required to perform tasks they dislike). The end result is untimely, lower quality, and more costly than anticipated, which will ultimately cost the attorney the client.

Clients will always gravitate toward the professional who can deliver a better, more cost-effective, and more efficient solution while avoiding motion expenses. That means attorneys who are informed enough to use technology to save clients money across multiple cases are going to earn the trust and confidence of more and more clients. And that is the answer to what's in it for the professional who takes the time to learn about eDiscovery, or who partners with someone who already knows it.

Well, coming from a legal technology company, I agree with that sentiment. But we also tend to see attorneys on the other end of the spectrum: lawyers who understand the benefits advanced eDiscovery technology can provide, but avoid it because of fears around overhead expense and surprise fees. Have you seen this within your own practice? If so, how do you advise attorneys who may have similar feelings?

I experience the same thing and, again, this type of thought process is completely understandable. When eDiscovery technologies were comparatively new, they seemed disproportionately expensive. The cost to process a GB of data could exceed $1,000, hosting charges ran into the tens of dollars per month, and there were no analytics to expedite review. When project management services were in their infancy, too many of those providing them simply followed uninformed instructions from counsel. An instruction to process data was not met with inquiries as to whether all the data collected should be processed, or whether an alternative should be explored when initial analysis indicated the data expansion would be unexpectedly large.
Further, early case assessment (ECA) strategies utilizing only extracted text and metadata were years in the future. The only saving grace was that data volumes were minuscule compared to what they are today. But that was not enough to prevent widespread reports about massive eDiscovery vendor bills. As you might suspect, the problem was not so much the technology, or even the lack thereof, as it was the failure to spend the time to develop an appropriate workflow and manage the eDiscovery process so the results were cost-effective.

Any tips on how attorneys can overcome the lingering fear of eDiscovery "sticker shock"?

This challenge can be met by research, planning, and negotiation: research into the optimal technologies and which providers are equipped to deliver them, planning an appropriate workflow, and negotiation with eDiscovery platform providers to customize the offerings to the needs of your case. If you have the aptitude, consider investing some time researching eDiscovery solutions that provide predictable, transparent prices outside the typical hourly and per-GB fee structure. A good eDiscovery platform provider should work with you to develop a fee arrangement that makes sense for your caseload and budget. There is no reason why even a small firm or individual practitioner cannot negotiate subscription-based or consumption-based fees for eDiscovery solutions the same way forward-thinking serial litigants like large corporations and insurers have. The pricing models exist, and there is no reason they cannot be scaled for users with smaller demands. Under this type of arrangement, there will be no additional costs or surprise fees, which in turn allows any practitioner to pass that price predictability on to his or her clients. Ultimately, this lower cost, increased predictability, and efficiency will enable an attorney to grow his or her book of business with repeat customers and referrals.

So, if an attorney is able to negotiate an alternative fee arrangement with a legal technology provider, is that the end of the problem? Should that solve all an attorney's eDiscovery concerns?

It's a start – but no. Even with a customized eDiscovery technology solution, part of the concern for most attorneys is the magnitude of the effort required to respond to discovery requests. On one hand, they're faced with document requests fashioned by opposing counsel so fearful of missing something important that they are massively overinclusive: they ask for each, every, and all documents and any form of electronic media that involves, concerns, or otherwise relates to 30, 50, 100, or more discrete topics. On the other hand, the attorney must reconcile this task of preserving, identifying, collecting, processing, analyzing, reviewing, and producing ESI in a manner that complies with the applicable discovery laws or case-specific discovery orders... all under what may be a modest budget approved by the client. This is where experience (or guidance from an experienced attorney), as well as a good eDiscovery technology provider, can be a huge help. The principle that underlies a solution to the conundrum of managing an overly broad discovery request on a limited budget is proportionality. Emphasizing this principle was a major focus of the 2015 amendments to the FRCP.

Got it.
I think the logical follow-up question is: how can attorneys attain "proportionality" in the face of ridiculous discovery requests (while also not exceeding the limited amount the client is prepared to spend)?

The key to balancing these conflicting demands is insisting upon proportionality early and often. The principle needs to be addressed at a granular level, with a robust understanding of the client's data that will be the subject of opposing counsel's discovery requests. For example, the number of custodians from whom data should be collected should not be a laundry list of everyone who might have knowledge about the issues in the case. Rather, counsel should focus on the key players and how much data each has. The volume of data that counsel can afford to collect, process, analyze, review, and produce should depend largely on the litigation budget, which in turn should generally depend on the amount in controversy. There are exceptions to this rule of thumb, but this approach to proportionality needs to be raised during the initial meetings of counsel, in advance of the first case management order. If the case is one where the general rule does not apply (e.g., a matter of public interest), the client should be informed immediately, because the cost of litigation is likely to be disproportionate to its economic value and the client may prefer to have some other entity litigate the issue. An experienced attorney should be involved in this meet-and-confer process because the results of these early efforts are likely to create the foundation and guardrails for the remainder of the case. Any issues left to future negotiation create the potential for costs to balloon in unexpected ways.

Can you dive a bit deeper into proportionality at different phases of the discovery process? Is there anything else attorneys can do to keep costs from ballooning before data is collected?

As I alluded to a moment ago, one key to controlling scope and cost is to negotiate a limited number of custodians proportional to the value of the case. In larger cases, it will be appropriate to create tiers of custodians and limit progression into the lower tiers to those instances where opposing counsel makes a good-faith showing that additional discovery is necessary based on identifiable gaps in the information, rather than speculation about what might be found if more discovery is permitted. If opposing counsel doesn't agree to a limited number of custodians or to staging discovery in larger cases, counsel would be well-advised to seek a case management order or a protective order to keep the scope of discovery proportional to the value of the case. To be successful, an attorney and his or her technology provider will have to understand the data in the client's possession and provide metrics and costs associated with the alternative approaches to discovery.

Great advice. How about once data is collected and analysis has begun? How can attorneys keep costs within budget once they've got the data into an eDiscovery platform?

Attorneys should continue to budget proportionally throughout the case. This budget will obviously include the activities identified by the Electronic Discovery Reference Model (EDRM).
The EDRM provides a roadmap for responding to opposing parties' discovery requests: identifying the documents needed to make our case, regardless of whether opposing parties requested them; winnowing those documents to a subset for use in deposition preparation, drafting potentially dispositive motions, and preparing for mediation; and, if necessary, selecting documents for the trial exhibit list. The EDRM was designed to help attorneys identify documents that are reasonably calculated to lead to the discovery of admissible evidence or that relate to claims and defenses asserted in the case. In a case with 100,000 documents collected, that could easily be 10,000 to 15,000 documents. The documents considered for use in depositions, law and motion, or mediation will be a small fraction of that amount, and will include a similar culling of the documents produced by other parties and third parties. Only a fraction of those will make it onto the trial exhibit list, and fewer still will be presented to the trier of fact.

Responding to discovery and preparing the case for resolution are two very different tasks, and the attorney's budget must accommodate both activities. Monies must be reserved for other written discovery requests, both propounding them and responding to them, and for depositions. Because the per-GB prices for these activities are predictable, an attorney and technology provider should be able to readily determine how much information they can afford to collect and put into the eDiscovery workflow. Counsel needs to be ready to share this information with opposing parties during the early meetings of counsel.

But what happens when there is just a legitimately large amount of data, even after applying all the proportionality tactics you described earlier?

Counsel should only agree to look at more data than the parties originally agreed to if opposing counsel can show good cause to incur that time and expense. If more data needs to be analyzed, the only reliable way to avoid busting the budget is to use AI to build on the document classification that occurred during the initial round of eDiscovery activities. Counsel should take advantage of statistically defensible sampling to determine the prevalence of responsive documents in the data and cut off analysis and review when a defensible rate of recall has been achieved. The same technologies should be employed to identify documents that should not be produced, e.g., those that are privileged or contain trade secrets unrelated to the pending litigation or other data exempt from discovery – enabling counsel to reduce the amount of expensive attorney review required on a given case.

By proactively managing eDiscovery proportionality and leveraging all the efficiency that modern eDiscovery platforms provide (either by developing the necessary expertise or associating with an attorney who has it), any lawyer will be able to handle any discovery request in a cost-effective manner.

You mentioned choosing a database and legal technology provider. Do you have any advice for attorneys on how to choose the best one to meet their needs?

I won't weigh in on specifics, but I will say this: do the necessary research or consult with someone who has. In addition to investigating the various technologies available, counsel must become familiar with the variety of pricing models for delivery of the technologies needed to respond to eDiscovery requests.
Instead of treating every case as an a la carte proposition, consider moving to a subscription-based, self-service eDiscovery platform. This allows counsel savvy with the technology to control his or her cases within the platform and manage costs in a much more granular way than is possible when using a full-service eDiscovery technology provider, without incurring additional licensing, hosting, and technology fees. With a self-service solution, a provider hosts the data within its own cloud (and thus takes on the data security, hosting, and technology fees), while counsel gains access to the current versions of eDiscovery tools to help manage the client's costs. It also allows counsel to customize the platform and automate workflows to meet his or her own specific needs, so that no one is spending time and money re-inventing the wheel with every new case. A self-service solution also comes with the added benefit of being immediately available from any web browser and gives counsel the ability to transfer data into the platform at the touch of a button. (This means that when a prospective client asks whether you have a solution to handle the eDiscovery component of a case, the answer will always be an immediate "yes.")

What happens if counsel does not feel ready to take on all eDiscovery responsibilities in a "self-service" model?

If counsel is not ready to take on full responsibility for managing the eDiscovery process but still wants the cost savings of a self-service model, find a technology provider that offers project management services and guidance that can act as training wheels until counsel is ready to navigate the process without assistance. There are also service providers who offer flexible arrangements, where large matters can be handled by their full-service team while smaller matters or investigations remain "self-service" and are handled directly by counsel.

Those are great tips, Gordon – I couldn't have said it better myself. Any last thoughts for attorneys related to discovery and leveraging eDiscovery technology?

Thank you, it's been a pleasure. As for last thoughts, I think it would be this: in 2021, no attorney should fear responding to eDiscovery requests. Attorneys who still have that fear need to start asking, "If the data exists electronically, can I use technology to extract what I need less expensively than if I put eyeballs on every document?" The answer is almost always, "Yes." The next question those attorneys should ask is, "How do I go about extracting the information I need at the lowest possible cost?" The answer to that question may be unique to each attorney, and this is where I recommend doing some up-front research and preparation to identify the best technology solution before you are looking down the barrel of a tight discovery deadline.

Ultimately, finding the right technology solution will enable you to meet every discovery request with confidence and grow your book of business. If you would like to discuss this topic further, please reach out to Casey at cvanveen@lighthouseglobal.com and/or Gordon Calhoun at Gordon.Calhoun@lewisbrisbois.com.
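The statistically defensible sampling Calhoun describes can be made concrete with a short sketch. The Python below estimates the prevalence of responsive documents from an attorney-coded random sample using a normal-approximation confidence interval, then uses that estimate to gauge the recall achieved so far. All counts are hypothetical placeholders; real validation protocols should be designed with a statistician or experienced eDiscovery consultant.

```python
import math

def prevalence_estimate(hits: int, sample_size: int, z: float = 1.96):
    """Point estimate and ~95% normal-approximation confidence interval
    for the rate of responsive documents in the full collection."""
    p = hits / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: attorneys coded a 1,500-document random sample and
# found 180 responsive documents.
p, lo, hi = prevalence_estimate(180, 1500)
print(f"prevalence ~ {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# With prevalence estimated, recall achieved so far is the number of
# responsive documents already found divided by the estimated total.
collection = 500_000   # documents in the collection
found = 45_000         # responsive documents located during review to date
estimated_total = p * collection
print(f"estimated recall ~ {found / estimated_total:.1%}")
```

Numbers like these are what let counsel defend a review cutoff: once recall clears the agreed-upon threshold, further review yields diminishing returns at growing expense.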
April 22, 2021
Blog

Navigating the Intersections of Data, Artificial Intelligence, and Privacy

While the U.S. is figuring out privacy laws at the state and federal level, artificial and augmented intelligence (AI) is evolving and becoming commonplace for businesses and consumers. These technologies are driving new privacy concerns. Years ago, consumers feared a stolen Social Security number. Now, organizations can uncover political views, purchasing habits, and much more. The repercussions of data are broader and deeper than ever.

Lighthouse (formerly H5) convened a panel of experts to discuss these emerging issues and ways leaders can tackle their most urgent privacy challenges in the webinar "Everything Personal: AI and Privacy."

The panel featured Nia M. Jenkins, Senior Associate General Counsel, Data, Technology, Digital Health & Cybersecurity at Optum (UnitedHealth Group); Kimberly Pack, Associate General Counsel, Compliance, at Anheuser-Busch; Jennifer Beckage, Managing Director at Beckage; and Eric Pender, Senior Director at Lighthouse (formerly with H5). It was moderated by Sheila Mackay, Managing Director at Lighthouse (formerly with H5).

While the regulatory and technology landscape continues to change rapidly, the panel highlighted key takeaways and solutions for leaders seeking to protect and manage sensitive data:

Build, nurture, and utilize cross-functional teams to tackle data challenges
Develop robust and well-defined workflows for working with AI technology
Understand the type and quality of data your organization collects and stores
Engage with experts and thought leadership to stay current with evolving technology and regulations
Collaborate with experts across your organization to learn the needs of different functions and business units and how they can deploy AI
Enable your company's innovation and growth by understanding the data, technology, and risks involved with new AI

Develop collaboration, knowledge, and cross-functional teams

While addressing challenges related to data and privacy certainly requires technical and legal expertise, the need for strong teamwork and knowledge sharing should not be overlooked. Nia Jenkins said her organization utilizes cross-functional teams, which pull together privacy, governance, compliance, security, and other subject matter experts to gain a "line of sight into the data that's coming in and going out of the organization."

"We also have an infrastructure where people are able to reach out to us to request access to certain data pools," Jenkins said. "With that team, we are able to think through, is it appropriate to let that team use the data for their intended purpose or use?"

In addition to collaboration, well-developed workflows are paramount. Kimberly Pack explained that her company has a formalized team that comes together on a bi-monthly basis and defined workflows that are improving daily. She emphasized that it all begins with "having clarity about how business gets done."

Jennifer Beckage highlighted the need for an organization to develop a plan, build a strong team, and understand the type and quality of the data it collects before adopting AI. Businesses have to address data retention, cybersecurity, intellectual property, and many other potential risks before taking full advantage of AI technology.

Engage with internal and external experts to understand changing regulations

Keeping up with a dynamic regulatory landscape requires expanding your information network. Pack was frank that it's too much for one person to learn alone.
She relies on following law firms, becoming involved in professional organizations and forums, and connecting with privacy professionals on LinkedIn. As she continually educates herself, she creates training for various teams at her organization, including human resources, procurement, and marketing.

"Really cascade that information," said Pack. "Really try to tailor the training so that it makes sense for people. Also, try to have tools and infographics, so people can use it, pass it along. Record all your trainings because everyone's not going to show up."

The panel discussed how their companies are using AI and whether there's any resistance. Pack noted her organization has carefully taken advantage of AI for HR, marketing, enterprise tools, and training. She noted that providing your teams with information and assistance is key to comfort and adoption.

"AI is just a tool, right?" Pack said. "It's not good, it's not bad." The privacy team conducts a privacy impact assessment to understand how the business can use the technology. Then her team places any necessary limitations and builds controls to ensure the technology is used ethically. Pack and Jenkins both noted that companies must proactively address potential bias and not allow automated decision-making.

Evaluate the benefits and risks of AI for your organization

The panel agreed organizations should adopt AI to remain competitive and meet consumer expectations. Pack pointed out that the purpose of AI technology is to learn; businesses adopting it now will see the benefits sooner than those that wait.

Eric Pender noted advanced technologies are becoming more common for particular uses: cybersecurity breach response; production of documents, including privilege review and identifying Personally Identifiable Information (PII); and defensible disposal. Many of these tasks have tight timelines and require efficiency and accuracy, which AI provides.

The risks of AI depend on the nature of the specific technology, according to Beckage. It's each organization's responsibility to perform a risk assessment, determine how to use the technology ethically, and perform audits to ensure the technology is working without unintended consequences.

Facilitate innovation and growth

It is also important to remember that in-house and outside counsel don't have to be "dream killers" when it comes to innovation. Lawyers with a good understanding of their company's data, technology, and ways to mitigate risk can guide their businesses in taking advantage of AI now and years down the road.

Pack encouraged compliance professionals to enjoy the problem-solving process. "Continue to know your business. Be in front of what their desires are, what their goals are, what their dreams are, so that you can actively support that," she said.

Pender said companies are shifting from a reactive approach to a proactive one, and advised that "data that's been defensively disposed of is not a risk to the company." Though implementing AI technology is complex and challenging, managing sensitive, personal data is achievable, and the potential benefits are enormous.

Jenkins encouraged the "four B's": be aware of the data; be collaborative with your subject matter experts; be willing to learn and ask tough questions of your team; and be open to learning more about the product, what's happening with your business team, and privacy in an ever-changing landscape.

Beckage closed out the webinar by warning organizations not to reinvent the wheel.
While it's risky to copy another organization's privacy policy word for word, organizations can learn from the people in the privacy space who are doing it well.
June 28, 2021
Blog

New Rules, New Tools: AI and Compliance

We live in the era of Big Data. The exponential pace of technological development continues to generate immense amounts of digital information that can be analyzed, sorted, and utilized in previously impossible ways. In this world of artificial intelligence (AI), machine learning, and other advanced technologies, questions of privacy, government regulation, and compliance have taken on new prominence across industries of all kinds.

With this in mind, H5 recently convened a panel of experts to discuss the latest compliance challenges organizations are facing today, as well as ways AI can be used to address those challenges. Key topics covered in the discussion included:

- Understanding use cases involving technical approaches to data classification
- Exploring emerging data classification methods and approaches
- Setting expectations within your organization for the deployment of AI technology
- Keeping an AI solution compliant
- Preventing the introduction of bias into your AI models

The panel included Timia Moore, strategic risk assessment manager for Wells Fargo; Kimberly Pack, associate general counsel of compliance for Anheuser-Busch; Alex Lakatos, partner at Mayer Brown; and Eric Pender, engagement manager at H5. The conversation was moderated by Doug Austin, editor of the eDiscovery Today blog.

Compliance Challenges Organizations Are Facing Today

The rapidly evolving regulatory landscape, vastly increased data volumes and sources, and stringent new privacy laws present unique challenges to today's businesses. Whereas in the recent past it may have seemed like regulatory bodies were often in a defensive position, forced to play catch-up as powerful new technologies took the field, these agencies are increasingly using their own tech to go on the offensive.

This is particularly true in the banking industry and the broader financial sector. "With the advent of fintech and technology like AI, regulators are moving from this reactive mode into a more proactive mode," said Timia Moore, strategic risk assessment manager for Wells Fargo. But the trend is not limited to banking and finance. "It's not industry specific," she said. "I think regulators are really looking to be more proactive and figure out how to identify and assess issues, because ultimately they're concerned about the consumer, which all of our companies are and should be as well."

Indeed, growing consumer demand for increased privacy and better protection of personal data is a key driver of new regulations around the world, including the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) and various similar laws in the United States. It is also one of the biggest compliance challenges facing organizations today, as cyber attacks are now faster, more aggressive, and more sophisticated than ever before.

Other challenges highlighted by the panel included:

- Siloed departments that limit communication and visibility within organizations
- A dearth of subject matter expertise
- The possibility of simultaneous AI requests from multiple regulatory agencies
- A more remote and dispersed workforce due to the pandemic

Use Cases for AI and Compliance

To meet these challenges head on, companies are increasingly turning to AI to help them comply with new regulations. Some companies are partnering with technology specialists to meet their AI needs, while others are building their own systems. Anheuser-Busch is one company using an AI system to meet compliance standards.
As Kimberly Pack, associate general counsel of compliance for Anheuser-Busch, described it: "One of the things that we're super proud of is our proprietary AI data analyst system BrewRight. We use that data for Foreign Corrupt Practices Act compliance. We use it for investigations management. We use it for alcohol beverage law compliance."

She also pointed out that the BrewRight AI system is useful for discovering internal malfeasance. "Just general employee credit card abuse…we can even identify those kinds of things," Pack said. "We're actively looking for outlier behavior, strange patterns, or new activity. As companies, we have this data, and so the question is how are we using it, and artificial intelligence is a great way for us to start being able to identify and mitigate some risks that we have."

AI can also play a key role in reducing the burden of alerts related to potential compliance issues or other kinds of wrongdoing. The trick, according to Alex Lakatos, partner at Mayer Brown, is tuning the system to the right level of sensitivity, then letting it learn from there. "If you set it to be too sensitive, you're going to be drowned in alerts and you can't make sense of them," Lakatos said. "You set it too far in the other direction, you only get the instances of the really, really bad conduct. But AI, because it is a learning tool, can become smarter about which alerts get triggered."

Lakatos also pointed out that AI cannot provide the kind of explanations for illegal behavior that regulators usually want to see. "AI doesn't work on a theory," he said. "AI just works on correlation." That's where having smart people working in tandem with your AI comes in handy. "Regulators get more comfortable with a little bit of theory behind it."

H5 has identified at least a dozen compliance areas where AI can assist, including key document retention and categorization, personally identifiable information (PII) location and remediation, first-line reviews of alerts, and policy applicability and risk identification.

Data Classification, Methods, and Approaches

There are various methods of and approaches to data classification, including machine learning, linguistic modeling, sentiment analysis, name normalization, and personal data detection. Choosing the right one depends on what companies want their AI to do.

"That's why it's really important to have a holistic, program-management-style approach to this," said Eric Pender, engagement manager at H5. "Because there are so many different ways that you can approach a lot of these problems."

Supervised machine learning models, for instance, ingest data that has already been categorized, which makes them well suited to prediction and predictive modeling. Unsupervised machine learning models, by contrast, take in unlabeled, uncategorized information and excel at recognizing patterns and structure in data (see the short sketch below).

"Ultimately, I think this comes down to the question of what action you want to take on your data," Pender said. "And what version of modeling is going to be best suited to getting you there."
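As a minimal illustration of the distinction Pender describes, the sketch below contrasts the two model families using scikit-learn. It is not BrewRight or any panelist's system; the documents, labels, and query are hypothetical.

```python
# Supervised vs. unsupervised data classification, in miniature.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "wire transfer approved by compliance team",
    "gift card sent to government official, do not log",
    "quarterly expense report attached for review",
    "cash payment requested, keep this off the books",
]
labels = [0, 1, 0, 1]  # pre-categorized data: 0 = routine, 1 = escalate

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Supervised: learns from labeled examples, then predicts new items.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["payment to official with no receipt"])))

# Unsupervised: no labels at all; groups documents by pattern and structure.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignments, not business categories
```

The design choice mirrors Pender's point: the supervised model answers "which category is this?", while the unsupervised model only answers "which documents look alike?", leaving the categories to be interpreted afterward.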
Setting Expectations for AI Deployment

Once you've determined the type of data classification that best suits your needs, it's crucial to set expectations for the AI deployment within your company. This process includes third-party evaluation, procurement, testing, and data processing agreements. Buying an off-the-shelf solution is a possibility, though some organizations, especially large ones, may have the resources to build their own. It's also possible to create a solution that features elements of both. In either case, obtaining C-suite buy-in is a critical step that should not be overlooked. And to maintain trust, it's important to properly notify workers throughout the organization and remain transparent throughout the process.

Allowing enough time for a proper proof-of-concept evaluation is also key. When it comes to creating a timeline for deploying AI within an organization, "it's really important for folks to be patient," according to Pender. "People who are new to AI sometimes have this perception that they're going to buy AI and they're going to plug it in and it just works. But you really have to take time to train the models, especially if you're talking about structured algorithms and you need to input classified data."

Education, documentation, and training are also key aspects of setting expectations for AI deployment. Bear in mind that, at its heart, implementing an AI system is a form of change management.

"Think about your organization and the culture, and how well your employees or impacted team members receive change," said Timia Moore of Wells Fargo. "Sometimes—if you are developing that change internally, if they're at the table, if they have a voice, if they feel they're a meaningful part of it—it's a lot easier than if you just have some cowboy vendor come in and say, 'We have the answer to your problems. Here it is, just do what we say.'"

Keeping AI Solutions Compliant and Avoiding Bias

The last area the panel discussed was how to keep the AI solution itself compliant and free of bias once deployed. Best practices include ongoing monitoring of the system, A/B testing, and mitigating attacks on the AI model.

It's also important to remember that AI systems are inherently dependent on their training data. These systems are only as good as their inputs, so it's crucial to make sure biases aren't baked into the AI from the beginning. And once the system is up and running—and learning—it's important to check in on it regularly; a simple version of such a check is sketched at the end of this article.

"There's an old computer saying, 'Garbage in, garbage out,'" said Lakatos. "The thing with AI is people have so much faith in it that it has become more of 'garbage in, gospel out.' If the AI says it, it must be true…and that's something to be cautious of."

In today's digital world, AI systems are becoming more and more integral to compliance and a host of other business functions. Educating yourself and making sure your company has a plan for the future are essential steps to take right away.

The entire H5 webcast, "New Rules, New Tools: AI and Compliance," can be viewed here.
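To make the monitoring point concrete, here is a minimal sketch of one ongoing bias check: comparing a model's alert rate across two groups. The flag values and the 0.8 threshold (the rough "four-fifths" screen borrowed from US employment guidance) are illustrative assumptions, not something the panel prescribed.

```python
def alert_rate(flags):
    """Fraction of items the model flagged (1 = alert)."""
    return sum(flags) / len(flags)

# Hypothetical model outputs for the same period, split by group.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]
group_b = [1, 1, 0, 1, 1, 0, 1, 1]

rate_a, rate_b = alert_rate(group_a), alert_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"alert-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # rough screen; below this, dig into training data and inputs
    print("disparity detected: review model inputs and labels")
```

Run regularly, a check like this turns "check in on it" from a slogan into a log entry a compliance team (or a regulator) can inspect.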
December 14, 2021
Blog

Minimizing Self-Service eDiscovery Software Tradeoffs: 3 Tips Before Purchasing

Legal professionals often take for granted that the eDiscovery software they leverage in-house must come with capability tradeoffs (i.e., if the production capability is easy to use, then the analytics tools are lacking; if the processing functionality is fast and robust, then the document review platform is clunky and hard to leverage; and so on).

The idea that these tradeoffs are unavoidable may be a relic of the history of eDiscovery. The discovery phase of litigation didn't involve "eDiscovery" until the 1990s and early 2000s, when the dramatic increase in electronic communication led to larger volumes of electronically stored information (ESI) within organizations. This gave rise to eDiscovery software designed to help attorneys and legal professionals process, review, analyze, and produce ESI during discovery. Back then, these software platforms were solely hosted and handled by technology providers that weren't yet focused entirely on the business of eDiscovery. Because both the software and the field of eDiscovery were new, the technology often came with a slew of tradeoffs. At the time, attorneys and legal professionals were just happy to have a way to review and produce ESI in an organized fashion, and so accepted the tradeoffs as a necessary evil.

But eDiscovery technology, as well as legal professionals' technological savvy, has advanced light years beyond where it was even five years ago. Many firms and organizations now have the knowledge and staff needed to move to a self-service eDiscovery model for some or all of their matters, and eDiscovery technology has advanced enough to allow them to do so. Unfortunately, despite these advancements, the tradeoffs so inherent in the original eDiscovery software still exist in some self-service eDiscovery platforms. Today, these tradeoffs often occur when technology providers attempt to develop all the technology required in an eDiscovery platform themselves. The eDiscovery process requires multiple technologies and services to perform drastically different and overlapping functions, making it nearly impossible for one company to design the best technology for each and every eDiscovery function, from processing to review to analytics to production.

To make matters worse, the ramifications of these tradeoffs are much wider than they were a decade ago. Datasets are much larger and more diverse than ever before, meaning that technological gaps that cause inefficiency or poor work product will skyrocket eDiscovery costs, amplify risk, and create massive headaches for litigation teams. But because these types of tradeoffs have existed in one form or another since the inception of eDiscovery, legal professionals still tend to accept them without question.

Rest assured, best-in-class technology now exists for each eDiscovery function. The trick is identifying the functionality most important to your firm or organization, and then selecting a self-service eDiscovery platform that ties the best technology for those functions together under one seamless user interface.

Below are three key steps to prepare for the research and purchasing process, which will help drastically minimize the tradeoffs many attorneys have grown accustomed to in self-service eDiscovery technology. Before you begin to research eDiscovery software, you must fully understand your firm or organization's needs.
This means finding out which eDiscovery technology capabilities, functionality, and features are most important to all relevant stakeholders. To do so:

1. Talk to your legal professionals and lawyers about what they like and dislike about the technology they currently use. Don't be surprised if users have different (or even opposing) positions depending on how they use the software. One group may want a review platform that is scaled down without a lot of bells and whistles, while another group relies heavily on advanced analytics and artificial intelligence (AI) capabilities. This is common, especially among groups that handle vastly different matter types, and can actually be a valuable consideration during the evaluation process. In the scenario above, for instance, you know you will need eDiscovery software that can flex and scale from the smallest matter to the largest, and that can create different templates for disparate use cases. In this way, you can ensure you purchase one self-service eDiscovery platform that meets the diverse needs of all your users.

2. Communicate with IT and data security teams to ensure that any platform conforms with their requirements. These two groups often end up being pulled into discussions too late, once purchasing decisions have already been made. This is unfortunate, as they are integral to the implementation process, as well as to ensuring that all software is secure and meets all applicable data security requirements. Data security in eDiscovery is non-negotiable, so be sure the software you select meets your firm or organization's data security requirements before you get too far along in the purchasing process.

3. Create a prioritized list of the capabilities, functionality, and attributes most important to all stakeholders once you've gathered feedback. Having a defined list of must-haves and desired capabilities will make it easier to vet potential software and ultimately help you identify a platform that fits the needs of all relevant stakeholders.

Conclusion

With today's advanced technology, attorneys and legal professionals should not have to deal with technology gaps in their self-service eDiscovery software, just as law firms and organizations should not have to blindly accept the higher eDiscovery costs and risks those gaps cause downstream. Powerful, best-in-class technology for each step of the eDiscovery process is out there. The steps above will help you find a self-service eDiscovery solution that ties all the functionality you need under one seamless, easy-to-use interface.

For more detailed advice about navigating the purchasing process for self-service eDiscovery software, download our self-service eDiscovery Buyer's Guide here.
December 8, 2020
Blog

Legal Tech Trends from 2020 and How to Prepare for 2021

Legal tech was no match for 2020. Everyone's least favorite year wreaked havoc on almost every aspect of the industry, from data privacy upheavals to a complete change in the way employees work and collaborate with data.

With most organizations shifting to a remote work environment in the early spring of 2020, we saw an acceleration of the already growing trend of cloud-based collaboration and video-conferencing tools in workplaces. This, in turn, means we are seeing an increase in eDiscovery and compliance challenges related to data generated from those tools: challenges like collecting and preserving the modern attachments and chats generated by tools like Microsoft Teams, as well as compliance challenges around regulating employee use of those tools.

However, while collaboration tools can pose challenges for legal and compliance teams, they certainly helped employees continue to work and communicate during the pandemic, perhaps even better, in some cases, than when everyone was working from traditional offices. Collaboration tools were extremely helpful, for example, in facilitating communication between legal and IT teams in a remote work environment, which proved increasingly important as the year went on. The irony is that, for all the data challenges these tools pose for legal and IT teams, they are increasingly necessary to keep those two departments working together at the same virtual table in a remote environment.

With all these new sources and ways to transfer data, no recap of 2020 would be complete without mentioning the drastic changes to data privacy regulations that happened throughout the year. From the passing of new California data privacy laws to the invalidation of the EU-US Privacy Shield by the Court of Justice of the European Union (CJEU) this past summer, companies and law firms are grappling with an ever-increasing tangle of region-specific data privacy laws, each with its own set of severe monetary penalties for violations.

How to Prepare for 2021

The key takeaway, sadly, is that 2020 problems won't be going away in 2021. The industry will continue to evolve rapidly, and organizations will need to be prepared for that.

- Organizations will need to continue to stay on top of data privacy regulations, as well as understand how their own data (or their clients' data) is stored, transferred, used, and disposed of.
- Remote working isn't going to disappear. In fact, most organizations appear to be heading toward a "hybrid" model, where employees split time working from home, the office, and cafes or other locations. Organizations should prepare for the challenges that may pose within the compliance and eDiscovery spaces.
- Remote working will change employee recruiting within the legal tech industry, as employers realize they don't have to confine talent searches to individual locations. Organizations should balance the flexibility of expanding their search for the best talent against their need to have employees in the same place at the same time.
- Prepare for an increase in litigation and a surge in eDiscovery workload as courts reopen and COVID-related litigation reaches the discovery phase over the next few months.
- AI and advanced analytics will become increasingly important as data continues to explode; watch for new advances that can make document review more manageable.
- With the continuing proliferation of data, organizations should focus on their information governance programs to keep data (and costs) in check.

To discuss this topic further, please feel free to reach out to me at SMoran@lighthouseglobal.com.
August 23, 2022
Blog

Legal's Balancing Act: Risk, Innovation, and Advancing Strategic Priorities

As legal teams expand their responsibilities and business impact throughout their organizations, legal professionals must strike a delicate balance in their roles: be better partners while managing risk.

To tease out this complex and dynamic relationship, Megan Ferraro, Associate General Counsel of eDiscovery and Information Governance at Meta, recently joined as a guest on Law & Candor. Highlights from that conversation are below.

The legal function's bigger role

Legal departments are playing a more significant part in strategy and innovation because the role of in-house counsel has changed greatly in the past few decades. There has been a considerable shift in forward-thinking companies from viewing legal as a blocker to viewing it as a strategic partner.

Successful legal teams are partnering internally to ensure attorneys across their organization get early signals to address potential inquiries in litigation or investigations. Additionally, companies are now hiring in-house teams to fill roles where those legal partners can identify and assess legal risk early on. In-house counsel have become advocates for why legal deserves a seat at the table at all company levels, which contributes to the overall success of the business.

A great example of how legal is partnering with other parts of the organization to drive innovation is the role of product counsel at technology companies. The most effective product counsel develop a deep understanding of product goals early, which helps them identify and address legal issues more quickly and accurately. By working closely with the product team through development, updates, and deployment, they also serve as a conduit between legal and product teams to help advance projects and address potential risks.

Critical risks for legal teams today

One of the most significant challenges for in-house legal teams is keeping up with the pace of their organization's growth, whether that means developing products and services, forging unique partnerships, or adopting new technology and software.

Often, business teams do not appreciate how even the slightest difference in facts can lead to different outcomes under the law. Managing the business's expectations about the time it takes to do legal analysis is extremely important. It is normal to take the time to think through these challenging issues; an important adage for the business to remember is that the law is not "Minute Rice."

The balancing act between risk and innovation

Weighing risk and innovation requires keeping pace with changes throughout the organization, including pivots in strategic priorities, alongside a variety of stakeholders. Staying ahead of these developments and allowing counsel enough time to evaluate potential impacts is key to understanding whether the benefits are worth the risk, and if not, how to adjust a business plan accordingly.

Along with providing the guidance stakeholders need to assess risk and make decisions, legal teams also frequently manage, together with IT departments, how organizational data is stored and accessed. If other teams throughout the business do not have the information they need, they cannot move as fast to help the company innovate.
How long to keep data, what format it is in, and who can access it are all questions that can have a huge impact on innovation.

Cross-functional collaboration

In-house counsel are increasingly working with other leaders in their organizations to inform strategic decisions, but having a seat at the table requires listening and staying connected to "clients" within the business. Strategic priorities can change often, especially in a fast-paced environment. Knowing not just what these priorities are but how the business interprets them, and what success means to the company, makes for legal partners best positioned to balance risk and support innovation.

To listen to the full conversation and hear more stories from the legal technology revolution, check out Law & Candor.
August 26, 2020
Blog

Legal Tech Trends to Watch

We are now past the midpoint of 2020, which means we are more than halfway through the first year of a brand new decade. This midway point is a great time to look at the hottest trends in the legal tech world and predict where those trends may lead us as we move further into the new decade.

If we were evaluating future trends in legal tech during a normal year, there might be one or two uncertainties or prominent events from the first half of the year to take into account. Maybe a shift in global data safety laws or a change to the Federal Rules of Evidence. But, as I'm sure we're all tired of reading, 2020 has not been a normal year ("the new normal," "these uncertain times," "these unprecedented events," etc.). No matter how you phrase it, we can all agree that 2020 has been… unpredictable. Or to be a bit less understated: the first six months of 2020 have drastically changed how many corporations and law firms function on a day-to-day basis, and industry leaders are predicting that many of those changes will have a lasting effect. For example, a recent Gartner survey of company leaders from the HR, legal and compliance, finance, and real estate industries showed that 82% of respondents plan to allow employees to continue working remotely in some capacity once employees are allowed back in the office, while close to half responded that they will allow employees to work remotely full time.

So what does that mean for the legal tech industry? While the world around us has changed dramatically due to the events of 2020, many of those changes actually dovetail quite nicely into where legal tech was already headed. In this article, we will look at the latest trends in legal tech and how 2020, in all its chaos, has affected them.

SaaS self-service eDiscovery: The growing adoption of cloud services is leading us to a unique hybrid approach to managing eDiscovery programs: SaaS self-service eDiscovery solutions. This new subscription-based approach gives law firms and corporate legal teams the ability to take charge of their own fates by bringing their eDiscovery program in house, while leaving much of the security risk, cost, and IT burden to a reputable, secure vendor that can house the data in a private cloud or within its own data centers.

The benefits of controlling your own eDiscovery program in house are obvious. Legal teams can control costs and access their data whenever and wherever they need to, without the expense and hassle of going through a middle man. It also gives legal teams more control over their own costs, deadlines, and workflows, with the ability to fluidly scale up or down depending on case need. The self-service subscription approach is also unique in that it leaves the burden and risk of creating and managing an entire IT data storage infrastructure with the vendor. A security-minded vendor with SOC 2 and ISO 27001 certifications can house data in a private cloud or its own data center, providing a completely secure environment without the overhead and risk of managing that data in house. A subscription service may also come with the reassurance that, if a project or timeline becomes more burdensome than expected, the in-house team can seamlessly hand off a workflow, or an entire project, to the vendor.

In 2020, a SaaS self-service solution has the added benefit of being available in every location around the world, at any time.
If a worldwide pandemic has taught us anything, it is that traveling to multiple locations throughout the world to set up data centers for the specific needs of a case or client is no longer a feasible solution. Housing and accessing data in the cloud does not require abiding by global travel restrictions or mandatory quarantines. A SaaS self-service model where data is stored in the cloud allows for global expansion without concern for pandemics, natural disasters, or political uncertainty.

Big Data Analytics: Big data analytics and technology-assisted review (TAR) are certainly not new ideas in 2020. The technology and tools have existed for years, and the legal industry has slowly been adopting them. (I say "slowly" in contrast to how fast these tools are developed and adopted outside the legal field.) The need to find reliable ways to comb through massive amounts of data in the eDiscovery and compliance arenas will only grow, and we can expect the technology to keep improving and become even more reliable.

One could argue that the biggest hindrance to big data analytics in the legal world is not the advancement of the technology, but rather the ability and willingness of many lawyers and courts to adopt that technology as a defensible, necessary legal tool in the modern world of big data. The legal field is notoriously slow to adopt new technology. As a personal example, I clerked for a prominent, incredibly smart criminal defense attorney who still used carbon paper to make copies of important court filings. This occurred during the same year that the third season of Lost aired (or the same year that the first season of Mad Men premiered; pick your reference. Either way, not that long ago). And every law firm is rife with stories of the old-school partner who holes up in the firm library (the existence of which could also be an example of my point, in and of itself) because she doesn't believe in online legal research. While the practice of law is steeped in an awe-inspiring mix of tradition and history, it can also be frustratingly slow to build on that tradition because it refuses to use a copier. Even Don Draper had a copier by the second season.

However, if we can say one positive thing about 2020, it is that the last six months have pushed the legal world into the technological future more than any other period to date. Almost every in-house counsel, law firm, and court across the globe has been forced to find a way to conduct business in a completely remote environment. This means that judges, law firms, and in-house counsel are facing the reality that the legal world needs to rely on and adapt to technology in order to survive. One hopes this new reality leads to a more robust adoption of technological advancement in the legal world in general, and, hopefully, a shift away from the reactionary relationship the legal industry always seems to have with technology. Data volumes will only continue to explode, and there will come a time in the near future when it will not be defensible to tell a judge or a client that discovery may take years so that a team of 200 contract attorneys can look at each individual document that hits on a search term. Analytics will eventually be a requirement for a defensible eDiscovery program, and 2020 may be the year that pushes many in the legal field toward a more proactive approach to its adoption.
New sources of data (i.e., collaboration tools): Like big data analytics, online collaboration tools like Teams and Slack are not new in 2020, but this year has certainly pushed their use to the forefront of many companies' day-to-day business. It seems like new collaboration tools arise every month, and companies are increasingly pushing employees to utilize them. Organizations are realizing the value of these tools in a post-COVID environment, where online collaboration is not only preferable but absolutely critical. Not to repeat one of 2020's greatest memes, but I'm sure we've all seen the adage that this is the year we realized that not only could that meeting have been an email, that email could have been an instant message. The data actually bears the theory out: Microsoft, for example, found that chat messages within Microsoft Teams meetings increased more than tenfold from March 1 to June 1.

The widespread use of these tools, in turn, generates more and more unique data that needs to be accounted for during an eDiscovery or compliance event. Going forward, organizations will need to ensure they know which tools their employees or contractors are using, what data those tools generate, and how to defensibly collect, process, and review that data in the event of a lawsuit or investigation (or retain a vendor who can guide them through that process). Which brings us to our final 2020 trend…

Continuous program update subscription services: Going hand in hand with the above, watch for eDiscovery programs and solutions that can manage the continuous delivery of program updates across all the applications and platforms organizations use to perform their work. Gone are the days when the same data collection or processing workflow could be used for years at a time and still be defensible. From iPhone iOS to Teams, systemic updates to work applications and platforms can now roll out on an almost weekly basis, and it is imperative that legal and compliance teams stay on top of those updates and adapt to them, to ensure that company information remains secure and that any data generated can be defensibly collected and processed when needed. In 2020 and beyond, look for technologically advanced eDiscovery subscription services that give companies the ability to prepare for and stay ahead of the never-ending stream of software updates.

To discuss this topic further, please feel free to reach out to me at SMoran@lighthouseglobal.com.
February 16, 2021
Blog

Legal Tech Innovation: The Future is Bright

Recently, I had the opportunity to (virtually) attend the first three days of Legalweek, the premier conference for those in the legal tech industry. Obviously, this year's event looked much different than past years, both in structure and in content. But as I listened to legal and technology experts talk about the current state of the industry, I was happily surprised that the message conveyed was not one of doom and gloom, as you might expect during a pandemic year. Instead, a more inspiring theme emerged for our industry: one of hope through innovation.

Just as we, as individuals, have learned hard lessons during this unprecedented year and are now looking toward a brighter spring, the legal industry has learned valuable lessons about how to leverage technology and harness innovation to overcome the challenges this year has brought. From working remotely in scenarios that previously would never have seemed possible, to recognizing the vital role diversity plays in the future of our industry, this year has forced legal professionals to adapt quickly, utilize new technology, and listen more to some of our most innovative leaders.

Below, I have highlighted the key takeaways from the first three days of Legalweek, as well as how to leverage the lessons learned throughout this year to bring about a brighter future for your organization or law firm.

"Human + Machine," not "Human vs. Machine"

Almost as soon as artificial intelligence (AI) technology started playing a role within the legal industry, people began debating whether machines could (or should) eventually replace lawyers. This debate often devolves into a simple "which is better: humans or machines" argument. However, if the last year has taught us anything, it is that the answers to social debates often require nuance and introspection, rather than a "hot take." The truth is that AI can no longer be viewed as some futuristic option that is only utilized in certain types of eDiscovery matters; nor should it be fearfully viewed as having the potential to replace lawyers in some dystopian future. Rather, AI has become essential to the work of attorneys and ultimately will be necessary to help lawyers serve their clients effectively and efficiently.1

Data volumes are growing exponentially year after year, so much so that soon even the smallest internal investigation will involve too much data to be effectively reviewed by human eyes alone. AI and analytics tools are now necessary to prioritize, cull, and categorize data in most litigations so attorneys can efficiently find and review the information they need. Moreover, advancements in AI technology now enable attorneys to quickly identify categories of information that previously required expensive linear review (for example, leveraging AI to identify privilege, protected health information (PHI), or trade secret data); a deliberately simple illustration of this kind of screening appears at the end of this article.

Aside from finding the needle in the haystack (or simply reducing the haystack), these tools can also help attorneys make better, more strategic counseling and business decisions. For example, AI can now be utilized to better understand an organization's entire legal portfolio, which in turn allows attorneys to make better scoping and burden arguments and to craft more informed litigation and compliance strategies.

Thus, the age-old debate over which is better, human or machine learning, is an outdated one.
Instead, the future of the legal industry is one where attorneys and legal professionals harness advanced technology to serve their clients proficiently and effectively.

Remote Working and Cloud-Based Tools Are Here to Stay

Of course, one of the biggest lessons the legal industry learned over the past year is how to work remotely and effectively. Almost every organization and law firm across the world was forced to quickly pivot to a more remote workforce, and most did so successfully, albeit while facing a host of new data challenges related to the move. However, as we approach the second year of the pandemic, it has become clear that many of these changes will not be temporary. In fact, the pandemic appears to have been an accelerator for trends already underway before 2020. For example, many organizations were already taking steps to move to a more cloud-based data architecture; the pandemic simply forced that transition to happen over a much shorter time frame to facilitate the move to a remote workforce.

This means that organizations and law firms must use the lessons learned over the last year to remain successful, and to overcome the new challenges raised by a more remote, cloud-based work environment. For example, many organizations implemented cloud-based collaboration tools like Zoom, Slack, Microsoft Teams, and Google Workspace to help employees collaborate remotely. However, legal and IT professionals quickly learned that while these tools are great for collaboration, many are not built with data security, information governance, or legal discovery in mind. The data generated by these tools is much different from traditional email, both in content and in structure. Conversations that used to happen around the water cooler or in an impromptu in-person meeting are now happening over Zoom or Microsoft Teams, and thus may be potentially discoverable during an investigation or legal dispute. Moreover, the data generated by these tools is structured significantly differently from traditional email (think of chat data, video data, and the dynamic "attachments" created by Teams). Organizations must therefore put rules in place to govern and manage these data sources from a compliance, data security, and legal perspective, while law firms must continue to learn how to collect, review, and produce this new type of data.

It will also be of growing importance for legal and IT stakeholders within organizations to collaborate, so that new tools can be properly vetted and data workflows put in place early. Additionally, organizations will need a plan to stay ahead of technology changes, especially when moving to a cloud-based environment where updates and changes can roll out weekly.
Attorneys should also consider technology training to stay up to date on the various technology platforms and tools their company or client uses, so they can continue to provide effective representation.

Information Governance is Essential to a Healthy Data Strategy

Related to the above, another key theme that emerged over the last year is that good information governance is now essential to a healthy company, and that it is equally important for attorneys representing organizations to understand how data is managed within those organizations.

The explosion of data volumes and sources, as well as the unlimited storage capacity of the cloud, means it is essential to have a strong and dynamic information governance strategy in place. In-house counsel should ensure they know how to manage and protect their company's data, including understanding what data is being created, where that data resides, and how to preserve and collect it when required. This is important not only from an eDiscovery and compliance perspective but also from a data security and privacy perspective. As more jurisdictions across the world enact competing data privacy legislation, it is imperative for organizations to understand what personal data they may be storing and processing, as well as how to collect it and effectively purge it in the event of a request by a data subject.

Also, as noted above, the burden of understanding an organization's data storage and preservation strategy does not fall solely on in-house counsel. Outside counsel must also understand their client's organizational data to make effective burden, scoping, and strategy decisions during litigation.

A Diverse Organization is a Stronger Organization

Finally, another key theme that emerged is recognition of the increasing significance diversity plays within the legal industry. This year has reinforced the importance of representation and diversity across every industry, and it has provided increased opportunities for education about how diversity within a workforce leads to a stronger, more innovative company. Organizational leaders are increasingly vocal about the key role diversity plays when they seek services from law firms and legal technology providers. Specifically, many companies have implemented internal diversity initiatives like women's leadership programs and employee-led diversity groups, and they are actively seeking out law firms and service providers that offer similar opportunities to their own employees. The key takeaway is that organizations and law firms should continue to look for ways to weave diverse representation into the fabric of their businesses.

Conclusion

While this year was plagued by unprecedented challenges and obstacles, the lessons we learned about technology and innovation will help organizations and law firms survive and thrive in the future.

To discuss any of these topics further, please feel free to reach out to me at SMoran@lighthouseglobal.com.

1 In fact, attorneys already have an ethical duty (imposed by the Rules of Professional Conduct) to understand and utilize existing technology in order to competently represent their clients.
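As promised above, here is a deliberately simple, rule-based sketch of the screening task described earlier: flagging documents that may contain PII or PHI before linear review. It is a baseline for illustration, not the AI approach itself, and every pattern (including the medical record number format) is an assumption.

```python
import re

# Illustrative patterns only; real screening needs far broader coverage.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\(?\b\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),  # assumed format
}

def flag_pii(text):
    """Return the PII/PHI categories whose patterns appear in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_pii("Patient MRN: 00123456, call (555) 867-5309"))
# -> ['us_phone', 'mrn']
```

The gap between this sketch and the AI tools described above is exactly the point: pattern matching catches only what it is told to look for, while trained models can generalize to phrasing and formats no one anticipated.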
October 31, 2022
Blog

Legal and AI: A Symbiotic Relationship for Modern Disclosure

The goal of Practice Direction 57AD (PD57AD, previously known as the Disclosure Pilot Scheme) is to modernise the UK's disclosure practice. This transformation is essential because the traditional, manual, and combative approach to disclosure is unsustainable in the face of today's massive data volumes and ever-evolving data sources. Manually collecting and reviewing millions of documents one by one has become prohibitively expensive, impossibly time consuming, and prone to the risk of both under- and over-disclosure. Add in the combative approach between opposing parties, and the traditional disclosure process becomes a recipe for skyrocketing legal costs, missed deadlines, and data issues that can derail entire matters. Conversely, a more cooperative approach that leverages AI technology can improve the process by allowing attorneys to focus their expertise on the critical parts of a matter and by refining AI tools to better handle data now and in future, related matters.

Thus, PD57AD focuses on two pivotal elements to modernise disclosure: cooperation and technology. Specifically, PD57AD requires parties to "liaise and cooperate with the legal representatives of the other parties to the proceedings…so as to promote the reliable, efficient, and cost-effective conduct of disclosure, including through the use of technology." Similarly, the Disclosure Review Document asks that each party outline how they "intend to use technology assisted review/data analytics to conduct a proportionate review of the data set" and further reminds parties of their duty to cooperate. Through PD57AD, legal teams' relationships to each other and to technology are changing in a few crucial ways that present opportunities to work smarter, more cost-effectively, and with greater agility.

The duty to cooperate

Judges are increasingly focusing on the language in PD57AD requiring cooperation between parties, and they will admonish counsel who attempt to use the disclosure process as a tool to punish an opposing party. For instance, in McParland & Partners Ltd v Whitehead, when a dispute arose over the framing of the issues for disclosure, the judge took the opportunity to remind both parties broadly of the following: "It is clear that some parties to litigation in all areas of the Business and Property Courts have sought to use the Disclosure Pilot as a stick with which to beat their opponents. Such conduct is entirely unacceptable, and parties can expect to be met with immediately payable adverse costs orders if that is what has happened."

As data volumes grow and PD57AD becomes further cemented into the fabric of the UK's disclosure practice, courts are showing a growing intolerance for "weaponised" disclosure practices. Certainly, parties can expect that the days of "data dumping" (i.e., over-collecting and producing documents to bury the opposing party in data) or, conversely, winning burden arguments based on the cost and time of manual review, are over.

The duty to leverage technology

Instead of this combative approach, courts will expect parties to come together cooperatively and agree on the use of technology to perform targeted disclosure that is both more cost-effective and more efficient. Indeed, in a cloud-based world, this symbiotic relationship between technology and legal is the only successful path forward for an effective disclosure process.
Under this modern approach, the technology used to collect, cull, review, and produce data must be leveraged in such a way that results can be verified by opposing counsel and judges. This means that all workflows and processes must be transparent, defensible, and agreed upon by opposing counsel. Even prior to the implementation of the Disclosure Pilot Scheme in 2018, judges had begun to crack down on parties who attempted to "go it alone" by unilaterally leveraging technology to cull or search data in a non-transparent way, without the consent of opposing counsel and/or without implementing industry-standard best practices. For example, in Triumph Controls UK Ltd, the judge explicitly admonished a party for deploying a computer assisted review (CAR) search strategy overseen by "ten paralegals and four associates" rather than a "single, senior lawyer who has mastered the issues in the case" to ensure that the criteria for relevance were applied consistently enough to effectively teach the CAR technology. He also rebuked the party's CAR approach because it was not transparent and could not be independently verified. Because these technology best practices were not followed, the judge required the producing party to go back and cooperatively agree with opposing counsel on an alternative review methodology to sample and re-review a portion of the original dataset. (A sketch of one such sampling check appears at the end of this article.)

The future of disclosure for counsel and clients

The modernisation of the disclosure process through cooperation and technology means it will be increasingly imperative that each party has the requisite legal and technology expertise to meet the requirements of PD57AD. Specifically, each party must have a barrister who understands disclosure law and can guide them through each step of the process in a way that complies with PD57AD. Each party should also have an expert who understands how to implement technology to perform targeted, efficient, and transparent disclosure workflows. As the legal decisions emanating from PD57AD show, parties without this expertise who attempt to "wing it" will increasingly find themselves facing delayed proceedings, hefty legal costs, and unfavourable judgements.

Law firms or corporations that don't have the requisite expertise internally must look for an external partner that does. This is where an experienced managed review partner can provide a true advantage to both law firms and their clients. Parties should look for a partner who can provide a team of technology experts and experienced barristers working in tandem and leveraging the industry's best technology. This team should be ready to jump in at the outset of every matter to understand the nuances of the client's data, as well as the underlying legal issues at play, so that each step of the disclosure process is performed transparently, defensibly, and efficiently.

Over time, a managed review team can become a valuable extension of corporate in-house and law firm teams. This partner can use institutional knowledge, gained by working with the same clients across multiple matters, to create customised, strategic, and automated disclosure workflows. These tailored processes, designed directly for a client's data infrastructure and technology, can save millions and achieve better outcomes.
In turn, law firms can refocus their attention on the evidence that actually matters, while assuring their clients that the disclosure process is contributing to lower legal costs and better overall results.

Conclusion

Under the modern approach to disclosure, parties must have someone on their team with the legal and technology expertise needed to perform the targeted, cooperative, and transparent disclosure methodology now required by PD57AD. This partnership between legal and technology is truly the only path forward for a successful disclosure endeavour in the face of today's more voluminous and complicated datasets. Parties that lack this expertise should look for an experienced managed review partner who can provide a consistent team of legal and technology experts to perform each step of the disclosure process efficiently, transparently, and defensibly.
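As promised above, here is a minimal sketch, under assumed numbers, of the kind of transparent validation referenced in the Triumph Controls discussion: draw a random sample from the documents a CAR/TAR process coded non-relevant and estimate how much relevant material was left behind (often called an elusion test). This is one common validation workflow, not a method PD57AD itself prescribes.

```python
import random

def elusion_sample(discarded_ids, sample_size=400, seed=42):
    """Draw a reproducible random sample of discarded documents for human review."""
    rng = random.Random(seed)  # fixed seed so the sample can be independently re-drawn
    return rng.sample(discarded_ids, min(sample_size, len(discarded_ids)))

discard_pile = list(range(150_000))  # hypothetical: docs the model coded non-relevant
sample = elusion_sample(discard_pile)

relevant_in_sample = 3  # hypothetical reviewer finding after coding the sample
elusion_rate = relevant_in_sample / len(sample)
print(f"estimated elusion rate: {elusion_rate:.2%}")  # here, 0.75%
```

Because the sample is drawn with a disclosed, fixed seed and reviewed by humans, opposing counsel and the court can verify the check rather than take the producing party's model on faith, which is precisely the transparency the cases above demand.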
January 4, 2021
Blog

How to Overcome Common eDiscovery Challenges for Franchises

Co-authored by Hannah Fotsch, Associate, Lathrop GPM; Samuel Butler, Associate, Lathrop GPM; and Casey Van Veen, Vice President Global eDiscovery Solutions, Lighthouse

2020 has been an incredibly tough year for many businesses, with companies big and small shuttering at a record pace due to COVID-19 restrictions and significant reductions in customer travel and spending. But there is one surprising business type that many people seem to want to continue to invest in despite the pandemic: the franchise business model.

For example, both the U.S. Chamber of Commerce and Business.com recently highlighted franchise-model businesses that were not only surviving the pandemic and associated lockdowns, but thriving. In fact, one of the thriving franchise business types called out by the authors was franchise consulting (consultants who help match aspiring franchise owners with franchise opportunities). Apparently, the pandemic has actually increased investment interest in franchise opportunities.

There may be a few reasons why people look to the franchise business model during an economic downturn. Many franchise businesses have the benefit of a widely known brand name and market presence. Many can leverage a fully baked business model, one that has presumably already been proven successful. Many also have more support than solo businesses in key business development areas, including marketing, advertising, and training. In short, the franchise business model may have more appeal during this economic upheaval than a solo business model because people trust the support it can provide in times of economic trouble.

However, several common pitfalls can still drag profits down and slow growth, leaving the franchise model just as exposed to failure as a solo business model in this time of economic uncertainty. One of those pitfalls is litigation and internal investigations, and the eDiscovery challenges they can raise. Not only do businesses operating within a franchise model face the same types of litigation and employee workplace issues all other businesses face; they may also have to deal with added litigation unique to the franchisor-franchisee relationship. All of this means increased cost and overhead, especially when it comes to preserving, collecting, reviewing, and producing the required data during the discovery phase.

In this article, we discuss the eDiscovery challenges and the primary legal issues we see affecting franchise businesses, large and small.
We'll also provide best-practice tips that can help keep eDiscovery costs down and enable franchise businesses to use their advantages to continue to survive and thrive during this trying time.

Legal eDiscovery Challenges

There are four main challenges we see affecting franchise businesses currently: (1) the explosion of data sources; (2) the increased frequency of internal investigations and compliance matters; (3) the lack of a playbook to ensure discovery is managed in a low-risk, low-cost manner; and (4) big data challenges.

Explosion of Data Sources

Walk through any franchise store, restaurant, or facility today and you will be amazed at the number of devices and systems that must be contemplated in discovery.

- Fixed systems on property: video security, card key access, time clock, email, and desktop computers
- Cloud-based systems: many of the above systems can also be found in the cloud, along with M365 and Google Suite business documents, email, collaboration tools, and backups
- Employee sources: personal email, cell phones (video, app chat, texts), iPads, and tablets
- Corporate-maintained systems: marketing documents, HR systems, Material Safety Data Sheets (MSDSs), proprietary training, and competitive analysis documentation

Moreover, employees at different franchise businesses may communicate on different platforms, which can exponentially diversify data sources. This amount and variety of sources can pose a myriad of challenges from an eDiscovery perspective.

The duty to preserve data begins as soon as litigation is "reasonably foreseeable." Thus, once an allegation that may lead to litigation surfaces, the clock begins ticking, not only to respond effectively to the allegation but also to ensure that the evidentiary data at issue is preserved. And once discovery begins, that preserved data will need to be collected. All of this can present challenges for the ill-prepared: How do you collect data from employees' personal devices? What are the local, state, and federal rules regarding the privacy of personal devices? How does collecting data differ between Apple and Android devices? The platforms that create data, and the possibilities for collecting data from them, must be addressed before litigation begins, or businesses risk losing data that could be essential to litigation.

Key takeaway: Know your data sources as a standard course of business. Make sure you know where data resides, how it can be accessed, and what can and cannot be collected from each source.

Internal Investigations & Compliance Matters

There has been a drastic increase in internal investigations and compliance matters with franchise clients recently. Hotline and compliance phone line tips, allegations of employee theft, and suspected fraud are on the rise. The key to resolving these types of investigations quickly and cost-efficiently is speed. Attorneys and company executives need to know as soon as possible: Is there truly an issue? How far does it go? How long has it been happening? How many employees does it affect? And what is the exposure, financially and socially? It is important to develop workflows and tools to help decision-makers and their legal experts sift through mountains of data quickly.

To understand the importance of this, consider an example. A company sales representative leaves the business and does not disclose their next line of work. A tip line reveals that the representative may have left for a competitor.
Shortly thereafter, business deals that were executed, and even ones in the pipeline, suddenly disappear to a competitor. The former employer quickly conducts a forensic investigation on the representative's laptop computer. Despite the representative's attempt to hide their activity, the investigation reveals that they had downloaded proprietary customer lists, price sheets, and other valuable IP during their last week of employment and had also moved large chunks of confidential information from the company's servers to thumb drives and utilized their personal email to store work communications. Without a strategic plan in place laying out how to quickly execute a forensic internal investigation in this type of situation, the company would have lost substantial revenue to a competitor.

Companies that are particularly concerned about former employees stealing proprietary information can go even further than creating an effective investigatory and remediation strategy: putting a departing-employee forensic monitoring program in place can prevent this type of abuse from happening in the first place.

Key takeaway: Have a program in place to certify that departing employees leave with only their personal belongings and not proprietary company information.

Lack of an eDiscovery Playbook

Playbooks come in many forms today: user manuals, company directives, cooking instructions, and recipe guides. A successful playbook for the legal department will establish a practical process to follow should a legal or compliance issue arise. Playbooks, like a checklist for a pilot about to fly a plane, ensure that everyone is following a solid process to avoid risk. These documents also prevent rogue players from recreating the wheel and going down potentially expensive rabbit holes.

Repetitive litigation situations are particularly well suited to playbooks, and standardizing the response to these situations helps to ensure the predictability of both outcomes and expenses. These documents can be as granular as necessary but typically include a few key topics such as:

The process for responding to a third-party subpoena, service, or allegation of wrongdoing

The company's systems that are typically subject to discovery

IT contacts who can help gather the information/data

A list of service providers/trusted partners to assist

Standard data processing and production specifications (e.g., time zone, global deduplication, single-page TIFF images at 400 dpi, text, and metadata fields)

Preferred technologies to search, review, and produce documents (e.g., Relativity)

Key takeaway: Playbooks can shave days off of the engagement process with outside counsel and data management companies. Having a repeatable process and plan on day one will save time and money as well as reduce risk.

Big Data Challenges

Franchisors face issues in litigation that are unique to the industry, from vicarious liability claims involving the actions of franchisees or their employees to the sheer unpredictability that comes from extensive business relationships involving franchisees of a breathtaking range of sophistication. An increase in litigation leads to an increase in data. Even a run-of-the-mill dispute can lead to the need to gather (and potentially review) more than 100,000 documents. Add one or two more small disputes, and the amount of data quickly becomes unmanageable (and expensive).

Fortunately, there have been impressive advances in the field of advanced legal analytics and artificial intelligence (AI).
These innovative eDiscovery tools can help legal professionals analyze data to quickly identify documents that are not important to the litigation or investigation (thereby eliminating the need to review them), as well as find the "story" within a data set. For example, some analytical tools can help identify code words that an employee might have used to cover up nefarious actions, or analyze communication patterns that allow attorneys to identify the bad actors in a given situation. Other tools now have the capability of analyzing all of the company's previously collected and attorney-reviewed data, which substantially reduces the need for attorney review in the current matter. All of these tools work to reduce the data burden, which in turn reduces costs and increases efficiency.

Key takeaway: Take the time to learn what eDiscovery solutions are available on the market today and how you can leverage them before you are faced with a need to use them.

To discuss this topic more, please feel free to reach out to me at CVanVeen@lighthouseglobal.com.
November 16, 2022
Blog

In Flex: Utilizing Hybrid Solutions for Today's eDiscovery Challenges

As eDiscovery becomes more complex, organizations are turning to hybrid solutions that give them the flexibility to scale projects up or down as needed. Hybrid solutions offer the best of both worlds: the ability to use self-service (Spectra) for small matters or full-service for large and complex matters. This flexibility is essential in today's litigation landscape, where the volume and complexity of data can change rapidly. Hybrid solutions give organizations the agility to respond quickly and effectively to changing eDiscovery needs.

In a recent webinar, I discussed hybrid eDiscovery solutions with Jennifer Allen, eDiscovery Case Manager at Meta, and Justin Van Alstyne, Senior Corporate Counsel, Discovery and Information Governance at T-Mobile. We explored some of the most pressing eDiscovery challenges, including data complexity, staffing, and implementation. We also discussed scenarios that require flexible solutions, keys to implementing new technology, and the future of eDiscovery solutions. Here are my key takeaways from our conversation.

Current eDiscovery challenges

A hybrid approach can transition between an internally managed solution and a full-service solution, depending on the nuances and unique challenges of the matter. This type of solution can be beneficial in situations where the exact needs of the case are not known at the outset. A few challenges come into play when deciding your approach to a project:

Data volume: When dealing with large data sets, being able to scale is critical. If the data for a matter balloons beyond the capacity of an internal team, having experts available is critical to avoid any disruptions in workflows or errors.

Data predictability: When it comes to analyzing data, consistency and predictability can greatly inform your approach to analysis. Standard data allows for more flexibility, as there is an expectation that the results will fall within a certain range. However, to ensure accurate representation, caution must be exercised when dealing with complicated big data. It is important to consider variables, potential outliers, and how the data is compiled and presented.

Internal capacity: It's important to monitor and manage the internal workload of your team closely. When everyone is already at maximum capacity, it can be tempting to outsource various tasks to a full-service project manager. Technology can be a more cost-effective and efficient method for filling the gaps.

The right talent and knowledge

Finding and utilizing the right team in today's competitive labor market can be difficult. A hybrid solution can help with this by providing a scalable way to get the most out of your workforce. With a hybrid solution, you have the option to staff fewer technical positions and provide training on the data or matters your organization most frequently encounters with your existing team. But if you have a highly complicated data source, you can still staff an expert who knows how to handle that data. An expert can shepherd the data into a solution, do extensive quality control to ensure that you marry up the family relationships correctly, and give confidence that you're not making a mistake.

To assuage concerns about the solution being misused, technology partners can provide training and education, and limit access to who can create, edit, or delete projects within the tool.
This training helps to upskill your team by teaching them more advanced technology, which leads to more efficient and sophisticated approaches to matters.

Flexible solutions for different matters

A hybrid solution can be a great option for a variety of matters, including internal investigations, enforcement matters, third-party subpoenas, and case assessments. These matters can benefit from the flexibility and scalability provided by a hybrid approach.

When determining if a matter needs full-service treatment, it's important to consider the specific requirements at hand. Questions around the volume and frequency of data production, the types of data involved, and the necessary metadata and tagging all play a role in determining whether a self-service approach will suffice or full-service support is needed. It's always important to consider the timeline and potential challenges during the transition. Experience with similar cases can provide valuable insight into what might work best in your situation.

Keys for implementing eDiscovery solutions

There are a few critical components to keep in mind when evaluating which eDiscovery solutions and tools are right for your business now, and as it grows.

Team training: With any new solution or product there may be some trepidation around learning and adoption. Leverage vendor support to answer your questions and help train your team. Keep them involved in your communications with outside counsel and internal teams so you can receive suggestions and assistance if needed. As users get more experience with the software, they will begin to feel empowered and understand how the tool can be used most effectively.

Scalability: One of the most significant hurdles to scaling big eDiscovery projects is the amount of data that needs to be processed. With new data sources, tighter deadlines, and more urgency, it can be difficult to keep up with the demand. Using a fully manual process or a project management solution has a greater chance for error or increased cost. A flexible solution can help your team keep up with increasing data volumes while reducing costs and errors.

Automation: Automating repetitive tasks and workflows can dramatically speed up data collection and analysis. This can be a huge advantage when investigating large, complex cases. Additionally, automation can help to ensure that data is collected and parsed consistently.

Cost-benefit analysis: Through support and training with a self-service tool, you can work to reduce the number of support requests. This can minimize the time your team spends on each request and ultimately lowers the cost of providing support. The cost reduction of self-service tools is often substantial, and it can have a positive snowball effect as your team becomes more skilled at the task. You can reinvest those savings into other business areas with less need for oversight and fewer mistakes.

The future state of eDiscovery solutions

The proliferation of DIY eDiscovery solutions has made it easier for organizations to take control of their data and manage their cases in-house. As AI technology, including continuous active learning (CAL) and technology-assisted review (TAR), continues to evolve, teams will better understand how to handle the growing demands of data and implement hybrid tools.
As we move into the future of eDiscovery and legal technology, DIY models will play an increasingly important role in supporting business needs.
April 27, 2023
Blog

How the Right Legal Team, AI, and a Tech-Forward Mindset Can Optimize Review

To keep up with the big data challenges in modern review, adopting a technology-enabled approach is critical. Modern technology like AI can help case teams defensibly cull datasets and gain unprecedented early insight into their data. But if downstream document review teams are unable to optimize technology within their workflows and review tasks, many of the early benefits gained by technology can quickly be lost.

In a recent episode of Law & Candor, I was happy to discuss the ongoing evolution of document review—including the challenges of incorporating available technologies. We explored some of the most pressing eDiscovery challenges, including today's data complexity, and how to break through the barriers that keep document review stuck in the manual, linear review model. We also discussed the value of expertise and where it may be applied to optimize review in various phases of a project. Here are my key takeaways from our conversation.

Increasing data complexity challenges and entrenched manual review paradigms

Today's digital data—a wellspring of languages, emojis, videos, memes, and unique abbreviations—looks nothing like the early days of electronic information, and it is certainly a universe away from the paper world where legal teams had to plow through documents with paper cuts, redaction tape, and all. Yet, that "paper process" thinking—the manual, linear review model—still has a firm hold in the legal community and presents an unfortunate barrier to optimizing review.

The evolution is telling. As digital data began to take over, the early AI adopters and the "humans need to look at everything" review camps staked their ground. Although the two are moving closer together as time goes on, the use of technology is not as highly leveraged as it could be, leaving clients to pay the high costs of siloed review when technology-enabled processes could enhance accuracy and reduce costs. There are a variety of factors that can contribute to this resistance, but it may also be simply a matter of comfort; it's always easier to do what you already know in the face of changes that may seem too difficult or complex to contemplate.

For the best result, know when and where to leverage available technologies in the review process

Human beings are certainly a core component of the document review process, and they always will be, but thinking about the entire review lifecycle strategically, from collection through trial preparation, is critical when it comes to understanding where you can gain value from technology. Technology should be considered a supplement to—not a substitute for—human assessment and knowing where to use it effectively is important. When considering the overall document review process, two key questions are: Where can you get more value by using technology? And where are the potential areas of either nuanced or high-risk communications that may require a more individualized assessment? The goal, after all, isn't to replace humans with technology, but rather to replace outmoded contract review factories with smarter alternatives that leverage the strengths of both technology and human expertise. A smaller review team, coupled with experts who can effectively apply machine learning and linguistic modeling techniques in the right place, is a much more efficient and cost-effective approach than simply using a stable of reviewers.
Technology buyers need to understand what a given tech does, how it differs from other products, and what expertise should be deployed to optimize its use

Ironically, the profusion of viable tech options that can be applied to expedite document review may be off-putting, but this is a "many shades of gray" situation. Many products do similar things, and it is important to understand what the differences are—they may be significant. Today's tools are quite powerful, and layering them alongside the TAR tools that document review teams have become more familiar with is what allows for the true optimization of the review process. These tools are not plug-and-play, however. You need to know what you're doing. It takes specific expertise to be able to assess the needs of the matter, the nature of the data, the efficacy of the appropriate tools, and whether they're providing the expected result.

Collaboration is still the critical core component of document review

And let's not forget that document review is a collaborative process between client counsel, project managers, and the review team. Within this crucial collaboration, specific expertise at various points in the process ensures the best result, including:

• Expertise in review consulting to assess the right options for both the data that's been collected and the project goals.
• Individualized experts in both the out-of-the-box TAR technology as well as any proprietary technology being used so that the tech can be fine-tuned to optimize the benefits.
• A core team of expert human reviewers with the appropriate skills.

Experimentation with technology can help bridge the divide

With so many products available to enhance the document review workflow, it makes sense to test potential options. Running a parallel process for a particular aspect of the review to get comfortable with a new product can be very helpful. For example, privilege review, which is an expensive part of the review process, could be a good place to test an alternate workflow.

An integrated approach works best

The bottom line is that an integrated approach, combining advanced technology and human expertise, is the best solution. The technology to increase the efficiency and effectiveness of document review is out there, and most of it has been shown to be low risk and high value. The cost-effectiveness of an integrated approach has been shown over and over again: in using the appropriate technology, budgets can be reduced, and savings reinvested in new matters. It is up to the client and their legal and technology teams to work together in deciding what combination of tools makes the most sense for their organization and matter types. Just make sure to call upon those with the appropriate expertise to provide guidance.

For more examples of how AI and human expertise are optimizing review, check out our review solutions page.
March 17, 2021
Blog

How Name Normalization Accelerates Privilege Review

A time-saving tool that consolidates different names for the same entity can make all the difference.

One of the many challenges of electronic information and messaging rests in ascertaining the actual identity of the message creator or recipient. Even when only one name is associated with a specific document or communication, the identity journey may have only just begun. The many forms our monikers take as they weave in and out of the digital realm may hold no import for most exchanges, but they can be critical when it comes to eDiscovery and privilege review, where accurate identification of individuals and/or organizations is key. It's difficult enough when common names are shared among many individuals (hello, John Smith?), but the compilation of our own singular name variations and aliases as they live in the realm of digital text and metadata makes life no less complicated. In addition, the electronic format of names and email addresses as they appear in headers or other communications can also make a difference. Attempting to consolidate these variations during document review is painstaking and error-prone.

Not metadata — people.

Enter "name normalization." Automated name normalization tools come to the rescue by isolating and consolidating information found in the top-level and sub-level email headers. Automated name normalization is designed to scan, identify, and associate the full set of name variants, aliases, and email addresses for any individual referenced in the data set, making it easier to review documents related to a particular individual during a responsive review. The mindset shift from email sender and recipient information as simply metadata to profiles of individuals is a subtle but compelling one, encouraging case teams and reviewers to consider people-centric ways to engage with data. This is especially helpful when it comes to identifying what may be—and just as importantly, what is not—a potentially privileged communication.

Early normalization of names can optimize the privilege workflow.

When and how name normalization is done can make a big difference, especially when it comes to accelerating privilege review. Name normalization has historically been a process executed at the end of a review for the purpose of populating information into a privilege log or a names key. However, performing this analysis early in the workflow can be hugely beneficial. Normalizing names at the outset of review, or during the pre-review stage as data is being processed, enables a team to gain crucial intelligence about their data by identifying exactly who is included in the correspondence and what organizations they may be affiliated with. With a set of easy-to-decipher names to work with instead of a mix of full names, nicknames, initials without context, and other random information that may be even more confusing, reviewers don't have to rely on guesswork to identify people of interest or those whose legally-affiliated or adversarial status may trigger (or break) a privilege call.

Name normalization tools vary, and so do their benefits.

Not all name normalization tools are created equal, so it is important to understand the features and benefits of the one being used. Ideally, the algorithm in use maximizes the display name and email address associations as well as the quality and legibility of normalized name values, with as little cleanup required as possible.
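To make the consolidation idea concrete, here is a minimal sketch in Python. The sample headers and the longest-name heuristic are illustrative assumptions only; commercial name normalization tools layer on alias inference, sub-header parsing, and far smarter ranking than this shows.

```python
# A minimal sketch of header-based name normalization, assuming a simple
# list of raw header strings. Real tools do far more (alias inference,
# sub-header parsing, organization classification); this only shows the
# core idea of consolidating variants behind one canonical identity.
from collections import defaultdict
from email.utils import getaddresses

raw_headers = [
    '"Smith, John" <jsmith@acme.com>',
    'John Smith <JSmith@Acme.com>',
    'jsmith@acme.com',
    '"J. Smith (Legal)" <jsmith@acme.com>',
    'Jane Doe <jdoe@lawfirm.com>',
]

# Key every variant by its lowercased email address, the most reliable
# anchor for identity, and collect the display-name variants seen for it.
identities = defaultdict(set)
for display_name, address in getaddresses(raw_headers):
    if address:
        identities[address.lower()].add(display_name or address)

for address, variants in identities.items():
    # Pick the longest display name as the "canonical" label; a real tool
    # would apply smarter ranking and legibility rules than this.
    canonical = max(variants, key=len)
    print(f"{canonical} <{address}>  variants: {sorted(variants)}")
```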
Beyond that core step, granular fielded output options, including top-level and sub-header participants, are also helpful, as are simple tools for categorizing normalized name entities based on their function, such as privilege actors (e.g., in-house counsel, outside counsel, legal agents) and privilege-breaking third parties (e.g., opposing counsel, government agencies). The ability to automatically identify and classify organizations as well as people (e.g., government agencies, educational institutions, etc.) is also a timesaver.

Identification of privilege-breaking third parties is important: although some third parties act as agents of either the corporation or the law firms in ways that would not break privilege, others likely would. Knowing the difference can allow a team to triage their privilege review by either eliminating documents that include the privilege breakers from the review entirely, significantly reducing the potential privilege pile, or organizing the review with this likelihood in mind, helping to prevent any embarrassing privilege claims that could be rejected by the courts.

Products with such features can provide better privilege identification than is currently the norm, resulting in less volume to manage for privilege log review work later on and curtailing the re-reviews that sometimes occur when new privilege actors or breakers come to light later in the workflow. This information enables a better understanding of any outside firms and attorneys that may not have been included in a list of initial privilege terms and assists in prioritizing the review of documents that include explicit or implied interaction with in-house or outside counsel.

Other privilege review and logging optimizers.

Other analytics features that can accelerate the privilege review process are coming on the scene as AI tools become more accepted for document review. Privilege Analytics from within Lighthouse Matter Analytics can help review teams with this challenging workflow, streamlining and prioritizing second-pass review with pre-built classifiers to automate identification of law firms and legal concepts, tag and tier potentially privileged documents, detect privilege waivers, create privilege reasons, and much more.

Interested in how Name Normalization works in Privilege Analytics? Let us show you!
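And for a flavor of the entity categorization described above, a minimal sketch, assuming a hand-maintained domain map; the domains and labels are hypothetical, and real privilege analytics rely on trained classifiers rather than simple lookups:

```python
# A minimal sketch of categorizing normalized entities by function.
# DOMAIN_CATEGORIES is a hypothetical hand-maintained map; commercial
# tools use trained classifiers instead of static lookups like this.
DOMAIN_CATEGORIES = {
    "lawfirm.com":      "privilege actor (outside counsel)",
    "acme.com":         "internal (check for in-house counsel)",
    "opposingfirm.com": "privilege breaker (opposing counsel)",
    "sec.gov":          "privilege breaker (government agency)",
}

def categorize(address: str) -> str:
    # Fall back to human review rather than guessing at unknown domains.
    domain = address.rsplit("@", 1)[-1].lower()
    return DOMAIN_CATEGORIES.get(domain, "unknown: route to human review")

for addr in ["jdoe@lawfirm.com", "jsmith@acme.com", "examiner@sec.gov"]:
    print(addr, "->", categorize(addr))
```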
July 14, 2021
Blog

How to Get Started with TAR in eDiscovery

In a recent post, we discussed that requesting parties often demand more transparency with a Technology Assisted Review (TAR) process than they do with a process involving keyword search and manual review. So, how do you get started using (and understanding) TAR without having to defend it? A fairly simple approach: start with some use cases that don't require you to defend your use of TAR to outside parties.

Getting Comfortable with the TAR Workflow

It's difficult to use TAR for the first time in a case for which you have production deadlines and demands from requesting parties. One way to become comfortable with the TAR workflow is to conduct it on a case you've already completed, using the same document set with which you worked in that prior case. Doing so can accomplish two goals:

You develop a better understanding of how the TAR algorithm learns to identify potentially responsive documents: Based on documents that you classify as responsive (or non-responsive), you will see the algorithm begin to rank other documents in the collection as likely to be responsive as well. Assuming your review team was accurate in classifying responsive documents manually, you will see how those same documents are identified as likely to be responsive by the algorithm, which engenders confidence in the algorithm's ability to accurately classify documents.

You learn how the TAR algorithm may identify potentially responsive documents that were missed by the review team: Human reviewers are only human, and they sometimes misclassify documents. In fact, many studies would say they misclassify them regularly. Assuming that the TAR algorithm is properly trained, it will often classify documents (both responsive and non-responsive) more accurately than the human reviewers, enabling you to learn how the TAR algorithm can catch mistakes that your human reviewers have made.

Other Use Cases for TAR

Even if you don't have the time to use TAR on a case you've already completed, you can use TAR for other use cases that don't require a level of transparency with opposing counsel, such as:

Internal Investigations: When an internal investigation dictates review of a document set that is conducive to using TAR, this is a terrific opportunity to conduct and refine your TAR process without outside review or transparency requirements to uphold.

Review Data Produced to You: Turnabout is fair play, right? There is no reason you can't use TAR to save costs when reviewing the documents produced to you while determining whether the producing party engaged in a document dump.

Prioritizing Your Document Set for Review: Even if you plan to review the entire set of potentially responsive documents, using TAR can help you prioritize the set for review, pushing documents less likely to be responsive to the end of the queue. This can be useful in rolling production scenarios, or if you think that an eventual settlement could obviate the need to review the entire collection.

Combining TAR technology with efficient workflows that maximize the effectiveness of the technology takes time and expertise. Working with experts who understand how to get the most out of the TAR algorithm is important. But it can still be daunting to use TAR for the first time in a case where you must meet a stringent level of defensibility and transparency with opposing counsel.
Applying TAR first to use cases where that level of transparency is not required enables your company to get to that efficient and effective workflow—before you have to prove its efficacy to an outside party.
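For readers who want to see the core mechanic in miniature, the sketch below mimics the retrospective exercise described above: train a simple model on documents already classified in a completed matter, then see how it ranks the rest. It assumes scikit-learn and a toy corpus, both illustrative; commercial TAR platforms use their own models plus the sampling and validation protocols this omits.

```python
# A minimal sketch of the retrospective TAR exercise. The corpus and
# labels are hypothetical, and scikit-learn stands in for whatever model
# a commercial TAR tool actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents and the review team's original calls from the completed case
# (1 = responsive, 0 = non-responsive).
train_docs = [
    "pricing agreement for the regional distribution deal",
    "please review the attached vendor contract terms",
    "lunch on friday? the new place downtown",
    "fantasy football trade deadline reminder",
]
train_labels = [1, 1, 0, 0]

unreviewed = [
    "follow-up on distribution contract pricing",
    "office holiday party logistics",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
model = LogisticRegression().fit(X_train, train_labels)

# Rank unreviewed documents by predicted probability of responsiveness.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

Comparing a ranking like this against the calls your reviewers already made is exactly the confidence-building exercise described above.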
January 22, 2020
Blog

From A to Ziti: Finding Hidden Meaning and Intent in Large Datasets

Investigators experienced in interrogating data know that there may be more to a communication than meets the eye. Whether from intentional or unconscious behavior, clues abound. When key facts are conveyed in nuanced or disguised language, it is important to explore the available collection of documents and communications with a linguistic and forensic sensibility. Although a difficult endeavor, the payoff is high when previously hidden meaning and intent surfaces in your investigation. In the process, individual finds often lead to a larger set of findings, providing you a deeper understanding of who exactly knew or did what, and how they felt about it.

Follow the Scent

Unlike with a typical discovery request, searching for hidden meaning and intent in large data sets requires an ability to pinpoint nuanced — and often indirect — textual cues. This capability is distinct from relevance-based classification techniques such as TAR, which are optimized to ensure consistent coverage for a topic across a large data set. When looking for possible subterfuge or heightened emotion, for example, the task is more akin to incremental detective work than it is batch or prioritized classification. As such, it is useful to frame your efforts within an iterative search workflow that brings you into contact with potentially interesting content and communications, while also allowing you the ability to pivot off your search to explore particular key people, events, and timelines where interesting content appears to be clustered.

Make a List

An important first step in searching for key content you might otherwise be missing is to develop, or tailor pre-existing, lists of keywords and phrases targeting the types of behaviors and sentiments you are interested in uncovering. For example, if you are investigating possible fraud, you may want to focus part of your search on isolating communications in which there are textual traces suggesting concealment. Some concealment-related phrases to add to a keyword list for a fraud investigation could include "do not share this," "take off line," or "delete email," to name just a few. Additionally, if you are interested in isolating internal chatter conveying strong concern or worry, you could include items like "atrocious," "huge mistake," "ill advised," or "ordeal."

Ziti? Or Fraud?

Apart from language expressing worry or concealment, other language worth targeting to get at hidden meaning and intent could include profanity and slang. Also, keep in mind the cultural context in which the communications and documents you are searching through were produced. For example, in a recent bribery and corruption case involving New York state government officials and private business executives, "ziti" (or "zitti") was used as a code word to refer to bribes and extortion money. This particular code word was borrowed from the language used by organized crime in New York and surrounding states.

Stay on Topic

Given the richness of language and culture, keyword lists targeting hidden figurative meaning can grow to hundreds, even thousands, of words and phrases. To avoid a deluge of hits, it is useful to pair these special keywords with broad issue indicators to make sure you are targeting not only figurative language, but also potentially relevant content.
For example, if you are interested in isolating potential fraud around billing practices, one possible tactic would be to leverage proximity search by pairing fraud-related terms like "unusual" with a broad topical keyword like "billing" (e.g., unusual /50 bill[s,ed,ing]). Using this tactic in a systematic way across targeted sentiments and topics will get you a richer result set to focus your in-depth review on. A minimal sketch of this pairing logic appears at the end of this post.

Prepare Ahead of Time

As with any search effort, setting up your data by threading email conversations and identifying near-duplicate sets of documents are two of the many approaches available to winnow down and prioritize the set of documents you perform targeted searches on. Techniques such as name normalization can also be especially helpful when your aim is to understand who is communicating with whom on a consistent basis.

Keep Smiling

It is also useful to explore how best to tailor the indexing of your data for searching — for instance, emojis are often used in key relevant conversations, yet they are rarely indexed automatically for search in review platforms. From both a discovery and investigative perspective, this can be a big blind spot. Preliminary research on the topic shows that the number of US cases referring to emoji as evidence increased from 33 in 2017 to 53 in 2018.

Searching for key content conveyed through nuanced language is a complex task that is substantively distinct from relevance and topic classification. With the right mindset, workflow, and tools, you will be able to structure and manage this effort in order to isolate key facts otherwise left hidden that are relevant to your case.
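As promised, here is a minimal sketch of the proximity pairing logic (e.g., unusual /50 bill[s,ed,ing]), assuming plain-text documents. Review platforms evaluate such queries against an index; this only illustrates the underlying idea.

```python
# A minimal sketch of proximity pairing over plain text. The sample
# documents are hypothetical; real platforms run this against an index.
import re

def proximity_hit(text: str, term: str, stem: str, window: int = 50) -> bool:
    """True if `term` appears within `window` tokens of any word starting
    with `stem` (so 'bill' matches bills, billed, billing)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    term_positions = [i for i, t in enumerate(tokens) if t == term]
    stem_positions = [i for i, t in enumerate(tokens) if t.startswith(stem)]
    return any(abs(a - b) <= window
               for a in term_positions for b in stem_positions)

docs = [
    "flagging some unusual activity in last month's billing reconciliation",
    "the unusual weather delayed the conference travel plans",
]
for doc in docs:
    # Only the first document pairs "unusual" with a bill- word nearby.
    print(proximity_hit(doc, "unusual", "bill"), "-", doc)
```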
July 9, 2019
Blog

Finding the Needles Faster – Speeding up the Second Request Process

Facing a second request can be painful, kind of like searching for a needle in a haystack, exacerbated by a strict deadline looming above it all. And, as volumes of data continue to grow and types of data become increasingly complex, these matters are often inefficient and costly, while getting to the key documents (needles) quickly can feel like an insurmountable challenge.

In the 2019 Antitrust Leadership Panel, I gathered together a group of top antitrust experts to discuss the grueling challenges, the role of technology and emerging trends, and a few concrete recommendations for progress to make second requests more efficient and less costly. The video series of the panel was very well received, and I have since been asked by several viewers, "So, Bill, how do I apply these ideas to my current antitrust practice or process? How do I find the needles in the haystack?" In this blog, I will answer just that and distill the key takeaways from the panel to share with your team so that you can be better equipped to tackle a second request and find the critical needle in that giant, ever-evolving haystack of data.

Lesson 1: Technology is a must, but so is trust.

Our expert panelists all agreed that although the usage of technology can be challenging to negotiate with the DOJ and the FTC, its application is paramount in getting to key documents quickly. With that in mind, the first phase in preparing for your next second request and conquering the haystack of data is to leverage technology. Here are some simple steps to get you started:

It's critical to understand what technology is out there and what it can do (e.g., AI, predictive coding, email threading, deduplication). Ensure you and your team stay on top of what each of these tools does and how you can leverage them in second requests. Understanding and educating yourself on all aspects of the technology is key to increasing the government's trust and acceptance of new tools they may not be familiar with…more on that in the next step.

Once you understand what technology options are available and have the best probability for success in your specific case, select the tool or tools that make the most sense for your team and secure them (e.g., by leveraging your vendor's tools or procuring them in house).

Work with your vendor or in-house team to develop sound evidence that will persuade the DOJ and FTC to accept technology so that it can be more broadly used and leveraged within your matters. This will allow you to save significant time and money and, who knows, if we all did it, the DOJ and FTC may be more likely to accept it.

Lesson 2: Proportionality can save you time and money, so leverage it.

The panel also discussed that although the DOJ and FTC sometimes don't seem to make proportionality a priority in second requests and may intentionally request broader swaths of data to buy more time outside of the strict statutory guidelines, it's clear that proportionality should be a primary focus for both parties to limit the burdensome amount of data that must be collected and reviewed.
Consider this next set of steps as another way to potentially save time and money when trying to dig through the haystack of second request data:

When faced with a second request, first discuss amongst your team what arguments for proportionality can be made.

Ensure your arguments for proportionality are based on compelling evidence and bring them to the DOJ and FTC at the onset of the second request.

If your argument is not accepted on one matter, work with your vendor to focus on building more evidence to get the DOJ and FTC on the side of proportionality in your next matter.

Lesson 3: Privilege review tools can be a privilege in the long run.

According to the panelists, having better and more user-friendly privilege review tools would result in a significantly improved second request process for everyone involved. So, how do you take concrete action on that? Here are a few additional steps to improve the privilege review process and break down one of the most burdensome parts of tackling the haystack:

Reach out to your vendor and ask what tools and solutions they have around privilege review.

Test out their privilege tools on your next matter and provide feedback for continuous process improvement.

Work with your vendor to develop customized privilege tools using advanced analytics to find privileged documents more quickly and easily.

When leveraging privilege tools, be sure to track solid metrics and develop new evidence to showcase to the DOJ and FTC why they can trust the advanced technology.

Share these takeaways with your team and apply the steps that make sense for your practice so that the next time you're faced with a daunting second request and a seemingly insurmountable amount of data, you'll be well positioned to tackle the challenge and find the right needles in the haystack from the onset.

Want to discuss this topic more? Feel free to reach out to me at BMariano@lighthouseglobal.com.

To explore related content, click the links below:

Antitrust Leadership Panel: Time and Cost
Antitrust Leadership Panel: The Role of Technology
Antitrust Leadership Panel: Evolving for the Future
January 14, 2021
Blog

Four Ways a SaaS Solution Can Make In-House Counsel Life Easier

Your team is facing a wall of mounting compliance requirements and internal investigations, as well as a few larger litigations you fear you may not be able to handle given internal resource constraints. Each case involves unwieldy amounts of data to wade through, and that data must be collected from constantly evolving data sources—from iPhones to Microsoft Teams to Skype chats. You're working with your IT team to ensure your company's most sensitive data is protected throughout the course of all those matters. All of this considered, your team is faced with vetting eDiscovery vendors to handle the large litigation matters and ensuring those vendors can effectively protect your company's data. Simultaneously, you are shouldering the burden of hosting a separate eDiscovery platform for internal investigations with a legal budget that is already stretched thin. Does this sound familiar? Welcome to the life of a modern in-house attorney.

Now more than ever, in-house counsel need to identify cost-effective ways to improve the effectiveness and efficiency of their eDiscovery matters and investigations with attention to the security of their company's data. This is where adopting a cloud-based self-service eDiscovery platform (such as Spectra) can help. Below, I've outlined how moving to this type of model can ease many of the burdens faced by corporate legal departments.

1. The Added Benefit of On-Demand Scalability

A cloud-based, self-service platform provides your team the ability to quickly transfer case data into a cutting-edge review platform and access it from any web browser. You're no longer waiting days for a vendor to take on the task with no insight into when the data will be ready. With a self-service solution, your team holds the reins and can make strategic decisions based on what works best for your budget and organization. If your team has the bandwidth to handle smaller internal investigations but needs help handling large litigations, a scalable self-service model can provide that solution. If you want your team to handle all matters, large and small, but you worry about collecting from unique sources like Microsoft Teams, or need help defensibly culling a large amount of data in a particular case, a quality self-service provider can handle those issues and leave the rest to you. In short, a self-service solution gives you the ability to control your own fate and leverage the eDiscovery tools and expertise you need, when you need them.

2. Access to the Best eDiscovery Tools – Without the Overhead Costs

A robust self-service eDiscovery solution gives your team access to the industry's best eDiscovery tools, enabling you to achieve the best outcome on every matter for the most efficient cost. Whether you want to analyze your organization's entire legal portfolio to see where you can improve review efficiency across matters, or you simply want to leverage the best tools from collection to production, the right solution will deliver. And with a self-service model, your team will have access to these tools without the burden of infrastructure maintenance or software licensing. A quality self-service provider will shoulder these costs, as well as the load of continuously evaluating and updating technology. Your team is free to do what it does best: legal work.
3. The Peace of Mind of Reliable Data Security

In a self-service eDiscovery model, your service provider shoulders the data security risk with state-of-the-art infrastructure and dedicated IT and security teams capable of remaining attentive to cybersecurity threats and evolving regulatory standards. This not only allows you to lower your own costs and free up valuable internal IT resources, but also provides something even more valuable than cost savings—the peace of mind that comes with knowing your company's data is being managed and protected by IT experts.

4. Flexible, Predictable Pricing and Lower Overall Costs

Self-service pricing models can be designed around your team's expectations for utilization—meaning you can select a pricing structure that fits your organization's unique needs. From pay-as-you-go models to a subscription-based approach, self-service pricing often differs from traditional eDiscovery pricing in that it is clear and predictable. This means you won't be blindsided at the close of the month with hidden charges or unexpected hourly fees from a law firm or vendor. Add this type of transparent pricing to the fact that you will no longer be shouldering technology costs or paying for vendor services you don't need, and the result is a significantly lower eDiscovery overhead that can fit within any legal budget.

These four benefits can help corporations and in-house counsel teams significantly improve eDiscovery efficiency and reduce costs. For more information on how to move your organization to a self-service eDiscovery model, be sure to check out our other articles on the self-service eDiscovery revolution, including tips for overcoming self-service objections and building a self-service business case.
May 20, 2021
Blog

eDiscovery, Ethics, and the Case for AI

Ever since ABA Model Rule of Professional Conduct 1.1 [1] was modified in 2012 to include an ethical obligation for attorneys to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology [2]" (emphasis added), attorneys in almost every state have had a duty to stay abreast of how technology can both help and harm clients. In other words, most attorneys practicing law in the United States have an ethical obligation not only to understand the risks created by the technology we use in our practice (think data breaches, data security, etc.), but also to keep abreast of technology that may benefit our practice.

Nowhere is this obligation more implicated than within the eDiscovery realm. We live in a digital world and our communications and workplaces reflect that. Almost any discovery request today will involve preserving, collecting, reviewing, and producing electronically stored information (ESI) – emails, text messages, video footage, Word documents, Excels, PowerPoints, social media posts, collaboration tool data – the list is endless. To respond to ESI discovery requests, attorneys need to use (or in many cases, hire someone who can use) technology for every step of the eDiscovery process, from preservation to production. Under Model Rule 1.1, that means that we must stay abreast of that technology, as well as any other technology that may be beneficial to completing those tasks more effectively for our clients (whether we are providing legal advice to an organization as in-house counsel or externally through a law firm). In this post, I posit that in the very near future, this ethical obligation should include a duty to understand and evaluate the benefits of leveraging Artificial Intelligence (AI) during almost any eDiscovery matter, for a variety of different use cases.

AI in eDiscovery

First, let's level set by defining the type of technology I'm referring to when I use the term "AI," as well as take a brief look at how AI technology is currently being used within the eDiscovery space. Broadly speaking, AI refers to the capability of a machine to imitate intelligent human behavior. Within eDiscovery, the term is often also used broadly to refer to any technology that can perform document review tasks that would normally require human analysis and/or review.

There is a wide range of AI technology that can help perform document review tasks. It spans everything from older forms of machine learning, which analyze the text of a document alongside the decisions a human made about that document in order to predict what the human decision would be on other documents, to newer generations of analytics technology, which can analyze metadata and the language used within documents to identify complicated concepts, like the sentiment and tone of the author. This broad spectrum of technology can be incredibly beneficial in a number of important document review use cases, the most common of which I have outlined below.

Culling Data – One of the most common use cases for AI technology within eDiscovery is leveraging it to identify documents that are relevant to the discovery request and need to be produced, or, conversely, documents that are irrelevant to the matter at hand and do not need to be produced. AI technology is especially proficient at identifying documents that are highly unlikely to be responsive to the discovery request.
In turn, this helps attorneys and legal technologists "cull" datasets, essentially eliminating the need to have a human review every document in the dataset. Newer AI technology is also better at identifying documents that would never be responsive to any document request (i.e., "junk" documents) so that these documents can be quickly removed from the review queue. More advanced AI technology can do this by aggregating previously collected data from within an organization, as well as the attorney decisions made about that data, and then using advanced algorithms to analyze the language, text, metadata, and previous attorney decisions to identify objectively non-responsive junk documents that are pulled into discovery request collections time and time again.

Prioritizing and Categorizing Data – Apart from culling data, AI can also be used to simply make human review more efficient. Advanced AI technology can be used to identify specific concepts and issues that attorneys are looking for within a dataset and group them to expedite and prioritize attorney review. For example, if a litigation involves an employee accused of stealing company information, advanced AI technology can analyze all the employee's communications and digital activities and identify any anomalies, such as activity that occurred during abnormal work hours or communications with other employees with whom they normally would not have reason to interact. The machine can then group those documents so that attorneys can review them first. This identification and prioritization can be critical in evaluating the matter as a whole, as well as helping attorneys make better strategic decisions about the matter. Review prioritization can also simply help meet court-imposed production deadlines on time by enabling human reviewers to focus on data that can go out the door quickly (i.e., documents that the machine identified as highly likely to be responsive but also highly unlikely to involve issues that would require more in-depth human review, like privilege or confidentiality).

Identifying Sensitive Information – On the same note, AI technology is now more adept at identifying issues that usually require more in-depth human review. Newer AI technology that uses advanced Natural Language Processing (NLP) and analyzes both the metadata and text of a document is much better at identifying documents that contain sensitive information, like attorney-client privileged communications, company trade secrets, or personally identifiable information (PII). This is because more advanced NLP can take context into account and, therefore, more accurately identify when an internal attorney is chatting with other employees over email about the company fantasy football rankings vs. when they are providing actual legal advice about a work-related matter. It can do this by analyzing not only the language being used within the data, but also how attorneys are using that language and with whom. In turn, this helps attorneys conducting eDiscovery reviews prioritize documents for review, expedite productions, and protect privileged information.

Attorneys' Ethical Obligation to Consider the Benefits of AI in eDiscovery

The benefits of AI in eDiscovery should now be clear. It is already infeasible to conduct a solely human linear review of terabytes of data without the help of AI technology to cull and/or prioritize data.
A review of that amount of data (performed by humans reviewing one document at a time) can require months and even years, a virtual army of human reviewers (all being paid at an hourly rate), as well as the training, resources, and technology necessary for those reviewers to perform the work proficiently. Because of this, AI technology (via technology assisted review (TAR)) has been widely accepted by courts and used by counsel to cull and prioritize large document sets for almost a decade.

However, while big datasets involving terabytes of data were once the outliers in the eDiscovery world, they are now quickly becoming the norm for organizations and litigations of all sizes due to exploding data volumes. To put the growing size of organizational data in context, the total volume of data being generated and consumed worldwide has increased from 33 zettabytes in 2018 to a predicted 175 zettabytes in 2025 [3]. This means that soon, even the smallest litigation or investigation may involve terabytes of data to review. In turn, that means that AI technology will be critical for almost any litigation involving a discovery component.

And that means that we as attorneys will have an ethical duty to keep abreast of AI technology to competently represent our clients in matters involving eDiscovery. As we have seen above, there is just no way to conduct massive document reviews without the help of AI technology. Moreover, the imperative task of protecting sensitive client data like attorney-client privileged communications, trade secret information, and PII (all of which can be hidden and hard to find amongst massive amounts of data) also benefits from leveraging AI technology. If there is technology readily available that can lower attorney costs and client risk, while ensuring a more consistent and accurate work product, we have a duty to our clients to stay aware of that technology and understand how and when to leverage it.

But this ethical obligation should not scare us as attorneys, and it doesn't mean that every attorney will need to become a data scientist in order to ethically practice law in the future. Rather, it means that we, as attorneys, will need to develop a baseline knowledge of AI technology when conducting eDiscovery so that we can effectively evaluate when and how to leverage it for our clients, as well as when and how to partner with appropriate eDiscovery providers that can provide the requisite training and assist with leveraging the best technology for each eDiscovery task.

Conclusion

As attorneys, we have all adapted to new technology as our world and our clients have evolved. In the last decade or so, we have moved from Xerox and fax machines to e-filings and Zoom court hearings. The same ethic that drives us to evolve with our clients and competently represent them to the best of our ability will continue to drive us to stay abreast of the exciting changes happening around AI technology within the eDiscovery space.

To discuss this topic more, feel free to connect with me at smoran@lighthouseglobal.com.

[1] "Client-Lawyer Relationship: A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." ABA Model Rules of Professional Conduct, Rule 1.1.
[2] See Comment 8, ABA Model Rules of Professional Conduct, Rule 1.1 (Competence).
[3] Reinsel, David; Gantz, John; Rydning, John. "The Digitization of the World From Edge to Core." November 2018.
Retrieved from https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf. An IDC White Paper, sponsored by Seagate.
June 19, 2020
Blog

Delivering Value: Sharing Legal Department Metrics that Move the Core Business

Below is a copy of a featured blog written by Debora Motyka Jones for CLOC's Legal Operations Blog.

One of the most common complaints I hear from General Counsels and Chief Legal Officers is that they are not able to sit at a table full of their executive peers and provide metrics on how legal is impacting the core business. Sure, they are able to show their own department's spending, tasks, and resource allocation. But wouldn't it be nice to tell the business when revenue will hit? Or to share insights about what organizational behaviors are leading to inefficiency and, if changed, will impact spending? More specifically, as the legal operations team member responsible for metrics, wouldn't it be great to share these key insights with your GC as well as your finance, sales, IT, and other department counterparts? Good news: legal has this type of information, it is just a matter of identifying and mining it!

Keeping metrics has become table stakes in today's legal department, and it often falls on the shoulders of legal operations to track and share those metrics. In fact, CLOC highlights business intelligence as a core competency for the legal operations function. Identifying metrics, cleansing those metrics, and putting them forth can be quite a lift, but once you have the right metrics in place, you are able to make data-driven decisions about how to staff your team and what external resources you need, and to drive efficiencies. If you are still at the early stages of figuring out which metrics you should track for your department, there are many good resources out there, including a checklist of potential metrics by Thomson Reuters and a blog by CLOC on where to start. HBR also conducts a survey so you can see what other departments are seeing – this can be helpful for setting targets and/or seeing how you compare.

When you analyze these and other resources, you will notice that many of the metrics are legal-department centric. Though they are helpful for the department, they are not very meaningful when legal is sitting around the table with executives doing strategic business planning for the business as a whole. So what types of metrics can legal provide in those settings, and how do you capture them? There are many ways to go about this, but I have highlighted a few that can provide a robust discussion at the executive table.

Leading Indicators of Revenue

Most companies are reviewing the top line with some frequency, and in many industries it is a challenge to predict the timing of that revenue. Given its position at the end of the sales cycle, in the contracting phase, legal has excellent access to information about revenue and the timing thereof. Here are the most common statistics your legal department can provide in that area:

New Customer Acquisition: Number of Customer Contracts Signed This Month – Signing up paying customers is a direct tie to revenue, and the legal department holds the keys to one of the last steps pre-revenue: contract signing. By identifying the type of contract that leads to revenue, the legal department is able to share with the business how many new customers are coming online. The metric is typically a raw number and can be compared against the number of contracts in a prior period. If not all customers who sign this contract lead to revenue, you will want to report (or at least know) the ratio of contracts to paying customers in order to give an accurate picture.
Once you have been tracking this metric, you may want to take it a step further and identify contracts that come earlier in the process. For example, in some companies, prospective clients sign NDAs earlier in the sales cycle. By reporting on the number of NDAs signed, you will start to see a ratio of the number of NDAs to the number of MSAs, which can give even earlier visibility into the customer acquisition pipeline.

Expected New Customers: Contracts in Negotiation and Contract Negotiation Length – If your company has negotiated contracts, then reporting on the number of contracts in negotiation can also help with revenue planning. Knowing the typical length of that negotiation will give an indication as to the timing of that revenue.

Expected Revenue: Timing – The final piece of the revenue puzzle is when the above revenue will hit. You can work with the finance team to get the typical time between contract signing and revenue. This will often vary by contract size, so layering in the contract size is helpful. If contract size is not available in the contract itself, sales likely keeps that information and can report the metric if legal cannot.

The two departments most interested in all three of the above metrics are likely to be sales and finance, but depending on the detail reported at the executive level, these may be executive-level metrics. If the above seems like a lot, know that many contract management tools and/or contract artificial intelligence tools can mine your contracts for this information (a toy example of this arithmetic appears at the end of this post).

Efficiency in Business Operations

Legal operations also has a unique ability to look back and reflect on the efficiency of some areas of business operations. More specifically, in the course of litigation and investigations, cross sections of the business are examined with hindsight, and as we all know, hindsight is 20/20. Providing that look-back information to the business can help overall business efficiency. In addition, legal has access to payment clauses in contracts that can ensure efficiency in cash management. Here are some helpful statistics your legal department can provide on the state of business operations:

Early Payment Discount Usage: Number of Contracts with Early Payment and Percentage of Early Payment Discounts Used – When signing vendor contracts, there are often provisions allowing for discounts if certain terms are met (e.g., payment within a short timeframe). Although this may be fresh on everyone's mind at the time of negotiation, it often gets lost over time. Using current technologies, the legal operations team can identify these contracts and provide the number of contracts in which such provisions exist. You can then work with finance to determine how many of these provisions are being leveraged – i.e., is the business actually paying early and taking the percentage reduction? Simply providing visibility into this area can yield material savings for the business.

Data Storage: How Much Data to Keep – A common IT pain point is storage management and having to add servers to keep up with business needs. With cloud technologies, IT often knows how much space it has allocated to each user's mail or individual drives, but what is unknown is how much data users are keeping on their machines or in collaboration tools and shared drives. Through data collections for litigation or regulatory matters, the legal team has access to this information.
This information can help IT understand its storage needs and put in place technologies to minimize storage per person, thereby saving on storage costs.

Business Intelligence from Active Matters – This one isn't a specific metric. Instead, it is focused on the business intelligence that comes out of the legal department's unique position as a reviewer of sets of documents. In litigation or investigations, the legal department has access to a cross section of data that the business doesn't pull together in the regular course of business. Technology is now advanced enough to provide business insights from this data that can be shared with the business as a whole.

Example #1: Artificial intelligence can be used to create compliance models that show correlations between expense reports, trade journals, and sales behavior to identify bad behaviors. Sharing these types of learnings from matters can open up discussions among executives as to which learnings deserve a deeper dive. As an aside, you could imagine a scenario where this same logic is used inversely – combined with revenue, it could identify effective sales behaviors – although that would be a bigger lift, and I would expect the sales department to drive this type of work.

Example #2: The amount of duplicative data is a common metric reported in litigations or investigations. Sharing this with your IT team can highlight an easy storage win, and legal can help craft a plan for how to attack duplicative data, thereby leading to lower storage costs.

I would be remiss if I didn't mention that there are opportunities for the legal department in these metrics as well. By using these metrics, as well as the artificial intelligence mentioned above, legal operations can resource plan and drive savings within the legal department. For example, the number of NDAs and sales contracts can inform staffing. Technology can identify contracts or other documents that are repetitive and automate the handling of those documents. Within litigation and investigations, technology can identify objectively non-responsive data so that it does not need to be collected, as well as identify lower-risk sources that don't require outside counsel review and previously collected data that can be reused.

I hope that with the above metrics, you're able to participate in some great business discussions and show how your legal department is not only effective in its own right but also integral to driving the core business.
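To make the revenue metrics above concrete, here is a minimal sketch in Python. All of the field names and figures are hypothetical placeholders, not from any real contract system; the point is only to show how an NDA-to-MSA ratio and a rough revenue-timing projection fall out of simple contract data:

```python
# Hypothetical contract records exported from a contract management system.
contracts = [
    {"type": "NDA", "signed": "2020-01-10"},
    {"type": "MSA", "signed": "2020-02-15", "value": 250_000},
    {"type": "MSA", "signed": "2020-03-01", "value": 100_000},
    {"type": "NDA", "signed": "2020-03-05"},
    {"type": "NDA", "signed": "2020-03-20"},
]

ndas = [c for c in contracts if c["type"] == "NDA"]
msas = [c for c in contracts if c["type"] == "MSA"]

# Early pipeline indicator: how many NDAs are signed per MSA that closes.
nda_to_msa_ratio = len(ndas) / len(msas)

# Assumed average lag between signature and revenue; in practice this
# figure would come from finance and often varies by contract size.
AVG_DAYS_TO_REVENUE = 45

expected_revenue = sum(c["value"] for c in msas)
print(f"NDA-to-MSA ratio: {nda_to_msa_ratio:.1f}")
print(f"Expected new revenue: ${expected_revenue:,}, "
      f"hitting roughly {AVG_DAYS_TO_REVENUE} days after signature")
```

In a real department, the same calculation would run over a contract management system's export rather than a hand-built list, but the ratios and lag assumptions work the same way.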
January 27, 2022
Blog

Deploying Modern Analytics for Today’s Critical Data Challenges in eDiscovery

Artificial intelligence (AI) has proliferated across industries, in popular culture, and in the legal space. But what does AI really mean? One way to look at it is as technology that lets lawyers and organizations efficiently manage massive quantities of data that no one has been able to analyze and understand before.

While AI tools are no longer brand new, they're still evolving, and so is the industry's comfort and trust in them. To look deeper into the technology available and how lawyers can use it, Lighthouse hosted a panel featuring experts Mark Noel, Director of Advanced Client Data Solutions at Hogan Lovells; Sam Sessler, Assistant Director of Global eDiscovery Services at Norton Rose Fulbright; Bradley Johnston, Senior Counsel, eDiscovery at Cardinal Health; and Paige Hunt, Lighthouse's VP of Global Discovery Solutions.

Some of the key themes and ideas that emerged from the discussion include:

Defining AI
Meeting client expectations
Understanding attorneys' duty of competence
Identifying critical factors in choosing an AI tool
Assessing AI's impact on process and strategy
The future of AI in the legal industry

Defining AI

The term "AI" can be misleading. It's important to recognize that, right now, it's an umbrella term encompassing many different techniques. The most common form of AI in the legal space is machine learning, and the earliest tools were document review technologies in the eDiscovery space. Other forms of AI include deep learning, continuous active learning (CAL), neural networks, and natural language processing (NLP).

While eDiscovery was a proving ground for these solutions, the legal industry now sees more prebuilt and portable algorithms used in a wide range of use cases, including data privacy, cyber security, and internal investigations.

Clients' Expectations and Lawyers' Duties

The broad adoption of AI technologies has been slow, which comes as no surprise to the legal industry. Lawyers tend to be wary of change, particularly when it comes at the hands of techniques that can be difficult to understand. But our panel of experts agreed that barriers to entry are less of an issue at this point, and many lawyers and clients now expect to use AI.

Lawyers and clients have widely adopted AI techniques in eDiscovery and other privacy and security matters. However, the emphasis from clients is less about the technology and more about efficiency. They want their law firms and vendors to provide as much value as possible for their budgets.

Another client expectation is reducing risk to the greatest extent possible. For example, many AI technologies offer the consistency and accuracy needed to reduce the risk of inadvertent disclosures.

Mingled with client expectations is a lawyer's duty to be familiar with technology from a competency standpoint. The legal industry is not yet at the point where lawyers violate their duty of competence if they don't use AI tools. However, the technology may mature to the point where it becomes an ethical issue for lawyers not to use AI.

Choosing the Right AI Tool

Decide Based on the Search Task

There's always the question of which AI technology to deploy and when. While less experienced lawyers might assume the right tool depends on the practice area, the panelists all focused on the search task. Many of the same search tasks occur across practice areas and enterprises.

Lawyers should choose an AI technology that will give them the information they need.
For example, technology-assisted review (TAR) is well suited to classifying documents, whereas clustering is helpful for exploration (a toy illustration of this distinction appears at the end of this post).

Focus More on Features

Teams should consider the various options' features and insights when purchasing AI for eDiscovery. They also must consider the training protocol, process, and workflow. At the end of the day, the results must be repeatable and defensible. Several solutions may be suitable as long as the team can apply a scientific approach to the process and perform early data assessment. Additional factors include connectivity with the organization's other technology and cost.

The process and results matter most. Lawyers are better off looking at the system as a whole and its features in deciding which AI tech to deploy, instead of focusing on the algorithm itself.

Although not strictly necessary, it can be helpful to choose a solution the team can apply to multiple problems and tasks. Some tools are more flexible than others, so reuse is something to consider.

Some Use Cases Allow for Experimentation

There's also the choice between a well-established solution and a lesser-known technology. Again, defensibility may push a team toward a well-known and respected tool. However, teams can take calculated risks with newer technologies when dealing with exploratory and internal tasks.

A Custom Solution Isn't Necessary

The participants noted the rise in premade, portable AI solutions more than once. Rarely will it benefit a team to create a custom AI solution from scratch. There's no need to reinvent the wheel. Instead, lawyers should always try an off-the-shelf system first, even if it requires fine-tuning or adjustments.

AI's Impact on Process

The process and workflow are critical no matter which solution a team chooses. Whether for eDiscovery, an internal investigation, or a cyber security incident, lawyers need accurate and defensible results.

Some AI tools allow teams to track and document the process better than others. However, whatever the tool's features, the lawyers must prioritize documentation. It's up to them to thoughtfully train the chosen system, create a defensible workflow, and log their progress.

As the adage goes: garbage in, garbage out. The effort and information the team puts into the AI tool will influence the validity of the results. The tool itself may slightly influence the team's approach. However, any approach should flow from a scientific process and evidence-based decisions.

AI's Influence on Strategy

There's a lot of potential for AI to help organizations more strategically manage their documents, data, and approach to cases. Consider privileged communications and redactions. AI tools enable organizations to review and classify documents as their employees create them, long before litigation or another matter arises. Classification coding can travel with the document, from one legal matter to another and even across vendors, saving organizations time and money.

Consistency is relevant, too. Organizations can use AI tools to improve the accuracy and uniformity of identifying, classifying, and redacting information. A well-trained AI tool can offer better results than people, who may be inconsistently trained, biased, or distracted.

Another factor is reusing AI technology for multiple search tasks. Depending on the tool, an organization can use it repeatedly. Or it can use the results from one project to the next. That may look like knowing which documents are privileged ahead of time or maintaining an ongoing redaction log.
It can also look like using a set of documents to better train the algorithm for the next task.

The Future of AI

The panelists wrapped up the webinar by discussing what they expect for the future of AI in the legal space. They agreed that work product reuse and the concept of data lakes will become even greater areas of focus. Reuse can significantly impact tasks that have traditionally carried a huge cost burden, such as privilege reviews and logs, sensitive data identification, and data breach and cyber incidents.

Another likelihood is AI technology expanding to more use cases. While lawyers tend to use these tools for similar search tasks, the technology itself has potential for many other legal matters, both adversarial and transactional.

To hear more of what the experts had to say, watch the webinar, "Deploying Modern Analytics for Today's Critical Data Challenges."
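As a loose illustration of the classification-versus-exploration distinction the panelists drew, here is a minimal sketch using scikit-learn on toy data. It is not any panelist's workflow or a real TAR product; it simply contrasts a supervised classifier (which, like TAR, learns from attorney coding) with unsupervised clustering (which groups documents with no labels at all):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

docs = [
    "invoice for consulting services rendered in March",
    "payment schedule attached per our agreement",
    "lunch on Friday? let me know",
    "happy birthday to everyone on the team",
]

# Turn the documents into TF-IDF feature vectors.
X = TfidfVectorizer().fit_transform(docs)

# Classification (TAR-like): learn responsive vs. not from coded examples.
labels = [1, 1, 0, 0]  # hypothetical attorney coding decisions
clf = LogisticRegression().fit(X, labels)
print("Predicted responsiveness:", clf.predict(X))

# Clustering (exploratory): group similar documents without any labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("Cluster assignments:", clusters)
```

The classifier answers a question you already know how to ask ("is this responsive?"), while the clustering output is something to browse when you do not yet know what the data contains.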
April 22, 2020
Blog

Data Reuse – Small Changes for Big Benefits

What is data reuse? There are many different flavors, and not everyone thinks about it the same way. In the context of eDiscovery, subject-matter-specific work product in the form of responsiveness or issue coding often comes to mind and is then immediately dismissed as untenable, given that the definitions for these can change from matter to matter. This is just one tiny piece of what's possible, however. We need to consider the entire EDRM from end to end. What else has already been done, and what can be gained from it?

First, there's the source data itself. The underlying electronically stored information (ESI) is foundational to the reuse of data as a whole. Many corporations deal with frequent litigation and investigations, and those matters often include the same or at least overlapping players, i.e., the "frequent flier" custodians. This means the same data is relevant to multiple matters, which means it can be reused. There's the potential for a one-to-many relationship here. In other words, instead of starting from scratch with each new project by going back to the same sources to collect the same data, why not take stock of what has been collected already? Compare the previously collected inventory to what is required for each specific matter, and then return to the well for the difference as needed. It may be as simple as a "refresh" to capture a more recent date range, or, even better, there's no new collection to be done at all.

Next up is the processed data. Once it's collected, a lot of time, effort, and money are spent transforming ESI into a more consumable format. Extracting and indexing the metadata so that it can easily be searched and reviewed in your platform of choice takes real effort. Considering the lift, utilizing data that has already undergone processing makes a lot of sense. Depending on volume, significant savings in timeline and fees are often realized, and this is not a one-time thing. The same data often comes up over and over across multiple matters, compounding savings over time.

Finally, after processing comes review, which is where reusing existing work product comes in. This isn't limited to relevance calls, which may or may not apply consistently across matters. There's limited application for the reuse of subject-matter-specific work product, as mentioned earlier. The real treasure trove is all the different types of static work product – the kinds that remain the same across matters regardless of the relevance criteria – and there are so many! One valuable step that is often overlooked is the ability to dismiss portions of the data population upfront. Often there is some chunk of data that will simply never be of interest. These are the "junk" or "objectively non-relevant" files that can clog a review. For example, automatic notifications, spam advertisements, and other mass mailings can contribute a lot of volume and rarely have any chance of including relevant content. Also, think about redactions and what often drives them: PII, PHI, trade secrets, IP, etc. These are a pain to deal with, so why force the need to do so repeatedly? And what about privilege? Identifying it is one thing, and then there are the incredibly time-intensive privilege log entries that follow. These don't change, and the cost to handle them can be steep. On top of that, they are incredibly sensitive, so ensuring accuracy and consistency is key.
That's pretty difficult to accomplish from matter to matter if you rely on different reviewers starting over each time.

At the end of the day, no one wants to waste time and effort on unnecessary tasks, especially considering how often intense deadlines loom right out of the gate. The key is understanding what has already been done that overlaps with the matter at hand and leveraging it accordingly. In other words, know what you have and use it to avoid performing the same task twice wherever possible.
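As a minimal sketch of the idea, here is one way static work product could be carried forward, assuming documents are identified by a content hash (as most dedupe workflows do). The records and coding below are hypothetical, not from any particular platform:

```python
import hashlib

def doc_hash(text: str) -> str:
    """Identify a document by its content, the way dedupe workflows do."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Hypothetical static work product from a prior matter, keyed by hash:
# coding that holds regardless of the new matter's relevance criteria.
prior_coding = {
    doc_hash("Automated notification: your build succeeded"): {"junk": True},
    doc_hash("Patient SSN redacted per protocol"): {"needs_redaction": True},
    doc_hash("Advice of counsel re: pending claim"): {"privileged": True},
}

new_collection = [
    "Automated notification: your build succeeded",
    "Q3 sales forecast attached",
    "Advice of counsel re: pending claim",
]

# Carry forward prior coding for exact duplicates; only genuinely new
# documents fall through to fresh review.
for text in new_collection:
    carried = prior_coding.get(doc_hash(text))
    if carried:
        print(f"Reusing prior coding {carried} for: {text}")
    else:
        print(f"New to review: {text}")
```

Real platforms layer near-duplicate detection and family handling on top of this, but the core mechanic is the same: key the static work product to the content, not to the matter.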
December 8, 2022
Blog

Challenging 3 Myths About Document Review During Second Requests

Legal teams approaching a Hart-Scott-Rodino (HSR) Second Request may hold false assumptions about what is and isn't possible with document review. Often these appear as necessary evils: compromises in efficiency and precision that seem inevitable given the unique demands of Second Requests. In fact, these compromises are only necessary in the context of legacy technology and tools. Using more current tools, legal teams can transcend many of these compromises and do more with document review than they thought possible.

Document review during an HSR Second Request is notoriously arduous. Legal teams must review potentially millions of documents in a very short timeframe, as well as negotiate with regulators about custodians and other parameters that could change the scope of the data under review.

Until recently, legal teams' ability to meet these demands was limited by technology. It wasn't possible to be precise and thorough while also being extremely quick. As a result, attorneys adopted certain conventions and concessions around the timing of review steps and how much risk to accept.

Technology has evolved since then. For example, tools powered by advanced artificial intelligence (AI) utilize deep learning models and big data algorithms that make review much faster, more precise, and more resilient than legacy tools. However, legacy thinking around how to prepare for Second Requests remains. Many attorneys and teams remain beholden to the constraints imposed on them by tools of the past. New review tools enable new approaches and benefits, eliminating these constraints. Here's a look at three of the most common myths surrounding document review during Second Requests and how modern review tools prove them false.

Myth 1: Privilege review must come after responsive review

The classic approach to reviewing documents during a Second Request is to start by creating a responsive set and then review that set for privileged documents. This takes time (an extremely precious commodity during a Second Request), but with legacy tools these steps are unavoidable. The linear nature of legacy review models requires responsive review to happen first, because privilege review over an entire dataset of potentially millions of records simply is not feasible. Tools leveraging advanced AI, however, are well suited to supporting scalable privilege analysis with big data. Rather than saving privilege review for later, legal teams can conduct it simultaneously with responsive review (a simplified sketch of this parallel approach appears at the end of this post). This puts documents in front of human reviewers sooner and shaves invaluable hours off the timeline as a whole.

Myth 2: Producing privileged documents to regulators is inevitable

Inadvertently disclosing privileged documents to federal agencies is so common that the Federal Rules of Civil Procedure give parties some latitude to do so without penalty. Even so, an inadvertent disclosure during a Second Request still risks inviting additional questions and scrutiny from regulators and undermining the deal.

Although advanced AI tools cannot eliminate the possibility of inadvertent disclosure, these automated solutions can vastly reduce it. In one recent Second Request, a tool using advanced AI was able to identify and withhold 200,000 privileged documents that a legacy tool had failed to catch.
This spared the client from costly exposure and clawbacks.

Myth 3: There's no time to know the details of what you're producing

With massive datasets and very little time to review, legal teams get used to producing documents without fully knowing what's in them. This can cause surprise and pain down the line when regulators ask for clarification about information the team isn't prepared to address.

With advances in technology, teams can gain more clarity using tools that identify key documents. These tools conduct powerful searches of both text and document attributes, using complex and dynamic search strings managed by linguistic experts. Out of a million or more documents, key document identification can surface the one or two thousand that speak precisely to attorneys' priorities, efficiently helping counsel prepare for testimony and other proceedings.

What's your Second Request strategy?

Second Requests will always be intense. Advancements in eDiscovery technology prove the limits of the past don't apply today. With technology moving beyond legacy tools, it is time for teams to move beyond legacy thinking as well.

For more detail about how advancements in technology help teams meet the demands of Second Requests, download our eBook.
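As a rough illustration of Myth 1's point (not any particular product's workflow), here is a minimal sketch in which a single pass over the corpus scores every document for responsiveness and privilege at the same time, rather than chaining privilege review behind the responsive set. The seed documents and coding are invented for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed sets coded by attorneys.
seed_docs = [
    "merger pricing strategy for the northeast market",
    "cafeteria menu for next week",
    "legal advice from counsel regarding antitrust exposure",
    "reminder: parking garage closed Friday",
]
responsive = [1, 0, 1, 0]
privileged = [0, 0, 1, 0]

vec = TfidfVectorizer().fit(seed_docs)
resp_model = LogisticRegression().fit(vec.transform(seed_docs), responsive)
priv_model = LogisticRegression().fit(vec.transform(seed_docs), privileged)

# One pass over the corpus yields both scores at once, so privilege
# review does not wait for the responsive set to be finalized.
corpus = ["counsel's advice on the merger", "updated cafeteria menu"]
X = vec.transform(corpus)
for doc, r, p in zip(corpus,
                     resp_model.predict_proba(X)[:, 1],
                     priv_model.predict_proba(X)[:, 1]):
    print(f"{doc!r}: responsive={r:.2f}, privileged={p:.2f}")
```

Production systems use far richer models and validation protocols, but the structural point holds: nothing forces the two classifications to run sequentially.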
November 30, 2020
Blog

Building Your Case for Cutting-Edge AI and Analytics in Five Easy Steps

As the amount of data generated by companies increases exponentially each year, leveraging artificial intelligence (AI), analytics, and machine learning is becoming less of an option and more of a necessity for those in the eDiscovery industry. However, some organizations and law firms are still reluctant to utilize more advanced AI technology. There are different reasons for this reluctance, including fear of the learning curve, uncertainty around cost, and unknown return on investment. But where there is uncertainty, there is often great opportunity. Adopting AI provides an excellent opportunity for ambitious legal professionals to act as the catalysts for revitalizing their organization's or law firm's outdated eDiscovery model. Below, I've outlined a simple, five-step process that can help you build a business case for bringing on cutting-edge AI solutions to reduce cost, lower risk, and improve win rates for both organizations and law firms.

Step 1: Find the Right Test Case

You will want to choose the best possible test case that highlights all the advantages that newer, cutting-edge AI solutions can provide to your eDiscovery program.

One of the benefits of newer solutions is that they can be utilized in a much wider variety of cases than older tools. However, when developing a business case to convince reluctant stakeholders, bigger is better. If possible, select a case with a large volume of data. This will enable you to show how effectively your preferred AI solution can cull large volumes of data quickly compared to your current tools and workflows.

Also try to select a case with multiple review issues, like privilege, confidentiality, and protected health information (PHI)/personally identifiable information (PII) concerns. Newer tools hitting the market today have a much higher degree of efficiency and accuracy because they are able to run multiple algorithms and search within metadata. This means they are much better at quickly and correctly identifying types of information that would need to be withheld or redacted than older AI models that use a single algorithm to search text alone.

Finally, if possible, choose a case that has some connection to, or overlap with, older cases in your (or your client's) legal portfolio. For a law firm, this means selecting a case where you have access to older, previously reviewed data from the same client (preferably in the same realm of litigation). For a corporation, this means choosing a case, if possible, that shares a common legal nexus or overlapping data/custodians with past matters. This way, you can leverage the ability of new technology to reuse and analyze past attorney work product on previously collected data.

Step 2: Aggregate the Data

Once you've selected the best test case, as well as any previous matters from which you want to analyze data, the AI solution vendor will collect the respective data and aggregate it into a big data environment. A quality vendor should be able to aggregate all data, prior coding, and other key information, including text and metadata, into a single database, even if the previously reviewed data was hosted by different providers in different databases and reviewed by different counsel.

Step 3: Analyze the Data

Once all data is aggregated, it's time for the fun to begin. Cutting-edge AI and machine learning will analyze all prior attorney decisions from previous data, along with metadata and text features found within all the data.
Using this analysis, it can then identify key trends and provide a holistic view of the data you are analyzing. This type of powerful technology is completely new to the eDiscovery field and something that will certainly catch the eye of your organization or your clients.

Step 4: Showcase the Analytical Results

Once the data has been analyzed, it's time to showcase the results to key decision makers, whether that is your clients, partners, or in-house eDiscovery stakeholders. Create a presentation that drills down to the most compelling results and clearly illustrates how the tool will create efficiency, lower costs, and mitigate risk, such as:

Large numbers of identical documents that had been previously collected, reviewed, and coded non-responsive multiple times across multiple matters

Large percentages of identical documents picked up by your privilege screen (and thus thrust into costly privilege re-review) that have never actually been coded privileged in any matter

Large numbers of identical documents that were previously tagged as containing privileged or PII information in past matters (thus eliminating the need to review them for those issues in the current test case)

Large percentages of documents that have been re-collected and re-reviewed across many matters

Step 5: Present the Cost Reduction

Your closing argument should always focus on the bottom line: how much money will this tool save your firm, client, or company? This should be as easy as taking the compelling analytical results above and calculating their monetary value (a worked example follows at the end of this post):

What is the monetary difference between conducting a privilege review in your test case using your traditional privilege screen vs. reusing privilege coding and redactions from previous matters?

What is the monetary difference between conducting an extensive search for PII or PHI in your test case vs. reusing the PII/PHI coding and redactions from previous matters?

How much money would you save by cutting out a large percentage of manual review in the test case by culling non-responsive documents identified by the tool?

How much money would you save by eliminating a large percentage of privilege "false positives" that the tool identified by analyzing previous attorney work product?

How much money will you (or your client) save in the future by continuing to reuse attorney work product, case after case?

In the end, if you've selected the right AI solution, there will be no question that bringing on best-of-breed AI technology will result in a better, more streamlined, and more cost-effective eDiscovery program.
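To make the Step 5 arithmetic concrete, here is a minimal sketch of the savings calculation. Every volume and rate below is a hypothetical placeholder; substitute the figures from your own test-case analysis:

```python
# Hypothetical inputs: replace with figures from your own analysis.
REVIEW_COST_PER_DOC = 1.50       # blended cost of human review, per document
docs_in_test_case = 500_000

reusable_priv_coding = 40_000    # privilege calls reusable from past matters
reusable_pii_coding = 25_000     # PII/PHI coding reusable from past matters
culled_nonresponsive = 200_000   # docs flagged as objectively non-responsive
false_priv_positives = 15_000    # screen hits never actually coded privileged

avoided_review = (reusable_priv_coding + reusable_pii_coding
                  + culled_nonresponsive + false_priv_positives)
savings = avoided_review * REVIEW_COST_PER_DOC
remaining = docs_in_test_case - avoided_review

print(f"Documents removed from manual review: {avoided_review:,}")
print(f"Documents still requiring review:     {remaining:,}")
print(f"Estimated savings: ${savings:,.2f}")
```

Even with conservative inputs, framing the results this way turns abstract "efficiency" claims into a dollar figure that decision makers can compare against the tool's cost.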
June 7, 2021
Blog

Big Data Challenges in eDiscovery (and How AI-Based Analytics Can Help)

It's no secret that big data can mean big challenges in the eDiscovery world. Data volumes and sources are exploding year after year, in part due to a global shift to digital forms of communication in working environments (think emails, chat messages, and cloud-based collaboration tools vs. phone calls, in-person meetings, and paper memorandums) as well as the rise of the cloud (which provides cheaper, more flexible, and virtually limitless data storage capabilities).

This means that with every new litigation or investigation requiring discovery, counsel must collect massive amounts of potentially relevant digital evidence, host it, process it, identify the relevant information within it (as well as pinpoint any sensitive or protected information within that relevant data), and then produce the relevant data to the opposing side. Traditionally, this process then starts all over again with the next litigation – often beginning back at square one by collecting the exact same data for the new matter, without any of the insights or attorney work product gained from the previous matter.

This endless cycle is not sustainable as data volumes continue to grow exponentially. Fortunately, just as advances in technology have led to increasing data volumes, advances in artificial intelligence (AI) technology can help tackle big data challenges. Newer analytics technology can use multiple algorithms to analyze millions of data points across an organization's entire legal portfolio (including metadata, text, past attorney work product, etc.) and provide counsel with insights that improve efficiency and curb the endless cycle of reinventing the wheel on each new matter. In this post, I'll outline the four main challenges big data can pose in an eDiscovery environment (also called "the Four Vs") and explain how cutting-edge big data analytics tools can help tackle them.

The "Four Vs" of Big Data Challenges in eDiscovery

1. The volume, or scale, of data

As noted above, a primary challenge in matters involving discovery is the sheer amount of data generated by employees and organizations as a whole. For reference, most companies in the U.S. currently have at least 100 terabytes of data stored, and it is estimated that by 2025, worldwide data will grow 61 percent to 175 zettabytes.

As organizations and individuals create more data, data volumes for even routine or small eDiscovery matters are exploding in correlation. Unfortunately, court discovery deadlines and opposing counsel's production expectations rarely adjust to accommodate this ever-growing surge in data. This can put organizations and outside counsel in an impossible position if they don't have a defensible and efficient method to cull irrelevant data and/or accurately identify important categories of data within large, complex datasets. Being forced to manually review vast amounts of information within an unrealistic time period can quickly become a pressure cooker for critical mistakes – where review teams miss important information within a dataset and thereby either produce damaging or sensitive information to the opposing side (e.g., attorney-client privileged material, protected health information, trade secrets, non-relevant information)
or, conversely, fail to find and produce requested relevant information.

To overcome this challenge, counsel (both in-house and outside counsel) need better ways to retain and analyze data – which is exactly where newer AI-enabled analytics technology, built to manage large volumes of data, can help. The AI-based analytics technology being built right now is developed for scale, meaning new technology can handle large caseloads, easily add data, and create feedback loops that run in real time. Each document that is reviewed feeds back into the algorithm, making the analysis even more precise moving forward. This differs from older analytics platforms, which were not engineered to meet the challenges of today's data volumes – resulting in review delays or, worse, inaccurate output that leads to critical mistakes.

2. The variety, or different forms, of data

In addition to the volume of data increasing, the diversity of data sources is also increasing. This presents significant challenges, as technologists and attorneys must continually learn how to process, search, and produce newer and increasingly complicated cloud-based data sources. The good news is that advanced analytics platforms can also help manage new data types in an efficient and cost-effective manner. Some newer AI-based analytics platforms can provide a holistic view of an organization's entire legal data portfolio and identify broad trends and insights – inclusive of every variety of data present within it. These insights can help reduce cost and risk, and sometimes enable organizations to upgrade their entire eDiscovery program. A holistic view of organizational data can also be helpful for outside counsel because it enables better and more strategic legal decisions for individual matters and investigations.

3. The velocity, or speed, of data

Within eDiscovery, the velocity of data refers not only to the speed at which new data is generated, but also the speed at which data can be processed and analyzed. With smaller data volumes, it was manageable to put all collected data into a database and analyze it later. As data volumes increase, however, this method is expensive and time consuming, and may lead to errors and data gaps. Once again, a big data analytics product can help overcome this challenge because it is capable of rapidly processing and analyzing iterative volumes of collected data on an ongoing basis. By processing data into a big data analytics platform at the outset of a matter, counsel can quickly gain insights into that data, identifying relevant information and potential data gaps much earlier in the process. In turn, this can mean lower hosting costs, as objectively non-responsive data can be jettisoned before it is ever hosted. The ability of big data analytics platforms to keep pace with changing data also enables counsel and reviewers to be more agile and evolve alongside the constantly shifting landscape of the discovery itself (e.g., changes in scope, custodians, responsive criteria, and court deadlines).

4. The veracity, or uncertainty, of data

Within the eDiscovery realm, the veracity of data refers to the quality of the data (i.e., whether the data that a party collects, processes, and produces is accurate and defensible and will satisfy a discovery request or subpoena). The veracity of the data produced to the opposing side in a litigation or investigation is therefore of the utmost importance, which is why data quality control steps are key at every discovery stage.
At the preservation and collection stages, counsel must verify which custodians and data sources may have relevant information. Once that data is collected and processed, it must be checked again for accuracy to ensure that the collection and processing were performed correctly and there is no missing data. Then, as data is culled, reviewed, and prepared for production, multiple quality control steps must take place to ensure that the data slated to be produced is relevant to the discovery request and categorized correctly, with all sensitive information appropriately identified and handled. As data volumes grow, ensuring the veracity of data only becomes more daunting.

Thankfully, big data analytics technology can also help safeguard the veracity of data. Cutting-edge AI technology can provide a big-picture view of an organization's entire legal portfolio, enabling counsel to see which custodians and data sources contain data that is consistently produced as relevant (or, in the alternative, has never been produced as relevant) across all matters. It can also help identify missing data by providing counsel with a holistic view of what was collected from data sources in past matters. AI-based analytics tools can also help ensure data veracity on the review side within a single matter by identifying the inevitable inconsistencies that happen when humans review and categorize documents within large volumes of data (i.e., one reviewer may categorize a document differently than another reviewer who reviewed an identical or very similar document, leading to inconsistent work product). Newer analytics technology can more efficiently and accurately identify those inconsistencies during the review process so that they can be remedied early on, before they cause problems.

Big Data Analytics-Based Methodologies

As shown above, AI-based big data analytics platforms can help counsel manage growing data volumes in eDiscovery. For a more in-depth look at how a cutting-edge analytics platform and big data methodology can be applied to every step of the eDiscovery process in a real-world environment, please see Lighthouse's white paper titled "The Challenge with Big Data." And if you are interested in this topic or would like to talk about big data and analytics, feel free to reach out to me at KSobylak@lighthouseglobal.com.
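As a simplified sketch of the consistency check described above, here is one way to flag duplicate documents that received conflicting coding, assuming duplicates are identified by a content hash. The review records are invented for the example:

```python
import hashlib
from collections import defaultdict

# Hypothetical review records: (document text, reviewer's coding decision).
review_log = [
    ("Re: pricing call notes", "responsive"),
    ("Re: pricing call notes", "non-responsive"),  # same doc, different call
    ("Holiday schedule", "non-responsive"),
    ("Holiday schedule", "non-responsive"),
]

# Group coding decisions by document content.
codings = defaultdict(set)
for text, decision in review_log:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    codings[(digest, text)].add(decision)

# Surface duplicate documents coded inconsistently, so they can be
# remedied before production.
for (digest, text), decisions in codings.items():
    if len(decisions) > 1:
        print(f"Inconsistent coding for {text!r}: {sorted(decisions)}")
```

Commercial tools extend this idea to near-duplicates and email threads, but the underlying check is the same: identical content should carry identical coding.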
May 20, 2020
Blog

Big Data and Analytics in eDiscovery: Unlock the Value of Your Data

The current state of eDiscovery is complex, inefficient, and cost-prohibitive as data types and volumes continue to explode without bounds. Organizations of all sizes are bogged down in enormous amounts of unresponsive and duplicative electronically stored information (ESI) that still makes it to the review stage, persistently the most expensive phase of eDiscovery.

Data is at the center of this conundrum, and it presents itself in a number of forms, including:

Scale of Data – In the era of big data, the volume, or amount of data generated, is a significant issue for large-scale eDiscovery cases. By 2025, IDC predicts that 49 percent of the world's stored data will reside in public cloud environments and worldwide data will grow 61 percent to 175 zettabytes.

Different Forms of Data – While the volume of ESI is dramatically expanding, its diversity and variety are also greatly increasing, and a big piece of the challenge involved with managing big data is the varying kinds of data the world is now generating. Gone are the days in eDiscovery when the biggest challenge was processing and reviewing conventional computer-based data like email, spreadsheets, and documents.

Analysis of Data – Contending with large amounts of data creates another significant issue around the velocity, or speed, of the data that's generated, as well as the rate at which that data is processed for collection and analysis. The old approach is to put everything into a database and try to analyze it later. But in the era of big data, the old ways are expensive and time-consuming, and the much smarter method is to analyze data in real time as it is generated.

Uncertainty of Data – Of course, data, whether big or small, must be accurate. If you're regularly collecting, processing, and generally amassing large amounts of data, none of it will matter if the data is unreliable or untrustworthy. The data to be analyzed must first be accurate and untainted.

When you combine all of these aspects of data, it is clear that eDiscovery is actually a big data and analytics challenge!

While big data and analytics have historically been considered too complex and elaborate, the good news is that massive progress has been made in these fields over the past decade. Access to the right people, process, and technology in the form of packaged platforms is more available than ever.

Effective utilization of a robust and intelligent big data and analytics platform enables organizations to revamp their inefficient and non-repeatable eDiscovery workflows by intelligently learning from past cases. A powerful big data and analytics tool utilizes artificial intelligence (AI) and machine learning to create customized data solutions by harvesting data from all of a client's cases, ultimately creating a master knowledge base in one big data and analytics environment.

In particular, the most effective big data and analytics technology solution should provide:

Comprehensive Analysis – The ability to integrate disparate data sources into a single holistic view.
This view gives you actionable insights, leading to better decision making and more favorable case outcomes.

Insightful Access – Overall and detailed visibility into your data landscape in a manner that empowers your legal team to make data-driven decisions.

Intelligent Learnings – The ability to learn as you go through a powerful analytics and machine learning platform that enables you to make sense of vast amounts of data on demand.

One of the biggest mistakes organizations make in eDiscovery is forgoing big data and analytics as drivers of greater efficiency and cost savings. Most organizations hold enormous amounts of untapped knowledge currently locked away in archived or inactive matters. With big data and analytics platforms more accessible than ever, the opportunity to learn from the past to optimize the future is paramount.

If you are interested in this topic or just love to talk about big data and analytics, feel free to reach out to me at KSobylak@lighthouseglobal.com.
October 30, 2019
Blog

Best Practices for Embracing the SaaS eDiscovery Revolution

It's an exciting time in the world of legal tech, as SaaS eDiscovery solutions, and cloud computing in general, represent enormous potential: nearly unlimited storage, power, and scalability, whether you're handling small or very large matters. eDiscovery was once seen as something only big firms needed to deal with for large cases, but as electronic communication in the workplace (like email and chat) has become the norm, eDiscovery has become a typical domain for law firms of all shapes and sizes. It makes perfect sense that the proliferation of an easy-to-use, cost-effective solution is the future for an industry right on the cusp of its next iteration.

So you're ready to embrace this next era of eDiscovery, and you've decided to adopt a self-service SaaS tool within your firm? In my previous blog, I outlined the top three reasons why SaaS makes the most sense for law firms in the age of cloud storage, especially as new and improved self-service tools incorporate the latest technology, are easy to use, and have significantly improved the efficiency of the typically arduous and expensive on-prem eDiscovery process.

But transitioning even some of your firm's in-house eDiscovery process to a SaaS solution requires careful thought around the complexities involved with security, solution support, and business continuity. To make the transition to SaaS as smooth as possible, it's important to tailor your solution to your specific environment and create an implementation plan that will set you up for success. Here are a few best practices to consider when you're ready to embrace the self-service SaaS eDiscovery revolution and leave your cumbersome on-prem environment behind.

Eliminate your on-prem applications and infrastructure. Many firms have a patchwork of on-prem tools that they use for different phases of their eDiscovery workflow. A great starting point for eradicating the expense, headache, and risk that comes with maintaining your own infrastructure is to get rid of those old on-prem applications altogether and start fresh with a SaaS tool that will handle your entire workflow. That means choosing one comprehensive tool that allows you to create, upload, and process matters while also enabling you to manage your users across matters and locations from a single place. You'll not only eliminate administrative headaches, you'll no longer have to worry about managing data and will be free to concentrate on analyzing it while your SaaS solution provider takes on the security and infrastructure management for you.

Leverage best-of-breed tools. A common problem for consumers of on-prem eDiscovery software has been the need to pull together multiple technologies to process, review, perform analytics on, and produce data. While you've been working with that complicated patchwork of tools you've licensed and tried to maintain within your own IT environment, new versions of best-of-breed tools have evolved for everything from processing to analytics to review and production. Now that you've chosen one SaaS tool that can handle your full eDiscovery workflow, your new tool should provide you with access to the most updated and advanced tools across the EDRM without any maintenance or upgrades ever needing to be managed on your end.

Ensure your solution is supported.
Once you're on board with a streamlined eDiscovery workflow, with no infrastructure risks or administrative headaches and access to the most modern and best available eDiscovery tools, what happens when a matter becomes too large or unwieldy, or you simply need a more traditional full-service support model? For this case, make sure you choose a SaaS tool backed by a solution provider who can easily transition you from a self-service, on-demand eDiscovery model to one where they take over when you need them to. Speed of implementation is also something to consider: while on-prem systems can take months to install and implement, a self-service SaaS tool can be up and ready to use within days.

Now that you've made the smart decision to modernize your eDiscovery program and implement a self-service SaaS solution, it's time to use these best practices to eliminate your expensive and risky infrastructure, streamline your workflow, adopt the most advanced best-of-breed tools, and benefit from a self-service tool that's also backed with the peace of mind of full support from your solution provider.
September 24, 2020
Blog

Automation of In-House Legal Tasks: How and Where to Begin

Legal operations departments aim to support the delivery of legal services in an efficient manner. To that end, resource management and solving problems through technology are core responsibilities of the department. But the tasks of a legal department vary widely: answering legal phone calls, filing patents, reviewing and approving contracts, and litigating, just to name a few. With such a varied workload, it can be difficult to identify what to automate. To help, I have put together a brief overview of where to start.

Step 1: Identification

Start by identifying the tasks that are repetitive. One of the best ways I have found to do this is to set up a quick 15-minute discussion with three to five representatives from different functional areas of your legal team and from different levels (e.g., individual contributor, manager, function head). In that meeting, ask them one or all of the following questions:

What tasks do you wish your team no longer had to do?
What tasks do you want to be replaced by robots in the future?
What tasks are low value but still consume a lot of your team's time?

You should not spend too much time here – the goal is to identify a quick list that is top of mind for people. From these interviews, create a list for further vetting. In case you come up empty-handed or aren't able to get time with people within legal, here is a list of items that are commonly automated and that we would expect to come up:

Contract Automation – Self-service retrieval of boilerplate contracts (e.g., NDAs); self-service building of common contracts (e.g., clause selection for vendor contracts, developer agreements); requests for review, negotiation, and signature of other contracts

Legal Team Approvals – Marketing document approvals; budget approval for any legal team spend

Legal Assistance Requests (Intake) – Legal research requests; requests for legal advice on an issue; need for outside counsel

Patent Management – Alerts for filing and renewal deadlines; automatically managed workflows for submissions

Select one or two items from your list and then validate them with your boss and/or general counsel. You want to understand whether others agree on the impact automation will make and identify any potential concerns.

Step 2: Build vs. Buy

Whether to purchase third-party software or build your own tool internally is always a good question to start with. Building your own tool gives you exactly what you want, often with very little need to change your process. But it is more resource-intensive, both for the build and the maintenance. Buying off-the-shelf software limits you to what's commercially available, but it takes the load off your development resources.

For some, build vs. buy may be an easy question, as they may not have access to development resources. Others may not have any budget for an external tool and/or may be required to use internal teams. Most, however, fall in the middle, with some access to resources and some budget (but usually not enough of either – that's a whole other topic).

If you fall into this latter category, you will have to analyze your options. Your organizational culture will dictate what depth of analysis is needed. Regardless of the level of detail, the process is the same. The easiest place to start is by surveying what is commercially available. Even if you decide to build, knowing what software is out there, what features are available, and the general costs is helpful. Next, it is helpful to get an approximate cost of the build and maintenance if done internally.
This can be a rough order of magnitude based on estimates from other internal tools developed, or a more detailed estimate developed with the engineering team. Once you have the costs, you will want to add some information about the pros and cons of each solution – e.g., time to build and implement, technology dependencies (if known), and other considerations (e.g., we are moving to the cloud in six months and don't yet know the impact). Once you have this analysis, you can put forth a recommendation to your boss and whomever else is required to decide how to proceed. (A rough sketch of this kind of cost comparison appears at the end of this post.)

Step 3: Design

Now that you have a decision, you can move on to design. This is the most critical stage, as it is where you determine exactly what results your automation will produce. The first thing to do here is to map out your current internal process, including who does what. Make sure a representative of each group reviews the process diagram and validates it.

Once you have the process mapped, you're ready to work with the development team. If you are buying a solution, you should work closely with the software provider's onboarding team to overlay your current process with the capabilities of the software. Note where the software does not support your process and where changes will need to be made. If you adjust your process, be sure to involve the same representatives who helped with the initial diagram to provide feedback on any proposed changes.

If you are building the solution, you will meet with your internal product resource. This person (or people) will want to understand the process diagram and may even want to watch people go through the process so they can understand user behavior. They will then likely convert your diagram into user stories that developers will build against. Be as specific as possible in this process. This resource will represent your voice with the developers, so you want them to really understand the nuances of the process.

Expect some iteration back and forth during this stage; although I have simplified it here, this will be a long stage and the most important one.

Step 4: Implementation

The final stage of the process is implementation. Start with a pilot of the automation. Either select a small use case or a small group of users, and validate that your automation functions as planned. During this pilot, it is really helpful to have resources from your software provider or development team readily available to make changes and help answer questions. You should also keep track of how the automation is performing versus your expectations. For example, if you expected it to save time, create a way to track the time it saves and report on that metric.

After a successful pilot and the necessary refinement, you can move on to your full rollout. Create a plan that includes the deployment of the technology, training, feedback, and adjustment. Also identify the longer-term maintenance strategy, including ways to continue gathering feedback and improving the automation over time.

There are lots of great publications that go into further detail about each of the steps above, but hopefully this points you in the right direction.
Once deployed, automation can be a very powerful tool that augments your team without adding additional FTEs.

To discuss this topic more, please feel free to reach out to me at DJones@lighthouseglobal.com.
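As a minimal sketch of the Step 2 cost comparison, here is a rough multi-year total-cost-of-ownership estimate. All figures are hypothetical placeholders; in practice they would come from vendor quotes and engineering estimates:

```python
# Hypothetical rough-order-of-magnitude figures; replace with your own
# vendor quotes and internal engineering estimates.
YEARS = 3

buy = {
    "annual_license": 60_000,
    "onboarding": 15_000,
}
build = {
    "initial_build": 120_000,      # engineering estimate for version 1
    "annual_maintenance": 30_000,  # ongoing fixes, upgrades, support
}

buy_tco = buy["onboarding"] + buy["annual_license"] * YEARS
build_tco = build["initial_build"] + build["annual_maintenance"] * YEARS

print(f"{YEARS}-year TCO, buy:   ${buy_tco:,}")
print(f"{YEARS}-year TCO, build: ${build_tco:,}")
print("Cheaper option:", "buy" if buy_tco < build_tco else "build")
```

The dollar comparison is only one input; the pros-and-cons notes from Step 2 (time to implement, technology dependencies, upcoming platform changes) should sit alongside it in the recommendation.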
April 12, 2023
Blog

Is 2023 the Tipping Point for AI Adoption in Legal?

Generative AI. Bard. Bing AI. Large language models. Artificial intelligence has dominated headlines and workplace chats across every industry since OpenAI's public release of ChatGPT in November 2022. Nowhere was this more evident than at this year's Legalweek event. The annual conference, which gathers thousands of attorneys, legal practitioners, and eDiscovery providers together in New York City, was dominated by discussions of ChatGPT and AI.

This makes sense, of course. Attorneys must understand how major technology shifts will impact their clients or companies, especially those in eDiscovery and information governance who deal with corporate data and its challenges. But there was a slight twist to the discussions about ChatGPT. In addition to the possible impacts and risks to clients who use the technology, there was just as much, if not more, focus on how it could be used to streamline eDiscovery.

The idea of using a tool released to the public less than four months ago seems almost ironic in an industry with a reputation for slowly adopting technology. Indeed, a 2022 ABA survey showed that as few as 19.2% of lawyers use predictive coding technology (i.e., technology-assisted review, or TAR) for document review, up from just 12% in 2018. Even surveys dominated by eDiscovery software providers showed TAR was used on less than 30% of matters in 2022. Given that the technology behind traditional TAR tools has existed since the 1970s, and the use of TAR has been widely accepted (and even encouraged) by court systems around the world for over a decade, these statistics are strikingly low.

So what is driving this recent enthusiasm in the industry around AI? The accessibility and generative results of ChatGPT are certainly factors. After all, even a child can quickly learn how to use ChatGPT to generate new content from a simple query. But the recent excitement in eDiscovery also seems to be driven by the significant challenges attorneys are encountering:

Macroeconomic volatility and unpredictability have been a near-constant stressor for both corporate legal departments and law firms. Legal budgets are shrinking, and layoffs have plagued almost every industry, leaving legal teams to do the same volume of work with fewer resources.

Corporate legal teams are being pressured to evolve from a cost center into one that generates revenue and savings, while attorneys at law firms are expected to add value and expertise to all sectors of a company's business, beyond the litigation and legal sectors they've traditionally operated in. And all attorneys face increasing demands to become experts in the risks and challenges of the ever-evolving list of new technology used by their clients and companies.

New technology is generating unprecedented volumes of corporate data in new formats, while eDiscovery teams are still grappling with better ways to collect, review, and produce older data formats (modern attachments, collaboration data, text messages, etc.).

In short, even the most technology-shy attorneys may find themselves at a technology "tipping point," realizing that it is impossible to overcome some of these challenges without leveraging AI and other forms of advanced technology. But while the challenges may seem grim, there is an inherent hopefulness in this moment. The legal industry's tendency to adopt AI technology more slowly than other sectors means there's ample opportunity for growth.
Some forward-thinking legal teams, with the help of eDiscovery providers, have already been leveraging advanced AI technology to substantially increase the efficiency and accuracy of eDiscovery workflows. This includes tools that utilize the technology behind ChatGPT, including large language models and natural language processing (NLP). And unlike ChatGPT, where privacy concerns have already been flagged, existing AI solutions for eDiscovery were developed specifically to meet the stricter requirements of the legal industry, with some having already withstood tough scrutiny from regulators, opposing counsel, and courts.

In other words, the big eDiscovery question of 2023 may not be, "Can ChatGPT revolutionize eDiscovery in the future?" but rather, "What can advanced AI and analytic tools do for eDiscovery practitioners right now?" While the former is up for debate, there are definitive answers to the latter threaded throughout many of the other major industry discussions happening now. Some of those discussions include:

If you want to go far, go together

Today's larger and more complicated data volumes often make the traditional eDiscovery model feel like the proverbial round hole that the square (data) peg was never designed to fit into, and forcing that fit is becoming increasingly expensive for legal teams. To operate in this new era, it's essential to work with partners who can help you meet your data needs and align with your goals. A good example is when an in-house legal team partners with a technology-forward law firm and eDiscovery provider to build a more streamlined and modern eDiscovery program. This kind of partnership provides the resources, expertise, and technology needed to take a more holistic approach to eDiscovery, breaking away from the traditional model of starting each new matter from scratch. These teams can work together to create and deploy tools and expertise that reduce costs and improve review outcomes across all matters: for example, customized AI classifiers built with data and work product from the company's past matters, cross-matter analytics that identify review and data trends, and tailored review workflows that increase efficiency and accuracy for specific use cases. This partnership approach is a microcosm of how different organizations and teams can work together to overcome common industry challenges.

Technology that meets us where we are

Despite all the chatter around ChatGPT, there is currently no "easy AI button" to automate the document review process. However, modern eDiscovery technology (including advanced AI) can be integrated into almost every stage of the document review process in different ways, depending on a case team's goals. This technology-integrated approach to eDiscovery workflows can help case teams achieve unprecedented efficiency and review accuracy, mitigate the risk of inadvertently producing sensitive documents, minimize review redundancy across matters, and quickly surface key themes, timelines, and documents hidden within large data volumes. Technology-forward law firms and managed review partners can help case teams integrate advanced technology and specialized expertise to achieve these goals in a defensible way that works with each company's existing data and workflows.

The only constant is change

The days of a static, rarely updated information governance program are gone.
The nature of cloud data, the speed of technology evolution and adoption, and the increasingly complex patchwork of data privacy and security regulations mean that legal and compliance teams need to be nimble and ready for the next new data challenge. New generative AI tools like ChatGPT may only add to this complexity. While this type of technology may remain largely off limits in the near future for eDiscovery providers and law firms due to client confidentiality, data privacy, and AI transparency issues, companies in other industries have already begun using it. Legal and compliance teams will need to ensure that any new data created by generative AI tools follows applicable data retention guidelines and regulations, and begin to think through how this new data will impact eDiscovery workflows.

The furor and excitement over the potential use cases for ChatGPT in eDiscovery are a hopeful sign that more legal practitioners are realizing the potential of AI and advanced analytic technology. This change will help push the industry forward, as more in-house teams, outside counsel, and eDiscovery providers partner together to overcome some of the industry's toughest data challenges with advanced technology.

For other stories on practical applications of AI and analytics in eDiscovery, check out more Lighthouse content.
February 24, 2021
Blog

AI and Analytics: Reinventing the Privilege-Review Model

Identifying attorney-client privilege is one of the most costly and time-consuming processes in eDiscovery. Since the dawn of workplace email, responding to discovery requests has had legal teams spending countless hours painstakingly searching through millions of documents to pinpoint attorney-client and other privileged information in order to protect it from production to opposing parties. As technology has improved, legal professionals have gained more tools to help in this process, but it still often entails costly human review of massive numbers of documents.

What if there was a better way? Recently, I had the opportunity to gather a panel of eDiscovery experts to discuss how advances in AI and analytics technology now allow attorneys to identify privilege more efficiently and accurately than previously possible. Below, I have summarized our discussion and outlined how legal teams can leverage advanced AI technology to reinvent the model for detecting attorney-client privilege.

Current Methods of Privilege Identification Result in Over-Identification

Currently, the search for privileged information involves a hodgepodge of different technologies and workflows. Unfortunately, none of them is a magic bullet, and all have their own drawbacks. Some of these methods include:

- Privilege Search Terms: The foundational block of most privilege reviews involves using common privilege search terms ("legal," "attorney," etc.) and known attorney names to identify documents that may be privileged, and then having a review team painstakingly re-review those documents to see if they do, in fact, contain privileged information.
- Complex Queries or Scripts: This method builds on the search term method by weighting the potential privilege document population into "tiers" for prioritized privilege review. It sometimes uses search term frequency to weigh the perceived risk that a document is privileged.
- Technology Assisted Review (TAR): The latest iteration of privilege identification methodologies involves using the TAR process to further rank potential privilege populations for prioritized review, allowing legal teams to cut off review once the statistical likelihood of a document containing privileged information reaches a certain percentage.

Even applied together, all these methodologies are only slightly more accurate than a basic privilege search term application. TAR, for example, may flag 1 out of every 4 documents as privileged, instead of the 1 out of every 5 typically identified by common privilege search term screens. This means that review teams are still forced to re-review massive numbers of documents.

The current methods tend to over-identify privilege for two very important reasons: (1) they rely on a "bag of words" approach to privilege classification, which removes all context from the communication; and (2) they cannot leverage non-text document features, like metadata, to evaluate patterns within the documents that often provide key contextual insights indicating a privileged communication.

How Can Advances in AI Technology Improve Privilege Identification Methods?

Advances in AI technology over the last two years can make privilege classification more effective in a few different ways:

- Leveraging Past Work Product: Newer technology can pull in and analyze the privilege coding that was applied on previous reviews, without disrupting the current review process.
This helps reduce the amount of attorney review needed from the start, as the analytics technology can use this past work product rather than training a model from scratch based on review work in the current matter. Companies often have tens or even hundreds of thousands of prior privilege calls sitting in inactive or archived databases that can be leveraged to train a privilege model. This approach also allows legal teams to immediately eliminate documents that were identified as privileged in previous reviews.
- Analyzing More Than Text: Newer technology is also more effective because it can analyze more than just the simple text of a document. It can also analyze patterns in metadata and other properties of documents, like participants, participant accounts, and domain names. For example, documents with a large number of participants are much less likely to contain information protected by attorney-client privilege, and newer technology can immediately de-prioritize these documents for privilege review (a minimal sketch of this combined text-plus-metadata scoring appears at the end of this article).
- Taking Context into Account: Newer technology can also perform a more complicated analysis of text through algorithms that better assess the context of a document. For example, Natural Language Processing (NLP) can understand context within documents much more effectively than methods that focus on simple term frequency. Analyzing for context is critical in identifying privilege, particularly for distinguishing an attorney generally discussing business issues from an attorney specifically providing legal advice.

Benefits of Leveraging Advances in AI and Analytics in Privilege Reviews

Leveraging the advances in AI outlined above to identify privilege means that legal teams will have more confidence in the accuracy of their privilege screening and review process. This technology also makes it much easier to assemble privilege logs and apply privilege redactions, not only because of the increased efficiency and accuracy, but also because of the ability to better analyze metadata and context, which helps with privilege log document descriptions and justifications and ensures consistency. But by far the biggest gain is the ability to significantly reduce the costly and time-intensive manual review and re-review required of legal teams using older search term and TAR methodologies.

Conclusion

Leveraging advances in AI and analytics technology enables review teams to identify privileged information more accurately and efficiently. This in turn allows for a more consistent work product, more efficient reviews, and ultimately, lower eDiscovery costs.

If you're interested in learning more about AI and analytics advancements, check out my other articles on how this technology can also help detect personal information within large datasets, as well as how to build a business case for AI and win over AI naysayers within your organization. To discuss this topic more or to learn how we can help you make an apples-to-apples comparison, feel free to reach out to me at RHellewell@lighthouseglobal.com.
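To make the "Analyzing More Than Text" point concrete, below is a minimal sketch, assuming scikit-learn and pandas, of how text and metadata signals can be combined into a single privilege-scoring model. The column names and the tiny two-document training set are hypothetical stand-ins for privilege coding exported from past matters; a production model would use far richer features and training data.

```python
# Minimal sketch: combining text and metadata features for privilege scoring.
# The DataFrame columns and sample documents are hypothetical stand-ins for
# prior privilege coding exported from past matters.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = pd.DataFrame({
    "text": [
        "Please keep confidential - forwarding counsel's advice on the dispute",
        "Company picnic Friday! All 400 employees invited",
    ],
    "participant_count": [2, 400],   # metadata: number of recipients
    "privileged": [1, 0],            # prior coding from past reviews
})

model = Pipeline([
    ("features", ColumnTransformer([
        ("tfidf", TfidfVectorizer(), "text"),            # language signal
        ("meta", "passthrough", ["participant_count"]),  # metadata signal
    ])),
    ("classifier", LogisticRegression()),
])
model.fit(docs[["text", "participant_count"]], docs["privileged"])

# Rank documents by modeled likelihood of privilege for prioritized review.
scores = model.predict_proba(docs[["text", "participant_count"]])[:, 1]
print(scores)
```

The design point is that a recipient-count column lets the model learn patterns like "broadly distributed email is rarely privileged" alongside the language signal, which is exactly what text-only privilege screens cannot do.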
September 2, 2021
Blog

Analytics and Predictive Coding Technology for Corporate Attorneys: Six Use Cases

Below is a copy of a featured article written by Jennifer Swanton of Medtronic, Shannon Capone Kirk of Ropes & Gray, and John Del Piero of Lighthouse for Legaltech News.

This is the second article in a two-part series designed to help create a better relationship between corporate attorneys and advanced technology. In our first article, we worked to demystify the language technology providers tend to use around AI and analytics technology. With the terminology now defined, we will focus on six specific ways that corporate legal teams can put this type of technology to work in the eDiscovery and compliance space to improve costs, outcomes, and efficiency.

1. Document Review and Data Prioritization: The earliest example of how to maximize the value of analytics in eDiscovery was the introduction of TAR (technology-assisted review). CAL (continuous active learning) allows counsel to see the documents most likely to be relevant much earlier in the process than if they had simply been looking at search term results, which are not categorized or prioritized and are often overbroad. Plainly put, it is the difference between an organized review and a disorganized review. (A sketch of the CAL loop appears at the end of this article.)

Data prioritization offers strategic value to the case team, enabling them to get to the crux of a case earlier in the process and ultimately develop a better strategic plan for cost and outcomes. This process also offers the ability to reach a point in review where the likelihood of additional relevant information is so low that no new review is needed. This saves time and money on large document review projects. Such prioritization is critical for time-sensitive internal investigations as well.

To dive further into the Pandora analogy from our first article: if you were to listen to a random shuffle of songs on Pandora without giving feedback on what you like and don't like, you'd likely listen for days before encountering several songs you love. Whereas, if you give Pandora feedback, it learns, and you're likely to hear several songs you love within hours. So why suffer days of listening to show tunes and harp solos when what you really love is the brilliant artistry found in songs by the likes of Ray LaMontagne?

2. Custodian and Data Source Identification: Advanced analytics that can analyze complex concepts within data can be a powerful tool to clearly identify your relevant data custodians, where that data lives, and other data sources worth considering. Most conceptual analytics technology can now provide real-time visibility into information about custodians, including the date range of the data collected and the data types delivered. More advanced technology that also analyzes metadata can provide a deeper understanding of how custodians interact with other people, including the ability to analyze patterns in timing and speech, and even the sentiment and tone of those interactions.

All of this information can be used to help quickly determine whether a prospective custodian has information relevant to the case that needs to be collected, or whether any supplemental collections are required to close a gap in the date range collected. This, in turn, will help reduce the number of collections required and minimize processing time in fast-paced cases.
These tools also help determine which data sources are likely to hold your most relevant information and where supplemental collections may be warranted.

[Image above: Brainspace display of communication networks, which enables users to identify custodians of interest, as well as related people and conversations.]

3. Identifying Privileged and Personal Information: Another powerful way to leverage analytics in the eDiscovery workflow is to identify privileged documents in a far more cost-effective way than was possible in the past. New privilege categorization software creates significant efficiencies by analyzing the text, metadata, and previous coding of documents in order to categorize them according to the likelihood that they are actually privileged.

More advanced analytics tools can now identify documents that have been flagged by traditional privilege term screens but have a high likelihood of not containing privileged communications, for example, because the technology identifies that the document was sent to a third party (thus breaking attorney-client privilege) or because the only privilege term within the document is contained within a boilerplate footer. These more advanced analytics tools can be much more effective at identifying privileged documents than a privilege search term list, and can help case teams successfully meet rolling production deadlines by pushing the documents that are less likely to be privileged (i.e., those requiring less privilege review) to the front of the review line. When integrated with other eDiscovery applications, they can also create a defensible privilege log that can be produced for the litigation team.

Additionally, flagging potential PII and protected intellectual property (IP) caught up in a large data set can be challenging, but analytics technology provides in-house legal teams with an important ally for automating those processes. Advanced analytics can streamline the process of locating and isolating this sensitive data, which is often hiding in a variety of different systems, folders, and other information silos. These tools allow you to flag Health Insurance Portability and Accountability Act (HIPAA) protected information based on its common format and structure, helping teams move quickly through documents and accurately identify and redact the needed information.

4. Information Governance: One of the high-stakes elements of large data collections is the importance of parsing out highly sensitive records, such as those that contain PII and protected IP. Protecting this information is incredibly important both to safeguard company data and to comply with the growing number of data privacy regulations worldwide, including Europe's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and HIPAA. Analytics can help identify and flag documents per their appropriate document classification. This can be helpful both for the business in its day-to-day operations and for the legal team in responding to requests.

5. Data Re-Use: One of the largest opportunities with the use of analytics is the ability to save time and money on your next matter. Technologically advanced companies are now starting to use analytics technology to integrate previous attorney work product, case information, and documents across all organization matters.
On a micro level, recycling and analyzing previous work product allows companies to stop reinventing the wheel on each case and aids in much faster identification of privileged, personal, and non-responsive documents. For example, organizations often pay to store documents that carry privilege tagging from past matters in inactive or archived databases. Those documents, sitting unused in storage, can be re-ingested and used to train a privilege model in the new matter, allowing legal teams to immediately eliminate documents that were identified as privileged in previous reviews, even prior to any human coding in the new matter.

On a macro level, this type of advanced capability enables organizations to make data-driven decisions across their entire eDiscovery landscape. Rather than looking at each new matter individually through a singular lens, legal teams can use advanced analytics to analyze previously coded data across the organization's entire legal portfolio. This can provide previously unheard-of insights, like which custodians tend to hold the most privileged documents matter over matter, or whether a data source rarely produces responsive documents. Data re-use can also come in handy in portfolio matters that have overlapping custodians and data sets and need common productions. The overall results are more strategic legal and data decisions, more favorable case outcomes, and increased cost efficiency.

6. Accuracy: Finally, and potentially the most important reason to use analytics tools, is increased accuracy and better work product. Studies have shown that tools like predictive coding are more accurate than human review alone. That, coupled with the potential for cost savings, should be all one needs to utilize these technologies.

As useful as these new analytics tools are to in-house legal teams in their efforts to manage eDiscovery today, it is important to understand that the great promise of these technologies lies in their continuous improvement. Because analytics tools learn, they refine and "get smarter" as they review more data sets. We all know that we're on just the cusp of what analytics will bring to our profession, but we believe the future of this technology in the area of eDiscovery management is here now.
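As a companion to the data prioritization discussion in use case 1, below is a minimal sketch of a continuous active learning loop, assuming scikit-learn. The corpus, the batch size, and the reviewer_codes() stub are hypothetical stand-ins for a real document collection and the human coding calls made in a review platform.

```python
# Minimal sketch of a continuous active learning (CAL) loop, assuming
# scikit-learn. reviewer_codes() is a hypothetical stand-in for a human
# responsiveness call made in a review tool.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

corpus = [f"document {i} text about contracts, pricing, or lunch plans" for i in range(1000)]
X = TfidfVectorizer().fit_transform(corpus)

def reviewer_codes(doc_index: int) -> int:
    """Hypothetical stand-in for a human coding call (1 = responsive)."""
    return int(doc_index % 2 == 0)

labeled_idx = [0, 1]                                  # small starting set of coded docs
labels = [reviewer_codes(i) for i in labeled_idx]

for round_number in range(5):
    # Retrain on everything coded so far, then score the unreviewed pool.
    model = LogisticRegression().fit(X[labeled_idx], labels)
    unreviewed = np.setdiff1d(np.arange(X.shape[0]), labeled_idx)
    scores = model.predict_proba(X[unreviewed])[:, 1]

    # Serve the highest-scored documents to reviewers first.
    batch = unreviewed[np.argsort(scores)[::-1][:50]]
    labels += [reviewer_codes(int(i)) for i in batch]
    labeled_idx += batch.tolist()
```

Each pass folds the reviewers' newest calls back into the model before the next batch is served, which is why CAL removes the older TAR 1.0 concern with carefully curated seed sets.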
August 5, 2021
Blog

Analytics and Predictive Coding Technology for Corporate Attorneys: Demystifying the Jargon

Below is a copy of a featured article written by Jennifer Swanton of Medtronic, Shannon Capone Kirk of Ropes & Gray, and John Del Piero of Lighthouse for Legaltech News.

Despite the traditional narrative that lawyers are hesitant to embrace technology, many in-house legal departments and their outside service providers are embracing the use of what is generally referred to as artificial intelligence (AI). In terms of litigation and internal investigations, this translates more specifically into conceptual analytics and predictive coding (also referred to as continuous active learning, or CAL), two of the more advanced technological innovations in the litigation space and corporate America.

This adoption, in part, seems to be driven by an expectation from corporate leaders that their in-house counsel must be able to identify and utilize the best available technology in order to drive cost efficiency, while also reducing risk and strengthening effective and defensible litigation positions. For instance, in a 2019 survey of 163 legal professionals conducted by ALM Intelligence and LexisNexis, 92% of attorneys surveyed planned to increase their use of legal analytics in the upcoming 12 months. The reasoning behind that expected increase in adoption was two-fold, with lawyers indicating that it was driven both by competitive pressure to win cases (57%) and by client expectation (56%).

Given that the above survey took place right before the COVID-19 pandemic hit, it stands to reason that the 92% figure may be even higher now. With a divisive election and receding pandemic only recently behind us, and an already unpredictable market, many corporations are tightening budgets and looking to further reduce unnecessary spend. Conceptual analytics and CAL are easy (yes, really) and effective ways to manage ballooning datasets and significantly reduce discovery, litigation, and internal investigation costs.

With that in mind, we would like to help create a better relationship between corporate attorneys and advanced technology with the following two-step approach, which we will outline in a series of two articles. This first installment will help demystify the language technology providers tend to use around AI and analytics technology so that in-house teams feel more comfortable with adoption. In our second article, we will provide examples of some great use cases where corporate legal teams can easily leverage technology to improve workflows. Together, we hope this approach can help in-house legal teams adopt technology that drives efficiency, lowers cost, and improves the quality of their work.

Demystifying AI Jargon

If you have ever discussed AI or analytics technology with a technology provider, you are probably more than aware that tech folks have a tendency to forget that the majority of their clients don't live in the world of developing and evaluating new technology, day in and day out. Thus, they may use terms that are confusing to their legal counterparts (and sometimes use terms that don't match what the technology is capable of in the legal world).
For this reason, it is helpful to level-set with some common terminology and definitions, so that in-house attorneys are prepared to have better, more practical real-world discussions with technology providers.

Analytics Technology: Within the eDiscovery and compliance space, analytics technology is the ability of a machine to recognize patterns, structures, concepts, terminology, and/or the people interacting within data, and then present that analysis in a visual representation so that attorneys have a better overview of their data. As with AI, not all analytics tools have the same capabilities. Vendors may label everything from email threading identification to more advanced technology that can identify complex concepts and human sentiment as "analytics" tools. Within these articles, when we reference this term, we are referring to the more advanced technology that can analyze not only the text within data but also the metadata and any previous coding applied by subject matter experts. This is an important distinction because this type of technology can greatly improve the accuracy of the analysis compared to older tools. For example, analytics technology that can analyze metadata as well as text is much better at identifying concepts like attorney-client privilege, because it can analyze not only the language being used but also who is using that language and the circumstances in which they use it.

Artificial Intelligence (AI): Probably the most broadly recognized term due to its prevalence outside of the eDiscovery space, AI is technically defined as the ability of a computer to complete tasks that would usually require human intelligence. Within the eDiscovery and compliance world, vendors often use the term broadly to refer to a variety of technologies that can perform tasks that would previously have required entirely human review. It is important to remember, though, that the term AI can refer to a broad range of technology with very different capabilities. "AI" in the legal world is currently being used as a generalized term, and legal consumers of such technologies should press for specifics; not all "AI" is the same, or, in several cases, even AI at all.

Machine Learning: Machine learning is a category of algorithms used in AI that can analyze statistics and find patterns in large volumes of data. The algorithms improve with experience: the more documents humans code in a consistent fashion, the better and more accurate the algorithms should become at identifying specific data types. Note here that there is a common misunderstanding that machine learning requires large amounts of data from which to learn. That is not necessarily true; all that is required for machine learning to work well is that the input it learns from (i.e., document coding for eDiscovery purposes) is consistent and accurate.

Natural Language Processing (NLP): NLP is a subset of AI that uses machine learning to process and analyze the natural language humans use within large amounts of data. The result is technology that can "understand" the contents of documents, including the context in which language is used within them. Within eDiscovery, NLP is used within more advanced forms of analytics technology to help identify specific content or sentiments within large datasets. For example, NLP can be used to more accurately identify sensitive information, like personally identifiable information (PII), within datasets.
NLP is better at this task than older AI technology because older models relied on "regular expressions" (sequences of characters that define a search pattern) to identify information. When a regular expression (or regex) is used by an algorithm to find, for example, VISA account numbers, it will be able to identify the correct number pattern (i.e., any number that starts with the number 4 and has 16 digits) within a dataset, but it will be unable to differentiate other numbers that follow the same pattern (for example, employee identification numbers). Thus, the results returned by legacy technology using regex may be overbroad and include false positives. NLP can return more accurate results for that same task because it is able to identify not only the number pattern but also the language used around the pattern. In this way, NLP will understand the context in which VISA account numbers are communicated within a dataset compared to how employee identification numbers are communicated, and only return the VISA numbers. (A toy sketch of this regex limitation appears at the end of this article.)

Predictive Coding (also referred to as Technology-Assisted Review, or TAR): Predictive coding is not the same as conceptual analytics. Also, predictive coding is a bit of a misnomer, as the tools don't predict or code anything; a human reviewer is very much involved. Simply put, it refers to a form of machine learning wherein humans review documents and make binary coding calls: what is responsive and what is non-responsive. This is similar in concept to selecting thumbs up or down in Pandora to teach the app what songs you like and don't like. After some human coding and calibration between the human and the tool, the technology uses the human's coding selections to score how the remaining documents should be coded, enabling the human to review the high-scored documents first.

In the most current versions of predictive coding, the technology continually improves and refreshes as the human reviews, which reduces or eliminates the need for surgical precision on coding at the start (a concern with the earlier version of predictive coding, and the reason providers and parties spent considerable time worrying about "seed sets"). This improved and self-improving prioritization of large document sets based on high-scored documents is usually a more efficient and organized way to review documents.

Because of this evolution in predictive coding, it is often referred to in a host of different ways, such as TAR 1.0 (which requires "seed sets" to learn from at the start) and TAR 2.0 (which continually refreshes as the human codes, and is thus also referred to as Continuous Active Learning, or CAL). Some providers continue to use the old terminology, or explain their advancements by walking through the differences between TAR 1.0 and TAR 2.0, and so on. But, speaking plainly, in this day and age, providers and legal teams should really only be concerned with the latest version of TAR, which utilizes CAL and significantly reduces or totally eliminates the previous concern with surgical precision on coding an initial "seed set." With our examples in the next installment, we hope to illustrate this point.
In short, walking through the technological evolution around predictive coding and all of the associated terminology can cause unnecessary intimidation, and can cause confusion between providers, parties, and the court.

The key takeaway from these definitions is that even though all the technology described above may technically fall into the "AI" bucket, there is an important distinction between predictive coding/TAR technology and advanced analytics technology that uses AI and NLP. The distinction is that predictive coding/TAR is a much more technologically limited method of ranking documents based on binary human decisions, while advanced analytics technology is capable of analyzing the context of human language used within documents to accurately identify a wide variety of concepts and sentiments within a dataset. Both tools still require a good amount of interaction with human reviewers, and the two are not mutually exclusive. In fact, on many investigations in particular, it is often very efficient to employ both conceptual analytics and TAR simultaneously in a review.

Please stay tuned for our next installment in this series, "Analytics and Predictive Coding Technology for Corporate Attorneys: Six Use Cases," where we will outline six specific ways that corporate legal teams can put this type of technology to work in the eDiscovery and compliance space to improve costs, outcomes, and efficiency.
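As promised above, here is a toy sketch of the VISA-number example. A bare regex flags every 16-digit number starting with 4; the keyword window below is only a crude stand-in for the contextual judgment an NLP model applies far more robustly. The pattern, keywords, and window size are illustrative assumptions.

```python
# Toy illustration of the regex limitation described above. The keyword
# window is a rough proxy for context; real NLP models weigh surrounding
# language far more robustly.
import re

CARD_PATTERN = re.compile(r"\b4\d{15}\b")   # starts with 4, 16 digits total
CARD_CONTEXT = {"visa", "card", "payment", "billing"}

def likely_card_numbers(text: str) -> list[str]:
    hits = []
    for match in CARD_PATTERN.finditer(text):
        # Inspect the words near the match before accepting it.
        window = text[max(0, match.start() - 40): match.end() + 40].lower()
        if any(keyword in window for keyword in CARD_CONTEXT):
            hits.append(match.group())
    return hits

# The bare regex alone would flag both strings; the context check does not.
print(likely_card_numbers("Billing: Visa ending 4111111111111111 on file."))    # ['4111111111111111']
print(likely_card_numbers("Employee ID 4111111111111111 added to the roster.")) # []
```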
November 24, 2020
Blog

Advanced Analytics – The Key to Mitigating Big Data Risks

Big data sets are the "new normal" of discovery and bring with them six sinister large-data-set challenges, as recently detailed in my colleague Nick's article. These challenges range from classics like overly broad privilege screens to newer risks, such as ensuring that sensitive information (like personally identifiable information (PII) or proprietary information such as source code) does not inadvertently make its way into the hands of opposing parties or government regulators. While these challenges may seem insurmountable due to ever-increasing data volumes (and tend to keep discovery program managers and counsel up at night), there are new solutions that can help mitigate these risks and optimize workflows.

As I previously wrote, eDiscovery is actually a big data challenge. Advances in AI and machine learning, when applied to eDiscovery big data, can help mitigate and reduce these sinister risks by breaking down the silos of individual cases, learning from a wealth of prior case data, and then transferring these learnings to new cases. The capability to analyze and understand large data sets at scale, combined with state-of-the-art methods, provides a number of benefits, five of which I have outlined below.

Pinpointing Sensitive Information - Advances in deep learning and natural language processing have now made pinpointing sensitive content achievable. A company's most confidential content could be lying in plain sight within its electronic data and yet be completely undetected. Imagine a spreadsheet listing customers, dates of birth, and social security numbers attached to an email between sales reps. What if you are a technology company and two developers are emailing each other snippets of your company's source code? Now that digital media are the dominant form of communication within workplaces, situations like this are ever-present, and it is very challenging for review teams to effectively identify and triage this content. To solve this challenge, advanced analytics can learn from massive amounts of publicly available and computer-generated data and then be fine-tuned to specific data sets using a recent breakthrough innovation in natural language processing (NLP) called "transfer learning." In addition, at the core of big data is the capability to process text at scale. Combining these two techniques enables precise algorithms to evaluate massive amounts of discovery data, pinpoint sensitive data elements, and elevate them to review teams for a targeted review workflow.

Prioritizing the Right Documents - Advanced analytics can learn both key trends and deep insights about your documents and review criteria. A normal search-term-based approach to identifying potentially responsive or privileged content provides a binary output: documents either hit on a search term or they do not. Document review workflows are predicated on this concept, often leading to suboptimal workflows that both over-identify documents that are out of scope and miss documents that should be reviewed. Advanced analytics provide a range of outcomes that enable review teams to create targeted workflow streams tailored to the risk at hand. Descriptive analysis of the data can generate human-interpretable rules that help organize documents, such as "all documents with more than X number of recipients are never privileged" or "99.9% of the time, documents coming from the following domains are never responsive." (A short sketch of this kind of rule mining appears at the end of this article.)
Deep learning-based classifiers, again using transfer learning, can generalize language from open source content and then fine-tune models to specific review data sets. Having a combination of analytics, both descriptive and predictive, provides a range of options and gives review teams the ability to prioritize the right content, rather than just the next random document. Review teams can concentrate on the most important material while deprioritizing the less important content for a later effort.

Achieving Work-Product Consistency - Big data and advanced analytics approaches can ensure that the same document, or similar documents, are treated consistently across cases. Corporations regularly collect, process, and review the same data across cases over and over again, even when the cases are not related. Keeping document treatment consistent across these matters is obviously extremely important when dealing with privileged content, but it is also important for responsiveness across related cases, such as a multi-district litigation. With the standard approach, cases sit in silos without any connectivity between them to enable consistent approaches. A big data approach enables connectivity between cases, using hub-and-spoke techniques to communicate and transmit learnings and work product between cases. Work product from other cases, such as coding calls, redactions, and even production information, can be utilized to inform workflows on the next case. For big data, activities like this are table stakes.

Mitigating Risk - What do all of these approaches have in common? At its core, big data and analytics is an engine for mitigating risk. Having the ability to pinpoint sensitive data, prioritize what you look at, and ensure consistency across your cases is a no-brainer. This all may sound like a big change, but in reality it is fairly seamless to implement. Instead of simply batching out documents that hit on an outdated privilege screen for privilege review, review managers can use a combination of analytics and fine-tuned privilege screen hits. Review then proceeds largely as it does today, just with the right analytics to give reviewers the context needed to make the best decision.

Reducing Cost - The other side of the coin is cost savings. Every case has a different cost and risk profile, and advanced analytics should provide a range of options to support your decision-making process on where to set the lever. Do you really need to review each of these categories in full, or would an alternative scenario based on sampling high-volume, low-risk documents be a more cost-effective and defensible approach? The point is that having a better and more holistic view of your data provides an opportunity to make data-driven decisions that reduce costs.

One key tip to remember: you do not need to implement this all at once. Start by identifying a key area where you want to make improvements, determine how you can measure the current performance of the process, then apply some of these methods and measure the results. Innovation is about getting one win in order to perpetuate the next.

If you are interested in this topic or just love to talk about big data and analytics, feel free to reach out to me at KSobylak@lighthouseglobal.com.
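As referenced in the prioritization discussion above, below is a minimal sketch of descriptive rule mining, assuming pandas. The DataFrame columns and values are hypothetical stand-ins for coding data exported from prior matters; real rules would be derived from much larger samples.

```python
# Minimal sketch: deriving human-interpretable rules of the kind described
# above from prior coding data. Columns and values are hypothetical.
import pandas as pd

coded = pd.DataFrame({
    "sender_domain": ["acme.com", "newsletter.example", "acme.com", "newsletter.example"],
    "recipient_count": [2, 500, 3, 450],
    "privileged": [1, 0, 1, 0],
})

# Privilege rate by sender domain: domains at ~0% can be deprioritized.
domain_rates = coded.groupby("sender_domain")["privileged"].mean()
print(domain_rates)

# Rule candidate: documents above a recipient-count threshold are rarely privileged.
threshold = 50
rate_above = coded.loc[coded["recipient_count"] > threshold, "privileged"].mean()
print(f"Privilege rate above {threshold} recipients: {rate_above:.0%}")
```

Rules surfaced this way stay human-interpretable, so review managers can sanity-check them before using them to deprioritize documents.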
February 25, 2021
Blog

AI and Analytics: New Ways to Guard Personal Information

Big data can mean big problems in the eDiscovery and compliance world, and those problems can be exponentially more complicated when personal data is involved. Sifting through terabytes of data to ensure that all personal information is identified and protected is becoming an increasingly painstaking and costly process for attorneys today. Fortunately, advances in artificial intelligence (AI) and analytics technology are changing the landscape and enabling more efficient and accurate detection of personal information within data.

Recently, I was fortunate enough to gather a panel of experts to discuss how AI is enabling legal professionals in the eDiscovery, information governance, and compliance arenas to identify personally identifiable information (PII) and personal health information (PHI) more quickly within large datasets. Below is a summary of our discussion, along with some helpful tips for leveraging AI to detect personal information.

Current Methods of Personal Data Identification

Similar to the slower adoption of AI and analytics to help protect attorney-client privileged information (compared to the broader adoption of machine learning to identify matter-relevant documents), the legal profession has also been slow to leverage technology to help identify and protect personal data. Thus, the identification of personal data remains a very manual and reactive process, with legal professionals reviewing documents one by one on each new matter or investigation to find personal information that must be protected from disclosure. This process can be especially burdensome for the pharmaceutical and healthcare industries, as there is often much more personal information within the data generated by those organizations, while the risk of failing to protect that information may be higher due to healthcare-specific patient privacy regulations like HIPAA.

How Advances in AI Technology Can Improve Personal Data Identification

There are a few ways in which AI has advanced over the last few years that make new technology much more effective at identifying personal data:

- Analyzing More Than Text: AI technology is now capable of analyzing more than just the simple text of a document. It can also analyze patterns in metadata and other properties of documents, like participants, participant accounts, and domain names. This results in technology that is much more accurate and efficient at identifying data more likely to contain personal information.
- Leveraging Past Work Product: Newer technology can also pull in and analyze the coding applied on previous reviews without disrupting workflows in the current matter. This can add incredible efficiency, as documents previously flagged or redacted for personal information can be quickly removed from personal information identification workflows, reducing the need for human review (a short sketch of this work-product re-use appears at the end of this article). The technology can also further reduce the amount of attorney review needed at the outset of each matter, as it can use many examples of past work product to train the algorithms (rather than training a model from scratch based on review work in the current matter).
- Taking Context into Account: Newer technology can also perform a more complicated analysis of text through algorithms that better assess the context of a document.
For example, advances in Natural Language Processing (NLP) and machine learning can now identify the context in which personal data is typically communicated, which helps eliminate previously common false hits, like mistakenly flagging phone numbers as social security numbers.

Benefits of Leveraging AI and Analytics When Detecting Sensitive Data

Arguably the biggest benefit of leveraging new AI and analytics technology to detect personal information is cost savings. The manual process of personal information identification is not only slower, it can also be incredibly expensive. AI can significantly reduce the number of documents legal professionals need to look through, sometimes by millions of documents. This can translate into millions of dollars in review savings, because this work is often performed by legal professionals who bill at an hourly rate. Not only can AI save money on a specific matter, it can also be used to analyze an entire legal portfolio so that legal professionals have an accurate sense of where (and how much) personal information resides within an organization's data. This knowledge can be invaluable when crafting burden arguments for upcoming matters, as well as for understanding the potential costs of new matters (and thus helping attorneys make more strategic case decisions).

Another key benefit of leveraging AI technology is the accuracy with which it can now pinpoint personal data. Human review is not only less efficient, it can also lead to mistakes and missed information. This increases the risk for healthcare and pharmaceutical organizations especially, which may face severe penalties for inadvertently producing PHI or PII (particularly if that information ends up in the hands of malevolent actors). Conducting quality control (QC) with the assistance of AI can greatly increase the accuracy of human review and ensure that organizations are not inadvertently producing individuals' personal information.

Best Practices for Utilizing AI and Analytics to Identify Personal Data

- Prepare in Advance: AI technology should not be an afterthought. Before you are faced with a massive document production on a tight deadline, make sure you understand how AI and analytics tools work and how they can be leveraged for personal data identification. Have technology providers perform proof of concept (POC) analyses with the tools on your data and demonstrate exactly how the tools work. Performing POCs on your own data is critical, as every provider's technology demos well on generic data sets. Once you have settled on the tools you want to use within your organization, ensure your team is well trained and ready to hit the ground running. This will also help ensure that the technology you choose fits with your internal systems and platforms.
- Take a Global Team Approach: Prior to leveraging AI and analytics, spend some time working with the right people to define what PII and PHI you have an obligation to identify, redact, or anonymize. Not all personal information will need to be located or redacted on every matter or in every jurisdiction, but defining that scope early will help you leverage the technology for the best use cases.
- Practice Information Governance: Make sure your organization is maintaining proper control of networks, keeping asset lists up to date, and tracking who the business and technical leads are for each type of asset.
Also, make sure that document retention policies are enforced and that your organization maintains controls around unstructured data. In short, becoming a captain of your content and running a tight ship will make the entire process of identifying personal information much more efficient.
- Think Outside the Box: AI and analytics tools are incredibly versatile and can be useful in a myriad of different scenarios that require protecting personal information from disclosure. From data breach remediation to compliance matters, there is no shortage of circumstances that could benefit from the efficiency and accuracy that AI can provide. When analyzing a new AI tool, bring security, IT, and legal groups to the table so they can see the benefits and possibilities for their own teams. Also, investigate your legal spend and have other teams do the same. This will give you a sense of how much money you are currently spending on identifying personal information and which areas can benefit from AI efficiency the most.

If you're interested in learning more about how to leverage AI and analytics technology within your organization or law firm, please see my previous articles on how to build a business case for AI and win over AI naysayers within your organization. To discuss this topic more or to learn how we can help you make an apples-to-apples comparison, feel free to reach out to me at RHellewell@lighthouseglobal.com.
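As referenced in the "Leveraging Past Work Product" discussion above, here is a minimal sketch, using only Python's standard library, of filtering a new matter against prior redaction work by hashing document text. The sample strings are hypothetical; real workflows would key on a processing tool's native document hash fields rather than raw text.

```python
# Minimal sketch: re-using prior work product by hashing document text and
# skipping documents already handled in past matters. All sample content is
# hypothetical.
import hashlib

def doc_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Hashes of documents redacted for personal information in a past matter.
previously_redacted = {doc_hash("SSN 123-45-6789 noted on patient intake form")}

new_matter = [
    "SSN 123-45-6789 noted on patient intake form",  # already handled -> skip
    "Quarterly sales figures attached",               # needs fresh PII screening
]

needs_review = [d for d in new_matter if doc_hash(d) not in previously_redacted]
print(needs_review)
```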