Lighthouse Blog
Read the latest insights from industry experts on the rapidly evolving legal and technology landscapes with topics including strategic and technology-driven approaches to eDiscovery, innovation in artificial intelligence and analytics, modern data challenges, and more.
Blog

Five Common Mistakes In Keyword Search: How Many Do You Make?
When you’re a kid, you love games that are easy to learn and play, whether they’re interactive games, board games, or card games. One of the first card games many kids learn is “Go Fish.” It’s easy to learn because you simply ask the other player whether they have any cards of a certain kind (e.g., “got any Kings?”). If they do, you collect those cards from them; if they don’t, they say “Go Fish,” you draw a card from the deck, and your turn ends. Easy, right?

Conducting keyword searching without a planned, controlled process that includes testing and verifying the results is somewhat like playing “Go Fish”: you might get lucky and retrieve the documents you need to support your case (without retrieving too many others), and you might not. Yet many lawyers and legal professionals think they “get” keyword searching. Why? Because they learned keyword searching in law school using Westlaw and Lexis? Or because they understand how to use Google to locate web pages related to their topics? Those examples are designed to identify a single item (or a handful of items) related to one topic. Keyword searching for electronic discovery, by contrast, is about balancing recall and precision to produce a proportional volume of electronically stored information (ESI) that is responsive to the case, which could mean thousands or even millions of responsive documents, depending on the issues of the case.

Five Common Keyword Searching Mistakes

With that in mind, here are five common mistakes that lawyers and legal professionals make when conducting keyword searches:

1. Poor Use of Wildcards: Wildcard characters can be helpful in expanding the scope of the search, but only if you use them well and understand how they are applied by the search engine you’re using (warning: don’t use Google’s search engine as an exemplar). A poorly placed or ill-advised wildcard can completely blow up a search.
A few years ago, there was a case where one of the goals was to identify documents related to apps on devices (mobile and PC), so the legal team decided to use the search term “app*” to retrieve words like “app”, “application”, and “apps”. Great, right? Not when that same term also retrieves words like “appear”, “apparent”, “applied”, and “appraise”. A better search in this case would have been (app or apps or application*). Make sure to think through word variability and consider the word formulations that could be hit by the search. Also consider whether wildcard operators are attached at the appropriate place in the stem of a word so that all of the desired variants are hit. If not, the search might capture too many unrelated words or omit words you want to capture.

2. Use of Noise or Stop Words: To keep retrieval responsive even in large databases, most platforms don’t index certain common words that appear regularly (known as “noise” or “stop” words), yet many legal professionals fail to exclude these words from the searches they conduct, yielding unexpected results. Search terms such as “management did” or “counseled out” won’t work if “did” and “out” are noise words that can’t be retrieved. A typical platform has 100 or more words that are not indexed, so it’s important to understand what they are and plan around them when creating searches that can get you as close as possible to your desired result.

3. Starting with Searches That Are Too Broad: Another common mistake is to start with searches that are too broad, assuming you’ll get a result that will be easy to narrow down through additional searches. In fact, you may get a result that makes it nearly impossible to determine what is causing your search to retrieve unexpected results.
Keyword search works best when the hard work has been done up front, either by working with subject matter experts who can provide insight into the vocabulary likely used (e.g., shorthand, code words, slang) or via a targeted exploration of the document population. That knowledge, coupled with the effective use of Boolean operators like AND, OR, and NOT, should enable you to craft initial searches that put targeted words in the appropriate context, increasing the likelihood that relevant material will be found at the outset. That result provides the necessary fodder for developing additional, more precise searches.

4. Failing to Test What’s Retrieved: Many legal professionals create a search, perform it, and then proceed to review without testing the results. A random sample of the results can quickly identify a search that is considerably overbroad and would yield a low prevalence of responsive documents, driving up costs for review and production. Testing the result set to ensure the search is properly scoped is well worth the time and effort in terms of potential cost savings. Better to review an extra few hundred documents than an extra hundred thousand.

5. Failing to Test What’s Not Retrieved: It’s just as important to test the documents that were not retrieved by a search to identify areas that were potentially missed. Not only does a random sample of the “null set” help identify searches that were too narrow in scope, it is also important in addressing defensibility concerns if your search process is challenged by opposing counsel.

The “Go Fish” analogy isn’t an original one. Then-New York Magistrate Judge Andrew J. Peck used it in his article “Search, Forward” over nine years ago (October 2011), observing that “many counsel still use the ‘Go Fish’ model of keyword search.” If you’re making some of the mistakes listed above, you might be doing so as well.
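As a minimal sketch of the null-set sampling idea behind mistakes 4 and 5: draw a random sample of the documents a search did not retrieve, have reviewers label it, and estimate how much responsive material the search missed. Everything here is illustrative; the sample size, seed, and simulated review labels are invented, and a real validation protocol should be designed with counsel.

```python
import random

def sample_null_set(null_set_ids, sample_size, seed=42):
    """Draw a reproducible simple random sample of non-retrieved documents."""
    rng = random.Random(seed)
    return rng.sample(null_set_ids, min(sample_size, len(null_set_ids)))

def estimate_prevalence(sample_labels):
    """Fraction of sampled documents a reviewer marked responsive (1 = responsive)."""
    return sum(sample_labels) / len(sample_labels)

# Suppose 100,000 documents were NOT retrieved; review a 400-document sample.
null_set = list(range(100_000))
sample = sample_null_set(null_set, 400)

# Pretend reviewers found 6 responsive documents in the sample.
labels = [1] * 6 + [0] * (len(sample) - 6)
rate = estimate_prevalence(labels)
print(f"Estimated responsive rate in null set: {rate:.1%}")  # 1.5%
```

A low rate supports the defensibility of the search; a high rate signals that terms were too narrow and need another iteration.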
Proper keyword searching is an expertly planned and managed process that avoids these mistakes to maximize the proportionality and defensibility of your discovery process. It’s not a kid’s game, so make sure you don’t treat it like one.
eDiscovery and Review
Blog

Legal Tech Trends from 2020 and How to Prepare for 2021
Legal tech was no match for 2020. Everyone’s least favorite year wreaked havoc on almost every aspect of the industry, from data privacy upheavals to a complete change in the way employees work and collaborate with data.

With the shift to a remote work environment by most organizations in the early spring of 2020, we saw an acceleration of the already growing adoption of cloud-based collaboration and video-conferencing tools in workplaces. This, in turn, means we are seeing an increase in eDiscovery and compliance challenges related to data generated from those tools: challenges, for example, like collecting and preserving modern attachments and chats generated by tools like Microsoft Teams, as well as compliance challenges around regulating employee use of those tools.

However, while collaboration tools can pose challenges for legal and compliance teams, they certainly did help employees continue to work and communicate during the pandemic, perhaps even better, in some cases, than when everyone was working from traditional offices. Collaboration tools were extremely helpful, for example, in facilitating communication between legal and IT teams in a remote work environment, which proved increasingly important as the year went on. The irony is that for all the data challenges these tools pose for legal and IT teams, they are increasingly necessary to keep those two departments working together at the same virtual table in a remote environment.

With all these new sources and ways to transfer data, no recap of 2020 would be complete without mentioning the drastic changes to data privacy regulations that happened throughout the year.
From the passing of new California data privacy laws to the invalidation of the EU-US Privacy Shield by the Court of Justice of the European Union (CJEU) this past summer, companies and law firms are grappling with an ever-increasing tangle of region-specific data privacy laws that all come with their own set of severe monetary penalties if violated.

How to Prepare for 2021

The key takeaway here, sadly, seems to be that 2020 problems won’t be going away in 2021. The industry is going to continue to evolve rapidly, and organizations will need to be prepared for that.

Organizations will need to continue to stay on top of data privacy regulations, as well as understand how their own data (or their clients’ data) is stored, transferred, used, and disposed of.

Remote working isn’t going to disappear. In fact, most organizations appear to be heading toward a “hybrid” model, where employees split time working from home, from the office, and from cafes or other locations. Organizations should prepare for the challenges that may pose within the compliance and eDiscovery spaces.

Remote working will also change employee recruiting within the legal tech industry, as employers realize they don’t have to focus talent searches on individual locations. Organizations should balance the flexibility of being able to expand their search for the best talent against their need to have employees in the same place at the same time.

Prepare for an increase in litigation and a surge in eDiscovery workload as courts open back up and COVID-related litigation makes its way to discovery phases over the next few months.

AI and advanced analytics will become increasingly important as data continues to explode.
Watch for new advances that can make document review more manageable.

With the continuing proliferation of data, organizations should focus on their information governance programs to keep data (and costs) in check.

To discuss this topic further, please feel free to reach out to me at SMoran@lighthouseglobal.com.

Sarah Moran
AI and Analytics
eDiscovery and Review
Legal Operations
Blog

Law & Candor Season 6 is Now Available!
This eDiscovery Day, a day dedicated to educating industry professionals about growing trends and current challenges, we are excited to bring you season six of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution.

Co-hosts Bill Mariano and Rob Hellewell are back for another riveting season of Law & Candor, with six easily digestible episodes covering a range of hot topics, from how cellular 5G increases fraud and misconduct risk to tackling modern attachment challenges in G-Suite, Slack, and Teams. This dynamic duo, alongside industry experts, discusses the latest topics and trends within the eDiscovery, compliance, and information governance space, as well as key tips for you and your team to take away.

Check out season six’s lineup below:

Does Cellular 5G Equal 5x the Fraud and Misconduct Risk?
Cross-Border Data Transfers and the EU-US Data Privacy Tug of War
Reducing Cybersecurity Burdens with a Customized Data Breach Workflow
Tackling Modern Attachment and Link Challenges in G-Suite, Slack, and Teams
The Convergence of AI and Data Privacy in eDiscovery: Using AI and Analytics to Identify Personal Information
AI, Analytics, and the Benefits of Transparency

Check them out now or bookmark them to listen to later. Follow the latest updates on Law & Candor and join in the conversation on Twitter. Catch up on past seasons by clicking the links below:

Season 1
Season 2
Season 3
Season 4
Season 5
Special Edition: Impacts of Covid-19

For questions regarding this podcast and its content, please reach out to us at info@lighthouseglobal.com.
eDiscovery and Review
Blog

Preparing for Big Data Battles: How to Win Over AI and Analytics Naysayers
Artificial intelligence (AI), advanced analytics, and machine learning are no longer new to the eDiscovery field. While the legal industry admittedly trends toward caution in its embrace of new technology, the ever-growing surge of data is forcing most legal professionals to accept that basic machine learning and AI are becoming necessary eDiscovery tools.

However, the constant evolution and improvement of legal tech offer an excellent opportunity to the forward-thinking eDiscovery professional who seeks to triumph over the growing inefficiencies and ballooning costs of older technology and workflow models. Below, we provide arguments to pull from your quiver when you need to convince Luddites that leveraging the most advanced AI and analytics solutions can give your organization or law firm a competitive and financial advantage, while also reducing risk.

Argument 1: “We already use analytical and AI technology like Technology Assisted Review (TAR) when necessary. Why bring on another AI/analytical tool?”

Solutions like TAR and other in-case analytical tools remain worthwhile for specific use cases (for example, standalone cases with massive amounts of data, short deadlines, and static data sets). However, more advanced analytical technology can now provide incredible insight into a wider variety of cases, or even across multiple matters. For example, newer solutions can analyze previous attorney work product across a company’s entire legal portfolio, giving legal teams unprecedented insight into institutional challenges like identifying attorney-client privilege, trade secret information, and irrelevant junk data that gets pulled into cases and re-reviewed time and time again.
This gives legal teams the ability to make better decisions about how to review documents in new matters. Additionally, new technology has become more powerful, with the ability to run multiple algorithms and search within metadata, where older tools could only use a single algorithm to search text alone. This means that newer tools are more effective and efficient at identifying critical information such as privileged communications, confidential information, or protected personal information. In short, printing out roadmap directions was advanced and useful in its time, but we’ve all moved on to more efficient and reliable ways of finding our way.

Argument 2: “I don’t understand this technology, so I won’t use it.”

This is one of the easiest arguments to overcome. A good eDiscovery solution provider can offer a myriad of options to help users understand and leverage advances in analytics and AI to achieve the best possible results. Whether you want to take a hands-off approach and have a team of experts show you what is possible (“Here are a million documents. Show me all the documents that are very likely to be privileged by next week”), or you want to dive into the technology yourself (“Show me how to use this tool so that I can delve into the privilege rate of every custodian across multiple matters and shape a better overall privilege review strategy”), a quality solution provider should be able to accommodate you. Look for providers that offer training and can clearly explain how these new technologies work and how they will improve legal outcomes. Your provider should have a dedicated team of analytics experts with the credentials and hands-on experience to quell any technology fears.

Argument 3: “This technology will be too expensive.”

Again, this should be a simple argument to overcome. The efficiencies that the effective use of AI and analytics achieves can far outweigh the cost of using it.
Look for a solution provider that offers a variety of predictable pricing structures, like per-gigabyte pricing, flat fees, fees generated by case, fees generated across multiple cases, or subscription-based fees. Before presenting your desired solution to stakeholders, draft your battle plan by preparing a comparison of your favored pricing structure vs. the cost of performing a linear review under a traditional pricing structure (say, $1 per document). Also, be sure to identify and outline any efficiencies a more advanced analytical tool can provide in future cases (for example, the ability to analyze and re-use past attorney work product). Finally, when battling risk-averse stakeholders, come armed with a cost/benefit analysis outlining all the ways newer AI can mitigate risk, such as by enabling more accurate and consistent work product, case over case.
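The stakeholder comparison described above can be sketched with hypothetical numbers. The $1-per-document rate comes from the text; the 60% cull rate and flat tool fee are invented purely for illustration, so substitute your provider’s actual figures.

```python
def linear_review_cost(num_docs, cost_per_doc=1.00):
    """Traditional linear review: every document gets human eyes."""
    return num_docs * cost_per_doc

def analytics_review_cost(num_docs, cull_rate, cost_per_doc=1.00, tool_fee=50_000):
    """Review cost after analytics culls a fraction of documents, plus a flat tool fee."""
    reviewed = num_docs * (1 - cull_rate)
    return reviewed * cost_per_doc + tool_fee

docs = 1_000_000
baseline = linear_review_cost(docs)                    # $1,000,000
with_ai = analytics_review_cost(docs, cull_rate=0.60)  # $450,000
print(f"Linear review:    ${baseline:,.0f}")
print(f"Analytics-driven: ${with_ai:,.0f}")
print(f"Savings:          ${baseline - with_ai:,.0f}")
```

Even a modest cull rate leaves a wide margin for the tool fee, which is the core of the cost argument.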
AI and Analytics
Blog

Document Review: It’s Not Location, Location, Location. It’s Process, Process, Process.
Much of the workforce has been forced into remote work by the pandemic’s social distancing requirements, and that includes the workforce conducting services related to electronic discovery. Many providers have been forced to deliver services such as collection and review remotely. Other providers have been conducting those services remotely for years, so they were well prepared to continue doing so during the pandemic.

Make no mistake: it’s important to select a review provider with considerable experience conducting remote reviews, experience that extends to well before the pandemic. Not all providers have that level of experience. But the success of your reviews isn’t about location, location, location; it’s about process, process, process, and the ability to manage the review effectively regardless of where it’s conducted. Here are four best practices to make your document reviews more efficient and cost effective, wherever they’re conducted:

Maximize culling and filtering techniques up front: Successful reviews begin with identifying the documents that shouldn’t be reviewed in the first place and removing them from the collection before review starts. Techniques for culling the collection include de-duplication, de-NISTing, and identification of irrelevant domains. But it’s also important to craft a search that maximizes the balance between recall and precision, excluding thousands of additional documents that might otherwise be needlessly reviewed and saving time and money during document review.

Combine subject matter and best practice expertise: Counsel understands the issues associated with the case, but they often don’t understand how to implement sophisticated discovery workflows that incorporate the latest technological approaches (such as linguistic search) to maximize efficiency.
It’s important to select a provider that knows the right questions to ask, combining subject matter expertise with eDiscovery best practices to ensure an efficient and cost-effective review process. It’s also important to continue to communicate and adjust workflows during the case as you learn more about the document collection and how it relates to the issues of the case.

Conduct search and review iteratively: Many people think of eDiscovery document review as a linear process, but the most effective reviews today implement an iterative process that interweaves search and review to continually refine the review corpus. The use of AI algorithms and expert-designed linguistic models to test, measure, and refine searches is important to achieving a high accuracy rate during review, so remember the mantra “test, measure, refine, repeat” to maximize the quality of your search and review process.

Consider producing iteratively, as well: Discovery is a deadline-driven process, but that doesn’t mean you have to wait for the deadline to provide your entire production to opposing counsel. Rolling productions are common today, enabling producing parties to meet their discovery obligations over time, establishing goodwill with opposing counsel, and demonstrating to the court that you have been meeting your obligations in good faith along the way if disputes occur. Include discussion of rolling productions in your Rule 26(f) meet and confer with opposing counsel so you can manage the production more effectively over the life of the project.

You’re probably familiar with the famous quote from The Art of War by Sun Tzu that “every battle is won or lost before it is ever fought,” which emphasizes the importance of preparation before proceeding with the task or process you plan to perform.
Regardless of where your review is being conducted, it’s not the location, location, location that will determine its success, but the process, process, process. After all, it’s called “managed review” for a reason!
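As a rough illustration of the up-front culling step described in the first best practice above: drop exact duplicates by content hash and exclude documents from clearly irrelevant domains before anything reaches reviewers. The field names, hashing approach, and domain list here are hypothetical, not any provider’s actual workflow.

```python
import hashlib

# Hypothetical list of domains agreed to be irrelevant (bulk mail, notifications).
IRRELEVANT_DOMAINS = {"newsletter.example.com", "no-reply.example.com"}

def cull(documents):
    """Remove exact duplicates and documents from irrelevant sender domains."""
    seen_hashes = set()
    kept = []
    for doc in documents:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue  # exact duplicate of a document already kept
        if doc.get("sender_domain") in IRRELEVANT_DOMAINS:
            continue  # bulk mail, not worth human review
        seen_hashes.add(digest)
        kept.append(doc)
    return kept

docs = [
    {"text": "Q3 forecast attached", "sender_domain": "corp.example.com"},
    {"text": "Q3 forecast attached", "sender_domain": "corp.example.com"},  # dupe
    {"text": "Weekly deals!", "sender_domain": "newsletter.example.com"},
]
print(len(cull(docs)))  # 1
```

Production deduplication typically works on standardized hash fields (e.g., MD5/SHA-1 of the native file) computed at processing time; the point is simply that every document removed here is one nobody pays to review.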
eDiscovery and Review
Blog

Building Your Case for Cutting-Edge AI and Analytics in Five Easy Steps
As the amount of data generated by companies increases exponentially each year, leveraging artificial intelligence (AI), analytics, and machine learning is becoming less of an option and more of a necessity for those in the eDiscovery industry. However, some organizations and law firms are still reluctant to utilize more advanced AI technology. There are different reasons for this reluctance, including fear of the learning curve, uncertainty around cost, and unknown return on investment. But where there is uncertainty, there is often great opportunity. Adopting AI provides an excellent opportunity for ambitious legal professionals to act as catalysts for revitalizing their organization’s or law firm’s outdated eDiscovery model. Below, I’ve outlined a simple five-step process that can help you build a business case for bringing on cutting-edge AI solutions to reduce cost, lower risk, and improve win rates for both organizations and law firms.

Step 1: Find the Right Test Case

You will want to choose the best possible test case, one that highlights all the advantages that newer, cutting-edge AI solutions can provide to your eDiscovery program. One of the benefits of newer solutions is that they can be utilized in a much wider variety of cases than older tools. However, when developing a business case to convince reluctant stakeholders, bigger is better. If possible, select a case with a large volume of data. This will enable you to show how effectively your preferred AI solution can cull large volumes of data quickly compared to your current tools and workflows.

Also try to select a case with multiple review issues, like privilege, confidentiality, and protected health information (PHI)/personally identifiable information (PII) concerns. Newer tools hitting the market today offer a much higher degree of efficiency and accuracy because they are able to run multiple algorithms and search within metadata.
This means they are much better at quickly and correctly identifying types of information that would need to be withheld or redacted than older AI models that use a single algorithm to search text alone.

Finally, if possible, choose a case that has some connection to, or overlap with, older cases in your (or your client’s) legal portfolio. For a law firm, this means selecting a case where you have access to older, previously reviewed data from the same client (preferably in the same realm of litigation). For a corporation, this means choosing a case, if possible, that shares a common legal nexus or overlapping data/custodians with past matters. This way, you can leverage the ability of new technology to re-use and analyze past attorney work product on previously collected data.

Step 2: Aggregate the Data

Once you’ve selected the best test case, as well as any previous matters from which you want to analyze data, the AI solution vendor will collect the respective data and aggregate it into a big data environment. A quality vendor should be able to aggregate all data, prior coding, and other key information, including text and metadata, into a single database, even if the previously reviewed data was hosted by different providers in different databases and reviewed by different counsel.

Step 3: Analyze the Data

Once all data is aggregated, it’s time for the fun to begin. Cutting-edge AI and machine learning will analyze all prior attorney decisions from previous data, along with metadata and text features found within all the data. Using this analysis, it can then identify key trends and provide a holistic view of the data you are analyzing.
This type of powerful technology is completely new to the eDiscovery field and something that will certainly catch the eye of your organization or your clients.

Step 4: Showcase the Analytical Results

Once the data has been analyzed, it’s time to showcase the results to key decision makers, whether that is your clients, partners, or in-house eDiscovery stakeholders. Create a presentation that drills down to the most compelling results and clearly illustrates how the tool will create efficiency, lower costs, and mitigate risk, such as:

Large numbers of identical documents that had been previously collected, reviewed, and coded non-responsive multiple times across multiple matters
Large percentages of identical documents picked up by your privilege screen (and thus thrust into costly privilege re-review) that have never actually been coded privileged in any matter
Large numbers of identical documents that were previously tagged as containing privileged or PII information in past matters (thus eliminating the need to review for those issues in the current test case)
Large percentages of documents that have been re-collected and re-reviewed across many matters

Step 5: Present the Cost Reduction

Your closing argument should always focus on the bottom line: how much money will this tool save your firm, client, or company? This should be as easy as taking the compelling analytical results above and calculating their monetary value:

What is the monetary difference between conducting a privilege review in your test case using your traditional privilege screen vs. re-using privilege coding and redactions from previous matters?
What is the monetary difference between conducting an extensive search for PII or PHI in your test case vs. re-using the PII/PHI coding and redactions from previous matters?
How much money would you save by cutting out a large percentage of manual review in the test case by culling the non-responsive documents identified by the tool?
How much money would you save by eliminating the large percentage of privilege “false positives” that the tool identified by analyzing previous attorney work product?
How much money will you (or your client) save in the future by continuing to re-use attorney work product, case after case?

In the end, if you’ve selected the right AI solution, there will be no question that bringing on best-of-breed AI technology will result in a better, more streamlined, and more cost-effective eDiscovery program.
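The work-product re-use that drives several of the savings questions above can be sketched roughly as follows: match documents in the current matter against coding decisions from prior matters by content hash, so anything already coded never goes back through review. The document structures, coding values, and hash-matching approach are illustrative assumptions, not any specific product’s method.

```python
import hashlib

def content_hash(text):
    """Stable fingerprint for a document's extracted text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def reuse_prior_coding(current_docs, prior_coding):
    """Split current docs into already-coded (re-usable) and needs-review."""
    reusable, needs_review = {}, []
    for doc_id, text in current_docs.items():
        prior = prior_coding.get(content_hash(text))
        if prior is not None:
            reusable[doc_id] = prior  # carry forward the earlier coding call
        else:
            needs_review.append(doc_id)
    return reusable, needs_review

# Hypothetical prior-matter coding keyed by content hash.
prior = {content_hash("Draft merger memo"): "privileged"}
current = {"DOC-1": "Draft merger memo", "DOC-2": "Lunch on Friday?"}
coded, todo = reuse_prior_coding(current, prior)
print(coded)  # {'DOC-1': 'privileged'}
print(todo)   # ['DOC-2']
```

Every document that lands in the re-usable bucket is a document whose review cost you can subtract in the Step 5 calculation.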
AI and Analytics
Blog

Advanced Analytics – The Key to Mitigating Big Data Risks
Big data sets are the “new normal” of discovery and bring with them six sinister large-data-set challenges, as recently detailed in my colleague Nick’s article. These challenges range from classics like overly broad privilege screens to newer risks in ensuring sensitive information (such as personally identifiable information (PII) or proprietary information such as source code) does not inadvertently make its way into the hands of opposing parties or government regulators. While these challenges may seem insurmountable due to ever-increasing data volumes (and tend to keep discovery program managers and counsel up at night), there are new solutions that can help mitigate these risks and optimize workflows.

As I previously wrote, eDiscovery is actually a big data challenge. Advances in AI and machine learning, when applied to eDiscovery big data, can help mitigate and reduce these sinister risks by breaking down the silos of individual cases, learning from a wealth of prior case data, and then transferring those learnings to new cases. The capability to analyze and understand large data sets at scale, combined with state-of-the-art methods, provides a number of benefits, five of which I have outlined below.

Pinpointing Sensitive Information - Advances in deep learning and natural language processing have now made pinpointing sensitive content achievable. A company’s most confidential content could be lying in plain sight within its electronic data and yet go completely undetected. Imagine a spreadsheet listing customers, dates of birth, and social security numbers attached to an email between sales reps. What if you are a technology company and two developers are emailing each other snippets of your company’s source code? Now that digital communication is the dominant form of communication within workplaces, situations like this are ever-present, and it is very challenging for review teams to effectively identify and triage this content.
To solve this challenge, advanced analytics can learn from massive amounts of publicly available and computer-generated data and then be fine-tuned to specific data sets using a recent breakthrough in natural language processing (NLP) called “transfer learning.” In addition, at the core of big data is the capability to process text at scale. Combining these two techniques enables precise algorithms to evaluate massive amounts of discovery data, pinpoint sensitive data elements, and elevate them to review teams for a targeted review workflow.

Prioritizing the Right Documents - Advanced analytics can learn both key trends and deep insights about your documents and review criteria. A typical search-term-based approach to identifying potentially responsive or privileged content provides a binary output: documents either hit on a search term or they do not. Document review workflows are predicated on this concept, often leading to suboptimal workflows that both over-identify documents that are out of scope and miss documents that should be reviewed. Advanced analytics provide a range of outcomes that enable review teams to create targeted workflow streams tailored to the risk at hand. Descriptive analysis of data can generate human-interpretable rules that help organize documents, such as “all documents with more than X recipients are never privileged” or “99.9% of the time, documents coming from the following domains are never responsive.” Deep learning-based classifiers, again using transfer learning, can generalize language from open source content and then be fine-tuned to specific review data sets. Having a combination of descriptive and predictive analytics provides a range of options and gives review teams the ability to prioritize the right content, rather than just the next random document.
Review teams can now concentrate on the most important material while deprioritizing less important content for a later effort.

Achieving Work-Product Consistency - Big data and advanced analytics approaches can ensure the same document, or similar documents, are treated consistently across cases. Corporations regularly collect, process, and review the same data across cases over and over again, even when the cases are not related. Keeping document treatment consistent across these matters is obviously extremely important when dealing with privileged content, but it also matters for responsiveness across related cases, such as a multi-district litigation. With the standard approach, cases sit in silos without any connectivity between them to enable consistent treatment. A big data approach enables connectivity between cases, using hub-and-spoke techniques to communicate and transmit learnings and work product between them. Work product from other cases, such as coding calls, redactions, and even production information, can be used to inform workflows on the next case. For big data, activities like this are table stakes.

Mitigating Risk - What do all of these approaches have in common? At its core, big data analytics is an engine for mitigating risk. Having the ability to pinpoint sensitive data, prioritize what you look at, and ensure consistency across your cases is a no-brainer. This all may sound like a big change, but in reality it’s fairly seamless to implement. Instead of simply batching out documents that hit on an outdated privilege screen for privilege review, review managers can use a combination of analytics and fine-tuned privilege screen hits. Review then proceeds largely as it does today, just with the right analytics to give reviewers the context needed to make the best decision.

Reducing Cost - The other side of the coin is cost savings.
Every case has a different cost and risk profile, and advanced analytics should provide a range of options to support your decision-making process on where to set the lever. Do you really need to review each of these categories in full, or would an alternative scenario based on sampling high-volume, low-risk documents be a more cost-effective and defensible approach? The point is that a better and more holistic view of your data provides an opportunity to make data-driven decisions that reduce costs.

One key tip to remember: you do not need to implement all of this at once! Start by identifying a key area where you want to make improvements, determine how you can measure the current performance of the process, then apply some of these methods and measure the results. Innovation is about getting one win in order to perpetuate the next.

If you are interested in this topic or just love to talk about big data and analytics, feel free to reach out to me at KSobylak@lighthouseglobal.com.
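The sampling alternative mentioned above can be illustrated in a few lines: instead of reviewing a high-volume, low-risk bucket in full, review a random sample and extrapolate. The bucket size, responsiveness rate, and sample size here are invented for illustration; a defensible protocol would also compute confidence intervals.

```python
# A minimal, hypothetical sketch of sampling a low-risk bucket instead of
# reviewing it in full. Sizes and rates are illustrative only.
import random

random.seed(7)  # deterministic for the example

# Hypothetical low-risk bucket: 10,000 documents, a small fraction responsive.
bucket = ["responsive"] * 200 + ["not responsive"] * 9800
random.shuffle(bucket)

# Review only a 400-document random sample rather than all 10,000.
sample = random.sample(bucket, 400)
rate = sample.count("responsive") / len(sample)

# Extrapolate the sample rate to the whole bucket.
estimated_responsive = rate * len(bucket)
```

The trade-off is explicit: 400 reviewed documents instead of 10,000, in exchange for a statistical estimate rather than an exact count - exactly the kind of cost-versus-risk lever the paragraph above describes.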

Automating Legal Operations - A DIY Model
Legal department automation may be top of mind for you, as it is for many other legal operations professionals; however, you might be dependent on IT or engineering resources to execute, or perhaps you are struggling with change management and unable to implement something new. You are not alone. These were the top two blockers to building out an efficient process within legal departments, as shared by recent CLOC conference attendees. The good news is that off-the-shelf technologies have advanced to the point where you may not need any time from those resources and may be able to manage automation without changing user behavior. With "no code" automation, you can execute end-to-end automation for your legal operations department yourself!

What is "No Code" Automation?

As recently highlighted in Forbes magazine, "no-code platforms feature prebuilt drag-and-drop activities and tasks that facilitate integration at the business user level." This is not the "low code" automation that has been around for decades. Low code refers to using existing code, whether from open source or from other internal development, to reduce the need to write new code. Low code allows you to build faster but still requires knowledge of code. With "no code," however, you do not need any understanding of coding at all. What this really means is that no-code platforms are so user-friendly that even a lawyer, or legal operations professional, can create automated actions… I know, because I am a lawyer who has successfully done this!

But How Does this Apply in Legal Operations?

The short answer is that it lets you, the legal operations professional, automate workflows with little external help. Some legal departments are already taking advantage of this technology.
At a recent CLOC conference, Google shared how it had leveraged "no code" automation to remove the change management process for ethics and compliance in the code of conduct, conflict of interest, and anti-bribery and corruption areas. With respect to outside counsel management, Google was similarly able to remove IT/engineering dependencies for conflict waiver approvals, outside counsel engagements, and matter creation. For more details, watch Google describe their no-code automation use cases.

Google's workflow automation is impressive and more mature than what most of us who are just starting can build, so I wanted to share a simple example. A commonplace challenge for smaller legal teams is managing tasks – ensuring all legal requests are captured and assigned to someone on the legal team. Many teams are dealing with dozens, or hundreds, of emails, and it can be cumbersome to look through those to determine who is working on what. Inevitably, some of those requests get missed. It is also challenging to later report on legal requests – e.g., what types of requests the legal team receives daily, how long they take to resolve, and how many requests each person can work on. A "no code" platform can help. For example, you can connect your email to a shared Excel spreadsheet that captures all legal tasks. You would do this by creating a process that has the tool log each email sent to a certain address (e.g., legal@insertconame.com) on an Excel spreadsheet in a shared location (e.g., LegalTasks.xls). You would "map" parts of the email to columns in the spreadsheet – for example, the sender, the date, the time, the subject, and the body. You can even ask users who are sending requests to that address to put the type of request in the subject line. Your legal team can then check the shared spreadsheet daily and "check out" tasks by putting their initials in another column. Once a task is complete, they would also mark that on the spreadsheet.
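For the curious, the field-to-column "mapping" a no-code platform performs behind the scenes looks roughly like the sketch below. The email structure, column names, and the file name LegalTasks.csv are hypothetical stand-ins (a CSV is used here in place of the Excel file for a self-contained example); with a no-code tool you configure this visually rather than writing it.

```python
# A hypothetical sketch of what the no-code email-to-spreadsheet workflow
# automates: mapping email fields to columns in a shared task log.
import csv

COLUMNS = ["sender", "date", "time", "subject", "body", "assignee", "status"]

def log_request(email: dict, path: str = "LegalTasks.csv") -> None:
    """Append one legal request to the shared task log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "sender": email["from"],
            "date": email["date"],
            "time": email["time"],
            "subject": email["subject"],  # request type goes in the subject line
            "body": email["body"],
            "assignee": "",               # team members "check out" tasks here
            "status": "open",
        })

log_request({
    "from": "alice@example.com", "date": "2024-05-01", "time": "09:14",
    "subject": "NDA review", "body": "Please review the attached NDA.",
})
```

Every incoming request becomes one row with a blank assignee column, which is exactly what lets the team claim tasks with their initials and run pivot reports later.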
Capturing all this information will allow you to see who is working on what, ensure that all requests are being worked on, and use pivot reporting on all legal tasks later on. Although this is a really simple use case with basic tools, it is also one that takes only a few minutes to set up and can measurably improve organization among legal team members.

You can use "no code" automation in most areas of legal operations department automation. Some of the most common things to automate with "no code" are as follows:

- Legal approvals
- Document generation
- Evidence collection
- Tracking of policy acceptance

Many "no code" companies work with legal departments, so they may have experience with legal operations use cases. Be sure to ask how they have seen their technologies deployed in other legal departments.

Can I Really Do This Without Other Departments?

About 90% of the work can be done by you or your team, and in some cases, even 100%. However, sometimes connecting the tools or even installing the software has to be done by your IT and development teams. This is particularly true if you are connecting to proprietary software or have a complex infrastructure. That 10% of work required from these teams, however, is much smaller than asking those resources to create the automations from scratch. In addition, you often do not have to change user behavior, so change management is removed as a blocker.

I encourage you to explore using "no code" automation in your legal department. Once you start, you'll be glad you tried. I would be excited to hear your experiences with "no code" in legal operations. If you are using it, drop me a line at djones@lighthouseglobal.com and tell me how.