Lighthouse Blog
Read the latest insights from industry experts on the rapidly evolving legal and technology landscapes with topics including strategic and technology-driven approaches to eDiscovery, innovation in artificial intelligence and analytics, modern data challenges, and more.
Privilege Mishaps and eDiscovery: Lessons Learned
Discovery in litigation or investigations invariably leads to concerns over the protection of privileged information. With today's often massive data volumes, locating emails and documents that may contain legal advice or other confidential attorney-client communications can be a needle-in-a-haystack exercise. There is little choice but to put forth best efforts to find the needles.

Whether you are dealing with thousands (or millions) of documents, emails, and other forms of electronically stored information, or just a scant few, identifying privileged information and creating a sufficient privilege log can be a challenge. Getting it right can be a headache, and an expensive one at that. But get it wrong and your side can face time-consuming and costly motion practice that doesn't leave you on the winning end. A look at past mishaps is a useful reminder that this is a nuanced process that deserves serious attention.

Which Entity Is Entitled to Privilege Protection?

A strong grasp of what constitutes privileged content in a matter is important. Just as important? Knowing who the client is. It may seem obvious, but history suggests that sometimes it is not, especially when an in-house legal department or multiple entities are involved.

Consider the case of Estate of Paterno v. NCAA. The court rejected Penn State's claim of privilege over documents Louis Freeh's law firm generated during an internal investigation. Why? Because Penn State wasn't the firm's client. The firm's engagement letter said it was retained to represent the Special Investigations Task Force that Penn State formed after Pennsylvania charged Jerry Sandusky with several sex offenses. There was a distinct difference between the university and its task force.

The lesson?
Don't overlook the most fundamental question of who the client is when considering privilege.

The Trap of Over-Designating Documents

What kind of content involving the defined client(s) is privileged? This can be a tough question. You can't claim privilege for every email or document a lawyer's name is on, especially for in-house counsel, who usually serve both a business and a legal function. A communication must be confidential and relate to legal advice in order to be considered privileged.

Take Anderson v. Trustees of Dartmouth College as an example. In this matter, a student who had been expelled in a disciplinary action filed suit against Dartmouth. Dissatisfied with what discovery revealed, the student (representing himself) filed a motion to compel, asking the judge to conduct an in camera review of what Dartmouth had claimed was privileged information. The court found that much of the information being withheld for privilege did not, in fact, constitute legal advice, and that Dartmouth's privilege claim exceeded privilege's intended purpose.

Dartmouth made a few other unfortunate mistakes. It labeled entire email threads privileged instead of redacting the specific parts identified as privileged. It also labeled every forwarded or cc'ed email the in-house counsel's name was on as privileged, without proving the attorney was acting as a legal advisor. To be sure, identifying only the potentially privileged parts of an email thread, which could include any number of direct, forwarded, and cc'd recipients and an abundance of inclusive yet non-privileged content, is not an easy task; unfortunately, it is a necessary one.

The lesson? If the goal of discovery is to get to the truth, which is foundational in American jurisprudence, courts are likely to construe privilege somewhat narrowly and allow more rather than fewer documents to see the light of day.
In-house legal departments must be especially careful in their designations given the flow and volume of communications related to both business and legal matters, where the distinction is sometimes difficult to make.

Be Ready to Back Up Your Claim

No matter how careful you are during the discovery process, the other party might challenge your claim of privilege on some documents. "Because we said so" (aka ipse dixit) is not a compelling argument. In LPD New York, LLC v. adidas America, Inc., adidas claimed certain documents were privileged. When challenged, adidas' response was to say LPD's position wasn't supported by law. The court said: Not good enough. Adidas had the burden to prove the attorney-client privilege applied and to respond to LPD's position in a meaningful way.

The lesson? Businesses should be prepared to show that an in-house lawyer was acting in their capacity as a legal advisor before claiming privilege.

A Protection Not Used Often Enough: Rule 502(d)

Mistakes do happen, however, and sometimes the other party receives information they shouldn't through inadvertent disclosure. With the added protection of a FRE 502(d) order, legal teams are in a strong position to protect privileged information and will be in good shape to get that information back. Former United States Magistrate Judge Andrew Peck, renowned in eDiscovery circles, is a well-known advocate of this order.

The rule says: "A federal court may order that the privilege or protection is not waived by disclosure connected with the litigation pending before the court — in which event the disclosure is also not a waiver in any other federal or state proceeding."

Without a 502(d) order in place, a mistake usually means having to go back and forth with your opponent, arguing the elements under 502(b).
If you're trying to claw back information under 502(b), you have to spend time and money proving the disclosure was inadvertent, that you took reasonable steps to prevent disclosure, and that you promptly took steps to rectify the error. It's an argument you might not win.

Apple Inc. v. Qualcomm Incorporated is a good example. In 2018, Apple lost its attempt to claw back certain documents it mistakenly handed over to Qualcomm during discovery in a patent lawsuit. The judge found Apple didn't meet the requirements of 502(b). Had Apple established a 502(d) order to begin with, 502(b) might not have come into play at all.

The lesson? Consider Judge Peck's admonition and get a 502(d) order to provide protection against an inadvertent privilege waiver.

Advances in Privilege Identification

Luckily, gone are the days when millions of documents have to be reviewed by hand (or eyes) alone. Technological tools and machine learning algorithms can take carefully constructed privilege instructions and find potentially privileged information with a high degree of accuracy, reducing the effort that lawyers must expend to make final privilege calls.

Although automation doesn't completely take away the need for eyes on the review process, the benefits of machine learning and advanced technology tools are invaluable during a high-stakes process that needs timely, accurate results. Buyer beware, however: such methods require expertise to implement and rigorous attention to quality control and testing. When you're able to accurately identify privileged information while reducing the stress of creating a privilege log that will hold up in court, you lessen the risk of a challenge. And if a challenge should come, you have the data to back up your claims.
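As a rough illustration of the kind of first-pass screen that automated privilege identification builds on, the sketch below flags documents where an attorney is a participant and privilege-related language appears in the body. The attorney addresses, domains, and terms are hypothetical placeholders, not a real instruction set, and production workflows layer metadata analysis and machine learning on top of rules like these.

```python
# Minimal rule-based privilege screen (sketch). All names/domains/terms are
# invented placeholders for illustration only.
ATTORNEY_DOMAINS = {"lawfirm-example.com"}              # assumed outside-counsel domain
IN_HOUSE_ATTORNEYS = {"jane.counsel@acme-example.com"}  # assumed in-house attorney
PRIVILEGE_TERMS = {"legal advice", "attorney-client", "work product"}

def flag_for_privilege_review(doc):
    """Return True if a document should be routed to privilege review."""
    participants = {p.lower() for p in doc["from"] | doc["to"] | doc["cc"]}
    attorney_involved = any(
        p in IN_HOUSE_ATTORNEYS or p.split("@")[-1] in ATTORNEY_DOMAINS
        for p in participants
    )
    privilege_language = any(t in doc["body"].lower() for t in PRIVILEGE_TERMS)
    # Both conditions must hold: an attorney on the thread AND legal-advice language.
    return attorney_involved and privilege_language

doc = {
    "from": {"jane.counsel@acme-example.com"},
    "to": {"ceo@acme-example.com"},
    "cc": set(),
    "body": "Per your question, my legal advice is to hold the contract.",
}
print(flag_for_privilege_review(doc))   # True
```

A screen like this over-includes by design; the flagged set still needs attorney review, which is exactly the step that quality control and testing must validate.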
Overcoming eDiscovery Trepidation - Part I: The Challenge
In this two-part series, I interview Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP about his thoughts on the state of eDiscovery within law firms today, including lessons learned and best practices to help attorneys overcome their trepidation of electronic discovery and build a better litigation practice. This first blog focuses on the history of eDiscovery and the reasons that attorneys may still try to avoid it, often to the detriment of their clients and their overall practice.

Introduction

The term "eDiscovery" (i.e., electronic discovery) was coined circa 2000 and received significant consideration by The Sedona Conference and others well in advance of November 2006. That's when the U.S. Supreme Court amended the Federal Rules of Civil Procedure to include electronically stored information (ESI), which was widely recognized as categorically different from data printed on paper. The amendments specifically mandated that electronic communications (like email and chat) be preserved in anticipation of litigation and produced when relevant. In doing so, they codified concepts explored in Judge Shira Scheindlin's groundbreaking Zubulake v. UBS Warburg decisions.

By 2012, the exploding volumes of data led technologists assisting attorneys to employ various forms of artificial intelligence (AI) so that data analysis could be accomplished in blocks of time still affordable to litigants. The use of predictive coding and other forms of technology-assisted review (TAR) of ESI became recognized in U.S. courts. By 2013, updates to the American Bar Association (ABA) Model Rules of Professional Conduct officially required attorneys to stay current on "the benefits and risks" of developing technologies.
By 2015, the FRCP was amended again to help limit eDiscovery scope to what is relevant to the claims and defenses asserted by the parties and "proportional to the needs of the case," as well as to normalize judicial treatment of spoliation and related sanctions associated with ESI evidence. In the same year, California issued a formal ethics opinion obligating attorneys practicing in California to stay current with ever-changing eDiscovery technologies and workflows in order to comply with their ethical obligation to provide legal services competently.

In the 15 years since those first FRCP amendments designed to deal with the unique characteristics of ESI, we've seen revolutionary changes in the way people communicate electronically within organizations, as well as explosive growth in the volume and variety of data types as we have entered the era of Big Data. From the rise of email, social media, and chat as dominant forms of interpersonal communication, to organizations moving their data to the cloud, to an explosion of ever-changing new data sources (smart devices, iPhones, collaboration tools, etc.), the volume and variety of data make understanding eDiscovery's role in litigation more important than ever.

And yet, despite more than 20 years of exposure, the challenges of eDiscovery (including managing new data forms, understanding eDiscovery technology, and adhering to federal and state eDiscovery standards) continue to generate angst for most practitioners.

So why, in 2021, are smart, sophisticated lawyers still uncomfortable addressing and responding to eDiscovery demands? To find out, I went to one of the leading experts in eDiscovery today, Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP. Mr. Calhoun has over 40 years of experience in litigation and counseling, and he currently serves as Chair of the firm's Electronic Discovery, Information Management & Compliance Practice.
Over the years he has found creative solutions to eDiscovery challenges, like having a court enter a case management order requiring all 42 parties in a complex construction defect case to use a single technology provider, which dropped the technology costs to less than 2.5% of what they would have been had each party employed its own vendor. In another case (which did not involve privileged communications), he was able to use predictive coding to rank 600,000 documents and place them into tranches from which samples were drawn to determine which tranches could be produced without further review. It was ultimately determined that about 35,000 documents would not have to be reviewed after having put eyes on fewer than 10,000 of the original 600,000.

I sat down with Mr. Calhoun to discuss his practice and his views of the legal and eDiscovery industries, and to try to get to the bottom of how attorneys can master the challenges posed by eDiscovery without having to devote the time needed to become an expert in the field.

Let's get right down to it. With all the helpful eDiscovery technology that has evolved in the market over the last 10 years, why do you think eDiscovery still poses such a challenge for attorneys today?

Well, right off the bat, I think you're missing the mark a bit by focusing your inquiry solely on eDiscovery technology. The issue for many attorneys facing an eDiscovery challenge today is not "what is the best eDiscovery technology?" because many attorneys don't believe any eDiscovery technology is the best "solution." Many believe it is the problem. No technology, regardless of its efficacy, can provide value if it is not used. The issue is more fundamental.
It's not about the technology; it is about the fear of the technology, the fear of not being able to use it as effectively as competitors, and the fear of incurring unnecessary costs while blowing budgets and alienating clients. Practitioners fear eDiscovery will become a time and money drain, and attorneys fear that those issues can ultimately cost them clients.

Technology may, in fact, be able to solve many of their problems, but most attorneys are not living and breathing eDiscovery on a day-to-day basis (and, frankly, don't want to). For a variety of reasons, most attorneys don't or can't make time to research and learn about new technologies even when they're faced with a discovery challenge. Even attorneys who do have the inclination and aptitude to deal with the mathematics and statistical requirements of a well-planned workflow, who understand how databases work, and who are unfazed by algorithms and other forms of AI, often don't make the time to evaluate new technology because their plates are already full providing other services needed by their clients. And most attorneys became lawyers because they had little interest in mathematics, statistics, and other sciences, so they don't believe they have the aptitude necessary to deal with eDiscovery (which isn't really true).

This means that when they're facing gigabytes or even terabytes of data that have to be analyzed in a matter of weeks, they often panic. Many lawyers look for a way to make the problem go away. Sometimes they agree with opposing counsel not to exchange electronic data; other times they try to bury the problem with a settlement. Neither approach serves the client, who is entitled to an expeditious, cost-effective, and just resolution of the litigation.

Can you talk more about the service clients are entitled to, from an eDiscovery perspective?
By that, I mean: can you explain the legal rules, regulations, and obligations that are implicated by eDiscovery, and how those may impact an attorney facing an electronic discovery request?

Sure. Under Rule 1 of the FRCP and the laws of most, if not all, states, clients are entitled to a just resolution of the litigation. And ignoring most of the electronic evidence about a dispute because a lawyer finds dealing with it to be problematic rarely affords a client a just result. In many cases, the price the client pays for counsel's ignorance is a surcharge to terminate the litigation. And counsel's desire to avoid the challenge of eDiscovery very often amounts to a breach of the ethical duty to provide competent legal services.

The ABA Model Rules (as well as the ethical rules and opinions in the majority of states) also address the issue. The Model Rules offer a practitioner three alternatives when undertaking to represent a client in a case that involves ESI (which almost every case does). To meet his or her ethical obligation to provide competent legal services, the practitioner can: (1) become an expert in eDiscovery matters; (2) team up with an attorney or consultant who has the expertise; or (3) decline the engagement. Because comparatively few attorneys have the aptitude to become eDiscovery experts, and no one who wants to practice law can do so by turning down virtually all potential engagements, the only practical solution for most practitioners is finding an eDiscovery buddy.

In the end, I think attorneys are just looking for ways to make their lives (and thereby their clients' lives) easier, and they see eDiscovery as threatening to make their lives much harder.
Fortunately, that doesn't have to be the case.

So, it sounds like you're saying that despite the fact that it may cost them clients, there are sophisticated attorneys out there who are still eschewing legal technology and responding to discovery requests the way they did when most discovery requests involved paper documents?

Absolutely there are. And I can empathize with their thought process, which is usually something along the lines of: "I don't understand eDiscovery technology and I'm facing a tight discovery deadline. I do know how to create PDFs from scanned copies of paper documents and redact them, if necessary. I'm just going to use the method I know and trust." While this is an understandable way to think, it immediately imposes on clients the cost of inefficient litigation and settlements or judgments that could have been reduced or avoided if only the evidence had been gathered. Ultimately, when clients recognize that their counsel's fear of eDiscovery is imposing a cost on them, that attorney will lose the client. In other words, counsel who refuses to delve into ESI because it is hard is like a person who lost car keys in a dark alley but insists on looking only under the streetlight because it is easier and safer than looking in the dark.

That's such a great analogy. Do you have any real-world examples that may help folks understand the plight of an attorney who is basically trying to ignore ESI?

Sure. Here's a great example: Years ago, my good friend and partner told me he would retire without ever having to learn about eDiscovery. My partner is a very successful attorney with a great aptitude for putting clients at ease. But about a week after expressing that thought, he came to me with 13 five-inch three-ring binders. He wanted help finding contract paralegals or attorneys to prepare a privilege log listing all the documents in the binders.
An arbitrator had ordered that if he did not have a privilege log done in a week, his expert would not be able to testify. His "solution" was to rent or buy a bunch of dictating machines, have the reviewers dictate the information about the documents, and pay word processors overtime to transcribe the dictation into a privilege log. I asked what was in the binders. Every document was an email thread, and many had families. My partner had received the data as a load file, but he had the duplications department print the contents rather than put them into a review platform. Fortunately, the CD on which the data was delivered was still in the file.

I can tell this story now because he has since turned into quite the eDiscovery evangelist, but that is exactly the type of situation I'm referring to: smart, sophisticated attorneys who are just trying to meet a deadline and stay within budget will do whatever it takes to get the documents or other deliverable (e.g., a privilege log) out the door. And without the proper training, unfortunately, the solution is to throw more bodies at the problem, which invariably ends up being more costly than using technology properly.

Can you dive a bit deeper there? Explain how performing discovery the old-fashioned way on a small case like that would cost more money than performing it via dedicated eDiscovery technology.

Well, let me finish my story and then we'll compare the cost of using 20th and 21st century technologies to accomplish the same task. As I said, when I agreed to help my partner meet his deadline, I discovered all the notebooks were filled with printed copies of email threads and attachments. My partner had received a load file with fewer than 2 GB of data and given it to the duplications department with instructions to print the data so he could read it.
We gave the disk to an eDiscovery provider, and they created a spreadsheet using the email header metadata to populate the log information: who the record was from, who it was to, who was copied (whether in the clear or blind), when it was created, what subject was addressed, and so on. A column was added for the privilege(s) associated with the documents. Those before a certain date were attorney-client only; those after litigation became foreseeable were attorney-client and work product. That made populating the privilege column a snap once the documents were chronologically arranged. The cost to generate the spreadsheet was a few hundred dollars. Three in-house paralegals were able to QC, proofread, and finalize the log in less than three days for a cost of about $2,000.

Had we done it the old-fashioned way, my partner was looking at having 25 or 30 people dictating for five days. If the reviewers were all outsourced, the cost would have been $12,000 to $15,000. He planned to use a mix of in-house and contract personnel, so the cost would have been 30% to 50% higher. The transcription process would have added another $10,000. Copying the resulting privilege log, which would have been about 500 pages long with 10 entries per page, for the four parties and the arbitrator would have cost about $300. So even 10 years ago, the cost of doing things the old-fashioned way would have been about $35,000. The technology-assisted solution was about $2,500.

Stay tuned for the second blog in this series, where we delve deeper into how attorneys can save their clients money, achieve better outcomes, and gain more repeat business once they overcome common misconceptions around eDiscovery technology and costs.
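The cost comparison in the anecdote above can be sketched as simple arithmetic over the figures Mr. Calhoun quotes. This is only an illustration of his estimate, using the high end of the outsourced-review range and the 50% mixed-staffing premium.

```python
# Rough cost comparison using the figures quoted above (all values approximate).
outsourced_review = 15_000      # high end: 25-30 reviewers dictating for five days
mixed_staff_premium = 0.5       # in-house/contract mix ran 30-50% higher
transcription = 10_000          # word processors transcribing the dictation
copying = 300                   # ~500-page log for four parties and the arbitrator

old_fashioned = outsourced_review * (1 + mixed_staff_premium) + transcription + copying

metadata_spreadsheet = 500      # "a few hundred dollars" to the eDiscovery provider
paralegal_qc = 2_000            # three in-house paralegals, under three days

technology_assisted = metadata_spreadsheet + paralegal_qc

print(round(old_fashioned))     # 32800, in line with the ~$35,000 estimate
print(technology_assisted)      # 2500
```

Even with generous rounding, the manual workflow comes out more than an order of magnitude costlier than the metadata-driven one.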
If you would like to discuss this topic further, please reach out to Casey at cvanveen@lighthouseglobal.com and Gordon at Gordon.Calhoun@lewisbrisbois.com.
Making the Case for Information Governance and Why You Should Address it Now
You know that cleaning out the garage is a good idea. You would have more storage space and would even be able to put the car into the garage, which is better for security, for keeping it clean, and for ensuring an easy start on a frozen winter morning. Even if you don't have a garage, you likely have an equivalent, such as a loft or that cupboard in the kitchen. Yet somehow these tasks are often put off and rarely top the "to do" list. Information governance often falls into this category: a great idea that struggles to make it to the top ahead of competing corporate priorities.

For both the garage and information governance, the issue is the creation of a compelling business case. For the garage, the arrival of a new car or a spate of car thefts in the area is enough to push this task to the front. For information governance, the business case might be that a company is enlightened enough to realize that its data is an under-utilized asset, or it might be a question of time and effort being wasted in the struggle to find information when needed. However, these positive drivers might not be enough. Sometimes you need to look at the risk if nothing is done.

In our view, building a strong business case for information governance requires a combination of both the carrot and the stick. This blog will focus on the stick because that is often the hardest factor to spell out in clear terms. We will take you on a journey through the GDPR fines that have been levied since the regulation came into force in May 2018, show how European regulators see information governance as an essential element of a company's data protection obligations, and give you the necessary background to prepare your business case.

Why address information governance now?

It is worth pausing to ensure we are all talking about the same thing, so let's define information governance. You can see Gartner's definition here.
For our purposes, we can talk in simpler terms and define information governance as "the people, processes, and technology involved in seeking to ensure the effective and efficient creation, storage, use, retention, and deletion of information."

Now, let's turn to the GDPR. The total of fines under the GDPR, since it came into force in May 2018, approaches €300m. The big fines usually relate to processing personal data without good reason or consent (e.g. Google - €50m), or to inadequate security leading to data breaches (e.g. British Airways - £20m). As a result, many organizations prioritize this type of work.

However, after a thorough trawl, we see a growing body of decisions where fines have been imposed by regulators for information governance failures. In our view, the top 5 reported "information governance" fines are:

€15m Deutsche Wohnen (Berlin DPA) – set aside on procedural grounds
€2.25m Carrefour (France)
€290,000 HUF (Hungary)
€250,000 Spartoo (France)
€160,000 Taxa4x35 (Denmark)

GDPR fines, in detail

The largest fine is the Deutsche Wohnen matter. In 2017, the Berlin Data Protection Authority (DPA) investigated Deutsche Wohnen and found its data protection policies to be inadequate. Specifically, personal data was being stored without a necessary reason, and some of it was being retained longer than necessary. In 2019, the DPA conducted a follow-up investigation, found these issues had not been sufficiently remedied, and issued a fine of €15m. The Berlin DPA explained that Deutsche Wohnen could have readily complied by implementing an archiving system that separates data with different retention periods, thereby allowing differentiated deletion periods; such solutions are commercially available. In February 2021, Criminal Chamber 26 of the District Court of Berlin closed the proceedings on the basis that the decision was invalid and not sufficiently substantiated.
The Berlin DPA had not specified the acts by the management of the company that supposedly led to a violation of the GDPR. The Berlin DPA has announced it would ask the public prosecutor to file an appeal. It would be a mistake to interpret the nullification of the fine as evidence that information governance and data retention are not important issues for DPAs. Such an interpretation would ignore the fact that there was no criticism of the substance of the findings made by the Berlin DPA in relation to Deutsche Wohnen's approach to data retention.

Holding data without a necessary purpose, or not actively deleting data, has been a theme of fines by other DPAs as well. In Denmark, the Data Protection Authority recommended fines for similar inadequacies as follows:

1.2m DKK (€160,000) on Taxa4x35. A DPA inspection discovered that although customer names were deleted after 2 years, their telephone numbers remained for 5 (as a key field in the CRM database).

1.1m DKK (€150,000) on Arp-Hansen Hotel Group. Personal data was being stored longer than was necessary and in breach of Arp-Hansen's own retention policies.

1.5m DKK (€200,000) on ID Design.
A routine DPA inspection revealed old customer data not being adequately deleted. Although, like Deutsche Wohnen, this fine was subsequently reduced on technical grounds, the commentary on the corporate information governance policies still holds.

In France, three fines have been imposed relating to holding customer data well past what the regulators deemed necessary:

In the Carrefour matter, there was a fine of €2.25m for various infringements, including that Carrefour had retained the data of more than 28 million inactive customers, through its customer loyalty programme, for an excessive period.

In SERGIC, there was a fine of €400,000 for various infringements, including that SERGIC had stored the documents of unsuccessful rental candidates beyond the time necessary to achieve the purpose for which the data was collected and processed.

In Spartoo, there was a fine of €250,000 for reasons including that Spartoo retained data for longer than was necessary for more than 3 million customers. In Spartoo, the regulators also called out that the company had not set up a retention period for customer and prospect data, did not regularly erase personal data, and retained names and passwords in a non-anonymised form for over 5 years.

Although the authorities in France and Denmark have been the most active, they are not alone. In Hungary, HUF was issued a fine of approximately €290,000 based on the absence of a retention policy for a database containing personal data. And in Germany, Delivery Hero failed to delete accounts of former customers who had not been active on the company's delivery service platform for years and was fined €195,000.

Other authorities may not yet have imposed fines, but their attention is turning in the direction of information governance. A number of DPAs have issued guidance whose scope includes data retention (e.g.
the Irish DPA, in September 2020, on how long COVID contact details should be retained; the French DPA, in October 2020, on how long union-member files should be retained).

How to get started on your business case

There is a genuine threat to companies stalling in relation to information governance, particularly around personal data. The decisions to date represent a small percentage of the activity in this area, as many violations are dealt with by regulators directly. We don't know what, if any, settlements have been agreed upon, but the decisions we have located are helpful and instructive for building the business case for prioritizing this work.

The first thing to do is create an internal overview of why this area matters. Use the above to show that there is risk and that regulators are paying attention; hopefully, our overview will help you to identify the size of the stick. As to the carrot, that will be very company-specific, but our clients who have successfully made the case focus on the efficiency gains that can be made if information is properly governed, as well as the opportunity to mine their own information more effectively for its real business value. Next, take a look at your policies and the areas that may require adjustment based on the above in order to gain some insight into the scale of the activity. Now your business case should be taking shape. You might also consider looking wider than the GDPR, such as at the increasing number of state data protection frameworks within the US.

We recognize this process is an oversimplification and each step requires a significant time investment by your organization, but spending time focusing on the necessity of retaining personal data, as well as the length of retention (and subsequent deletion), is a critical element in minimizing your risk.
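The "differentiated deletion periods" remedy the Berlin DPA described can be sketched in a few lines: each class of record carries its own retention period, so expired records can be deleted selectively rather than held indefinitely. The record classes and periods below are invented examples, not a recommended schedule.

```python
# Sketch of class-based retention with differentiated deletion periods.
# Classes and periods are hypothetical examples only.
from datetime import date, timedelta

RETENTION_DAYS = {
    "customer_contact": 2 * 365,    # e.g. delete contact data two years after creation
    "invoice": 10 * 365,            # e.g. a longer statutory retention obligation
    "marketing_consent": 365,
}

def expired(record, today):
    """True if the record has outlived its class's retention period."""
    limit = timedelta(days=RETENTION_DAYS[record["class"]])
    return today - record["created"] > limit

records = [
    {"id": 1, "class": "customer_contact", "created": date(2018, 1, 1)},
    {"id": 2, "class": "invoice", "created": date(2018, 1, 1)},
]
today = date(2021, 6, 1)
to_delete = [r["id"] for r in records if expired(r, today)]
print(to_delete)   # [1]: the contact record is past its two-year period; the invoice is not
```

The point is the structure, not the code: once records are tagged with a retention class, "delete what has expired" becomes a routine, auditable job rather than a one-off cleanup.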
Analytics and Predictive Coding Technology for Corporate Attorneys: Demystifying the Jargon
Below is a featured article written by Jennifer Swanton of Medtronic, Shannon Capone Kirk of Ropes & Gray, and John Del Piero of Lighthouse for Legaltech News.

Despite the traditional narrative that lawyers are hesitant to embrace technology, many in-house legal departments and their outside service providers are embracing the use of what is generally referred to as artificial intelligence (AI). In litigation and internal investigations, this translates more specifically into conceptual analytics and predictive coding (also referred to as continuous active learning, or CAL), two of the more advanced technological innovations in the litigation space and corporate America.

This adoption, in part, seems to be driven by an expectation from corporate leaders that their in-house counsel must be able to identify and utilize the best available technology in order to drive cost efficiency, while also reducing risk and strengthening effective, defensible litigation positions. For instance, in a 2019 survey of 163 legal professionals conducted by ALM Intelligence and LexisNexis, 92% of attorneys surveyed planned to increase their use of legal analytics in the upcoming 12 months. The reasoning behind that expected increase was two-fold, with lawyers indicating that it was driven both by competitive pressure to win cases (57%) and by client expectation (56%).

Given that the above survey took place right before the COVID-19 pandemic hit, it stands to reason that the 92% of attorneys who expected to increase their use of analytics tools in 2020 may be an even higher share now. With a divisive election and receding pandemic only recently behind us, and an already unpredictable market, many corporations are tightening budgets and looking to further reduce unnecessary spend.
Conceptual analytics and CAL are easy (yes, really) and effective ways to manage ballooning datasets and significantly reduce discovery, litigation, and internal investigation costs.

With that in mind, we would like to help create a better relationship between corporate attorneys and advanced technology with the following two-step approach, which we will outline in a series of two articles. This first installment will help demystify the language technology providers tend to use around AI and analytics technology so that in-house teams feel more comfortable with adoption. In our second article, we will provide examples of some great use cases where corporate legal teams can easily leverage technology to improve workflows. Together, we hope this approach can help in-house legal teams adopt technology that drives efficiency, lowers cost, and improves the quality of their work.

Demystifying AI Jargon

If you have ever discussed AI or analytics technology with a technology provider, you are probably more than aware that tech folks tend to forget that the majority of their clients don’t live in the world of developing and evaluating new technology day in and day out. Thus, they may use terms that are confusing to their legal counterparts (and sometimes use terms that don’t match what the technology is capable of in the legal world). For this reason, it is helpful to level-set with some common terminology and definitions, so that in-house attorneys are prepared to have better, more practical discussions with technology providers.

Analytics Technology: Within the eDiscovery and compliance space, analytics technology is the ability of a machine to recognize patterns, structures, concepts, terminology, and/or the people interacting within data, and then present that analysis visually so that attorneys have a better overview of their data. As with AI, not all analytics tools have the same capabilities.
Vendors may label everything from email-threading identification to more advanced technology that can identify complex concepts and human sentiment as “analytics” tools. Within these articles, when we reference this term, we are referring to the more advanced technology that can analyze not only the text within data but also the metadata and any previous coding applied by subject matter experts. This is an important distinction, because this type of technology can greatly improve the accuracy of the analysis compared to older tools. For example, analytics technology that can analyze metadata as well as text is much better at identifying concepts like attorney-client privilege, because it can analyze not only the language being used but also who is using that language and the circumstances in which they use it.

Artificial Intelligence (AI): Probably the most broadly recognized term due to its prevalence outside the eDiscovery space, AI is technically defined as the ability of a computer to complete tasks that would usually require human intelligence. Within the eDiscovery and compliance world, vendors often use the term broadly to refer to a variety of technologies that can perform tasks that previously would have required entirely human review. It is important to remember, though, that the term AI can refer to a broad range of technology with very different capabilities. “AI” in the legal world is currently used as a generalized term, and legal consumers of such technologies should press for specifics – not all “AI” is the same, or, in several cases, even AI at all.

Machine Learning: Machine learning is a category of algorithms used in AI that can analyze statistics and find patterns in large volumes of data. The algorithms improve with experience – meaning that as documents are coded in a consistent fashion by humans, the algorithms should become better and more accurate at identifying specific data types.
Note here that there is a common misunderstanding that machine learning requires large amounts of data from which to learn. That is not necessarily true – all that is required for machine learning to work well is that the input it learns from (i.e., document coding for eDiscovery purposes) is consistent and accurate.

Natural Language Processing (NLP): NLP is a subset of AI that uses machine learning to process and analyze the natural language humans use within large amounts of data. The result is technology that can “understand” the contents of documents, including the context in which language is used within them. Within eDiscovery, NLP is used within more advanced forms of analytics technology to help identify specific content or sentiments within large datasets.

For example, NLP can be used to more accurately identify sensitive information, like personally identifiable information (PII), within datasets. NLP is better at this task than older AI technology, which relied on “regular expressions” (sequences of characters that define a search pattern) to identify information. When a regular expression (or regex) is used by an algorithm to find, for example, VISA account numbers, it will identify the correct number pattern (i.e., any number that starts with the digit 4 and has 16 digits) within a dataset, but it will be unable to differentiate other numbers that follow the same pattern (for example, employee identification numbers). Thus, the results returned by legacy technology using regex may be overbroad and include false positives. NLP can return more accurate results for that same task because it identifies not only the number pattern but also analyzes the language used around the pattern.
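The regex limitation described above is easy to demonstrate. Below is a minimal sketch; the sample text, card number, and “employee ID” are all hypothetical stand-ins chosen only to share the same digit pattern:

```python
import re

# Hypothetical sample text: one card number and one employee ID that
# happen to share the same shape (16 digits, starting with 4).
text = (
    "Customer card: 4111111111111111 was charged. "
    "Employee ID 4222333344445555 approved the refund."
)

# A naive regex for the VISA-style pattern described above.
pattern = re.compile(r"\b4\d{15}\b")
matches = pattern.findall(text)

# The regex matches BOTH numbers -- it cannot tell a card number from an
# employee ID of the same shape, so the second match is a false positive.
print(matches)
```

An NLP-based approach would additionally weigh the surrounding words (“card,” “charged” versus “Employee ID,” “approved”), which is exactly the contextual signal a bare pattern match throws away.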
In this way, NLP will understand the context in which VISA account numbers are communicated within a dataset compared to how employee identification numbers are communicated, and it will return only the VISA numbers.

Predictive Coding (also referred to as Technology-Assisted Review, or TAR): Predictive coding is not the same as conceptual analytics. Also, predictive coding is a bit of a misnomer, as the tools don’t predict or code anything – a human reviewer is very much involved. Simply put, it refers to a form of machine learning wherein humans review documents and make binary coding calls: what is responsive and what is non-responsive. This is similar in concept to selecting thumbs up or down in Pandora to teach the app which songs you like and don’t like. After some human coding and calibration between the human and the tool, the technology uses the human’s coding selections to score how the remaining documents should be coded, enabling the human to review the highest-scored documents first.

In the most current versions of predictive coding, the technology continually improves and refreshes as the human reviews, which reduces or eliminates the need for surgical precision on coding at the start (a concern in earlier versions of predictive coding, and the reason providers and parties spent considerable time worrying about “seed sets”). This self-improving prioritization of large document sets based on high-scored documents is usually a more efficient and organized way to review documents.

Because of this evolution in predictive coding, it is often referred to in a host of different ways, such as TAR 1.0 (which requires “seed sets” to learn from at the start) and TAR 2.0 (which is able to continually refresh as the human codes – and is thus also referred to as Continuous Active Learning, or CAL).
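The score-review-rescore loop described above can be sketched in miniature. This is a deliberately naive toy, not how real TAR products work: the documents, the coding calls, and the word-counting “scorer” are all hypothetical stand-ins for a genuine text classifier.

```python
def train(coded):
    """Count how often each word appears in responsive vs. non-responsive docs."""
    weights = {}
    for text, responsive in coded:
        for word in set(text.lower().split()):
            pos, neg = weights.get(word, (0, 0))
            weights[word] = (pos + responsive, neg + (not responsive))
    return weights

def score(text, weights):
    """Score a document: net count of responsive-leaning words."""
    s = 0
    for word in set(text.lower().split()):
        pos, neg = weights.get(word, (0, 0))
        s += pos - neg
    return s

documents = [
    "merger pricing discussion with counsel",
    "lunch menu for friday",
    "pricing strategy memo",
    "holiday party invite",
]

# The human reviewer makes binary coding calls (1 = responsive).
coded = [("merger pricing discussion with counsel", 1),
         ("lunch menu for friday", 0)]

# The tool rescores the uncoded documents after each round of coding,
# so the reviewer always sees the highest-scored documents first.
weights = train(coded)
remaining = [d for d in documents if d not in [c[0] for c in coded]]
queue = sorted(remaining, key=lambda d: score(d, weights), reverse=True)
print(queue[0])  # the pricing memo outranks the party invite
```

In a CAL (TAR 2.0) workflow, the `train` step reruns continuously as the reviewer codes each newly surfaced document, which is why a carefully curated seed set matters far less than in TAR 1.0.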
Some providers continue to use the old terminology, or explain their advancements by walking through the differences between TAR 1.0, TAR 2.0, and so on. But, speaking plainly, providers and legal teams today should really only be concerned with the latest version of TAR, which utilizes CAL and significantly reduces or entirely eliminates the earlier need for surgical precision in coding an initial “seed set.” We hope to illustrate this point with our examples in the next installment. In short, walking through the technological evolution of predictive coding and all of the associated terminology can cause unnecessary intimidation, and can cause confusion between providers, parties, and the court.

The key takeaway from these definitions is that even though all of the technology described above may technically fall into the “AI” bucket, there is an important distinction between predictive coding/TAR technology and advanced analytics technology that uses AI and NLP. Predictive coding/TAR is a much more technologically limited method of ranking documents based on binary human decisions, while advanced analytics technology can analyze the context of human language used within documents to accurately identify a wide variety of concepts and sentiments within a dataset. Both tools still require a good amount of interaction with human reviewers, and the two are not mutually exclusive.
In fact, on many investigations in particular, it is often very efficient to employ both conceptual analytics and TAR simultaneously in a review.

Please stay tuned for our next installment in this series, “Analytics and Predictive Coding Technology for Corporate Attorneys: Six Use Cases,” where we will outline six specific ways corporate legal teams can put this type of technology to work in the eDiscovery and compliance space to improve cost, outcomes, and efficiency.
AI and Analytics
Blog

Biden Administration Executive Order on Promoting Competition: What Does it Mean and How to Prepare
On July 9, 2021, President Biden signed a sweeping new Executive Order (“the Order”) with the stated goal of increasing competition in American markets. Like the recently issued Executive Order on Improving the Nation’s Cybersecurity, the Executive Order on Promoting Competition in the American Economy is meant to establish a “whole-of-government” approach to tackle an issue that is typically handled by numerous federal agencies. As such, the Order includes 72 initiatives touching more than a dozen federal agencies and numerous industries, including healthcare, transportation, agriculture, internet service providers, technology, beer and wine manufacturing, and banking and consumer finance.

Notably, the Order calls on the Department of Justice (DOJ) and Federal Trade Commission (FTC) to “vigorously” enforce antitrust laws and “reaffirms” the government’s authority to challenge past transactions that may have violated antitrust laws and regulations (even if they were not challenged by previous Administrations). The remainder of this blog will broadly outline the contents of the Order and conclude with a brief summary of possible ramifications for organizations undergoing merger and acquisition activity (as well as the law firms that counsel them) and how to prepare for them.

What Is in the Executive Order on Promoting Competition in the American Economy

Section 1: Policy
This section broadly outlines the benefits of “robust competition” to America’s economy and asserts the U.S. policy of promoting “competition and innovation” as an answer to the rise of foreign monopolies and cartels. This section also announces the Administration’s policy of supporting “aggressive legislative reforms” to lower prescription drug prices and supports the enactment of a public health insurance option.

Section 2: The Statutory Basis of a Whole-of-Government Competition Policy
This section outlines the antitrust laws that form the Administration’s whole-of-government competition policy, including the Sherman Act, the Clayton Act, and the Federal Trade Commission Act, as well as fair competition and anti-monopolization laws, including the Packers and Stockyards Act, the Federal Alcohol Administration Act, the Bank Merger Act, and others.

Section 3: Agency Cooperation in Oversight, Investigation, and Remedies
This section outlines the Administration’s policy of cooperation between agencies on competition issues, stating that when there is overlapping jurisdiction over anticompetitive conduct and mergers, the involved agencies should “endeavor to cooperate fully in the exercise of their oversight authority” to benefit from the agencies’ respective expertise and to improve Government efficiency.

Section 4: The White House Competition Council
This section establishes a White House Competition Council to “coordinate, promote, and advance” government efforts to address monopolies and unfair competition.
The section also mandates that the Council work across agencies to provide a coordinated response to monopolization and unfair competition, and it outlines the Council’s makeup and meeting cadence.

Section 5: Further Agency Responsibilities
This section mandates that the heads of all agencies “consider using their authorities” to further the competition policies outlined within the Order, and it “encourages” the relevant agency heads (including the Attorney General, the Chair of the Federal Trade Commission (FTC), the Secretary of Commerce, and others) to enforce existing antitrust laws “vigorously,” as well as to review and consider revisions to other laws and powers, including encouragement to:

- Enforce the Clayton Act and other antitrust laws “fairly and vigorously.”
- Review merger guidelines to consider whether they should be revised.
- Revise positions on the intersection of intellectual property and antitrust laws.
- Review current practices and adopt a plan for the revitalization of merger oversight under the Bank Merger Act and the Bank Holding Company Act of 1956.
- Consider whether to revise the Antitrust Guidance for Human Resource Professionals of October 2016.
- Consider curtailing the unfair use of non-compete clauses that may unfairly limit worker mobility.
- Consider rulemaking in other areas, such as: unfair data collection and surveillance practices that may damage competition, consumer autonomy, and consumer privacy; unfair anticompetitive restrictions on third-party repair or self-repair of items (aimed at restrictions that prevent farmers from repairing their own equipment); unfair anticompetitive conduct or agreements in the prescription drug industries; unfair competition in major Internet marketplaces; unfair occupational licensing restrictions; unfair exclusionary practices in the brokerage or listing of real estate; and any other unfair industry-specific practices that substantially inhibit competition.

The section also calls upon the Secretary of Agriculture to address the unfair treatment of farmers and improve competition in the markets for farm products, and the Secretary of the Treasury to assess the conditions of competition in the American markets for beer, wine, and spirits (including improving the market for smaller, independent operations).

Notably, this section also calls for the Chair of the Federal Communications Commission to consider adopting “Net Neutrality” rules and other avenues to promote competition and lower prices across the telecommunications ecosystem.

Finally, the section calls for the Secretary of Transportation to protect consumers and improve competition in the aviation industry, including enhancing consumer access to airline flight information, providing consumers with more flight options at better prices, promoting rulemaking to require airlines to refund baggage fees, and addressing the failure of airlines to provide timely refunds for flight cancellations resulting from the COVID-19 pandemic.

Conclusion

As a whole, the result of this Order is that organizations undergoing merger and acquisition activity can expect to face more scrutiny from the government – and law firms that counsel those types of transactions can expect that government investigations of those activities (like HSR Second Requests) will be more in-depth and meticulous. Accordingly, any law firms and organizations preparing for those types of investigations would do well to evaluate their eDiscovery technology now, to ensure that they are using the most up-to-date legal technology and workflows to locate the data requested by the government accurately and efficiently.
Antitrust
Blog

Cybersecurity Defense: Recommendations for Companies Impacted by the Biden Administration Executive Order
As summarized in the first installment of our two-part blog series, President Biden recently issued a sweeping Executive Order aimed at improving the nation’s cybersecurity defense. The Order is a reaction to increased cybersecurity attacks that have severely impacted both the public and private sectors. These recent attacks have evolved to the point that industry solutions have a much more difficult time detecting encryption and file-state changes quickly enough to prevent an actual compromise. The consequence is that new and evolving ransomware and malware attacks now get past even the biggest solution providers and leading scanners in the industry.

Thus, while on its face many of the new requirements within the Order are aimed at federal agencies and government subcontractors, the ultimate goal appears to be to create a more unified national cybersecurity defense across all sectors. In this installment of our blog series, I will outline recommended steps for private sector organizations to prepare for compliance with the Order, as well as general best-practice tips for adopting a more preemptive approach to cybersecurity.

1. Conduct a Third-Party Assessment

First and foremost, organizations must understand their current cybersecurity posture. Given the severity and volume of recent cyberattacks, third-party in-depth or red-team assessments should be conducted, covering not only the organization’s IT assets but also its solution providers, vendors, and suppliers. Red teaming is the process of providing a fact-driven adversary perspective as an input to solving or addressing a problem.
In the cybersecurity space, it has become a best practice wherein the cyber resilience of an organization is challenged from an adversary’s or threat actor’s perspective.[1] Red-team testing is very useful for testing organizational policies, procedures, and reactions against defined, intended standards.

A third-party assessment must include a comprehensive remote network scan and a comprehensive internal scan, with internal access provided or gained, intended to detect and expose potential vulnerabilities, exploits, and attack vectors for red-team testing. Internal comprehensive discovery includes scanning and running tools with the intent to detect deeper levels of vulnerabilities and areas of compromise. Physical intrusion tests during red-team testing should be conducted on the facility, networks, and systems to test readiness against defined policies and procedures.

The assessment will evaluate the organization’s ability to preserve the confidentiality, integrity, and availability of the information it maintains and uses, and it will test the security controls and procedures used to secure sensitive data.

2. Integrate Solution Providers and IT Service Companies into Plans to Address the Above Executive Order Steps

To accurately assess your organization’s risk, you first have to know who your vendors, partners, and suppliers are with whom you share critical data. Many organizations rely on a complex and interconnected supply chain to provide solutions or share data. As noted above, this is exactly why the Order will eventually broadly impact the private sector.
While on its face the Order only seems to impact federal government and subcontractor entities, those entities’ data infrastructures (like most today) are interconnected environments composed of many different organizations, with complex layers of outsourcing partners, diverse distribution routes, and various technologies used to provide products and services – all of which will have to live up to the Order’s cybersecurity standards. In short, the federal government is recognizing that its vendors’, partners’, and suppliers’ cybersecurity vulnerabilities are also its own. The sooner all organizations realize this, the better.

According to recent NIST guidance, “Managing cyber supply chain risk requires ensuring the integrity, security, quality, and resilience of the supply chain and its products and services.” NIST recommends focusing on foundational practices, enterprise-wide practices, risk management processes, and critical systems: “Cost-effective supply chain risk mitigation requires organizations to identify systems and components that are most vulnerable and will cause the largest organizational impact if compromised.”[2]

In the recent attacks, hackers inserted malicious code into Orion software, and around 18,000 SolarWinds customers, including government and corporate entities, installed the tainted update onto their systems. The compromised update has had a sweeping impact, the scale of which keeps growing as new information emerges. Locking down your networks, systems, and data is just the beginning! Inquiring how your supply chain implements a Zero Trust strategy and secures its environments, as well as your shared data, is vitally important. A cyber-weak or compromised company can lead to exfiltration of data, which a bad actor can exploit or use to compromise your organization.

3. Develop a Plan to Address the Most Critical Vulnerabilities and Threats Right Away

Third-party assessors should deliver a comprehensive report of their findings that includes descriptions of the vulnerabilities and risks found in the environment, along with recommendations to properly secure data center assets, which will help companies stay ahead of the Order’s mandates. The reports typically include specific data obtained from the network, any information regarding exploitation of exposures, and the attempts made to gain access to sensitive data.

A superior assessment report will contain documented, detailed findings and will convey the assessor’s opinion of how best to remedy vulnerabilities. These will be prioritized for immediate action depending on the level of risk. Risks are often classified as critical, high, medium, or low risk to the environment, and a remediation plan can be developed based on these prioritizations.

4. Develop a Zero Trust Strategy

As outlined in Section 3 of the Order, a Zero Trust strategy is critical to addressing the above steps, and it must include establishing policy, training the organization, and assigning accountability for updating the policy. As defined in the National Security Agency (NSA)’s “Guidance on the Zero Trust Security Model”: “The Zero Trust model eliminates trust in any one element, node, or service by assuming that a breach is inevitable or has already occurred. The data-centric security model constantly limits access while also looking for anomalous or malicious activity.”[3]

Properly implemented, Zero Trust is not a set of access controls to be “checked,” but rather an assessment and implementation of security solutions that provide proper network and hardware segmentation as well as platform micro-segmentation, implemented at all layers of the OSI (Open Systems Interconnection) model.
A good position to take is that Zero Trust should be implemented using a design in which every solution assumes it exists in a hostile environment, operating as if the other layers in a company’s protections have already been compromised. This isolates the different layers and improves protection by applying Zero Trust principles throughout the environment – from perimeters to VPNs, and from remote access to web servers and applications. For a true Zero Trust-enabled environment, focus on cybersecurity solution providers that qualify as “Advanced” in the NSA’s Zero Trust maturity model, as defined in the NSA’s cybersecurity paper “Embracing a Zero Trust Security Model.”[4] This means that these solution providers will be able to deploy advanced protections and controls with robust analytics and orchestration.

5. Evaluate Solutions that Preemptively Protect Through Defense-in-Depth

To further modernize your organization’s cybersecurity protection, consider full integration and/or replacement of some existing cybersecurity systems with ones that understand the complete end-to-end threats across the network. How can an organization implement confidentiality and integrity for breach prevention? Leverage automated, preemptive cybersecurity solutions, as they possess the greatest potential for thwarting attacks and rapidly identifying security breaches, reducing time and cost. Use a Defense-in-Depth blueprint for cybersecurity to establish outer and inner perimeters, enable a Zero Trust environment, establish proper security boundaries, provide confidentiality for proper access into the data center, and support capabilities that prevent data exfiltration inside sensitive networks.
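One common building block of signature-independent detection of unauthorized encryption (a capability this series calls for) is entropy analysis: encrypted or ransomware-written bytes look statistically close to random, while ordinary documents do not. The sketch below is a toy illustration, not any vendor's implementation; the 7.5 bits-per-byte threshold is a hypothetical cutoff, and real products combine many signals (compressed archives, for instance, are also high-entropy and would trip a naive check like this).

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: approaches 8.0 for random/encrypted data, lower for text."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

# Hypothetical threshold: flag content whose entropy approaches random data.
THRESHOLD = 7.5

def looks_encrypted(data: bytes) -> bool:
    return shannon_entropy(data) > THRESHOLD

plain = b"quarterly report draft " * 200   # ordinary document text
random_like = os.urandom(4096)             # stand-in for encrypted bytes

print(looks_encrypted(plain))        # False
print(looks_encrypted(random_like))  # True (with very high probability)
```

Note that this inspects file contents directly, so it does not depend on API calls, file extensions, or malware signatures, which is the property the text above asks of modern scanning solutions.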
Implement a solution to continuously scan for and detect ransomware, malware, and unauthorized encryption that does NOT rely on API calls, file extensions, or signatures for data integrity. Solutions should have built-in protections leveraging multiple automated defense techniques, deep zero-day intelligence, honeypot sensors, and state technologies working together to preemptively protect the environment.

Conclusion

As noted above, Cyemptive recommends the above steps in order to take a preemptive, holistic approach to cybersecurity defense. Cyemptive recommends initiating this process as soon as possible – not only to comply with potential government mandates brought about by President Biden’s Executive Order, but also to ensure that organizations are better prepared for the increased cybersecurity threat activity we are seeing throughout the private sector.

[1] “Red Teaming for Cybersecurity.” ISACA Journal. October 18, 2018. https://www.isaca.org/resources/isaca-journal/issues/2018/volume-5/red-teaming-for-cybersecurity#1
[2] “NIST Cybersecurity & Privacy Program: Cyber Supply Chain Risk Management (C-SCRM).” May 2021. https://csrc.nist.gov/CSRC/media/Projects/cyber-supply-chain-risk-management/documents/C-SCRM_Fact_Sheet_Draft_May_10.pdf
[3] “NSA Issues Guidance on Zero Trust Security Model.” NSA. February 25, 2021. https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/2515176/nsa-issues-guidance-on-zero-trust-security-model/
[4] “Embracing a Zero Trust Security Model.” NSA Cybersecurity Information. February 2021. https://media.defense.gov/2021/Feb/25/2002588479/-1/-1/0/CSI_EMBRACING_ZT_SECURITY_MODEL_UOO115131-21.PDF
Data Privacy
Information Governance
Blog

Cybersecurity Defense: Biden Administration Executive Order a Great Start Towards a More Robust National Framework
On May 12, President Biden issued a landmark Executive Order (“the Order”) aimed at improving the country’s cybersecurity defense. This Order is an attempt to create a “whole of government” response to the increasingly frequent cybersecurity incidents that have wreaked havoc in the United States in recent months, affecting everything from energy supplies to healthcare systems to IT infrastructure. In addition to becoming more frequent, recent cyberattacks have also become increasingly sophisticated – and even somewhat professional. In response to these attacks, the Biden administration seeks to build a national security framework that aligns the Federal government with private sector businesses in order to “modernize our cyber defenses and enhance the nation’s ability to quickly and effectively respond to significant cybersecurity incidents.” Prior to this Order, there was no unified system to report or respond to cybersecurity threats and breach incidents; instead, there is a patchwork of state legislation and separate federal agency protocols, all with differing reporting, notification, and response requirements.

In the first of this two-part blog series, I will broadly outline the details of the Order and what it will mean for private sector companies in the coming years. In the second installment, Rob Pike (CEO and Founder of Cyemptive Technologies) will provide guidance on how to set up your organization for compliance with the Order, as well as general best-practice tips for adopting a preemptive cybersecurity approach.
What Is in President Biden’s Executive Order on Improving the Nation’s Cybersecurity

There are nine main sections to the Order, which are summarized below.

Section 1: Policy
This section outlines the overall goal of the Order – namely that, with this Order, the Federal government intends to make “bold changes and significant investments in order to defend the vital institutions that underpin the American way of life.” To do so, the Order states that the government must improve its efforts to “identify, deter, protect against, detect, and respond to” cybersecurity attacks. While this may sound like a purely governmental task, the Order specifically states that this defense will require partnership with the private sector.

Section 2: Removing Barriers to Sharing Threat Information
As noted above, prior to this Order there was no unified system for sharing information about threats and data breaches. In fact, separate agency procurement contract terms may actually prevent private companies from sharing that type of information with federal agencies, including the FBI. This section of the Order responds to those challenges by requiring the government to update federal contract language with IT service providers (including cloud service providers) to require the collection and sharing of threat information with the appropriate government agencies. While the Order currently only speaks to federal subcontractors, it is expected that this information-sharing requirement will have a trickle-down effect across the private sector, with purely private companies falling in line to share threat information once federal subcontractors are required to do so.
Section 3: Modernizing Federal Government Cybersecurity
This section calls for the federal government to adopt security best practices – specifically, adopting Zero Trust Architecture and pushing a move to secure cloud services, including “Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS).” It requires each government agency to update its plans to prioritize the adoption and use of cloud technology and to develop a plan to implement Zero Trust Architecture, in part by incorporating the migration steps outlined by the National Institute of Standards and Technology (NIST).

Section 4: Enhancing Software Supply Chain Security
This section deals with increasing the cybersecurity standards of software sold to the government. It specifically calls out the fact that the development of commercial software “often lacks transparency, sufficient focus on the ability of the software to resist attack, and adequate controls to prevent tampering by malicious actors,” and it therefore calls for “more rigorous and predictable mechanisms for ensuring that products function securely.” To that end, this section directs NIST to issue new security guidelines for software used by the government. These new guidelines will include encryption requirements, multi-factor and risk-based authentication requirements, vulnerability detection and disclosure programs, and trust relationship audits, among others.

Section 5: Establishing a Cyber Safety Review Board
This section establishes a federal Cyber Safety Review Board, which will convene following significant cyber incidents to provide recommendations to the Secretary of Homeland Security for improving cybersecurity and incident response practices.
It will be made up of federal officials, as well as representatives from private sector entities.

Section 6: Standardizing the Federal Government’s Playbook for Responding to Cybersecurity Vulnerabilities and Incidents

This section again speaks to the patchwork of differing vulnerability and incident response procedures that currently exists across multiple federal agencies. The goal here is to create a standard set of operational procedures (or a playbook) for cybersecurity vulnerability and incident response activity. The playbook will have to incorporate all appropriate NIST standards, be used by all Federal Civilian Executive Branch (FCEB) agencies, and spell out all phases of incident response.

Sections 7 and 8: Improving Detection, Investigation, and Remediation of Cybersecurity Vulnerabilities and Incidents on Federal Government Networks

These two sections focus on creating a unified approach to the detection, investigation, and remediation of cybersecurity vulnerabilities and incidents. Section 7 focuses on improving detection – mandating that all FCEB agencies deploy an “Endpoint Detection and Response (EDR)” initiative to support proactive detection of cybersecurity incidents, and establishing procedures for threat hunting and detection, as well as inter-agency information sharing around threat detection.
Section 8 is focused on improving the government’s investigative and remediation capabilities – namely, by establishing requirements for agencies and their IT service providers to collect, maintain, and share specified information from Federal Information System network logs.

Section 9: National Security Systems

This section requires the Secretary of Defense to adopt National Security System requirements that are at least equivalent to the requirements spelled out in the preceding sections of the Order.

Who Will This Impact?

As noted above, while the Executive Order is aimed at shoring up the federal government’s cybersecurity detection and response systems, its impacts will be felt throughout much of the private sector. That isn’t a bad thing! A patchwork cybersecurity system is clearly not the best way to respond to the increasingly sophisticated cybersecurity incidents currently threatening both the United States government and the private sector. Responding to these threats requires a robust, unified national cybersecurity system, which in turn requires updated and unified cybersecurity standards across both government agencies and private sector companies. This Executive Order is a great stepping stone toward that goal.

As far as timing for private sector impacts: the first impacts will be felt by software companies and other organizations that directly contract with the federal government, as the Order spells out direct requirements and implications for those entities. Many of those requirements come into play within 60 days to a year after the date of the Order, so those organizations may face a quick turnaround to comply with any new standards. Impacts are then expected to trickle down to other private sector organizations: as government subcontractors update policies and systems to comply with the Order, they will in turn require the companies they do business with to comply with the new cybersecurity standards.
In this way, the Order creates an opportunity for the federal government to establish a cybersecurity floor that most companies in the US will eventually have to meet.

Conclusion

Detecting and defending against cybersecurity threats is an increasingly difficult worldwide challenge – a challenge to which, currently, no perfect defense exists. However, with this Order, the United States is taking a step in the right direction by creating a more unified cybersecurity standard and network that will encourage better detection, investigation, and mitigation.

Check out the second installment of this blog series, where Rob Pike, CEO and Founder of Cyemptive Technologies, provides guidance on how to set up your organization for compliance with the Executive Order, as well as general best-practice tips for adopting a preemptive cybersecurity approach. If you would like to discuss this topic further, please reach out to me at erubenstein@lighthouseglobal.com.

How to Get Started with TAR in eDiscovery
In a recent post, we discussed that requesting parties often demand more transparency with a Technology Assisted Review (TAR) process than they do with a process involving keyword search and manual review. So, how do you get started using (and understanding) TAR without having to defend it? A fairly simple approach: start with some use cases that don’t require you to defend your use of TAR to outside parties.

Getting Comfortable with the TAR Workflow

It’s difficult to use TAR for the first time in a case for which you have production deadlines and demands from requesting parties. One way to become comfortable with the TAR workflow is to conduct it on a case you’ve already completed, using the same document set with which you worked in that prior case. Doing so can accomplish two goals:

You develop a better understanding of how the TAR algorithm learns to identify potentially responsive documents: Based on documents that you classify as responsive (or non-responsive), you will see the algorithm begin to rank other documents in the collection as likely to be responsive as well. Assuming your review team was accurate in classifying responsive documents manually, you will see how those same documents are identified as likely to be responsive by the algorithm, which engenders confidence in the algorithm’s ability to accurately classify documents.

You learn how the TAR algorithm may identify potentially responsive documents that were missed by the review team: Human reviewers are only human, and they sometimes misclassify documents. In fact, many studies suggest they misclassify them regularly. Assuming that the TAR algorithm is properly trained, it will often classify documents (both responsive and non-responsive) more accurately than the human reviewers, enabling you to see how the TAR algorithm can catch mistakes that your human reviewers have made.

Other Use Cases for TAR

Even if you don’t have the time to use TAR on a case you’ve already completed, you can use TAR for other use cases that don’t require a level of transparency with opposing counsel, such as:

Internal Investigations: When an internal investigation dictates review of a document set that is conducive to using TAR, this is a terrific opportunity to conduct and refine your TAR process without outside review or transparency requirements to uphold.

Review Data Produced to You: Turnabout is fair play, right? There is no reason you can’t use TAR to save costs reviewing the documents produced to you while determining whether the producing party engaged in a document dump.

Prioritizing Your Document Set for Review: Even if you plan to review the entire set of potentially responsive documents, using TAR can help you prioritize the set for review, pushing documents less likely to be responsive to the end of the queue. This can be useful in rolling production scenarios, or if you think that an eventual settlement could obviate the need to review the entire collection.

Combining TAR technology with efficient workflows that maximize the effectiveness of the technology takes time and expertise. Working with experts who understand how to get the most out of the TAR algorithm is important. But it can still be daunting to use TAR for the first time in a case where you must meet a stringent level of defensibility and transparency with opposing counsel. Applying TAR first to use cases where that level of transparency is not required enables your company to get to that efficient and effective workflow before you have to prove its efficacy to an outside party.
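To make the "learn from coded documents, then rank the rest" idea above concrete, here is a minimal, hypothetical sketch in pure Python. It is not any vendor's actual TAR implementation (commercial platforms use far more sophisticated models and sampling protocols); it simply illustrates the core mechanic with a toy naive Bayes scorer trained on a few reviewer-coded example documents, all of which are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    """Crude tokenizer: lowercase, split on whitespace, keep alphabetic words."""
    return [w for w in text.lower().split() if w.isalpha()]

def train(labeled_docs):
    """labeled_docs: list of (text, is_responsive) pairs coded by reviewers.
    Returns per-class word counts and totals for a naive Bayes scorer."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in labeled_docs:
        for word in tokenize(text):
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(doc, counts, totals):
    """Log-odds that `doc` is responsive, with add-one smoothing so unseen
    words don't zero out the estimate."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = 0.0
    for word in tokenize(doc):
        p_resp = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_non = (counts[False][word] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_resp / p_non)
    return log_odds

# Hypothetical reviewer-coded seed set: True = responsive, False = not
labeled = [
    ("merger agreement draft attached for review", True),
    ("pricing terms for the acquisition deal", True),
    ("lunch menu for the office party", False),
    ("parking garage closed this weekend", False),
]
counts, totals = train(labeled)

# Rank the unreviewed collection: documents most likely responsive first,
# mirroring the prioritized-review workflow described above
collection = [
    "revised merger pricing terms attached",
    "office party moved to friday",
]
ranked = sorted(collection, key=lambda d: score(d, counts, totals), reverse=True)
print(ranked[0])  # the merger document scores as more likely responsive
```

The point of the sketch is the workflow, not the model: as reviewers code more documents, the scorer's rankings shift, which is exactly the feedback loop you can observe risk-free by re-running TAR over an already-completed case.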