Insights

Gain new perspectives on the key issues impacting eDiscovery, information governance, modern data, AI, and more. From the latest in innovative technology and AI to helpful tips and best practices, we’re here to shed a little light on what better looks like.


Featured Insights

August 28, 2020
Blog
cloud-security, blog, data-privacy

The U.S. Privacy Shield Is No Longer Valid – What Does That Mean for Companies That Transfer Data from the EU into the U.S.?

It feels fitting that the summer of 2020 would bring us Schrems II. This surprising Court of Justice of the European Union (CJEU) decision wreaked havoc in late July by invalidating the EU-U.S. Privacy Shield and calling into question other mechanisms for transferring the personal data of EU citizens into the United States (and beyond) under the GDPR. Let's take a deeper dive into that decision and what it means for companies that need to transfer EU citizens' data into the U.S.

Schrems History

Schrems II is the second decision by the CJEU based on privacy complaints made against Facebook by Austrian privacy activist Max Schrems. Both cases stem from privacy concerns related to the U.S. National Security Agency (NSA)'s ability to access the personal data of EU citizens, famously disclosed by Edward Snowden in 2013.

In the first Schrems decision in 2015, the CJEU invalidated the U.S.-EU Safe Harbor Framework (the predecessor to the EU-U.S. Privacy Shield) as a means to transfer personal data from the EU into the U.S., finding that the protections afforded by the Safe Harbor framework did not meet fundamental privacy rights guaranteed within the EU to EU citizens.

In the aftermath of the first Schrems decision, the U.S. Department of Commerce and the EU Commission collaborated to implement the EU-U.S. Privacy Shield as a replacement for the Safe Harbor Framework, again allowing for a broader transfer mechanism of personal data into the U.S. compared to the alternatives (namely, "standard contractual clauses" (SCCs) and "binding corporate rules" (BCRs) – more on those below). Since its implementation in 2016, over 5,000 organizations have met the requirements administered by the International Trade Administration to join the Privacy Shield. Meeting those requirements can mean a large investment for organizations in overhauling their data privacy practices.

That brings us to Schrems II, wherein Schrems brought a second complaint against Facebook, this time challenging the validity of SCCs as a mechanism to transfer personal data into the U.S. In Schrems II, he argued that the same privacy concerns related to the NSA's ability to access EU citizens' personal data under the Safe Harbor framework also applied to personal data transferred via an SCC. It should be noted here that around the same time, European privacy advocates also filed a challenge to the new EU-U.S. Privacy Shield with the European Court.

Schrems II CJEU Decision

In the Schrems II ruling in July, the CJEU ultimately decided to address both the EU-U.S. Privacy Shield and SCC issues. The Court upheld the validity of SCCs as a means to transfer personal data from the EU into the U.S. However, rather than carte blanche approval, the Court laid out obligations for both parties of an SCC and for data protection supervisory authorities within the EU. Those obligations include:

- Entities that are transferring personal data of EU citizens into the U.S. must verify "on a case by case basis" that the protections afforded by the SCC can be met and that there is an "adequate level of protection" in the U.S. to protect the personal data of EU citizens.
- Entities that are receiving personal data of EU citizens in the U.S. have an obligation to notify the data exporter if they are unable to comply with the SCC for any reason.
- Data protection supervisory authorities within the EU have a mandatory obligation to evaluate not only the terms of the SCCs themselves, but also whether the data protections afforded by the U.S. legal system can meet those terms. If the SCC is found to be insufficient, the supervisory authority has an obligation to stop the transfer.

This decision puts SCCs (and thereby BCRs) on shaky ground throughout the entire world, because the threshold set by the Court applies to any third country, not just the U.S. (see Questions 2 and 6 of the FAQ issued by the European Data Protection Board for more information on these points).

However, the real kicker of Schrems II for U.S.-based companies with an international presence is that the CJEU completely invalidated the EU-U.S. Privacy Shield. The Court found that the U.S. does not provide sufficient protection of EU citizens' personal data because of the access the U.S. government has to EU citizens' personal data and because EU citizens have no means of redress against U.S. authorities should their privacy rights be violated.

What Does Schrems II Mean for Companies That Need to Transfer Personal Data from the EU into the U.S.?

Companies that were relying on the Privacy Shield to transfer EU data into the U.S. should:

- Work to put individual SCCs or BCRs in place to achieve these transfers. There is no grace period during which a company can keep transferring data using the Privacy Shield mechanism, according to the European Data Protection Board (see Question 3 for more information).
- Continue to comply with all current Privacy Shield obligations. While the CJEU decision invalidates the Privacy Shield, it does not relieve current participant organizations of their obligations.
- Watch for further guidance from both the European Data Protection Board and the U.S. Department of Commerce (DOC). DOC and the European Commissioner for Justice issued a joint press release in early August, stating that they have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework that would meet the requirements laid out by the CJEU.

Companies that rely on SCCs or BCRs as a means to transfer personal data should:

- Conduct a risk assessment to determine whether those agreements and the recipient of the data in the U.S. can provide an adequate level of data protection, according to the European Data Protection Board (see Questions 5 and 6 for more information).
- Watch for further guidance from data protection authorities in relevant countries related to SCCs and BCRs in the wake of Schrems II.

The transfer of personal data between countries is the lifeblood of many companies, large and small. While Schrems II has thrown a wrench into the legality of those transfers… all is not lost. Stay tuned for updates from U.S. and EU authorities that may help ease the burden of this unexpected decision by the CJEU.

Resources for More Information

- CJEU Schrems II full decision: http://curia.europa.eu/juris/document/document.jsf?text=&docid=228677&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=16606736
- CJEU press release on its Schrems II decision: https://curia.europa.eu/jcms/upload/docs/application/pdf/2020-07/cp200091en.pdf
- EU-U.S. Privacy Shield Program Schrems II FAQs: https://www.privacyshield.gov/article?id=EU-U-S-Privacy-Shield-Program-Update
- European Data Protection Board Schrems II FAQs: https://edpb.europa.eu/our-work-tools/our-documents/ovrigt/frequently-asked-questions-judgment-court-justice-european-union_en
- U.S. Secretary of Commerce Wilbur Ross statement on the Schrems II ruling and the importance of EU-U.S. data flows: https://www.commerce.gov/news/press-releases/2020/07/us-secretary-commerce-wilbur-ross-statement-schrems-ii-ruling-and
- Joint press statement from the U.S. Secretary of Commerce and the European Commissioner regarding initiated discussions for a new Privacy Shield: https://www.commerce.gov/news/press-releases/2020/08/joint-press-statement-us-secretary-commerce-wilbur-ross-and-european
- UK Information Commissioner's Office updated statement on the Schrems II decision: https://ico.org.uk/make-a-complaint/eu-us-privacy-shield/

To discuss this topic further, please feel free to reach out to me at SMoran@lighthouseglobal.com. Or, take a look at other Worldwide Data Privacy Updates.
July 22, 2020
Blog
chat-and-collaboration-data, information-governance

Three Key Tips to Keep in Mind When Leveraging Corporate G Suite for eDiscovery

In the eDiscovery space, we are always spotting new trends. Our industry has seen text messages, chat message platforms, websites, and various unstructured data sources become increasingly relevant during discovery. Over the past several years, we have started to see another new trend emerge – many of our clients are using Corporate G Suite rather than Office 365.

The use of emerging technologies is part of everyday life for many companies in the space. However, we are beginning to see established biotech, healthcare, manufacturing, and retail companies shift to G Suite, an area that was once almost exclusively dominated by on-prem Microsoft products. This transition introduces some new considerations around managing discovery. In this post, we talk about three impacts that G Suite data has on downstream eDiscovery workflows, and the need to factor these items into your discovery plan.

Recipient Metadata: Gmail renders email header information in a unique format. While the last-in-time email in a given string will have all expected sender and recipient information (From, To, CC, BCC), all other previous messages exchanged in the email string will display only the sender information and will not display the recipient information. This is not a collection, processing, metadata, or threading issue. Rather, this relates to how Gmail stores and exports recipient information. This presents some unique document review challenges, as previous parts of the thread could include recipients that are not visible to the reviewer, and may include attorneys who have sent privileged communications. As a result, it is important to work closely with your project management team to create workflows related to Gmail.

Links: Historically, we have all attached copies of documents (e.g. Word, Excel, and PowerPoint files) to an email during the normal course of business. Due to the emergence of technologies such as SharePoint and Google Drive, we now have the ability to send emails with embedded links that reference documents rather than attaching the document itself. When Gmail is exported from Google Vault, the documents referenced in links embedded throughout email exchanges are not exported. As a result, reviewers will encounter these links, but will be unable to readily view the corresponding document referenced in said link. At present, Google Vault does not allow for the mass search and export of these links. However, you do have the ability to manually pull documents referenced in these links (a simple sketch for inventorying such links appears below). You should be mindful of this issue when drafting your ESI protocol, as opposing parties and regulators may request that your company retrieve these documents.

Exported Load File: Unlike a standard PST export, when you export a mailbox or set of documents from Google Vault, you have the ability to retrieve a corresponding load file that contains metadata captured in G Suite. Sometimes, the date-related metadata extracted during processing will not align with dates exported from G Suite. There are a variety of legitimate reasons for this. You will need to determine if you want to produce the date metadata extracted from the processing platform, date values exported from Vault, or both.

All of the above items are manageable when in-house legal teams, outside counsel, and eDiscovery vendors work together to proactively implement appropriate downstream eDiscovery workflows.
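As a practical illustration of the linked-documents issue above, the short Python sketch below scans exported Gmail messages for Google Drive and Google Docs URLs and builds a simple inventory a case team could use to track which linked files still need to be pulled manually. It is only a rough sketch under assumed inputs (a folder of exported .eml files); the folder name, link pattern, and CSV fields are illustrative rather than part of any Vault or Lighthouse workflow.

```python
import csv
import re
from email import policy
from email.parser import BytesParser
from pathlib import Path

# Rough pattern for Google Drive / Docs style links (illustrative, not exhaustive).
DRIVE_LINK = re.compile(r"https://(?:drive|docs)\.google\.com/[^\s\"'<>)]+", re.IGNORECASE)

def extract_links(eml_path: Path) -> list[dict]:
    """Parse one exported .eml file and return any embedded Drive-style links."""
    with eml_path.open("rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    return [
        {"file": eml_path.name, "message_id": msg.get("Message-ID", ""), "link": link}
        for link in sorted(set(DRIVE_LINK.findall(text)))
    ]

def build_inventory(export_dir: str, out_csv: str) -> None:
    """Walk a folder of exported messages and write a link inventory CSV."""
    rows = []
    for eml in Path(export_dir).rglob("*.eml"):
        rows.extend(extract_links(eml))
    with open(out_csv, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "message_id", "link"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    build_inventory("vault_export", "drive_link_inventory.csv")
```

Each linked document identified this way can then be retrieved manually from Drive and tracked against the commitments made in the ESI protocol.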
If you have experience with G Suite data or thoughts on managing the discovery of G Suite data, please reach out to me at ashier@lighthouseglobal.com.
March 26, 2021
Blog
microsoft, cloud, data-privacy, blog, law-firm, microsoft-365, information-governance, chat-and-collaboration-data

The Impact of Schrems II & Key Considerations for Companies Using M365: The Future

The Schrems II decision invalidated the EU-US Privacy Shield – the umbrella framework under which companies have been transferring data for the last half-decade. In earlier parts of this four-part series, we described the impact of the Schrems decision, discussed how companies should evaluate their risk in using cloud technologies, and took a deeper dive on M365 in light of Schrems II. In sum, if you are a global business that previously relied upon Standard Contractual Clauses (SCCs) to transfer data, there is no clear guidance on what to do currently. It is even murkier in a cloud environment because the location of the data is not as transparent.

Fortunately, there are ways to undertake a risk assessment to determine whether to proceed with any new cloud implementations. In the case of Microsoft products, there is also additional support from Microsoft, with changes in its standard contractual terms and features in the product to mitigate some risks. Even so, many companies are holding off making any changes because the legal landscape is evolving. In this final part, we opine on what the future may hold.

We can expect in the first half of this year that the European Commission will finalise the amended SCCs. We can anticipate that the EDPB will also produce another draft of its recommendations concerning data transfers. We should see plenty of risk assessments taking place. Even for companies adopting a "wait and see" policy in terms of taking significant steps, those companies should still be looking at their data transfers and carrying out risk assessments to make sure they are as well placed as possible for the moment when the draft SCCs and EDPB guidance are finalised.

It would not be a surprise to see Microsoft continue to expand and develop M365 so that it offers yet more services that could be used as technical measures to reduce the risk around data transfers. These changes would strengthen the position of any company doing business between Europe and the US using M365.

We do not have a crystal ball, and like many of you, are eager to see what happens next in this space. We will continue to monitor and keep you up to date with developments and our thoughts. If you have any questions in the meantime, feel free to reach out to us at info@lighthouseglobal.com.
March 22, 2021
Blog
microsoft, cloud, data-privacy, blog, corporate-legal-ops, microsoft-365, information-governance

The Impact of Schrems II & Key Considerations for Companies Using M365: The Cloud Environment

In part one of this series, we described the state of the EU-US Privacy Shield and the mechanisms global companies have relied upon to transfer data from their multiple locations. In short, a recent decision – Schrems II – invalidated the Privacy Shield and shook the foundation of Standard Contractual Clauses (SCCs). Companies are now left asking the question of how to respond. In this post, we will share our view on how to navigate forward.

If your organization is not already highly reliant on cloud software, we recommend weighing the benefits and risks of making that move. As you assess your options, keep in mind that this move may come at a higher cost because of the need to do periodic risk assessments during this uncertain time. For those already in the cloud, the motto here is "do everything that you reasonably can." The position no company wants to find itself in is one of stasis. It is difficult to see such a position being looked upon favourably should regulators start to investigate how companies are responding to Schrems II and the consequences that go along with it.

The touchstone is the EDPB guidance and its six-stage approach to assessing data transfers, which we recommend companies undertake:

1. Identify your data transfers: It is an obvious first step, although in practice this could prove challenging. You'll need to know all the scenarios where your data is moved to a non-European Economic Area (EEA) country (at the time of writing this article, the UK, although out of Europe, is still under the European umbrella until at least the 30th of June).
2. Identify the data transfer mechanisms: You need to decide the grounds upon which the transfer is taking place, such as on the basis of an adequacy decision (this does not apply to the US), SCCs, or a specific derogation (such as consent).
3. Assess the law in the third country: You need to assess "if there is anything in the law or practice of the third country that may impinge on the effectiveness of the appropriate safeguards of the transfer tools you are relying on, in the context of your specific transfer." There is more guidance from the EDPB as to how the evaluation should be carried out (i.e., an independent oversight mechanism should exist). How effective or practical it is to suggest each company has to perform its own thorough legal assessment of the entire range of relevant legislation in any importing country is open to debate and might perhaps be considered further as these recommendations are refined.
4. Adopt supplementary measures if necessary to level up protection of data transfers: The EDPB has published a non-exhaustive list of such measures, which essentially fall into one of three categories - technical (i.e., encryption), contractual (i.e., transparency), and organisational (i.e., involvement of a Data Protection Officer on all transfers). We'll have a look at these measures in more detail below in relation to Microsoft 365.
5. Adopt necessary procedural steps: If you have made changes to deliver the required level of protection, these need to be embedded into your operation (i.e., by means of policy).
6. Re-evaluate at appropriate intervals: This is not a job that can be completed and then left. It needs continual monitoring. There is no specific guideline as to what an appropriate interval is, but quarterly is probably a reasonable approach.

Essentially this boils down to carrying out a risk assessment and taking steps to mitigate the risks that are uncovered (a minimal sketch of a transfer register that supports this work appears below).
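To make the first and last of these steps concrete, here is a minimal sketch of how a team might keep a structured register of its data transfers and flag which entries are due for re-evaluation. The field names, the quarterly interval, and the example entry are illustrative assumptions, not part of the EDPB guidance or any particular product.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DataTransfer:
    """One row in a transfer register (illustrative fields only)."""
    system: str                      # the application or process moving the data
    destination_country: str         # non-EEA country receiving the data
    transfer_mechanism: str          # "SCC", "adequacy decision", "derogation", ...
    data_categories: list[str] = field(default_factory=list)
    supplementary_measures: list[str] = field(default_factory=list)
    last_assessed: date = field(default_factory=date.today)

    def due_for_review(self, interval_days: int = 90) -> bool:
        # Quarterly re-evaluation is suggested above as a reasonable default.
        return date.today() - self.last_assessed > timedelta(days=interval_days)

register = [
    DataTransfer(
        system="Email (example)",
        destination_country="US",
        transfer_mechanism="SCC",
        data_categories=["employee contact details"],
        supplementary_measures=["encryption at rest", "access logging"],
        last_assessed=date(2021, 1, 15),
    ),
]

for transfer in register:
    if transfer.due_for_review():
        print(f"Re-assess transfer to {transfer.destination_country} via {transfer.system}")
```

Even a lightweight record like this makes the periodic risk assessments described above easier to run and to evidence if a regulator asks how you responded.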
If your cloud strategy includes Microsoft 365, the next part of this blog series is a must-read. We will share what Microsoft has done in response to Schrems II as well as some specific configuration options that will influence steps 4 and 5, listed above. Bear in mind that these recommendations could change, and you should watch this space. To continue the discussion or to ask questions, please feel free to reach out to us at info@lighthouseglobal.com.
March 8, 2021
Blog
blog, diversity-equity-and-inclusion,

The Fearless 5: Spotlighting 5 Women Who “Choose to Challenge” Gender Bias in the Legal and Tech Fields

The International Women's Day (IWD) theme for 2021 is "choose to challenge." What a fitting theme for a year when 5.4 million women lost their jobs and over 2.1 million women left the workforce, while the COVID-19 pandemic wreaked havoc and social and racial inequity issues were raised to the forefront of the international consciousness. There was certainly no shortage of challenges for women to choose from this past year.

However, the IWD initiative website elaborates on the "choose to challenge" theme by noting that "a challenged world is an alert world and from challenge comes change." It is that idea that should really resonate with us, as we move past 2020 and into 2021. There have been many examples this year of women who have chosen to rise to the challenges presented in 2020 and, by doing so, have created change for other women.

In keeping with this theme, Lighthouse is featuring five such women in the legal and technology fields. Five women who have risen to the challenges presented this year and created change. Those five women are:

- Laura Ewing-Pearle, eDiscovery Project Manager at Baker Botts LLP
- Jenya Moshkovich, Assistant General Counsel at Genentech
- Gina M. Sansone, Counsel – Litigation Support at Axinn Veltrop & Harkrider LLP
- Amy Sellars, Director, Discovery Center of Excellence at Cardinal Health
- Rebecca Sipowicz, VP and Assistant General Counsel at Ocwen Financial Corporation

We had the honor of interviewing these inspirational women about the "choose to challenge" theme – including the stressors of 2020, how to empower other women, how to leverage innovation to shape a more gender-equal world, and how to address social justice issues. That discussion led to some powerful lessons on how to rise to our current challenges and create meaningful, lasting change.

Empowering Women During a Pandemic

Like any societal change, empowering women starts small, at the individual level. We do not have to wait for some grand opportunity or postpone our effort until we have the time to volunteer – especially during a pandemic when our personal time may feel even more precious and many in-person volunteer opportunities have halted. Empowerment can happen by simply reaching out to the women around us – women we work with, women in our personal lives, and women within our own families.

Laura Ewing-Pearle noted, "One principle to keep in mind is that women are not a monolithic bloc, and that empowering women usually means empowering the individual. A woman with twenty years' experience in the legal world caring for aging parents has a different set of stressors and goals than a woman fresh out of school with a toddler, especially under the new Covid protocols…Seeing past 'woman' to the individual brings us all closer to a more gender-equal world."

In our professional lives, empowering at the individual level can mean reaching out to our women co-workers, teammates, and those that may be on "lower rungs" of the corporate or law firm ladders and offering them a chance to sit down (virtually) for a cup of coffee to talk about their personal goals and challenges. This provides women a space to be heard and seen, first and foremost. It is from these conversations that the seeds of change are often planted.

Rebecca Sipowicz stated, "During the summer I became responsible for co-oversight of our back-office team in India, which is approximately 25% female. For at least the past 10 years, the India team has not reported, directly or indirectly, to a woman.
I have reached out on an individual level to these women to discuss their career goals and how we can work together to achieve them. I also started a monthly “catch-up” where, in this virtual environment, we can meet for 30 minutes to talk about work, life, and the state of the world. Through these conversations, I have not only gotten to know better my colleagues who are located off-shore but also have been able to share life experiences, such as how to take advantage of our remote work world while parenting and managing online school. The continued growth of this group of employees is one of my most important goals for 2021.”These conversations often help us learn not only about individual ambitions and challenges, but also may help us learn about unsung accomplishments and milestones that women often are less apt to tout about themselves within their own organizations and networks. In turn, this can provide an excellent opportunity to be a champion for those women, by calling out successes that would otherwise go unrecognized.Gina Sansone said, “I am a strong believer in being a vocal cheerleader for the women around me who may be less comfortable with promoting their strengths and accomplishments. Women often play multiple roles at work and at home that are not obvious to others and get overlooked because they tend to be less measurable in a traditional sense. These contributions are nonetheless valuable and crucial to an organization and the lives of others, and it’s really important to notice and appreciate them along the way.”Empowering at the individual level also means leading by example during these conversations. The stressors of the pandemic have changed our lives dramatically, both professionally and personally. It is unchartered territory for everyone, and studies are showing that women are shouldering the brunt of the burden at home – often juggling full-time virtual schooling with children while working full-time jobs or dealing with the bulk of household maintenance. Leading by example and being honest about ourselves and our own hurdles during our conversations can empower women to be honest about their own struggles and needs.Jenya Moshkovich stated, “[I empower other women by] being honest about my own challenges and creating and holding space for others to be their true, authentic selves with all the complexities and messiness that can bring. The line between our private lives and work is blurrier now than it has ever been and we have to let go of trying to pretend that we have it all together all the time because no one does, especially these days.”The example we set and the honesty with which we portray ourselves can especially be important for those closest to us – the people within our own families and homes.Rebecca Sipowicz mentioned, “…Having my children home from school for six months enabled my 11-year-old daughter to witness firsthand how involved my job is and to learn how difficult but rewarding it is to juggle parenting and a career. This is a valuable lesson for all children, not just girls.”Leveraging Innovation to Shape a More Gender Equal WorldIf the legal and technology industries have anything in common, it is that women have been historically under-represented in both spaces. 
Fortunately, technological innovation can help close the gender gap in both industries:Rebecca Sipowicz shared, “The pandemic really pushed all of corporate America to take steps that will help to advance gender equality in the workplace – namely the move from in-office to remote work. This unquestionably provides working mothers with more equal access to the workplace. Through the use of video software such as Zoom and Teams, and the ability to work around parenting responsibilities, fewer women should feel the pressure to leave the workforce in order to parent….This flexibility allows women to continue to contribute to the workforce and grow in their careers while caring for their families, without feeling like they are short-changing either side. This should enable women to continue to take on more prominent roles and push women throughout the world to request an equal seat at the table.”Technological innovation can also help people push their organizations and law firms to empower women and support equality. Many companies have seen how innovation and technology can help close the gender gap, and we can work within those systems to further those efforts.Gina Sansone said, “I really don’t know how we can begin to work toward a gender-equal world without leveraging innovation. To me, innovation means creating a dynamic work environment that encourages everyone to move forward, which could mean training and managing members of the same team differently. While consistency is important, recognizing differences, being flexible, and empowering people to think differently and not simply check a box are steps toward shaping a more gender-equal world.”Laura Ewing-Pearle added, “Encouraging and leveraging more on-line training certainly helps anybody juggling family and career to keep pace with new technology and change. I’m grateful that Baker Botts as a firm encourages everyone to create new, innovative ideas to improve business processes and culture.”Jenya Moshkovich mentioned, “I am very fortunate to work for a company that has started innovating in this space years ago and where I am in the position to benefit from these efforts. Since 2007, Genentech has more than doubled the percentage of female officers from 16 to 43% and today over half of our employees and over half of our directors are women. In our legal department, ALL of our VPs are women. Genentech’s efforts to move towards gender equality have included senior leadership commitments, programs to drive professional development and open up opportunities for career advancement, among others.”Going forward, it will be equally important to continue efforts to support changes in our industry. While we have come a long way and made considerable progress, it is still important to push companies and law firms to recognize equity gaps and encourage the use of innovation and technology to help close those gaps.Amy Sellars stated, “Corporate practices favor men, and Covid exacerbates this problem. 
Will companies acknowledge that women took on most of the additional burdens of children at home, education at home, of people at home all the time (more dishes, more cleaning, more cooking, less dry cleaning, and more laundry)?”Rebecca Sipowicz said, “It is up to all of us to make sure that the realization that flexibility can result in increased productivity and satisfaction continues long after the pandemic, allowing women (and men of course) to have the best of both worlds.”Addressing Social Justice and Equality Issues2020 was also a devastating year for people of color, as well as underrepresented and low-income communities. The tragic events throughout the year brought social inequality issues and systemic racism to the forefront of the conversation in many families, workplaces, and social circles. Many of the lessons learned in the fight for gender equality can also be applied to the fight for racial and social equality. For instance, just as empowering women can start at the individual level, the fight for social and racial equality can also start with small, individual acts.These acts can be as simple as personally working to educate ourselves on the work to be done, so that we can act on social justice issues in the most impactful way:Jenya Moshkovich said, “2020 was a difficult year in so many ways including the tragic deaths of George Floyd, Breonna Taylor, and Ahmaud Arbery and many incidents of xenophobic violence against Asian Americans. My personal focus has been on educating myself, speaking up for others, listening, and fostering belonging. And there is so much more that needs to be done.”Gina Sansone added, “The social issues raised in 2020 were unfortunately just a magnification of issues that have existed for a long time. It was a perfect storm of events that certainly made me and others face the thought patterns, inequalities, and general civil unrest that has been festering in our society. It is very easy to live in a bubble and lose sight. I think one of the most important things that happened was people stopped being quiet and just accepting. Change will not happen unless it is absolutely forced and we need to continue recognizing that the world is not equal.”Creating social justice change also can mean utilizing the education we do have about these issues, and working within our communities to help in any way possible – both at the individual level and on a broader scale:Amy Sellars stated, “My husband and I have always been passionate about voting rights and participate in get out the vote efforts. 2020 was a particularly important year for voting issues, as so many people were isolated and had even less access to register to vote or get to polls than normal. Working with the League of Women Voters, we did neighborhood registration drives, and we volunteered as non-partisan poll watchers. We also picked up Meals on Wheels shifts. All around the country, meal recipients who used to be fed at central locations were transitioned to home deliveries, and it has taken an army of volunteers in personal vehicles…We are also volunteering on the domestic crisis hotline.”We can also leverage the networks and programs put in place within forward-thinking organizations to help bring about social change. More and more law firms and organizations are working to help close the gender gap and fight racial and social inequity. 
Employees of those organizations are in a unique position to join those initiatives to make more of an impact:

Laura Ewing-Pearle said, "While (Baker Botts) had resources in place prior to the events of last year, the firm has also increased efforts over the past twelve months to address social issues including greater outreach to diverse communities, and creating a significant pro bono partnership with Official Black Wall Street, among other major initiatives."

Rebecca Sipowicz added, "I am a member of the Ocwen Global Women's Network (OGWN), which supports the attainment of Ocwen's goals in diversity, inclusion, and talent development. I am also on the planning committee for the National Association of Women Lawyers (NAWL) mid-year meeting. NAWL's mission is to empower women in the legal profession, while cultivating a diverse membership dedicated to equality, mutual support and collective success. Membership in and support of organizations such as NAWL and OGWN provide me with a platform to address the diversity and social issues that permeated 2020."

Conclusion

The lesson learned from these strong women during a year full of challenges is that seemingly "small" acts can have big impacts. Change starts with all of us, at an individual level, working to empower women and make impactful societal changes – one person, one organization, and one community at a time.

Thank you to the five women who participated in our 2021 International Women's Day Campaign! Take a look at our 2020 and 2019 International Women's Day campaigns for more inspiring stories of women in our industry making bold moves to promote gender equality.

For more information, please reach out to us at info@lighthouseglobal.com.
November 21, 2019
Blog
cloud, self-service, spectra, ediscovery-process, blog, ediscovery-review,

The Truth Behind Self-Service Pricing in eDiscovery

eDiscovery pricing has always been nuanced and inconsistent across vendors and technology providers, making it difficult for law firms and corporations alike to compare and contrast options. So, it is no big surprise that this same challenge exists across self-service, spectra eDiscovery tools and software as well, making it extremely challenging to model out an apples-to-apples comparison across solutions. This inability to accurately compare costs across different platforms leaves you and your team in the dark when it comes to choosing the right tool and pricing model to fit your needs.

The Challenge

Today's self-service, spectra solutions are frequently priced based on data sizes/volumes at different phases of the eDiscovery processing, review, and production workflow, often with each of these steps having its own cost trigger. For example, many solutions charge based on hosted volume, raw data size, or even post-extraction data volume, while others charge a flat fee per matter.

Although attractive on their face, flat-fee plans carry their own risks: on a per-matter plan you may be in good shape if you are able to entirely self-support, but any requests for help or training are frequently not included in the flat fee. The inability to predict future support needs makes the flat fee a riskier choice. While paying for eDiscovery based on per-GB sizes is a very popular method, not knowing the expansion rates or hosted volume, or not needing the entire data set after culling and filtering, means you may fall victim to data size anomalies or pay for data you don't need.

Lastly, it is important to understand the full ecosystem of potential charges to avoid any surprises. Make sure to understand whether there are other costs or "gotchas" you need to be aware of around user fees, OCR costs, Bates endorsement fees, user trainings, additional license fees to other platforms, costs to process specific file types, language translation, etc. Not to mention, you also have to consider the various technology platforms that are included and assess the need for ongoing expert support.

There is no perfect pricing model. The key to all of this is choosing the right model for you and your eDiscovery profile. But how do you go about that?

The Solution

When you are evaluating pricing for self-service, spectra platforms and you have narrowed down to a few technology providers that offer technology that fits your needs, make sure to leverage the following tips to ensure you're making an informed comparison:

- Trust but verify. Ask the technology provider to explain the price point and both how and when it is measured. Discuss how the confluence of the eDiscovery workflow and cost actually come together. Will all workflows trigger all cost points? Do you have to pay for technology you don't want or need? Understanding the answers to these questions will allow you to get a big-picture understanding and a feel for where you may see your costs rise or decrease.
- Set up a real case example. Ask the potential providers to use your actual data volumes to illustrate what the cost would look like for a given period of time (i.e. a month or even a few years). This will allow you to see what actuals would be across platforms as well as give you the opportunity to explore different pricing models with the vendor to meet your budgetary constraints.
- Use a pricing calculator. Create a pricing calculator to compare self-service, spectra tools. Add as many variables as you would like to understand across all possible scenarios and across the different platforms you are evaluating. Leverage this to compare bottom-line numbers and determine the right fit for you. Additionally, lean on your vendor to help you build this out and make a comparison. (A simple sketch of such a calculator appears below.)

To discuss this topic more or to learn how we can help you make an apples-to-apples comparison, feel free to reach out to me at bthompson@lighthouseglobal.com.
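To illustrate the pricing-calculator tip above, here is a minimal sketch that compares a per-GB hosted model against a flat monthly fee across one matter profile. The rate figures, fee names, and volume assumptions are purely hypothetical placeholders to be replaced with the actual quotes you receive from each provider.

```python
from dataclasses import dataclass

@dataclass
class PerGBModel:
    name: str
    hosting_per_gb_month: float   # hypothetical hosting rate per hosted GB per month
    processing_per_gb: float      # hypothetical one-time processing rate per collected GB
    user_fee_month: float = 0.0   # flat monthly user/licence fees, if any

    def total_cost(self, collected_gb: float, hosted_gb: float, months: int) -> float:
        one_time = collected_gb * self.processing_per_gb
        monthly = hosted_gb * self.hosting_per_gb_month + self.user_fee_month
        return one_time + months * monthly

@dataclass
class FlatFeeModel:
    name: str
    fee_per_matter_month: float   # hypothetical flat monthly fee per matter
    support_hour_rate: float      # help and training billed separately
    expected_support_hours: float = 0.0

    def total_cost(self, collected_gb: float, hosted_gb: float, months: int) -> float:
        return months * self.fee_per_matter_month + self.support_hour_rate * self.expected_support_hours

# One illustrative matter profile: 100 GB collected, 40 GB hosted after culling, 12 months.
models = [
    PerGBModel("Vendor A (per GB)", hosting_per_gb_month=10.0, processing_per_gb=25.0, user_fee_month=500.0),
    FlatFeeModel("Vendor B (flat fee)", fee_per_matter_month=4000.0, support_hour_rate=200.0, expected_support_hours=10.0),
]
for model in models:
    print(f"{model.name}: ${model.total_cost(collected_gb=100, hosted_gb=40, months=12):,.0f}")
```

Extending the same structure with the other cost triggers discussed above (OCR, endorsements, translations, and so on) makes it much easier to see where each model's bottom line actually comes from.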
November 20, 2020
Blog
privilege, analytics, ai-big-data, data-re-use, phi, pii, blog, chat-and-collaboration-data, ai-and-analytics

The Sinister Six…Challenges of Working with Large Data Sets

Collectively, we have sent an average of 306.4 billion emails each day in 2020. Add to that 23 billion text messages and other messaging apps, and you get roughly 41 million messages sent every minute[1]. Not surprisingly, there have been at least one or two articles written about expanding data volumes and the corresponding impact on discovery. I’ve also seen the occasional post discussing how the methods by which we communicate are changing and how “apps that weren’t built with discovery in mind” are now complicating our daily lives. I figured there is room for at least one more big data post. Here I’ll outline some of the specific challenges we’ll continue to face in our “new normal,” all while teasing what I’m sure will be a much more interesting post that gets into the solutions that will address these challenges.Without further delay, here are six challenges we face when working with large data sets and some insights into how we can address these through data re-use, AI, and big data analytics:Sensitive PII / SHI - The combination of expanding data volumes, data sources, and increasing regulation covering the transmission and production of sensitive personally identifiable information (PII) and sensitive health information (SHI) presents several unique challenges. Organizations must be able to quickly respond to Data Subject Access Requests (DSARs), which require that they be able to efficiently locate and identify data sources that contain this information. When responding to regulatory activity or producing in the course of litigation, the redaction of this content is often required. For example, DOJ second requests require the redaction of non-responsive sensitive PII and/or SHI prior to production. For years, we have relied on solutions based on Regular Expressions (RegEx) to identify this content. While useful, these solutions provide somewhat limited accuracy. With improvements in AI and big data analytics come new approaches to identifying sensitive content, both at the source and further downstream during the discovery process. These improvements will establish a foundation for increased accuracy, as well as the potential for proactively identifying sensitive information as opposed to looking for it reactively.Proprietary Information - As our society becomes more technologically enabled, we’re experiencing a proliferation of solutions that impact every part of our life. It seems everything nowadays is collecting data in some fashion with the promise of improving some quality of life aspect. This, combined with the expanding ways in which we communicate means that proprietary information, like source code, may be transmitted in a multitude of ways. Further, proprietary formulas, client contacts, customer lists, and other categories of trade secrets must be closely safeguarded. Just as we have to be vigilant in protecting sensitive personal and health information from inadvertent discloser, organizations need to protect their proprietary information as well. Some of the same techniques we’re going to see leveraged to combat the inadvertent disclosure of sensitive personal and health information can be leveraged to identify source code within document populations and ensure that it is handled and secured appropriately.Privilege - Every discovery effort is first aimed at identifying information relevant to the matter at hand, and second to ensure that no privileged information is inadvertently produced. That is… not new information. 
As we’ve seen the rise in predictive analytics, and, for those that have adopted it, a substantial rise in efficiency and positive impact on discovery costs, the identification of privileged content has remained largely an effort centered on search terms and manual review. This has started to change in recent years as solutions become available that promise a similar output to TAR-based responsiveness workflows. The challenge with privilege is that the identification process relies more heavily on “who” is communicating than “what” is being communicated. The primary TAR solutions on the market are text-based classification engines that focus on the substantive portion of conversations (i.e. the “what” portion of the above statement). Improvments in big data analytics mean we can evaluate document properties beyond text to ensure the “who” component is weighted appropriately in the predictive engine. This, combined with the potential for data re-use supported through big data solutions, promises to substantially increase our ability to accurately identify privileged, and not privileged, content.Responsiveness - Predictive coding and continuous active learning are going to be major innovations in the electronic discovery industry…would have been a catchy lead-in five years ago. They’re here, they have been here, and adoption continues to increase, yet it’s still not at the point where it should be, in my opinion. TAR-based solutions are amazing for their capacity to streamline review and to materially impact the manual effort required to parse data sets. Traditionally, however, existing solutions leverage a single algorithm that evaluates only the text of documents. Additionally, for the most part, we re-create the wheel on every matter. We create a new classifier, review documents, train the algorithm, rinse, and repeat. Inherent in this process is the requirement that we evaluate a broad data set - so even items that have a slim to no chance of being relevant are included as part of the process. But there’s more we can be doing on that front. Increases in AI and big data capabilities mean that we have access to more tools than we did five years ago. These solutions are foundational for enabling a world in which we continue to leverage learning from previous matters on each new future matter. Because we now have the ability to evaluate a document comprehensively, we can predict with high accuracy populations that should be subject to TAR-based workflows and those that should simply be sampled and set aside.Key Docs - Variations of the following phrase have been uttered time and again by numerous people (most often those paying discovery bills or allocating resources to the cause), “I’m going to spend a huge amount of time and money to parse through millions of documents to find the 10-20 that I need to make my case.” They’re not wrong. The challenge here is that what is deemed “key” or “hot” in one matter for an organization may not be similar to that which falls into the same category on another. Current TAR-based solutions that focus exclusively on text lay the foundation for honing in on key documents across engagements involving similar subject matter. 
Big data solutions, on the other hand, offer the capacity to learn over time and to develop classifiers, based on more than just text, that can be repurposed at the organizational and, potentially, industry level.

Risk - Whether related to sensitive, proprietary, or privileged information, every discovery effort utilizes risk-mitigation strategies in some capacity. This, quite obviously, extends to source data, with increasing emphasis on comprehensive records management, data loss prevention, and threat management strategies. Improvements in our ability to accurately identify and classify these categories during discovery can have a positive impact on left-side EDRM functional areas as well. Organizations are not only challenged with identifying this content through the course of discovery, but also in understanding where it resides at the source and ensuring that they have appropriate mechanisms to identify, collect, and secure it. Advances in AI and big data analytics will enable more comprehensive discovery programs that leverage the identification of these data types downstream to improve upstream processes.

As I alluded to above, these big data challenges can be addressed with the use of AI, analytics, data re-use, and more. Now that I have summarized some of the challenges many of you are already tasked with dealing with on a day-to-day basis, you can learn more about actual solutions to these challenges. Check out my colleague's write-up on how AI and analytics can help you gain a holistic view of your data.

To discuss this topic more or to ask questions, feel free to reach out to me at NSchreiner@lighthouseglobal.com.

[1] Metrics courtesy of Statista
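The "Sensitive PII / SHI" challenge above notes that RegEx-based approaches have long been the baseline for locating this content. For readers who have not seen what that baseline looks like, here is a minimal sketch of a pattern-based scan for a few common U.S. identifiers. The patterns, labels, and sample text are simplified illustrations; real matters need far more robust patterns, validation, and the AI-driven approaches discussed in the post.

```python
import re

# Simplified illustrative patterns - production-grade PII detection needs
# validation (e.g., checksums, context) and far broader coverage.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return every match for each pattern, keyed by PII type."""
    hits = {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}
    return {label: matches for label, matches in hits.items() if matches}

sample = "Patient John Doe, SSN 123-45-6789, can be reached at (555) 123-4567 or jdoe@example.com."
print(scan_for_pii(sample))
# {'ssn': ['123-45-6789'], 'phone': ['(555) 123-4567'], 'email': ['jdoe@example.com']}
```

Even this toy example shows why the post argues for moving beyond pure pattern matching: the patterns are brittle, context-blind, and miss anything that does not follow a rigid format.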
March 24, 2021
Blog
microsoft, cloud, data-privacy, blog, corporate-legal-ops, microsoft-365, information-governance

The Impact of Schrems II & Key Considerations for Companies Using M365: Microsoft’s Response

In our four-part blog series on Schrems II and its impacts, we have already given the state of data transfers in light of the Schrems II decision as well as some practical tips on how to conduct a risk assessment. In sum, the foundation upon which companies have transferred data overseas for the last half-decade was recently shaken. Companies are left with no good legal options for data transfer so, instead, they need to make calculated risk assessments based on business need and convenience versus compliance with an unknown and quickly changing legal landscape.For those companies who have chosen Microsoft as their cloud provider, Microsoft has taken additional steps to alleviate some of the risks. In addition, there are some specific supplementary measures companies can take in their Microsoft 365 (M365) environment to mitigate some risk. In this third part of our series, we will consider the position if you are analysing data transfers that take place using M365, Microsoft’s flagship software-as-a-service tool, which is in use by many entities operating within Europe.It is worth pointing out that Microsoft has responded quickly to the upheaval. The EDPB issued its supplementary measures on November 11th, 2020, and by November 19th, Microsoft issued a press release entitled “New Steps to Defend Your Data.” Microsoft explained it was strengthening the rights of its public sector and enterprise customers in relation to data by including an Additional Safeguards Addendum into standard contractual terms. That addendum would give contractual force to the new steps Microsoft laid out in terms of defending customers’ data, namely that Microsoft:will challenge every government request for public sector or enterprise data from any government where there is a lawful basis for doing so; andwill compensate a public-sector or enterprise-customer user if data is disclosed in response to a government request in violation of the GDPR.Microsoft pointed out that these commitments exceeded the EDPB’s recommendations (presumably referring to the contractual supplementary measures in the EDPB guidance). These changes have received a mixed response, but it is interesting to see that the data protection authorities within three of the German states (Baden -Württemberg, Bavaria, and Hesse) issued a joint opinion that this was a move in the right direction since it included significant improvements for the rights of European citizens and was a clear signal to other providers to follow suit.So at a macro level, Microsoft has taken very public steps. However, that does not remove the need to carry out the analysis set out by the EDPB or, in general, carry out a risk assessment to give you a thorough understanding of any risks associated with using M365. Here are some specific considerations to keep in mind:As to the first step of the EDPB recommendations, identifying your data transfers, it is our understanding that Microsoft will shortly be publishing more detailed data maps which will help.The Microsoft white paper on the necessary elements for monitoring, securing, and assessing cloud storage is a very helpful resource. 
An updated version of this is also expected shortly.As part of your assessment, you should review the Microsoft Online Services Data Protection Addendum, in particular, the Data Transfers and Location sections, and the amended terms arising from Microsoft’s recent press release.When carrying out your risk assessment or transfer impact assessment, you should consider carefully the extent to which M365 can be configured to reduce the amount of personal data leaving Europe. More specifically, there are six areas upon which you could focus: Multi-geo: With multi-geo, a company operating in Europe can choose to have its Exchange Online (i.e., email), its SharePoint Online, and its OneDrive for Business data stored, at rest, within Europe. Multi-geo reduces the amount of data that would be transferred to the US in comparison to having the geo (Microsoft’s word for the central hub where data is stored) within the US. This is probably the most significant step a company can take to reduce data transfers. Choosing whether or not to enable applications: Certain applications such as Sway, Microsoft’s newsletter application, will have their data stored in the US irrespective of whether a company chooses to have a multi-geo setup. A company might weigh the pros and cons of each application, which involves data being stored in the US, and decide that it could operate without that application.Configuration settings at an application level: There are many settings within M365 at an application level that will vary the amount of data being generated and processed. Assessing each application in turn and deciding the specific configuration within that application can make a significant difference to the amount of personal data being created, moved, or stored. For more details on how to evaluate this for the popular collaboration tool, Teams, you can review this write-up.Encryption: Explore encryption thoroughly and look to implement it, if practical, as an additional technical safeguard. There a number of good resources explaining how encryption operates and the options available to add additional encryption. Here is a good starting point for learning about Microsoft’s encryption options.Customer lockbox: If you configure M365 so that the number of data transfers is reduced to the bare minimum, one area where transfers might still be needed is when there is a need for remote access by Microsoft engineers to provide support. Customer lockbox allows you to give final and limited approval for such access, which you can do after carrying out a specific risk assessment.Audit logs: All significant events in M365 are audited so you should put in place a review of audit logs to support any risk assessments that you complete.It is also more than just good practice to put in place a retention policy within M365, it is essential to ensure that personal data is not being retained for longer than is necessary. Reducing the amount of personal data within an organisation reduces the risk of data breaches that could result in problems under the provisions of the GDPR. Microsoft is following the legal landscape closely so expect to see quick responses from them as things change. But what kinds of changes should companies expect and when? 
Read the final part of this blog series on what the future may hold.

To discuss this topic further, please feel free to reach out to us at info@lighthouseglobal.com.
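One of the configuration areas listed above is reviewing M365 audit logs to support your risk assessments. As a small, hedged illustration of what that review might look like, the sketch below summarises an exported audit log CSV by user and operation. The column names ("UserId", "Operation") follow the general shape of a Microsoft 365 unified audit log export, but treat them as assumptions and adjust them to whatever headers your export actually contains.

```python
import csv
from collections import Counter

def summarise_audit_log(csv_path: str) -> Counter:
    """Count audit events per (user, operation) pair from an exported CSV.

    Column names are assumptions about the export format - adjust to match
    the headers in your own audit log export.
    """
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8-sig") as fh:
        for row in csv.DictReader(fh):
            user = row.get("UserId", "unknown")
            operation = row.get("Operation", "unknown")
            counts[(user, operation)] += 1
    return counts

if __name__ == "__main__":
    summary = summarise_audit_log("audit_log_export.csv")
    for (user, operation), count in summary.most_common(20):
        print(f"{count:6d}  {user:40s}  {operation}")
```

A regular summary like this is not a compliance control in itself, but it gives the risk assessment something concrete to point to about who is accessing data and how often.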
September 28, 2022
Blog
review, blog, ediscovery-review,

The Disclosure Pilot Scheme Is Here to Stay: What That Means for Your Practice

On July 15, 2022, the mandatory Disclosure Pilot Scheme (PD51U) was officially approved and will operate on a permanent basis within the Business and Property Courts (BP&C) of England and Wales. Originally implemented in 2019 on a temporary pilot basis, it was extended twice and had been set to expire in December of 2022. Its approval means that on October 1, 2022, the pilot will end, and the scheme will officially be known as Practice Direction (PD) 57AD “Disclosure in the Business and Property Courts.”This approval is no surprise to those familiar with the modern disclosure process in the UK. PD51U was originally implemented to address the key issues associated with standard disclosure under Civil Procedure Rule (CPR) 31, such as unwieldly costs and the insurmountable scale of disclosure due to ever-growing corporate data volumes. As per UTB LLC v Sheffield United, the pilot was meant to effect a “culture change” in the reasonableness and proportionality of disclosure requests by streamlining the process in a variety of ways. One of the most notable is through the encouragement of leveraging technology (such as technology assisted review or TAR) and data analytics for document review—even going so far as to mandate the use of TAR in cases where the document count exceeds 50,000.Over the last two years, this push toward implementing more technology to streamline the disclosure process has proven to be a wise one. With a worldwide shift to cloud-based infrastructures and remote working, corporate data volumes have exploded and will only continue to grow. Therefore, the traditional means of disclosure review, wherein a team of reviewers looks at each electronic document one-by-one, is quickly becoming untenable. Utilising technology to streamline review is more imperative than ever and will only grow in importance as data volumes continue to balloon. What 57AD does not mean, however, is that solicitors faced with disclosure need to be data science or technology experts. It simply means that it will become increasingly important for solicitors who are not comfortable with disclosure technology to find a solid managed review partner that can help streamline the disclosure process with technology and meet Practice Direction 57AD requirements. Below are key attributes to look for when seeking such a partner.Look for a managed review partner with expertise on the Disclosure Review Document (DRD)The DRD is meant to facilitate an agreement between parties about what constitutes proportional disclosure, and how to achieve that goal in a cost-effective manner. To do so, it requires parties to identify the key issues of the case and then detail the method of disclosure for each issue, with five methods from which to choose.[1] Each method can have severe impacts on the cost of a matter, as well as the overall outcome of the case for clients. It is vital that someone with in-depth disclosure expertise is involved in the negotiation and completion of this document. Some managed review vendors may be able to provide staffing and project management when it comes to disclosure document review but will not have experts available and capable to provide advice on effective disclosure strategy, including DRD assistance. Without this expertise, a party may find itself agreeing to disclosure methods that significantly balloon budgets or even worse, result in harmful outcomes for clients. 
Look for a managed review partner who has developed strong, defensible workflows

One of the hallmarks of, and impetuses for, PD 51U (soon to be PD 57AD) was streamlining the disclosure process in the face of ever-growing and unprecedented data volumes. Understanding when and how to leverage technology to cull and prioritise data for review, as well as how to leverage TAR, is imperative. However, the technology and workflows can seem overwhelming, especially to those who don't perform disclosure often. Thus, it is essential to find a managed review partner who has access to the best review technology and knows how to leverage that technology to achieve the best results in every type of matter. It is also important that the managed review partner has developed strong, defensible workflows for data reduction that can be customised to meet the individual needs of each client.

Look for a managed review partner who thinks outside the traditional linear review approach

While it may seem simpler to fall back on traditional approaches to disclosure document review (i.e., hiring many reviewers to read and categorise each document), it is important to remember that PD 57AD was enacted because that approach is quickly becoming too burdensome for parties. The traditional approach also opens parties up to risk when reviewers cannot effectively review the volume of documents within the time frames required for disclosure. Today's larger data volumes and more complicated data increase the risk that human reviewers will miss important documents that were required to be disclosed or, conversely, that they will disclose harmful or sensitive documents that should not have been disclosed. Forward-thinking managed review partners have anticipated this change and have invested in technology and human expertise that can defensibly minimise document volumes so that a discrete number of subject matter experts can look at prioritised categories of pertinent documents, maximising the value of human review. In this way, a managed review partner can help solicitors move away from an outdated approach to review while streamlining the disclosure process, keeping litigation budgets in check, minimising risk, and achieving better outcomes.

Look for a partner who will help prepare bespoke briefing documentation, right from the outset

When a matter needs to scale up quickly and on short notice, the painstaking process of adding new reviewers can explode budgets—not only because of the additional overhead, but also because of the churn and inefficiency created by inconsistent work product from inexperienced new reviewers. A good managed review partner will prepare for and minimise this churn from the outset by creating customised briefing documentation that enables new reviewers to roll onto matters seamlessly, without a heavy lift from the client or review manager. Documentation like term glossaries for niche cases (for example, medical inquiries), kept in a central repository, will help case teams quickly scale up and onboard new reviewers at short notice while minimising the churn and risk often thought of as inevitable when adding new reviewers.

Look for a partner who has developed ways to ensure quality work from review teams

Inconsistent or incorrect decisions from review teams create additional work, which can decimate budgets. Even when data volumes are culled to more manageable levels, inaccurate review work product can still open clients up to risk, especially when sensitive data is involved.
Look for managed review partners who have systems in place to ensure the accuracy of the review team from the outset. For example, some managed review providers will rigorously "test" the work product of review teams directly after training has finished. This testing process can confirm that each reviewer assigned to the team understands the subject matter and review process, and that their work product aligns with the case team's direction from the start of the matter. This type of quality control, begun at the reviewer selection stage, can greatly reduce risk while keeping budgets under control.

Look for a managed review partner who ensures value for money in terms of candidates

In a traditional approach, first-pass review for relevance, privilege, and issues is undertaken by UK-based paralegals with proven experience in reviewing and redacting documents, together with a law degree, LPC/GDL, or NALP certification. However, these reviewers can be expensive and billed at exorbitant hourly rates. Forward-thinking managed review partners often have partnerships with reviewers who have been admitted to Bars outside of the UK, providing an added layer of experience at a reduced cost. This aligns with the overall message of PD 57AD, in that it offers a reliable basis for costs that promotes the cost-effective and efficient conduct of disclosure.

[1] Model A – No order for disclosure; Model B – Limited disclosure; Model C – Request-led, search-based disclosure; Model D – Narrow search-based disclosure (with or without narrative documents); Model E – Wide search-based disclosure
December 17, 2020
Blog
analytics, ai-big-data, tar-predictive-coding, blog, ai-and-analytics, ediscovery-review

TAR Protocols 101: Avoiding Common TAR Process Issues

A recent conversation with a colleague in Lighthouse's Focus Discovery team resonated with me – we got to chatting about TAR protocols and the evolution of TAR, analytics, and AI. Only five years ago, people were skeptical of TAR technology and most discussions revolved around understanding TAR and AI technology. The focus has since shifted to understanding how to evaluate the process – whether your own team's or the one behind opposing counsel's production. Although an understanding of TAR technology helps with that task, it is not enough on its own to evaluate items like the parity of sample document types, the impact of using production data versus one's own data, and the types of seed documents used. That discussion prompted me to grab one of our experts, Tobin Dietrich, to discuss the cliff notes of how one should evaluate a TAR protocol.

It is not uncommon for lawyers to receive a technology assisted review methodology from producing counsel – especially in government matters, but also in civil matters. In the vein of the typical law school course, this blog will teach you how to issue spot if one of those methodologies comes across your desk. Once you've spotted the issues, bringing in the experts is the right next step.

Issue 1: Clear explanation of technology and process. If the party cannot name the TAR tool or algorithm they used, that is a sign there is an issue. Similarly, if they cannot clearly describe their analytics or AI process, this is a sign they do not understand what they did. Given that the technology was trained by this process, that lack of understanding is an indicator that the output may be flawed.

Issue 2: Document selection – how and why. In the early days of TAR, training documents were selected fairly randomly. We have evolved to a place where people are choosier about what documents they use for training. This is generally a positive thing, but it does require you to think about what may be over- or under-represented in the opposing party's choice of documents. More specifically, this comes up in three ways:

Number of documents used for training. A TAR system needs to understand what responsive and non-responsive look like, so it needs to see many examples in each category to approach certainty in its categorization. Using too small a sample, e.g., 100 or 200 documents, risks causing the TAR system to categorize incorrectly. Although a system can technically build a predictive model from a single document, it will only effectively locate documents that are very similar to the starting document. A typical document corpus is not so uniform that a single-document predictive model can be relied upon.

Types of seed documents. It is important to use a variety of documents in the training. The goal is to have the inputs represent the conceptual variety of the broader document corpus. Using another party's production documents, for example, can be very misleading for the system: the vocabulary used by other parties is different, the people are different, and the concepts discussed are very different. This can lead to incorrect categorization of documents. Production data, specifically, can also add confusion through the presence of Bates or confidentiality stamps. If the seed or training documents used do not mirror the types of documents expected in the document corpus, you should be suspicious.

Parity of seed document samples. Although you do not need anything approaching perfect parity of responsive and non-responsive documents, using 10x the number of non-responsive versus responsive documents can be problematic. This kind of disparity can distort the TAR model, and it can exacerbate either of the above issues (the number or type of seed documents).

Issue 3: How is performance measured? People throw around common TAR metrics like recall and precision without clarifying what they refer to. You should always be able to tell what population of documents these statistics relate to. Also, don't skip over precision. People often treat recall as sufficient, but precision can provide important insight into the quality of model training as well.

By starting with these three areas, you should be able to flag some of the more common issues in TAR processes and either avoid them or ask for them to be remedied.
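To make the recall and precision point above concrete, here is a minimal, hypothetical sketch of how the two metrics are computed from a reviewed validation sample. The counts are purely illustrative and are not drawn from any particular matter or review platform.

```python
# Minimal illustration of recall and precision for a TAR validation sample.
# The counts below are hypothetical; in practice they come from human review
# of a statistically drawn sample of a clearly identified document population.

true_positives = 850    # docs the model flagged responsive that reviewers confirmed responsive
false_positives = 150   # docs the model flagged responsive that reviewers found non-responsive
false_negatives = 95    # responsive docs the model scored below the cutoff (missed)

recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"Recall:    {recall:.1%}")     # share of all responsive docs the model found
print(f"Precision: {precision:.1%}")  # share of model-flagged docs that are actually responsive
```

As the article notes, both numbers are only meaningful once you know which population the sample was drawn from – for example, the full corpus after culling versus only the documents the model flagged for review.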
February 5, 2021
Blog
tar-predictive-coding, blog, ai-and-analytics, ediscovery-review

TAR 2.0 and the Case for More Widespread Use of TAR Workflows

Cut-off scores, seed sets, training rounds, confidence levels – to the inexperienced, technology assisted review (TAR) can sound like a foreign language and can seem just as daunting. Even for legal professionals who have experience with the traditional TAR 1.0 model, the process may seem too rigid to be useful for anything other than large data volumes with pressing deadlines (such as HSR Second Requests). However, TAR 2.0 models are not limited by the inflexible workflow imposed by the traditional model and require less upfront time investment to realize substantial benefits. In fact, TAR 2.0 workflows can be extremely flexible and helpful for myriad smaller matters and non-traditional projects, including everything from initial case assessment and key document review to internal investigations and compliance reviews.

A Brief History of TAR

To understand the various ways that TAR 2.0 can be leveraged, it helps to understand the evolution of the TAR model, including typical objections and drawbacks. Frequently referred to as predictive coding, TAR 1.0 was the first iteration of these processes. It follows a more structured workflow and is what many people think of when they think of TAR. First, a small team of subject-matter experts must train the system by reviewing control and training sets, wherein they tag documents based on their experience with and knowledge of the matter. The control set provides an initial overall estimated richness metric and establishes the baseline against which the iterative training rounds are measured. Through the training rounds, the machine develops the classification model. Once the model reaches stability, scores are applied to all the documents based on the likelihood of being relevant, with higher scores indicating a higher likelihood of relevance. Using statistical measures, a cutoff point or score is determined and validated, above which the desired measure of relevant documents will be included. The remaining documents below that score are deemed not relevant and do not require any additional review.

Although the TAR 1.0 process can ultimately result in a large reduction in the number of documents requiring review, some elements of the workflow can be substantial drawbacks for certain projects. The classification model is most effectively developed from accurate and consistent coding decisions throughout the training rounds, so the team of subject-matter experts conducting the review is typically made up of experienced attorneys who know the case well. These attorneys will likely have to review and code at least a few thousand documents, which can be expensive and time consuming. This training must also be completed before other portions of the document review, such as privilege or issue coding, can begin. Furthermore, if more documents are added to the review set after the model reaches stability (think a refresh collection or a late-identified custodian), the team will need to resume the training rounds to bring the model back to stability for the newly introduced documents. For these reasons, the traditional TAR 1.0 model is somewhat inflexible and best suited for matters where the data is available upfront and not expected to change over time (i.e., no rolling collections), so that the large number of documents excised from the more costly document review portion of the project offsets the upfront effort spent training the model.

TAR 2.0, also referred to as continuous active learning (CAL), is a newer workflow (although it has been around for a number of years now) that provides more flexibility. Using CAL, the machine also learns as the documents are being reviewed; however, the initial classification model can be built with just a handful of coded documents. This means the review can begin as soon as any data is loaded into the database, and it can be done by a traditional document review team right from the outset (i.e., there is no highly specialized "training" period). As the documents are reviewed, the classification model is continuously updated, as are the scores assigned to each document. Documents can be added to the dataset on a rolling basis without having to restart any portion of the project; the new documents are simply incorporated into the developing model. These differences make TAR 2.0 well suited for a wider variety of cases and workflows than the traditional TAR 1.0 model.

TAR 2.0 Workflow Examples

One of the most common TAR 2.0 workflows is a "prioritization review," wherein the highest-scoring documents are pushed to the front of the review. As documents are reviewed, the model is updated and the documents are rescored. This continuous loop allows the most up-to-date model to identify which documents should be reviewed next, making for an efficient review process with several benefits. The team will review the most likely relevant, and perhaps most important, documents first, which can be especially helpful when there are short timeframes within which to begin producing documents. While all documents can certainly be reviewed, this workflow also provides the means to establish a cutoff point (similar to TAR 1.0) beyond which no further review is necessary. In many cases, when the review reaches a point where few relevant documents are found, especially in comparison to the number of documents being reviewed, that point of diminishing returns signals the opportunity to cease further review. The prioritization review can also be very effective with incoming productions, allowing the system to identify the most relevant or useful documents.

An alternative TAR 2.0 workflow is the "coverage" or "diverse" review model. In this model, rather than reviewing the highest-scoring documents first, the review team focuses on documents in the middle-scoring range. The point of a diverse review model is to focus on what the machine doesn't know yet; reviewing the middle range of documents further trains the system. In this way, a coverage TAR 2.0 review model exposes the team to a wide variety of documents within the dataset. When using this workflow for production reviews, the goal is to end up with the documents separated into those likely relevant and those likely not relevant. This workflow is similar to the TAR 1.0 workflow in that the desired outcome is to identify the relevant document set as quickly and directly as possible without reviewing all of the documents. To illustrate, a model will typically begin with a bell-shaped curve of the distribution of documents across the scoring spectrum.
This workflow seeks to end with two distinct sets: one relevant and one non-relevant.

These workflows can be extremely useful for initial case assessments, compliance reviews, and internal investigations, where the end goal of the review is not to quickly find and produce every relevant document. Rather, the review in these types of cases is focused on gathering as much relevant information as possible or finding a story within the dataset. These reviews are generally more fluid and can change significantly as the review team finds more information within the data. New information found by the review team may lead to more data collections or a change in custodians, which can significantly change the dataset over time (something TAR 2.0 can handle but TAR 1.0 cannot). And because the machine provides updated scoring as the team investigates and codes more documents, it can even provide the team with new investigational avenues and leads. A TAR 2.0 workflow works well because it gives the review team the freedom to investigate and gain knowledge about a wide variety of issues within the documents, while still ultimately resulting in data reduction.

Conclusion

The above workflow examples illustrate that TAR does not have to be the rigid, complicated, and daunting workflow feared by many. Rather, TAR can be a highly adaptable and simple way to gain efficiency, improve end results, and reduce the volume of documents reviewed across a variety of use cases. It is my hope that I have at least piqued your interest in the TAR 2.0 workflow enough that you'll think about how it might benefit you when the next document review project lands on your desk.

If you're interested in discussing the topic further, please feel free to reach out to me at DBruno@lighthouseglobal.com.
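As an addendum for the technically curious, the sketch below is a deliberately simplified, hypothetical illustration of a continuous active learning loop of the kind a prioritization review relies on, written with scikit-learn. It is not the workflow of any particular review platform; the `documents` list, the `review()` callback, and the batch sizes are stand-ins for what a real platform manages internally.

```python
# Simplified sketch of a continuous active learning ("prioritization") loop.
# Hypothetical only: real review platforms handle vectorization, batching, and
# validation internally. Assumes `documents` is a list of extracted text and
# `review(doc_ids)` returns human relevance calls (1 = relevant, 0 = not).
# The seed set should contain examples of both classes.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(documents, review, seed_ids, batch_size=50, rounds=10):
    X = TfidfVectorizer(min_df=2).fit_transform(documents)
    labels = {doc_id: tag for doc_id, tag in zip(seed_ids, review(seed_ids))}

    for _ in range(rounds):
        reviewed = list(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[reviewed], [labels[i] for i in reviewed])

        # Score every unreviewed document and queue the highest-scoring ones next.
        unreviewed = [i for i in range(len(documents)) if i not in labels]
        if not unreviewed:
            break
        scores = model.predict_proba(X[unreviewed])[:, 1]
        next_batch = [unreviewed[j] for j in np.argsort(scores)[::-1][:batch_size]]
        labels.update(zip(next_batch, review(next_batch)))

    return labels  # accumulated human decisions; model scores guide what to read next
```

A coverage or "diverse" review would change only the selection step: instead of taking the highest-scoring unreviewed documents, it would take those whose scores sit closest to 0.5, i.e., the documents the current model is least certain about.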
January 20, 2021
Blog
self-service, spectra, blog, ediscovery-review, ai-and-analytics

Self-Service eDiscovery for Corporations: Three Tips for a Successful Implementation

Given the proliferation of data and the evolving variety of data sources, in-house counsel teams are beginning to exhaust resources managing increasingly complex case data. Self-service eDiscovery legal technology offers a compelling solution. Consider the inefficiencies in-house counsel face today: waiting for vendors to load data or provide platform access, scrambling to keep up with advancing technologies, and managing data security risks. The average in-house counsel team isn't just dealing with these inefficiencies on large litigations; they're encountering them in even the smallest compliance and internal investigation matters.

Self-service solutions offer an opportunity to streamline eDiscovery programs, allowing in-house legal teams to get back to the business of case management and legal counseling. It's understandable that we're witnessing more and more companies move to this model.

So, once your organization has decided it is ready to step into the future and take advantage of the benefits self-service eDiscovery solutions have to offer, what's next? Below, I've outlined three best practices for implementing a self-service eDiscovery solution within your organization. While any organizational change can seem daunting at the outset, keeping these tips in mind will help your company move seamlessly to a self-service model.

1. Define how you will leverage your self-service eDiscovery solution to scale with ease.

One of the key benefits of a quality self-service solution is that it puts your organization back in the eDiscovery driver's seat. You decide which cases you will handle internally, with the advantage of having access to an array of eDiscovery expertise and matter management services when needed, even if that need arises in the middle of an ongoing matter. Cloud-based self-service solutions can readily handle any amount of data, and a quality solution provider will be able to scale seamlessly from self-service to full-service without any interruption to case teams. Having a plan in place regarding how and when you will leverage each of these benefits (i.e., self-service vs. full-service) will help you manage internal resources and implement a pricing model that fits your organization's needs.

2. Select a pricing model that works for your organization.

Every organization's eDiscovery business is different, and self-service pricing models should reflect that. After determining how your organization will ideally leverage a self-service platform, decide what pricing model works best for that type of utilization. Self-service solution providers should be able to offer a variety of licensing options to choose from, ranging from an a la carte approach to subscription and transaction models. Prior to communicating with your potential solution provider, define how you plan to leverage a self-service solution to meet your needs. Then consider the type of support you require to balance your caseload with team resources, and prepare to talk to providers about whether they can accommodate that pricing. Once you have onboarded a self-service solution, be sure to continue evaluating your pricing model, as the way you use the solution may change over time.

3. Discuss moving to a self-service model with your IT and data security teams.

Another benefit of moving to a self-service model is eliminating the burden of application and infrastructure management. Your in-house teams will be able to move from maintaining (and paying for) a myriad of eDiscovery technologies to a single platform providing all of the capabilities you need without the IT overhead. In effect, moving to a self-service solution gives your team access to industry-leading eDiscovery technology while removing the cost and hassle of licensing and infrastructure upkeep.

A self-service model also allows you to transfer some of your organization's data security risk to a solution provider. You gain peace of mind knowing your eDiscovery data and the supporting technology are administered by a dedicated IT and security team in a state-of-the-art IT environment with best-in-class security certifications.

Finally, to ensure your organization realizes the full benefit of moving to a self-service solution, it's imperative that your IT team has a seat at the table when selecting a solution platform. They can help ensure that whatever service is selected can be fully and seamlessly integrated into your organization's systems.

Keeping these tips in mind as your organization begins its self-service journey will help you realize the benefits that a quality self-service eDiscovery platform can provide. For more in-depth guidance on migrating to self-service platforms, see Brooks Thompson's blog posts discussing tips for overcoming self-service objections and building a self-service business case.
November 3, 2020
Blog
cloud, self-service, spectra, cloud-services, blog, ediscovery-review,

Self-Service eDiscovery: Who’s Really in Control of Your Data?

Self-service eDiscovery as a topic has grown significantly in the recent past. With data proliferating at an astronomical rate year over year, it makes sense that corporations and firms want increasing control over this process and its cost. Utilizing a self-service eDiscovery tool is helpful if you want control over your queue as well as your hosted footprint. It is beneficial if your team has the interest in and capability of doing your own ECA. Additionally, self-service options are useful because they provide insight into specific reporting that you may or may not currently be receiving.

Initially, the self-service model was introduced to serve the part of the market that didn't require robust, traditional full eDiscovery services for every matter. Tech-savvy corporations and firms with smaller matters were delighted to have the option to do the work themselves. Over time, there have been multiple instances in which a small matter scales unexpectedly and must be dealt with quickly, in an all-hands-on-deck approach, to meet the necessary deadlines. In these instances, it's beneficial to have the ability to utilize a full-service team. When these situations arise, it's critical to have clean handoffs and to ensure a database will transfer well.

Moreover, we have seen major strides in the self-service space regarding data size thresholds. Self-service options can now handle multiple terabytes, so it's not just a "small matter" solution anymore. This gives internal teams leverage and accessibility not previously experienced.

Self-service considerations and recommendations

It's important to understand the instances in which a company should utilize a self-service model or solution. Thus, I recommend laying out a protocol. Put a process in place ahead of time so that the next small internal investigation that gets too large too quickly has an action plan that gets to the best solution fast. Before doing this, it's important to understand your team's capabilities. How many people are on your team? What are their roles? Where are their strengths? What is their collective bandwidth? Are you staffed for 24/7 support or second requests, or are you not?

Next, it's time to evaluate what part of the process is most beneficial to outsource. Who do you call for any eDiscovery-related need? Do you have a current service provider? If so, are they doing a good job? Are they giving you a one-size-fits-all solution (small or large), or are they meeting you where you are and acting as a true partner? Are they going the extra mile to customize that process for you? It's important to continually audit service providers.

Think back to past examples. How prepared has your team and/or service provider been in various scenarios? For instance, if an investigation is turning into a government investigation, do you want your team pushing the buttons and becoming an expert witness, or do you have a neutral third party to hand that responsibility off to?

After the evaluation portion, it's time to memorialize the process through a playbook, so that everyone has clear guidelines regardless of which litigator or paralegal is working on the case internally. What could sometimes be a complicated situation can be broken down into simple rules. If you have a current protocol or playbook, ensure your team understands it. Outline the various circumstances in which the team would utilize self-service or full-service, so everyone is on the same page.

For more on this topic, check out the interview on the Law & Candor podcast on scaling your eDiscovery program from self-service to full-service.
December 20, 2019
Blog
gdpr, ediscovery-process, blog, legal-operations, information-governance, data-privacy,

Sitting at the Same Lunch Table: 3 Key Ways to Ensure Legal and IT are in Sync

Legal and IT teams do not necessarily sit at the same lunch table (to use an over-simplified high-school analogy); however, organizations can quickly run into challenges when these teams are not aligned. As corporate data volumes and types continue to grow at record speed, it is critical to maintain a technology infrastructure that is not only secure but also satisfies the legal requirements for managing information. I recently had the privilege of chatting with Craig Shaver, the eDiscovery Program Director at Hilton Worldwide, about the challenges of this electronic data mosaic and innovative strategies to enable collaboration between these groups on the Law and Candor podcast. In this blog, I will review the key challenges we discussed and summarize three key solutions to overcoming them, in the hope it will help align your IT and legal teams.

To level set, the two teams have different priorities. Legal is generally focused on ensuring that the company's data is protected and retention policies are upheld, while IT is looking for new ways to manage the ever-increasing volume of data to drive efficiency while maintaining budgets. So, when IT moves forward with new technology solutions, large data migrations, moves to the Cloud, or even simple contractual agreements and is not in sync with Legal due to other priorities or a lack of communication, items may be missed, creating large downstream issues such as potentially responsive documents going uncollected, being slapped with spoliation charges, or costly and time-consuming rework.

Nobody wants unforeseen charges or to lose time and money, so let's look at some solutions for overcoming these challenges by ensuring collaboration between these two teams. Begin by:

Establishing Legal Processes and Policies – Legal first needs to ensure they have effective legal hold processes in place, clear and consistent policies on data retention, and defensible deletion policies. Without these in place, there is no formal process.

Ensuring Participation on Both Sides – It is important to identify and designate a legal and IT liaison to sit on various steering committees and be a part of any technology decisions, migration projects, etc. In some larger, global organizations, you may want at least two or three people from each group involved to attend these meetings, as it can be a lot of work and require travel. Legal will understand the impact on the overall eDiscovery process and can review service-level agreements and SOWs as well.

Continuing the Ongoing Partnership and Communication – Post project, it is important to continue to meet regularly (weekly or monthly) with key stakeholders to communicate about upcoming migrations, technology changes, etc., as well as to build trust and further develop relationships. Legal can help IT enforce its deployment and security policies with other departments within the company, as well as ensure GDPR compliance and other factors are considered when looking at new products.

Enacting these three solutions will help ensure your teams stay in sync. When legal and IT sit at the same lunch table and stay in communication, organizations are more likely to experience seamless or near-seamless integration of processes, better understand project timelines, reduce friction between very busy teams, maintain a shared understanding of each other's workloads and processes, and gain trust amongst the teams, which helps with future projects and getting folks to support one another.

To discuss this topic more, reach out to me at bmariano@lighthouseglobal.com.
August 19, 2021
Blog
ediscovery-process, blog, spectra, law-firm, ediscovery-review, ai-and-analytics

Overcoming eDiscovery Trepidation - Part I: The Challenge

In this two-part series, I interview Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP about his thoughts on the state of eDiscovery within law firms today, including lessons learned and best practices to help attorneys overcome their trepidation of electronic discovery and build a better litigation practice. This first blog focuses on the history of eDiscovery and the logical reasons that attorneys may still try to avoid it, often to the detriment of their clients and their overall practice.

Introduction

The term "eDiscovery" (i.e., electronic discovery) was coined circa 2000 and received significant consideration by The Sedona Conference and others well in advance of November 2006. That's when the U.S. Supreme Court amended the Federal Rules of Civil Procedure to include electronically stored information (ESI), which was widely recognized as categorically different from data printed on paper. The amendments specifically mandated that electronic communications (like email and chat) be preserved in anticipation of litigation and produced when relevant. In doing so, they codified concepts explored by Judge Shira Scheindlin's groundbreaking Zubulake v. UBS Warburg decisions.

By 2012, exploding data volumes led technologists assisting attorneys to employ various forms of artificial intelligence (AI) so that data analysis could be accomplished in blocks of time that were still affordable to litigants, and the use of predictive coding and other forms of technology-assisted review (TAR) of ESI became recognized in U.S. courts. By 2013, updates to the American Bar Association (ABA) Model Rules of Professional Conduct officially required attorneys to stay current on "the benefits and risks" of developing technologies. By 2015, the FRCP was amended again to help limit eDiscovery scope to what is relevant to the claims and defenses asserted by the parties and "proportional to the needs of the case," as well as to normalize judicial treatment of spoliation and related sanctions associated with ESI evidence. In the same year, California issued a formal ethics opinion obligating attorneys practicing in California to stay current with ever-changing eDiscovery technologies and workflows in order to comply with their ethical obligation to provide legal services competently.

In the 15 years that have passed since those first FRCP amendments designed to deal with the unique characteristics of ESI, we've seen revolutionary changes in the way people communicate electronically within organizations, as well as explosive growth in the volume and variety of data types as we have entered the era of Big Data. From the rise of email, social media, and chat as dominant forms of interpersonal communication, to organizations moving their data to the Cloud, to an explosion of ever-changing new data sources (smart devices, iPhones, collaboration tools, etc.) – the volume and variety of data make understanding eDiscovery's role in litigation more important than ever. And yet, despite more than 20 years of exposure, the challenges of eDiscovery (including managing new data forms, understanding eDiscovery technology, and adhering to federal and state eDiscovery standards) continue to generate angst for most practitioners.

So why, in 2021, are smart, sophisticated lawyers still uncomfortable addressing and responding to eDiscovery demands? To find out, I went to one of the leading experts in eDiscovery today, Gordon J. Calhoun, Esq. of Lewis Brisbois Bisgaard & Smith LLP. Mr. Calhoun has over 40 years of experience in litigation and counseling, and he currently serves as Chair of the firm's Electronic Discovery, Information Management & Compliance Practice. Over the years he has found creative solutions to eDiscovery challenges, like having a court enter a case management order requiring all 42 parties in a complex construction defect case to use a single technology provider, which dropped the technology costs to less than 2.5% of what they would have been had each party employed its own vendor. In another case (which did not involve privileged communications), he was able to use predictive coding to rank 600,000 documents and place them into tranches from which samples were drawn to determine which tranches could be produced without further review. It was ultimately determined that about 35,000 documents would not have to be reviewed after having put eyes on fewer than 10,000 of the original 600,000.

I sat down with Mr. Calhoun to discuss his practice and his views of the legal and eDiscovery industries, and to try to get to the bottom of how attorneys can master the challenges posed by eDiscovery without having to devote the time needed to become an expert in the field.

Let's get right down to it. With all the helpful eDiscovery technology that has evolved in the market over the last 10 years, why do you think eDiscovery still poses such a challenge for attorneys today?

Well, right off the bat, I think you're missing the mark a bit by focusing your inquiry solely on eDiscovery technology. The issue for many attorneys facing an eDiscovery challenge today is not "what is the best eDiscovery technology?" – because many attorneys don't believe any eDiscovery technology is the best "solution." Many believe it is the problem. No technology, regardless of its efficacy, can provide value if it is not used. The issue is more fundamental. It's not about the technology; it is about the fear of the technology, the fear of not being able to use it as effectively as competitors, and the fear of incurring unnecessary costs while blowing budgets and alienating clients.

Practitioners fear eDiscovery will become a time and money drain, and attorneys fear that those issues can ultimately cost them clients. Technology may, in fact, be able to solve many of their problems – but most attorneys are not living and breathing eDiscovery on a day-to-day basis (and, frankly, don't want to). For a variety of reasons, most attorneys don't or can't make time to research and learn about new technologies, even when they're faced with a discovery challenge. Even attorneys who do have the inclination and aptitude to deal with the mathematics and statistical requirements of a well-planned workflow, who understand how databases work, and who are unfazed by algorithms and other forms of AI often don't make the time to evaluate new technology because their plates are already full providing other services needed by their clients. And most attorneys became lawyers because they had little interest in mathematics, statistics, and other sciences, so they don't believe they have the aptitude necessary to deal with eDiscovery (which isn't really true). This means that when they're facing gigabytes or even terabytes of data that have to be analyzed in a matter of weeks, they often panic. Many lawyers look for a way to make the problem go away. Sometimes they agree with opposing counsel not to exchange electronic data; other times they try to bury the problem with a settlement.
Neither approach serves the client, who is entitled to an expeditious, cost-effective, and just resolution of the litigation.

Can you talk more about the service clients are entitled to, from an eDiscovery perspective? By that, I mean – can you explain the legal rules, regulations, and obligations that are implicated by eDiscovery, and how those may impact an attorney facing an electronic discovery request?

Sure. Under Rule 1 of the FRCP and the laws of most, if not all, states, clients are entitled to a just resolution of the litigation. And ignoring most of the electronic evidence about a dispute because a lawyer finds dealing with it to be problematic rarely affords a client a just result. In many cases, the price the client pays for counsel's ignorance is a surcharge to terminate the litigation. And counsel's desire to avoid the challenge of eDiscovery very often amounts to a breach of the ethical duty to provide competent legal services.

The ABA Model Rules (as well as the ethical rules and opinions in the majority of states) also address the issue. The Model Rules offer a practitioner three alternatives when undertaking to represent a client in a case that involves ESI (which almost every case does). To meet his or her ethical obligation to provide competent legal services, the practitioner can: (1) become an expert in eDiscovery matters; (2) team up with an attorney or consultant who has the expertise; or (3) decline the engagement. Because comparatively few attorneys have the aptitude to become eDiscovery experts, and no one who wants to practice law can do so by turning down virtually all potential engagements, the only practical solution for most practitioners is finding an eDiscovery buddy. In the end, I think attorneys are just looking for ways to make their lives (and thereby their clients' lives) easier, and they see eDiscovery as threatening to make their lives much harder. Fortunately, that doesn't have to be the case.

So, it sounds like you're saying that despite the fact that it may cost them clients, there are sophisticated attorneys out there who are still eschewing legal technology and responding to discovery requests the way they did when most discovery requests involved paper documents?

Absolutely there are. And I can empathize with their thought process, which is usually something along the lines of "I don't understand eDiscovery technology and I'm facing a tight discovery deadline. I do know how to create PDFs from scanned copies of paper documents and redact them, if necessary. I'm just going to use the method I know and trust." While this is an understandable way to think, it will immediately impose on clients the cost of inefficient litigation and of settlements or judgments that could have been reduced or avoided if only the evidence had been gathered. Ultimately, when clients recognize that their counsel's fear of eDiscovery is imposing a cost on them, that attorney will lose the client. In other words, counsel who refuse to delve into ESI because it is hard are like a person who lost their car keys in a dark alley but insists on looking only under the streetlight because it is easier and safer than looking in the dark alley.

That's such a great analogy. Do you have any real-world examples that may help folks understand the plight of an attorney who is basically trying to ignore ESI?

Sure. Here's a great example: years ago, my good friend and partner told me he would retire without ever having to learn about eDiscovery.
My partner is a very successful attorney with a great aptitude for putting clients at ease. But about a week after expressing that thought, he came to me with 13 five-inch three-ring binders. He wanted help finding contract paralegals or attorneys to prepare a privilege log listing all the documents in the binders. An arbitrator had ordered that if he did not have a privilege log done in a week, his expert would not be able to testify. His "solution" was to rent or buy a bunch of dictating machines, have the reviewers dictate the information about the documents, and pay word processors overtime to transcribe the dictation into a privilege log. I asked what was in the binders. Every document was an email thread, and many had families. My partner had received the data as a load file, but he had the duplications department print the contents rather than put them into a review platform. Fortunately, the CD on which the data was delivered was still in the file.

I can tell this story now because he has since turned into quite the eDiscovery evangelist, but that is exactly the type of situation I'm referring to: smart, sophisticated attorneys who are just trying to meet a deadline and stay within budget will do whatever it takes to get the documents or other deliverable (e.g., a privilege log) out the door. And without the proper training, unfortunately, the solution is to throw more bodies at the problem – which invariably ends up being more costly than using technology properly.

Can you dive a bit deeper there? Explain how performing discovery the old-fashioned way on a small case like that would cost more money than performing it with dedicated eDiscovery technology.

Well, let me finish my story and then we'll compare the cost of using 20th and 21st century technologies to accomplish the same task. As I said, when I agreed to help my partner meet his deadline, I discovered all the notebooks were filled with printed copies of email threads and attachments. My partner had received a load file with fewer than 2 GBs and gave it to the duplications department with instructions to print the data so he could read it. We gave the disk to an eDiscovery provider, and they created a spreadsheet using the email header metadata to populate the log information: who the record was from, who it was to, who was copied, whether in the clear or blind, when it was created, what subject was addressed, etc. A column was added for the privilege(s) associated with the documents. Those before a certain date were attorney-client only; those after litigation became foreseeable were attorney-client and work product. That made populating the privilege column a snap once the documents were chronologically arranged. The cost to generate the spreadsheet was a few hundred dollars. Three in-house paralegals were able to QC, proofread, and finalize the log in less than three days for a cost of about $2,000.

Had we done it the old-fashioned way, my partner was looking at having 25 or 30 people dictating for five days. If the reviewers were all outsourced, the cost would have been $12,000 to $15,000. He planned to use a mix of in-house and contract personnel, so the cost would have been 30% to 50% higher. The transcription process would have added another $10,000. The cost of copying the resulting privilege log, which would have been about 500 pages long with 10 entries per page, for the four parties and the arbitrator would have been about $300.
So even 10 years ago, the cost of doing things the old-fashioned way would have been about $35,000. The technology-assisted solution was about $2,500.

Stay tuned for the second blog in this series, where we delve deeper into how attorneys can save their clients money, achieve better outcomes, and gain more repeat business once they overcome common misconceptions around eDiscovery technology and costs. If you would like to discuss this topic further, please reach out to Casey at cvanveen@lighthouseglobal.com and Gordon at Gordon.Calhoun@lewisbrisbois.com.
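For readers who want to see the arithmetic behind the two figures above, here is a rough back-of-the-envelope reconstruction. The inputs are illustrative midpoints of the ranges quoted in the interview, not actual figures from the matter.

```python
# Rough reconstruction of the privilege-log cost comparison described above.
# All inputs are illustrative midpoints of the ranges quoted in the interview.

# Old-fashioned (dictation and transcription) approach
outsourced_review = 13_500     # midpoint of the $12,000-$15,000 reviewer estimate
staffing_premium = 0.40        # mixed in-house/contract staffing ran 30-50% higher
transcription = 10_000         # transcribing the dictation into a log
copying = 300                  # copying a ~500-page log for four parties and the arbitrator
manual_total = outsourced_review * (1 + staffing_premium) + transcription + copying

# Technology-assisted approach
metadata_spreadsheet = 500     # "a few hundred dollars" to build the log from email metadata
paralegal_qc = 2_000           # three in-house paralegals, under three days
tech_total = metadata_spreadsheet + paralegal_qc

print(f"Old-fashioned approach: ~${manual_total:,.0f}")  # roughly $29,000-$35,000 depending on inputs
print(f"Technology-assisted:    ~${tech_total:,.0f}")    # about $2,500
```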
May 21, 2021
Blog
self-service, spectra, ediscovery-process, corporation, prism, blog, spectra, corporate, ediscovery-review, ai-and-analytics

Self-Service eDiscovery: Top 3 Technical Pitfalls to Avoid

Whether it's called DIY eDiscovery, SaaS eDiscovery, or self-service eDiscovery, one thing is clear—everyone in the legal world is interested in putting today's technologies to work to get more done with less. It's a smart move, given that many legal teams are facing an imbalance between needs and resources: as in-house legal budgets are slashed, actual workloads are increasing.

Now more than ever, legal teams need to ensure they're choosing and using the right tools to effectively manage dynamic caseloads—a future-ready solution capable of supporting a broad range of case types at scale. Given the variety of options on the market, it's understandable that there's some uncertainty about what to pursue, let alone what to avoid. Below, I have outlined guidance to help your legal team navigate the top three potential pitfalls encountered when seeking a self-service eDiscovery solution.

1. Easy vs. Powerful

There are a lot of eDiscovery solutions out there making bold promises, but many still force users to choose between ease of use and full functionality. While a platform may be simple to learn and navigate, it may fail to offer advanced features like AI-driven analysis and search, for example.

Think of it like the early days of cell phones, when we were forced to choose between a classic brick-style device and a new-to-market smartphone. Older phones were easy to use, offering familiar capabilities like calling and texting, while newer smartphones provided impressive, previously unknown functionality but came with a learning curve. With the advancement of technology, today's device buyers can truly have it all—a feature-rich mobile phone delivered in an intuitive user experience. The same is true for dynamic eDiscovery solutions. You shouldn't have to choose between power and simplicity. Any solution your team considers should be capable of delivering best-in-class technology through one simple, single-pane interface.

2. Short-Term Thinking vs. Long-Term Gains

As organizations move to the seemingly unlimited data storage capacities of cloud-based platforms and tools, legal teams are facing a landslide of data. Even the smallest internal investigation may now involve hundreds of thousands of documents. And with remote working being the new global norm, this trend will only continue. Legal teams require eDiscovery tools that are capable of scaling to meet any data demand at every stage of the eDiscovery process.

When evaluating an eDiscovery solution, keep the future in mind. The solution you select should be capable of managing even the most complex case using AI and advanced analytics—intelligent functionality that will allow your team to efficiently cull data and gain insights across a wide variety of cases. Newer AI technology can aggregate data collected in the past and analyze how it was used and coded in previous matters—information that can help your team make data-driven decisions about which custodians and data sources contain relevant information before collection. It also offers the ability to re-use past attorney work product, allowing you to save valuable time by immediately identifying junk data, attorney-client privilege, and other sensitive information.

3. Innovation vs. Upkeep

Thanks to the DIY eDiscovery revolution, your organization no longer has to devote budget and IT resources to maintaining a myriad of hardware and software licenses or building a data security program to support that technology. Seek a trusted solution provider that can take on that burden with development and security programs (and the requisite certifications and attestations to prove it). This should include routine technology assessment and testing, delivered in a way that doesn't disrupt your ongoing work.

As you're asked to do more with less, the right cloud-based eDiscovery platform can ensure your team is able to meet the challenge. By avoiding the above pitfalls, you'll end up with a solution that can stand up to today's most complex caseloads, with powerful features designed to improve workflow efficiency, provide valuable insights, and support more effective eDiscovery outcomes.

If you're interested in moving to a DIY eDiscovery solution, check out my previous blog series on self-service eDiscovery for corporations, including how to select a self-service eDiscovery platform, tips for self-service eDiscovery implementation, and how self-service eDiscovery can make in-house counsel life easier.
December 21, 2021
Blog
cloud, analytics, information-governance, ediscovery-process, blog, information-governance, ediscovery-review, chat-and-collaboration-data,

Rethinking the EDRM for Today’s Evolving eDiscovery Data Landscape

The approach of a new year is often a good time to step back and take stock of the eDiscovery industry, so that we can be better prepared to move forward. One of the most dramatic changes over the past few years has been the seismic shift across the legal and corporate data landscapes. That shift has slowly been expanding the concept of eDiscovery beyond a single-litigation focus to encompass data governance, data privacy and security, and an overall more holistic, strategic approach to review and analysis.

As we prepare to move forward in this brave new world, it's important to understand how those industry changes affect the traditional framework of the eDiscovery process: the Electronic Discovery Reference Model (EDRM). Recently, I was lucky enough to join a panel of industry experts, including Microsoft's EJ Bastien, TracyAnn Eggen from CommonSpirit Health, and Lighthouse's Sarah Barsky-Harlan, to dive deeper into that specific issue. Together, we tackled questions like: Does the EDRM still apply in today's more complex eDiscovery environment? If so, how is the evolving data and eDiscovery landscape reshaping how organizations and law firms think about the EDRM? How can the EDRM be used to meet today's more complex communication, data, and business challenges? Below are some of the key themes and ideas that emerged from that discussion.

A Brave New Data World: Dynamic Changes in eDiscovery

Since its inception, the EDRM has been the industry's standard approach to the eDiscovery process (i.e., identification, collection, processing, review, analysis, and production of electronically stored information (ESI)). However, organizations and law firms now must think about eDiscovery in much broader terms than that traditionally linear method. There are three primary reasons for this change:

New cloud-based and Software as a Service (SaaS) systems: Enterprise systems are not nearly as controlled by the underlying organization as they used to be. Even five years ago, IT departments could closely manage what software was installed, as well as when, how, and which upgrades were rolled out. Now those updates and installations are managed by cloud providers, with upgrades rolling out on an almost weekly basis – often with no notice to the organization. All of those changes have downstream eDiscovery impacts, which must be dealt with at each stage of the EDRM process.

New data formats: Data is no longer structured in the traditional document "family" of an email parent with attachment children. The shift to chat and collaboration platforms within organizations means that communications and workflows generate more data across multiple data sources and are much more fluid and informal. For instance, instead of working on a static document saved on a desktop and passing it back and forth to co-workers via email, employees may work on that document together while it's saved on a cloud-based collaboration platform, chat about it via an in-office chat application, post updates on it in the collaboration tool channel, and email copies back and forth to each other. This means counsel must analyze how relevant data ties together, and the relationships between data sources, in order to understand the full story of a communication during an investigation or litigation.

New capabilities in eDiscovery technology: There are many new capabilities that are native to enterprise systems, as well as new types of analytics and artificial intelligence (AI) that can handle more data at scale. These capabilities allow case teams to leverage past data on new cases and get to key data more quickly in the EDRM process.

The Impact: How Those Changes Affect the EDRM Framework

Thinking of the EDRM as a monolithic linear process that flows straight from beginning (collection) to end (production) no longer fits the way eDiscovery takes place in practice. There is a world of complexity within each step of the EDRM – one that is highly dependent on the data source. And the decisions made along the way for each data source at each new step will impact what happens next, often in a non-linear fashion: sometimes that next step sends practitioners back to collection again because they found another data source during review; sometimes review takes place simultaneously with the collection or processing phases, depending on the data source and the newer capabilities discussed above. In short, the old model of collecting all data, exporting it all, and then reviewing it all, in large chunks, one step at a time, is no longer applicable or practical.

Instead, a "mini-EDRM" framework might make more sense, where organizations prepare workflows for the preservation, collection, processing, and review of each particular data source. Thinking of the EDRM in this way also helps the framework stay relevant and future-proof as practitioners deal with the sea change happening across our data landscape. Practitioners need to be agile enough to handle new data sources as they pop up, at each step of the EDRM process, and then be prepared to do it all over again when someone in a deposition mentions another new data source, and to adapt when something changes in a data source. A mini-EDRM framework would help organizations and practitioners better meet those challenges.

The EDRM and Data-in-Place

As noted above, the eDiscovery process is now much broader and has much more of an impact on organizational information governance and data-in-place than ever before. This presents an opportunity to use learnings from across the EDRM to more effectively manage data "to the left" of that traditional process. For example, if a particular data source was problematic during review, that information can be disseminated at the organizational level and help inform how that source is used within the organization moving forward. Or if practitioners notice a large volume of irrelevant data during review that shouldn't exist in the system at all, that information can be used to redraft document retention policies. In this way, eDiscovery (and the EDRM framework) can now be a force for change across the entire organization.

Thinking Beyond a Single Matter

In today's more dynamic and voluminous data landscape, the work we did in the past is more valuable than ever before, and it can be used to inform and improve current processes across the EDRM. This can come in the form of people and institutional knowledge: experienced and consistent staff and outside partners are an invaluable resource.
These organizational experts can use their understanding of and experience with an organization's past matters, system architecture, data sources, workflows, etc. to improve eDiscovery efficiency and solve current problems more effectively. It can also come in the form of technology: when the EDRM first evolved, data analytics were a much heavier lift. The process and tools were expensive, and the amount of data they could be applied to was much smaller than today. Advancements in AI capabilities now allow us to analyze much larger volumes of data with much more accurate results. Thus, this newer, advanced AI technology is now capable of leveraging the goldmine of millions of previous decisions made by attorneys on an organization's past matters. That work product is baked into the data, and advanced AI can use it to make more accurate decisions on current data at a much larger scale than ever before.

Tips to Keep the EDRM Applicable in an Evolving Data Landscape

Strive to retain institutional knowledge across matters: The constantly evolving eDiscovery landscape makes continuity and retaining institutional knowledge incredibly important. Starting from scratch each time you confront a new data source or problem along the EDRM is no longer practical with today's diversified and larger data volumes. Work to cultivate valuable partners and staff who will work to understand your organization's data architecture, as well as the eDiscovery workflows that are effective within your environment.

Lean on your peers: Chances are, if you're facing a problem with a challenging data source at one stage of the EDRM, someone in your peer group has faced the same or a similar problem. Don't be afraid to reach out and ask peers to benchmark. Peer experience can help each practitioner learn and move forward, solving challenging industry problems along the way.

Open the lines of communication: Because the EDRM process is much more iterative and each step impacts other steps, it is incredibly important that the people working on those steps do not work in silos. Everyone should know the downstream impacts of their decisions and workflows.

Test… and test again: Employ a testing framework to test the impact of eDiscovery workflows on the underlying platforms, and then have a feedback loop to apply changes. This will ensure your eDiscovery program is forward-thinking rather than reactive.

Automate where possible: When striving for repeatable, defensible eDiscovery processes, predictability is key, and automation, when feasible, is a great way to achieve it. Automating workflows across the EDRM will not only help improve efficiency and lower costs, it will also help minimize risk and keep your eDiscovery program defensible.
January 25, 2021
Blog
self-service, spectra, blog, ediscovery-review, ai-and-analytics

Self-Service eDiscovery for Corporations: Four Considerations For Selecting the Solution That’s Right for You

Let's begin by setting the stage. You've evaluated the ways a self-service eDiscovery solution could benefit your organization and determined the approach will help you boost workflow efficiency, free up internal resources, and reduce eDiscovery practice and technology costs. You've also researched how to ideally implement a solution and armed yourself with strategies to build a business case and overcome stakeholder objections that may arise. You're now ready to move on to the next step in your organization's self-service eDiscovery journey: selecting the right solution provider.

When it comes to selecting a solution provider, one size does not fit all. Every organization has different eDiscovery needs—including yours—and those needs evolve. From how attorneys and eDiscovery teams are structured within the organization and their approach to investigations and litigations, to the types of data sources implicated in those matters and how those matters are budgeted—there's a lot to consider. The self-service solution you choose should be able to adapt to your changing needs and grow with your organization. Below, I've outlined four key considerations that will help you select a fitting self-service solution for your organization.

1. Is the solution capable of scaling to handle any matter?

It's important to select a self-service eDiscovery solution capable of efficiently handling any investigation or litigation that comes your way. A cloud-based solution can easily and swiftly scale to handle any data volume. You'll also want to ensure your solution can handle the types of data your organization routinely encounters. For example, collecting, processing, and reviewing data generated by collaborative applications like Microsoft Teams may require special tools or workflows. The same can be said for chat messages or cellphone data. Before selecting a self-service solution, you'll benefit from outlining the types of data your organization must handle and asking potential solution providers how their platform supports each.

Additionally, you may be interested in the ability to move to a full-service model with your provider, should the need arise. With scalable service, your team will have access to reliable support if a matter becomes too challenging to manage in house. With a scalable solution bolstered by a flexible service model, your organization can bring on help as needed, without disruption.

2. Does the solution drive data reduction and review efficiency across the EDRM?

Organizational data volumes are increasing year after year—meaning even small, discrete internal investigations can quickly balloon into hundreds of thousands of documents. Collecting, processing, analyzing, and producing large amounts of data can be costly, complicated, and time consuming, and may open your organization up to legal risk if the right tools and workflows are not in place. Look for a self-service solution capable of managing data at scale, with the ability to actively help your organization reduce its data footprint. This means choosing a provider that can offer expert guidance around data reduction techniques and tools.
Ask potential solution providers if they have resources to address the cost burden of data and mitigate risk through strategies like defensible data collections, effective search term selection, and well-crafted early case assessment (ECA) and technology assisted review (TAR) workflows. The provider should also be able to deliver technology engineered to reduce resource draw, like processing that allows faster access to data, tools to cut down on hosted review data volume, and AI and analytics that provide the ability to re-use attorney work product across multiple matters. In short, seek a self-service solution that gives your organization the ability to defensibly and efficiently reduce the amount of costly human review across your organization's portfolio (a short sketch of one such data-reduction technique appears at the end of this post).

3. Will the solution's pricing model align with your organization's changing needs?

Your organization's budget requirements are unique and will likely change over time. Look for a solution provider that can change in accord and offer a variety of pricing models to fit your budgetary requirements. Ask prospective providers if they are able to design pricing around your organization's expectations for utilization. Modern pricing models can be flexible yet predictable to prevent unexpected charges or overages, and ultimately align with your organization's financial needs.

4. Is the solution's roadmap designed to take your organization into the future?

When selecting a self-service solution, it's easy to focus on your current needs, but it's equally important to consider what a solution provider has planned for the future. If a vendor is not forward thinking, an organization may find itself forced to use outdated technology that's not able to take on new security challenges or process and review emerging data sources. Pursue a provider that demonstrates the ability to anticipate market trends and design solutions to address them. Ask potential providers to articulate where they see the market moving and what plans they have in place to update their technology and services to reflect what's new. It can also be helpful to ask whether a provider's roadmap aligns with your organization's direction. For example, if you know your company is planning to make a systematic change, like moving to a bring your own device (BYOD) policy or migrating to the cloud, you'll want to confirm the self-service solution can support that change. Asking these types of questions before selecting a provider will help ensure the solution you choose can grow with both your organization and the eDiscovery industry as a whole.

With awareness and understanding of the true potential offered in a self-service solution, you can ultimately choose a provider that will help you level up your organization's eDiscovery program.
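To make the data-reduction point in consideration 2 concrete, here is a minimal sketch of one common technique: hash-based deduplication, where identical documents are grouped so that only one copy needs eyes-on review and coding decisions can propagate to its duplicates. This is a generic illustration, not the workflow of any particular platform; the normalization rule, sample documents, and use of SHA-256 are assumptions made for the example.

```python
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so trivial formatting
    differences don't defeat duplicate detection."""
    return " ".join(text.lower().split())

def dedupe(documents: dict[str, str]) -> dict[str, list[str]]:
    """Group document IDs by a hash of their normalized text.
    The first ID in each group can be treated as the 'master' copy;
    coding or redactions applied to it would propagate to the rest."""
    groups: dict[str, list[str]] = defaultdict(list)
    for doc_id, text in documents.items():
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        groups[digest].append(doc_id)
    return dict(groups)

docs = {
    "DOC-001": "Please review the attached agreement.",
    "DOC-002": "Please  review the attached agreement.",  # duplicate, extra space
    "DOC-003": "Quarterly results are attached.",
}
for digest, ids in dedupe(docs).items():
    print(ids[0], "represents", len(ids), "document(s)")
```

In practice, exact-hash deduplication like this is usually paired with email threading and near-duplicate detection, which catch documents that differ only trivially.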
December 4, 2019
Blog
gdpr, privilege, cybersecurity, ediscovery-process, cross-border-data-transfers, blog, ediscovery-review,

Now Live! Season Two of Law & Candor

This eDiscovery Day, the day that focuses on educating industry professionals about growing trends and current challenges, we are excited to announce that season two of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution, is now live.

Co-hosts Bill Mariano and Rob Hellewell are back for season two of Law & Candor with six easily digestible episodes that cover a range of hot topics, from cybersecurity to privilege tools. This dynamic duo, alongside industry experts, discusses the latest topics and trends within the eDiscovery, compliance, and information governance space and shares key tips for you and your team to take away. Check out the latest season's lineup below:

Bridge the Gap: Innovative Ways to Enable eDiscovery Collaboration Between Legal and IT
The Privilege in Leveraging Privilege Review Tools
Data Preservation in the World of Ephemeral Data, Mobile Devices, and Other New Challenges in Forensic Technology
Cybersecurity in eDiscovery: Protecting Your Data from Preservation through Production
Would a No-Deal Brexit Change How We Handle Cross-Border Collections in Europe?
Understanding and Creating Effective and Best eDiscovery Practices for G-Suite

Episodes are created to be short and bingeable so that you can listen on the platform of your choice with ease. Check them out now or bookmark them to listen to later. Follow the latest updates on Law & Candor and join in the conversation on Twitter. Catch up on season one today.

For questions regarding this podcast and its content, please reach out to us at info@lighthouseglobal.com.
July 2, 2021
Blog
legal-ops, blog, legal-operations,

Productizing Your Corporate Legal Department’s Services: Internally Marketing Your Solutions

In my last two blogs, I discussed how your legal department can productize services to become more efficient as well as shared some tips for how to determine the legal needs within your organization. Now that you know the added benefits and understand the legal needs, the natural next step is to determine what legal service “products” to offer, as well as any gaps. However, if nobody knows what these repeatable solutions are, what good are they? This is where creating an internal marketing plan to get the word out about your department’s legal services is critically important. In this blog, we’ll talk about how to do that by answering who, what, when, where, and why.Who?When you create your internal plan, the first thing you need to do is understand who you are marketing to. The easiest way to do this is to create some simple “personas.” You can easily do this based on the interviews you conducted as part of your earlier search. You should build a persona for each distinct type of user coming to you – typically this aligns with internal departments. In detailing each persona, you should include the following:Typical day-to-day work of your personaTypical interaction with legalTop of mind issues/challengesOther notesWhat?Next, you will need to decide what you are going to market to these personas (i.e repeatable workflows). Common ones in the legal arena are contract, litigation, HR investigation, and patent workflows. Once you have the workflows applicable to your company identified, detail the features of each workflow. For example, it is automated; has six common template documents, a clause library, and contract status; and leverages existing company technology.Once you have your personas, workflows, and features, you’re ready to create a positioning document. You should create one document for every problem/solution set (i.e. workflow). This will form the basis of how you share the information with others. The goal of this document is to position your solution in a way that resonates with the internal users. Below is a format that I find helpful to follow and I have inserted an example based on a contract workflow.PROBLEM: There is a problem in the company today. Contract negotiations are long, cumbersome, and not transparent. This can delay revenue opportunities. In addition, final contracts are difficult to locate and manage.SOLUTION: The ideal solution to this problem is an easy-to-use process, with some contracts being able to avoid legal review. The solution would allow easy access to status for interested parties and would allow those, or other, interested parties to access the contractual information at a later date.PRIMARY MESSAGE (SHORT - 1 SENTENCE): The Corporate Legal Department delivers a business-driven model for negotiating and managing contracts that accelerates, not hinders, company growth.SERVICE DESCRIPTION (2-3 SENTENCES): By leveraging an intake form, employees are directed to a self-service, spectra portal for template contracts or put in touch with an attorney for more complex matters. The status of their request, as well as information about all finalized contracts, is displayed in our JIRA system giving users full access to contract status as well as important contractual data of finalized contracts.HIGHLIGHTS (THESE SHOULD BE PROBLEM-ORIENTED FEATURES):Reduces contract turnaround by leveraging templated contracts and clausesAllows users access to contract status anytime, anywhereNo new systems (i.e. 
leverages existing company tools), etc.

The above will create a lot of different worksheets and information. Since I like to keep things a little simpler, I also create a cliff-notes version of this to show the all-up view of your corporate legal department's services. Once you have completed your positioning, don't be afraid to run the messaging by some of the people you interviewed. You want to make sure that it is clear how legal will be helping them get their work done. I would suggest selecting people who are friendly to your department and with whom you have a good working relationship, since you are running draft information by them and not a final product.

Where, When, and Why?

Third, you need to think about where, when, and why you are getting the message out. The goal is to get it out wherever your users are, often, and in a way that they like to consume the information. At a minimum, I would suggest doing a launch of the updated services and including information about that launch on:

The company wiki page/internal site
Any internal ticketing tool
A company newsletter (or a company meeting if appropriate)
Any onboarding materials/presentations your company does for new hires
Or even a "roadshow," where you present to each department within your organization what services the legal team offers

During any presentation, it is always helpful to inject some fun. I have heard of some legal departments doing humorous videos or skits to capture the attention of their employees. Partner with your internal marketing team, as they may have some great suggestions on how you can get the word out.

Finally, don't forget about post-launch messaging. Though you may see an uptick in users after a launch, some people will have missed the information the first time around or will have forgotten it by the time they get to an issue that they want to bring to legal. To that end, make sure you have a plan for continued marketing. I like to showcase successes in follow-up marketing (e.g., a contract turnaround case study showing the reduced times or some metrics on impact). This information can be shared in an employee newsletter or as a quick email to leaders asking them to share it in their department meetings.

This is quite a robust process, and you should expect it will take several weeks, or even months, to complete. You will also likely continue to refine this marketing plan as you address gaps by adding services and gathering feedback. The benefit of going through this process is that it brings clarity to what legal does, brings efficiency by advertising repeatable workflows, and gives everyone in legal visibility into the challenges in the business and how legal addresses those.
January 13, 2022
Blog
review, ai-big-data, blog, ai-and-analytics, ediscovery-review

Purchasing AI for eDiscovery: Tips and Best Practices

eDiscovery is currently undergoing a fundamental sea change, including how we think about data governance and the EDRM. Linear review and older analytic tools are quickly becoming outdated and unable to handle modern datasets, i.e., eDiscovery datasets that are not only more voluminous than ever before, but also more complicated: emanating from an ever-evolving list of new data sources and steeped in a variety of text and non-text-based languages (foreign language, slang, emojis, video, etc.).

Fortunately, technological advancements in AI have led to a new class of eDiscovery tools that are purpose-built to handle "big data." These tools can more accurately identify and classify responsive, privileged, and sensitive information, parse multiple formats, and even provide attorneys with data insights gleaned from an organization's entire legal portfolio.

This is great news for legal practitioners who are faced with reviewing and analyzing these more challenging datasets. However, evaluating and selecting the right AI technology can still present its own unique hurdles and complexities. The purchasing process can be intense and raise questions like: Is all AI the same? If not, what is the difference between AI-based tools? What features are right for my organization or firm? And once I've found a tool I like, how do I make the case for purchasing it to my firm or organization? These are all tough questions that can lead you down a rabbit hole of research and never-ending discussions with technology and eDiscovery vendors. However, the right preparation can make a world of difference. The steps below will help you simplify the process, obtain answers to your fundamental questions, and ultimately select the right technology to help you overcome your eDiscovery challenges and level up your eDiscovery program.

1. Familiarize Yourself with Subsets of AI in eDiscovery

Newer AI technology is significantly better at tackling today's modern eDiscovery datasets than legacy technology. It can also provide legal teams with previously unheard-of data insights, improving efficiency and accuracy while enabling more data-driven strategic decisions. However, not all technology is the same, even if technology providers tend to refer to it all as "AI." There are many different subsets of AI technology, and each may have vastly different capabilities and benefits. It's important to understand which subsets of AI can provide the benefits you're looking for, and how those different technology subsets can work together. For example, Natural Language Processing (NLP) enables an AI-based tool to understand text the way humans understand it, providing much more accurate classification results, while AI tools that pair deep learning with NLP are better able to handle large and complex datasets efficiently and accurately. Other subsets of AI give tools the ability to re-use data across matters as well as across entire legal portfolios. Learning more about each subset and the capabilities and benefits it can provide before talking to eDiscovery vendors will give you the knowledge base necessary to narrow down the tools that will meet your specific needs.

2. Learn How to Measure AI ROI

As a partner to human reviewers, advanced AI tools can provide a powerful return on investment (ROI).
Understanding how to measure this ROI will enable you to ask the right questions during the purchasing process and ensure you select a tool that aligns with your organization's or law firm's priorities. For example, if your team struggles with review accuracy when using your current tools and workflows, you'll want to ensure that the tool you purchase is quantifiably more accurate at classifying documents for responsiveness, privilege, sensitive information, etc. (a simple way to quantify this is sketched at the end of this post). The same is true for other ROI metrics that are important to your team, such as lower overall eDiscovery spend or increased review efficiency. These metrics will also help you build a strong business case for purchasing your chosen tool once you've selected it, as well as give you a verifiable way to confirm the tool is performing as expected after purchase.

3. Come Prepared with a List of Questions

It's easy to get swept up in conversations about tools and solutions that end without the metrics you need. A simple way to control the conversation and ensure you walk away with the information you need is to prepare a thorough list of questions that reflect your priorities. Also be sure to have a method for recording each vendor's response to your questions. A list of standard questions will keep conversations productive and provide a way to easily compare and contrast the technology you're evaluating. Ask for quantifiable metrics and examples to back up responses, as well as references from clients. This will help you verify that vendor responses are backed by data and evidence.

4. Know the Pitfalls of AI Adoption—and How to Avoid Them

It won't matter how much you understand AI capabilities, whether you've asked the right questions, or whether you understand how to measure ROI, if you don't know how to avoid common AI pitfalls. Even the best technology will fail to return the desired results if it's not implemented properly or effectively. For example, some workflows work best with advanced AI, while others may fail to return the best possible results. Knowing this ahead of time will help you get your team on board early, ensure a smooth implementation, and enable you to unlock the full potential of the technology.

These tips will help you better prepare for the AI purchasing process. For more information, be sure to download our guide to buying AI. This comprehensive guide offers a deep dive into tips and tactics that will help you fully evaluate potential eDiscovery AI tools and select the best tool for your needs. The guide can also be used to reevaluate your current AI and analytic eDiscovery tools to confirm you're using the best available technology to meet today's eDiscovery challenges.
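As a concrete illustration of the accuracy side of ROI discussed in step 2, the sketch below scores a candidate tool's responsiveness calls against attorney decisions on a small validation sample. The labels are invented, and the metrics are generic information-retrieval measures (precision, recall, F1), not the output of any specific product.

```python
# Illustrative scoring of a candidate tool's responsiveness calls against
# attorney decisions on a small validation sample (labels are invented).
def review_metrics(attorney_calls, tool_calls):
    """Compute precision, recall, and F1 for binary responsive/not-responsive calls."""
    tp = sum(a and t for a, t in zip(attorney_calls, tool_calls))
    fp = sum((not a) and t for a, t in zip(attorney_calls, tool_calls))
    fn = sum(a and (not t) for a, t in zip(attorney_calls, tool_calls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

attorney = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]   # ground-truth responsiveness decisions
candidate = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]  # the tool's predictions on the same documents
p, r, f1 = review_metrics(attorney, candidate)
print(f"precision={p:.0%} recall={r:.0%} F1={f1:.0%}")
```

Running the same sample through each tool you evaluate gives you a like-for-like accuracy comparison to pair with cost and throughput metrics.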
July 6, 2021
Blog
legal-ops, blog, legal-operations,

Productizing Your Corporate Legal Department’s Services: Making Build vs. Buy vs. Outsourcing Decisions

For years, general counsel have weighed the pros and cons of doing a task internally versus sending the work to outside counsel – this is not a new dichotomy. What is newer, however, is the proliferation of technology available for legal and the business savvy now being applied to internal legal departments. This has opened up more choices for legal departments. First, you have to figure out whether you can apply technology, then whether you should build or buy that technology, and finally if you should outsource any portion of the process.Before you start down the path of buy vs. build vs. outsource, I would recommend assessing your department’s offerings. In the earlier parts of this series, I outline how you can do that. Once you understand your services and your gaps, you can better determine where you may need to apply build vs. buy decisions. Whether you are a general counsel or a legal operations professional, this blog will outline four key aspects to include in your framework as you make these decisions.1. Problem/Solution ListStart with a list of services your company needs and possible solutions. If you followed the productization process, you will have a good list. If you have not yet done this, you can at least jot down a list of your company’s legal needs, how pervasive and urgent they are, whether they further the company strategy, as well as any potential solutions.Next, order that list from most pervasive to least pervasive. Where there is a tie, look to the problem’s relationship to company strategy.Next, work through all of the items in box A. You want to be able to answer the following questions:Is there an existing solution?Is there a software solution that may apply?What are the costs/benefits of all possible solutions?Is there typically urgency around the request?All other things being equal, do we have the expertise to handle this in house?If you have gaps in A, B, or C, I would recommend addressing those before process improvement items.2. Cost-Benefit AnalysisNext, for any change (either addressing a gap or a process improvement) you should do a cost-benefit/return on investment analysis. Note that if you are just trying to get a sense of which problem on your list to address, you can do a high-level analysis by categorizing the solutions into low, medium, or high financial impact. If, however, you are getting to the point of suggesting a change internally and asking for budget, you want to do a much more in-depth quantitative analysis. On the benefit side, you want to consider any revenue acceleration for the company (e.g., customers’ revenue hits a quarter earlier) as well as costs reduced and avoided (e.g. outside counsel fees). If there are other quantifiable benefits, you should include them as well. On the expense side, make sure to consider licensing, annual maintenance, user fees, implementation, infrastructure, training, hourly support/expert charges, and any ongoing costs. You should predict these benefits and costs for the next 3 years, as that is a common period to see whether there is a return on your investment. You can also prepare a version of this document showing the same cost/benefit of building the solution internally as well as outsourcing it to outside counsel.3. Additional Factors: Urgency and ExpertiseOnce you have the cost-benefit analysis for the various solutions, you usually have a preferred direction. However, don’t forget to account for time and expertise. You should then consider how urgent the requests are. 
The more urgent a request, the more likely it should be handled by technology or outsourced, as those solutions can typically bring more resources to bear. You should then consider expertise. More specifically, does solving this problem require specific knowledge about the company, or will it involve a lot of internal liaising? If so, the solution should likely stay with the internal corporate legal department. Conversely, does it require niche expertise that is better handled by outside counsel with that expertise? Make notes of these considerations alongside your cost-benefit analysis, as these factors can sway a decision in one direction or another.

4. Decision Time

Ultimately, making these decisions is more of an art than a science. They are also decisions that can and should be revisited as things change in your business and legal department. The above should give you the right information to make an informed decision. Before finalizing a direction, you will want to share your decision with others and get their input.

By following the productization process, orienting your solutions toward your customers, streamlining how you deliver services, and applying the right sets of resources through build-versus-buy decisions, your legal department will operate more efficiently.
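To illustrate the cost-benefit analysis described in step 2, here is a minimal sketch that totals three years of costs for a build, buy, and outsource option and compares them against an assumed annual benefit. Every figure is a placeholder; the point is the structure of the comparison, not the numbers.

```python
# Hypothetical three-year cost comparison for one legal workflow.
# Every number below is a placeholder assumption for illustration only.
YEARS = 3

def three_year_cost(upfront: float, annual: float) -> float:
    """Total cost over the comparison window: one-time costs plus recurring costs."""
    return upfront + annual * YEARS

options = {
    "Build in-house":       three_year_cost(upfront=120_000, annual=40_000),  # dev + maintenance
    "Buy software":         three_year_cost(upfront=25_000,  annual=60_000),  # implementation + licenses
    "Outsource to counsel": three_year_cost(upfront=0,       annual=95_000),  # hourly fees
}

annual_benefit = 90_000  # assumed value of accelerated revenue plus avoided outside fees
for name, cost in sorted(options.items(), key=lambda item: item[1]):
    roi = (annual_benefit * YEARS - cost) / cost
    print(f"{name:<22} 3-yr cost ${cost:>9,.0f}   3-yr ROI {roi:>6.0%}")
```

The urgency and expertise factors from step 3 then act as tie-breakers on top of whatever the numbers suggest.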
June 28, 2021
Blog
legal-ops, blog, legal-operations,

Productizing Your Corporate Legal Department’s Services: Understanding the Needs of the Business

Many law departments are reactionary. Someone comes to legal with a "legal" question and they help that person. Although this makes a lot of sense, as legal is a support department, it makes it very difficult to thematically explain the value legal is driving, as well as to understand the work the department is doing. As legal operations matures and legal departments look to be more efficient, productizing the services in the department is a natural progression. This approach was a central discussion at the 2021 CLOC conference and the subject of this blog series. In order to productize something effectively, however, you need a very good understanding of your customer and prospective customers' needs. In this article, I will give you an overview of how to get that.

A central theme in product management is building resonators – products that resonate with the buyers. You may have the best idea but, if it doesn't meet a pervasive market need, nobody will buy it. There are many great examples of products that failed and dozens of lessons we can learn from those failures. Most of the lessons come back to misunderstanding the customer's need and the nature of that need. For example, people may say they want a better mousetrap, but if you don't ask how much they would pay for that mousetrap, whether they would replace any current mousetraps with a better one, and whether it matters if the new mousetrap gives off an odor of chemicals, you can see how you might not make a best seller. To give an example in the legal services space, in my first general counsel role, I heard from many people how it was frustrating that they could never find contracts when they needed them. I immediately set upon a mission to create a contracts database. After investing a lot of time, we had a wonderfully organized database, and the only person who ever used it was the legal team. So what happened to all the frustrated employees from other departments? It turns out I didn't ask them how often they needed to look up contracts and whether that need was part of another legal request (meaning that legal was the one actually looking up the contract anyway). In the end, the contract database was extremely helpful for the legal department, but I could have saved myself the time of making it self-service and figuring out permissions for different users had I asked some questions upfront. To avoid the same fate, there are four principles you can use when asking your company about its legal needs.

1. Don't rely on the users to define the needs.

Instead, be curious about their day-to-day, and in that curiosity, you will be able to see the legal needs. The theory is this: if you ask someone what they need from legal, they will overlay their belief system about what legal should provide before they answer. Instead, when you ask them about their role, their goals, how they are measured, and what their biggest challenges are, you are more likely to be able to understand them and see where legal may be able to help.

2. Create a template interview form and use it religiously with each person.

When you do 10-15 interviews, you want to be able to discern themes and compare interviews. When multiple people are conducting interviews, you want to be sure you are all hitting the same topics. This is much easier to do when you start from a template. For a 30-minute interview, I would suggest 3-5 template questions. Always get background information before the interview starts, including their name, title, department, and contact information.
Put this information at the top of your interview summary. Do not include it in your 3-5 questions. Having this information clearly labeled and available allows you to easily follow up later. Next, move on to background and devote 2-3 questions to this area, including their main goals for the year, how their department is measured, and their biggest pain points. Finally, go on to any specific areas you may want to ask about. For example, you may want to know how they have used the legal department in the past, how much they interact with overseas colleagues, etc. Here is a list of common questions:

What are your department's goals for the year?
How is your department measured?
What are your biggest roadblocks in achieving your goals?
What are your biggest roadblocks in getting your job done?
If you had a magic wand and could change one thing about your job, what would it be?
What are your most common needs outside your department?
What is your perception of what the legal department does?
What kinds of things have you come to legal for?

3. Interview a diverse group.

It may seem obvious that you need a good sample size; however, you will be surprised at how varied the needs are at different levels and across different departments. If you are only interviewing one person to represent a specific level or department, you should ask them, "How representative do you think your pain points and goals are of the department?" This will give you a good idea of whether you can rely on this person's interview as representative of the department or whether you will have to do some follow-up interviews with others.

4. Always ask follow-up questions.

The guidance above for limiting your template to 3-5 questions ensures you have time to follow up on each response. More specifically, you want to be sure you are really understanding the responses and quantifying the level and frequency of any relevant pain points. I would set a goal to ask two follow-up questions for every first response. For example, if your first question is "What are your goals for 2021?" then you should expect to ask two follow-up questions after your interviewee responds. If at any point the person you are interviewing mentions a challenge that you think legal can help to solve, this is your cue to follow up on the pain and its pervasiveness. Here are some questions you can ask to gauge how big a problem they are facing:

How often do you run into this roadblock: daily, weekly, monthly, quarterly?
When you run into this roadblock, how much time do you spend resolving it: 1-2 hours, 2-5 hours, 5-10 hours, 10+ hours?
Does this roadblock impact multiple people? If so, how many?
Does this roadblock (or a stall in your progress toward your goals) impact other departments?
Are there workarounds for this roadblock? If so, how cumbersome are they on a scale of 1-5?
If you had to reach out to another department and work with someone to remove this roadblock each time it came up, would you do that or would you continue with the workaround?
How long would you wait for an outside resource to help before you proceed with your current workaround?
Does the challenge have an impact on revenue?

Whether you are a general counsel just getting to know your organization, a legal operations professional tasked with making your department more efficient, or a lawyer who is interested in ensuring you are providing great services, the above should give you a good place to start to understand your customer.
Once you understand your customer, you're able to provide services that resonate and position your existing solutions effectively.
December 2, 2020
Blog
analytics, ai-big-data, blog, ai-and-analytics,

Preparing for Big Data Battles: How to Win Over AI and Analytics Naysayers

Artificial intelligence (AI), advanced analytics, and machine learning are no longer new to the eDiscovery field. While the legal industry admittedly trends towards caution in its embrace of new technology, the ever-growing surge of data is forcing most legal professionals to accept that basic machine learning and AI are becoming necessary eDiscovery tools.However, the constant evolution and improvement of legal tech bestow an excellent opportunity to the forward-thinking eDiscovery legal professional who seeks to triumph over the growing inefficiencies and ballooning costs of older technology and workflow models. Below, we’ll provide you with arguments to pull from your quiver when you need to convince Luddites that leveraging the most advanced AI and analytics solutions can give your organization or law firm a competitive and financial advantage, while also reducing risk.Argument 1: “We already use analytical and AI technology like Technology Assisted Review (TAR) when necessary. Why bring on another AI/analytical tool?”Solutions like TAR and other in-case analytical tools remain worthwhile for specific use cases (for example, standalone cases with massive amounts of data, short deadlines, and static data sets). However, more advanced analytical technology can now be used to provide incredible insight into a wider variety of cases or even across multiple matters. For example, newer solutions now have the ability to analyze previous attorney work product across a company’s entire legal portfolio, giving legal teams unprecedented insight into institutional challenges like identifying attorney-client privilege, trade secret information, and irrelevant junk data that gets pulled into cases and re-reviewed time and time again. This gives legal teams the ability to make better decisions about how to review documents on new matters.Additionally, new technology has become more powerful, with the ability to run multiple algorithms and search within metadata, where older tools could only use single algorithms to search text alone. This means that newer tools are more effective and efficient at identifying critical information such as privileged communications, confidential information, or protected personal information. In short, printing out roadmap directions was advanced and useful at the time, but we’ve all moved on to more efficient and reliable methods of finding our way.Argument 2: “I don’t understand this technology, so I won’t use it” This is one of the easiest arguments to overcome. A good eDiscovery solution provider can offer a myriad of options to help users understand and leverage the advances in analytics and AI to achieve the best possible results. Whether you want to take a hands-off approach and have a team of experts show you what is possible (“Here are a million documents. Show me all the documents that are very likely to be privileged by next week”), or you want to really dive into the technology yourself (“Show me how to use this tool so that I can delve into the privilege rate of every custodian across multiple matters in order to effectuate a better overall privilege review strategy”), a quality solution provider should be able to accommodate. Look for providers that offer training and have the ability to clearly explain how these new technologies work and how they will improve legal outcomes. Your provider should have a dedicated team of analytics experts with the credentials and hands-on experience to quell any technology fears. 
Argument 3: "This technology will be too expensive."

Again, this one should be a simple argument to overcome. The efficiencies that effective use of AI and analytics achieves can far outweigh the cost of using it. Look for a solution provider that offers a variety of predictable pricing structures, such as per-gigabyte pricing, flat fees, fees by case, fees across multiple cases, or subscription-based fees. Before presenting your desired solution to stakeholders, draft your battle plan by preparing a comparison of your favored pricing structure versus the cost of performing a linear review under a traditional pricing structure (say, $1 per document); a simple version of that comparison is sketched below. Also be sure to identify and outline any efficiencies a more advanced analytical tool can provide in future cases (for example, the ability to analyze and re-use past attorney work product). Finally, when battling risk-averse stakeholders, come armed with a cost/benefit analysis outlining the ways in which newer AI can mitigate risk, such as by enabling more accurate and consistent work product, case over case.
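As referenced above, a minimal sketch of that pricing comparison might look like the following. The document count, cull rate, platform fee, and $1-per-document review rate are assumptions for illustration only.

```python
# Sketch of the pricing comparison suggested above, using made-up numbers.
docs_in_matter = 300_000
linear_rate = 1.00            # "say, $1 per doc" traditional linear review
ai_cull_rate = 0.65           # assumed share of documents analytics removes from eyes-on review
ai_platform_fee = 45_000      # assumed flat or subscription fee for the matter

linear_cost = docs_in_matter * linear_rate
ai_cost = docs_in_matter * (1 - ai_cull_rate) * linear_rate + ai_platform_fee

print(f"Linear review:       ${linear_cost:,.0f}")
print(f"Analytics-assisted:  ${ai_cost:,.0f}")
print(f"Difference:          ${linear_cost - ai_cost:,.0f}")
```

Swapping in your own volumes and rates turns this into the side-by-side figure stakeholders usually want to see.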
June 21, 2021
Blog
legal-ops, blog, legal-operations,

Productizing Your Corporate Legal Department’s Services: Getting Started

The 2021 CLOC conference focused a lot on applying product principles to legal services. General Counsel are often in the position of having to show the value of their team’s services and why, as a cost center, it makes sense to continue to grow their department or to buy technology to support their department. In addition to showing that value, there is pressure to be more efficient while providing excellent customer services. By productizing services, you can provide repeatable, measurable solutions that address the needs above. There is also the great benefit of being connected to your client’s needs by providing the services that match the most pervasive and urgent needs. However, if you don’t have a background in product management, how does one go about productizing legal services, and what does that even mean? As someone who is Pragmatic Marketing Certified through the Pragmatic Institute, I am here to help. This blog, and the blog series to follow, will show you how to get started, interview people internally to understand the needs, position your existing solutions internally, and make build vs. buy vs. outsourcing decisions. Let’s start with a high-level overview of where to begin.What does productizing legal services mean? Productizing your legal services focuses on creating solutions that apply to multiple customers in a repeatable way. This means that you first have to understand your customers’ problems by listening, asking, and observing. It then means that you create several repeatable processes to address those problems. Finally, it means you market those solutions internally and show how they bring value to the business. Taking it one step further, it also means that you leverage technology to support these services and continue to develop and improve the services based on feedback.So how does one go about creating these solutions inside a legal team? The first step is all about understanding the needs of the business. You can look internally at the requests the legal department receives to get an understanding of what the business is coming to the legal department for. Next, you want to speak to leaders from different groups in the business to understand what legal needs exist that are not coming into the legal department but should be addressed. Which leaders to speak to will depend a bit on your organization but I would recommend connecting with the following, at minimum: sales, finance, engineering (or product) as well as regional leaders in any key regions. More on this to come in my next blog on interviewing people internally to understand the organization’s needs.Once you have the information, it is helpful to create a list. I like to use the format below:Problems to SolveOnce you have a pretty solid list, you should brainstorm high-level recommended solutions (not the detailed how). This will include things like solving a certain need through documentation (e.g. a “how-to guide” or a template contract). It may include things like facilitating the intake of legal requests or facilitating access to contract information. Once you have your list of potential solutions, there are two next steps. For the set of existing solutions, you should group those into categories and make sure that you are adequately marketing and reporting on those (more on this in a future post). For the set of solutions that are future state, identify how you are going to address this need. 
When looking at the gaps, I like to categorize them in the following ways so I can understand the budget impact and the division of work. Note that urgency speaks to how quickly the need must be addressed overall, not necessarily the urgency of a specific request. For example, it speaks to how urgently people need a contract database, as opposed to how quickly someone needs information about a specific contract. Pervasiveness addresses how many internal departments and employees have the need. Is it centered around just a small group within one department, or is it a need expressed by multiple departments? The relationship to the company strategy should be focused on how much the need moves the business forward. Does it facilitate the company's #1 strategy? When you complete this list, I recommend grouping it into like needs. If there are overlapping needs, you may want to create a consolidated item, but make sure you capture the pervasiveness of it.

Recommendations for Filling the Gaps

By going through the above process, you will have a good understanding of the various needs and solutions in your organization. In the next blog in the series, I will explain how to interview people internally to understand the organization's needs.
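For readers who like to make the categorization explicit, here is a small, hypothetical sketch that scores each gap by pervasiveness, urgency, and strategy fit and sorts the list. The weights and sample needs are invented; adjust them to match your own prioritization.

```python
# A lightweight way to rank the gap list described above.
# The scoring weights and sample entries are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class LegalNeed:
    name: str
    pervasiveness: int   # 1 = one small team, 5 = company-wide
    urgency: int         # 1 = can wait, 5 = needed now
    strategy_fit: int    # 1 = peripheral, 5 = supports the #1 company strategy

    def score(self) -> int:
        # Weight pervasiveness most heavily, per the prioritization above.
        return self.pervasiveness * 3 + self.strategy_fit * 2 + self.urgency

needs = [
    LegalNeed("Contract status visibility", pervasiveness=4, urgency=3, strategy_fit=4),
    LegalNeed("NDA self-service templates", pervasiveness=5, urgency=2, strategy_fit=3),
    LegalNeed("Patent filing workflow", pervasiveness=2, urgency=4, strategy_fit=5),
]
for need in sorted(needs, key=LegalNeed.score, reverse=True):
    print(f"{need.score():>2}  {need.name}")
```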
March 21, 2024
Case Study

Lighthouse Drives First Adoption of M365 by a Major Financial Services Organization

The project included replacing expensive third-party archives with native tools in M365, utilizing an automation solution that Lighthouse had recently prototyped for a large global manufacturer, and other breakthroughs the institution had been unable to make before engaging Lighthouse. Our work with the institution helped unblock their Microsoft 365 deployment and ultimately led to the institution disclosing to regulators its intent to use M365 as its system of record.

Systemically important financial institutions (SIFIs) have long wished for a better way to meet their immutability requirement. Historically, they have relied on archiving solutions, which were designed years ago and are poorly suited to the data types and volumes we have today. For years, people in the industry have been saying, "Someday we'll be able to move away from our archives." It wasn't until the introduction of M365 native tools for legal and compliance that "someday" became possible.

Data Management for SIFIs is Exceptionally Complex

The financial services industry is one of the most highly regulated and litigious sectors in the world. As a result, companies tend to approach transformation gradually, adopting innovations only after the technology has settled and the regulatory and legal landscape has evolved. However, the rate of change in the contemporary world has pushed many financial heavyweights into a corner: they can continue struggling with outdated, clunky, inadequate technologies, or they can embrace change and the disruption and opportunities that come with it.

From an eDiscovery perspective, there are three unique challenges: (1) As broker-dealers, these institutions must retain certain documents in accordance with specific regulatory requirements that govern the duration and manner of storage for regulated records, including communications (the manner of storage must be "immutable"). This has traditionally required third-party archive solutions that include basic eDiscovery functionality. (2) As highly regulated companies with sizable investigation and litigation matters, they need to preserve data in connection with large volumes of matters. Traditionally, preservation was satisfied by long-term retention (coupled with immutable storage) and without deletion. Today, however, companies seek to dispose of legacy data—assuming it is expired and not under legal hold—and are eager to adopt processes and tools to help in this endeavor. (3) They need to collect and produce large volumes of data, sometimes in a short timeframe and without the ability to cull in place. This means they are challenged by native tooling that might not support the scale and size of their operations.

This particular company's mission was clear: to use M365 as a native archive and source of data for eDiscovery purposes. To meet this mission, Lighthouse needed to establish that the platform could meet immutability and retrievability requirements—at scale and in the timeframe needed for regulatory and litigation matters.

Lighthouse Helps a Large Financial Institution Leverage M365 to Replace Its Legacy Archive Solution

Lighthouse is perfectly positioned to partner with financial services and insurance organizations ready to embrace change. Many on our team previously held in-house legal and technology roles at these or related organizations, including former in-house counsel, former regulators, and former heads of eDiscovery and Information Governance.
Our team's unique expertise was a major factor in earning the trust and business of a major global bank ("the Bank"). The Bank first engaged with Lighthouse in 2018, when we conducted an M365 workshop demonstrating what was possible within the platform—most notably, at the time, the potential for native tools to replace their third-party archives. Following the workshop, the Bank attempted, together with Microsoft, to find a viable solution. These efforts stalled, however, due to the complexity of the Bank's myriad requirements. In 2020, the Bank re-engaged Lighthouse to support its efforts to fully deploy Exchange and Teams and, in doing so, to utilize the native information governance and eDiscovery toolset, paving the way for the Bank to abandon its use of third-party archiving tools for M365 data. Our account team had the nuanced understanding of industry regulations, the litigation and regulatory landscape, and the true technical requirements needed to support a defensible deployment.

As a result, we were able to drive three critical outcomes that the Bank and Microsoft had not been able to achieve on their own: (1) a solution adequate to meet regulatory requirements (including immutability and retrievability); (2) a solution adequate to meet the massive scale required at an institution like this; and (3) a realistic implementation timeline and set of requirements.

Lighthouse Ushers the Bank Through Technical and Industry Milestones

We spent six months designing and testing an M365-based solution to support recordkeeping and eDiscovery requirements for Teams and Exchange (including the massive scalability requirements). The results of these initial tests identified several gaps that Microsoft committed to close. The six-month mark brought a huge milestone for the financial services industry, as the Bank disclosed to regulators its intent to use M365 as its system of record. This showed extreme confidence in Lighthouse's roadmap for the Bank, since a disclosure of this nature is an official notice and cannot easily be walked back.

Over the next few months, we continued to design and test, partnering with Microsoft to create a sandbox environment where new M365 features were deployed to the Bank prior to general availability, ensuring we were able to validate adequate performance. During this time, Microsoft made a series of significant updates to extend functionality and close performance gaps to meet the Bank's requirements. Finally, in February 2021, all the Bank's requirements had been met and they went live with Teams—the first of their M365 workload deployments.

That configuration of M365 met only some of the Bank's needs, however, so Lighthouse had to enable additional orchestration and automation on top. As it happens, we had recently done this for another company, creating a proof of concept for a reusable automation framework designed to scale eDiscovery and compliance operations within M365. Building on this work, we were able to quickly launch development of a custom automation solution for the Bank. This project is currently underway and is slated to complete in June, coinciding with their deployment of Exchange Online.

Lighthouse Enables Adoption of Teams and Exchange and Scales M365 Compliance Functionality

Compliant storage of M365 communications using native tools, rather than a third-party archive.
Scaled and efficient use of M365 eDiscovery, including automation to handle preservation and collection tasks rather than manual processes or simple PowerShell scripts.
Improved update monitoring, replacing an IT- and message-center-driven process with a cross-functional governance framework based on our CloudCompass M365 update monitoring and impact assessment for legal and compliance teams.
Framework for compliant onboarding of new M365 communication sources like Yammer.
Framework for compliant implementation of M365 in new jurisdictions, including restricted-country solutions for Switzerland and Monaco.
Framework to begin expanding to related use cases within M365, such as compliance and insider risk management.

Lighthouse Paves the Way for Broader M365 Adoption Across the Financial Services Industry

Following the success of this project, we have been engaged by a dozen other large financial institutions interested in pursuing a similar roadmap. The roadblocks we removed for the Bank are shared across the sector, so the project was carefully watched. With the Bank's goals confidently achieved and even surpassed, its peers are ready to begin their own journey to sunset their archives and embrace the opportunities of native legal and compliance tools in M365.
March 27, 2024
Case Study
ai-and-analytics

AI Powers Successful Review for a Pressing Matter

Two Months to Tackle Three Million DocumentsA financial institution with an urgent matter had two months to review 3.6M documents (2.4TB of data).With that deadline, any time that reviewers spent on irrelevant documents or unnecessary tasks risked missing their deadline. So outside counsel called on Lighthouse to help efficiently review documents.AI and Experience Prove Up to the ChallengeUsing our AI-powered review solution, we devised an approach that coordinated key data reduction tactics, modern AI, and search expertise at different stages of review.Junk Removal and Deduplication Set the Stage We started by organizing the dataset with email and chat threading and removing 137K junk documents. Then we shrank the dataset further with our proprietary deduplication tool, which ensures all coding and redactions applied to one document automatically propagate to its duplicates. AI Model Removes 1.5 Million Nonresponsive Documents To build the responsive set, we used our AI algorithm, built with large language models for sophisticated text analysis. We trained the model on a subset of documents then applied it to all 2.2M TAR-eligible documents, including transcripts from chat platforms. The model identified 80% of the documents containing responsive information (recall) with 73% accuracy (precision). The final responsive set consisted of 650K family-inclusive documents—18% of the 3.6M starting corpus. AI Supports Privilege Detection, QC, and Descriptions Our AI Privilege Review solution supported reviewers in multiple ways.First, we used a predictive AI algorithm in conjunction with privilege search terms to identify and prioritize potentially privileged documents for review. During QC, we compared attorney coding decisions with the algorithm’s assessment and forwarded any discrepancies to outside counsel for final privilege calls. For documents coded as privileged, we used a proprietary generative AI model to draft 2.2K unique descriptions and a privilege log legend. After reviewing these, attorneys left nearly 1K descriptions unchanged and performed only light edits on the rest.Search Experts Surface the 300 Documents Most Important for Case Prep Alongside the production requirements for the Second Request, Lighthouse also supported the institution’s case strategy efforts. Each tranche of work was completed in 4 days and within an efficient budget requested by counsel, who was blown away by the team’s speed and accuracy. Using advanced search techniques and knowledge of legal linguistics, our experts delivered: 130 documents containing key facts and issues from the broader dataset, for early case analysis. 170 documents to prepare an executive for an upcoming deposition. Beating the Clock Without Sacrificing Cost or QualityWith Lighthouse Review—including the strategic use of state-of-the-art AI analytics—outside counsel completed production and privilege logging ahead of schedule. The financial institution met a tough deadline while controlling costs and achieving extraordinary accuracy at every stage.
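As a generic illustration of the privilege QC step described above (comparing attorney privilege coding against the model's assessment and escalating disagreements to outside counsel), a minimal sketch might look like the following. The document IDs, scores, and threshold are invented and do not reflect the actual solution used in this matter.

```python
# Sketch of a privilege QC pass: compare attorney coding with a model's
# privilege prediction and queue any disagreements for counsel's final call.
# Document IDs, scores, and the 0.5 threshold are invented for illustration.
THRESHOLD = 0.5

coded = {            # attorney privilege calls
    "DOC-101": True,
    "DOC-102": False,
    "DOC-103": True,
    "DOC-104": False,
}
model_scores = {     # model's predicted probability that the document is privileged
    "DOC-101": 0.91,
    "DOC-102": 0.74,
    "DOC-103": 0.12,
    "DOC-104": 0.08,
}

escalate = [
    doc for doc, is_priv in coded.items()
    if (model_scores[doc] >= THRESHOLD) != is_priv
]
print("Send to outside counsel for a final call:", escalate)
# -> ['DOC-102', 'DOC-103']
```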
December 15, 2023
Case Study

Lighthouse Uncovers Key Facts In Misappropriation Investigation

Searching for Evidence in 8TB of Chat and Technical Data
Senior executives at an information technology company suspected that former employees had utilized company resources and intellectual property when starting a rival company. To determine whether litigation was called for, executives needed to find the most relevant documents within 8TB of processed data. The data was extremely complex, dating back 6+ years and consisting mostly of Slack data and attachments including highly technical documents, applications, logs, and related system files—tallying over ten million files. The company engaged a senior partner at an AM50 law firm, who recommended using keyword search terms, filters, and targeted linear review to find the "smoking gun" documents—which was estimated to take several months. The company came to Lighthouse looking for a faster, more strategic search alternative for their investigation.

Pinpointing Key Docs with Linguistic Analysis
Two Lighthouse search and linguistics experts met with company executives to learn exactly what information they suspected the former employees had misappropriated. From there, our experts created linguistic-based search criteria that went well beyond keywords, taking into consideration the unique vocabulary and syntax of software engineers and developers, the conversational quirks of Slack and other chat-based communications, and the coded language used by people who are trying to get away with something. The team delivered documents in 2 batches, refining their search based on input from the executives—and resulting in only 39 files for the company to review.

Getting Results—and a Start on Case Strategy—in Days
In less than 10 days, 2 Lighthouse experts pierced the subterfuge in the employees' chat messages to reveal patterns in their behavior and attempts to cover their tracks. In all, we found 39 documents representing possibly questionable conduct, which required only 141 hours of eyes-on review. In comparison, conventional analytics would have identified 5-20% of the search population as key documents—up to 50K documents to review in this matter. So in the end, Lighthouse saved the company over 3 months and nearly $200K. Armed with knowledge of the key events, timelines, and context of conversations buried within the data, the company was primed to begin litigation efforts and had a team ramped up to perform additional searches when needed.

Lighthouse KDI vs Linear Review
December 15, 2023
Case Study
Key document identification, KDI, ai-and-analytics

Lighthouse Litigation Prep Proves Invaluable in Complex Litigation

Firms Needed Fast Analysis of 25M Documents
More than a dozen international law firms—including a Joint Defense Group (JDG) of 11 firms and several firms representing defendants outside the JDG—were engaged in a complex cluster of cases spanning over 30 US jurisdictions. The total document tranche included over 25M documents. The firms needed to find and understand the key players, timelines, and nuances involved in each litigation, while also preparing for hundreds of depositions, witness interviews, hearings, and trials scheduled across the litigation universe. However, traditional approaches to fact-finding and litigation (i.e., document review, keyword searches, etc.) were drowning case teams in extraneous and duplicative information. They came to Lighthouse looking for a strategic, unified approach to fact-finding, led by experts who could deliver the key documents, information, and details the case teams needed—and nothing more.

Custom Workflows Power Consistency, Speed, and Efficiency
Our experts started by creating a topic map across matters, which helped them quickly provide case teams with the core themes in each jurisdiction while reducing redundant search work. From there, as case strategy for each matter developed, the Lighthouse team drilled down into more nuanced fact-finding to help surface the documents case teams needed to learn the key details of each matter, through strategies like:
State/Jurisdictional Overview Workflow – We used advanced search technology to target key documents in incoming productions and categorize them by jurisdiction, providing case teams with an immediate thematic overview of key facts and timelines.
Re-Deployable Linguistic Model Workflow – Lighthouse linguists developed models based on intimate knowledge of the language used within the datasets, then deployed them within proprietary search technology to sort documents into tiers based on the likelihood that they contained key information.
Deposition Kit Bundle Workflow – By bundling deposition kit requests from the same jurisdictions and departments together, we could search across smaller collections of documents and take a deponent-agnostic approach.
Previously Delivered Name Hit Workflow – We provided case teams with documents from previously delivered results, giving them an advanced start on deposition preparation while further reducing duplicative searching.
These repeatable workflows significantly reduced the volume of searching and coordination required across matters and enabled Lighthouse experts to quickly zero in on the exact documents needed—without wasting counsels' time with redundant and unimportant documents.

Critical Docs Found and Delivered Across Dozens of Matters and Hundreds of Kits
Over the course of two years, Lighthouse experts prepared dozens of case teams for complex litigation and handled a deluge of competing deadlines, priorities, and ad hoc requests (totaling as many as 70 requests at a time).
For the Joint Defense Group, this meant:
Over 1,150 deposition kits across 24 matters, encompassing 245K unique documents
Over 100 state overviews across 21 different jurisdictions, encompassing 80K documents
For law firms representing individual defendants, Lighthouse provided an additional:
150 deposition kits, encompassing 13K documents
30 defensive overviews across 20 jurisdictions, encompassing 6K documents
1.3K documents in response to ad hoc requests and trial support
Each delivery was limited to essential information—including key themes and players in every jurisdiction, potential gaps in productions, lists of hot/sensitive documents and potential deponents, and key strategy documents—and avoided redundant and unimportant documents. The combination of innovative workflows and cutting-edge technology enabled Lighthouse to keep our team small and consistent throughout the engagement, so the entire effort was achieved by a handful of Lighthouse experts with institutional knowledge of every matter. Since this engagement, we have used the same workflows for other clients facing complex Multidistrict Litigation (MDL)—making Lighthouse key document identification one of the most valuable and scalable litigation technology solutions on the market today.
September 22, 2023
Case Study

Lighthouse Transforms Complex Enterprise Data Protection with Microsoft Purview

Lighthouse SMEs combined a dedication to exemplary customer experience with a strategy that marries compliance, security, IT, and legal needs, helping a global chemistry solutions and specialty materials producer meet the evolving security and regulatory demands of international manufacturing by deploying Microsoft Purview across workstreams, preparing for future needs, and reducing costs.

Global Leader in Chemistry Solutions Transforms Enterprise Data Protection with Microsoft Purview
An international producer of commercial chemicals and specialty materials upholds a commitment to people safety and well-being as part of their core tenets. As cyber risks increased along with data volumes, the organization extended their commitment to safety to include the security of data accessed, produced, and stored within their enterprise. Now, the company has implemented a comprehensive data protection program using the entire Microsoft 365 Information Protection suite. After careful design, the team is piloting the solution before a global rollout.

A Commitment to Physical and Digital Safety
As one of the world's largest acetyl products manufacturers and a top-tier producer of high-performance engineered polymers, the company supplies chemicals across major industries and for a variety of industrial and consumer applications. Over 10,000 employees in offices, technical centers, and 50+ manufacturing facilities work to realize a vision of improving the world and everyday life through people, chemistry, and innovation—with products that impact the lives of millions. For the organization, an operational approach rooted in well-being has always meant physically safe working environments for employees, and safe solutions for their customers and their communities. However, in this digital age, they have expanded their notion of safety to include data protection for employees, customers, shareholders, and the communities in which they operate. The company's Chief Information Security Officer (CISO) notes that committing to data protection means a "higher level of assurance—making sure that our security controls keep pace with the threats that surround us every day and seek to exploit vulnerabilities in companies like us every day. You can't stand still. You always have to evolve—you always have to get better, otherwise you're devolving, and you're getting worse, and becoming more vulnerable."

Advancing Data Protection with a Trusted Partner
A few years ago, when the company decided to make the move to the cloud, they chose Microsoft 365 E5 and Microsoft Azure, building on their longstanding use of Microsoft technologies. Prior efforts to overhaul their data protection program had been unsatisfactory. However, with access to new Microsoft Purview capabilities, the Information Security team saw an opportunity to try again. They hoped to utilize the full breadth of the Microsoft 365 Information Protection suite, including Information Protection classification and labeling, Data Loss Prevention (DLP), and Insider Risk Management solutions. Microsoft tapped Lighthouse Global, a Security Solutions partner with the Advanced Specialization designation in Information Protection and Governance, to lead the engagement for its ability to effectively understand complex compliance needs across IT, security, and legal departments.
They hoped that together they could develop a solution to realize the investment they'd made in Microsoft 365, and to support their corporate commitment to safety for both employees and customers. "If you were to interview a bunch of companies, those who have actual, very successful DLP and data labeling programs typically have a hodgepodge of solutions that get melded together," reflected the CISO, "and that's where Lighthouse was successful…we've been able to leverage the investment…and get it to work, [and not] have to go spend more money to hodgepodge together a solution."

Developing a Comprehensive, Scalable Solution
The Lighthouse team started by holding a series of working sessions to align the company's vision and requirements and design the implementation approach. Using Microsoft Compliance Check, Lighthouse scanned the company's environment to get an understanding of current-state activity and sensitivity intelligence. The team also reviewed existing policies and approaches for the handling of sensitive data and data loss prevention to identify any gaps or areas of opportunity. From there, the combined teams designed and configured a holistic data protection solution leveraging multiple Microsoft Purview products, including Data Loss Prevention, Information Protection, and Insider Risk Management. Starting with data classification, the team defined the sensitive information types that needed to be identified. From there, they developed a set of sensitivity labels corresponding to the data protection policy. This set of classification techniques and labels was generated in the course of both the Data Loss Prevention and Insider Risk Management implementations, ensuring a comprehensive data life cycle protection program from content identification through insider threat analysis. Finally, the Lighthouse team supported the integration of the Microsoft products with the company's third-party HR software to feed HR data into the Data Theft by Departing Employee Policy, enabling the creation of a truly end-to-end solution.

Fulfilling a Mission of Security
The company's dedication to safety, security, and well-being across applications and contexts drove this project's success. "Because we see security as part of our commitment to people and innovation, we take a uniquely holistic approach and have strong support all the way up to our board of directors," says the company's CISO. The CISO also credits Lighthouse's unwavering commitment to partnership. "They helped us not only implement the technology and guide us through some of the critical points to consider as we implemented the technology, but also the process and decision points with data—which ultimately, in the end, actually worked," they conclude. Now, with the design and implementation of the Microsoft Purview-based data protection program behind them, the organization's information security team is focused on operationalizing the program through a series of pilots scheduled over the next year. Their ultimate goal is total, global implementation of the solution—and total, global protection for all employee and customer data.
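For context on what defining sensitive information types and assigning sensitivity labels involves conceptually, here is a minimal, hypothetical Python sketch of pattern-based classification. The patterns and label names are assumptions for illustration only; Microsoft Purview defines sensitive information types through its own configuration (patterns, keywords, and confidence levels), not through code like this.

```python
import re

# Illustrative patterns only; real Purview sensitive information types
# combine patterns with keywords, checksums, and confidence levels.
SENSITIVE_PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical mapping from detected types to sensitivity labels.
LABEL_FOR_TYPE = {
    "Credit Card Number": "Highly Confidential",
    "U.S. SSN": "Highly Confidential",
}

def classify(text: str) -> str:
    """Return the sensitivity label implied by the first matching pattern."""
    for info_type, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            return LABEL_FOR_TYPE[info_type]
    return "General"

print(classify("Customer SSN 123-45-6789 on file"))  # Highly Confidential
```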
September 7, 2023
Case Study
ediscovery-review, ai-and-analytics, biotech

Alignment and Savings Across a Dynamic Portfolio

A global biotech achieves consistent and efficient document review with Lighthouse Review.

Key Actions
Coordinating efforts across disparate review teams and counsel
Integrating advanced AI and other innovations on an incremental basis

Key Results
Streamlined and efficient approach to document review
Saving more than $340,000 through a tailored workflow in one recent matter

A Lack of Coordination Drove High Costs and Complexity
Document review for a global biotech was expensive and inconsistent, due to a high frequency of litigations with often overlapping timelines and different outside counsel. Lighthouse had been managing the company's electronically stored information (ESI) for years, saving the company hundreds of thousands of dollars through plans and policies introduced over time. After learning of our expertise in managed review, the company hired Lighthouse to bring order and efficiency to that domain as well.

Laying a Foundation with Standard Protocols
Our first order of business was to establish universal standards across matters, outside firms, and review vendors. These included:
Upstream changes, such as data management protocols that made documents easier to search and sort.
Overarching review protocols, such as QC process guidelines and specifications for production.
Changes to specific tasks, such as refining privilege filters and standardizing coding layouts so review performance could be compared across different matters and teams.
A Lighthouse review manager trained all current firms and vendors and was on hand to monitor progress and answer questions, as well as onboard new firms and vendors as needed.

Increasing Efficiency Through Technology
Over time, Lighthouse gradually introduced accelerators to help increase efficiency and cost savings. Initially, this consisted of:
Deduplication improvements, through strategies like single-instance review and normalized deduplication.
Review accelerators, such as privilege log automation and redaction automation.
To drive even more savings, Lighthouse led the company through a test-and-learn process for building workflows around advanced AI and other, more in-depth technology. The process involved trying out a new technology on a live matter, then conducting a post-mortem to clarify what worked and what could be improved. In this way, Lighthouse and the company developed a rubric for determining which workflows were the right fit for different matters.

Streamlined, Aligned, and Eager to Keep Innovating
In 5 years, Lighthouse transformed the company's disconnected, manual, expensive approach to document review into a coordinated and robust program that boosts efficiency at every level. For one recent matter—a patent litigation with a tight timeline overlapping the winter holidays—this review program drove extraordinary efficiency and savings. Tailoring the client playbook for the specific matter, the review manager designed a complex workflow that reduced eyes-on review:
The initial dataset of almost 8M documents was reduced to a corpus of 388K through deduplication, culling, and removal of embedded and redundant documents.
The population was further reduced through search-hit-only protocols and by employing a continuous active learning (CAL) model, stopping review when responsive documents became scarce (a simplified sketch of this stopping rule appears below).
Finally, Lighthouse reunited document family members, automatically giving members tied to responsive documents the coding of their source docs.
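As referenced above, a CAL workflow reviews documents in ranked batches and stops when responsive documents become scarce. The sketch below is a simplified illustration of that stopping rule; the batch size, threshold, and toy corpus are all assumptions, and this is not Lighthouse's actual CAL implementation.

```python
import random

def review_batch(docs):
    """Placeholder for attorney review; returns the responsive documents."""
    return [d for d in docs if d["responsive"]]

def cal_review(ranked_docs, batch_size=500, stop_rate=0.05):
    """Review documents in ranked batches; stop when the responsive rate
    in a batch falls below stop_rate (responsive docs have become scarce)."""
    reviewed, found = 0, 0
    for start in range(0, len(ranked_docs), batch_size):
        batch = ranked_docs[start:start + batch_size]
        hits = review_batch(batch)
        reviewed += len(batch)
        found += len(hits)
        if len(hits) / len(batch) < stop_rate:
            break
        # In a real CAL loop, the model would be retrained on the new coding
        # decisions here and the remaining documents re-ranked.
    return reviewed, found

# Toy corpus: responsiveness thins out toward the bottom of the ranking.
corpus = [{"responsive": random.random() < max(0.8 - i / 5000, 0.01)}
          for i in range(10000)]
print(cal_review(corpus))
```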
In the end, Lighthouse:
Reduced eyes-on review to just 92K documents (25% of the documents promoted to review)
Saved the company an estimated $341,000 in review costs
Going forward, the company is ready to increase its use of technology, including classifiers built with advanced AI and an automated workflow for redactions of personally identifiable information (PII).
September 7, 2023
Case Study
antitrust, ai-and-analytics-ediscovery-review, kdi, key document identification

Lighthouse Key Document Identification Proves Pivotal to Antitrust Defense

Lighthouse leveraged linguistic expertise and cutting-edge analytics to efficiently locate only the documents that mattered in a complicated, year-long antitrust criminal investigation and trial.

What They Needed
Senior executives from a global food manufacturing company faced federal criminal antitrust charges related to allegations of 15 instances of price fixing over a five-year period. The company assembled a joint defense team composed of outside counsel representing each of the executives. The prosecution expected to make rolling productions of evidence up to and through the trial. As those productions rolled in, the joint defense team could tell that many of the evidentiary documents, timelines, and conversations that were key to the prosecution's case were taken out of context or failed to include all the exculpatory evidence. However, the joint defense team was having trouble finding key evidence because much of the nuance was located within piecemeal chat conversations and complex bid spreadsheets that were buried among millions of similar documents. The joint defense team needed a document search team that was nimble and could quickly identify the most important documents to the defense and share them across the team. They came to Lighthouse because we could quickly identify key documents with accuracy and nuance.

How We Did It
Lighthouse first organized a central search desk, where all members of the joint defense team could go for document search requests, with results shared across three defense teams. Next, the Lighthouse team located the most important documents related to each of the 15 episodes of price-fixing allegations, on a priority basis. They used linguistic expertise to create narrow searches, taking into consideration the nuance of acronyms, slang, and terminology used within the company and the food manufacturing industry. They also leveraged Lighthouse's proprietary, cutting-edge search analytic tools to look for key information buried in hundreds of thousands of Excel spreadsheets and chat messages. As the government produced more documents, the Lighthouse team refreshed their searches, looking for key documents in each new production and quickly sharing results across the defense team. As defense preparations continued throughout the year, we supported all aspects of trial preparation, including two mock trials, all witness preparation binders, and the James hearing. Lighthouse support will continue through the criminal trial for the senior executives, due to our proven success in supporting ad hoc search requests and providing results in real time.

The Results
The Lighthouse team efficiently delivered incredibly accurate results, saving the underlying client more than $3M thus far. Out of an always-in-flux review population that eventually grew to over 16M documents, Lighthouse was able to cull through the irrelevant data to find and deliver only the most important documents for the defense team's utilization.
In the end, that amounted to less than 1% of the initial review population, including:
4.7K documents for the joint defense group to defend the episodes of alleged price fixing
5.3K documents for defense team's specific ad hoc and witness kit requests (an average of 400 documents per witness kit)
In comparison, a traditional linear review using search terms and conventional analytics performed by multiple case teams typically results in 5-20% of the data population being tagged as "key documents." This volume would then be funneled to the case teams for review as well, where they would waste valuable time and resources looking at hundreds of thousands of irrelevant or run-of-the-business documents. In addition to cost-efficiency, the team has gained expertise in the key events, timelines, and context of conversations buried within the data. As such, the team is now a critical resource to the defense, supporting all stages of the investigation and assisting with pivotal ad hoc requests. Examples include finding a unique pricing document buried among volumes of near duplicates, as well as the relevant context surrounding a single line of a chat message. In the end, Lighthouse saved the underlying company significant time and money that could not have been achieved otherwise. Additionally, our expertise in the data was a critical resource to the joint defense team, which relied on Lighthouse at each step of trial preparation. Lighthouse expert support will continue throughout the criminal trial.
August 3, 2023
Case Study
Case-Study; client-success; ai-and-analytics; analytics; Compliance-and-Investigations; Corporate; Corporation; data-analytics; eDiscovery; fact-finding; healthcare-investigations; investigations; machine-learning; predictive-coding; Processing; risk-management; self-service, spectra; Spectra; TAR; TAR-Predictive-Coding; technology-assisted-review; edicovery-review; ai-and-analytics

Lighthouse Self-Service Solution Uplevels Compliance Investigations

In-house legal and compliance teams use Lighthouse Spectra, a cloud-based, self-service legal technology platform, to achieve a more efficient and scalable approach to compliance monitoring. Our self-service technology keeps clients well ahead of audits and compliance risks, while lowering the costs and inefficiencies inherent to compliance monitoring, particularly for companies working in heavily regulated industries. Clients avoid the processing fees and wait times that burden compliance reviews by quickly and easily loading their own data. Then, they leverage industry-leading technology to create repeatable, scalable compliance workflows that quickly cull out irrelevant data and uncover key information. The results are lower risk, faster results, and unprecedented savings.

Repeatable and Effective Self-Service Compliance Investigation Workflow
Below, we've detailed a sample self-service compliance workflow—including real results that our clients have achieved at each step during internal investigations. Similar workflows have been used by our clients to deliver up to 96% reduction in document review and over $800K in savings across a single investigation.

Step 1: Automated Data Upload, Processing, and Deduplication
What it does: Reduces administration time, speeds up investigation setup, reduces hosting costs, and reduces the review population by removing duplicates.
Lighthouse self-service automation features reduce the manual setup tasks that often delay the start of an investigation (data import, processing, etc.). Clients can leverage Lighthouse's native file managing technology at this point to significantly reduce hosting costs—by only loading native files if or when they're necessary to the investigation. Once data is uploaded and processed, clients can deploy Lighthouse deduplication technology to immediately remove redundant data.
Results: Enabled an investigation team to start analysis one week earlier than standard processing; reduced the data population by 25%.

Step 2: ECA Culling and Search Term Iteration
What it does: Reduces review populations by removing irrelevant documents.
Once data is processed and deduplicated, clients use our customized culling and search term iteration processes to swiftly narrow the scope of documents for review.
Results: Reduced the review population of an internal investigation by over 78%.

Step 3: Thread Suppression and Proprietary Review Technology
What it does: Reduces review populations by identifying the most unique documents.
Clients can then implement customized workflows that combine email thread suppression with Lighthouse review technology to identify the most unique documents (a simplified sketch of thread suppression follows this workflow).
Results: Reduced the review population of an internal investigation by over 50%.

Step 4: Lighthouse TAR and Advanced Analytics
What it does: Finds the key documents that matter to investigations.
After the culling process, clients often deploy Lighthouse's Continuous Active Learning TAR workflows to find relevant documents. Once reaching a point of diminishing returns, advanced analytics such as clustering, categorization, and concept searches can be deployed to ensure that no relevant documents were left behind.
Results: Reduced the review population of an internal investigation by over 60%.
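As a rough illustration of the thread-suppression idea referenced in Step 3, the sketch below keeps only "inclusive" messages, i.e., those whose text is not wholly contained in a longer message in the same thread. The data model and containment test are simplifying assumptions, not the platform's actual algorithm.

```python
def suppress_threads(messages):
    """Keep only 'inclusive' messages: those whose text is not wholly
    contained in another, longer message in the same thread."""
    inclusive = []
    by_thread = {}
    for msg in messages:
        by_thread.setdefault(msg["thread_id"], []).append(msg)
    for thread in by_thread.values():
        for msg in thread:
            contained = any(
                other is not msg and msg["body"] in other["body"]
                for other in thread
            )
            if not contained:
                inclusive.append(msg)
    return inclusive

emails = [
    {"thread_id": 1, "body": "Can we move the launch to May?"},
    {"thread_id": 1, "body": "Yes.\n> Can we move the launch to May?"},
]
# Only the later reply survives, since it contains the earlier message.
print(len(suppress_threads(emails)))  # 1
```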
August 3, 2023
Case Study
Case-Study; client-success; ai-and-analytics; analytics; document-review; eDiscovery; fact-finding; investigations; KDI; key-document-identification; keyword-search; TAR; TAR-Predictive-Coding; technology-assisted-review; machine-learning; transportation-industry; automotive-industry; edicovery-review; ai-and-analytics

Unprecedented Review Accuracy and Efficiency in Federal Criminal Investigation

A global transportation company was under investigation for possible infractions of the Foreign Corrupt Practices Act (FCPA) in India. The company's legal counsel needed to quickly produce responsive documents and find key documents to prepare their defense.

Key Results
4M total documents reduced to 250K through 2 rounds of responsive review, with precision and recall of 85% or higher.
810 key documents quickly delivered to outside counsel, saving them hours of review and gaining more time for case strategy.

A Complex Dataset Requiring Nuanced Approaches
The company collected 2M documents from executives in India and the U.S. Information in the documents was extremely sensitive, making it critical to produce only those documents related to the India market. This would be impossible for most TAR tools, which use machine learning and therefore can't reliably differentiate conversations about the company's business in India from discussions solely pertaining to U.S. business. Finding key documents to prepare a defense was challenging as well. The company wanted to learn whether vendors and other third parties had bribed officials in violation of the FCPA, but references to any such violations were sure to be obscure rather than overt.

Zeroing In On the Right Conversations
Lighthouse used a hybrid approach, supplementing machine learning models with powerful linguistic modeling. First, our linguistic experts created a model to remove documents that merely referred to India but didn't pertain to business in that market, so that the machine learning TAR wouldn't pull them into the responsive set. Then our responsive review team developed geographic filters based on documents confirmed as India-specific and used those filters to train the machine learning model. The TAR model created an initial responsive set, which our linguists refined even further with an additional model based on nuances of English used in communications across different regions of India. By the end, our hybrid approach had reduced the corpus by 97%, with an 87% precision rate and 85% recall. Once this first phase of review was successfully completed, Lighthouse dove into an additional 2M documents collected from custodians located in India.

Finding Key Documents Among Obfuscated Communications
To help inform a defense, our search specialists focused on language that bad actors outside the company might have used to obfuscate bribery. The team used advanced search techniques to examine how often, and in what context, certain verb-noun pairs indicating an "exchange" were used (for instance, commonly used innocent pairings like give a hand vs. rarer pairs like give reward). The team could then focus on the documents containing language indicating an attempt to conceal or insinuate (a simplified sketch of this pair-counting idea follows this case study).

$1.7M Saved, 810 Key Documents Found to Support Defense
Lighthouse performed responsive review on two datasets of 2M documents each, reducing them to less than 250K and saving the client more than $1.7M. Out of the 237K responsive documents, Lighthouse uncovered 810 hot docs spanning 7 themes of interest. The work was complete in just 3 weeks and enabled outside counsel to provide the best defense to the underlying company.
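To make the verb-noun pair technique concrete, here is a simplified, hypothetical Python sketch that counts "exchange" verbs followed closely by objects of interest. The word lists and window size are assumptions for illustration; the actual work relied on far richer linguistic modeling and search tooling.

```python
import re
from collections import Counter

# Hypothetical "exchange" verbs and objects of interest; the real analysis
# used a much richer, linguistically informed inventory.
VERBS = {"give", "gave", "offer", "offered", "send", "sent"}
OBJECTS = {"reward", "gift", "cash", "favor", "payment"}

def exchange_pairs(text: str, window: int = 3) -> Counter:
    """Count verb-object pairs where the object appears within `window`
    tokens after the verb (e.g., 'gave ... reward')."""
    tokens = re.findall(r"[a-z']+", text.lower())
    pairs = Counter()
    for i, tok in enumerate(tokens):
        if tok in VERBS:
            for nxt in tokens[i + 1 : i + 1 + window]:
                if nxt in OBJECTS:
                    pairs[(tok, nxt)] += 1
    return pairs

print(exchange_pairs("He said he gave a small reward to the inspector."))
# Counter({('gave', 'reward'): 1})
```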
August 1, 2023
Case Study
AI, ai-and-analytics, analytics, artificial-intelligence, Big-Data, Case-Study, Corporation, Corporate, data-analytics, Data-Re-use, Data-Reuse, data-re-use, document-review, eDiscovery, eDiscovery-Migration, healthcare-litigation, litigation, managed-review, Prism, TAR, TAR-Predictive-Coding, technology-assisted-review, ediscovery-review, ai-and-analytics

Connecting Matters for Better, Faster eDiscovery

A healthcare provider needed help simplifying ESI hosting for a complex series of 14 related matters across 9 states (and growing). Lighthouse went above and beyond—providing a unified workflow from hosting to review.

Key Actions
Quickly migrated 11M documents from existing Relativity and non-Relativity databases into a single repository, supported by AI
Created one sophisticated workflow—from ESI storage to managed review—for over 14 matters across 9 states (and any other matters that arise in the future)
Leveraged advanced technology to facilitate data re-use, data reduction, and review efficiency

Key Results
Avoided duplicate collections, hosting, and review of 1.2M documents
Instantaneously provided production sets to all 14 matters, giving local counsel time to focus on unique matter documents before production
Set case teams up for success in future matters with a readymade data repository, workflow, and trained review team—exponentially increasing the client's ROI

Data Everywhere and No One to Turn To
A large healthcare provider was facing a growing number of separate but related litigations. With 14 ongoing matters in 9 different jurisdictions, the company's data was spread out across multiple ESI vendors and a variety of review databases. The hosting costs of this data sprawl were threatening to explode the company's overall budget. And with each case team and vendor taking their own approach to case strategy and review, in-house counsel was busy herding cats rather than managing overall litigation strategy. They came to Lighthouse desperately seeking a way to consolidate their overall eDiscovery approach to these matters.

A Streamlined Solution for Multiple Matters, from Hosting Through Review
Lighthouse seamlessly integrated all related matters into an advanced document repository. Backed by AI, this repository connected insights across matters and maximized work product reuse. Using this repository as a base, our experts built a sophisticated eDiscovery workflow for all 14 individual matters. Each process in every individual matter—from hosting to document review—was purposefully designed around insights and data from all other related matters. The result of this holistic approach was more efficient, consistent, and accurate eDiscovery across every matter—at a much lower cost than could ever have been achieved with a traditional siloed approach. Here's how we did it:

Faster, More Versatile Migration Capabilities
With our advanced technology and unique migration expertise, Lighthouse quickly migrated 11M documents from existing databases—including Relativity and non-Relativity—into an advanced AI-backed document repository. At the outset, the team worked closely with the client to understand the scope, types of data, and future needs, so that the migration flowed quickly and efficiently. This approach meant that the client only had to process data once, rather than paying for processing and re-processing data with every matter. Individual case teams also immediately reaped the benefit of data and insights from every related matter, including matters that had already been successfully litigated. This helped counsel anticipate issues in their own matters, while re-using review work product for greater efficiency and consistency—ultimately saving costs and improving matter outcomes.

One Hash for Unprecedented Cross-Matter Deduplication and Efficiency
Unlike other data storage repositories, the Lighthouse AI-backed repository adds a hash system unique to Lighthouse.
This technology normalizes documents before adding a hash value, extending our deduplication power and allowing us to identify all duplicate documents beyond what is possible using traditional deduplication technology. Our unique AI hash system also enabled faster insights into opposing party productions. The Lighthouse team used the system to compare newly received productions in one matter against documents previously received in other matters. Where matches were found, any issue coding one case team applied to a document was carried over and applied to new matching documents. This helped facilitate case team collaboration and a consistent legal strategy across matters.

Broad Bench of Data Experts
Rather than paying separate vendors for expertise in individual matters, in-house counsel and local case teams leaned on Lighthouse's unified bench of subject matter experts—including ESI processing and hosting, advanced analytics, and review specialists. These experts worked together as a dedicated client service team, providing a uniquely holistic view of the entire array of related matters. However, individual specialists tagged in to perform work only when their expertise was needed, ensuring that the company didn't rack up expensive invoices for consulting services they didn't need or use. When our experts were called in to help, they were able to identify areas for greater efficiency and cross-matter consistency that would have been impossible if the client had remained with a siloed approach to each matter. For example, before review began, Lighthouse review experts counseled individual case teams to implement a coding layout for each jurisdiction that facilitated work product reuse and consistency across matters. As new related matters come up, our experts will bring their deep institutional knowledge to continue to drive these types of unique efficiency and consistency gains.

A Strategic Approach Leads to Faster Reviews and Productions
Once data was migrated into the document repository, Lighthouse review experts designed one strategic review plan for all 14 matters that lowered costs and maximized data reuse and cross-matter insights. As part of this plan, Lighthouse created one national review database and separate jurisdiction-specific review databases. Then, Lighthouse experts used advanced AI and review technology to isolate a core set of 150K documents within the 11M documents housed in the repository that were most likely to be responsive across all jurisdictions. This core set was published to the national review database and fully reviewed by an experienced Lighthouse review team trained by our review managers to categorize each document for both national and jurisdictional responsiveness. After review, Lighthouse copied this strategic production set to each jurisdictional database. This approach kept hosting costs drastically lower for each individual matter, while providing all local case teams with an immediate first production, well ahead of production deadlines.
July 1, 2023
Case Study
Case-Study, client-success, AI, ai-and-analytics, analytics, artificial-intelligence, Big-Data, Corporation, Corporate, data-analytics, Data-Re-use, Data-Reuse, data-re-use, document-review, eDiscovery, litigation, Prism, PII, PHI, Healthcare, healthcare-litigation, PII, PHI, HIPAA-PHI, managed-review, document-review, review, TAR-Predictive-Coding, technology-assisted-review, TAR, Production, ediscovery-review, ai-and-analytics

Simplifying Complex Multi-District Document Review

A large healthcare provider faced a series of related matters requiring document review. Lighthouse designed and executed a single review workflow that provided accurate, consistent, and efficient productions.

Lighthouse Managed Review Results
Efficient, compliant productions across 14 matters in 9 states (and counting)
Nuanced document review performed by one experienced review team, eliminating the need to train multiple review teams
Case teams avoided re-reviewing 150K core documents by reusing 100K high-quality review decisions and redactions

A Perfect Storm of Review Complexities
A large healthcare provider was facing 14 related matters across 9 states. The initial corpus of documents numbered 11M, with each jurisdiction adding more. While each matter shared a core set of relevant issues, they all had their own unique relevancy scope and were being handled by different outside counsel and eDiscovery teams. The corpus was also littered with personally identifiable information (PII) that required identification and redaction by review teams before production.

Combining Expertise and Tech to Drive Efficiency
The company turned to Lighthouse because of our extensive experience working on complex document review. Our review managers developed a sophisticated workflow to reduce the number of documents requiring review and re-review across jurisdictions by leveraging advanced technology.

Custom Workflow Enables Work Product Reuse
To lower costs and maximize consistency across matters, Lighthouse created an overall document repository and review database, as well as separate jurisdictional databases. The team migrated all 11M documents into the document repository and used advanced AI and review technology to isolate a core set of documents that were most likely to be responsive across all jurisdictions. Our review managers efficiently worked with all outside counsel teams to validate this core set. They also suggested and implemented a coding layout for each jurisdiction to facilitate work product reuse and consistency across matters.

One Skilled Review Team and Review Process for All Matters
Our combination of managed review, advanced technology, and custom data re-use workflow resulted in a single document set that met all jurisdiction-specific production requirements. These documents were duplicated across all databases for immediate production in multiple matters. To get to this caliber of review, our review managers used technology to reduce the number of documents needing eyes-on review to 90K and trained an experienced review team on both universal and jurisdictional responsiveness. Technology was also used to expedite PII redaction (a simplified illustration of pattern-based redaction appears below) and propagate coding to the core set of 150K documents.

Unprecedented Review Time and Cost Savings
With Lighthouse's review approach, each case team had more freedom in how they structured their post-production workflows. Our approach also provided stricter control of data and enabled more accurate and predictable billing for the client. Further, all 14 matters now had an initial production ready at the push of a button. In addition to lowering costs, this gave local counsel additional time to assess case strategy, with the first production available in advance of agreed-upon deadlines.
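As a simplified illustration of pattern-based PII redaction like that referenced above, the sketch below substitutes placeholders for a few common PII patterns. The patterns are deliberately minimal assumptions; production redaction combines far broader pattern sets with validation and human QC.

```python
import re

# Simplified patterns for illustration; real redaction tooling uses
# broader pattern sets plus validation and reviewer QC.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a bracketed placeholder naming its type."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```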
Instantaneous Initial Production for Multiple Matters
Beyond the stellar review outcomes achieved across each matter, Lighthouse's strategic workflow and use of technology also saved the client an impressive $650K—a delightful surprise to the client, who was prepared to pay more for such a complex litigation series. As new related matters arise, the client can engage a trained and experienced review team ready to hit the ground running.
March 15, 2022
Case Study
Case-Study, client-success, Corporate, Corporation, eDiscovery, fact-finding, document-review, investigations, KDI, key-document-identification, keyword-search, insurance-industry, analytics, ai-and-analytics, ediscovery-review, ai-and-analytics

Lighthouse Streamlines a Complicated False Claims Investigation

Over the course of five months, Lighthouse delivered approximately 4,500 documents for review—out of the 2.3 million document review set—for a Fortune 100 health insurance provider.

The Challenge
Complex internal False Claims Act investigation
2.3M total documents for review
Five-month timeline and tight budget

Lighthouse Key Actions
Provided curated weekly deliveries of the most important, inclusive documents for review—with no redundant or duplicative versions
Compiled summary reports of each delivery (including highlights of high-priority information) to expedite counsel review
Out of 2.3M documents, identified and delivered just the 4,500 documents counsel needed to review in order to conduct a comprehensive legal analysis

Key Results for Counsel
Immediately gained a grasp on the relevant facts and timelines hidden within a massive review set—without wasting time reviewing irrelevant information
Quickly developed a deeper understanding of the underlying risks and nuances of the investigation, through consistent and iterative communication with Lighthouse search experts
Confidently completed the investigation on time and within budget—even after large volumes of new data were added mid-investigation

A Challenging Internal Investigation into False Claims Act Violations
A Fortune 100 health insurance provider was pursuing an internal investigation involving potentially improper diagnosis practices undertaken by a wholly owned provider group. The scope of the investigation included analysis of reimbursements processed across 20+ disease categories, potentially triggering False Claims Act violations. With 2.3M documents to review, it was unclear how the internal investigation would be completed within a constrained budget and timeline. Counsel reached out to Lighthouse for help.

Lighthouse Hands Counsel the Keys to a Focused, Efficient Investigation
A small team of Lighthouse information retrieval, legal, data science, and linguistic experts immediately began working with counsel to understand the specific allegations at issue, as well as catalogue the various sources of data that needed to be investigated. The team then designed and executed a battery of complex searches tailored to find instances of fraud or wrongdoing related to the allegations at hand. By staying in close communication with counsel, the Lighthouse team ensured that new search requirements and data sources were quickly integrated into the workstream to support fact development. On a weekly basis, Lighthouse delivered a streamlined set of documents responding to counsel's evolving theory of the case. These deliveries also included a detailed breakdown of the categories of documents identified each week, descriptions of relevant internal processes and policies, and flagging of high-priority documents of particular interest to counsel. Each delivery was distilled down to only the most inclusive, non-redundant versions of relevant documents. In addition to keeping pace with ongoing requests and deliverables, the Lighthouse team also re-executed previous searches to address waves of new data rolling in midway through the engagement.

A Faster and More Comprehensive Investigation Resolution
Over the course of five months, Lighthouse delivered approximately 4,500 documents for review—out of the 2.3 million document review set. The Lighthouse deliveries encompassed everything counsel needed to know in order to resolve their investigation—and nothing more.
The team accomplished this precision through deep subject matter expertise surrounding the allegations and underlying issues at play, consistent and effective communication with counsel, expert topic-based searching, and additional proprietary data analytics to remove unnecessary duplicative content. By the end of their short engagement with Lighthouse, counsel had developed a comprehensive understanding of the pertinent risk areas and confidently completed their investigation—on time and within budget.
August 15, 2022
Case Study
Case-Study, client-success, Corporate, Corporation, eDiscovery, fact-finding, document-review, investigations, KDI, key-document-identification, keyword-search, insurance-industry, analytics, ai-and-analytics, fraud-detection, ediscovery-review, ai-and-analytics

Lighthouse Uncovers Key Evidence in Fast-Paced Employee Fraud Investigation

Lighthouse experts uncover key evidence in just two weeks, eliminating 97% of the document set.

The Challenge
Complex internal investigation into potential employee fraud
627K total documents
Two-week timeline

Key Results for Counsel
Confidently completed a complex fraud investigation in just two weeks—without fear of missing critical information
Significantly mitigated risk to the company through the identification of previously unknown internal control gaps

Lighthouse Key Actions
Executed 22 strategic searches, based on expert analysis, to identify all relevant evidence of employee fraud and misconduct
Uncovered hidden information, previously unknown to counsel, that revealed additional acts of fraud, embezzlement, and misconduct by targeted employees—as well as potentially problematic internal control gaps
Out of 627K documents, identified and delivered just the 16K documents counsel needed to review in order to conduct a comprehensive fact investigation

A Complex Employee Fraud Investigation
The audit division of a health insurance provider was pursuing an internal investigation involving potentially concealed employee conflicts of interest with external vendors. The allegations involved possible defrauding of the parent organization through noncompliant contract and billing practices, as well as embezzlement of membership incentives for personal use and gain. With approximately 627K documents to review on an exceptionally tight timeline of two weeks, it was unclear how a comprehensive internal investigation could be completed with proper due diligence. Counsel reached out to Lighthouse for help.

Lighthouse Experts Quickly Uncover Key Evidence
A small team of Lighthouse information retrieval, legal, data science, and linguistic experts immediately began working with counsel to understand the specific allegations at issue. As part of this work, the Lighthouse team catalogued the various sources of data that needed to be investigated. Based on counsel's theory of the case, the team devised eight main search themes that would enable them to find instances of fraud or wrongdoing related to the allegations at hand. Over the course of the short two-week engagement, the Lighthouse team completed 22 discrete searches with corresponding deliveries based on expert analysis of the eight priority search themes. Each delivery was distilled down to include only the most inclusive, non-redundant versions of relevant documents, so counsel wasn't bogged down reviewing a slew of duplicative or irrelevant documents. Over the course of searching, Lighthouse experts quickly uncovered new key information that was previously unknown to counsel. This information revealed a picture of internal control gaps used to circumvent company policies, leading to problematic vendor contract arrangements and suspect billing practices. Separately, the Lighthouse team also uncovered details of relevant personal circumstances of targeted employees. This new information shed light on the potential motivation for bad acts, including substantial personal debt, resentment of parent company controls, and personal relationships with superiors in the management reporting structure.

Significant Risk Mitigation and Faster Investigation Resolution with Lighthouse
In just two weeks, Lighthouse delivered a targeted set of approximately 16K documents, out of a total 627K in the review set.
The Lighthouse deliveries represented everything counsel needed to know about the possible fraudulent employee activity—including concealed information that posed significant risk to the company if it had been left undiscovered. The team was able to accomplish this precision through deep subject matter expertise regarding the fraud allegations, comprehensive metadata analysis and emotional content detection, consistent and effective communication with counsel, expert topic-based searching, and exhaustive content deduplication. With Lighthouse's partnership, counsel quickly gained a thorough understanding of the internal controls, potential fraud, and the embezzlement issues at play—ultimately enabling them to significantly mitigate risk and complete their investigation in just two weeks.
December 30, 2021
Case Study
Case-Study, client-success, eDiscovery, TAR, TAR-Predictive-Coding, investigations, analytics, predictive-coding, privilege, privilege-review, ediscovery-review, ai-and-analytics

The Benefits of Best-in-Class Technology on a High-Stakes Matter

By partnering with Lighthouse, clients reduce their data and save millions of dollars while ensuring quality and security.

What They Needed
Recently, Lighthouse was brought in by the Department of Justice (DOJ) of a large western US state, which had to produce data for a high-stakes, multi-million dollar breach of contract matter. The client was dissatisfied with their current eDiscovery panel and was looking for a new provider who could help centralize eDiscovery with document review, use advanced technologies to reduce data, and ensure quality and security.

How We Did It
To kick things off, Lighthouse and the client team met to discuss the key goals and expected outcomes of this particular case. It became very clear that the client wanted to reduce data in a defensible way, so our team of legal and technology experts got to work. At the start of the matter, our team collected and processed more than 3.5TBs of client source data (i.e., 9M documents), as well as 98K documents that had been produced by opposing counsel and 135K documents that had been produced by 22 various third parties. In addition, we collected approximately two dozen mobile devices and advised and assisted outside counsel on a declaration defending the process for collection and production of mobile devices. Next, we brought in best-in-class technology. We leveraged our search consulting team to apply our early case assessment (ECA) tool to the data after processing, and less than 14% of the original corpus (i.e., 1.2M documents) was promoted from the ECA database. Within the ECA environment, we assisted the client with culling and search term iteration, and helped the client develop and sample search terms for use during negotiations with opposing counsel. After agreeing upon and validating search terms with opposing counsel, the result set was promoted from ECA for review. Within the review environment, we instituted a technology assisted review (TAR) workflow to reduce the overall review population to 420K documents (a 65% reduction after applying ECA) and prepared defensibility reports for opposing counsel. Finally, we used our thread suppression technology to suppress duplicative emails. We then developed a custom automated workflow to incorporate confidential de-designation decisions from 16 co-defendants on individual documents and reproduced them. An additional 155K documents were loaded directly to review without culling. For review of the remaining ~500K records, we then implemented our managed review solution—managing a review team (provided by our trusted review partner) through a very successful first pass review, privilege review, and privilege log creation process.

The Results
Ultimately, the client produced 260K documents in this matter and saved significant time and money. Lighthouse was able to reduce the original corpus by more than 95% through the use of best-in-class technology and our legal, review, and technology experts. Because of the service quality, support, breadth of capabilities, and expertise exhibited during the matter, the client has since migrated several active matters from different providers to Lighthouse.
April 1, 2022
Case Study
Case-Study, client-success, Corporate, Corporation, eDiscovery, TAR, TAR-Predictive-Coding, ai-and-analytics, analytics, predictive-coding, healthcare-litigation, Healthcare, Processing, machine-learning, ediscovery-review

Lighthouse Achieves Review Efficiency and Cost Control for a Global Healthcare Company

Lighthouse partners with a healthcare company, saving $145K in document review costs after reducing review time by 90% through a custom review process.

How We Did It

Initial Processing

Lighthouse used our proprietary processing automation to ingest, load, and deduplicate a total of 690K documents. Our deduplication process immediately achieved a 25% data reduction by removing 175K documents.

ECA Culling and Search Term Iteration Results

Next, Lighthouse applied our customized culling and search term iteration processes to the 143K eligible documents and families. This process removed 81K documents, reducing the review population by over 55%.

Thread Suppression and Proprietary Review Technology Results

Lighthouse then implemented a customized workflow that combined email thread suppression with our proprietary review technology to identify the most unique documents. This process removed a total of 31K documents, reducing the review population by another 50%.

Lighthouse TAR and Advanced Analytics Results

After the culling process, Lighthouse's Review & Advanced Analytics team guided counsel through a Continuous Active Learning TAR workflow to find relevant documents. Once we reached a point of diminishing returns, we leveraged advanced analytics such as clustering, categorization, and concept search to ensure that no relevant documents were left behind. Our TAR and advanced analytics removed 17K documents, representing another 50% in data reduction.
February 1, 2023
Case Study
Case-Study, client-success, AI, ai-and-analytics, AI-Big-Data, Corporate, Corporation, eDiscovery, eDiscovery-Migration, Prism, Processing, Project-Management, Healthcare, ediscovery-review, ai-and-analytics

Lighthouse Uses AI to Complete a Seamless, Customized Data Migration

Lighthouse's proprietary AI technology solves a unique data deduplication challenge while migrating over 25 terabytes for a large healthcare system.

Key Results

In five months, Lighthouse migrated four databases, containing 25 TB of data, while keeping the databases active for review and production on current matters. Leveraging our AI technology, Lighthouse created an innovative solution for a large volume of Lotus Notes files originally processed as HTML files by a legacy processing tool. This solution ensured that any new Lotus Notes files would deduplicate against the migrated data, regardless of the file type or the tool used for processing.

A Challenging Data Deduplication Problem

A large healthcare system had been hosting its data (over 25 TB across four databases) on another vendor's platform for nearly a decade. The company knew it was time to modernize its eDiscovery program with Lighthouse. To do so, all 25 TB would need to be migrated to Lighthouse for hosting and future processing. However, in addition to the data migration, the company also had a unique deduplication challenge stemming from the previous vendor's original processing tool.

The company's data had originally been processed with the vendor's legacy processing tool, which processed Lotus Notes data as HTML files rather than the more modern EML format. Because these files had been rendered as HTML, whenever duplicate Lotus Notes files were added to the database and processed with a more modern tool, the resulting EML files would not deduplicate against the older HTML files in the databases. With over half their data consisting of Lotus Notes files processed by the older tool in HTML format, the company was concerned that this issue would significantly increase review cost and slow down review. Thus, in addition to the overall migration, the company came to Lighthouse with an unfortunate Catch-22: in order to modernize its processing and eDiscovery capabilities, it was losing the ability to deduplicate a majority of its data with each new ingestion.

Lighthouse Migration Expertise

Because of the volume of new clients moving to Lighthouse for eDiscovery support, Lighthouse has developed an entire practice group dedicated to data migration. This group is adept at creating customized solutions to the unique challenges that often arise when migrating data out of legacy systems. The team works closely with each client to understand the scope, types of data, challenges, and future needs so that the data migration process is seamless and efficient. The Lighthouse migration team quickly got to work gathering information from the healthcare company, paying particular attention to the Lotus Notes deduplication issue. Once all relevant information was gathered, Lighthouse worked with stakeholders from the organization to form a comprehensive migration plan that minimized workflow disruption and included a detailed schedule and workflow for future data. In the process, Lighthouse also developed a custom solution for the Lotus Notes issue using our proprietary AI technology.

An Innovative Solution: Lighthouse AI

Lighthouse's advanced AI technology can create a unique hash value for all data, no matter how it was originally processed. The Lighthouse migration team leveraged this technology to create a unique hash value for the Lotus Notes files that were originally processed as HTML files.
That hash value could then be matched against any new Lotus Notes files added to the database by the company, even when those files were processed as EML files (the sketch following this case study illustrates the general idea). With this proprietary workflow, the healthcare company was able to move seamlessly to Lighthouse's eDiscovery platform, which was better equipped to serve its eDiscovery needs, without losing the ability to deduplicate its data.

Set Up for Success

In just five months, Lighthouse completed a seamless migration of the healthcare company's data by creating a custom migration plan that minimized blackouts and kept all databases up and running. Importantly, Lighthouse also leveraged its proprietary AI to create an innovative solution to a complex problem, ensuring continued deduplication capability and reduced discovery costs.
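The case study does not disclose how Lighthouse's AI actually computes that hash, but the general idea of format-independent deduplication can be shown with a minimal, hypothetical sketch: fingerprint each message from its normalized content (sender, recipients, sent date, subject, body text) rather than from the bytes of its container, so the same Lotus Notes message hashes identically whether it was rendered to HTML years ago or converted to EML today. All names and sample data below are invented for illustration.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so HTML- and EML-derived text compare equally."""
    return re.sub(r"\s+", " ", text).strip().lower()

def content_hash(sender: str, recipients: list[str], sent: str,
                 subject: str, body: str) -> str:
    """Fingerprint a message from its normalized content, not its container format."""
    canonical = "|".join([
        normalize(sender),
        ",".join(sorted(normalize(r) for r in recipients)),
        sent,                      # assume an ISO-8601 timestamp
        normalize(subject),
        normalize(body),
    ])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A legacy HTML-rendered message and a newly processed EML copy of the same
# message produce the same fingerprint, so the new copy deduplicates out.
seen = set()
for msg in [
    {"sender": "a@example.com", "recipients": ["b@example.com"],
     "sent": "2014-03-02T09:15:00Z", "subject": "Q1 forecast",
     "body": "Numbers attached.\n\nThanks"},     # from the legacy HTML load
    {"sender": "A@Example.com", "recipients": ["B@example.com"],
     "sent": "2014-03-02T09:15:00Z", "subject": "Q1  forecast",
     "body": "Numbers attached. Thanks"},        # same message, EML processing
]:
    fingerprint = content_hash(**msg)
    print(fingerprint[:12], "duplicate" if fingerprint in seen else "new")
    seen.add(fingerprint)
```

Because the fingerprint is derived from normalized content, casing, whitespace, and rendering differences between the HTML and EML versions do not defeat deduplication.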
November 15, 2021
Case Study
Case-Study, client-success, eDiscovery, self-service, spectra, Spectra, analytics, Processing, managed-review, document-review, review, Law-Firm, ediscovery-review

Top-Ten Global Law Firm Overcomes Budgetary Challenges

Top ten global law firm revitalizes their eDiscovery program with Lighthouse Managed Services for one predictable, recurring price.

What They Needed

After years of carrying hefty infrastructure costs and operating with limited access to emerging eDiscovery solutions, one of the ten largest law firms globally decided to look for a new eDiscovery partner that could advance their existing eDiscovery program without the burden of unpredictable, piecemeal pricing and sub-par technology. In particular, the firm was interested in a predictable cost model that would provide access to forensics, information governance, and eDiscovery experts, as well as innovative new analytics and chat technology. To further complicate things, they had less than two months to migrate all of their existing data to the newly selected vendor before they would have to renew payments with their existing vendor.

How We Did It

Lighthouse Managed Services was a natural fit for this cutting-edge client. We were selected as the firm's eDiscovery provider because it was clear we could provide a wide range of subject-matter experts, access to best-in-class technology (particularly our proprietary Spectra® and SmartSeries™, as well as third-party tools like Nuix, Relativity, and Brainspace), and deliver within their tight timeline requirements, all for one predictable, recurring price.

After the selection process, Lighthouse immediately tackled the migration of over 130 cases and ~13 TB of the firm's data from their existing vendor's environment to the Lighthouse environment within the 45-day requirement. Once the cases were restored, we worked with the firm to develop custom workflows that would allow new data to flow through active migrated matters seamlessly, without loss of deduplication, matter-level settings, or work product. We then developed a comprehensive eDiscovery playbook for our client detailing customized, repeatable, and defensible eDiscovery processes for every stage of the EDRM. We also began technology training sessions to allow our client to effectively utilize their access to tools like Relativity and Brainspace, as well as our proprietary Spectra and SmartSeries technology. Further, Lighthouse developed a custom Relativity template to ensure the user experience in Relativity mirrored the law firm's workflows for continuity. We scheduled bi-weekly meetings with the Lighthouse Product Development team to keep the firm's team abreast of new features on the horizon and to give the firm an opportunity to influence the overall product roadmap. All of this work was completed under a predictable, recurring pricing model, with custom reports around the firm's matters and metrics.

Results

Overall, Lighthouse Managed Services surpassed all of the firm's expectations, completely revitalizing their eDiscovery program under one predictable pricing model. We successfully completed the entire data migration within 45 days, without any disruption to case teams. Once migrated, our client was elated with the access Lighthouse provided to the best technology on the market, as well as the comprehensive training we offered their teams, which enabled them to leverage these tools more effectively. In particular, Spectra enabled the firm to administer matters autonomously while getting data into a review platform at a much greater speed than ever before.
Since launch, this client has started over 90 new matters in Spectra, leveraging the analytics, predictive coding, automated redaction, privilege log creation, and chat messaging tools that make our self-service solution the best in its class. Providing all of these services under a recurring, predictable pricing model allowed this client to successfully manage cost recovery and integrate with their client billing seamlessly.
June 1, 2021
Case Study
Big-Data, Case-Study, collections, eDiscovery, digital forensics, Law-Firm, Processing, Production, Project-Management, ediscovery-review, digital forensics

Big Data, Impossible Timeline, Successful Results

Lighthouse collected, processed, and imaged 550 GB of data in less than 96 hours, saving a client from an eight-figure sanction.

What They Needed

Lighthouse's client, an Am Law 100 firm, had to respond to a request for production in a highly sensitive matter. The client originally contracted another eDiscovery service provider for collection, processing, and production. Much of the collected data was corrupt, and the other service provider was unable to handle a large majority of it. Facing an eight-figure sanction if the production deadline was missed, the client abandoned their provider and contacted Lighthouse. Lighthouse had 14 days to resolve the corrupt data, process the data, identify and segregate the already-reviewed data, provide the unreviewed data for review, and produce the responsive data. Complicating matters even further, the data set was sizeable (550 GB) and the client needed at least a week to review the data before production.

How We Did It

Collect, Analyze, Repair

A close inspection of the data revealed that another on-site collection would be necessary to deal with the corrupt data. On February 9, two forensic experts from Lighthouse collected three email exchange servers totaling 550 GB. Lighthouse was able to repair some of the corrupt data; however, some data was corrupt at the source. Because this corrupt data could not be allowed to interfere with the production to the government, Lighthouse processed the non-corrupt data overnight. The client then requested additional searching and culling for a specific list of custodians.

Reduce, Process, Deliver

As a result of the way the data was stored, Lighthouse had to navigate through a large number of files to identify the data belonging to the list of custodians. Ultimately, Lighthouse was left with 245 GB, which it further culled and filtered. Lighthouse's experts then segregated 8,000 documents that the client had previously reviewed so that the client did not have to waste time re-reviewing them. With the deadline looming, Lighthouse immediately imaged the documents for review, providing the client with just over 25,000 images for review on February 13.

Results

As a result of Lighthouse's speed and ability to handle the corrupt data, the client avoided an eight-figure sanction. In a matter of 96 hours, Lighthouse forensically collected 550 GB from three email exchange servers, extracted 245 GB from those servers, identified 8,000 documents in a corrupted media environment, and imaged over 25,000 documents.
June 25, 2021
Case Study
Case-Study, client-success, eDiscovery, self-service, spectra, Spectra, ai-and-analytics, analytics, Processing, TAR-Predictive-Coding, technology-assisted-review, TAR, Law-Firm, ediscovery-review,

Law Firm Goes from Keeping Up to Getting Ahead with New In-House eDiscovery Software

A prominent law firm leveraged a cloud-based software solution to increase efficiency and scale, resulting in significant cost savings.

What They Needed

A mid-sized East Coast law firm, known for its expertise and experience in complex and high-stakes matters, was looking for new software to replace its in-house legacy technology. Their in-house tool did not provide the level of sophistication or throughput the team needed to continue to scale their work for their clients. In assessing their potential new partner, the firm required access to best-in-class technology, in particular Relativity and Nuix, as the firm's employees were already familiar with these platforms. In addition, they wanted to leverage automation to build repeatable processes that would save both themselves and their clients time and money.

How We Did It

Lighthouse Spectra was selected for its simple and intuitive interface that allows users to internally manage client matters across best-in-class technology, including Relativity, Nuix, and even Brainspace. With Spectra, the firm can now start matters immediately, without having to go through vendor solicitation or statement-of-work processes, creating real time savings. And the monthly subscription price for Spectra gave them more transparency around billing and greater cost control to help them stay within their budget. The onboarding and training processes were quick, due to the experience of the internal team coupled with Spectra's ease of use. After the initial deployment of Spectra, the firm started processing client data through the tool immediately. They were able to get these matters through processing (Nuix) to review (Relativity) within a few hours, rather than an entire day or more, as was typical with their previous in-house solution.

"We can go from soup to nuts without having to reinvent the wheel each time. It is truly self-service." — Law Firm

The Results

Soon after onboarding, the firm took on a couple of quick-turn, complex matters that they were able to handle more quickly due to the speed and scale of Spectra, as well as the support of the Spectra team. In one instance, they received a request late in the work day that needed to be turned around within a short period of time. Prior to deploying Spectra, that would have taken hands-on experience and a day's worth of time. With Spectra, they were able to process the data as soon as they received it, and it was available for review within a few short hours. In another instance, the firm received a request with a pressing deadline where the document set consisted of approximately 95% foreign-language text. Quickly translating the text to English was imperative to the firm's success. To solve this problem, the Spectra team pointed the firm to a machine translation tool that easily integrates with Spectra. By deploying the integrated translation service on the workspace, documents submitted for translation were loaded back into the workspace as easily as if performing a mass edit. This provided an easy solution for this particular matter, and now that the tool is integrated, the feature is available to the firm on demand. By moving to Spectra, the law firm was able to leverage best-in-class technology, gain more transparency and control around the entire eDiscovery process, create efficiencies, and reduce costs for themselves and their clients.
Leveraging Spectra, the law firm can now do more with less and scale their business to support their clients' growing needs.
February 1, 2022
Case Study
Case-Study, client-success, financial-services-industry, Corporate, Corporation, eDiscovery, self-service, spectra, Spectra, analytics, ediscovery-review

Penningtons Manches Cooper Takes Control of their eDiscovery Process with Lighthouse Spectra

With Spectra, Penningtons Manches Cooper accelerated their eDiscovery workflow and created a more efficient and cost-effective process.

What They Needed

Penningtons Manches Cooper LLP, a leading UK and international law firm, was looking for an innovative solution to enable more efficient and cost-effective management of their increasing eDiscovery needs. At the same time, they also wanted a solution that would not require a large investment in hardware, additional personnel, or training. When the team at Penningtons Manches Cooper reached out to Lighthouse about their needs, we suggested Spectra, our cloud-based, self-service eDiscovery solution, which would enable the team to easily run their matters in a more efficient and predictable manner. Additionally, the team at Penningtons Manches Cooper wanted the option of leveraging experienced, knowledgeable, on-demand assistance as required. Because Spectra offers the ability to seamlessly transition a matter from self-service to Lighthouse's full-service team of eDiscovery experts, it gave the Penningtons Manches Cooper team reassurance that there would always be help on hand should it be needed. The team at Penningtons Manches Cooper agreed that Spectra was the right eDiscovery solution for them.

How They Did It

Penningtons Manches Cooper partnered with Lighthouse to deploy Spectra, which was implemented within three months from initial proof of concept to rollout with live matters. Primary areas of focus during the implementation were training, process design, and internal change management. The project began with roundtable sessions to fully understand the scope and ensure that deployment was customized to fit Penningtons Manches Cooper's requirements, deliverables, and goals. And because Spectra is a cloud-based solution, there was no capital expenditure or additional IT resourcing required for implementation. This allowed for a flexible approach, fast implementation, and low ongoing maintenance for Penningtons Manches Cooper.

Once the tool was implemented, the team at Penningtons Manches Cooper identified a suitable matter to be used in a proof of concept. Lighthouse trained key Penningtons Manches Cooper personnel on how to use Spectra, and together the two teams worked to create a scalable and repeatable workflow for particular work types. All items were recorded in a bespoke playbook, which fully documents Spectra's capabilities and processes as well as specific Penningtons Manches Cooper requirements. Next, Lighthouse provided training to the wider Penningtons Manches Cooper team on Spectra, Brainspace, and Lighthouse's proprietary SmartSeries® tools to enable the firm to leverage automated redaction, chat, and other emerging solutions. Due to the simplicity and on-demand nature of Spectra, the team at Penningtons Manches Cooper realized a one- to four-hour reduction in the time it takes to create a matter and upload data into Relativity. Further, Lighthouse developed a custom Relativity template to ensure the user experience in Relativity is mirrored across matters and complements the firm's workflows. Following the successful trial period, Penningtons Manches Cooper has identified and managed many other matters in Spectra with very little external support.
Setup of each new matter has been reduced significantly, in some circumstances by up to two to three days, because there has been a significant reduction in the number of steps required to instruct external eDiscovery vendors: no need to gather price proposals, no delay while vendors run conflict checks, and no additional contract negotiation. As a consequence, each legal team was typically able to begin reviewing documents on the same day the data was received by the firm. In conjunction with the above, predictable and recurring billing practices were implemented and custom reports were developed around the firm's matters and metrics. This, in turn, allows Penningtons Manches Cooper to manage cost recovery and integrate billing for a more seamless and efficient process.

The Results

Penningtons Manches Cooper partnered with Lighthouse to roll out Spectra, which enabled their team to control the process from the very start and create efficiency and predictability of cost and process. By using Spectra, the team at Penningtons Manches Cooper was able to create matters in Relativity and Brainspace and to upload and process data quickly, all within a simplified and intuitive interface. The use of best-in-class technology, combined with repeatable processes and in-house expertise, created a tangible benefit, ensuring eDiscovery and document review are completed with minimal cost, savings that can be passed on directly to the client.
December 1, 2022
Case Study
Case-Study, Corporate, Corporation, eDiscovery, self-service, spectra, Spectra, ai-and-analytics, analytics, Processing, TAR-Predictive-Coding, technology-assisted-review, TAR, Healthcare, ediscovery-review

Fortune 500 Company Saves $500K+ with New In-House eDiscovery Software

Lighthouse Spectra helps a large healthcare organization gain control, pricing transparency, and efficiency in the eDiscovery process.

What They Needed

A large healthcare organization was looking to solve their eDiscovery challenges around speed and cost. Specifically, they needed to increase their overall efficiency and have more control over their matters, with truly transparent and lower eDiscovery-related costs.

How We Did It

Lighthouse Spectra was chosen to help achieve these key goals. Spectra is a self-service, on-demand eDiscovery tool with a transparent, subscription-based pricing model. Spectra users can also access a full-time project management team at Lighthouse whenever needed, all for one predictable price. Spectra onboarding was tailored to the users' needs and focused on teaching users how to use Spectra itself, as well as when and how to use Brainspace, an analytics engine available inside the platform. Since Spectra is built with an intuitive interface, it only took a few short trainings over the course of a few weeks for the users to become comfortable with it. The Lighthouse team also ensured that Relativity and Spectra were customized to the organization's specific needs. Our teams set up all customized permissions and views within Relativity and worked with the organization to create custom Relativity templates to apply their standard coding palettes, rule-based coding propagations, pre-baked saved searches, standard views and layouts, imaging profiles, and more. Additionally, the Lighthouse team assisted in building a continuous multi-model learning (CMML) workflow for their team to leverage within Spectra. Once setup was complete, the organization immediately started leveraging Spectra to process their data and run search terms as needed on a variety of case types, including labor and employment cases, internal investigations, and OIG requests.

The Results

By moving to Spectra, the healthcare organization gained more control over their eDiscovery processes, created more efficient workflows, and achieved significant cost savings with transparent and predictable pricing. Since deploying the tool, the organization found that using the search and analytics capabilities of Spectra reduced the volume of natives to just 4.5% of the total hosted volume, cutting the count of documents being reviewed by roughly 95%. The custom Relativity templates prevent the need to reinvent the wheel with each new matter and drive consistency across their portfolio. Further, the CMML workflow allows the organization to prioritize review of documents that are most likely to be responsive, as well as minimize the number of documents that go to review. Both of these enhancements allowed the organization to increase their overall speed from collection to production while lowering their overall eDiscovery-related costs. Through these new workflows and processes, the healthcare organization has achieved both defensibility and affordability and reduced review time from days to hours.
This has resulted in an overall savings of $500K in their first year with Spectra.
February 1, 2023
Case Study
Case-Study, client-success, Antitrust, eDiscovery, TAR, TAR-Predictive-Coding, Law-Firm, HSR-Second-Requests, investigations, Mergers, ai-and-analytics, AI-Big-Data, artificial-intelligence, AI, Acquisitions, analytics, predictive-coding, Prism, privilege, privilege-review, name-normalization, microsoft, Emerging-Data-Sources, digital forensics, collections, ediscovery-review, ai-and-analytics, antitrust, chat-and-collaboration-data

Global Law Firm Partners with Lighthouse to Save Millions During Government Investigation

Lighthouse partners with a global law firm to meet a 60-day production deadline for an 11.5 million-document population, saving the firm millions.

What They Needed

A global law firm was representing a large analytics company being investigated by the Federal Trade Commission (FTC) for antitrust activity. The company faced an extremely aggressive production deadline: approximately 60 days to collect, review, and produce responsive documents from an initial data population of roughly 11.5M documents.

How We Did It

The firm partnered with Lighthouse to create a workflow that executed multiple work streams simultaneously (collections, processing, TAR, privilege review, and logging) to ensure the company could meet the production deadline. Lighthouse expert teams managed the entire process, implementing daily standup calls and facilitating communication between all stakeholders to ensure that each workflow was executed correctly and on time. Lighthouse clients that leverage our AI technology to its full potential can realize even greater cost savings and efficiency; in this case, for example, the firm could have removed close to 420K additional documents from privilege review that our AI accurately deemed (as verified in the QC process) highly unlikely or unlikely to be privileged.

The Lighthouse team also provided strategic and defensible review methods to attack data volume and increase overall efficiency throughout the project, including technology-assisted review (TAR) and email thread suppression in combination with our proprietary AI technology and privilege log application. The work streams that Lighthouse designed and executed to reduce the time, burden, and expense of review included:

Lighthouse Forensic Collection: Lighthouse's dedicated forensic team implemented a workflow to perform all initial collections, as well as all refresh collections, across M365 mailboxes, Teams data, OneDrive, and SharePoint.

TAR 1.0: Lighthouse implemented predictive coding via a TAR 1.0 workflow to systematically find and remove non-relevant documents in a defensible manner. Non-relevant documents that fell below the cutoff score were removed from the review population to reduce privilege review.

Non-TAR Review: Lighthouse experts conducted a detailed file analysis on documents that could not be scored by the TAR model, removing non-responsive documents from eyes-on responsiveness review.

Email Threading: Once TAR 1.0 reached stability and a cutoff score was achieved, Lighthouse applied email thread suppression to the documents above the cutoff score to further decrease privilege review and the production set overall (a simplified sketch of this suppression logic follows this case study).

Managing Teams Data: The Lighthouse team leveraged our proprietary chat tool to deduplicate Microsoft Teams data. Using the tool, the team stitched Teams messages back together in a format that allowed outside counsel to easily see the conversation in its totality (e.g., who was part of the thread, who entered or left the chat room, who said what, and when). The tool then integrated and threaded chat messages, with search and filtering capabilities, for review directly in Relativity.
Privilege Review: Even as collections, TAR 1.0, email threading, and document review workflows were ongoing, the Lighthouse advanced analytics team combined technology with their expertise to drastically reduce the privilege review set and guard against inadvertent production of privileged documents:

Lighthouse Strategic Privilege Reduction: Lighthouse data reduction experts worked with outside counsel to analyze the data and identify large categories of documents that could be safely removed from privilege review, such as two large tranches of calendar items that had been pulled into the privilege review. Lighthouse also ran a separate header-only privilege screen and located a pattern in the privilege hits, which outside counsel confirmed were not privileged and removed from privilege review.

AI-Enabled Privilege QC: To minimize risk and increase the efficiency of privilege review, Lighthouse deployed our advanced AI technology, which uses multiple algorithms to analyze the text and metadata of documents, enabling highly accurate privilege predictions. First, it analyzed the entire review workspace and identified additional privileged documents that were not picked up by the conventional privilege screen approach. Then, the tool was used in privilege review QC workflows, where it helped reviewers overturn first- and second-level privilege calls.

Privilege Logging Application: Lighthouse also leveraged our privilege logging application to automate privilege log generation, saving outside counsel significant time and driving consistent work product in creating their privilege log.

The Results

Lighthouse's forensic collection team collected roughly 11.5M documents from more than 600 unique datasets and over 90 custodians, spanning M365 mailboxes, Teams data, OneDrive, and SharePoint sources. Lighthouse's TAR 1.0 workflow then dramatically reduced the document population for privilege review, ultimately removing over 6M documents in full families from review and delivering a savings of nearly $6.2M. The Lighthouse team's detailed file analysis of the non-TAR universe resulted in an additional 640K files removed from responsiveness review, close to a 90% reduction in the non-TAR review volume, delivering a savings of roughly $640K. Our email thread suppression process then removed another 1.1M documents from review (for a savings of $1.1M), while the Lighthouse proprietary chat tool removed over 63K Teams items and generated over 200K coherent transcript families from 1.3M individual messages.
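Lighthouse's thread suppression technology is proprietary and its internals are not described here, but the underlying concept, often called inclusive-email analysis, can be illustrated with a minimal, hypothetical sketch: within a thread, a message whose text is wholly carried forward in a later reply's quoted history does not need separate review, because reviewing the later, "inclusive" message covers it. The field names and sample data below are invented.

```python
import re
from collections import defaultdict

def normalize(body: str) -> str:
    """Strip quote markers ('>') and collapse whitespace so quoted history
    compares equal to the original message text."""
    no_quotes = re.sub(r"^[>\s]+", "", body, flags=re.MULTILINE)
    return re.sub(r"\s+", " ", no_quotes).strip().lower()

def inclusive_emails(emails):
    """Keep only 'inclusive' messages: those whose text is not wholly carried
    forward inside another message of the same thread."""
    by_thread = defaultdict(list)
    for e in emails:
        by_thread[e["thread_id"]].append(e)

    keep = []
    for thread in by_thread.values():
        for e in thread:
            text = normalize(e["body"])
            contained = any(
                other is not e and text in normalize(other["body"])
                for other in thread
            )
            if not contained:
                keep.append(e)
    return keep

emails = [
    {"id": 1, "thread_id": "T1", "body": "Can you send the draft?"},
    {"id": 2, "thread_id": "T1",
     "body": "Attached.\n> Can you send the draft?"},
    {"id": 3, "thread_id": "T1",
     "body": "Thanks!\n> Attached.\n> > Can you send the draft?"},
]
print([e["id"] for e in inclusive_emails(emails)])  # -> [3]; messages 1 and 2 are suppressed
```

Real-world implementations work on parsed message segments and metadata rather than raw substring checks, but the effect is the same: only the most inclusive messages in each thread go to review.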
April 1, 2023
Case Study
Case-Study, client-success, Antitrust, eDiscovery, TAR, TAR-Predictive-Coding, Law-Firm, HSR-Second-Requests, investigations, Mergers, ai-and-analytics, AI-Big-Data, artificial-intelligence, AI, Acquisitions, analytics, predictive-coding, Prism, privilege, privilege-review, tech-industry, ediscovery-review, antitrust, ai-and-analytics

Saving Millions in a Demanding HSR Second Request

Cleary Gottlieb and Lighthouse save millions of dollars and thousands of hours in an HSR Second Request for a Fortune 500 company.

What They Needed

A global Fortune 500 electronics company received an HSR Second Request from the Department of Justice (DOJ), with an extremely aggressive timeline to reach substantial compliance. They engaged Cleary Gottlieb ("Cleary"), a global, technology-savvy, and innovative law firm with extensive experience handling challenging Second Requests. After Cleary led negotiations with the DOJ to reduce the scope of the investigation, the client was faced with 3.3M documents to review, a significant subset of which were CJK-language documents that would require expensive and time-consuming translation. To further complicate matters, the DOJ and Cleary remained engaged in ongoing scope negotiations, resulting in additional data being added throughout the project. Cleary knew that conventional TAR technology was not capable of evaluating a dataset with ever-changing review parameters.

How Cleary and Lighthouse Did It

CJ Mahoney, counsel and head of the eDiscovery and litigation technology group at Cleary, has extensive experience working on complex HSR Second Requests and has pioneered a number of analytics-driven methods to reach substantial compliance. Based on prior joint success in innovating new ways to use this technology to improve privilege analytics, CJ immediately saw the potential of Lighthouse's proprietary AI technology for this challenge. Together, CJ and the Lighthouse data scientists developed a unique training workflow to achieve highly precise responsiveness predictions on this challenging dataset, and CJ secured the DOJ's first-ever approval of this workflow with Lighthouse's proprietary AI technology.

Immediately after approval, responsiveness and privilege analysis and review began simultaneously, enabled by AI technology. For responsiveness, the teams utilized an active learning TAR workflow wherein subject matter experts reviewed a control set of randomly selected documents. After only a few training rounds, the system reached stability and began scoring the remaining dataset for responsiveness. A privilege classifier was built from 20K previously confirmed privilege calls and applied to score all documents in the privilege workspace. The teams used a combination of the analytic results and privilege terms to identify potentially privileged documents. All documents within this set that were scored as "highly likely to be privileged" were immediately routed to reviewers for review and privilege logging. Conversely, documents scored as "unlikely to be privileged" were removed from privilege review after Cleary's attorneys verified the accuracy of the results using a random sample. Further, the teams used the privilege classifier to identify additional privileged documents that had not hit on privilege terms. As the deadline for substantial compliance approached, negotiations with the DOJ regarding relevant timeframes and custodians continued, resulting in the near-constant addition and removal of documents from the dataset. The Lighthouse and Cleary teams managed the ever-changing dataset with ease using the Lighthouse technology and the workflow the teams had developed.
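The specific models and thresholds Lighthouse and Cleary used are not disclosed, but the general routing pattern described above can be sketched in a few lines: score each document with a classifier trained on prior privilege calls, send the "highly likely privileged" band straight to privilege review and logging, set aside the "unlikely" band pending sample-based validation, and leave the middle band for standard review. The sketch below uses scikit-learn as an assumed stand-in for the proprietary classifier; the training data and cutoffs are invented for illustration.

```python
# Assumed stand-ins: a scikit-learn text classifier in place of the proprietary
# privilege model; thresholds of 0.8 / 0.2 are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "privileged and confidential attorney client advice re contract",
    "please see counsel's legal analysis of the merger terms",
    "lunch order for the team offsite",
    "shipping schedule update for q3 widgets",
]
train_labels = [1, 1, 0, 0]  # 1 = privileged, 0 = not privileged

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

def route(doc_text: str) -> str:
    """Route a document based on its predicted privilege probability."""
    p = model.predict_proba(vectorizer.transform([doc_text]))[0, 1]
    if p >= 0.8:
        return "privilege review + logging"    # highly likely privileged
    if p <= 0.2:
        return "sample-validated release"      # unlikely privileged
    return "standard review"                   # middle band

for doc in ["counsel advice on attorney client privilege issues",
            "widgets shipping update"]:
    print(doc, "->", route(doc))
```

In practice the training set is far larger (the case study cites 20K prior privilege calls) and the band that is released without eyes-on review is validated by attorney sampling, as described above.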
The Results

Using a specialized TAR workflow leveraging advanced AI, the teams delivered highly accurate responsiveness classification, resulting in more than 500K (or more than 40%) fewer documents requiring further review and production to the DOJ compared to legacy TAR tools. A smaller volume of documents requiring production also meant less privilege and foreign-language review. For example, 120K fewer foreign-language documents were included in the final responsive set compared to legacy TAR tool results; this reduction in review and translation alone saved approximately $1M. For the client, the smaller responsive set meant faster production turnaround times, lower overall costs, and risk mitigation through the decreased chance of inadvertently producing non-responsive documents. The Lighthouse and Cleary partnership resulted in the removal of 200K documents from privilege review beyond what would have been possible through conventional methods, leading to cost savings of $1.2M and time savings of 8K review hours. The team further mitigated risk to the client by identifying privileged documents that did not hit on standard privilege terms. The Cleary and Lighthouse partnership resulted in substantial compliance with the HSR Second Request, increased risk mitigation, faster document review, and remarkable savings for the client.
May 15, 2023
Case Study
Case-Study, client-success, AI, ai-and-analytics, analytics, artificial-intelligence, Big-Data, Corporation, Corporate, data-analytics, Data-Re-use, Data-Reuse, data-re-use, document-review, eDiscovery, litigation, Prism, privilege, privilege-review, PII, PHI, Pharma, ediscovery-review, ai-and-analytics

Lighthouse AI and Analytics Drive Unprecedented Savings Across Multiple Matters

A global pharmaceutical company leverages Lighthouse's AI-powered analytics to reduce legal spending, increase efficiency, and decrease risk across their matters.

Driving Value on Individual Matters

The pharmaceutical company first came to Lighthouse for better, faster review on a single matter. Leveraging our unparalleled range of advanced analytics accelerators, our experienced review managers and expert consultants created a custom review workflow that significantly reduced data volume, expedited review, and increased the accuracy of data classification.

[Graphic: Individual Matter Review Workflow and Metrics]

Driving Value Across All Matters

Based on the results from the first matter and Lighthouse's ability to attain even more review efficiency by connecting matters, the company sent additional matters to Lighthouse. Applying advanced AI across the company's matters produced deeper matter insights and upleveled the accuracy of classification models in ways that would be impossible on a single matter. As each new matter is added, Lighthouse AI identifies data that overlaps with past and concurrent matters. This has two impacts at the outset: 1) significant processing cost savings and 2) unprecedented early insights into new matters. These insights empower counsel to make more strategic, data-backed decisions from the start, leading to extraordinary downstream efficiencies and significantly reduced risk. For example, across five currently connected matters for the company, Lighthouse AI showed that:

"Outside Counsel A" email domains were coded privileged over 95% of the time.

Emails with a government email domain on the communication were coded privileged 15% of the time.

20K documents belonging to Custodian B were collected and processed across multiple matters, but only 10 documents were ever actually reviewed.

Custodian C's documents were reviewed and produced across multiple matters, with a 0% privilege rate.

(A simplified sketch of how such domain-level privilege rates can be derived follows this case study.)

Lighthouse AI-powered insights and connections supercharge the efficiency, accuracy, and consistency of each subsequent matter. Past attorney work product and metadata are used to reduce the need for eyes-on review and improve the consistency and accuracy of review for responsiveness, privilege, PII, confidentiality, redactions, and more.

Driving Value into the Future

The efficiency and risk mitigation benefits continue to grow for the pharmaceutical company with each new matter. As a true big data technology, the more data Lighthouse advanced analytics ingests, the deeper and more nuanced its decision-making and insights become. Opportunities for data and attorney work product re-use will also grow with each new matter ingested, amplifying the company's ROI into the future.
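The statistics above come from Lighthouse's proprietary cross-matter analytics, which are not described in detail here. As a simplified, hypothetical illustration of the idea, the sketch below pools prior coding decisions from several matters and computes a privilege rate per sender domain; pandas is an assumed tool and all data is invented.

```python
import pandas as pd

# Hypothetical prior coding decisions pooled across connected matters.
coded = pd.DataFrame({
    "matter":     ["M1", "M1", "M2", "M2", "M3", "M3"],
    "sender":     ["jane@outsidecounsela.com", "jane@outsidecounsela.com",
                   "bob@agency.gov", "sue@agency.gov",
                   "custodianc@corp.com", "custodianc@corp.com"],
    "privileged": [True, True, False, True, False, False],
})

# Derive the sender domain, then compute the privilege rate per domain.
coded["domain"] = coded["sender"].str.split("@").str[1]
rates = (coded.groupby("domain")["privileged"]
              .agg(privilege_rate="mean", documents="count")
              .sort_values("privilege_rate", ascending=False))
print(rates)
```

At scale, the same kind of aggregation over past work product is what lets connected-matter analytics flag, for example, that documents from a known outside counsel domain are almost always privileged while a given custodian's documents almost never are.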
May 15, 2023
Case Study
Case-Study, client-success, Big-Data, Cloud-Migration, cloud, Cloud-Services, Corporate, Corporation, Emerging-Data-Sources, Information-Governance, eDiscovery, microsoft, Legacy-Data-Remediation, microsoft, risk-management, Record-Management, financial-services-industry, microsoft-365, information-governance

Meeting Compliance Burden for Financial-Sector Giant

Lighthouse helps a global British bank resolve critical risks during a major technology overhaul.

Key Actions

Microsoft referred the Company to Lighthouse to address eDiscovery needs within Microsoft 365 (M365). Lighthouse assembled a team whose members had prior experience in the same kinds of stakeholder departments affected by the unresolved needs.

Key Results

Compliance risks were successfully remediated using native M365 tools. The Company used its new platform and avoided the need for add-on services or vendors.

What They Needed

M365 Implementation Yields Data Risk Management

Because the Company is one of the nation's largest financial institutions, its move to M365 required exceptional time and care, further complicating compliance requirements for record-keeping, data protection, and regulated conduct, and ultimately placing demands on M365 that created uncertainty about whether the platform could meet them. The complex compliance requirements fueled an internal audit, revealing several risks related to the Company's management of unstructured data, including its practices for retention, deletion, preservation, and protection of sensitive information. The Company asked Microsoft for help, and Microsoft referred the Company to Lighthouse.

Tight Deadlines, Exceptional Solutions

Lighthouse was tasked with exploring whether M365's native information governance (IG) and eDiscovery tools could address the risks identified in the audit. The team launched a series of workshops, interviews, and research tasks to:

Educate stakeholders about M365's native capabilities for records and information management (RIM) and IG

Define stakeholders' needs and current workflows regarding RIM and IG

Analyze gaps in the current state

Test and propose new workflows using native M365 tools

Executives monitored the project intensely, as every identified risk was critical, so the pressure on the team's proposed workflows was tremendous, not to mention a tight 12-week timeline. Lighthouse prevailed, fielding a team of experienced peers to work alongside the Company's stakeholders. Every business group responsible for remediating risks, from records management to IT, was paired with a Lighthouse consultant who had previously filled a similar role at a comparable institution. Our experts gained rapid credibility with each stakeholder group, and they ultimately delivered a unified solution that was acceptable to all parties. The solution remediated all flagged risks using RIM and IG workflows within M365. It required the Company to upgrade its M365 licensing agreement from E3 to E5, but the Company agreed that the added cost was more than worth it. In the end, Lighthouse achieved two key wins: 1) demonstrating to the Company that M365 could meet even the most stringent security and compliance needs, and 2) securing a new trusted partnership with the customer that has continued to develop.
October 1, 2022
Case Study
Case-Study, Big-Data, Cloud-Migration, cloud, Cloud-Services, ccpa, Corporate, Corporation, Data-Privacy, data-protection, Emerging-Data-Sources, Information-Governance, eDiscovery, microsoft, gdpr, Legacy-Data-Remediation, Legal-Holds, microsoft, risk-management, insurance-industry, Record-Management, microsoft-365, data-privacy, information-governance

Gap Analysis Solution for IT and Legal Teams Transitioning to M365

Lighthouse saves an insurance giant millions of dollars during a major technology upgrade.

Key Actions

Microsoft referred the Company to Lighthouse to resolve concerns from the Company's IT and legal departments that were stifling their automation effort and transition to Microsoft 365 (M365). Lighthouse held educational workshops on the eDiscovery tools within M365 and devised a comprehensive plan for compliance.

Key Results

Unblocked the M365 transition effort and enhanced the partnership between legal and IT. Compliance concerns were answered within M365, saving the Company millions of dollars it would otherwise have spent retaining or updating legacy data management systems.

What They Needed

Legal Concerns Churn 11th-Hour Nightmare for IT Department

In 2017, a nationwide insurance giant initiated a transition from an on-premises Microsoft solution to a cloud-based M365 solution, motivated by expected cost, performance, and security improvements. Years later, and well past the intended launch date, the Company's legal team suddenly halted the transition entirely due to concerns about M365's eDiscovery capabilities: specifically, how M365 would handle the identification, preservation, and collection of email, instant messages, and files for the Company. The legal department insisted the Company retain its custom-built archival solution until all compliance concerns were allayed. These demands put the IT department in an extremely tough spot after having already invested several years in the transition to M365. If forced to extend their aging, on-premises solution, the team would face substantial costs. To get the implementation project moving again, Microsoft suggested the Company engage Lighthouse to assist.

Lighthouse immediately understood the legal team's concerns and acted swiftly, proceeding with the caution the Company insisted upon while remaining mindful that the Company receives hundreds of new legal matters each month. The sensitive nature of data in this industry and the complex regulatory environment made the potential risk of mismanagement very high. The process was intricate and complex, and required high-level integration to mitigate the significant risks tied to individual privacy regulations, such as the California Consumer Privacy Act (CCPA) and the European Union's General Data Protection Regulation (GDPR).

Hands-on Experience and High-Touch Service Bridge the Gaps

Lighthouse fielded a team of experts with direct experience in the same or similar roles as the various client stakeholders, ranging from IT to records management, corporate legal, and public affairs. This hand-selected team led a three-part process with their counterparts from the Company:

Providing education on the eDiscovery aspects of M365

Analyzing current workflows and performance, and defining the desired future state

Devising a high-level design document for how relevant parties could conduct eDiscovery tasks in compliance with the requirements while using M365

The first two steps helped restore unity among stakeholders, while the design document addressed the legal team's concerns, including specified settings for a range of M365 applications and components, such as Exchange Online, SharePoint Online, OneDrive for Business, and Teams. The design document also made room for process automation and custom workflows, as well as for third-party system integration (for compliance archiving, legal hold, matter management, and more).
The initial project's success led to a continuing relationship between the Company and Lighthouse, and over time Lighthouse has become a critical element in the Company's ongoing M365 implementation and adoption journey, helping them chart a path forward.
June 1, 2023
Case Study
Big-Data, Case-Study, Cloud-Migration, cloud, Cloud-Services, Cloud-Security, Corporate, Corporation, Data-Privacy, Emerging-Data-Sources, Information-Governance, eDiscovery, microsoft, manufacturing-industry, risk-management, chat-and-collaboration-data, ediscovery-review, microsoft-365, data-privacy, information-governance

Engineering a Customized M365 eDiscovery Premium Add-on

Lighthouse bridges internal gaps during a technology overhaul and solves longstanding compliance issues for a German multinational healthcare manufacturer.

Key Actions

Lighthouse engaged company stakeholders in operational planning and received funding from Microsoft to devise and integrate a premium Microsoft 365 (M365) add-on to the existing Purview Premium eDiscovery capability, resolving an outstanding compliance need.

Key Results

The proof of concept achieved a zero-trust security model integrated with third-party software and satisfied critical needs that had been blocking the Company, aligning IT and legal departments after years of dysfunction.

What They Needed

Automating a transition to M365 commonly yields a clash between IT, legal, and compliance stakeholders when the decision to convert is spearheaded by IT and made without consulting the legal and compliance teams. Typically, during planning or implementation, legal teams ask IT how the new platform will support compliant and defensible processes, and if IT doesn't have the answers, the project stalls. This was the situation facing a multinational manufacturing Company that engaged Lighthouse for help during the spring of 2020. At that time, the Company was several years into its M365 transition, and the legal team's requirements for adoption of native M365 compliance tools barred a complete transition. Pressure to adopt the tools escalated as M365 workloads for content creation, collaboration, and communication had already rolled out, creating an increasingly large and complex volume of data with significant degrees of risk.

Lighthouse Responds to Need and Launches New Technology

In partnership with Microsoft Consulting Services, Lighthouse organized a companywide M365 "reset," hosting a three-day workshop to revamp the transition process and generate an official statement of work. The strategic goal was to align the stakeholders from the litigation, technical infrastructure, cybersecurity, and forensics teams that had previously failed to work together. The workshop fielded critical topics geared to encourage constructive discussions between stakeholders and to strengthen departmental trust. The outcome of these discussions eventually enabled the Company to move forward with critical compliance updates, including the collection and parsing of Microsoft Teams data and the management of myriad files and email attachments. Lighthouse took stock of the current state, tested potential solutions, and arrived at a proof of concept for an eDiscovery Automation Solution (EAS) that augmented existing M365 capabilities to meet the legal team's security requirements and remediate performance gaps. Microsoft recognized the potential value of the EAS for the wider market, ultimately leading to Microsoft funding for the proof of concept.
Inside the eDiscovery Automation Solution (EAS)

Technology:
Azure-native web application designed to orchestrate the eDiscovery operations of an M365 subscriber through Purview Premium eDiscovery automation
Maximizes Microsoft Graph API "/Compliance/eDiscovery/" functions and other Microsoft APIs
Scoped to the Azure AD trust boundary, targeting the M365 tenant hosted within and enabling full governance of identity and entitlement through Azure and M365 security features

Benefits:
Achieved a zero-trust security model
Enabled high-velocity, high-volume eDiscovery tasks without outside technology, through automation and orchestration of the eDiscovery premium capabilities native to M365
Integrated with the third-party software included in the Company's eDiscovery workflows
Amplified workload visibility by automatically surfacing relevant mailboxes, OneDrives, and other M365 group-based sources based on selected custodians' access
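As an illustration of the kind of Graph-based orchestration described above, here is a minimal sketch that lists eDiscovery cases in a tenant. It is not the EAS itself: the app registration, token, and the beta "/compliance/ediscovery/" Graph path referenced in the write-up are assumptions for illustration only, and endpoint availability and required permissions vary by tenant.

```python
# Hypothetical sketch: list Purview eDiscovery cases via the (beta) Microsoft Graph
# "/compliance/ediscovery/" surface mentioned above. Illustrative only; not the EAS.
import requests

GRAPH_BASE = "https://graph.microsoft.com/beta"
ACCESS_TOKEN = "<Azure-AD-app-token>"  # assumed: acquired separately (e.g., via MSAL)

def list_ediscovery_cases():
    """Return the display names of eDiscovery cases visible to the app."""
    resp = requests.get(
        f"{GRAPH_BASE}/compliance/ediscovery/cases",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [case["displayName"] for case in resp.json().get("value", [])]

if __name__ == "__main__":
    for name in list_ediscovery_cases():
        print(name)
```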
April 14, 2023
Case Study
Case-Study, client-success, Corporate, Corporation, G-Suite, digital forensics, investigations, collections, fraud-detection, Red-Flag-Reporting, Departing-Onboarding-Employee

Lighthouse Finds the Hidden Forensic Evidence Other Teams Miss

Lighthouse's forensics experts found hidden clues missed during an internal investigation, proving a departing employee was stealing company data.

Key Results: By quickly engaging Lighthouse forensics experts: The company stopped proprietary and sensitive information from being disseminated and used by competitors. The company's law firm was able to quickly take action against the employee, preventing any further malfeasance or damage.

Investigation Overview
Week 1
Day 1–4: Employee uploads company data onto a personal Google Drive account over the span of four days.
Day 4–5: An internal investigation concludes that all company data has been deleted from the employee's personal data sources and no further action is needed. However, the company's outside counsel calls in Lighthouse forensics experts to perform a separate investigation for confirmation.
Day 6: Lighthouse forensics experts find evidence missed during the company's internal investigation, indicating that the laptop provided to internal investigators was a "decoy," and that the employee had actually transferred the proprietary company data onto an as-yet-undisclosed laptop.
Week 2–4: Outside counsel uses Lighthouse's findings to file a restraining order against the employee and elicit a confession wherein the employee admits they had downloaded the proprietary data onto a secret laptop owned by another business.
Week 6: The Lighthouse forensics team is provided access to the additional laptop and the employee's private Google Drive account. Although there is no company data stored on the drive, the Lighthouse team dives deeper and immediately finds that the employee had restored the previously deleted company data back to their Google Drive account, transferred it to the secret laptop, and then deleted it again from the Google Drive account. These findings enable outside counsel to take additional remediating actions.

Suspicious Activity by a Departing Employee Raises Alarm Bells
During routine departing-employee analysis, a global company was alerted to the fact that an employee had uploaded more than 10K files containing sensitive proprietary data to a personal Google Drive account. The company immediately launched an internal investigation and engaged its outside counsel. Over the course of the internal investigation, the employee admitted they had uploaded company data to their Google Drive, and then used an external hard drive to transfer that data onto a personal laptop. However, the employee avowed that all company data had since been deleted, which the company's IT team confirmed by examining all three data sources. Still, due to the sensitivity of the data, outside counsel wanted additional reassurance that the employee was no longer concealing proprietary company data. The law firm had previously relied on Lighthouse forensics experts for similar investigations and knew it could count on Lighthouse expertise to find any hidden clues that would point to additional hidden data.

Finding the Forensic Breadcrumbs
Week 1: The Lighthouse forensics team received access to forensic images of the employee's personal laptop and external hard drive within one week of the first suspicious upload. The team immediately noticed that the employee's data tracks conflicted with the timelines and statements provided by the employee during the company's internal investigation.
Key Evidence Found by Lighthouse Forensics Experts The external hard drive used to transfer company data had not been plugged in to the personal laptop during the relevant time frame. File paths identified on the external hard drive (which show the file locations where data was downloaded upon connection) did not match those on the personal laptop provided to internal investigators. This evidence led the Lighthouse team to conclude that the laptop provided by the employee was not the laptop used to download company data—and that a different laptop with the stored proprietary company data existed but had not been disclosed by the employee. Week 2–4 A Lighthouse forensics expert provided a sworn declaration explaining the evidence found during the examination of the employee’s personal devices. The company’s law firm used this declaration to file a restraining order to stop the employee from continuing to steal or disseminate proprietary data. The law firm also used Lighthouse’s findings to elicit a confession from the employee, admitting that they had been secretly working part-time for another business, and had transferred the company’s proprietary data onto a laptop provided to the employee by that business. Week 6 Within two weeks of the Lighthouse forensics expert’s sworn declaration, the Lighthouse team was provided access to the laptop owned by the other business, as well as the employee’s personal Google Drive account. Lighthouse’s inspection of the Google Drive did show that all company data had been deleted, as had been confirmed by internal investigators. However, Lighthouse immediately went deeper into the Google Drive and found conclusive evidence that the employee had subsequently “restored” the deleted proprietary data just a few days after the internal investigation ended, in an attempt to continue with the data theft. Key Evidence Found by Lighthouse Forensics Experts Despite the fact that no company data was stored on the employee’s personal Google Drive account at the time Lighthouse received access to it, Lighthouse forensics experts went above and beyond to do a deeper forensic dive into the user activity log, email account, and internet searches stored on the Google Drive. That deeper analysis showed that: Two days after the internal investigation ended, the employee began conducting numerous internet searches for ways to “restore” deleted files on Google Drive. Two weeks later, the employee emailed a private IT company asking for help restoring deleted Google Drive files. One day after sending that email, thousands of files were restored to the employee’s Google Drive. Those restored files were once again deleted a few days later. Before the restored files were re-deleted, the employee downloaded some of the files containing company data to the “secret” laptop owned by another business. Keeping a Lid on Pandora’s Box The evidence found by Lighthouse forensics experts after their initial examination of the employee’s personal devices enabled the company’s law firm to take legal action against the employee less than one month after the first suspicious data upload. Within one day of being provided access to the employee’s personal Google Drive account, Lighthouse forensics experts were able to find exactly how and where the stolen proprietary and sensitive data was hidden. This enabled the company to permanently prevent any dissemination of that proprietary and sensitive data to competitors. 
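The timeline check described above (whether the disclosed external drive was actually connected to the disclosed laptop during the window when files were downloaded) can be sketched in a few lines. This is a generic illustration; the event structure, serial numbers, and dates are hypothetical and not drawn from the actual examination.

```python
# Hypothetical sketch: flag whether a given external drive was connected during a
# window of interest, from a list of already-extracted connection events.
from datetime import datetime

# Assumed structure: events recovered from forensic artifacts (serial, connect/disconnect times).
usb_events = [
    {"serial": "WD-1234", "connected": datetime(2023, 1, 3, 9, 15), "disconnected": datetime(2023, 1, 3, 11, 40)},
    {"serial": "SD-9876", "connected": datetime(2023, 1, 9, 14, 5), "disconnected": datetime(2023, 1, 9, 14, 55)},
]

def was_connected(serial, window_start, window_end, events):
    """True if the device with this serial overlaps the time window."""
    return any(
        e["serial"] == serial
        and e["connected"] <= window_end
        and e["disconnected"] >= window_start
        for e in events
    )

# Window when company files were downloaded (illustrative dates only).
print(was_connected("WD-1234", datetime(2023, 1, 4, 0, 0), datetime(2023, 1, 7, 23, 59), usb_events))  # False
```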
October 7, 2022
Case Study
Case-Study, client-success, document-review, eDiscovery, fact-finding, KDI, key-document-identification, Law-Firm, HSR-Second-Requests, investigations, Mergers, Acquisitions, ediscovery-review, ai-and-analytics, antitrust

Law Firm Equipped with 35 Deposition Kits, At or Before DOJ Deadlines, for Massive Antitrust Investigation

Lighthouse experts distilled crucial information from millions of produced documents for a client's legal strategy during a Department of Justice investigation.

Key Actions: Lighthouse created 35 deposition kits by conducting two large-scale data investigations, and addressing multiple ad-hoc emergency investigations in the process, on an initial production set of six million documents, identifying the 4,100 most relevant items. Lighthouse adhered to a complex delivery schedule so the case team had time to prepare for each deposition.

Key Results: Counsel was well-prepared for 35 depositions using the deposition kits delivered by Lighthouse. Instead of spending time and review cycles finding the evidence, they used the bandwidth they saved to hone their legal strategy.

Responding to a Fast-Moving Government Investigation, with a Merger on the Line
When two of the largest publishing companies in the country entered a merger deal, the Department of Justice (DOJ) responded with a large antitrust investigation. Pursuant to an HSR Second Request, the companies produced a combined six million documents to the DOJ. In response, the DOJ sought to depose 35 individuals within a few months' time. This left outside counsel with just two months to prepare for the defense of a massive potential merger, including intensive preparation for all 35 depositions. To do so, they knew they would need to find every shred of relevant information hidden within those six million documents, as quickly as possible.

Executing a Plan for Better Legal Strategy
When the law firm reached out to Lighthouse for help, our agile search team of analytic, legal, and linguistic experts immediately got to work, consulting with counsel to understand the specifics of the investigation, as well as the case team's initial strategy for response. Using this background, the Lighthouse team mapped out an information search plan leveraging advanced volume reduction technologies and linguistic search models, delivering: Comprehensive deposition kits for all 35 deponents. Each kit was scheduled to be delivered well ahead of the corresponding deposition date, and included summaries of Lighthouse experts' findings and highlights of notable documents and facts, in order to give counsel adequate time to prepare for each deposition. Key and relevant documents related to the DOJ's antitrust concerns and outside counsel's defense strategies. These documents, provided on a rolling timeline, were uncovered by conducting two large-scale data investigations: one to find all documents related to determining which publishers participated in or won the auctions, and another to find all documents necessary to facilitate the creation of an all-encompassing book auction timeline. Given the legal and analytic expertise of our specialists, Lighthouse search results often uncovered new areas of importance for the case team. When the case team responded to this new information with urgent follow-up search requests (with results sometimes needed in 24–48 hours), our team also boosted efforts to provide the requested information.

Powering Counsel with Knowledge and Time
By partnering with Lighthouse, the case team stayed focused on preparing for depositions and crafting a response to the DOJ's concerns about the merger, instead of conducting database searches and reviewing irrelevant or redundant documents. In just two months, Lighthouse found and delivered the 4,100 documents the case team needed, out of an initial population of six million documents.
This included creation and delivery of 35 deposition preparation kits, all documents related to the case team's strategy for responding to the DOJ's antitrust concerns (delivered on a rolling basis), and results of six ad hoc case team investigation requests. All deposition kit and derivative search deliveries met or exceeded counsel's delivery deadline expectations.
May 1, 2023
Case Study
Case-Study, client-success, document-review, eDiscovery, fact-finding, KDI, key-document-identification, Law-Firm, ai-and-analytics, analytics, ediscovery-review, ai-and-analytics

Law Firm Reconstructs Contract History from 92,000 Documents in Three Weeks

Lighthouse applies language models and human expertise to uncover critical evidence.

What We Did: Outside counsel for a large construction firm partnered with Lighthouse to identify key documents. Lighthouse used its proven iterative process to reduce the review set. A collaborative approach continuously incorporated counsel's insights into model results.

Key Results: 92,000 documents reduced to 871. Key handwritten reports identified using metadata. Counsel freed to focus on the most important documents. Review completed within the 3-week deadline.

Piecing Together Contract History Without a Guide
A large construction company facing a breach-of-contract suit retained outside counsel. Because personnel involved in the contract were no longer employed by the contractor, the law firm needed to reconstruct the agreement's history based on related documents and communications. However, with just three weeks for review, a keyword search returned more than 90,000 items. The firm needed a way to identify the most critical documents rapidly and accurately.

Iterating and Adapting to Unearth Critical Information
The Lighthouse team applied advanced technology and review expertise to get the job done. Counsel provided Lighthouse with 15 topics relevant to contractual changes, such as cost, delays, and weather conditions. The team identified an initial set of documents using linguistic modeling. The law firm provided feedback to update the search models. The insights of the experienced attorneys directed the investigation, while Lighthouse people and technology accelerated the discovery of relevant information. As new topic areas emerged, Lighthouse adapted. They identified additional contractors involved in the dispute and concerns such as employee discontent and time-keeping accuracy. As the search proceeded, they captured important documents even though they were outside the original search parameters. Most importantly, Lighthouse used metadata to highlight relevant site incident reports, the contents of which were not searchable. The law firm could review salient reports in depth, discovering key information concerning the disputed contract.

Ensuring Response Readiness
Over four iterations, Lighthouse escalated 871 key documents related to 16 case themes, in addition to the handwritten incident reports. Lighthouse data retrieval experts highlighted key language in Relativity and coded and prioritized critical documents to expedite review. Using a powerful combination of linguistic models and case experience, Lighthouse shrank the unwieldy dataset to a manageable size and brought the most critical information to the forefront. Counsel could focus their resources on the most relevant data and maximize value for their client. By the end of the third week and final delivery, the attorneys were well-prepared for negotiations and litigation.
February 1, 2023
Case Study
Antitrust, Case-Study, document-review, eDiscovery, fact-finding, KDI, key-document-identification, TAR, TAR-Predictive-Coding, Law-Firm, HSR-Second-Requests, investigations, Mergers, Acquisitions, ediscovery-review, ai-and-analytics, antitrust

Finding the Keys to a Strategic Defense in a Second Request

Lighthouse's proprietary, technology-enabled strategy for finding key documents gives counsel a strategic advantage in a challenging HSR Second Request.

Key Results: In just three weeks, the Lighthouse team found the 1K most important documents out of an initial data population of 19M documents. Lighthouse experts began flowing key documents to the case team just three days after the initial kickoff meeting. Lighthouse saved counsel at least a month's worth of preparation time for witness interviews and defense planning by efficiently finding the most important documents.

A Mountain of Data and a Short Timeline
A global technology company and their two outside counsel teams needed to quickly prepare a winning defense in a high-stakes, time-sensitive Department of Justice (DOJ) Hart-Scott-Rodino (HSR) Second Request. To do so, they would have to identify and review all potentially damaging (or alternatively, helpful) documents within an initial data population of 19M documents. Finding the most important documents within that massive data volume, in less than one month, presented a Herculean task.

A Proprietary Solution for Finding the Most Important Documents
Lighthouse's technology-enabled search strategy is led by information retrieval experts with decades of industry experience, who utilize robust search technologies that support large data volumes beyond industry-standard tools. Together, this combination of cutting-edge technology and data expertise quickly surfaces critical documents, streamlining legal analysis and case preparation for case teams.

Handing Over the Keys to a Strategic Defense
With no time to lose, Lighthouse TAR and review experts were able to whittle down the 19M documents to just over 990K responsive documents for production to meet substantial compliance. Simultaneously, Lighthouse experts quickly got to work finding the most important documents for the case team. Rather than relying on keyword culling, the Lighthouse team analyzed the data population and leveraged proprietary algorithms to safely reduce the universe to documents that contained the unique content the case team needed. From there, a team of six data retrieval experts leveraged proprietary search technology and institutional knowledge of the client's data, gleaned from working with the company in a managed services capacity, to find key documents that were critical to the case team. Our experts used an iterative process and held weekly meetings with the case team so that they could instantly integrate counsel and witness feedback throughout the project, which helped yield more accurate search results. With this process, the Lighthouse team began flowing key documents to the case team just three days after the initial kickoff meeting. Over the course of the next three weeks, the Lighthouse team provided a total of 1K key documents (out of 990K responsive documents) in eight rolling deliveries. By gaining immediate access to these documents and eliminating the need for time-consuming and costly manual review, Lighthouse saved the team at least a month's worth of preparation time for witness interviews and defense preparation.
June 1, 2022
Case Study
Advisory-Services, Big-Data, Case-Study, collections, Corporate, Corporation, eDiscovery, digital forensics, Information-Governance, investigations, Pharma, privilege, privilege-review, Processing, Project-Management, TAR, TAR-Predictive-Coding, technology-assisted-review, ediscovery-review, digital forensics, ai-and-analytics, information-governance

Big Pharma Relies on Lighthouse to Manage Complex eDiscovery

Lighthouse partners with a rapidly expanding pharmaceutical company to streamline its eDiscovery workflow and meet obligations more efficiently.

What They Needed: A large pharmaceutical client received subpoenas from several regulators. The subpoenas covered multiple product lines and implicated 60 custodians and virtually all of the company's email. The client's IT group identified over 35TB of data requiring collection, processing, and review. Complicating matters further, the company had only 60 days to respond, far less than the nine months it estimated the project would take. Faced with this near-impossible timeline, the client looked to Lighthouse for support.

How We Did It: Relying on procedures outlined in a jointly developed eDiscovery Playbook, Lighthouse's data collection and forensics experts worked closely with the client's legal and IT groups to implement a defensible strategy that greatly reduced the amount of data requiring collection. Experts from Lighthouse's Advisory Services group worked with the client to implement a legal hold and data retention policy customized to the various subpoenas. Lighthouse provided a unified review database, allowing outside counsel (who were responding to separate subpoenas) to leverage each other's work product, greatly reducing review costs and preventing the inadvertent production of privileged and other sensitive materials.

The Results: Our combined efforts reduced the originally estimated 35TB of data requiring review to less than 3TB. By greatly reducing the amount of data requiring processing and review, the client saved significant review costs and reduced the estimated project completion time from nine months to only four weeks. Review cost reductions were achieved by leveraging Lighthouse's project management team as well as Lighthouse's proprietary suite of technology-assisted review offerings. These, and other efficiencies discovered during the project, have been carried into subsequent matters, continuing to drive down costs and increase value.
January 15, 2023
Case Study
Case-Study, client-success, Corporate, Corporation, digital forensics, investigations, collections, fraud-detection, Red-Flag-Reporting, Departing-Onboarding-Employee, digital forensics

Lighthouse Secure IP On-Demand Services Prevent Proprietary Data Theft by Exiting Employee

Lighthouse red flag report prevents proprietary data from being taken by departing employee. Key Actions A global company partnered with Lighthouse to create a proactive departing employee program to prevent data loss and theft. Lighthouse forensics experts prepared Red Flag Reports for every departing employee that fell within a specific category of employees. Each report outlined the risks associated with the departing employee based on a skilled forensic examination of their activity and data. Soon after implementing the program, a Lighthouse Red Flag Report alerted the company to suspicious activity by a departing employee indicating a high risk for data loss. Key Results Because of Lighthouse’s analysis and quick response, the company was able to: Prevent sensitive data from being disseminated outside the company. Avoid costly litigation associated with proprietary data loss. Reevaluate the departing employee’s severance package due to breach of contract, resulting in additional cost savings. ‍ What They Needed A global company was dealing with an increased risk of data loss and theft from departing employees. The company retains large volumes of proprietary data spread across their entire data landscape. Much of that data is also highly sensitive and would create a competitive disadvantage for the company if it were to end up in competitors’ hands. The company was also facing a higher volume of employee turnover—especially within roles that had access to the company’s most sensitive data (e.g., company executive and management roles). The company was concerned that these factors were creating a perfect storm for data theft and loss. They realized they needed a better system to catch instances of proprietary data loss before any data left the company. Company stakeholders reached out to Lighthouse because they knew our forensics team could help them build a proactive, repeatable solution for analyzing and reporting on departing employee activity. How We Did It Lighthouse forensics experts worked with the company to create a custom departing employee program for data loss prevention. With this program, Lighthouse experts prepared a Red Flag Report for every departing employee that fell within specified high-risk categories (e.g., employees above a specific seniority level, or employees that had access to highly sensitive company data, etc.). Each Red Flag Report was prepared by a Lighthouse forensics expert and summarized the data theft risk associated with the underlying employee. Every report contained: A high-level summary of the risk of data theft presented by the employee. A collection of attachments with highlights and comments by the Lighthouse forensics examiner (for example, a list of files stored in an employee’s personal cloud storage account, with an explanation of why that activity may indicate a higher risk of data theft). A forensic artifact categorization with associated risk ratings (e.g., if there were no suspicious search terms found during a scan of the employee’s Google search history, the examiner assigned that category a lower risk rating of “1”). Recommended next steps, with options for substantiating high-risk employee behavior. Reports were delivered to a cross-functional group of company stakeholders, including IT, human resources, and legal groups. The Results The Lighthouse program very quickly paid off for the company. Soon after initiation, Lighthouse escalated a Red Flag Report for a departing employee that showed a high risk of data loss. 
Specifically, the Lighthouse forensics examiner flagged that the employee had connected two different external thumb drives containing sensitive company data to their laptop. This activity was flagged as high risk because the employee had already been directed by the company to return any device that had corporate data saved on it, and the employee had previously indicated that they didn't have any devices to return.

As soon as Lighthouse escalated the Red Flag Report, company stakeholders scheduled an interview with the employee. This interview resulted in the employee admitting that they had taken corporate data with them, via the two thumb drives. Because Lighthouse was able to quickly flag the employee's suspicious activity, the company was able to retrieve the thumb drives before the proprietary data was disseminated to a competitor. The company was also able to reevaluate the employee's severance package due to the breach of company policy, resulting in a significant cost saving. Even more importantly, the company now has a proven, proactive, and customized solution for preventing data loss and theft by departing employees, implemented by Lighthouse's highly skilled forensics team.
April 1, 2023
Case Study
Case-Study, Corporate, Corporation, eDiscovery, self-service, spectra, Spectra, energy-industry, analytics, ediscovery-review

Energy Company Saves Hundreds of Hours with the Right Combination of Technology and Human Expertise

A leading energy company gained the flexibility to use self-service technology and full-service expertise as needed, reducing costs and optimizing outcomes.

Key Actions: A multinational energy company sought eDiscovery efficiency and scalability. A seamless combination of self-service Lighthouse Spectra eDiscovery and full-service Lighthouse consulting enabled them to meet a wide range of needs. Minor matters can be addressed with low-cost self-service tools. A full-service Lighthouse team applies in-depth review expertise to complex matters.

Key Results: $50,000 year-over-year cost reduction. 100+ hours freed for matter-critical work. Flexibility to meet varying matter requirements. Training improved speed and accuracy of self-service eDiscovery.

What They Needed: A multinational energy company wanted to stop relying on an expensive patchwork of third-party eDiscovery providers and adopt a unified, cost-effective strategy. It sought transparent pricing and self-service access to the latest technology, including Relativity and Brainspace. At the same time, it needed a consistent team of experienced eDiscovery and review experts for more in-depth needs.

How We Did It: Lighthouse listened closely as the company described its desire for greater scalability and efficiency. We proposed a seamless combination of self-service capabilities on the Lighthouse Spectra platform and a dedicated full-service team for complex matters. This proven, flexible approach minimizes cost for minor matters while ensuring available capacity and expertise for complex projects. The Lighthouse Spectra support team accelerated onboarding through technical assistance and training. After completing a proof of concept, the client immediately began ingesting matters into Spectra. At the same time, we assembled a dedicated full-service team to be ready when needed.

The Results: Using the intuitive, familiar Lighthouse Spectra experience, incorporating Relativity and Brainspace functionality, the client rapidly discovered and reviewed data for internal investigations, subpoenas, and other minor matters. They no longer needed to license and manage Relativity and Brainspace separately, benefitting from a predictable, fixed-fee pricing model that fits their budget and scales to meet their needs. The Lighthouse team simplified data processing and exception handling, freeing resources to focus on strategic aspects of a given matter. As soon as a case warranted, they could triage it to the full-service team directly from the Spectra workspace. The result is a more responsive, cost-effective eDiscovery strategy, saving the company hundreds of hours and almost $50,000.
July 3, 2023
Case Study
Case-Study, Corporate, Corporation, eDiscovery, fact-finding, document-review, investigations, KDI, key-document-identification, keyword-search, tech-industry, analytics, ediscovery-review, ai-and-analytics

Beyond Relevance: Finding Evidence in a Fraction of the Time

Lighthouse goes beyond linear review to help a global technology company make its case to the IRS.

Key Actions: Targeting critical case documents with Key Document Identification rather than performing linear review on the whole document set. Identifying key events that took place within specific hours, by applying advanced linguistic modeling to overcome challenges presented by multiple time zones and different time stamp formats within email traffic.

Key Results: 1.5 million total documents reduced to roughly 37,500, in a fraction of the time and cost of linear review (which would have taken up to five times longer and cost up to twice as much).

Building a Case for Tax-Exempt Lunches
A global technology company was facing IRS scrutiny over the complimentary lunches the company provided to staff. Full-time workers were comped the meals because, the company claimed, staff were required to respond to emergencies during lunch hours. The IRS was dubious about that claim and inclined to consider the lunches a taxable benefit. To prevent the meals from being taxed, the company needed to demonstrate to the IRS that, over a two-year period, at least 50% of employees at its San Francisco office had in fact responded to an emergency between the hours of 11 a.m. and 2 p.m. local time. For evidence, the company had 1.5 million documents, mostly emails, pertaining to about 1,000 employees. The company reached out to Lighthouse for help finding the best case-building documents within those 1.5 million. Lighthouse offered its Key Document Identification service. Rather than prioritize documents for linear review, the Lighthouse team promised to identify the most valuable and evidential documents, and to do so in less time and at a lower cost.

Hacking Through the Haystack
The Lighthouse team eliminated less-valuable documents in stages. First, they used an advanced algorithm to remove junk and duplicative documents, reducing the document set to 943,000 (a 38% reduction). Among those, the team targeted San Francisco employee names and emails, which brought the total down to 484,000 (an additional 49% reduction). From here, the team employed nuanced, multi-layered linguistic search techniques to zero in on the most necessary and informative documents. Along the way, Lighthouse encountered a number of challenges that would have thwarted other search tools and teams. One of these was the knot of different time stamps attached to emails: the last-in-time email in every thread was converted to Coordinated Universal Time (UTC), while every previous email in the thread was stamped according to the local time zone of the sender. The Lighthouse team circumvented this by searching the emails' metadata, in which all times are recorded in UTC. Using this metadata, the team was able to search using a single timeframe (6 to 9 p.m. UTC, corresponding with 11 a.m. to 2 p.m. Pacific). Another challenge was looping together all emails stemming from the same incident, so that Lighthouse could provide the company with a complete account of each emergency response (and avoid counting a given emergency more than once). The team did this by flagging one email tied to a specific emergency and using proprietary threading technology to propagate that flagging to all other emails associated with that emergency. Finally, the Lighthouse team had to classify documents by level of emergency, to help the company build the strongest case. The emergency level of some documents was already classified, thanks to a system installed by the company toward the end of the two years under investigation.
But for the majority of documents, it was unknown. Lighthouse was able to classify them using advanced search features of proprietary technology, which identified key terms like "time-sensitive" and other ways emergencies were referenced in the document population.

Major Savings and Critical Insights
In only two weeks, a two-person team delivered on Lighthouse's promise to help the company gather evidence, shrink the document population, and save time and money. Had the company tried to build a case with linear review instead, it would have taken up to 5 times longer and cost up to twice as much. Of the 1.5 million total documents, Lighthouse escalated approximately 37,500 (2.5% of the original dataset). To help with case building, the team sorted documents into three tiers of descending priority: employees responding to high-level emergencies during the lunch hour, employees responding to any level of emergency during the lunch hour, and employees responding to high-level emergencies at any time in the day. The Lighthouse team also normalized the metadata for all documents to make it easy for company counsel to see which employees were involved in each document and thread.

Across the three tiers:
78% of San Francisco employees were tied to at least one document
74% were tied to at least one non-propagated document (i.e., an email associated with a unique emergency)
68% were the sender of at least one non-propagated document

This strongly suggested that more than 50% of employees actively responded to emergencies in the target timeframe and helped counsel hit the ground running in collecting the facts to prove it.
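A minimal sketch of the timestamp approach described in this case study: normalizing email times to UTC and testing whether they fall in the 6-9 p.m. UTC window (11 a.m.-2 p.m. Pacific during daylight saving time). The sample data and field names below are hypothetical, not taken from the matter.

```python
# Hypothetical sketch: filter emails whose UTC-normalized sent time falls in the
# 6-9 p.m. UTC window, i.e., 11 a.m.-2 p.m. Pacific during daylight saving time.
from datetime import datetime, time, timezone

emails = [  # assumed: metadata already extracted; timestamps carry their original offsets
    {"id": "m1", "sent": datetime.fromisoformat("2016-07-12T11:30:00-07:00")},  # 18:30 UTC
    {"id": "m2", "sent": datetime.fromisoformat("2016-07-12T16:05:00-07:00")},  # 23:05 UTC
]

WINDOW_START, WINDOW_END = time(18, 0), time(21, 0)  # 6-9 p.m. UTC

def in_lunch_window(sent):
    """True if the message's UTC time-of-day falls inside the target window."""
    utc_time = sent.astimezone(timezone.utc).time()
    return WINDOW_START <= utc_time <= WINDOW_END

hits = [e["id"] for e in emails if in_lunch_window(e["sent"])]
print(hits)  # ['m1']
```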
February 15, 2022
Case Study
Case-Study, client-success, Corporate, Corporation, eDiscovery, self-service, spectra, Spectra, analytics, Pharma, ai-and-analytics, analytics, Processing, ediscovery-review, ai-and-analytics

Significant Cost Savings Achieved Through Lighthouse Spectra

Spectra, Lighthouse's cloud-based eDiscovery software, saved a pharmaceutical company significant costs by letting it manage eDiscovery for a third-party subpoena in-house.

What They Needed: Faced with yet another third-party subpoena, a large pharmaceutical company started to question how it could address these types of matters in a more cost-effective manner. Although sometimes larger in terms of data volume, these types of matters aren't generally complex and commonly don't require the expertise and oversight of an outside vendor to manage the eDiscovery process. This case, in particular, had a large data volume with a low dollar value, so the company wanted to explore options outside of the traditional vendor and outside counsel review and production process.

How They Did It: Lighthouse had been exploring the idea of Spectra, our cloud-based, user-driven eDiscovery solution, with this client for some time, and this third-party subpoena seemed to be the perfect fit for their first run. Although the matter was a bit larger in nature, with over 150 GB of email, it could easily be self-driven by the client's in-house team of experts within the Spectra environment. To begin, the Spectra team onboarded the client's team into the tool and provided training, documentation, and access. From there, the client kicked off the matter and uploaded all the documents into Nuix to be processed with the click of a button. Nuix then quickly processed this data and loaded the resulting documents into Relativity for review. Upon investigation of the resulting ~750K document set, the client decided that instead of taking the time to craft and test search terms to identify the potentially relevant files, they preferred to engage Lighthouse's Focus Discovery team to further reduce and refine the files needing to be reviewed. As a first step, all documents were run through Brainspace to flag lesser-included emails that could be removed from the review. Out of the 771,825 documents loaded to Relativity, 168,628 (or 22% of the population) were able to be removed from the review entirely. Next, the client sent Lighthouse's Focus Discovery team a request for production as well as the subpoena to aid in the search term creation and optimization process. The Focus group worked with the client to create and then optimize the search terms until only ~5,000 hits (0.6% of promoted docs) were flagged for review. At this point, the client team was able to organize the review and review the documents to ensure privilege was considered. Finally, the ~250 relevant documents were produced inside of Spectra and delivered for service to the other side.

The Results: Overall, the client was not only able to save significant money on linear review due to a reduced data volume, but also on the traditional review process, as they did not have to outsource it and instead could run their matter in one easy-to-use solution, while accessing on-demand expertise from the Focus Discovery team. The experience thus far has been overwhelmingly positive, and the client now has an easy-to-use, self-service solution for handling third-party subpoenas (and other similar matters) in a more cost-effective manner.
April 5, 2024
eBook
Antitrust, Second Requests, HSR Second Request

Emerging Trends in Second Requests

December 15, 2023
eBook
ai-and-analytics

AI Is All the Rage — But What’s the ROI in eDiscovery?

[h2] Not All AI Is Created Equally
The eDiscovery market is suddenly crowded with AI tools and platforms. It makes sense: AI is perfectly suited for the large datasets, rule-based analysis, and need for speed and efficiency that define modern document review. But not all AI tools are created equally, so how do you sort through the noise to find the solutions that best fit you? What matters most: the latest, greatest tech or what's tried and true? At the end of the day, those aren't the most important questions to consider. Instead, here are three questions you need to answer right away: What is my goal? How is AI uniquely suited to help me? What are the measures of success? These questions will help you look beyond the "made with AI" labels and find solutions that make a real difference to your work and bottom line. To get you started, here are four ways that our clients have seen AI add value in eDiscovery.

[h2] AI in eDiscovery: 4 Ways to Measure ROI
Document review accuracy
Risk mitigation
Speed to strategy and completion
Cost of eDiscovery

[h2] AI Improves Document Review Deliverables and Timelines
Studies have shown that machine learning tools from a decade ago are at least as reliable as human reviewers, and today's AI tools are even better. Lighthouse has proven this in real-world, head-to-head comparisons between our modern AI and other review tools (see examples below). Analytic tools built with AI, such as large language models (LLMs), do a better job of detecting privilege, personally identifiable information, confidential information, and junk data. This saves a wealth of time and trouble down the line, through fewer downstream tasks like privilege review, redactions, and foreign language translation. It also significantly lowers the odds of disclosing non-relevant but sensitive information that could fuel more litigation.

[h3] Document review accuracy
Comparison:
No/Old AI: Words evaluated individually, at face value. Modern AI: Words evaluated in context, accounting for different usages and meanings.
No/Old AI: Analysis limited to text. Modern AI: Analysis includes text, metadata, and other data types.
No/Old AI: Broad analysis pulls in irrelevant docs for review. Modern AI: Nuanced analysis pulls in fewer irrelevant docs for review.
No/Old AI: Variable efficacy, highly dependent on document richness and training docs. Modern AI: Specific base models for each classification type lead to more accurate analytic results.
Examples:
Lighthouse AI results in smaller, more precise responsive sets. During review for a Hart-Scott-Rodino Second Request, counsel ran the same documents through three different TAR models (Lighthouse AI, Relativity, and Brainspace) with the same training documents and parameters. At 70% recall, Lighthouse AI returned 308K fewer documents than Relativity and roughly 94K fewer than Brainspace, with 89% precision compared to 73% for Relativity and 83% for Brainspace.
Lighthouse AI outperforms priv terms. In a matter with 1.5 million documents, a client compared the efficacy of Lighthouse AI and privilege terms. The percentage of potential privilege identified by each method was measured against families withheld or redacted for privilege: privilege search terms identified 8%, while Lighthouse AI identified 53%.

[h2] AI Mitigates Risk Through Data Reuse and Trend Analysis
The accuracy of AI is one way it lowers risk. Another way is by applying knowledge across matters: once a document is classified for one matter, reviewers can see how it was coded previously and make the same classification in current and future matters. This makes it much less likely that you'll produce sensitive and privileged information to investigators and opposing counsel. Additionally, AI analytics are accessible in a dashboard view of an organization's entire legal portfolio, helping teams identify risk trends they wouldn't see otherwise. For example, analytics might show a higher incidence of litigation across certain custodians or a trend of outdated material stored in certain data sources.

[h3] Risk mitigation
Comparison:
No/Old AI: Search terms miss too many priv and sensitive docs. Modern AI: Nuanced search finds more priv and sensitive docs.
No/Old AI: Search terms cannot show historical coding. Modern AI: Historical coding insights help reviewers with consistency.
No/Old AI: Docs may be coded differently across matters, increasing the risk of producing sensitive or priv docs. Modern AI: Coding can be reused, increasing consistency and lowering risk.
No/Old AI: QC relies on the same type of analysis as initial review (i.e., more humans). Modern AI: QC is bolstered by statistical analysis; discrepancies between AI and attorney judgments indicate a need for more scrutiny.
Examples:
Lighthouse AI powers consistency in privilege review. A global pharmaceutical company asked Lighthouse to use advanced AI analytics on a group of related matters. This enabled the company to reuse a total of 26K previous privilege coding decisions, avoiding inadvertent disclosures and heading off potential challenges from opposing counsel. Reused priv coding: Case A, 4,300; Case B, 6,080; Case C, 970; Case D, 4,100; Case E, 11,000.

[h2] AI Empowers with Early Insights and Faster Workflows
Enhancements in AI technology in recent years have led to tools that work faster even when dealing with large datasets. They provide a clearer view of matters at an earlier stage in the game, so you can make more informed legal and strategy decisions right from the outset. They also get you to the end of document review more quickly, so you can avoid last-minute sprints and spend more time building your case.

[h3] Speed to strategy and completion
Comparison:
No/Old AI: Earliest insights emerge weeks to months into doc review. Modern AI: Initial insights available within days for faster case assessment and data-backed case strategy.
No/Old AI: Responsive review and priv review must happen in sequence. Modern AI: Responsive review and priv review can happen simultaneously.
No/Old AI: Responsive model goes back to the start if the dataset changes. Modern AI: Responsive models adapt to dataset changes.
No/Old AI: False negatives lead to surprises in later stages. Modern AI: No surprises.
No/Old AI: QC spends more time managing review and checking work. Modern AI: QC has more time to assess the substance of docs.
No/Old AI: Review drags on for months. Modern AI: Review completed in less time.
Examples:
Lighthouse AI crushes CAL for early insights. Case planning and strategy hinge on how soon you can assess responsiveness and privilege. Standard workflows for advanced AI from Lighthouse are orders of magnitude faster than traditional CAL models. On a dataset of 2M docs: building the responsive set took CAL & Regex 8 weeks versus 15 days for Lighthouse AI (including 2 weeks to train and 24 hours to produce probability assessments such as highly likely or highly unlikely); detecting sensitive info took CAL & Regex 8+ weeks, while the first Lighthouse AI probability assessments arrived within 24 hours.

[h2] AI Lowers eDiscovery Spend
The accuracy, risk mitigation, and speed of advanced AI tools and analytics add up to less eyes-on review, faster timelines, and lower overall costs.

[h3] Cost of eDiscovery
Comparison:
No/Old AI: Excessive eyes-on review requires more attorneys and higher costs. Modern AI: Eyes-on review can be strategically limited and assigned based on the data that requires human decision making.
No/Old AI: Doc review starts fresh with each matter. Modern AI: Doc review is informed and reduced by past decisions and insights.
No/Old AI: Lower accuracy of analytics means more downstream review and associated costs. Modern AI: Higher accuracy decreases downstream review and associated costs.
No/Old AI: ROI limited by document thresholds and capacity for structured data only. Modern AI: ROI enhanced by capacity for an astronomical number of datapoints across structured and unstructured data.
Examples:
Lighthouse AI trims $1M off privilege review costs. In a recent matter, Lighthouse's AI analytics rated 208K documents from the responsive set "highly unlikely" to be privileged. Rather than verify via eyes-on review, counsel opted to forward these docs directly to QC and production. In QC, reviewers agreed with Lighthouse AI's assessment 99.1% of the time. Removing 208K docs from priv review yielded an estimated $1.24M in savings, based on human review at a rate of 25 docs/hr and $150/hr per reviewer.
Lighthouse AI significantly reduces eyes-on review. The superior accuracy of Lighthouse AI helped outside counsel reduce eyes-on review by identifying a smaller responsive set, removing thousands of irrelevant foreign-language documents, and targeting privilege docs more precisely. In terms of privilege, using AI instead of privilege terms avoided 18K additional hours of review. "My team saved the client $4 million in document review and translation costs vs. what we would have spent had we used Brainspace or Relativity Analytics." – Head of eDiscovery innovation, Am Law 100 firm

[h2] Finding the Right AI for the Job
We hope this clarifies how AI can make a material difference in the areas that matter most to you, as long as it's the right AI. How can you tell whether an AI solution can help you accomplish your goals? Look for key attributes like:
Large language models (LLMs) – LLMs are what enable the nuanced, context-conscious searches that make modern AI so accurate.
Predictive AI – This is a type of LLM that makes predictions about responsiveness, privilege, and other classifications.
Deep learning – This is the latest iteration of how AI gets smarter with use; it's far more sophisticated than machine learning, which is an earlier iteration still used by many tools on the market.
If you find AI terminology confusing, you're not alone. Check out this infographic that provides simple, practical explanations. And for more information about AI designed with ROI in mind, visit our AI and analytics page below.
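The $1.24M figure cited in the cost section above follows directly from its footnoted assumptions (25 docs/hour, $150/hour per reviewer); a quick worked version of that arithmetic:

```python
# Worked version of the footnoted estimate: 208K docs removed from privilege review,
# at an assumed human review rate of 25 docs/hour and $150/hour per reviewer.
docs_removed = 208_000
docs_per_hour = 25
cost_per_hour = 150

hours_avoided = docs_removed / docs_per_hour   # 8,320 reviewer hours
savings = hours_avoided * cost_per_hour        # $1,248,000

print(f"{hours_avoided:,.0f} hours avoided, ${savings:,.0f} saved")  # ~ $1.24M
```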
October 27, 2023
eBook
ai-and-analytics, ediscovery-review

AI for eDiscovery: Terminology to Know

Everybody's talking about AI. To help you follow the conversation, here's a down-to-earth guide to the AI terms and concepts with the most immediate impact on document review and eDiscovery.

Predictive AI: AI that predicts what is true now or in the future. Give predictive AI lots of data (about the weather, human illness, the shows people choose to stream) and it will make predictions about what else might be true or might happen next. These predictions are weighted by probability, which means predictive AI is concerned with the precision of its output. In eDiscovery: available now. Tools with predictive AI use data from training sets and past matters to predict whether new documents fit the criteria for responsiveness, privilege, PII, and other classifications.

Generative AI: AI that generates new content based on examples of existing content. ChatGPT is a famous example. It was trained on massive amounts of written content on the internet. When you ask it a question, you're asking it to generate more written content. When it answers, it isn't considering facts. It's lining up words that it calculates will fulfill the request, without concern for precision. In eDiscovery: still emerging. So far, we have seen chatbots enter the market. Eventually it may take many forms, such as creating a first draft of eDiscovery deliverables based on commands or prior inputs.

Large Language Models (LLMs): Predictive AI and generative AI are types of large language models, AI that analyzes language in the ways people actually use it. LLMs treat words as interconnected pieces of data whose meaning changes depending on the context. For example, an LLM recognizes that "train" means something different in the phrases "I have a train to catch" and "I need to train for the marathon." In eDiscovery: available but not universal. Many document review tools and platforms use older forms of AI that aren't built with LLMs. As a result, they miss the nuances of language and view every instance of a word like "train" equally.

Ask an expert: Karl Sobylak, Director of Product Management, AI, Lighthouse
What about "hallucinations"? This is a term for when generative AI produces written content that is false or nonsensical. The content may be grammatically correct, and the AI appears confident in what it's saying. But the facts are all wrong. This can be humorous, but also quite damaging in legal scenarios. Luckily, we can control and safeguard against this. Where defensibility is concerned, we can ensure that AI models provide the same solution every time. At Lighthouse, we always pair technology with skilled experts, who deploy QC workflows to ensure precision and high-quality work product.
What does this have to do with machine learning? Machine learning is the older form of AI used by traditional TAR models and many review tools that claim to use AI. These aren't built with LLMs, so they miss the nuance of language and view words at face value.
How does that compare to deep learning? Deep learning is the stage of AI that evolved out of machine learning. It's much more sophisticated, drawing many more connections between data. Deep learning is what enables the multilayered analysis we see in LLMs.
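To ground the "predictive AI" entry above, here is a toy sketch of a probability-weighted document classifier. It uses a generic scikit-learn pipeline, not Lighthouse's models, and the training examples are invented purely for illustration.

```python
# Toy illustration of predictive classification: train on a few labeled examples,
# then emit a responsiveness probability for a new document. Not a production model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "Pricing discussion for the merger with Acme",    # responsive
    "Team lunch menu for Friday",                      # not responsive
    "Draft term sheet and valuation model attached",   # responsive
    "Reminder: parking garage closed this weekend",    # not responsive
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, labels)

new_doc = ["Updated valuation assumptions for the Acme deal"]
prob_responsive = model.predict_proba(new_doc)[0][1]  # probability of class 1 (responsive)
print(f"Responsiveness probability: {prob_responsive:.2f}")
```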
September 21, 2023
Whitepaper
ediscovery-review, ai-and-analytics, document review

Analyzing the Real-World Applications and Value of AI for eDiscovery

September 6, 2023
eBook
ediscovery-review, ai-and-analytics, document review

How AI Advancements Can Revolutionize Document Review

September 15, 2021
Whitepaper
TAR, Advanced AI, HSR Second Requests, Big data

TAR + Advanced AI: The Future Is Now

April 12, 2023
Whitepaper
ediscovery-review, data-privacy, modern-data, big-data, analytics

The Challenge with Big Data

October 14, 2021
eBook

Self-Service eDiscovery Buying Guide

May 18, 2022
eBook

Purchasing AI for eDiscovery - New, Now, and Next

November 23, 2022
eBook

eDiscovery Software Assessment Toolkit

June 16, 2022
eBook

eDiscovery Advancements Meet the Unique Challenges of Second Requests

November 1, 2021
eBook

2021 HSR Second Request Trends Report

May 1, 2023
eBook

Is Repeated Review Always Necessary?

September 29, 2023
Podcast
chat and collaboration data, information governance, Microsoft 365

The Great Link Debate and the Future of Cloud Collaboration

Michael Blank, Corporate Counsel – eDiscovery at DISH, and Lisa Lukaszewski, Of Counsel at Gunster, discuss how the issues with hyperlinks and collaboration data continue to transform. Links, modern attachments, shared documents—the descriptors for files exchanged through email and collaboration platforms continue to grow, with no clear consensus on what to call them or how exactly to handle them. Despite their wide use, why are they a persistent challenge for eDiscovery and data governance teams? Beyond semantics, links and attachments raise bigger questions about how to manage collaboration data as it proliferates in the evolving workplace. Michael Blank, Corporate Counsel – eDiscovery at DISH, and Lisa Lukaszewski, Of Counsel at Gunster, join Law & Candor to discuss how the issues with links and collaboration data continue to transform—including changes to ESI protocols—how recent legal decisions are contributing to the debate, and best practices for tackling these persistent challenges. This episode’s sighting of radical brilliance: “Carmakers are failing the privacy test. Owners have little or no control over data collected,” Frank Bajak, AP, September 6, 2023. Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
September 29, 2023
Podcast
AI, analytics, eDiscovery, Review, information governance, generative AI, PHI, PII, healthcare, HIPAA, podcast

Generative AI and Healthcare: A New Legal Landscape

Lighthouse welcomes Ty Dedmon, Partner and lead of Bradley’s healthcare litigation team, to assess how generative AI is impacting litigation and what we can do to minimize the risk. Although the novel and often comical uses of generative AI have captured more recent headlines—think philosophical conversations with a chatbot or essays written in seconds using AI—there are big changes happening across sectors of the economy thanks to the adoption of new tools and programs, including in the legal and healthcare spaces. Recent case law and legislation highlight the new landscape emerging in healthcare litigation, with potential long-term implications. Lighthouse welcomes Ty Dedmon, Partner at Bradley and lead of the firm’s healthcare litigation team, to assess how generative AI is impacting litigation and what we can do to prepare, and to share advice on leveraging AI innovation while minimizing the risk. This episode’s sighting of radical brilliance: “Top AI companies agree to work together toward transparency and safety,” Kevin Collier, NBC News, July 21, 2023. Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
September 29, 2023
Podcast
eDiscovery, Review,

Why Your eDiscovery Program and Technology Need Scalability

Lighthouse’s Brooks Thompson, Executive Director of Spectra, provides use cases for scaling and diversifying your eDiscovery platform and technology. As the demands of modern data, litigation, investigations, and data privacy continue to grow in scale and complexity, solutions for them need to adapt accordingly. Although there is a lot of noise around the latest generative AI promises or capabilities for eDiscovery, legal teams and counsel often merely need solutions that can effectively scale to the matters at hand. Deploying platforms or technology intended only for larger or more specific matters can be cumbersome and drain resources, leaving teams ill-equipped for the variety of projects they encounter. Lighthouse’s Brooks Thompson, Executive Director of Spectra Operations and Support, joins the podcast to provide practical advice and use cases for scaling and diversifying your eDiscovery platform and technology to make them more comprehensive. This episode’s sighting of radical brilliance: “Why Companies Can — and Should — Recommit to DEI in the Wake of the SCOTUS Decision,” Tina Opie and Ella F. Washington, Harvard Business Review, July 27, 2023. Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
September 29, 2023
Podcast
antitrust, AI, analytics, HSR, FTC, DOJ, M&A

What You Need to Know About the New FTC and DOJ HSR Changes

Brian Rafkin, counsel in Akin’s antitrust and competition practice, joins to examine the new HSR rules and share advice for utilizing AI and workflows to manage increased scrutiny. Continuing a more aggressive posture toward corporate mergers, the Department of Justice and Federal Trade Commission recently announced new HSR rules that dramatically change and expand the amount and type of information that needs to be submitted with HSR filings. How will this impact future M&A activity and Second Requests? Brian Rafkin, counsel in Akin’s antitrust and competition practice, joins the podcast to examine the new HSR rules and their potential implications. He also shares best practices for utilizing technology and workflows to manage increased scrutiny and pressure on deals. This episode’s sighting of radical brilliance: “United States takes on Google in biggest tech monopoly trial of 21st century,” Dara Kerr, NPR, September 12, 2023. Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
September 29, 2023
Podcast
legal operations, eDiscovery, Review

The Power of Three: Maximizing Success with Law Firms, Corporate Counsel, and Legal Technology

Law & Candor welcomes Michael Bohner, Managing Discovery Attorney at Cleary, and Justin Van Alstyne, Head of Discovery and Information Governance at T-Mobile, to explore the practical aspects of this partnership, including balancing responsibilities, employing technology, and building relationships. In demanding and highly contentious litigation or investigations, it can often feel like it’s every person for themselves, without much room for partnership. However, this is a lost opportunity. The relationship between the strong trio of corporate counsel, law firms, and legal technology providers is often an unacknowledged key to overcoming critical challenges. By sharing key information, balancing workloads, and building on each other’s expertise, these partners can work together to solve modern data challenges and the toughest matters. Law & Candor welcomes Michael Bohner, Managing Discovery Attorney at Cleary, and Justin Van Alstyne, Head of Discovery and Information Governance at T-Mobile, to explore the practical aspects of this partnership, including balancing responsibilities, employing technology, and building relationships. This episode’s sighting of radical brilliance: “Meet Aleph Alpha, Europe’s Answer to OpenAI,” Morgan Meaker, Wired, August 30, 2023. Learn more about the show and our speakers on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
March 29, 2023
Podcast
information-governance, data-privacy, microsoft-365

Prioritizing Information Governance and Risk Strategy for a Dynamic Economic Climate

Lica Patterson, Senior Director of Global Advisory Services at Lighthouse, discusses how assessing short and long-term risk can inform a more strategic information governance program. As we continue to grapple with a strange and unpredictable economic environment, establishing your legal and information governance priorities can be daunting. While directing investment and energy into the most urgent matters is a reflex during a down economy, neglecting more long-term data issues and risk can be detrimental. How do you balance these interests with already strapped resources? Lica Patterson, Senior Director of Global Advisory Services at Lighthouse, joins the podcast to discuss how assessing short and long-term risk can inform a more strategic information governance program. She also shares how the right technology and teams contribute to accomplishing goals and evolving your program. This episode’s sighting of radical brilliance: “3 trends will shape the future of work, according to Microsoft’s CEO,” World Economic Forum, February 10, 2023. If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
December 15, 2022
Podcast
podcast, mental health, diversity-equity-and-inclusion,

Legal’s Mental Health Imperative

Amy Sellars, Senior Legal Counsel at CBRE, joins Law & Candor to discuss some of the contributors to mental health challenges in the legal industry and some practical approaches to remedy them. To kick off the episode, Bill and Paige discuss a piece from Law.com that looks at a recent surge in diverse, female general counsels. Next, they welcome Amy Sellars, Senior Legal Counsel, eDiscovery Operations, at CBRE, for an important conversation about the mental health crisis in the legal industry. They discuss some of the drivers of mental health challenges and what can be done at an individual and industry level to help. They explore a variety of questions, including: How has the pandemic or other factors contributed to the greater challenges with mental health we’ve read about? Improving mental health is a challenge we’ve seen many industries grapple with recently. Are there unique challenges in legal and eDiscovery that have contributed to the epidemic we’re seeing today? While we’ve heard about ways to personally manage stress, there are also some structural issues at play. What are some strategies or approaches you’ve seen to help improve work/life balance or how work is allocated? As an industry, how can we continue this conversation and keep advancing initiatives to improve mental health and well-being for everyone? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.
March 29, 2023
Podcast
collections, review, emerging data sources, podcast, production, chat-and-collaboration-data, ediscovery-review

The Chat Effect: Improving eDiscovery Workflows for Modern Collaboration Data

Law & Candor welcomes Vanessa Quaciari, Senior eDiscovery Counsel at Baker Botts, to discuss improvements in collection, review, and production that can help you manage collaboration data. We are all participating in the unprecedented evolution of workplace communication. From virtually editing a shared document, to “liking” a chat message, to responding to a colleague with an emoji during a video call—most employees in a modern work environment are actively (and often unknowingly) creating large volumes of collaboration data. For the legal and eDiscovery professions, the speed of this innovation has necessitated parallel rapid advancements in technology and new approaches to workflows to stay ahead of the complexity and scale of chat and collaboration data. Law & Candor welcomes Vanessa Quaciari, Senior eDiscovery Counsel at Baker Botts, to discuss improvements in collection, review, and production that can help you manage collaboration data and scale your approach as the evolution continues. This episode’s sighting of radical brilliance: ChatGPT. If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
March 29, 2023
Podcast
review, ai/big data, podcast, managed review, ai-and-analytics, legal-operations

Optimizing Review with Your Legal Team, AI, and a Tech-Forward Mindset

Lighthouse’s Mary Newman, Executive Director of Managed Review, joins the podcast to explore how adopting a technology-forward mindset can provide better results for document review teams. To keep up with the big data challenges in modern review, adopting a technology-enabled approach is critical. Modern technology like AI can help case teams defensibly cull datasets and gain unprecedented early insight into their data. But if downstream document review teams are unable to optimize technology within their workflows and review tasks, many of the early benefits gained by technology can quickly be lost. Lighthouse’s Mary Newman, Executive Director of Managed Review, joins the podcast to explore how document review teams that adopt a technology-forward mindset can provide better review results now and in the future. This episode’s sighting of radical brilliance: “An A.I. Pioneer on What We Should Really Fear,” New York Times, December 21, 2022. If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
March 29, 2023
Podcast
podcast, data reuse, document review, chat-and-collaboration-data, ediscovery-review

Why Your Data is Key to Reducing Risk and Increasing Efficiency During Investigations and Litigation

Cassie Blum, Senior Director of Review Consulting at Lighthouse, discusses how to implement a data reuse strategy, including what technology and workflows can optimize its success. Handling large volumes of data during an investigation or litigation can be anxiety-inducing for legal teams. Corporate datasets can become a minefield of sensitive, privileged, and proprietary information that legal teams must identify as quickly as possible in order to mitigate risk. Ironically, corporate data also provides a key to speeding up and improving this process. By reusing metadata and work product from past matters in combination with advanced analytics, organizations can significantly reduce risk and increase efficiency during the review process. Law & Candor welcomes Cassie Blum, Senior Director of Review Consulting at Lighthouse, to discuss how to implement this data strategy, including what technology and workflows can optimize its success. This episode’s sighting of radical brilliance: “7 Ways to be a more inclusive colleague,” Fast Company, February 24, 2023. If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
December 15, 2022
Podcast
review, data-re-use, ai/big data, podcast, ai-and-analytics, ediscovery-review

Review Analytics for a New Era

Law & Candor welcomes Kara Ricupero, Associate General Counsel at eBay, for a conversation about how analytics and reimagining review can help solve data challenges and advance business imperatives. In episode two, we introduce our new co-host Paige Hunt, Vice President of Global Discovery Solutions at Lighthouse, who will be joining Bill Mariano as our guide through the legal technology revolution. In their first Sighting of Radical Brilliance together, they chat about an article in Wired that explores the rise of the AI meme machine, DALL-E Mini. Then, Paige and Bill interview Kara Ricupero, Associate General Counsel and Head of Global Information Governance, eDiscovery, and Legal Analytics at eBay. They explore how a dynamic combination of new technology and human expertise is helping to usher in new approaches to review and analytics that can help tackle modern data challenges. Other questions they dive into include: How did you identify the kind of advanced technology needed for modern data challenges? Partnering with the right people and experts across the business to utilize technology and insights seems to be a big part of the equation. How did you work with other stakeholders to leverage analytics? With new analytics and intelligence, has it changed how you approach review on matters or other processes? How do you think utilizing analytics will evolve as data and review continue to change? What kinds of problems do you think it can help solve? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, listen and rate the show wherever you get your podcasts, and join in the conversation on Twitter.
March 29, 2023
Podcast
microsoft, emerging data sources, podcast, record management, microsoft-365, chat-and-collaboration-data

Everything Dynamic Everywhere: Managing a More Collaborative Microsoft 365

Emily Dimond, Managing Senior Counsel, eDiscovery, at PNC Bank, shares practical strategies for managing updates in Microsoft 365 and how to develop an agile governance program. Collaborative technology—great for employee productivity but often challenging for legal and IT departments. Balancing the risk and reward requires a deep understanding of ever-evolving updates while proactively managing those changes. As organizations adopt cloud-based enterprise software like Microsoft 365, previous change management and governance approaches are often no longer sufficient. Emily Dimond, Managing Senior Counsel, eDiscovery, at PNC Bank, shares practical strategies for managing updates in M365, including recent changes to transcripts and loop components, and how to develop a strong governance program equipped for today’s dynamic landscape. This episode’s sighting of radical brilliance: “Where is Tech Going in 2023?” Harvard Business Review, January 26, 2023. If you enjoyed the show, learn more about our speakers and subscribe on lawandcandor.com, rate us wherever you get your podcasts, and join in the conversation on LinkedIn and Twitter.
March 31, 2022
Podcast
cloud migration, legacy data remediation, legal holds, podcast, record management, preservation, risk management, data-privacy, chat-and-collaboration-data, microsoft-365,

Spring Cleaning for Legal Teams: The Cloud and Defensible Deletion of Data

Law & Candor welcomes Erika Namnath of Lighthouse to discuss new challenges with data retention and deletion in the Cloud, developing a defensible disposal program, and getting stakeholder buy-in. To kick off the show, Bill Mariano and Rob Hellewell discuss another Sighting of Radical Brilliance: how scientists are using AI to identify new drug combinations for children with incurable brain cancer. Next, they interview Erika Namnath from Lighthouse about how to develop a sound and efficient defensible deletion program and the benefits of getting buy-in for it throughout an organization. Some of the key questions they discuss include: Defensible disposal of data continues to be a key challenge for eDiscovery and information governance programs. Why has this issue persisted and how has it evolved? Historically, because of the risk of deleting important information or not being able to defend deletion, teams have defaulted to saving as much as possible. Why is this approach becoming increasingly untenable and even posing a greater risk? How should leaders approach developing a data retention and disposal program or updating their existing one? When developing these retention policies and updates, we often hear about challenges with legacy data and legal holds. How can teams wrap their heads around existing data while also considering what they’re retaining today? It seems a significant challenge for these programs is gaining stakeholder buy-in and assigning ownership for retention and deletion. What can leaders do to tackle this? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Blog post: Cloud Adaptation: How Legal Teams Can Implement Better Information Governance Structures for Evolving Software; Blog post: Making the Case for Information Governance and Why You Should Address It Now; Podcast: Achieving Information Governance through a Transformative Cloud Migration; Article: Scientists use AI to identify new drug combination for children with incurable brain cancer. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for eDiscovery, compliance, and information governance. To learn more about the show and our speakers, visit the podcast homepage.
December 15, 2022
Podcast
collections, emerging data sources, departing/onboarding employee, podcast, preservation, risk management, chat-and-collaboration-data, data-privacy, digital-forensics,

Data Governance for the BYOD Age

Our hosts chat with Lighthouse's John Bair about implementing proactive data management programs and emerging challenges with remote working, including mobile devices and collaboration data. Law & Candor returns for Season 10 with co-hosts Bill Mariano and Rob Hellewell. They kick off the episode with a discussion of a Harvard Business Review article about the ways AI can make strategy more human. Next, they are joined by John Bair, Senior Consultant in Digital Forensics at Lighthouse, to discuss bring your own device (BYOD) policies, implementing proactive data management programs, and emerging data challenges with remote working. Some questions that they tackle include: From a data governance and management perspective, what are the greatest challenges that have emerged from working from home and BYOD policies? Many organizations may have governance programs in place but still struggle with new data sources or devices. What can make some programs inadequate to face these changes? For those needing to refresh their governance approach, or build something new, what advice do you have for creating a more proactive program to get ahead of these data challenges? How should legal teams work with IT to ensure these types of programs are a success? How should we think about their roles? As mobile devices and virtual work continue to advance, how can teams ensure their data governance programs keep pace? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, listen and rate the show wherever you get your podcasts, and join in the conversation on Twitter.
March 25, 2022
Podcast
ccpa, gdpr, dsars, cross border data transfers, pii, podcast, privacy shield, data-privacy,

Mapping Updates to Data Privacy Regulations Worldwide

Our hosts chat with Lighthouse's Sarah Moran about updates to privacy regulations in the U.S., Europe, and China, how they're impacting businesses, and what's next on the horizon. Bill Mariano and Rob Hellewell kick off this episode with another segment of Sightings of Radical Brilliance, where they discuss major privacy changes by Google and Apple in their mobile software. Next, our hosts chat with Sarah Moran, eDiscovery Evangelist and Proposal Content Strategist at Lighthouse, about updates to privacy regulations in the U.S., Europe, and China. They also dive into the following key questions: How is the enforcement of GDPR impacting businesses? How has the UK’s departure from the EU impacted privacy compliance? With so many states pursuing their own privacy regulations, do we anticipate any movement on a federal level? Beyond the U.S. and Europe, what does the privacy landscape look like internationally? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Blog post: 2021 Data Privacy Overview: New Regulations and Guidance; Blog post: Navigating the Intersections of Data, Artificial Intelligence, and Privacy; Blog post: The Impact of Schrems II & Key Considerations for Companies Using M365: The Cloud Environment; Article: Google Plans Privacy Changes, but Promises to Not Be Disruptive.
December 15, 2022
Podcast
gdpr, cross border data transfers, podcast, privacy shield, data-privacy, chat-and-collaboration-data, ai and analytics, microsoft-365

Anonymization and AI: Critical Technologies for Moving eDiscovery Data Across Borders

Our hosts are joined by Lighthouse's Damian Murphy for a lively chat about what AI solutions can be deployed to optimize eDiscovery workflows and maximize data insights while adhering to privacy laws. In this episode's Sighting of Radical Brilliance, our hosts discuss strategies for putting your data to work outlined in a recent Harvard Business Review article. To elucidate the complexities of moving data across borders, Lighthouse's Damian Murphy, Executive Director of Advisory Services in EMEA, joins the podcast. With Paige and Bill, Damian explains recent updates to data transfer policies and what AI solutions can be deployed to optimize eDiscovery workflows and maximize data insights while adhering to privacy laws. Some key questions they answer include: With fines continuing to be issued for GDPR violations and organizations grappling with how to transfer data across regions, data privacy is still not a resolved issue. What are some recent policy changes our audience should be aware of? How have these created challenges for the ways that data is managed and how organizations can ultimately utilize it? Many of our listeners are likely aware of how anonymization and pseudonymization are being utilized, but can you remind us how they work? Is there a typical approach for a client faced with the need to supply data held within the EU in order to comply with an eDiscovery order in the US? If the past is any indication, we should expect privacy policies to continue to change and impact data governance. How are anonymization and pseudonymization, and other approaches, helping prepare for what’s on the horizon? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.
December 15, 2022
Podcast
podcast, dei, diversity-equity-and-inclusion

A Journey from One to All in Legal with Diversity, Equity, and Inclusion

Lighthouse's Reem Saffouri joins Law & Candor to share her personal journey and discuss how individuals can create greater equity and inclusion at work, in their industry, and beyond. Our hosts begin the show with another Sighting of Radical Brilliance, an article in Forbes about one of the most powerful sources of big data your company already owns. Then, Reem Saffouri, Vice President of Client Solutions and Success at Lighthouse, joins the podcast to share her personal journey and discuss how individuals can create greater equity and inclusion at work, in their industry, and beyond. Here are some of the key questions they dive into: Although it’s a seemingly simple act, why don’t more people share their personal experiences and why is it so important for DEI efforts? Hearing about structural challenges to DEI can be intimidating and somewhat demoralizing. But along with sharing personal experiences, what can individuals do to champion DEI at their organizations? There are nuances and specific solutions that work in each industry for improving equity and inclusion. What are you seeing in legal and legal tech that’s moving the needle? As you look to the future, what aspects of DEI are you hoping to impact? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.
December 15, 2022
Podcast
self-service, spectra, podcast, ediscovery-and-review, ai-and-analytics

Investigative Power: Utilizing Self Service Solutions for Internal Investigations

Our hosts chat with Justin Van Alstyne, Senior Corporate Counsel at T-Mobile, about best practices for handling internal investigations, including the self-service tools that have been most effective. Paige and Bill start the show with new and exciting research from MIT Sloan on artificial intelligence and machine learning. Next, they interview Justin Van Alstyne, Senior Corporate Counsel, Discovery and Information Governance at T-Mobile. They dive into internal investigations, including how a simple, on-demand software solution can offer the scalability and flexibility teams need to manage investigations with varying amounts of data. Some other questions they explore are: How we collaborate and work has changed immensely over the past few years, and that evolution doesn’t appear to be slowing down. How have new tools and data sources complicated conducting internal investigations? With organizations encountering investigations of different sizes and degrees, what workflows or approaches have you found are most flexible to respond to this variability? Along with process, technology is another key part of the equation. When choosing the right technology for internal investigations, what are some of your high-priority considerations? Are there any features that are must-haves? For people contemplating deploying a self-service solution, what advice do you give to ensure your team has the right level of expertise and technology to handle their internal investigations at scale? If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter.
April 13, 2022
Podcast
microsoft, cloud services, podcast, microsoft-365, information-governance

Microsoft 365 and the Age of Automation

Microsoft’s Stefanie Bier joins Law & Candor to delve into the key types of automation required to support Microsoft 365 at scale for large organizations using Core or Advanced eDiscovery. Bill Mariano and Rob Hellewell bring listeners another Sighting of Radical Brilliance. They discuss an episode of Fast Company’s podcast Innovation Unrestricted that explores how companies can incorporate diversity and inclusion into product design. They are then joined by Stefanie Bier, Senior Program Manager at Microsoft, to chat about how to deploy critical automation in Microsoft 365 and key updates on the horizon. Some questions they explore include: Automation is increasingly becoming a critical component of managing data and scaling programs. What are some of the new ways collaboration platforms, specifically M365, have introduced automation? What are the benefits of adopting these automated processes? What are some of the key types of automation that are necessary to optimize M365? With the cloud and automated updates, platforms are undergoing faster changes than ever before. How do you stay on top of them and ensure there’s cross-functional alignment at your organization? Whether it’s fear of error or worry about loss of control, some are reticent to automate certain aspects of their programs. What are the risks in not adopting automation? Our co-hosts wrap up the episode with advice for amplifying other women’s voices in the legal and technology industries and some key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, listen and rate the show wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Podcast: Understanding Microsoft 365 Unindexed Items; Blog post: An Introduction to Managing Microsoft 365 Updates that Present Legal and Compliance Considerations; Blog post: Breaking the Bias: Strategies from Top Women Leaders in Legal Technology; Podcast: Innovation Unrestricted – How companies can incorporate diversity and inclusion into product design.
March 25, 2022
Podcast
podcast, diversity-equity-and-inclusion,

Leading in Legal with Inclusive Mentorship

Kelly McGill, Chief People Officer at Lighthouse, discusses the value of mentorship, what a good mentorship program looks like in a virtual work environment, and how to create inclusive cultures. Kicking off season 9 of Law & Candor, co-hosts Bill Mariano and Rob Hellewell welcome listeners back for a celebration of Women’s History Month. Each guest this season is a woman breaking bias, advancing technology, and championing inclusion in the legal and technology industries. First, they dive into Sightings of Radical Brilliance, discussing a Harvard Business Review article about being a better ally in a remote workplace. Bill and Rob are then joined by Kelly McGill, Chief People Officer at Lighthouse, to chat about the value of mentorship, what a good mentorship program looks like in a virtual or hybrid work environment, and how to create a more inclusive culture. Some key questions they explore include: Why is mentorship so powerful? What should people seek in a mentor and what makes a good mentee? What are best practices for mentoring in a virtual environment? How does mentorship contribute to more inclusive cultures? Our co-hosts wrap up the episode with advice for amplifying other women’s voices and key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, listen and rate the show wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Blog post: Breaking the Bias: Strategies from Top Women Leaders in Legal Technology; Blog post: Charting the Path to Progress: A Conversation with Economic Forecaster Marci Rossell and Lighthouse CEO Brian McManus; Podcast: Diversity and eDiscovery: How Diverse Hiring Practices Lead to a More Innovative Workforce; Article: Managers, Here’s How to Be a Better Ally in the Remote Workplace.
March 25, 2022
Podcast
podcast, project management, risk management, ai-and-analytics, legal-operations, ediscovery-review,

Legal’s Balancing Act: Risk, Innovation, and Advancing Strategic Priorities

Megan Ferraro, Associate General Counsel, eDiscovery & Information Governance at Meta, joins Law & Candor to discuss the pivotal role legal is playing in helping innovation thrive while managing risk. Co-hosts Bill Mariano and Rob Hellewell start the show with Sightings of Radical Brilliance. In this episode, they review an article in Reuters exploring lawyer attrition and the “great resignation.” Next, they interview Megan Ferraro, Associate General Counsel, eDiscovery & Information Governance, at Meta. They discuss the delicate balance that must be struck between risk and innovation and explore some of the following questions: How did the legal function evolve to play a bigger role in corporate strategy and innovation? What are the broader trends in the ways legal teams are supporting innovation? With businesses growing, adding new technology, and pivoting strategy quickly, what are the most critical risk challenges legal teams face today? How can legal best work with other functions in an organization to ensure strategic priorities are advanced—through new deals or technology, for example—while also balancing the risk factors? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Blog post: Analytics and Predictive Coding Technology for Corporate Attorneys: Six Use Cases; Podcast: Innovating the Legal Operations Model; Blog post: What Skills Do Lawyers Need to Excel in a New Era of Business?; Blog post: Purchasing AI for eDiscovery: Tips and Best Practices; Article: To stem lawyer attrition, law firms must look beyond cash - report.
November 16, 2021
Podcast
privilege, review, ai/big data, tar/predictive coding, podcast, production, ai-and-analytics, ediscovery-review

Staying Ahead of the AI Curve

Our hosts and Harsha Kurpad of Latham & Watkins discuss how to stay apprised of changes in AI technology in the ediscovery space and practical applications for more advanced analytics tools. Co-hosts Bill Mariano and Rob Hellewell start the show with Sightings of Radical Brilliance. In this episode, they review a recent New York Times article by Cade Metz that explores how new organizations are using AI to find bias in AI. Next, they bring on Harsha Kurpad of Latham & Watkins, who answers the following questions about staying ahead of AI innovation in legal technology: What are some current barriers to adopting AI? How do you stay apprised of new AI technology, tools, and solutions? What are new data challenges that are leading to a greater adoption of AI or requiring the use of more sophisticated tools? How are government entities like the FTC and DOJ changing how AI is being used and what is required during investigations? What are some best practices for training algorithms and staying on top of new approaches to training? What are some of the risks in not adopting AI or not staying apprised of changes to the tools, platforms, and how they are being used? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter. Related Links: White Paper: The Challenge with Big Data; Blog Post: What Attorneys Should Know About Advanced AI in eDiscovery: A Brief Discussion; Podcast: AI and Analytics for Corporations: Common Use Cases; Blog Post: What is the Future of TAR in eDiscovery? (Spoiler Alert – It Involves Advanced AI and Expert Services).
November 16, 2021
Podcast
microsoft, emerging data sources, podcast, record management, preservation, microsoft-365, chat-and-collaboration-data, information-governance,

Understanding Microsoft 365 Unindexed Items

James Hart of Lighthouse and our hosts discuss this complex aspect of Microsoft 365 eDiscovery, identify best practices and mitigation strategies, and share proactive tips for the future. Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss a framework for building accountability into AI from an article in Harvard Business Review by Stephen Sanford. In this episode, Bill and Rob are joined by James Hart of Lighthouse. They discuss this critical component of Microsoft 365 and its important role in maximizing the effectiveness of ediscovery workflows and mitigation strategies. Key questions from their conversation include: What are unindexed items and how critical are they to efficiency in ediscovery workflows? After identifying unindexed items, what is the next step and how do you approach it? What are some key strategies for handling unindexed items? How are different organizations approaching unindexed items from a policy perspective? What are best practices for approaching this unique issue in Microsoft 365? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter. Related Links: Blog Post: An Introduction to Managing Microsoft 365 Updates that Present Legal and Compliance Considerations; Blog Post: Making the Case for Information Governance and Why You Should Address It Now; White Paper: The Impact of Schrems II and Key Considerations for Companies Using M365; Podcast: Keeping Up with M365 Software Updates.
March 31, 2022
Podcast
ai/big data, tar/predictive coding, hsr second requests, podcast, acquisitions, mergers, ai-and-analytics, antitrust

Closing the Deal: Deploying the Right AI Tool for HSR Second Requests

Gina Willis of Lighthouse joins the podcast to explore some of the modern challenges of HSR Second Requests and how a combination of expertise and AI technology can lead to faster and better results. Bill Mariano and Rob Hellewell kick off this episode with another segment of Sightings of Radical Brilliance, where they discuss JPMorgan becoming the first bank to have a presence in the metaverse. Next, our hosts chat with Gina Willis, Analytics Consultant at Lighthouse, about how the right AI tool and expertise can help with HSR Second Requests. They also dive into the following key questions: What are some of the contemporary challenges with Second Requests? What AI tools are helping with some of these modern challenges? For Second Requests, what interaction and feedback between attorneys and AI algorithms is optimal to ensure substantial compliance is reached efficiently? Are there some best practices for improving this relationship—deploying the AI better or optimizing algorithms? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us wherever you get your podcasts, and join in the conversation on Twitter. Related Links: Blog post: Deploying Modern Analytics for Today’s Critical Data Challenges in eDiscovery; Blog post: Biden Administration Executive Order on Promoting Competition: What Does it Mean and How to Prepare; Article: JPMorgan bets metaverse is a $1 trillion yearly opportunity as it becomes first bank to open in virtual world.
November 16, 2021
Podcast
ccpa, gdpr, cybersecurity, emerging data sources, pii, podcast, hipaa/phi, data-privacy, information-governance

Getting Personal—Wearable Devices, Data, and Compliance

Thora Johnson of Orrick joins Bill and Rob to discuss the new data landscape with wearable devices and health apps, and how it has impacted data compliance, cybersecurity, and privacy concerns. In the final episode of the season, co-hosts Bill Mariano and Rob Hellewell review a New Yorker piece by Kyle Chayka about the beauty and uncanniness of AI-created images delivered by the Twitter handle @images_ai. The co-hosts then bring on Thora Johnson of Orrick for a riveting discussion about the rise in wearable devices and the personal data they’re collecting. They discuss the fascinating innovation in health-related technology and apps and the significant data compliance, privacy, and cybersecurity issues that are accompanying it. Some key questions from their conversation include: Beyond the more well-known wearable devices and health-related apps, what others are out there and what types of data are they collecting? The proliferation of data these devices and apps are generating has created a unique set of intersecting compliance, security, and privacy challenges—what are some of the most critical to understand? How can teams mitigate the risk of a cyber breach? And in the event it does happen, what are best practices in terms of responding to a breach? What should attorneys and legal teams know about the FTC’s recent announcement that it plans to “vigorously” enforce its 2009 Health Breach Notification Rule? What regulatory issues related to apps collecting genetic information should people be aware of? The season ends with key takeaways from the guest speaker segment. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
November 16, 2021
Podcast
review, emerging data sources, ai/big data, podcast, ai-and-analytics, ediscovery-review

Finding Lingua Franca: The Power of AI and Linguistics for Legal Technology

In this episode, Amanda Jones of Lighthouse illuminates some common challenges and pitfalls that can arise with modern language in ediscovery. In the very first episode of season eight, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. They start off with some exciting news about Lighthouse and the recent acquisition of H5. They then dive into Sightings of Radical Brilliance, the part of the show highlighting the latest news of noteworthy innovation and acts of sheer genius. In this episode, they discuss an article in the AP that investigates how AI-powered tech landed a man in jail with scant evidence. Bill and Rob discuss the case and the AI technology involved, and what questions this raises regarding scientifically validating AI and its use as evidence in criminal cases. Bill and Rob are then joined by Amanda Jones of Lighthouse to discuss common challenges and pitfalls that can arise with modern language in ediscovery, and the interplay between AI and linguistics. Some key questions they explore include: What is linguistic modeling? What are the critical challenges with modern language and ediscovery today? How is linguistics informing and impacting AI in ediscovery? What are best practices for implementing AI solutions and tools? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
November 16, 2021
Podcast
privilege, review, ai/big data, tar/predictive coding, podcast, ediscovery-review, ai-and-analytics

eDiscovery Review: Family Vs. Four Corner

Pooja Lalwani of Lighthouse and our hosts discuss these two ediscovery review methodologies and walk through the advantages and disadvantages of both and which better supports AI technology. Bill Mariano and Rob Hellewell kick off this episode with another segment of Sightings of Radical Brilliance, where they discuss Dalvin Brown’s piece in the Washington Post about how AI was used to recreate actor Val Kilmer’s voice. Bill and Rob consider this great scientific achievement along with the potentially nefarious ways it can be used. Next, our hosts chat with Pooja Lalwani of Lighthouse about two key approaches to ediscovery review: family and four corner. Pooja helps break down the benefits and drawbacks of each through questions such as: What are some of the key differences between the two approaches? With modern communication platforms and data creating a more dynamic and complex review process, what are some of the considerations for when and how to deploy family and four corner review? Which review methodology is better suited to supporting TAR and AI tools? How do these review methodologies either help classify privilege more efficiently or potentially create limitations? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
November 16, 2021
Podcast
collections, tar/predictive coding, hsr second requests, processing, podcast, data reuse, project management, ediscovery-review, ai-and-analytics

Achieving Cross-Matter Review Discipline, Cost Control, and Efficiency

Bill and Rob bring on Jason Rylander of Axinn to discuss techniques for unifying matter data across an organization’s portfolio and how it can save significant time and money on document review. Join co-hosts Bill Mariano and Rob Hellewell as they discuss a law firm that only works on artificial intelligence and whether this is an emerging trend for the industry. Next, they’re joined by Jason Rylander of Axinn to discuss the antitrust landscape, the benefits of cross-matter review, and techniques for unifying matter data across an organization’s portfolio. Jason and our hosts walk through key questions, including: With a new administration and the continued disruption from COVID, has there been an increase in the volume of antitrust matters, investigations, and litigation? What are some of the challenges or disadvantages of doing the traditional single-matter document review? What are some strategies for identifying work product or data that can be reused or repurposed? What are some best practices when connecting matters? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
March 23, 2021
Podcast
legal ops, podcast, legal-operations

Innovating the Legal Operations Model

In the second episode of season seven, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they review a recent NY Times article written by Brian Chen that focuses on the tech that will invade our lives in 2021. Next, they bring on Julie Johnson of Align, who answers the following questions around innovation in legal operations: How has Covid impacted legal departments and budgets in general? Why did this bring about the need to focus on innovation and automation? What are some of the newer innovations and solutions you are seeing your fellow legal operations peers adopt? What recommendations would you share with those looking to adopt technology and drive efficiency? What advice would you give to other women in the ediscovery industry looking to move their careers forward? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
March 23, 2021
Podcast
microsoft, podcast, microsoft-365, information-governance, chat-and-collaboration-data,

Keeping Up with M365 Software Updates

In the fourth episode of the seventh season, co-hosts Bill Mariano and Rob Hellewell discuss why diversity in AI is important and how this could impact legal outcomes and decisions. Next, they introduce their guest speaker, Jamie Brown of Lighthouse, who uncovers key strategies to keep up with the constant flow of Microsoft 365 software updates. Jamie answers the following questions (and more) in this episode: What are some of the common challenges associated with M365's rapid software updates? How do these constant updates lead to compliance risks? What are some best practices for overcoming these challenges? What recommendations would you pass along to those who are experiencing these challenges? What advice would you give to other women in the ediscovery industry looking to move their careers forward? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
March 23, 2021
Podcast
microsoft, podcast, chat-and-collaboration-data, microsoft-365

Efficiently and Defensibly Addressing Microsoft Teams Data

Bill Mariano and Rob Hellewell kick off episode 3 with another segment of Sightings of Radical Brilliance, where they discuss Anis Uzzaman's Inc.com article that dives into 2021 business and technology trends. Bill and Rob review these trends and discuss how they will have an impact on the space. Next, Bill and Rob chat with Royce Cohen of Lighthouse about key ways to efficiently and defensibly address Microsoft Teams data. In this interview, Royce uncovers the answers to the following questions: How do you achieve a balance between encouraging collaboration amongst colleagues and the ediscovery impact of that collaboration? What are some of the challenges associated with the rise in Teams data? How do you overcome those challenges? How do organizations ensure they are overcoming those challenges efficiently and defensibly? What advice would you give to other women in the ediscovery industry looking to move their careers forward? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter. Related Links Blog Post: Key Compliance & Information Governance Considerations As You Adopt Microsoft Teams Podcast: Tackling Modern Attachment and Link Challenges in G-Suite, Slack, and Teams
March 23, 2021
Podcast
podcast, diversity-equity-and-inclusion

Diversity and eDiscovery: How Diverse Hiring Practices Lead to a More Innovative Workforce

In the very first episode of season seven, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. They note that in celebration of Women's History Month (March), season seven will feature an all-female guest speaker lineup exploring industry hot topics, as well as key tactics for championing the career growth of women within the space. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show highlighting the latest news of noteworthy innovation and acts of sheer genius. In this episode, they dive into a recent article written by Ayang Macdonald for BiometricUpdate.com that discusses Aratek's new biometric finger scanner with enhanced security. Bill and Rob discuss this new fingerprint scanning technology and what it (and other tech like it) could mean for the future of the legal space. For the guest speaker segment of the show, Bill and Rob bring on Stacy Ybarra of Lighthouse to discuss diversity in ediscovery and how diverse hiring practices can lead to a more innovative workforce via the following questions: How does diversity feed innovation in ediscovery? What are some of the key ways diversity impacts organizations directly? How does leading with empathy and inclusion make an impact? What are some best practices for those looking to champion diversity within their organization and the industry through employee resource groups? What advice would you give to other women in the ediscovery industry looking to move their careers forward? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
December 3, 2020
Podcast
data-privacy, ai/big data, phi, pii, podcast, ai-and-analytics

The Convergence of AI and Data Privacy in eDiscovery: Using AI and Analytics to Identify Personal Information

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss the challenges and implications of misinformation around voting in the U.S. In this episode, Bill and Rob are joined by John Del Piero of Lighthouse. The three of them discuss how PII and PHI can be identified more efficiently by leveraging tools like AI and analytics via the following questions: Why is it important to identify PII and PHI within larger volumes of data quickly? How can AI and analytics help to identify PII and PHI more efficiently? What are the key benefits of using these tools? Are there any best practices to put in place for those looking to weave AI and analytics into their workflow? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.
March 23, 2021
Podcast
ai/big data, podcast, ai-and-analytics

AI and Analytics for Corporations: Common Use Cases

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss the growing use of emotion recognition in tech in China and how this could lead to some challenges in the legal space down the road. In this episode, Bill and Rob are joined by Moira Errick of Bausch Health. The three of them discuss common AI and analytics use cases for corporations via the following questions: What types of AI and analytics tools are you using and for what use cases? What is ICR and how have you been leveraging this internally? What additional use cases are you hoping to use AI and analytics for in the future? What are some best practices to keep in mind when leveraging AI and analytics tools? What recommendations do you have for those trying to get their team on board? What advice would you give to other women in the ediscovery industry looking to move their careers forward? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.
December 3, 2020
Podcast
cybersecurity, data-privacy, podcast, legal-operations, information-governance

Reducing Cybersecurity Burdens with a Customized Data Breach Workflow

Bill Mariano and Rob Hellewell kick off episode 3 with another segment of Sightings of Radical Brilliance where they discuss the EU striking down the Privacy Shield and what that means for the legal realm. Next, Bill and Rob chat with Jeremiah Weasenforth of Orrick about a recent customized data breach workflow that Jeremiah and his team implemented to significantly reduce the burdens of a data breach. In this interview, Jeremiah uncovers the answers to the following questions: What are the burdens of a major data breach? What impacts do DSARs and the CCPA have on these breaches? How do you get started with a customized workflow? What technology should one use? How do you implement the workflow internally? What key tips are there for those experiencing cybersecurity burdens today? The show concludes with key takeaways from the guest speaker segment. Subscribe to Law & Candor here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.
December 3, 2020
Podcast
preservation and collection, podcast, digital-forensics, chat-and-collaboration-data

Does Cellular 5G Equal 5x the Fraud and Misconduct Risk?

In the very first episode of season six, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this episode, they dive into a recent article from ITPro.com that discusses the increase in insider data breaches with the remote work shift. For the guest speaker segment of the show, Bill and Rob bring on Jerry Bui of Lighthouse to discuss cellular 5G and how it could lead to more fraud and misconduct risk via the following key questions: How does 5G lead to fraud and misconduct? What insider threats are there (e.g., shadow IT, encrypted messages)? What about outsider threats (e.g., activity outside of IT's purview, data breaches, hacking)? How does this impact compliance programs? How does one overcome 5G challenges? Are there other recommended best practices related to this topic? The episode wraps up with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.
December 3, 2020
Podcast
data-privacy, cross border data transfers, podcast, ai-and-analytics

Cross-Border Data Transfers and the EU-US Data Privacy Tug of War

In the second episode of season six, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they review a recent trends analysis article written by Lighthouse's very own John Shaw for The Lawyer that dives into new sources of evidentiary data in employment disputes. Next, they bring on Melina Efstathiou of Eversheds Sutherland, who answers questions around cross-border data transfers and the EU-US data privacy challenges outlined below: What does the surprise decision to invalidate the EU-US Privacy Shield mean for ediscovery? How does this impact other data transfer mechanisms? What are some of the implications that Brexit could have? Are there any key tips for preparing for the future of cross-border ediscovery? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: Worldwide Data Privacy Update Blog Post: Three Steps to Tackling Data Privacy Compliance Post GDPR Blog Post: The U.S Privacy Shield Is No Longer Valid – What Does that Mean for Companies that Transfer Data from the EU into the US?
December 3, 2020
Podcast
ai/big data, podcast, ai-and-analytics,

AI, Analytics, and the Benefits of Transparency

In the final episode of season six, co-hosts Bill Mariano and Rob Hellewell review an article covering key privacy and security features in iOS 14 and highlight the top features to be aware of. The co-hosts then bring on Forbes Senior Contributor David Teich to discuss AI, analytics, and the benefits of transparency via the following questions: Why is it important to be transparent in the legal realm? How does this come into play with bias? What about AI and jury selection? How do analytics come into play as a result of providing transparency? The season ends with key takeaways from the guest speaker section. Subscribe to the show here, rate us on Apple and Stitcher, connect with us on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data Blog Post: The Sinister Six… Challenges of Working with Large Data Sets Blog Post: Advanced Analytics – The Key to Mitigating Big Data Risks Podcast Episode: Tackling Big Data Challenges Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
September 22, 2020
Podcast
microsoft, podcast, microsoft-365, ediscovery-review, chat-and-collaboration-data,

Top Microsoft 365 Features to Leverage in Your eDiscovery Program

Microsoft's agile development and rapid product enhancement allow Microsoft 365 (M365) users to stay up to date with emerging industry challenges. In the final episode of season five, co-hosts Bill Mariano and Rob Hellewell review an article on a recent ILTA>ON panel that examined how tech has created certain power dynamics in the legal space. Next, Bill and Rob bring on John Collins of Lighthouse to walk them through the top M365 features to leverage in an ediscovery program. Together they cover the latest and greatest as well as uncover answers to the following questions: How many updates and enhancements is Microsoft making? How often/fast are these coming out? What are some of the common challenges around these rapid changes? What are the top M365 features that folks in the industry should be aware of? Are there other ways and/or resources folks can use to stay up to date? The season ends with key takeaways from the guest speaker section. Subscribe to the show here, rate us on Apple and Stitcher, connect with us on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: Microsoft 365, G-Suite, and the Growing Demand for Consulting and ifying Experts Blog Post: Leveraging Microsoft 365 to Reduce Your eDiscovery Spend Blog Post: Key Compliance & Information Governance Considerations As You Adopt Microsoft Teams Podcast Episode: Microsoft Office 365 Part 1: Microsoft's Influence on the Next Evolution of eDiscovery Podcast Episode: Microsoft Office 365 Part 2: How to Leverage all the Tools in the Toolbox
September 22, 2020
Podcast
analytics, ai/big data, podcast, ai-and-analytics,

Leveraging AI and Analytics to Detect Privilege

AI and analytics are picking up momentum in the ediscovery space. Co-hosts Bill Mariano and Rob Hellewell kick episode 3 of season 5 off with another riveting Sightings of Radical Brilliance segment where they discuss transforming risks into benefits through artificial intelligence and data privacy. Bill and Rob interview CJ Mahoney of Cleary Gottlieb, who discusses some new AI and analytics practices around privilege review. In this segment, CJ uncovers the answers to the following questions: Why the uptick in the adoption of AI and analytics in the industry? Why did it take so long for folks to adopt? How can one leverage AI to detect privilege? What benefits and learnings can one apply to future work? What are some recommendations for those looking to leverage AI and analytics in similar ways? The show concludes with key takeaways from the guest speaker segment. Subscribe to Law & Candor here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data Podcast Episode: Tackling Big Data Challenges Podcast Episode: The Future is Now – AI and Analytics are Here to Stay
September 22, 2020
Podcast
information-governance, cloud migration, podcast, microsoft-365, chat-and-collaboration-data

Achieving Information Governance through a Transformative Cloud Migration

Data migrations are generally perceived as painful and disruptive experiences. However, they also provide unique opportunities to transform the way unstructured data is used and managed within an organization. In the first episode of season five, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this first episode, they dive into a recent article written by the folks at Baker Botts LLP around Federal Expedited Review in Response to COVID-19 and what that means for the industry. For the guest speaker segment of the show, Bill and Rob bring on John Holliday of Lighthouse to discuss transformative cloud migrations and how to ensure a successful outcome via the following questions: How do cloud migrations provide an opportunity to transform processes and workflows within an organization? How does information architecture come into play? What benefits can one achieve during a cloud migration? What are best practices for a successful transformative cloud migration? The episode wraps up with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: Top Three Things That Could Derail Your Cloud Migration Project Blog Post: Why Moving to the Cloud is a Legal Conversation
September 22, 2020
Podcast
analytics, ai/big data, hsr second requests, podcast, ai-and-analytics, antitrust

Facilitating a Smooth and Successful Large Review Project with Advanced Analytics

Large dataset projects are being addressed with the broadening use of advanced analytics. Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss how law firms are managing the hurdles of remote work, specifically comprehensive security measures, and driving efficiency. In this episode, Bill and Rob are joined by Adam Strayer of Paul Weiss. The three discuss facilitating successful large review projects with advanced analytics and other tools via the following questions: Why has there been an increase in the use of advanced analytics on larger matters across the industry? What are some of the key tools and strategies that drive the most value? What are the most effective and efficient workflows regarding advanced analytics? How does one combine the expertise and talents from each team involved (client, counsel, and service provider(s)) in an organized manner? In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Podcast Episode: New Efficiency Gains in TAR 2.0 and CMML Revealed Case Study: Drug Store Giant Sees Significant Data Reduction
September 22, 2020
Podcast
self-service, spectra, podcast, ediscovery-review, ai-and-analytics

Scaling Your eDiscovery Program: Self Service to Full Service

Being able to scale an ediscovery program from a self-service to a full-service model for particular matters can save both time and money, thus allowing for a more efficient ediscovery program overall. In the second episode of season five, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss Solos Health Analytics' new technology (FeverGuard), a fever detection software designed to stop the spread of COVID-19, and the PII challenges it could raise. Next, they bring on Claire Caruso of Lighthouse. Together, the three of them talk through how to scale ediscovery programs from self-service to full-service and back through the following questions: When would one need to transition from self-service to full service, and back to self-service? What are the benefits of making these moves? What are some of the key things to look out for? What are some recommendations for folks looking to optimize their structure? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: How to Bring eDiscovery In House from Seasoned Self-Service Adopters Podcast Episode: The Future of On-Demand SaaS Software for Small Matters – A Self-Service Model Story Blog Post: Overcoming Top Objections for Moving to a Self-Service eDiscovery Model Blog Post: Building a Business Case for Upgrading Your eDiscovery Self-Service Practices in Six Simple Steps Podcast Episode: Moving to the Cloud Part 1: A Corporate Journey Podcast Episode: Moving to the Cloud Part 2: A Law Firm Journey About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 22, 2020
Podcast
dsars, podcast, data-privacy, information-governance, ai-and-analytics,

Effective Strategies for Managing DSARs

Since the introduction of the GDPR, organizations with a European presence have seen a rise in the number of Data Subject Access Requests (DSARs). These matters are time-consuming and costly. In the fourth episode of season five, co-hosts Bill Mariano and Rob Hellewell discuss how Relativity is using its technology to help medical researchers comb through COVID-19 journal articles to help battle the virus. Bill and Rob then introduce their guest speaker, Nicki Woodfall of Travers Smith, who uncovers effective strategies for managing DSARs. Nicki answers the following questions in this episode: Why has there been a recent uptick in DSARs over the past few years? What are the top challenges when it comes to managing DSARs? What are key ways to overcome these common challenges? Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here. Related Links Blog Post: How GDPR and DSARs are Driving a New, Proactive Approach to eDiscovery Case Study: Penningtons Manches Cooper Takes Control of their eDiscovery Process with Lighthouse Spectra About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
analytics, ai/big data, tar/predictive coding, podcast, ai-and-analytics,

Take the Mystery out of Machine Learning: Success Stories from Real-Life Examples and How Data Scientists Impact eDiscovery

In the final episode of season three, co-hosts Bill Mariano and Rob Hellewell discuss a coronavirus tracing app and the privacy concerns that may come about from a legal perspective. Bill and Rob bring on Sara Lockman of Walmart to discuss the mysteries behind machine learning. Together they cover what machine learning is, the benefits, success stories, and more by uncovering answers to the following questions: What is machine learning? What are the benefits of machine learning? What are some challenges to be aware of when implementing machine learning? What are some best practices to put in place when using machine learning? Are there any major differences between implementing machine learning on investigations versus litigation? What are some of the practical applications you have seen used in the context of cases? How do you convince the non-believers? The season ends with key takeaways from the guest speaker section. Connect with us on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data Podcast Episode: The Future is Now – AI and Analytics are Here to Stay Podcast Episode: Tackling Big Data Challenges Podcast Episode: New Efficiency Gains in TAR 2.0 and CMML Revealed About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
managed services, podcast, ediscovery-review,

Myth Busters - The Managed Services Edition

In the second episode of season four, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss how the U.S. House plans to start voting remotely and the impacts this could have on the legal space. They then introduce the next guest speaker segment, which features Tracy Hallenberger of Baker Botts. They unravel the myths behind managed services and discuss the key benefits of this modern approach to ediscovery through the following questions: What are some of the top myths that are associated with managed services? What about the myth around lesser quality? What about the myth around it being more expensive? What about the myth around a lower level of service to lawyers? What are the key benefits of a managed services model? Our co-hosts wrap up the episode with a few key takeaways. Join in the conversation on Twitter and discover more about our speakers and the show here. Related Links Case Study: Lighthouse's Managed Service Solution Delivers More Than $13 Million in Savings over Six Years Case Study: Top Ten Global Law Firm Realizes Benefits of Lighthouse Managed Services About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
cybersecurity, podcast, data-privacy, ediscovery-review, information-governance,

Managing Cybersecurity in eDiscovery

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss how password dumping can improve your security and what that means for the future of security. In this episode, Bill and Rob are joined by Dave Kuhl of Lighthouse. The three uncover the complexities around managing cybersecurity as well as practical tips for overcoming challenges via the following questions: What are the recent complexities around managing cybersecurity? What are today's biggest threats? What are some key lessons learned around these challenges? How do you combat cybersecurity challenges? How do you get ahead of these issues before they hit? In conclusion, our co-hosts end the episode with key takeaways. To join the conversation, connect with us on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Cybersecurity in eDiscovery: Protecting Your Data from Preservation through Production Blog Post: Top Three Tips for Structuring an Effective eDiscovery Security Evaluation Podcast Episode: Cybersecurity in eDiscovery: Protecting Your Data from Preservation through Production Webinar Recording: The Risks of Cybersecurity in eDiscovery – Is Your Data Safe? About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
ediscovery process, legal ops, podcast, ediscovery-review, legal-operations

eDiscovery Program Starter Pack: Uncover Key Ways to Build an Effective & Efficient eDiscovery Program

In the fourth episode of season four, co-hosts Bill Mariano and Rob Hellewell discuss the first-ever trial by Zoom, how it all went down, and what we may expect to see looking forward. Bill and Rob then introduce their guest speaker, Zander Brandt of Lyft, who shares his experience as a two-time corporate ediscovery "first employee" and what it takes to set up an effective and efficient ediscovery program. Zander answers the following questions in this episode: What is it like being the first corporate ediscovery employee? Where do you start in a role like this? What are the key initial steps to take when coming on board? What are things to avoid? Common pitfalls? What are the recommendations and best practices for those looking to implement an efficient ediscovery program today? Our co-hosts wrap up the episode with a few key takeaways. Follow us on Twitter and discover more about our speakers and the show here. About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
emerging data sources, podcast, chat-and-collaboration-data, microsoft-365

Emerging Data Sources – Get a Handle on eDiscovery for Collaboration Tools

In the first episode of season four, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for a fourth season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this first episode, they dive into a recent story around COVID-19 and the reformation of legal culture. The guest speaker segment for episode one highlights Ellen Blanchard of T-Mobile. Ellen, Bill, and Rob discuss the growth in emerging data sources, especially with the introduction of more remote work due to COVID-19. They cover tips on how to manage, collect, process, and review collaboration data for ediscovery purposes via the following questions: What has changed over the last couple of years and even in the last few months with COVID-19? How do you get a handle on these data sources? How do you weigh that balance between risks and what teams need to use to be productive? What are some key tips to keep in mind when managing ediscovery around collaboration tools? At the end of the episode, Bill recaps key takeaways and thanks Ellen for joining. If you enjoyed the show, join in the conversation on Twitter and discover more about our speakers and the show here. Related Links Case Study: Rapid and Reliable Chat Message Review About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
June 23, 2020
Podcast
legal ops, podcast, legal-operations

Legal Operations 101: Skills for Success

Co-hosts Bill Mariano and Rob Hellewell kick episode 3 of season 4 off with another riveting Sightings of Radical Brilliance segment where they uncover how biometric data will impact ediscovery and why it is important to protect this data. Bill and Rob are accompanied by Debora Motyka Jones of Lighthouse, who shares what today's legal operations landscape looks like as well as the key competencies for those looking to succeed in the field. In this segment, Debora uncovers the answers to the following questions: What is legal operations? What are today's legal operations trends? What are some of the core competencies for departments? What are some of the skills that individuals in the field need to focus on? What are the best practices when it comes to legal operations? The show concludes with key takeaways from the guest speaker segment. Join the conversation on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Legal Operations... Is it a Fad or Here to Stay? Blog Post: Managing Your (Legal Ops) Budget with Five Simple Tips Blog Post: Budget Busters and How to Avoid Them: Budgeting Tips for Legal Operations Professionals Blog Post: Putting Together an Effective Legal Strategy Session About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
March 24, 2020
Podcast
self-service, spectra, podcast, ediscovery-review, ai-and-analytics

The Future of On-Demand SaaS Software for Small Matters – A Self-Service Model Story

Co-hosts Bill Mariano and Rob Hellewell kick things off with another riveting Sightings of Radical Brilliance segment where they uncover how real-time translation tools are breaking down barriers and what this means for the future of the legal space. Next, Bill and Rob set the stage for the final recorded guest speaker segment of the live Law & Candor show during Legaltech. For this session, they were accompanied by TracyAnn Eggen of Dignity Health and Steve Clark of Dentons, who discuss the future of on-demand SaaS software for small matters from both a corporate and a law firm perspective. In this segment, TracyAnn and Steve uncover the answers to the following questions: What triggered the move to a SaaS model? How did you get wide-scale adoption? What are some best practices for implementation? The show concludes with key takeaways from the guest speaker segment. Join the conversation on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Overcoming Top Objections for Moving to a Self-Service eDiscovery Model Blog Post: Building a Business Case for Upgrading Your eDiscovery Self-Service Practices in Six Simple Steps Blog Post: Top Four Considerations for Law Firms When Choosing a SaaS eDiscovery Solution Podcast Episode: Moving to the Cloud Part 1: A Corporate Journey Podcast Episode: Moving to the Cloud Part 2: A Law Firm Journey About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
March 24, 2020
Podcast
ai/big data, podcast, ai-and-analytics,

Tackling Big Data Challenges

Big data challenges and key ways to overcome them with AI, analytics, and data re-use are uncovered in this podcast episode. In the very first episode of season three, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this first episode, they dive into a recent story around the Astros cheating scandal and their illegal use of technology to observe and relay the signs given by the opposing catcher to the pitcher, known as sign-stealing. Before our co-hosts jump directly into the guest speaker segment of today's episode, they set the stage for the first three episodes of season 3, which are recordings from the first-ever live Law & Candor show during Legaltech this past January. All three live segments are trickled out over the next three episodes. The guest speaker segment for episode one highlights Josh Kreamer of AstraZeneca. Josh, Bill, and Rob discuss ever-evolving technology and data sources, and how it is now more challenging than ever to combat the cost and complexities associated with legal data. They tackle these key questions and Josh provides answers to the following: What are some of the biggest data challenges in the industry today? What are some key solutions to these challenges? How do you implement these solutions? How do you get buy-in from your team and get them excited to move forward with implementation? In conclusion, Rob shares top takeaways from episode one. If you enjoyed the show, join in the conversation on Twitter and discover more about our speakers and the show here. Related Links Podcast Episode: The Future is Now – AI and Analytics are Here to Stay About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
March 24, 2020
Podcast
self-service, spectra, analytics, emerging data sources, ai/big data, podcast, ediscovery-review, ai-and-analytics

eDiscovery Shark Tank - What’s Worth Your Investment in 2020?

In the final episode of season three, co-hosts Bill Mariano and Rob Hellewell discuss the New York SHIELD Act and its impact on data and security requirements within the space in the Sightings of Radical Brilliance segment. Bill and Rob shake things up a bit in the final guest speaker segment of the season by conducting an eDiscovery Shark Tank-style episode, where they bring on Chris Dahl of Lighthouse to share the most forward-thinking and innovative solutions to industry challenges that are worth folks' 2020 investment. Chris covers the following key questions: What are some of the key innovations in the legal space today? What innovations around SaaS are worth investment? How is the SaaS paradigm impacted from a global perspective? What about big data analytics? When it comes to collaboration, chat, and social, what solutions are there? What about continuous program updates, and what can folks be looking for? The season ends with key takeaways from the guest speaker section. Connect with us on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Best Practices for Embracing the SaaS eDiscovery Revolution Podcast Episode: Microsoft Office 365 Part 1: Microsoft's Influence on the Next Evolution of eDiscovery Podcast Episode: Microsoft Office 365 Part 2: How to Leverage all the Tools in the Toolbox About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
March 24, 2020
Podcast
tar/predictive coding, podcast, ai-and-analytics,

New Efficiency Gains in TAR 2.0 and CMML Revealed

In the fourth episode of season three, co-hosts Bill Mariano and Rob Hellewell converse around the innovation behind family tracking apps and how one app helped capture a criminal in this episode's Sightings of Radical Brilliance segment. Bill and Rob then introduce their guest speaker, Nordo Nissi of Goulston & Storrs, and together they dive into new and uncovered efficiency gains around TAR 2.0 and CMML. They ask Nordo the following questions: What are TAR 2.0 and CMML? What are some efficiency gains you have seen around these workflows? What are some of the hidden efficiencies you have seen? What are some techniques to get to those? In the end, our co-hosts wrap up the episode with a few key takeaways. Follow us on Twitter and discover more about our speakers and the show here. Related Links Case Study: Drug Store Giant Sees Significant Data Reduction About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
April 6, 2020
Podcast
ediscovery process, podcast, ediscovery-review,

Special Edition: The Impact of COVID-19 on the Legal Space Now & Beyond

In this special edition of Law & Candor, co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. Within this episode, they discuss the recent innovative trend of large car manufacturers switching gears on their production plans in the midst of COVID-19 to help develop ventilators and supply masks to help fight the pandemic. Related to COVID-19, the guest speaker segment of the show features Lighthouse's CEO, Brian McManus, who shares his take on the industry impacts of COVID-19. The trio cover current top company priorities, common themes being heard throughout the industry, as well as the lasting impacts of this pandemic on the legal space by answering the following key questions: What are key company priorities? What are current employee safety priorities and items to be aware of? What is the industry saying? What will be the lasting impact of COVID-19 on the legal space? In conclusion, they share top takeaways from the episode. If you enjoyed the show, join in the conversation on Twitter and discover more about our speakers and the show here. Related Links Webinar Recording: Top Tips for Staying Productive and Connected While Working from Home About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
March 24, 2020
Podcast
microsoft, gdpr, data-privacy, cross border data transfers, podcast, microsoft-365, chat-and-collaboration-data

How Microsoft 365 and GDPR Are Driving a Proactive Approach to eDiscovery Across the Globe

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss changes the legal system may face thanks to innovation brought about by AI, big data, and online courts. In this episode, Bill and Rob are joined by Mike Brown of Lighthouse. The three uncover how Microsoft 365 (M365) and GDPR are driving change for a more proactive approach to ediscovery across the globe and answer the following questions: How have GDPR and M365 changed company attitudes from a reactive to a more proactive approach to ediscovery? How does Brexit impact this? How does a company actually become GDPR compliant? How do companies prepare? How do DSARs come into play? How does M365 help solve for these concerns? In conclusion, our co-hosts end the episode with key takeaways. To join the conversation, connect with us on Twitter and discover more about our speakers and the show here. Related Links Blog Post: Why Moving to the Cloud is a Legal Conversation
March 24, 2020
Podcast
gdpr, data-privacy, information-governance, compliance and investigations, podcast

Data Privacy in a Post-GDPR World: Facing Regulators and Ensuring Compliance Through Rock-Solid Information Governance Practices

In the second episode of season three, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss how technology competence has become a priority for today's lawyers, a recent hot topic within the space as more states make technical competence for lawyers mandatory. They then introduce the next guest speaker segment from the live recording of Law & Candor during Legaltech, which features Kelly Clay from GSK. They explore how GDPR has impacted the ediscovery world, both globally and in the US, since its enactment and focus on ways to mitigate risk by uncovering answers to the following questions: What key challenges have GDPR and the rise of recent privacy laws created globally and in the US? How can information governance and compliance practices mitigate data privacy and security risks? What are best practices or key recommendations for listeners? Our co-hosts wrap up the episode with a few key takeaways. Join in the conversation on Twitter and discover more about our speakers and the show here. About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
g suite, ediscovery process, podcast, chat-and-collaboration-data, information-governance

Understanding and Creating Effective and Best eDiscovery Practices for G-Suite

In the final episode of season two, co-hosts Bill Mariano and Rob Hellewell discuss what a US approach to data protection and privacy would look like in the Sightings of Radical Brilliance segment of the show. In particular, they discuss how we are seeing these laws pop up on a state-by-state basis and whether we need a Federal law that applies to privacy. Bill and Rob are joined by Alison Shier, Client Development Manager at Lighthouse, to discuss the challenges and best practices around G-Suite data for their sixth and final episode of the season. The three cover the following questions: Is leveraging G-Suite a more common trend/theme in the space? How is Gmail data different than Outlook data? What are some of the challenges around managing this data? What are some of the downstream issues and challenges around review of this data? How do we address these challenges? How do TAR and analytics impact G-Suite data? The season ends with key takeaways from the guest speaker section. Connect with us on Twitter, discover more about our speakers and the show here, and, if you are interested in attending the live podcast show at Legaltech, email us for details. About Law & Candor Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
cross border data transfers, podcast, data-privacy, information-governance

Would a No-Deal Brexit Change How We Handle Cross-Border Collections in Europe?

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss personalized and predictive medicine and how Apple Watches have been saving lives. In addition, they dive into what these trends mean for the legal field. In this episode, Bill and Rob are joined by Josh Yildirim, Executive Director of Service Delivery, Europe, at Lighthouse. The three of them jump into the current status of Brexit and what the future of cross-border data collections could look like. Below are the questions they address: Where are we currently with Brexit, and is a no-deal exit likely? How could this potentially impact data privacy? How could this impact cross-border collections? What are some practical tips for handling the potential challenges? What will companies need to do to prepare? In conclusion, our co-hosts end the episode with key takeaways. To join the conversation, connect with us on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
privilege, podcast, ai-and-analytics, ediscovery-review

The Privilege in Leveraging Privilege Review Tools

In the second episode of season two, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss AI and how it comes into play in the game of poker, as well as what that means for the industry. Next, they introduce their guest speaker for episode two, Joanna Harrison, Solutions Architect at Lighthouse, to discuss the privileges of using privilege review tools in ediscovery. Together, they uncover the answers to the questions below: Why is privilege a priority? Why are the current methods by which privilege gets identified for review inefficient? Why is privilege review so important for folks in the ediscovery space? What kind of tools are out there to assist with privilege review? What about privilege logs? What are some key tips or tricks for setting up privilege workflows? Finally, our co-hosts wrap up the episode with a few key takeaways. Join in the conversation on Twitter and discover more about our speakers and the show here. Related Links: Blog Post: Finding the Needle Faster – Speeding up the Second Request Process; Case Study: Drug Store Giant Sees Significant Data Reduction; Case Study: When the Government Investigates. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
emerging data sources, preservation and collection, podcast, digital-forensics, chat-and-collaboration-data, digital-forensics, information-governance, microsoft-365

Data Preservation in the World of Ephemeral Data, Mobile Devices, and Other New Challenges in Forensic Technology

Co-hosts Bill Mariano and Rob Hellewell share details on the five biggest data breaches of the year so far in Sightings of Radical Brilliance and what they mean for the future of the legal space. Next, Bill and Rob bring on Jerry Bui, Executive Director of Digital Forensics at Lighthouse, to help uncover the answers to the following questions around data preservation when it comes to ephemeral and encrypted data: What do ephemeral and encryption mean? What are the different types of enterprise communication platforms? Which platform gives you the most in terms of investment from a legal and compliance perspective? What about data privacy on these platforms? How is personal data treated? What should IT and Legal departments keep in mind when it comes to platforms that are not encrypted? The show concludes with key takeaways from the guest speaker segment. Join the conversation on Twitter and discover more about our speakers and the show here. Related Links: Podcast: Digital Forensics Future. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
cybersecurity, preservation and collection, processing, podcast, data-privacy, information-governance, ediscovery-review

Cybersecurity in eDiscovery: Protecting Your Data from Preservation through Production

In the fourth episode of season two, co-hosts Bill Mariano and Rob Hellewell begin with Sightings of Radical Brilliance and the recent trend of people moving away from email and toward text and chat tools. They dive into the diverse challenges and risks associated with this shift. Next, Bill and Rob introduce their guest speaker, David Kessler, Head of Data and Information Risk, United States, at Norton Rose Fulbright US LLP, to discuss cybersecurity challenges across the various stages of the EDRM. In this episode, they ask David the following key questions: What does a high-level overview of data security look like today? Who does this affect? Where are the vulnerabilities within the EDRM? What are some key solutions for overcoming the top challenges? In the end, our co-hosts wrap up with a few key takeaways. Follow us on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
December 4, 2019
Podcast
ediscovery process, podcast, legal-operations, information-governance

Bridge the Gap: Innovative Ways to Enable eDiscovery Collaboration Between Legal and IT

In the very first episode of season two, co-hosts Bill Mariano and Rob Hellewell introduce themselves and welcome listeners back for another riveting season of Law & Candor, the podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob begin with Sightings of Radical Brilliance, the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this first episode, they dive into a recent story about how legal technology helped capture the BTK killer and recap the key legal mistakes of this notorious serial killer. In the guest speaker segment of the show, our co-hosts are joined by Craig Shaver, Director, eDiscovery Program, Hilton Worldwide, who helps them uncover the answers to the following questions around cross-departmental collaboration: What are the current challenges when IT and Legal are out of sync? Why is it critical for these two groups to be in sync? What are some of the risks of these groups being out of alignment? Who is the best person to lead the effort of aligning Legal and IT? Are there other departments within an organization that need to be at the table as well? What are the greatest challenges you’ve seen in achieving better alignment? What are some new ways these two groups can ensure they are in alignment? What are the benefits to an organization of this alignment? In conclusion, our speakers share top takeaways. If you enjoyed the show, join in the conversation on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 20, 2019
Podcast
ediscovery-and-review

The Truth Behind Data Reuse

In the second episode of season one, co-hosts Bill Mariano and Rob Hellewell kick off the show with Sightings of Radical Brilliance. In this episode, they discuss the company Big Moon Power and some of the exciting things it is doing to harness the power of ocean tides to generate electricity. Next, they introduce their guest Erika Namnath, Executive Director of Advisory Services at Lighthouse, to discuss the truth behind data reuse, including how data repositories can be set up to reuse data for future matters. Together, they uncover the answers to the questions below: What is data reuse? What are the different types of data reuse? How would you reuse data around trade secrets and IP? What about privilege, PII, and PHI? What are some of the current limitations companies face when trying to leverage data reuse? What about objectively non-responsive documents, and how do you handle those types of work product for data reuse? What are the key benefits of data reuse? Finally, our co-hosts wrap up the episode with a few key takeaways. Join in the conversation on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 16, 2019
Podcast
ai-and-analytics

The Future is Now – AI and Analytics are Here to Stay

In the début episode, co-hosts Bill Mariano and Rob Hellewell introduce themselves and the premise of Law & Candor, a podcast wholly devoted to pursuing the legal technology revolution. To kick things off, Bill and Rob introduce the first segment of the podcast, Sightings of Radical Brilliance, which, as the name implies, is the part of the show where they discuss the latest news of noteworthy innovation and acts of sheer genius. In this episode, they dive into a recent story around Elon Musk’s brain-to-computer interface and what it means for the legal space. In the guest speaker segment, our co-hosts are joined by Karl Sobylak, Senior Product Manager at Lighthouse, to uncover the answers to the following questions around AI and analytics: Why do data science and analytics seem to be making great progress in so many industries aside from the law? How will AI and analytics be incorporated into the day-to-day life of a lawyer? Is the fear that AI and analytics will replace lawyers justified? Is the potential for AI and machine learning more limited in the law than in other industries? What is the hardest part about applying data science to the law, and how would this work for a corporate legal department? In conclusion, our speakers share three top takeaways and preview the next episode. Enjoy the show? Join in the conversation on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 20, 2019
Podcast
microsoft-365, information-governance

Moving to the Cloud: A Law Firm Journey

In the final episode of season one, co-hosts Bill Mariano and Rob Hellewell share their thoughts on AI-enabled deep fakes in Sightings of Radical Brilliance. In particular, they chat about the implications and dangers of this technology and what it means for the legal space and beyond. Bill and Rob then bring on David Arlington, Special Counsel at Baker Botts, to discuss the move to the Cloud from a law firm’s perspective. Bill and Rob cover the following questions with David in the season finale: Why did the firm decide to move to a cloud-based service? Did you get any pushback or fear around moving to the Cloud, and, if so, how did you handle it? How long did it take, from the initial decision to getting up and running on the Cloud? What were some of the unanticipated surprises that popped up during this process? What kind of advantages have you seen so far? The season ends with key takeaways from the guest speaker section and a reminder to watch for the release of season two in December. Connect with us on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 20, 2019
Podcast
microsoft-365, information-governance, chat-and-collaboration-data

Microsoft Office 365 Part 2: How to Leverage all the Tools in the Toolbox

In the fourth episode of season one, co-hosts Bill Mariano and Rob Hellewell begin with Sightings of Radical Brilliance around the dawn of realistic face masks, as well as retina scans and fingerprints for authentication, and the security and legal concerns that hide beneath. Next, Bill and Rob introduce guest Chris Hurlebaus, eDiscovery Architect at Lighthouse, to discuss the tools that are available in Office 365 and how to leverage them. The speakers cover the following questions in this episode: What do I need to know about Office 365 licensing when having an ediscovery conversation? What Office 365 tools are currently available to users? What are the different options and subscription levels? What are the advanced features of Office 365? What about reporting of ediscovery activities in Office 365? What is Microsoft looking to do next with this technology? In the end, our co-hosts wrap up with a few key takeaways. Follow us on Twitter and discover more about our speakers and the show here. Related Links: Case Study: The Benefits of an Office 365 Workshop. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 20, 2019
Podcast
information-governance, microsoft-365

Moving to the Cloud: A Corporate Journey

Law & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss Rob Robinson’s recent article on the eras of ediscovery and where the industry is going next. In today’s episode, Bill and Rob are joined by Alex Shusterman, eDiscovery Manager at Accenture. The three discuss key components for corporate legal teams to keep in mind when considering the move to the Cloud, as well as the benefits. Below are the questions they address: What are the key aspects corporate legal teams should keep in mind when considering the move to the Cloud? Why is it critical for Legal and IT to collaborate on these types of moves? What should corporate legal teams avoid when moving to the Cloud? What are the lessons learned from moving to the Cloud? What are some of the benefits of moving to the Cloud? In conclusion, our co-hosts end the episode with key takeaways. To join the conversation, connect with us on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
September 20, 2019
Podcast
microsoft-365, information-governance

Microsoft Office 365 Part 1: Microsoft’s Influence on the Next Evolution of eDiscovery

Co-hosts Bill Mariano and Rob Hellewell introduce the issues around ephemeral data in Sightings of Radical Brilliance. In particular, they look at the huge growth in Snapchat users and what the continued growth of ephemeral data means for the legal space. Next, Bill and Rob bring on Mo Ramsey, General Manager of Global Advisory Services at Lighthouse, to help uncover the answers to the following questions around Office 365 in the ediscovery space: What does Microsoft’s evolution of ediscovery capabilities in Office 365 look like? What is Microsoft doing within ediscovery, and how does it want to differentiate? What specific actions can advanced users perform in Office 365? What should teams consider when evaluating Office 365? The show concludes with key takeaways from the guest speaker segment. Join the conversation on Twitter and discover more about our speakers and the show here. About Law & Candor: Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
