Artificial intelligence (AI) has proliferated across industries, in popular culture, and in the legal space. But what does AI really mean? One way to look at it is as technology that lets lawyers and organizations efficiently manage massive quantities of data that no one has been able to analyze and understand before.
While AI tools are no longer brand new, they’re still evolving, and so is the industry’s comfort and trust in them. To look deeper into the available technology and how lawyers can use it, Lighthouse hosted a panel featuring experts Mark Noel, Director of Advanced Client Data Solutions at Hogan Lovells; Sam Sessler, Assistant Director of Global eDiscovery Services at Norton Rose Fulbright; Bradley Johnston, Senior Counsel, eDiscovery at Cardinal Health; and Paige Hunt, Lighthouse’s VP of Global Discovery Solutions.
Some of the key themes and ideas that emerged from the discussion include:
Meeting client expectations
Understanding attorneys’ duty of competence
Identifying critical factors in choosing an AI tool
Assessing AI’s impact on process and strategy
The future of AI in the legal industry
The term “AI” can be misleading. It’s important to recognize that, right now, it’s an umbrella term encompassing many different techniques. The most common form of AI in the legal space is machine learning, and the earliest tools were document review technologies in the eDiscovery space. Other forms of AI include deep learning, continuous active learning (CAL), neural networks, and natural language processing (NLP).
While eDiscovery was a proving ground for these solutions, the legal industry now sees more prebuilt and portable algorithms used in a wide range of use cases, including data privacy, cyber security, and internal investigations.
Clients’ Expectations and Lawyers’ Duties
The broad adoption of AI technologies has been slow, which comes as no surprise to the legal industry. Lawyers tend to be wary of change, particularly when it comes in the form of techniques that can be difficult to understand. But our panel of experts agreed that barriers to entry were less of an issue at this point, and now many lawyers and clients expect to use AI.
Lawyers and clients have widely adopted AI techniques in eDiscovery and other privacy and security matters. However, the emphasis from clients is less about the technology and more about efficiency. They want their law firms and vendors to provide as much value as possible for their budgets.
Another client expectation is reducing risk to the greatest extent possible. For example, many AI technologies offer the consistency and accuracy needed to reduce the risk of inadvertent disclosures.
Mingled with client expectations is a lawyer’s duty of competence, which requires familiarity with relevant technology. We aren’t yet at the point in the legal industry where lawyers violate their duty of competence if they don’t use AI tools. However, the technology may mature to the point where it becomes an ethical issue for lawyers not to use AI.
Choosing the Right AI Tool
Decide Based on the Search Task
There’s always the question of which AI technology to deploy and when. While less experienced lawyers might assume the right tool depends on the practice area, the panelists all focused on the search task. Many of the same search tasks occur across practice areas and enterprises.
Lawyers should choose an AI technology that will give them the information they need. For example, technology-assisted review (TAR) is well-suited to classifying documents, whereas clustering is helpful for exploration.
Focus More on Features
Teams should consider the various options’ features and insights when purchasing AI for eDiscovery. They also must consider the training protocol, process, and workflow. At the end of the day, the results must be repeatable and defensible. Several solutions may be suitable as long as the team can apply a scientific approach to the process and perform early data assessment. Additional factors include connectivity with the organization’s other technology and cost.
The process and results matter most. Lawyers are better off looking at the system as a whole and its features in deciding which AI tech to deploy instead of focusing on the algorithm itself.
Although not strictly necessary, it can be helpful to choose a solution the team can apply to multiple problems and tasks. Some tools are more flexible than others, so reuse is something to consider.
Some Use Cases Allow for Experimentation
There’s also the choice between a well-established solution versus a lesser-known technology. Again, defensibility may push a team toward a well-known and respected tool. However, teams can take calculated risks with newer technologies when dealing with exploratory and internal tasks.
A Custom Solution Isn’t Necessary
The participants noted the rise in premade, portable AI solutions more than once. Rarely will it benefit a team to create a custom AI solution from scratch. There’s no need to reinvent the wheel. Instead, lawyers should always try an off-the-shelf system first, even if it requires fine-tuning or adjustments.
AI’s Impact on Process
The process and workflow are critical no matter which solution a team chooses. Whether for eDiscovery, an internal investigation, or a cyber security incident, lawyers need accurate and defensible results.
Some AI tools allow teams to track and document the process better than others. However, whatever the tool’s features, the lawyers must prioritize documentation. It’s up to them to thoughtfully train the chosen system, create a defensible workflow, and log their progress.
As the adage goes: garbage in, garbage out. The effort and information the team inputs into the AI tool will influence the validity of the results. The tool itself may slightly influence the team’s approach. However, any approach should flow from a scientific process and evidence-based decisions.
Consistency is relevant, too. Organizations can use AI tools to improve the accuracy and uniformity of identifying, classifying, and redacting information. A well-trained AI tool can offer better results than people who may be inconsistently trained, biased, or distracted.
Another factor is reusing AI technology for multiple search tasks. Depending on the tool, an organization can use it repeatedly, or it can carry the results from one project to the next. That may look like knowing in advance which documents are privileged, maintaining an ongoing redaction log, or using a set of documents to better train the algorithm for the next task.
The Future of AI
The panelists wrapped up the webinar by discussing what they expect for the future of AI in the legal space. They agreed that reusing work product and the concept of data lakes will become even greater areas of focus. Reuse can significantly impact tasks that have traditionally carried a heavy cost burden, such as privilege reviews and logs, sensitive data identification, and data breach and cyber incident responses.
Another likelihood is AI technology expanding to more use cases. While lawyers tend to use these tools for similar search tasks, the technology itself has potential for many other legal matters, both adversarial and transactional.
For 25 years, Lighthouse has provided innovative software and services to manage the increasingly complex landscape of enterprise data for compliance and legal teams. Lighthouse leads by developing proprietary technology that integrates with industry-leading third-party software, automating workflows, and creating an easy-to-use, end-to-end platform. Lighthouse also delivers unique proprietary applications and advisory services that are highly valuable for large, complex matters, and a new SaaS platform designed for in-house teams. Whether reacting to incidents like litigation or governmental investigations, or designing programs to proactively minimize the potential for future incidents, Lighthouse partners with multinational industry leaders, top global law firms, and the world's leading software provider as a channel partner.