How lawyers use AI

From: POLITICO's Digital Future Daily - Tuesday, May 23, 2023, 08:14 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson

A ChatGPT display. | Getty Images

Turns out, courtroom battles over AI aren’t the only way for lawyers to capitalize on the generative AI gold rush.

Because of what they do well — reading quickly, summarizing, answering questions — the new generative AI platforms seem a natural fit for certain kinds of legal work: reading and summarizing contracts, drafting boilerplate legal documents, even answering questions based on a body of text.

This has led to a lot of anxiety about robots coming for white-collar jobs. It also leaves big companies thinking hard about what they can use these platforms for, and how soon.

Earlier this year, an AI startup called Harvey partnered with law firm Allen & Overy to automate some of the firm’s legal document drafting and research tasks.

Harvey’s backers include Sequoia Capital and an investment fund managed by OpenAI (yes, that one), and its raison d’être is creating custom generative AI software for law firms. Allen & Overy is far from the only firm to use Harvey, but it’s certainly got one of the highest profiles in the world of big law.

So what will AI in a big law office really look like? I sat down with David Wakeling — who is leading A&O’s AI rollout as head of their Markets Innovation Group — to get the lowdown on how adoption is going, and how they are mitigating some of the risks. The biggest and best language models, like GPT-4, are also known for confidently making stuff up when they don’t know the answer — “hallucinations” is the technical term — and that could be quite dangerous in a business where law firms need rock-solid information and arguments.

“It’s not a chatbot,” Wakeling clarified at the outset. “We're using it as a single question and answer,” he said. So the kind of emotional back-and-forth that the New York Times’ Kevin Roose had with ChatGPT is not what A&O’s AI tool is built for.

To Wakeling, the most interesting thing about fine-tuning a large language model for lawyers to use “is not so much adding huge amounts of data,” he said, but “more about suppressing the areas of GPT-4 that are not very helpful for the business use-case.”

And those use-cases are “boring,” Wakeling said, like “productivity gain on drafting a provision for a loan agreement or something. It’s not really about individuals. It's not reflective of societal trends. It's simple.”

So how does it work? Well, the base GPT-4 large language model that Harvey provides is “tuned for law,” Wakeling said.

And unlike Google’s Bard or the new GPT-4-powered Bing search, where the AI tool combs through the open internet for answers to user questions, the Harvey AI model looks through a restricted dataset each time it needs to find an answer. “So you're keeping the base model, but forcing it to answer with respect to a book on my shelf,” Wakeling said. Still, “we work on the assumption that it hallucinates and has errors,” he said. Wakeling compared Harvey’s GPT-4 model to “a very confident, extremely articulate 12-year-old who doesn't know what it doesn't know.”
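POLITICO’s reporting doesn’t include any of Harvey’s code, and Wakeling didn’t describe the plumbing, but the pattern he sketches — keeping a general-purpose base model while forcing it to answer only with respect to a fixed set of documents — resembles what engineers commonly call retrieval-augmented generation. Here is a deliberately simple sketch of that idea in Python; the ask_llm function, the word-overlap retrieval, and the prompt wording are illustrative stand-ins under that assumption, not Harvey’s actual stack:

# A minimal sketch, not Harvey's implementation: restrict a general model to a
# closed set of documents, and tell it to refuse when the answer isn't there.
# ask_llm() is a hypothetical placeholder for whatever chat-completion API a
# firm actually uses; real systems rank passages with vector embeddings, not
# the crude word-overlap score used here.

from typing import List


def relevance(passage: str, question: str) -> float:
    """Fraction of the question's words that appear in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)


def retrieve(passages: List[str], question: str, k: int = 3) -> List[str]:
    """Pick the k passages from the restricted dataset most relevant to the question."""
    return sorted(passages, key=lambda p: relevance(p, question), reverse=True)[:k]


def build_prompt(context: List[str], question: str) -> str:
    """Force the model to answer only from the supplied context."""
    joined = "\n\n".join(context)
    return (
        "Answer the question using ONLY the context below. If the context does not "
        "contain the answer, reply 'Not found in the provided documents.'\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted language model."""
    raise NotImplementedError("Plug in your model provider's API call here.")


def answer_from_shelf(passages: List[str], question: str) -> str:
    """'A book on my shelf': answer strictly from the firm's own documents."""
    return ask_llm(build_prompt(retrieve(passages, question), question))

In a setup like this, the restriction lives in the prompt and in the narrow context handed to the model rather than in the model itself — which is why Wakeling’s team still assumes the output can hallucinate and treats it like work from that very confident 12-year-old.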

Down the line, Wakeling thinks that firms that are able to leverage AI to shore up “the skills and knowledge of the institution” will stand a better chance at survival than those that don’t adopt AI and let institutional knowledge stay only in the brains of humans.

And how about the impact on jobs? Right now, Wakeling said the AI service that he sees on his desktop would not result in job cuts and that the resulting productivity gain would be “at all levels — at the most junior and senior.” But he also added that he “could not crystal ball gaze” about how AI usage would affect A&O’s workforce in the coming years, given how quickly the technology is evolving.

While everyone can agree that AI will be a game-changer for businesses (and the individuals working for them), not everyone is so sanguine about the kind of approach A&O is taking. “It's a shame that the benefits of AI are going to big companies and law firms rather than to consumers and customers. And it's not clear whether the savings will be passed on,” said Joshua Browder, the CEO of DoNotPay, which provides low-cost legal advice via an AI-powered chatbot.

Wakeling said the use of Harvey is not currently reflected in what they bill their clients — but that might change later on. “In the future we imagine AI being used to support services at cheaper fixed fee rates than achievable in the past for e.g. due diligence,” he said over email.

“It's not going to change billing in the short term,” said Browder, “especially because some of these AI partnerships are exclusive,” meaning big law firms have no real incentive to share the cost-savings with their clients. But on a longer timescale, Browder has faith in the “tens of thousands of lawyers” trying to apply AI tools to their individual market areas to correct the balance in their clients’ favor.

Someday, A&O’s clients might see fewer billable hours on their invoice as a result of the great Harvey AI experiment, even though AI isn’t actually going to practice law anytime soon.

“Eventually, the market is efficient,” Browder said.

 


 
 
relieving the tension

France’s main privacy regulator is making it its business to resolve the “tension” between AI developers and the European Union’s General Data Protection Regulation, the EU’s flagship data privacy law.

As POLITICO’s Laura Kayali reported for Pros this morning, France’s CNIL (or in English, its National Commission on Informatics and Liberty) has appointed a new head of its AI unit, Félicien Vallet, who will seek to enact the generative AI “action plan” that France promised earlier this month.

The “tension” to which Vallet refers has to do with the voluminous amounts of data that AI developers must collect to train their powerful models. Last month Italy became the first country to temporarily ban ChatGPT over potential GDPR violations in collecting that data, although it lifted the ban after the software’s developer, OpenAI, agreed to share more information about how it collects and uses data, add a form that allows users to opt out of their data being used to train AI models, and verify users’ ages.

Vallet described an “issue of articulation” between the EU’s planned AI Act and the GDPR that he would seek to resolve in France — in other words, that there are still some kinks that need to be worked out in bringing the two pieces of legislation to harmony — and said “the CNIL’s position is not to slow down innovation but to accompany it.” The final version of the AI Act is expected in June. — Derek Robertson

white house on ai

The Biden administration announced this afternoon a handful of new guidance documents on AI, building on last year’s Blueprint for an AI Bill of Rights and the flurry of discussion around the technology in Washington in recent weeks.

In a statement, the White House said the Office of Science and Technology Policy will release a new “National AI R&D Strategic Plan” aimed at encouraging AI development that “promotes responsible American innovation, serves the public good, protects people’s rights and safety, and upholds democratic values,” building on a Trump-era document not updated since 2019.

The OSTP will also request public comment on “national priorities for mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives,” and the Department of Education is planning a report on “Artificial Intelligence and the Future of Teaching and Learning.” The White House also held a “listening session” today featuring “workers representing diverse sectors of the economy, including call centers, trucking, warehousing, health care, and gig work, as well as policy experts, researchers, and policymakers,” all presumably with a stake in managing the looming effects of AI on the American economy. — Derek Robertson


the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

 

