Managing AI Risks: Q&A with Legal Experts from Davis+Gilbert

May 10, 2023

Global Gateway Advisors sat down with Michael C. Lasky and Samantha Rothaus from Davis+Gilbert to discuss how best to manage risks around the use of generative AI.

Michael is a partner and Chair of the firm’s Public Relations law practice, and Samantha is a partner in the firm’s Advertising and Marketing law practice group.

They counseled the PR Council in drafting its new guidelines on generative AI, and created their own Marketing Law Clearance Checklist for content creators.


Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?

A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off. Using pre-cleared material, like licensed images, is a good practice for reducing risk, as are tools like plagiarism detectors and reverse-image search. For lower-stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.

Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?

A: Greater emphasis will need to be placed on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions, or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, this will also help viewers assess what degree of credibility to give the material.

Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable: the AI developer or the individual?

A: This is an open question. At the moment, we are not seeing users targeted in litigation, only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may bear contributory liability for producing infringing content. We suspect that if this kind of litigation does arise, it will likely be against large companies as the "user" rather than individual people.

Q: How far away are AI regulations? Do we need a set of international rules?

A: We don’t see this happening anytime soon. Just look at privacy: it is an area with a huge patchwork of rules across different jurisdictions, and in the U.S., pushes to consolidate these laws into federal legislation have been unsuccessful for several years. We think the same will happen with AI. A set of globally adopted norms will be needed, but it remains to be seen whether full-on legislation is necessary or even realistically going to happen. And these norms may not emerge for several years, as it will take time for people to understand how these services are best used.

Q: Are you using generative AI for your work?

A: While some AI tools have emerged for the practice of law, many of them are distinct from the generative AI technologies being introduced in the communications and marketing space. We don’t think any of our clients would want their substantive legal advice to be AI generated. As for us, we have not personally begun using any AI tools in our daily work, though we’ve played with them to better understand how our clients are using them.
