Insights


How to assess AI risks + implement workplace best practices

May 11, 2023

The early fanfare over generative AI has largely given way to pragmatic concerns about risks and the need for standards and best practices in the workplace.

In this issue, we explore the top three risks that every organization needs to consider, and ask legal experts to weigh in.

  • Several hiccups by early adopters may have prompted companies to rethink their strategy – in a recent KPMG survey, 60% of executives at major U.S. companies said they are still a year or two away from adopting generative AI.
  • While cost and the lack of a clear business case were cited as the two biggest barriers to adoption, cybersecurity and data privacy ranked as executives' top concerns.

Check out our website for the rest of our Exploring AI in Communications series, covering how to define your approach to AI and an overview of the AI landscape for communications.


1. Key risk areas to consider

Copyright risks: Pending lawsuits and a growing number of media companies demanding payments from AI firms have put a spotlight on the value of publicly available data – some of which are copyrighted.

  • It’s impossible to be aware of all the copyrighted material on the internet, so a user cannot know how similar an AI-generated output is to an original work.
  • Unwittingly publishing such material not only exposes the user and the company to infringement claims, but could also damage a hard-earned reputation. 
  • For PR professionals: What is your action plan for potential misuse that could impact your company and/or clients?

Data privacy risks: OpenAI has responded to data privacy concerns by rolling out an incognito mode in ChatGPT that allows users to turn off chat history.

  • Data will still be kept for 30 days to monitor for abusive behavior, however, and the onus is on the user to turn chat history off.
  • Also, many other generative AI systems use third-party contractors to review input/output data for safety, which means that sharing confidential data with these tools may itself breach confidentiality.
  • Companies can license an AI model for internal use, so they can monitor what employees type in as prompts and protect the information shared (see the sketch after this list). For more peace of mind, Microsoft is reportedly testing a private GPT alternative.
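To make the prompt-monitoring idea concrete, below is a minimal Python sketch of a hypothetical company-side prompt gateway: it redacts likely-confidential strings and writes an audit log before a prompt is forwarded to a licensed model. The redaction patterns, logger setup, and call_licensed_model stub are all illustrative assumptions, not any vendor's actual API.

```python
import logging
import re

# Hypothetical illustration: a company-side "prompt gateway" that redacts
# likely-confidential strings and keeps an audit log before a prompt is
# forwarded to a licensed AI model. Patterns and policy here are assumptions.

AUDIT_LOG = logging.getLogger("prompt_audit")
logging.basicConfig(level=logging.INFO)

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace strings matching known sensitive patterns with placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def submit_prompt(user: str, prompt: str) -> str:
    """Redact, audit, and forward an employee's prompt to the licensed model."""
    cleaned = redact(prompt)
    AUDIT_LOG.info("user=%s prompt=%r", user, cleaned)  # monitoring for review
    return call_licensed_model(cleaned)

def call_licensed_model(prompt: str) -> str:
    # Stub: in practice this would call whatever model the company licenses.
    return f"(model response to: {prompt})"

if __name__ == "__main__":
    print(submit_prompt("jdoe", "Draft a memo; my email is jane@example.com"))
```

The point of the sketch is the design, not the specific patterns: routing all prompts through one internal chokepoint is what makes both monitoring and redaction enforceable.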

Misinformation and bias risks: This is arguably the most insidious risk when it comes to generative AI. Whether the fabrication stems from AI hallucinations or intentional human acts, AI makes spreading misinformation that much easier to pull off – and much harder to detect.

  • Deepfakes can be used to depict a company executive in a compromising situation, for instance, or to forge images to file fraudulent insurance claims.
  • Meanwhile, using AI to screen candidates' resumes may result in discriminatory hiring practices if biases are left unchecked, potentially exposing the company to litigation and penalties.

Deepfake detection technology is expected to lag behind deepfake creation because of prohibitive costs and a lack of legal or financial incentives. For now, strengthening our media literacy may be our best defense.


2. Ask the legal experts: Assessing + mitigating AI risk

We connected with Davis+Gilbert to discuss how to mitigate risks around the use of generative AI. Michael C. Lasky is a partner and Chair of the firm’s Public Relations law practice, and Samantha Rothaus is a partner in the firm’s Advertising and Marketing law practice group.

Below is an excerpt of the conversation. Please click here for the full interview.

Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?

A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off (a minimal sketch of such a process follows below). Using pre-cleared material like licensed images is a good practice to reduce risk, as are tools like plagiarism detectors or reverse-image searches. For lower-stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.
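To illustrate the written clearance process described above, here is a minimal Python sketch of a pre-publication checklist with a designated approver. The specific checks, names, and roles are illustrative assumptions, not legal guidance.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of an internal clearance process: each
# AI-assisted asset must pass a checklist and receive a designated
# approver's sign-off before publication. Check names are assumptions.

@dataclass
class ClearanceRecord:
    asset: str
    checks: dict = field(default_factory=lambda: {
        "plagiarism_scan": False,          # e.g., run a plagiarism detector
        "reverse_image_search": False,     # for AI-generated images
        "substantial_edits": False,        # unique changes to the AI output
        "pre_cleared_assets_only": False,  # licensed images, stock, etc.
    })
    approved_by: str | None = None

    def sign_off(self, approver: str) -> None:
        """Refuse final approval until every check has been completed."""
        unfinished = [name for name, done in self.checks.items() if not done]
        if unfinished:
            raise ValueError(f"Cannot approve {self.asset}; pending: {unfinished}")
        self.approved_by = approver

record = ClearanceRecord("Q3 blog hero image")
for check in record.checks:
    record.checks[check] = True  # each reviewer marks their check complete
record.sign_off("designated.approver@example.com")
print(f"{record.asset} cleared by {record.approved_by}")
```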

Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?

A: There will need to be a greater emphasis on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions, or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, it will also help viewers assess how much credibility to give the material.

Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable, the AI developer or the individual?

A: This is an open question. At the moment, we are not seeing users being targeted in litigation – only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may have contributory liability for producing infringing content. We suspect that if we do see this kind of litigation arise, it will likely be against large companies as the “user” rather than individual people.


3. Other AI trends + hot topics

  • At its annual Think conference, IBM announced WatsonX, a platform “for companies looking to introduce AI into their business model.”
  • Twitter and Reddit will start charging for access to their data. Elon Musk reportedly cut off OpenAI’s access to Twitter’s data after deciding that the $2 million per year licensing fee wasn’t enough.
  • In recent earnings calls, Alphabet, Microsoft, Amazon and Meta all emphasized their intent to make hefty investments in AI. In contrast, Apple’s tone was more measured. Separately, PricewaterhouseCoopers plans to invest $1 billion in generative AI to automate aspects of its tax, audit and consulting services.
  • Google merged its two main AI research units, DeepMind and Brain, to gear up for an intense AI battle.
  • ChatGPT is back in Italy after OpenAI met most of the government’s demands, including creating the incognito mode and providing more details on how the tool processes information.
  • IBM’s CEO said hiring for back-office functions such as HR will be paused, and that he can “easily see” 30% of non-customer-facing roles being replaced by AI over the next five years.
  • AI developers would be required to disclose copyrighted material used in training their tools under a new draft of EU legislation. Separately, the G7 nations called for the creation of global standards for assessing AI risks to promote prudent development.
  • Vice President Kamala Harris and other White House leaders told the CEOs of Alphabet, Microsoft, OpenAI and Anthropic that they have a “moral” obligation to keep their products safe, in a first White House meeting with AI leaders.

Share your feedback with us at insights@gga.nyc.