MEDIA NEWS + MOVES
Insights
May 26, 2023
1. Section 230 remains untouched
The big picture: The Supreme Court left intact Section 230 of the Communications Decency Act – a liability shield for internet companies over third-party content – and punted the issue back to Congress.
What’s happening: Referred to as “the internet’s most important law,” Section 230 has faced scrutiny in connection with tech companies’ content moderation decisions.
- Section 230 was written more than 25 years ago, in the internet’s infancy. Opponents argue that it is outdated. Defenders say it has allowed the internet to thrive, and that the way it is written enables innovation and protects free speech.
- The cases, Gonzalez v. Google and Twitter v. Taamneh, challenged whether tech companies should be held liable for terrorism-related material posted on their platforms.
- Last week, the court declined to take up the Section 230 questions in those cases, leaving intact a lower court ruling in Google’s favor.
Why it matters: Section 230 has shaped the internet as we know it. Broader reform could lead to deeper regulation of what users post online and how they interact with one another.
What comes next: The court is still deciding whether to hear cases challenging social media content moderation laws in Texas and Florida.
2. What’s trending?
- Time-strapped journalists are increasingly looking for data and expert sources to inform their reporting. A recent Cision report found that 68% want to see original research and trend data in pitches. (Axios)
- The number of news and information websites generated by AI – and operating with little to no human oversight – more than doubled in just two weeks. In a special report, NewsGuard identified 125 websites that are entirely or mostly generated by AI tools. (The New York Times)
- Elon Musk has been dubbed the new “king of conservative media,” and is positioning Twitter as the center of gravity for Republicans ahead of the 2024 election. High-profile, right-wing personalities, including fired Fox News host Tucker Carlson, have said they will bring content exclusively to the platform. Florida Gov. Ron DeSantis announced his presidential bid this week in a Twitter Spaces chat, but the event was derailed by technical glitches. (CNN)
- Dotdash Meredith, one of the largest internet publishers in the country, is debuting a new ad tool, D/Cipher, that doesn’t rely on internet tracking cookies or first-party data. Advertisers can target users across Dotdash Meredith’s digital platforms based on the intent and interests a user is likely to engage with. (Axios)
3. Journalist moves
Business [Reporting on business from newspapers, magazines and online sources]
- Naomi Shavin – senior podcast producer, Bloomberg News; previous: producer, Axios (Cision)
- Kalley Huang – reporter, The Information; previous: reporting fellow, New York Times (Cision)
- John Schafer – markets reporter, Yahoo Finance; promotion (Cision)
- Elisabeth Buchwald – economy explainer reporter, CNN; previous: personal finance reporter, USA Today (Cision)
- Jonathan Tully – digital content editor, Human Resource Executive; previous: editor, Mashable (Cision)
- Megan Leonhardt – senior economics writer, Barron’s; previous: writer, Fortune (Twitter)
- Chelsea Emery – executive editor, Staffing Industry Analysts; promotion (Cision)
Technology [Covering startups, advanced technologies and the intersection of tech/business]
- Michelle Ma – clean tech reporter, Bloomberg; previous: freelance (Talking Biz News)
- Brian Kahn – editor, climate tech, Bloomberg; previous: climate editor, Protocol (Talking Biz News)
More Moves of Interest [Additional updates from notable journalists + editors]
- Sam Jacobs – editor-in-chief, TIME; promotion (Cision)
- Adam Levy – executive producer + news editor, BBC News; previous: showrunner, CNN+ (Cision)
- Mary Bruce – White House correspondent, ABC News; promotion (Muck Rack)
- Lauren N. Williams – deputy editor, race + equity, The Guardian; previous: senior editor, The Atlantic (Talking Biz News)
- David Gelles – managing correspondent, Climate Forward Newsletter, The New York Times; promotion (The New York Times)
Share your feedback with us at insights@gga.nyc.
How to assess AI risks + implement workplace best practices
Insights
May 11, 2023
The early fanfare over generative AI has largely given way to pragmatic concerns over risks and the need for standards and best practices in the workplace.
In this issue, we explore the top three risks that every organization needs to consider, and ask legal experts to weigh in.
- Several hiccups by early adopters may have prompted companies to rethink their strategy – in a recent KPMG survey, 60% of executives at major U.S. companies said that they are still a year or two away from AI adoption.
- While cost and the lack of a clear business case were cited as the two biggest barriers, cybersecurity and data privacy ranked as their top concerns.
Check out our website for the rest of our Exploring AI in Communications series on defining your approach to AI and an overview of the AI landscape for communications.
1. Key risk areas to consider
Copyright risks: Pending lawsuits and a growing number of media companies demanding payments from AI firms have put a spotlight on the value of publicly available data – some of which is copyrighted.
- It’s impossible to be aware of all the copyrighted material on the internet, and a user cannot know how similar an AI-generated output is to an existing work (a rough similarity check is sketched after this list).
- Unwittingly publishing such material not only exposes the user and the company to infringement claims, but could also damage a hard-earned reputation.
- For PR professionals: What is your action plan for potential misuse that could impact your company and/or clients?
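As a first-pass screen before formal clearance, a crude text-overlap check can flag drafts that track a known source too closely. Below is a minimal, illustrative sketch using only Python’s standard library – the reference texts, threshold and function names are our own assumptions, and it is no substitute for a dedicated plagiarism tool or legal review.

```python
# Illustrative only: a crude first-pass similarity screen for AI-generated drafts.
# The threshold and reference texts are hypothetical; real workflows should use
# dedicated plagiarism-detection services and a formal clearance process.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.6  # assumption: tune to your own clearance standards

def similarity(draft: str, reference: str) -> float:
    """Return a 0-1 ratio of how closely two texts match."""
    return SequenceMatcher(None, draft.lower(), reference.lower()).ratio()

def flag_for_review(draft: str, references: dict) -> list:
    """List the known sources a draft resembles too closely."""
    return [
        source for source, text in references.items()
        if similarity(draft, text) >= SIMILARITY_THRESHOLD
    ]

if __name__ == "__main__":
    known_sources = {  # hypothetical pre-cleared reference texts
        "2022 press release": "Acme Corp today announced record quarterly earnings.",
    }
    draft = "Acme Corp today announced record quarterly earnings for Q4."
    print(flag_for_review(draft, known_sources))  # -> ['2022 press release']
```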
Data privacy risks: OpenAI has responded to data privacy concerns by rolling out an incognito mode in ChatGPT that allows users to turn off chat history.
- Conversations are still retained for 30 days to monitor for abuse, however, and the onus is on the user to turn chat history off.
- Also, many other generative AI systems use third-party contractors to review input/output data for safety, which means that sharing confidential data may result in a breach of confidentiality.
- Companies can license an AI model for internal use, which lets them monitor what employees type in as prompts and protect the information shared (a minimal prompt-screening sketch follows below). For more peace of mind, Microsoft is reportedly testing a private GPT alternative.
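What might that monitoring look like in practice? One lightweight control is a prompt gateway – a thin wrapper that screens employee prompts for obviously sensitive material before anything leaves the building. The sketch below is a minimal illustration, not any vendor’s API: the patterns are naive placeholders, and send_to_model stands in for whatever licensed model an organization adopts.

```python
# Minimal sketch of a prompt gateway: screen and log employee prompts before
# they reach an external AI model. All patterns and names are illustrative.
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-gateway")

# Assumption: a few naive patterns; a real deployment would use proper
# data loss prevention (DLP) tooling rather than hand-rolled regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-like numbers
    re.compile(r"confidential|internal only", re.I),  # document markings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),       # email addresses
]

def screen_prompt(prompt: str) -> str:
    """Block prompts matching a sensitive pattern; log and pass the rest through."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked: matches {pattern.pattern!r}")
    log.info("Prompt approved (%d chars)", len(prompt))
    return prompt

def send_to_model(prompt: str) -> str:
    """Placeholder for the licensed model's API call -- swap in your vendor's SDK."""
    return f"[model response to: {prompt[:40]}...]"

if __name__ == "__main__":
    safe = screen_prompt("Draft a press release about our new product line.")
    print(send_to_model(safe))
```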
Misinformation and bias risks: This is arguably the most insidious risk when it comes to generative AI. Whether the fabrication stems from AI hallucinations or intentional human acts, AI makes spreading misinformation that much easier to pull off – and much harder to detect.
- Deepfakes can be used to depict a company executive in a compromising situation, for instance, or to forge images to file fraudulent insurance claims.
- Meanwhile, using AI to evaluate resumes of candidates may result in discriminatory hiring practices if biases are left unchecked, potentially exposing the company to litigation and penalties.
Deepfake detection technology is expected to lag behind because of its prohibitive costs and lack of legal or financial incentives. For now, strengthening our media literacy may be our best defense.
2. Ask the legal experts: Assessing + mitigating AI risk
We connected with Davis+Gilbert to discuss how to mitigate risks around the use of generative AI. Michael C. Lasky is a partner and Chair of the firm’s Public Relations law practice, and Samantha Rothaus is a partner in the firm’s Advertising and Marketing law practice group.
- They advised the PR Council in drafting its new guidelines on generative AI, and created their own Marketing Law Clearance Checklist for content creators.
Below is an excerpt of the conversation. Please click here for the full interview.
Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?
A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off. Using pre-cleared material like licensed images is a good practice to reduce risk, as are tools like plagiarism detectors or reverse-image searching. For lower-stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.
Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?
A: There will need to be a greater emphasis put on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, this will also help viewers assess what degree of credibility to give to the material.
Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable, the AI developer or the individual?
A: This is an open question. At the moment, we are not seeing users being targeted in litigation – only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may have contributory liability for producing infringing content. We suspect that if we do see this kind of litigation arise, it will likely be against large companies as the “user” rather than individual people.
3. Other AI trends + hot topics
- At its annual Think conference, IBM announced watsonx, a platform “for companies looking to introduce AI into their business model.”
- Twitter and Reddit will start charging for access to their data. Elon Musk reportedly cut off OpenAI’s access to Twitter’s data after deciding that the $2 million per year licensing fee he was charging wasn’t enough.
- In recent earnings calls, Alphabet, Microsoft, Amazon and Meta all emphasized their intent to make hefty investments in AI. In contrast, Apple’s tone is more measured. Separately, PricewaterhouseCoopers plans to invest $1 billion in generative AI to automate aspects of its tax, audit and consulting services.
- Google merged its two main AI research units, DeepMind and Brain, to gear up for an intense AI battle.
- ChatGPT is back in Italy after OpenAI met most of the government’s demands, including creating the incognito mode and providing more details on how the tool processes information.
- IBM’s CEO said hiring for back-office functions such as HR will be paused, and that he can “easily see” 30% of non-customer-facing roles replaced by AI over the next five years.
- AI developers would be required to disclose copyrighted material used in training their tools, according to a new draft of EU legislation. Separately, the G7 nations called for the creation of global standards for assessing AI risks to promote prudent development.
- Vice President Kamala Harris and other White House leaders told the CEOs of Alphabet, Microsoft, OpenAI and Anthropic that they have a “moral” obligation to keep their products safe, in the White House’s first meeting with AI leaders.
Share your feedback with us at insights@gga.nyc.
Managing AI Risks: Q&A with Legal Experts from Davis+Gilbert
Insights
May 10, 2023
Global Gateway Advisors sat down with Michael C. Lasky and Samantha Rothaus from Davis+Gilbert to discuss how best to manage risks around the use of generative AI.
Michael is a partner and Chair of the firm’s Public Relations law practice, and Samantha is a partner in the firm’s Advertising and Marketing law practice group.
They counseled the PR Council in drafting its new guidelines on generative AI, and created their own Marketing Law Clearance Checklist for content creators.
___
Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?
A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off. Using pre-cleared material like licensed images is a good practice to reduce risk, as are tools like plagiarism detectors or reverse-image searching. For lower-stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.
Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?
A: There will need to be a greater emphasis put on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, this will also help viewers assess what degree of credibility to give to the material.
Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable, the AI developer or the individual?
A: This is an open question. At the moment, we are not seeing users being targeted in litigation – only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may have contributory liability for producing infringing content. We suspect that if we do see this kind of litigation arise, it will likely be against large companies as the “user” rather than individual people.
Q: How far away are AI regulations? Do we need a set of international rules?
A: We don’t see this happening anytime soon. Just look at privacy – this is an area with a huge patchwork of different kinds of rules across different jurisdictions, and in the U.S., pushes to nationalize these laws in federal legislation have been unsuccessful for several years. We think the same will happen for AI. A set of globally adopted norms will be needed, but it remains to be seen whether full-on legislation is necessary or is even realistically going to happen. And these norms may not emerge for several years, as it will take time for people to understand how these services are best used.
Q: Are you using generative AI for your work?
A: While some AI tools have emerged for the practice of law, many of them are distinct from the generative AI technologies being introduced in the communications and marketing space. We don’t think any of our clients would want their substantive legal advice to be AI generated. As for us, we have not personally begun using any AI tools in our daily work, though we’ve played with them to better understand how our clients are using them.
Share your feedback with us at insights@gga.nyc.
Defining your approach in the evolving AI landscape
Insights
April 24, 2023
This is part two in our series about generative AI’s impact on communications professionals. If you missed part one, you can read it here.
The next installment will include a deep dive on organizational risk surrounding AI in the workplace, centered on data privacy and ethics.
1. From theoretical to practical: How can I use AI in my work?
Generative AI tools are undeniably impressive, but do they really turbocharge productivity? Here are a few accessible use cases for communications professionals.
- Summarize + condense: AI tools can shave hours off research by synthesizing lengthy texts and gleaning the highlights; some can even trim an hour-long Zoom meeting into a two-minute video clip.
- Brainstorm: AI-assisted ideation can help you generate ideas and surface blind spots, in tasks ranging from media strategy development to crisis management.
- Produce rough drafts: Generative AI can be a helpful starting point for drafting press releases, social media content, tailored pitches to journalists, bios based on LinkedIn profiles, and more.
- Create visuals: Bypass copyright concerns and generate customized digital artwork for presentations.
2. Initiate strategic planning to avoid AI pitfalls
Work is underway in many organizations to assess and map the near-term implications of generative AI for their business. For reputation management and strategic communications leaders, consider the following checklist as a starting point for your planning:
- Become familiar with the AI basics: Learn to use the tools on the market – from free ones like ChatGPT to enterprise products that provide more customization and data transparency.
Read about the long-term arc the technology is likely to follow, so your vision won’t be swayed by the boom or bust of any particular system.
Keep your antenna up on what competitors, media and clients are doing.
- Form a task force of employees from various departments to identify use cases and select the appropriate tools by evaluating their accuracy, ease of use, customization options and pricing.
For larger organizations, use a three-tiered plan. Whatever your strategy, don’t give in to the hype and over-pivot. Stay committed to the fundamentals and focus on your core competence as reputation managers.
Is it important to bring in key external stakeholders and AI experts for broader input? Or is speedy implementation the higher priority?
- Weigh the risks vs. rewards: Be cognizant of generative AI’s early limitations, such as its inability to properly cite sources, and its tendency to hallucinate and perpetuate existing biases.
A risk-demand matrix can clarify where AI adds the most value (a simple scoring sketch follows this checklist). Focus on tasks that are repetitive and high-volume, and revisit the matrix often.
Huddle with your legal team to hash out potential liabilities. While humans also err, the difference is that AI tools cannot explain their reasoning or be held accountable.
Let the tools augment, not replace, your work processes. Always comb through the output for accuracy, tone and style.
- Establish clear standards for using AI-generated text in communications materials. Define the scope of usage and the review process, as well as any necessary disclaimers.
Self-policing is key until legislators catch up, so draft a company-wide security policy with both your risk tolerance and your clients’ in mind. Encourage staff to experiment with AI tools within the bounds of these rules.
Reiterate the sensitivity of sharing confidential or proprietary information while using AI, particularly as the tools are integrated into email and other messaging platforms.
How will you ensure staff compliance, and what type of governance will you put in place?
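To make the risk-demand matrix above concrete, here is one minimal scoring sketch. The tasks, scores and the demand-over-risk formula are hypothetical placeholders; the point is simply to rank candidate use cases by volume of demand against legal and reputational exposure before piloting anything.

```python
# Illustrative risk-demand matrix: rank candidate AI use cases so high-demand,
# low-risk tasks surface first. All tasks and scores below are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    demand: int  # 1-5: volume and repetitiveness of the work
    risk: int    # 1-5: legal, reputational and accuracy exposure

    @property
    def priority(self) -> float:
        """Higher = better AI candidate: lots of demand, little risk."""
        return self.demand / self.risk

tasks = [
    Task("Summarize daily media coverage", demand=5, risk=2),
    Task("Draft first-pass press releases", demand=4, risk=3),
    Task("Write crisis statements", demand=2, risk=5),
]

# Revisit these scores often -- both the tools and the risks evolve quickly.
for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
    print(f"{task.name}: priority {task.priority:.1f}")
```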
3. Other AI news on the radar
- Google’s Bard AI chatbot can now generate code, debug existing code, and write functions for Google Sheets. However, new reporting suggests that Google sidelined concerns from its AI ethics team in order to fast-track Bard’s launch.
- The Biden administration has ramped up efforts to regulate AI tools by launching a public consultation on the technology. Meanwhile, Senate Majority Leader Chuck Schumer is spearheading the congressional effort to craft AI legislation.
- European legislators plan to add new provisions to a pending bill aimed at “steering the development of very powerful artificial intelligence in a direction that is human centric, safe and trustworthy.” China issued a mandate that requires a security review of generative AI tools before they’re allowed to go live.
- Elon Musk plans to launch his own AI company to rival OpenAI. This came just two weeks after he cosigned an open letter urging a pause on the development of powerful AI systems. Amazon also joined the AI race with its own system.
Share your feedback with us at insights@gga.nyc.
Exploring AI in Communications
Insights
April 6, 2023
The meteoric rise of ChatGPT made generative AI technology widely accessible to the public, conjuring a mix of intrigue, ambivalence and an urgency to stay ahead.
Businesses are scrambling to make sense of it, professionals are fearful of losing their jobs and regulators find themselves playing catch-up.
The big picture: Global Gateway Advisors recognizes this is a watershed moment that will redefine how we work and communicate.
- To help you navigate, we are launching a series on generative AI and its impact on communications professionals and how they do their jobs.
- We hope to cut through the noise and distill the barrage of information to provide you with pragmatic and actionable advice.
Go deeper: This first installment will give you the lay of the land, catch the latest reactions from the communications world and explore what is at stake.
What’s next: In part two, we will zoom in on what generative AI means to the communications industry, and suggest a systematic approach on how to incorporate it in your organizations and workflows.
But first…
- Don’t miss our latest Media News & Moves updates, now on our website.
- Will you be at the Page Spring Seminar in Brooklyn, NY next week? If yes, drop us a note and let’s meet.
- We’re proud to sponsor StratCommWorld 2023, a leading conference connecting strategic communications and public affairs, taking place May 1-2, 2023 at the National Press Club in Washington, D.C. Click here for details.
1. The basics on generative AI
Generative AI falls under the broad category of machine learning, with ChatGPT being the most sophisticated chatbot to date.
- Trained on a vast amount of data scraped from the internet, ChatGPT responds to text prompts and is able to hold human-like conversations on topics ranging from quantum physics to mental health, compose haikus and write computer code.
- But this AI tool, developed by research firm OpenAI, isn’t without flaws. It is known to hallucinate (which in AI terms means producing factual inaccuracies and reasoning errors), and its training data cuts off in 2021 – which means it’s useless for real-time tasks like media tracking.
Clones such as Google’s Bard and Chatsonic have sprung up at warp speed, with varying success. China has also entered the generative AI arms race with Baidu’s Ernie. And today, Meta released an AI model that can identify individual items within images.
Yes, but: Privacy and security concerns loom large, as employers struggle to figure out how to fold ChatGPT into their workflows without risking the security of corporate secrets, customer information and intellectual property.
- OpenAI has made clear that it uses conversations to continually train ChatGPT, and has urged users not to share sensitive information in their prompts. Legal and ethical issues also abound.
2. Across the pond… and beyond
Italy became the first Western country to ban ChatGPT. The Italian data-protection authority said it is investigating whether OpenAI complied with the General Data Protection Regulation, citing the ChatGPT data breach on March 20.
- EU lawmakers have also been negotiating sweeping new rules to limit high-risk AI tools.
- Meanwhile in Japan, companies like SoftBank and Hitachi are restricting the use of generative AI in business operations due to fears of proprietary information leaks.
More than 1,000 prominent tech researchers and leaders, including Elon Musk (a co-founder of OpenAI who severed his ties in 2018), called for a six-month moratorium on the further development of powerful generative AI systems, citing “profound risks to society and humanity.”
The bottom line: The open letter calls for safety policies to be put in place before the AI race evolves to a point where even its creators cannot predict the outcome.
3. What does this mean for communicators?
Some communications companies are fully embracing the technology.
- In March, PR firm Gregory FCA and global communications giant Stagwell separately announced the launch of their AI writing tools, Write Release and Taylor, respectively.
- Write Release runs on OpenAI’s technology and uses prompts to gather information and write press announcements, while Taylor drafts PR content like press releases, pitches, blog posts and social media copy.
The communications teams at Microsoft – a major investor in and partner of OpenAI – said they are experimenting with these AI tools.
Coca-Cola was among the first major brands to use ChatGPT and its sibling, DALL-E. The company teamed up with consulting firm Bain & Company to integrate these tools into its marketing strategy.
What they’re saying: Not everyone shares the enthusiasm. A February survey conducted by ICCO and PRCA revealed that one in four global PR leaders said they will never use AI tools such as ChatGPT, with more than half of them admitting to never using the technology.
4. How should I keep up with the latest?
Here’s what we’re paying attention to:
- AI Weekly: A curated site on all things AI
- On Tech: AI: A newsletter from The New York Times
- The Algorithm: A newsletter from MIT Technology Review
- AI Disruption: A blog that explains how AI disrupts various industries
- Exponential View: A newsletter that focuses on implications of new AI technology
- The Batch: Curates weekly reports on AI for business leaders and engineers
- TWIML AI Podcast: Explores how AI changes people’s lives and business operations
- The Economist: An overview of big tech’s pursuit of AI dominance
Share your feedback – or ideas about what we should cover in this series – by emailing insights@gga.nyc.