Defining your approach in the evolving AI landscape

April 24, 2023

This is part two in our series about generative AI’s impact on communications professionals. If you missed part one, you can read it here.

The next installment will include a deep dive on organizational risk surrounding AI in the workplace, centered on data privacy and ethics.

1. From theoretical to practical: How can I use AI in my work?

Generative AI tools are undeniably impressive, but do they really turbocharge productivity? Here are a few accessible use cases for communications professionals.

  • Summarize + condense: AI tools can shave hours off research by synthesizing lengthy texts and gleaning the highlights; some can even trim an hour-long Zoom meeting into a two-minute video clip.

  • Brainstorm: Ideation with AI can help you generate ideas and surface blind spots, in tasks ranging from media strategy development to crisis management.

  • Produce rough drafts: Generative AI can be a helpful starting point for drafting press releases, social media content, tailored pitches to journalists, bios based on LinkedIn profiles, and more.

  • Create visuals: Sidestep stock-image licensing and generate customized digital artwork for presentations.

2. Initiate strategic planning to avoid AI pitfalls

Many organizations are already assessing and mapping the near-term implications of generative AI for their business. If you lead reputation management or strategic communications, consider the following checklist as a starting point for your planning:

  1. Become familiar with the AI basics: Learn to use the tools on the market – from free ones like ChatGPT to enterprise products that provide more customization and data transparency.

    Read about the long-term arc the technology is likely to follow, so your vision won’t be swayed by the boom or bust of any particular system.

    Keep your antenna up on what competitors, media and clients are doing.

  2. Form a task force of employees from various departments to identify use cases and select the appropriate tools by evaluating their accuracy, ease of use, customization options and pricing.

    For larger organizations, use a three-tiered plan. Whatever your strategy, don’t give in to the hype and over-pivot. Stay committed to the fundamentals and focus on your core competency as reputation managers.

    Is it important to bring in key external stakeholders and AI experts for broader input? Or is speedy implementation the higher priority?

  3. Weigh the risks vs. rewards: Be cognizant of generative AI’s early limitations, such as its inability to properly cite sources, and its tendency to hallucinate and perpetuate existing biases.

    A risk-demand matrix can clarify where AI adds the most value. Focus on tasks that are repetitive and high-volume, and revisit the matrix often.

    Huddle with your legal team to hash out potential liabilities. While humans also err, the difference is that AI tools can neither explain their reasoning nor be held accountable.

    Let the tools augment, and not replace, your work processes. Always comb through the output for accuracy, tone, and style.

  4. Establish clear standards for using AI-generated text in communications materials. Define the scope of usage and the review process, as well as any necessary disclaimers.

    Self-policing is key until legislators catch up, so draft a company-wide security policy with both your risk tolerance and your clients’ in mind. Encourage staff to experiment with AI tools within the bounds of these rules.

    Reiterate the risks of sharing confidential or proprietary information with AI tools, particularly as they are integrated into email and other messaging platforms.

    How will you ensure staff compliance, and what type of governance will you put in place?
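For teams that want to operationalize the risk-demand matrix from step 3, the scoring can even be kept in a simple script or spreadsheet. The sketch below is one minimal illustration in Python; the quadrant labels, task names, and 1–5 scores are hypothetical examples of our own, not recommendations from this series:

```python
# Minimal sketch of a risk-demand matrix for triaging AI use cases.
# Scores (1-5) and task names below are illustrative placeholders.

def classify(risk, demand, threshold=3):
    """Place a task in one of four quadrants based on 1-5 scores."""
    if demand >= threshold and risk < threshold:
        return "adopt"        # high-volume, low-risk: best AI candidates
    if demand >= threshold and risk >= threshold:
        return "pilot"        # valuable but risky: keep human review
    if demand < threshold and risk < threshold:
        return "experiment"   # low stakes: safe sandbox for staff
    return "avoid"            # low value, high risk: skip for now

tasks = {
    "meeting summaries":     (1, 5),  # (risk, demand)
    "press release drafts":  (3, 4),
    "crisis statements":     (5, 2),
    "internal memos":        (2, 2),
}

for name, (risk, demand) in sorted(tasks.items()):
    print(f"{name}: {classify(risk, demand)}")
```

Revisiting the scores periodically, as the checklist suggests, is as simple as editing the numbers and rerunning.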

3. Other AI news on the radar

  • Google’s Bard AI chatbot can now generate code, debug existing code, and write functions for Google Sheets. However, new reporting suggests that Google sidelined concerns from its AI ethics team in order to fast track Bard’s launch.

  • The Biden administration has ramped up efforts to regulate AI tools by launching a public consultation on the technology. Meanwhile, Senate Majority Leader Chuck Schumer is spearheading the congressional effort to craft AI legislation.

  • European legislators plan to add new provisions to a pending bill aimed at “steering the development of very powerful artificial intelligence in a direction that is human centric, safe and trustworthy.” China issued a mandate that requires a security review of generative AI tools before they’re allowed to go live.

  • Elon Musk plans to launch his own AI company to rival OpenAI. This came just two weeks after he cosigned an open letter urging a pause on all AI development. Amazon also joined the AI race with its own system.

Share your feedback with us at