
Are corporate DEI efforts actually retreating?

Insights


Global Gateway Advisors

March 19, 2024

Recent headlines suggest companies have pulled back on their DEI commitments, reflecting a shift from the social justice movement of 2020 to a muted environment after the Supreme Court struck down affirmative action in 2023. The research shows a more complex reality.

  • Businesses are balancing their investments – and how they communicate about these initiatives – to avoid legal issues and unwanted public scrutiny.
  • It’s also important to separate workplace DEI, which centers around building an internal culture of inclusion, from ESG, a set of standards geared toward an investor audience.

As other defining issues, ranging from AI strategy to election year politics, capture companies’ attention, this edition will help communications leaders provide better counsel to their organizations on DEI topics.


1. Affirmative action ruling had immediate impact on corporate DEI

The big picture: Fortune 100 companies are navigating a shift in how diversity, equity and inclusion programs can operate.

  • In June 2023, the Supreme Court ended race-based affirmative action in higher education. Republican attorneys general have raised concerns about DEI initiatives in corporations and warned against making race-based hiring decisions.

  • Companies have since faced complaints and litigation related to their DEI initiatives.

Go deeper: Based on analysis before the SCOTUS decision, legal experts anticipated that a ruling against affirmative action would lead to scrutiny of DEI programs in the workplace.

  • This prediction held true. Companies have traditionally argued that diversity strengthens the workforce, much as universities did in defending affirmative action. The Supreme Court’s decision weakened the legal footing for these programs, so businesses that have maintained their stance on DEI are now working to avoid drawing legal scrutiny.

What else: A new case before the Supreme Court, Muldrow v. City of St. Louis, is worth monitoring. A female police sergeant alleged that she was reassigned to a less prestigious role because she is a woman.

  • If the court widens its judgment beyond job transfers, development and retention programs specifically for women or people of color may be subject to change.

Why it matters: Companies are examining their DEI programs more closely to ensure they comply with the changing legal landscape. Employers can proactively adapt to this shift and continue building a diverse workforce and an inclusive culture.

2. Align DEI initiatives to measurable outcomes

The big picture: Companies are more clearly aligning DEI commitments to business goals.

  • Mentions of ESG and DEI in shareholder proposals and boardrooms may be down, but a recent Littler study indicates ongoing commitment among most large companies.

  • This is further supported by a Purpose Brand finding that 154 Fortune 500 companies released diversity disclosures in 2023, nearly double the 79 produced in 2022.

  • All eyes will be on the diversity reports published in 2024, as they will likely reflect the broader industry trend of companies “reassessing verbiage and DEI endeavors that could bring legal risk.”

Go deeper: The Littler survey also reveals a potential disconnect between chief diversity officers (CDOs) and chief legal officers (CLOs), whose perspectives and goals for DEI have diverged in the current climate.

  • While 57% of CDOs say their organization is defining metrics for DEI progress, just 19% of CLOs say the same. This highlights the need for stronger alignment within top leadership regarding DEI goals and measurement.

What else: 91% of C-suite leaders say the Supreme Court ruling has not lessened their DEI prioritization.

  • A survey of organizations for McKinsey’s 2023 Women in the Workplace report showed that 60% of respondents increased DEI staffing and budgets over the past year, and 34% maintained them. Only 4% reported a decrease.

  • These trends suggest that the idea of a widespread corporate retreat from DEI is likely overstated.

  • “For the companies that are very well run in this space … perhaps they changed some of their programs, perhaps they tweak some of their efforts, perhaps they changed some things. But many of them are maintaining significant commitments to this business effort,” said Karyn Twaronite, EY’s global vice chair of diversity, equity, and inclusiveness. “This isn’t a ‘let’s do it for three years and not do it anymore’ kind of program.”

Why it matters: Companies with DEI experts in senior leadership are better equipped to navigate the evolving landscape.

  • The most impactful DEI programs boost employee engagement and strengthen company culture, leading to greater organizational success and a positive effect on the bottom line.

  • Strong DEI leaders bring deep knowledge and core competencies to the work, and because they deliver results, their organizations are doubling down on it. They can also drive focus on the most critical culture and business metrics.

  • “Impactful work may sometimes occur quietly, but it’s still happening,” Gusto’s Emil Yeargin said at SXSW. “Celebrate daily wins and successes because the work is never done, and measure relentlessly to show impact.”

3. Build a more inclusive culture

Companies have broadened the definition of diversity, equity and inclusion to ensure that employees – regardless of how they identify – feel seen, heard, and valued.

  • They emphasize the importance of fostering a diverse and inclusive workplace where people feel a sense of belonging, which is key to attracting and retaining talent, particularly in a competitive job market.

  • “I think instead of saying this is a program for Black employees, it would be more like, ‘this is a program to increase the equity of promotion rates across the firm, and everybody is included in applying to be part of this program, but will play different roles,’” said Porter Braswell, founder of 2045 Studio.

Go deeper: A report from Expanding Equity, a W.K. Kellogg Foundation program helping companies implement DEI initiatives, found that 94% of companies have zeroed in on retention, implementing at least one inclusion and belonging initiative.

  • Core DEI initiatives, such as parental leave and accessible facilities, and policies like equal pay and anti-harassment remain in place.

What else: Some firms, like Blackstone, are focusing on hiring for socioeconomic diversity and on changing job requirements to find more diverse talent without targeting a specific race or ethnicity.

  • This movement extends to disability inclusion at work: An Accenture report showed companies leading in this area see significant financial advantages, including 1.6 times more revenue, 2.6 times more net income, and 25% higher productivity.

  • “For too long, people with disabilities – individuals who are perfectly qualified and overwhelmingly willing to work – face enormous barriers to being offered a job,” said Ted Kennedy, Jr., co-chair of the Disability Equality Index.

Why it matters: The trend toward inclusion and belonging is likely to continue, ensuring employees from all backgrounds feel valued and supported at work.

4. Considerations for strategic communicators

  • Be authentic to the organization’s core values: Ensure DEI efforts and communication strategies reflect the company’s values – particularly when it comes to an emphasis on inclusion.

  • Reflect on the past + align on the future: Evaluate past public commitments around DEI and assess how the current approach reinforces those commitments. Address any disconnect among key leadership stakeholders to ensure the CEO, CDO, CLO and/or CHRO share a unified understanding of, and approach to, DEI within the company.

  • Set a single strategic approach: Create a DEI communications strategy grounded in data and insights, and embed it across all levels of the organization. Engage affinity groups and managers to ensure everyone feels a part of the strategy.

  • Be prepared: Track current events and prepare a scenario plan to identify vulnerabilities and help the organization mitigate internal or external challenges.

  • Establish clear metrics: Regularly measure and report on progress; one example metric is sketched below. Transparently reporting on these metrics builds trust and reinforces the business value of an inclusive workforce.
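
For a concrete (and deliberately minimal) illustration of that last point, the sketch below computes one metric echoed earlier in this piece – promotion-rate equity across employee groups. The group labels and counts are hypothetical, and the right metrics for any organization should be chosen with HR and legal counsel.

    import pandas as pd

    # Hypothetical HR snapshot: promotions granted per employee group over
    # one review cycle. Group labels and counts are illustrative only.
    snapshot = pd.DataFrame({
        "group":    ["A", "B", "C"],
        "eligible": [400, 250, 150],
        "promoted": [48, 25, 12],
    })

    snapshot["promotion_rate"] = snapshot["promoted"] / snapshot["eligible"]

    # Simple parity check: each group's promotion rate relative to the
    # overall rate. Ratios well below 1.0 flag groups to examine further.
    overall_rate = snapshot["promoted"].sum() / snapshot["eligible"].sum()
    snapshot["parity_ratio"] = snapshot["promotion_rate"] / overall_rate

    print(snapshot)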


Share your feedback with us at insights@gga.nyc.



Innovations in drug discovery – and what communicators should know

Insights


Global Gateway Advisors

September 5, 2023
As AI permeates business functions across nearly every industry, communicators can glean important lessons from the way each sector talks about technological advancement and disruption in its own field.

With respect to healthcare, researchers have leveraged AI in medicine for years, and we are beginning to see how life-changing treatments can reach the market much faster. 

  • The “patent cliff” – when the world’s 10 biggest drugmakers stand to lose nearly half their revenue by the end of the decade – is fast approaching. 
  • Meanwhile, more than 150 small-molecule drugs are in discovery using an AI-first approach, with more than 15 in clinical trials. The annual growth rate for that pipeline is nearly 40%, according to the Boston Consulting Group.

In this issue, we explore the evolving use of AI in drug discovery, and with it, the rising potential of real-world evidence (RWE).

Then, we’ll evaluate the essential role that communicators play in shaping public perception and dialogue around the use of AI in drug and medical device development.


1. Moving from concept to market faster: How AI creates efficiencies in drug discovery

The big picture: Estimates vary, but it currently costs about $1 billion and takes roughly 10 years to develop a new drug, with only a fraction of them making it to the market. 

  • Change won’t be immediate. But AI can help scientists discover a drug faster by predicting how different molecules might behave in the body, and discarding dead-end compounds so promising candidates make it to clinical trials quicker.
  • While there is no shortcut in human clinical trials, AI can optimize and diversify patient pools by identifying high-potential candidates. Currently, just 5% of eligible patients participate in clinical research, which limits the ability to study drug efficacy for specific subgroups.

Go deeper: Decentralized clinical trials can facilitate patient engagement by using remote monitoring via wearable devices, which transmit real-world data (RWD) like vital signs and medication adherence to researchers. 

  • Researchers can use AI to analyze RWD for potential adverse events and safety signals, allowing earlier detection of potential drug safety issues.
  • In some cases, AI is helping drug companies bypass the animal testing stage, allowing them to use computer models of humans instead. Machine learning can also accelerate the repurposing of existing drugs – and new uses of existing drugs remain patentable.

What else? Rare diseases get a leg up from the Orphan Drug Tax Credit and the FDA’s fast track designation, but their small patient pools present tough challenges that discourage drugmakers from prioritizing research in this space.

  • As a result, 95% of rare diseases have no approved treatments.
  • AI is getting better at finding subtle links in large swaths of information that even the finest minds could miss, which helps researchers repurpose drugs and develop new ones faster, even without a large sample size.

What they’re saying:

  • Eric Topol, Scripps Research Translational Institute: “There is no shortage of interest [in AI]. Every major pharma company has invested in partnerships with at least one, if not multiple, AI companies.”
  • David Ricks, Eli Lilly: “In a discovery process, you want to funnel wide. In the past, perhaps humans would just think of what they already knew about. The machine doesn’t. It just knows about everything that was there and it comes up with constructs that humans just don’t.” 
  • Tim Guilliams, Healx: “The potential to suddenly create a viable pipeline for many conditions with only a handful of patients, at the very least, gives real hope.”

Yes, but: Jim Weatherall, AstraZeneca’s VP of data science, AI and R&D, said the challenge for the next few years is pull-through – actually bringing these drugs to market. He is otherwise optimistic: “We’ve been on a journey from ‘what is this?’ to ‘why did we ever do it any other way?’”

2. AI bolsters the pipeline from RWD to RWE

The big picture: Successful AI drug development requires high-quality, real world data, which is challenging to obtain and can be rife with privacy implications. RWD often comprises electronic health records, which present challenges at scale due to a lack of standardization (as they are collected outside the controlled environment of a clinical trial).

  • Some researchers believe the answer to these concerns could lie in synthetic data produced by applying predictive AI algorithms to RWD. In pharma, synthetic data could be used to handle large but sensitive samples, where regulatory restrictions and data privacy are involved, such as in cross-border research.
  • For now, synthetic data is a niche pursuit and hasn’t yet made its way into clinical use, largely due to concerns that it inaccurately represents the target population.


Go deeper: “The complexity and the variability in healthcare and science makes it a really hard problem to solve,” said Jim Swanson, chief information officer of Johnson & Johnson. “You can create synthetic data easily enough, but is it correlated enough to give you a specific and accurate example? That’s the problem you have to solve.”

  • As such, RWD is used increasingly throughout the drug development process, from identifying early targets to post-market safety surveillance. 
  • The ability to convert RWD to RWE using analytics is a crucial measure of success, as regulators recognize the benefits of RWE and fold them into decision-making. 
  • This is where AI comes in. Algorithms can identify patterns and relationships within RWD to produce RWE, which can then be used to predict patient outcomes and compare treatments, helping researchers understand which are more effective and safer in the real-world setting (see the sketch below).
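
For illustration, here is a minimal sketch of that RWD-to-RWE step, assuming a tidy patient-level table. The column names and values are hypothetical stand-ins rather than any specific EHR schema, and production pipelines would add rigorous confounder adjustment and far larger samples.

    import pandas as pd

    # Hypothetical real-world dataset: one row per patient, with the
    # treatment received, an adverse-event flag and one simple covariate.
    rwd = pd.DataFrame({
        "treatment":     ["A", "A", "B", "B", "A", "B", "A", "B"],
        "adverse_event": [0, 1, 0, 0, 1, 0, 0, 1],
        "age_band":      ["<65", "65+", "<65", "<65", "65+", "65+", "<65", "65+"],
    })

    # Naive comparison: raw adverse-event rate per treatment.
    print(rwd.groupby("treatment")["adverse_event"].mean())

    # Stratify by a confounder so treatments are compared within comparable
    # patient groups – a crude stand-in for the adjustment real RWE work does.
    print(rwd.groupby(["age_band", "treatment"])["adverse_event"].mean())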

3. Evaluating implications for communicators

The biopharma industry is on a precipice. A Morgan Stanley report estimates that even a modest improvement in early-stage drug development success rates could bring 50 novel therapies to market over 10 years.

After discovery comes the story. 

  • Communicating new science is tricky and can have a lasting negative impact if not done right. 
  • The challenge is figuring out how to communicate AI’s benefits and ethical considerations in medicine, especially when the first AI-developed drug eventually hits the market.

Here are five key considerations for communicators.

  1. Understand and be transparent about AI’s capabilities and limitations to build trust. Don’t shy away from the risks. 
  2. Be authentic and clear about the potential and limitations of AI. 
  3. Be true to the work and its impact. Use data and insights to educate. Leverage publications and medical meetings as opportunities. 
  4. Showcase the significant personal and societal impact of healthcare innovation on patients over the last century, with AI as the latest example.
  5. Proactively address concerns about data privacy and AI biases. Clearly communicate how your AI solutions adhere to regulations and best practices. Consider working with medical experts to create a campaign that speaks to the worries and anxieties of the public.


Share your feedback with us at insights@gga.nyc.



Navigating healthcare's next frontier: AI

Insights


Global Gateway Advisors

July 28, 2023

A recent Yale School of Management survey showed close to half of 200 top CEOs across sectors believe health is the field in which AI will make the most transformative contribution. 

  • Healthcare spending on AI software is expected to grow 40% to nearly $6.2 billion in 2023, compared with $4.4 billion in 2022.
  • Rules-based predictive AI has been around for a while, powering myriad applications, whether identifying hospital readmission risks for patients or predicting clinical outcomes in drug trials.

Why it matters: Now, with breakthroughs in generative AI, disparate sources of unstructured data like clinical notes, diagnostic images and medical charts have turned into assets for the data-hungry technology. 

The bottom line: The untapped potential is huge for the field of healthcare, which currently generates 30% of the world’s data volume. 

What’s next: In this issue, we explore three major pain points in healthcare that are ripe for an AI solution, specifically for providers, hospitals and health insurance companies. 

  • Our next edition will focus on the pharmaceutical, biotechnology and medical devices industries.


1. Admin tasks detract from high-value work

The big picture: Clinician burnout and healthcare staff attrition are worldwide concerns, leading to lost productivity, increased costs and a decline in patient care quality.

Using AI to tackle paperwork isn’t glamorous, but can reduce a huge burden for providers. For years, doctors have used simpler admin tools like speech recognition software to help with documentation. AI can do far more – such as summarizing, organizing and tagging conversations between physicians and patients. 

Why it matters: Physicians are spending an estimated 4.5 hours a day completing electronic health records required for treatment and billing. 

  • In dollar terms, admin expenses accounted for up to $1 trillion in the U.S., or 25% of the total national health expenditures in 2019. 
  • Automating paperwork means providers can spend time connecting with patients.

What else? When the quality of patient care declines, hospital readmissions tend to rise, resulting in an even heavier burden on the healthcare system. 

  • Clinician burnout is not unique to the U.S., and it was a problem even before the pandemic. A 2019 study estimated that Germany loses an average of €9 billion in productivity annually, while Switzerland loses €5.8 billion a year replacing doctors who leave the field due to stress and exhaustion.

Go deeper: A fragmented landscape of multiple AI programs could perpetuate the industry-wide issue of interoperability.

2. Prior authorization inefficiencies lead to delays or coverage denials

The big picture: Prior authorization is the process through which a doctor files a request with a patient’s health insurer for approval before proceeding with treatments, tests or prescriptions. It has remained a largely manual process on both sides, with humans sorting through a patient’s health plan and medical history via emails, phone calls and faxes.

Because prior authorization is based on data exchange, AI will be able to automate up to 75% of the manual work and reduce the approval window to days, if not hours, per McKinsey. That would present a massive improvement over the current average of 10 days. 

Why it matters: Data show that 93% of physicians said prior authorizations delay patient care, and 82% said the process is so complicated that it causes patients to abandon treatment altogether.

  • Automation is difficult as payers have no standardized method for receiving and approving requests. Prior authorization has the lowest electronic adoption rate (about 26%) among all admin tasks for payers. 
  • The manually intensive nature of the process exposes it to errors. A 2022 report found that 13% of prior authorization denials by Medicare Advantage plans were for benefits that should have been covered.

The Centers for Medicare & Medicaid Services (CMS) recently proposed a rule that would require certain payers to implement an automated process, meet shorter time frames and be more transparent about their decision-making.

Go deeper: 

  • Florida Blue partnered with Olive, a healthcare automation company, to issue approvals while a patient is still at the doctor’s office. Rather than deny requests that it cannot immediately approve, the tool instead routes them to clinicians for human review. Health Care Service Corp. has implemented a similar automated tool.
  • Startups like Cohere offer AI solutions that can be customized for individual health plans’ specific prior authorization needs. 

In June, however, the American Medical Association called for more oversight of how AI is used for prior authorizations to ensure it does not result in more coverage denials.

3. Physicians struggle to stay on top of latest medical knowledge

The big picture: A 2022 survey showed that 95% of physicians are interested in learning about new trials, treatments or procedures, but 68% said they feel overwhelmed by the amount of information they have to keep up with. 

Large language models, or LLMs, can help healthcare professionals stay up to date by quickly summarizing and analyzing new research findings, and suggesting relevant studies based on the provider’s specialty and patient population.
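
As a simplified illustration of that summarization use case, the sketch below asks a general-purpose LLM API to condense a study abstract into specialty-specific takeaways. The OpenAI client and model name here are illustrative assumptions rather than a recommendation, and any clinical deployment would keep a human reviewer in the loop.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

    abstract = "..."  # paste the full text of a newly published abstract here

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You summarize medical research for clinicians. "
                        "Be factual, flag uncertainty and do not give medical advice."},
            {"role": "user",
             "content": "Summarize in three bullets for a primary care physician "
                        "with a largely geriatric panel:\n\n" + abstract},
        ],
    )
    print(response.choices[0].message.content)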

Why it matters: “Medical knowledge is growing so rapidly that only 6% of what the average new physician is taught at medical school today will be relevant in 10 years,” according to a National Bureau of Economic Research report. “Technology such as AI could provide valuable clinical data to the clinician at the time of diagnosis.” 

That said, “adoption of AI for decision-making in medicine is outpacing efforts to oversee its use,” per STAT News. A research collective called Health AI Partnership – which includes leaders from New York Presbyterian, Mayo Clinic, and other major institutions – published a guide to help health systems overcome challenges, address biases and prioritize equity while implementing AI tools. 

Go deeper:

  • New York-based Northwell Health integrated Aidoc’s AI system into 17 of its hospitals, while also launching an AI-enhanced pregnancy chat app earlier this year to screen for common symptoms and give users personalized advice. New York City Health + Hospitals and NYU Langone Health are also proactively using AI for patient care.
  • Mount Sinai is incorporating AI for more timely diagnosis of eye disease and risk assessment of systemic health conditions. The hospital also has a chatbot that guides anxious patients who are trying to decide between making a regular doctor’s appointment, visiting a local urgent care or heading to an emergency room.

Yes, but: Generative AI’s tendency to hallucinate is a major red flag in the high-stakes realm of patient care. LLMs “should never replace humans in the diagnosis and treatment of patients,” said Dr. Karen DeSalvo, chief health officer at Google and a former Obama administration health official. 

4. Upcoming health events + conferences to monitor

  1. Healthcare Automation and Digitalization Congress
    September 25-26, 2023 // Zurich, Switzerland
  2. HLTH 2023
    October 8-11, 2023 // Las Vegas
  3. Reuters Total Health
    November 7-8, 2023 // Chicago
  4. FT Global Pharma and Biotech Summit
    November 7-9, 2023 // Digital & In-Person
  5. FT Health Technology Summit
    November 30, 2023 // Digital
  6. 2023 Forbes Healthcare Summit
    December 4-5, 2023 // New York City


Share your feedback with us at insights@gga.nyc.



Catch up on the latest AI trends + topics

Insights


Global Gateway Advisors

July 7, 2023

This edition features the latest developments around AI regulation and news from Amazon, Google, and SoftBank. Plus, we’ve compiled a brief roundup of podcasts and thought leaders to follow for timely AI insight.


1. Latest AI trends + hot topics


  • NYC hiring law takes effect. Starting July 5, New York City businesses that use AI in hiring must audit their processes for evidence of bias and report the results. This new law, believed to be the first of its kind, requires employers using machine learning in their hiring practices to engage third-party auditors on an annual basis.


  • Novelty wearing off? ChatGPT saw traffic fall for the first time in June, down 9.7% from May, according to preliminary estimates from Similarweb. The drop was even steeper in the U.S., at 10.3% month over month. Nevertheless, ChatGPT remains by far the most visited chatbot.


  • Amazon CEO: Don’t count us out of the AI race just yet. In a recent interview, Amazon CEO Andy Jassy said that the company’s proprietary AI chips have an edge in price performance as it goes up against category leader Nvidia. Jassy likened the buzz around generative AI chatbots to the “hype cycle” that precedes the “substance cycle,” specifically pointing to AWS as one business that can capitalize on the AI buzz over the long term.


  • Google updated its privacy policy, explicitly saying the company reserves the right to scrape everything posted online to build its AI tools. This is an unusual clause for a privacy policy, as a business typically describes ways that it uses the information users post on the company’s own services. Here, the language implies that the entire public internet is fair game for Google’s AI projects.


  • SoftBank Group shifts to “offense mode” on AI, CEO Masayoshi Son said at a shareholder meeting. His focus on AI preceded the launch of ChatGPT – the CEO has mentioned “AI” more than 500 times in quarterly and annual results presentations between 2017 and mid-2022.

    Now, SoftBank is set to develop its own generative AI platform, with a $37 million cash infusion from the Japanese government. Several companies backed by SoftBank’s Vision Fund are expected to become big winners as the AI wave expands, with one chip designer slated for a blockbuster IPO later this year.


  • European companies sound the alarm over AI law. Dozens of Europe’s largest companies, including Germany’s Siemens and France’s Airbus, have spoken out against the EU’s proposed AI regulation, saying the rules risk harming competitiveness, yet fail to deal with potential challenges.

    Meanwhile, Japan is reportedly leaning toward softer AI rules closer to the U.S. than to the EU, as it looks to the technology to boost economic growth and propel it to leadership status in advanced chips.

2. Where to turn for insightful AI perspective

For a range of input on the ever-evolving AI landscape, below are four podcast recommendations and four great Twitter follows.

Podcasts:

  • Lex Fridman Podcast
    With a massive following, this podcast is hosted by an AI researcher with the technical heft to book the best minds in AI as guests. Each episode spans at least an hour, but includes well-labeled timestamps so you can jump to the topics that interest you.


  • Hard Fork
    A lighthearted weekly show put out by the New York Times, featuring two journalists exploring the latest in tech with abundant humor. Topics are not confined to AI, and the content isn’t as technical as many other AI podcasts. Each episode clocks in at about an hour.


  • Your Undivided Attention
    In this biweekly podcast from the Center for Humane Technology, co-founders Tristan Harris and Aza Raskin explore the power that technology has over our lives and discuss challenges and solutions with a wide range of thought leaders. Episode durations vary.


  • Me, Myself and AI
    Why do only 10% of companies succeed with AI? This biweekly podcast by MIT Sloan Management Review and Boston Consulting Group attempts to answer that question, featuring leaders who’ve achieved big AI wins at their companies. Each episode runs under 30 minutes.


Thought leaders on Twitter:

  • Yann LeCun – Chief AI scientist at Facebook and professor at NYU. 
  • Kai-Fu Lee – CEO of Chinese tech VC firm Sinovation Ventures, former CEO of Google China and writer. 
  • Andrew Ng – Co-founder of Coursera, Stanford adjunct faculty and former head of Baidu AI Group and Google Brain. 
  • Fei-Fei Li – Leading AI scientist, Stanford professor and co-director of the Stanford Institute for Human-Centered AI.


Share your feedback with us at insights@gga.nyc.



Current Events Brief: Preparing for SCOTUS Decision on Affirmative Action

Insights


Global Gateway Advisors

June 23, 2023

The Supreme Court is expected to rule soon on two affirmative action cases specific to race-based admissions programs in higher education. Though employer initiatives are subject to different legal parameters than academic institutions, there may be implications for civil rights laws that impact workplace diversity programs.

The big picture: The question in front of the justices is whether colleges and universities can consider race in their admissions decisions.

But while public and private academic institutions are the focus of these cases, a decision prohibiting race as a criterion in admissions could encourage legal challenges to corporate diversity, equity and inclusion initiatives.

Why it matters: For strategic communicators advising executives, DEI leaders and employee resource groups, there are key questions to consider in anticipation of the SCOTUS rulings.

  • How does this impact our work?

  • What changes might be required for what we can say and do?

  • What do our internal and external stakeholders expect of our organization?

  • What should we be doing now?

In this newsletter, we address these questions and provide resources to help you prepare.


1. Legal Q&A: Navigating the Court’s ruling

On Thursday, we spoke with Richard A. Bierschbach, Dean and John W. Reed Professor of Law at Wayne State University Law School in Detroit.

Bierschbach, a former law clerk to Supreme Court Justice Sandra Day O’Connor, gave us his take ahead of the Court’s decision. His insights are also informed by his experience in the New York offices of three global law firms and tenure as a lawyer in the U.S. Department of Justice.

What are the legal issues here – and what’s at stake for corporations?

  • The basic question in these cases is the legality of affirmative action in college admissions under federal law.

  • Specifically, can Harvard and the University of North Carolina (UNC) consider race as a factor in their admissions decisions under the equal protection and due process clauses of the U.S. Constitution, and Title VI of the Civil Rights Act of 1964?

  • The immediate legal effect of the Court’s decision will apply only to educational institutions.

  • But, depending on what the decision says, the implications could be much broader for all sorts of companies and organizations.

How is SCOTUS expected to rule?

  • Based on the oral arguments and where the Court is on these issues, most observers think the Court is going to find that what Harvard and UNC are doing is unlawful. A different outcome would be a big surprise.

  • The real question is, in what way are they going to find it unlawful? There’s a range of ways, and we really won’t know until we read the opinion.

  • For instance, the Court could prohibit the consideration of race outright. Or it could restrict its use even more than the law currently does without totally prohibiting it.

  • So, the effect on universities and on companies down the road is really going to depend on what the Court’s reasoning is and how broadly or narrowly the opinion is written.

  • That said, I think we can expect a ruling that puts affirmative action and DEI programs under a legal microscope.

How could an adverse ruling against Harvard and UNC put private employers in legal jeopardy?

  • While colleges and universities will be the only institutions bound by the Court’s decision, its implications could reverberate well into the legal framework that governs private employers and other institutions.

  • The affirmative action and DEI programs of private companies are governed by a different legal framework under Title VII of the Civil Rights Act and related state and federal anti-discrimination laws.

  • But like Title VI, which is at issue in these cases, those laws also prohibit discrimination on the basis of race. And courts often look to the one in interpreting the other.

  • So the thinking is if the Court does strike down Harvard’s and UNC’s programs, plaintiffs will use the decision to then bring similar challenges under the legal framework that does apply to private companies and other organizations.

How would this change what companies can say or do regarding their DEI commitments and initiatives?

  • Companies don’t want to overreact or abandon their values.

  • Diversity itself is not unconstitutional, and these decisions won’t change that. But institutions should consider positioning themselves in a more nuanced way.

  • Programs that lean heavily on race and other protected categories are going to raise red flags. Numerical quotas and specific consideration of race as a programmatic criterion are going to get attention.

  • Employers may want to emphasize things like life experience and the value of different perspectives. Anything that smacks of identity-based categories will be looked at very skeptically.

How can companies safely navigate this challenge to their DEI initiatives?

  • Organizations are going to want to be more creative in how they think about and implement DEI concepts.

  • Think about pathway programs, like partnering with HBCUs, or pipelines for workers who are the first in their family to earn a degree. Provide programs for people who come from certain socioeconomic backgrounds or geographic areas that have been historically underprivileged.

  • Think about qualifying criteria in those terms, rather than what we would call immutable characteristics.

  • Companies should also think beyond hiring and continue to focus on leadership development, retention, and the needs of their workforce, perhaps applying similar criteria to programs within the organization.

  • Companies should already be reviewing their DEI programs – taking a hard look at their substance, how they talk about them, how they think about and conceptualize them, how their managers talk about and implement them, and how they train their employees around all of those issues – to bring them into conformity with this new approach.

How soon could employers be impacted by the Court’s decision and where can they turn for help?

  • The legal impact will be immediate on educational institutions. And we can expect that plaintiffs targeting companies, assuming the Court invalidates race-based admissions, will move pretty quickly.

  • They’re likely framing up complaints right now, making use of the anticipated decision to structure similar challenges.

  • So employers could face litigation risk in fairly short order, depending on what their programs look like.

  • It’s going to take some time to see what lower courts do with those claims. It will take a while for those to work their way through the system. But it’s coming, and companies should already be meeting with their legal counsel and thinking about that sort of risk management.

  • There are other immediate strategic considerations as well, especially regarding the cultural effects of any changes and how companies demonstrate and communicate their values to employees, suppliers, consumers and investors.

  • Companies may want to look to states like Michigan, California and Washington – places that, under state law, already have tougher restrictions on public institutions regarding race-based admissions, hiring, and contracting – to get a sense of how they are coping and what approaches they have taken.

  • And I can’t overstate the importance of companies staying true to their culture. Remember that the legal issue before the Court here is one of race-based decision making. The ideas of diversity and sound business practices – hopefully those will never be unlawful, regardless of the makeup of the Court.

This interview is for informational purposes only and should not be construed as legal advice. Responses were edited for space.



Share your feedback with us at insights@gga.nyc.



How companies are (or are not) communicating their AI strategies

Insights


Global Gateway Advisors

June 20, 2023

Six months after the launch of ChatGPT, AI has quickly become one of the most talked-about topics among corporate executives. Corporate earnings calls showed a 77% year-over-year uptick in mentions of AI in Q4 2022, with the pace only intensifying since then.

Several key messaging themes are emerging among top tech CEOs, investors and consulting firms.

  • Advocacy tied to business momentum: Differences in tone reflect the AI race hierarchy – with Microsoft and Nvidia showing full-steam-ahead enthusiasm, while Alphabet and Apple lean into the concerns and cautions with a more measured, thoughtful stance.
  • Engagement is widespread: With Apple as a notable exception, major tech companies are actively engaging with media and communicating to shareholders about their AI strategies. 
  • Technology described as revolutionary: Many statements are optimistic and forward-looking; keywords include “profound,” “invest substantially,” “promises,” “rebirth,” “incredible” and “qualitative breakthroughs.”
  • Excitement tempered by cautious tone: Future-forward messages are often followed by caution and pragmatism, manifesting in phrases like “take our time,” “responsible AI,” and a “number of issues that need to be sorted.”


1. AI messaging balances opportunity and pragmatism

Here’s a look at how top tech companies and their CEOs have contributed to the AI dialogue. 

ALPHABET

Shows cautious optimism rather than exuberance, wades into regulatory discussions

  • Alphabet CEO Sundar Pichai is front and center on AI media engagements, appearing in primetime interviews and meeting with the UK prime minister to talk about regulation.  
  • While Pichai is direct about AI’s potential… “I’ve always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we’ve done in the past,” Pichai said on CBS News’ “60 Minutes” this month.
  • …He is clear that Alphabet is not rushing AI: “We’ve been cautious. There are areas where we’ve chosen not to be the first to put a product out. We’ve set up good structures around responsible AI. You will continue to see us take our time,” Pichai told Bloomberg.
  • Alphabet recently challenged OpenAI’s calls to form a central AI governing body by voicing its preference for a “multi-layered, multi-stakeholder approach to AI governance.” The company cited the overarching impact AI will have on many regulated industries, such as finance and healthcare.

AMAZON

Vows to take its deep machine learning experience to new heights

  • Amazon CEO Andy Jassy leans on the online retailer’s longstanding history of using AI to position the company as a leader in this next wave of technology, noting that generative AI “promises to significantly accelerate machine learning adoption.”
  • Jassy said the company is “investing heavily” and “will continue to invest substantially” in generative AI, per a recent letter to shareholders. He noted Amazon has been working on its own large language models (LLMs) for “a while now,” and said he believes it will “transform and improve virtually every customer experience.” 

APPLE

Takes the “show, don’t tell” approach + lets its products do the messaging

  • Apple, as a product company, takes a practical approach to AI and lets its product features speak for themselves. Apple doesn’t often use the term “AI” – instead referring to “machine learning,” or simply talking about the features the technology enables.
  • CEO Tim Cook isn’t vocal about AI, so when he said AI is “huge,” but that there are “a number of issues that need to be sorted,” it was enough to make headlines.

IBM

Talks about job displacements by AI + the need for upskilling

  • IBM CEO Arvind Krishna announced a hiring freeze in the back office in May, saying “I could easily see 30% of that getting replaced by AI and automation over a five-year period.”
  • Recently, he tried to soothe nerves by pointing to declining working-age populations.

Having employees do routine tasks that A.I. could do is “not an option,” he said. “We are going to need technology to do some of the mundane work so that people can do higher-value work.” 

META

Subtly pivots away from Metaverse, envisions AI that can facilitate human interactions

  • Meta acknowledges playing catch-up on AI after a heavy focus on the metaverse.
  • CEO Mark Zuckerberg mentioned AI 27 times in Meta’s Q1 earnings call in April. The company plans to commercialize its proprietary generative AI by December. Zuckerberg said in a company meeting that the “incredible breakthroughs” on generative AI will enable Meta to “build it into every single one of our projects.”
  • In a recent podcast interview, Zuckerberg also said the AI assistants he plans to launch will take on roles including “a mentor, a life coach, a cheerleader that can help pick you up through all of life’s challenges,” adding that AI can help people “express themselves better to people in situations where they would otherwise have a hard time doing that.” 

MICROSOFT 

Champions how AI will positively impact society, with a nod to risks

  • Microsoft CEO Satya Nadella speaks extensively on the positive societal changes that AI will bring about, from democratizing access to new skills to turbocharging productivity growth. While acknowledging risks such as biases, Nadella contends that the benefits of AI – Microsoft’s AI, at least – will outweigh the potential drawbacks. 
  • In a recent WIRED story, Nadella said: “I am haunted by the fact that the industrial revolution didn’t touch the parts of the world where I grew up until much later … So I’m not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That’s a fantastic world to live in.”
  • On risks… “It’s an abdication of our own responsibility to say this is going to just go out of control. We can deal with powerful technology,” Nadella said.
  • On competition… “At the end of the day, [Google is] the 800-pound gorilla in [search],” Nadella said in an interview. “I hope that, with our innovation, they will definitely want to come out and show that they can dance. And I want people to know that we made them dance.”

NVIDIA

Touts AI as a social equalizer that will usher in the “rebirth of the computer industry” 

  • Nvidia co-founder and CEO Jensen Huang boldly claims that AI has closed the “digital divide” because “everyone can be a programmer” – all they need to do is speak to the computer. 
  • “A.I. has reinvented computing from the ground up,” Huang said during a recent commencement speech in his birthplace of Taiwan. “In every way, this is a rebirth of the computer industry.” 
  • Nvidia, as a chip maker, is one degree removed from the AI tools that it powers. Thus, the company stays above the fray when it comes to regulation. Huang has made no notable mentions of AI’s risks. 

SALESFORCE 

Stresses the need for “trust layer,” carves out niche as custodian of safe AI deployment

  • Salesforce CEO Marc Benioff has described the new age of generative AI as more revolutionary than any technology innovation in this lifetime, or any lifetime, but he also openly discusses the risks and the importance of creating tools that keep customers safe.
  • The company isn’t looking to win on building the models themselves; instead, Salesforce’s role is as custodian and guide to make AI useful and safe. To that end, it launched an enterprise product called AI Cloud that can be incorporated into business operations.
  • “Trusted and responsible AI” is the most important goal for Salesforce, Benioff said. Regarding risk considerations, he noted “we understand the burden there must be on us as we’re trying to take this forward.”


2. How major investors have weighed in on AI

  • Bill Ackman, founder of hedge fund Pershing Square Capital, predicted that Alphabet would be a “secular winner” in AI and pumped $1 billion into its shares.
  • Stanley Druckenmiller snapped up $430 million of Microsoft and Nvidia stocks, saying that “unlike crypto, I think AI is real.”
  • Altimeter Capital’s Brad Gerstner says “AI tech tools will be bigger than the Internet.” He describes himself as a pragmatic optimist. “Yes we need to prepare more for white collar job displacement,” he wrote on Twitter, adding that he is also optimistic and a believer “that the net benefits to humanity far outweigh the potential harm. AI will unleash a massive wave of human productivity.”
  • Cathie Wood, the founder of Ark Invest and best known for investing in disruptive innovation, unloaded her Nvidia positions after the stock’s astronomical ascent, citing growing competition. She instead scooped up Meta shares, saying the company is “able to deliver better” using less computing power and more data, adding that she likes “the fact that Mark Zuckerberg is now prioritizing artificial intelligence as opposed to the metaverse.”
  • Roundhill Investments launched the first pure play ETF in generative AI. Chief investment officer Tim Maloney said: “We haven’t seen anything really like it in recent history, or history generally.” 
  • Last fall, VC giant Sequoia Capital took a novel approach and posted a blog on their website to invite AI founders to email their pitches directly. Hundreds of responses ensued, which they fielded through weekly Zoom calls. “Generative AI is well on the way to becoming not just faster and cheaper, but better in some cases than what humans create by hand,” Sequoia wrote.
  • At this year’s annual Berkshire Hathaway meeting, Warren Buffett expressed his doubts about AI. “There won’t be anything in AI that replaces Ajit [Jain, Berkshire vice chairman of insurance],” Buffett said. “And, when something can do all kinds of things, I get a little bit worried,” he added. “Because I know we won’t be able to uninvent it.”


3. Other AI trends + hot topics

  • Generative AI may be able to fully automate half of all work activity in just 22 years, including decision-making, management and interfacing with stakeholders, according to a new McKinsey report. Per McKinsey’s research, generative AI could add “$2.6 trillion to $4.4 trillion annually” to the global economy – roughly comparable to the UK’s entire GDP.
  • The E.U. took a major step toward passing what would be the first laws to regulate AI, which would severely curtail uses of facial recognition software and require AI system makers to disclose more about the data used to create their programs.
  • Advanced Micro Devices (AMD) revealed a new AI chip to challenge Nvidia’s dominance, but did not share details on who plans to buy it or how it will bolster company sales. Amazon reportedly may be a potential customer.
  • Singer Paul McCartney said AI helped create one last Beatles song using a demo with John Lennon’s voice, amid ethical questions around authorship and ownership when it comes to creating music with voices of established artists.


Share your feedback with us at insights@gga.nyc.



MEDIA NEWS + MOVES

Insights


Global Gateway Advisors

May 26, 2023


1. Section 230 remains untouched

The big picture: The Supreme Court left intact Section 230 of the Communications Decency Act – a liability shield for internet companies over third-party content – and punted the issue back to Congress.

What’s happening: Referred to as “the internet’s most important law,” Section 230 has faced scrutiny in connection to tech companies’ content moderation decisions.

  • Section 230 was written 25 years ago in the internet’s infancy. Opponents argue that it is outdated. Defenders say it has allowed the internet to thrive, and believe that the way it is written enables innovation and protects free speech.

  • The cases, Gonzalez v. Google and Twitter v. Taamneh, challenged whether tech companies should be held liable for terrorism-related material posted on their platforms. 

  • Last week, the court dismissed the case and declined to take up questions on it, leaving intact a lower court ruling in Google’s favor.

Why it matters: Section 230 has shaped the internet as we know it. Greater reform may lead to deeper regulation of posted content and interaction with other internet users.

What comes next: The court is still deciding whether to hear cases challenging social media content moderation laws in Texas and Florida.


2. What’s trending?

  • Time-strapped journalists are increasingly looking for data and expert sources to inform their reporting. A recent Cision report found that 68% want to see original research and trend data in pitches. (Axios)

  • The number of news and information websites generated by AI – and operating with little to no human oversight – more than doubled in just two weeks. In a special report, NewsGuard identified 125 websites that are entirely or mostly generated by AI tools. (The New York Times)

  • Elon Musk has been dubbed the new “king of conservative media,” and is positioning Twitter as the center of gravity for Republicans ahead of the 2024 election. High-profile, right-wing personalities, including fired Fox News host Tucker Carlson, have said they will bring content exclusively to the platform. Florida Gov. Ron DeSantis announced his presidential bid this week in a Twitter Spaces chat, but the event was derailed by technical glitches. (CNN)

  • Dotdash Meredith, one of the largest internet publishers in the country, is debuting a new ad tool, D/Cipher, that doesn’t rely on internet tracking cookies or first-party data. Advertisers can target users on any of Dotdash Meredith’s digital platforms based on the intent and interests those users are likely to engage with. (Axios)


3. Journalist moves

Business [Reporting on business from newspapers, magazines and online sources]

  • Naomi Shavin – senior podcast producer, Bloomberg News; previous: producer, Axios (Cision)
  • Kalley Huang – reporter, The Information; previous: reporting fellow, New York Times (Cision)
  • John Schafer – markets reporter, Yahoo Finance; promotion (Cision)
  • Elisabeth Buchwald – economy explainer reporter, CNN; previous: personal finance reporter, USA Today (Cision)
  • Jonathan Tully – digital content editor, Human Resource Executive; previous: editor, Mashable (Cision)
  • Megan Leonhardt – senior economics writer, Barron’s; previous: writer, Fortune (Twitter)
  • Chelsea Emery – executive editor, Staffing Industry Analysts; promotion (Cision)

Technology [Covering startups, advanced technologies and the intersection of tech/business]

  • Michelle Ma – reporter, clean tech, Bloomberg; previous: freelance (Talking Biz News)
  • Brian Kahn – editor, climate tech, Bloomberg; previous: climate editor, Protocol (Talking Biz News)

More Moves of Interest [Additional updates from notable journalists + editors]

  • Sam Jacobs – editor-in-chief, TIME; promotion (Cision)
  • Adam Levy – executive producer + news editor, BBC News; previous: showrunner, CNN+ (Cision)
  • Mary Bruce – White House correspondent, ABC News; promotion (Muck Rack)
  • Lauren N. Williams – deputy editor, race + equity, The Guardian; previous: senior editor, The Atlantic (Talking Biz News)
  • David Gelles – managing correspondent, Climate Forward Newsletter, The New York Times; promotion (The New York Times)


Share your feedback with us at insights@gga.nyc.



How to assess AI risks + implement workplace best practices

Insights


Global Gateway Advisors

May 11, 2023

The early fanfare over generative AI has largely given way to pragmatic concerns over risks and the need for standards and best practices at the workplace.

In this issue, we explore the top three risks that every organization needs to consider, and ask legal experts to weigh in.

  • Several hiccups by early adopters may have prompted companies to rethink their strategy – in a recent KPMG survey, 60% of executives at major U.S. companies said they are still a year or two away from AI adoption.
  • While cost and lack of clear business case were cited as the two highest barriers, cybersecurity and data privacy ranked as their top concerns.  

Check out our website for the rest of our Exploring AI in Communications series on defining your approach to AI and an overview of the AI landscape for communications.


1. Key risk areas to consider

Copyright risks: Pending lawsuits and a growing number of media companies demanding payment from AI firms have put a spotlight on the value of publicly available data – some of which is copyrighted.

  • It’s impossible to be aware of all the copyrighted materials on the internet, and a user cannot know how similar an AI-generated output is to an original work.
  • Unwittingly publishing such material not only exposes the user and the company to infringement claims, but could also damage a hard-earned reputation. 
  • For PR professionals: What is your action plan for potential misuse that could impact your company and/or clients?

Data privacy risks: OpenAI has responded to data privacy concerns by rolling out an incognito mode in ChatGPT that allows users to turn off chat history.

  • Data will still be kept for 30 days to track abusive behavior, however, and the onus is on the user to disable this feature.
  • Also, many other generative AI systems use third-party contractors to review input/output data for safety, which means that the sharing of confidential data may result in a breach of confidentiality.
  • Companies can license the use of an AI model, so they can monitor what employees type in as prompts and protect the information shared. For more peace of mind, Microsoft is reportedly testing a private GPT alternative.

Misinformation and bias risks: This is arguably the most insidious risk when it comes to generative AI. Whether the fabrication stems from AI hallucinations or intentional human acts, AI makes spreading misinformation that much easier to pull off – and much harder to detect.

  • Deepfakes can be used to depict a company executive in a compromising situation, for instance, or to forge images to file fraudulent insurance claims.
  • Meanwhile, using AI to evaluate resumes of candidates may result in discriminatory hiring practices if biases are left unchecked, potentially exposing the company to litigation and penalties.  

Deepfake detection technology is expected to lag behind because of its prohibitive costs and lack of legal or financial incentives. For now, strengthening our media literacy may be our best defense.


2. Ask the legal experts: Assessing + mitigating AI risk

We connected with Davis+Gilbert to discuss how to mitigate risks around the use of generative AI. Michael C. Lasky is a partner and Chair of the firm’s Public Relations law practice, and Samantha Rothaus is a partner in the firm’s Advertising and Marketing law practice group.

Below is an excerpt of the conversation. Please click here for the full interview.

Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?

A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off. Using pre-cleared material like licensed images is a good practice to reduce risk, as are tools like plagiarism detectors or reverse-image searching. For lower stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.

Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?

A: There will need to be a greater emphasis put on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, this will also help viewers assess what degree of credibility to give to the material.  

Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable, the AI developer or the individual?

A: This is an open question. At the moment, we are not seeing users being targeted in litigation – only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may have contributory liability for producing infringing content. We suspect that if we do see this kind of litigation arise, it will likely be against large companies as the “user” rather than individual people.


3. Other AI trends + hot topics

  • At its annual Think conference, IBM announced WatsonX, a platform “for companies looking to introduce AI into their business model.”
  • Twitter and Reddit will start charging for access to their data. Elon Musk reportedly cut off OpenAI’s access to Twitter’s data after deciding that the $2 million per year licensing fee he was charging wasn’t enough.
  • In recent earnings calls, Alphabet, Microsoft, Amazon and Meta all emphasized their intent to make hefty investments in AI. In contrast, Apple’s tone is more measured. Separately, PricewaterhouseCoopers plans to invest $1 billion in generative AI to automate aspects of its tax, audit and consulting services.
  • Google merged its two main AI research units, DeepMind and Brain, to gear up for an intense AI battle.
  • ChatGPT is back in Italy after OpenAI met most of the government’s demands, including creating the incognito mode and providing more details on how the tool processes information.
  • IBM’s CEO said hiring for back-office functions such as HR will be paused, and that he can “easily see” 30% of non-customer-facing roles replaced by AI over the next five years.
  • AI developers would be required to disclose copyrighted material used in training their tools, according to a new draft of EU legislation. Separately, the G7 nations called for the creation of global standards for assessing AI risks to promote prudent development.
  • Vice President Kamala Harris and other White House leaders told the CEOs of Alphabet, Microsoft, OpenAI and Anthropic that they have a “moral” obligation to keep their products safe, in the White House’s first meeting with AI leaders.


Share your feedback with us at insights@gga.nyc.



Managing AI Risks: Q&A with Legal Experts from Davis+Gilbert

Insights


Global Gateway Advisors

May 10, 2023

Global Gateway Advisors sat down with Michael C. Lasky and Samantha Rothaus from Davis+Gilbert to discuss how best to manage risks around the use of generative AI.

Michael is a partner and Chair of the firm’s Public Relations law practice, and Samantha is a partner in the firm’s Advertising and Marketing law practice group.

They counseled the PR Council in drafting its new guidelines on generative AI, and created their own Marketing Law Clearance Checklist for content creators.

___

Q: How do we check for plagiarism or copyright infringement, knowing that AI can draw from multiple sources to generate each sentence or image?

A: Companies should have a written internal clearance process to vet materials, with a designated person for final sign-off. Using pre-cleared material like licensed images is a good practice to reduce risk, as are tools like plagiarism detectors or reverse-image searching. For lower stakes tasks, taking an AI output and making unique, substantial changes to it will likely reduce copyright risks.

Q: How do we avoid recycling misinformation and biases that may be embedded in AI outputs?

A: There will need to be a greater emphasis put on training. For text, the process will require critical thinking, fact-checking, and researching multiple trusted sources. For images or voices, look for small glitches, distortions or other signs of inauthenticity. If we make disclosure a norm when using generative AI in content creation, this will also help viewers assess what degree of credibility to give to the material.

Q: If someone inadvertently uses an AI-generated image that infringes on copyrights, who is liable, the AI developer or the individual?

A: This is an open question. At the moment, we are not seeing users being targeted in litigation – only the platforms themselves (specifically, Stability AI and Midjourney). However, there is an argument that users may have contributory liability for producing infringing content. We suspect that if we do see this kind of litigation arise, it will likely be against large companies as the “user” rather than individual people.

Q: How far away are AI regulations? Do we need a set of international rules?

A: We don’t see this happening anytime soon. Just look at privacy – this is an area with a huge patchwork of different kinds of rules across different jurisdictions, and in the U.S., pushes to nationalize these laws in federal legislation have been unsuccessful for several years. We think the same will happen for AI. A set of globally adopted norms will be needed, but it remains to be seen whether full-on legislation is necessary or is even realistically going to happen. And these norms may not emerge for several years, as it will take time for people to understand how these services are best used.

Q: Are you using generative AI for your work?

A: While some AI tools have emerged for the practice of law, many of them are distinct from the generative AI technologies being introduced in the communications and marketing space. We don’t think any of our clients would want their substantive legal advice to be AI generated. As for us, we have not personally begun using any AI tools in our daily work, though we’ve played with them to better understand how our clients are using them.

Share your feedback with us at insights@gga.nyc.



Defining your approach in the evolving AI landscape

Insights


Global Gateway Advisors

April 24, 2023

This is part two in our series about generative AI’s impact on communications professionals. If you missed part one, you can read it here.

The next installment will include a deep dive on organizational risk surrounding AI in the workplace, centered on data privacy and ethics.

1. From theoretical to practical: How can I use AI in my work?

Generative AI tools are undeniably impressive, but do they really turbocharge productivity? Here are a few accessible use cases for communications professionals.

  • Summarize + condense: AI tools can shave hours off research by synthesizing lengthy texts and gleaning the highlights; some can even trim an hour-long Zoom meeting into a two-minute video clip.

  • Brainstorm: Ideation using AI can help you conceive ideas and see blind spots, in tasks ranging from media strategy development to crisis management.

  • Produce rough drafts: Generative AI can be a helpful starting point for drafting press releases, social media content, tailored pitches to journalists, bios based on LinkedIn profiles, and more.

  • Create visuals: Generate customized digital artwork for presentations, sidestepping the need to license stock imagery.

2. Initiate strategic planning to avoid AI pitfalls

Work is underway in many organizations to assess and map the near-term implications of generative AI for their business. For reputation management and strategic communications leaders, consider the following checklist as a starting point for your planning:

  1. Become familiar with the AI basics: Learn to use the tools on the market – from free ones like ChatGPT to enterprise products that provide more customization and data transparency.

    Read about the long-term arc the technology is likely to follow, so your vision won’t be swayed by the boom or bust of any particular system.

    Keep your antenna up on what competitors, media and clients are doing.

  2. Form a task force of employees from various departments to identify use cases and select the appropriate tools by evaluating their accuracy, ease of use, customization options and pricing.

    For larger organizations, a three-tiered plan can help. Whatever your strategy, don’t give in to the hype and over-pivot. Stay committed to the fundamentals and focus on your core competence as reputation managers.

    Is it important to bring in key external stakeholders and AI experts for broader input? Or is speedy implementation the higher priority?

  3. Weigh the risks vs. rewards: Be cognizant of generative AI’s early limitations, such as its inability to properly cite sources, and its tendency to hallucinate and perpetuate existing biases.

    A risk-demand matrix can clarify where AI adds the most value (see the illustrative sketch after this checklist). Focus on tasks that are repetitive and high-volume, and revisit the matrix often.

    Huddle with your legal team to hash out potential liabilities. While humans also err, the difference is that AI tools cannot explain their reasoning or be held accountable.

    Let the tools augment, and not replace, your work processes. Always comb through the output for accuracy, tone, and style.

  4. Establish clear standards for using AI-generated text in communications materials. Define the scope of usage and the review process, as well as any necessary disclaimers.

    Self-policing is key until legislators catch up, so draft a company-wide security policy with both your risk tolerance and your clients’ in mind. Encourage staff to experiment with AI tools within the bounds of these rules.

    Reiterate the risks of sharing confidential or proprietary information with AI tools, particularly as they are integrated into email and other messaging platforms.

    How will you ensure staff compliance, and what type of governance will you put in place?
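
To make the risk-demand matrix in step 3 concrete, here is a minimal illustrative sketch; the task names and 1-5 scores are invented for this example, not recommendations. Low-risk, high-demand tasks surface first as the strongest candidates for AI assistance.

    # Illustrative only: invented tasks scored on risk and demand (1 = low, 5 = high).
    tasks = {
        "Summarize press coverage": {"risk": 1, "demand": 5},
        "Draft social media copy": {"risk": 2, "demand": 4},
        "Screen job candidates": {"risk": 5, "demand": 3},
        "Draft crisis statements": {"risk": 4, "demand": 2},
    }

    # Rank tasks: lowest risk first, then highest demand.
    ranked = sorted(tasks.items(), key=lambda kv: (kv[1]["risk"], -kv[1]["demand"]))

    for name, scores in ranked:
        print(f"{name}: risk={scores['risk']}, demand={scores['demand']}")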

3. Other AI news on the radar

  • Google’s Bard AI chatbot can now generate code, debug existing code, and write functions for Google Sheets. However, new reporting suggests that Google sidelined concerns from its AI ethics team in order to fast-track Bard’s launch.

  • The Biden administration has ramped up efforts to regulate AI tools by launching a public consultation on the technology. Meanwhile, Senate Majority Leader Chuck Schumer is spearheading the congressional effort to craft AI legislation.

  • European legislators plan to add new provisions to a pending bill aimed at “steering the development of very powerful artificial intelligence in a direction that is human centric, safe and trustworthy.” China issued a mandate that requires a security review of generative AI tools before they’re allowed to go live.

  • Elon Musk plans to launch his own AI company to rival OpenAI. This came just two weeks after he cosigned an open letter urging a pause on all AI development. Amazon also joined the AI race with its own system.

Share your feedback with us at insights@gga.nyc.