Do You Generate Content with AI in Your Organization? Since August, Communicating Poorly Is Also a Legal Risk

On August 2, 2025, the European Union's AI Act reached a key milestone: its obligations for general-purpose AI models took effect. While the full regulation will continue to roll out over the next two years, one thing is already clear: foundation models, the engines behind tools like ChatGPT, Copilot, and Gemini, are now under regulatory scrutiny.
For Internal Comms teams, this is a turning point. Until now, poorly designed AI content might have been seen as a reputational risk, a matter of tone, or simple misalignment. From now on, it is also a compliance risk.
Internal Comms in the spotlight
When we use AI to draft a CEO message, summarize an HR policy, or create training materials, we are not only shaping perception—we are entering a regulated space.
The AI Act establishes obligations around:
- Transparency: employees must know when content is generated or assisted by AI.
- Accuracy and bias: organizations are responsible for ensuring outputs are not misleading or discriminatory.
- Governance: processes must exist to supervise, audit, and correct the use of generative models.
This is not just about IT or Legal. It directly involves Internal Comms, because we are often the first to deploy these tools at scale within the organization.
The dual risk: culture and law
Poorly designed AI content can erode culture by spreading confusing, impersonal, or even biased narratives. Now, under the AI Act, that same content can also lead to sanctions and legal consequences.
In other words: what was once seen as “just a comms issue” is now a matter of legal compliance and organizational trust.
What Internal Comms must do now
Three immediate steps for any IC team working with AI:
- Audit usage: identify where AI is already being used in comms processes.
- Set internal guidelines: define rules for transparency, human oversight, and cultural alignment.
- Train leaders and communicators: ensure they know the opportunities—and the boundaries—of responsible AI use.
“AI in Internal Comms is no longer just about efficiency. It’s about trust, purpose, and compliance.”
Closing thought
The AI Act makes something very clear: the way we design, supervise, and communicate with AI matters more than ever.
For Internal Comms, this is an opportunity to step into a strategic role—ensuring not only cultural coherence, but also legal safety.
From August onward, bad communication is not only ineffective. It can be illegal.