"AI without governance is not an asset. It is a risk."

A conversation between Awed Studio and Esther van Egerschot, founder of Board Sovereignty, about AI, leadership responsibility, and the question no one in the boardroom asks out loud.

Esther van Egerschot advises boards and C-suites across Europe on strategic transformation and governance. With Board Sovereignty, she has developed an approach that empowers leadership teams to shape technological change rather than merely managing it.

We spoke with her about where European marketing organizations actually stand regarding AI, and why most teams are focusing their optimization efforts in the wrong places.

Awed: In our work with brands, we see AI implementations being announced loudly and then quietly discontinued. You observe this from a boardroom perspective. What do you see?

Esther: The same thing, just from a different angle. Boards are told that their organizations are using AI. What they rarely get is a clear answer on what difference it actually makes. That is the real problem. Not a lack of technology, but the missing link between tool usage and business outcomes.

Furthermore, every project should budget as much for AI governance as it does for technical implementation. If you don't, you create a significant risk. Currently, however, we see little or no budget allocated to governance.

Awed: What does that look like in practice?

Esther: Teams are using generative tools for content, automation for campaigns, predictive models for targeting, and AI for recruiting and candidate selection. The activity is real. But when we ask, "What has this actually done for the company?", the answer is usually a mix of anecdotes and gut feeling.

Activity is being mistaken for transformation. This is dangerous because it stops learning. When teams believe they are already far ahead, they stop honestly auditing what works. On top of that, there is a lot of "Shadow AI": employees using AI tools on private devices or without corporate licenses, which is extremely risky.

Awed: You speak of a "governance gap" as the actual risk. Why isn't this getting more attention?

Esther: Because governance doesn't feel good. Tools feel good. New features, new possibilities. Governance feels like a brake.

But that is a misinterpretation. Governance is the very prerequisite for AI to become scalable. Organizations deploying AI without clear data policies, brand guidelines, and model validation are building on an unstable foundation. And in Europe, where regulatory requirements are increasing, this is no longer a theoretical risk.

Awed: We work with many marketing teams that think creatively but lack structured workflows. They are under pressure to deliver quickly. Governance feels like an obstacle there.

Esther: I understand that pressure. But speed without structure creates chaos that becomes expensive to clean up later. The first question in AI governance is: How does this AI create value, and what problem does it solve? Answering that question up front is what anchors the Return on Investment (ROI) case that every board wants to see.

And what I always tell leadership teams: Governance doesn't just protect the organization; it protects the people working with AI. If someone uses a tool without guardrails, they carry a risk they cannot fully oversee.

Awed: You distinguish between two types of organizations: efficiency-driven and advantage-driven. Where are most teams stuck currently?

Esther: In efficiency, which is understandable. The gains are visible, quickly achievable, and easy to report. Costs down, speed up. The problem is: everyone else is doing exactly the same thing. In two years, efficiency will be the standard. Organizations that haven't built their own proprietary capabilities by then will have no advantage. They will have only caught up.

Awed: What do you mean specifically by "proprietary capabilities"?

Esther: Things that are not easily copied, like proprietary data that is well-structured and actually usable. Or internal expertise that doesn't depend entirely on external agencies. Workflows tailored to your own brand and customers. You don’t just build that in three months. That is exactly why now is the right time to start.

Awed: From our perspective—creative and marketing consultancy—the real bottleneck was never production speed. It was always the quality of strategic thinking beforehand. Do you see it that way too?

Esther: Absolutely. And that is one of the biggest blind spots I encounter. Teams invest in tools and prompts, but not in the fundamental question: What do we really know about our customers? What is our actual message? Why should someone read, hear, or buy this? How do we engage our stakeholders and communicate internally and externally?

AI makes fast content faster. It does not make a weak strategy stronger.

Awed: Trust is a major topic right now. Consumers are questioning AI-generated content. Some brands are already positioning "human-made" as a selling point. How do you see this development?

Esther: As a logical consequence. When everyone uses the same tools, everything ends up looking the same. And when everything looks the same, distinctiveness becomes valuable. For boards, this creates a new strategic question: Where do we want to be visibly human as a brand? This isn't an emotional question; it’s a conscious decision that leadership must make.

Awed: What is your most important advice for a leadership team that honestly wants to know: where do we really stand?

Esther: Ask four questions:

  1. Measurement: Can we measure the effect AI has on our business? If not, you don't have an AI problem; you have a measurement problem.

  2. Responsibility: Who in this organization is responsible for AI decisions? If the answer is unclear, the AI is steering the organization—not the other way around.

  3. Guardrails: Do we have guardrails that protect our brand when AI-generated content is published? If not, every employee with a tool is a potential brand risk.

  4. Competence: Are our employees at all levels sufficiently competent to not only minimize risks but also seize real value-adding opportunities?

That sounds direct. But it is accurate.

Awed: Last question. What comes next? What will shape the next two years?

Esther: Not better tools, but better decisions about where AI is used—and where it isn't. The question used to be, "Are we using AI?" Today it is, "Are we using it safely, demonstrably improving performance, and creating value?" Most organizations cannot yet answer "yes" to that. Those who can will have a lead that is hard to close.

Awed: Thank you very much for the conversation!

"In two years, efficiency will be the standard. Organizations that haven't built their own proprietary capabilities by then will have no advantage."

awedstudio.com
