How to build an ethics-led AI policy with substance
Oct 28, 2025

Creative agencies are rushing to put something in place about how they use AI. But generic statements about "responsible use" or "AI as a tool" don't answer the fundamental client question: how do you protect and nurture the value we're paying for?
Instead of a hurried tick-box exercise, what if your AI policy became the answer to that question, and your competitive edge? That's what I worked with Helen Dibble to create for her agency, Incredibble.
"My biggest realisation was that NOT doing this work is the real risk. Risk to client trust when we can't explain how we're protecting their IP. Risk to team confidence when policy gets handed down without transparency. Risk to our work itself if we're using AI without clear boundaries."
- Helen Dibble, founder of Incredibble
The work
The aim was to develop an ethics-led AI policy that goes beyond virtue signalling and vague statements. Something with substance that could be used to lead conversations — internally with the team, externally with clients and across the industry.
Helen wanted to think this through carefully. To clarify what makes her agency valuable to clients, where AI undermines that value and where it might support it. To understand (rather than assume) how her team felt about AI and the ways they were currently using it. To do the deep reflection work necessary to build a framework that would give everyone — team and clients — real confidence.
The ethical dimension was about integrity in practice: protecting people and craft, while being honest about what the technology can and can't do.
The process
Phase 1: laying the groundwork
I ran an AI ethics workshop for Helen and her team to lay the groundwork from which to build an ethics-led AI policy.
The workshop did three things:
- Introduced ethical frameworks as tools to think through AI use systematically as a team rather than reactively as individuals
- Created space to explore grey areas, practise holding conflicting truths and find a way forward together
- Moved the team towards alignment, developing shared language and nuanced thinking that became the foundation for the work to follow

The aim was to help Helen's team grapple with AI as both powerful and potentially problematic technology — acknowledging benefits while recognising legitimate concerns about impact on craft, autonomy and value.
This gave us the substance to work with. Not a hype-led interpretation of what AI use should look like, but a grounded understanding of where the team was and a shared foundation to build from.
Phase 2: defining the philosophy
With that foundation in place, I worked directly with Helen to translate Incredibble's values into a clear AI position. This was facilitated strategic thinking — challenging assumptions, testing boundaries and articulating what makes the agency's work irreplaceable.
The groundwork from phase 1 made this possible. Helen could build on the shared language and understanding that had emerged, connecting it to her vision for Incredibble and the integrity that defines how the agency operates.
The resulting philosophy is a clear statement on where AI fits into Incredibble's work. No borrowed language or safe generalisations. Underpinned by an understanding of the difference between AI use that supports human expertise and creativity, and AI use that undermines it.
Phase 3: building the use policy
With the philosophy defined, we moved to the practical work: mapping where AI would and wouldn't be used across the agency's workflow.
We worked through the project lifecycle — briefing, research, strategy, execution, review — stress-testing the philosophy against reality. For each stage, we defined:
- What the agency uses AI for
- What they don't use AI for
- How they use it (with what permissions and human checkpoints)
- Why these boundaries are important

What emerged was a narrow and specific set of use cases for AI. This means clients have something concrete to evaluate rather than vague reassurances about responsible use. Every boundary is defensible — Helen can articulate exactly why it exists. The policy itself demonstrates the kind of deep strategic thinking the agency offers clients.
What this makes possible
Building client confidence: Helen can articulate exactly where AI is used, why those boundaries exist and how they protect the human expertise and creativity clients are paying for.
Differentiating in a busy market: Incredibble now has an AI position with substance. It demonstrates they've thought deeply about what makes them valuable — and that thinking is visible to clients.
Strengthening team buy-in: transparency throughout the process made Helen's team feel valued and part of the decision-making, rather than being handed a set of rules to obey.
Shaping industry conversations: Incredibble is now ahead of the curve, because Helen's done the work to understand where AI legitimately fits into her business and can speak about it with authority.
Why this work is important
Without this work, agencies can't confidently explain their AI choices — whether they're using it extensively or not at all. That lack of clarity undermines client trust and leaves teams making decisions without proper safeguards.
The key to getting this right is having space for the hard reflective work: defining a philosophy, addressing your team's concerns, and building the clarity that determines whether AI makes you more or less valuable.
This work requires facilitation — someone to ask the questions you need to answer, challenge your assumptions and create space for deep thinking. It needs a process tailored to your reality, starting with how your team works and where tensions exist.
I can help you develop an ethics-led AI position and use policy with substance. But the value goes beyond the document itself: concrete boundaries your team and clients can trust, and confidence at every level.
I'm taking on new clients from January 2026. If you'd like to explore how this might work for your agency or organisation, let's talk.