Our approach to AI
DIE CREW AG AI Policies | Version 1.1 | April 2026
We use AI. Systematically and throughout the entire value chain: from research, strategy, and ideation to execution, campaign rollout, and project management. With text, image, audio, and video systems. In internal processes as well as in client projects.
For us, this is no longer a special feature, but a professional standard. And that is exactly why we document how we do it: not as a compliance exercise, but because our customers, partners, and team have the right to know the rules we play by. Clear rules build trust. And trust is the foundation for AI to actually work.
Where we use AI
We distinguish between three areas of application, to which the same principles apply:
Our own communication – AI supports us in creating content for our own public presentation: text, images, audio, videos, and analyses.
Client projects – AI is an integral part of our work for clients. Campaign development, content production, strategy, research – in many areas, we rely on AI-supported or AI-generated processes. However, the responsibility for the outcome lies not with the AI, but with the person operating it.
Internal processes and automation – We automate internal workflows with AI and develop or implement AI solutions and automations for clients. When we do so, we adhere to our clients' data security and privacy requirements.
1. Data Security
We use only those AI systems that allow us to comply with the data security standards we set for ourselves and/or those we have contractually agreed upon with our clients.
- No training with customer data. We work exclusively with systems that guarantee that entered or uploaded data is not used to train models. This applies to all systems in production.
- No excessive granting of rights. We do not use platforms that secure rights to processed data or generated results beyond the defined purpose of use.
- Separate test systems. Different conditions may apply in evaluation and test environments. Live customer data and confidential data are never used there.
- Regular review. Our AI steering committee regularly checks whether the systems in use continue to meet the agreed-upon security standards. If provider terms change, we respond.
2. Data protection
We process personal data in accordance with the GDPR and the requirements of the EU AI Act. Customer data and confidential project content are processed exclusively in approved, data protection-compliant systems – preferably with EU hosting and corresponding certifications. We establish the contractual basis for data processing individually with our customers through data protection agreements.
3. Ethics
For us, AI does not replace human decision-making or human responsibility. It supports both.
- People first. All AI-generated results are reviewed, evaluated, and accounted for by humans. Fully automated decisions without human review do not take place.
- No manipulation, no discrimination. We consistently rule out the use of AI if it would serve to deceive, discriminate against, or unfairly influence people.
- Competition law compliance. We produce advertising content. All factual statements in AI-generated content are reviewed by humans for accuracy. We exclude misleading advertising, regardless of whether it was formulated by a human or an AI.
- Critical reflection. We do not blindly accept AI output. We review, question, and correct. This is part of our standard practice, not the exception.
4. IP and copyright
AI-generated content operates within a legal framework that has not yet been fully clarified. We are transparent about this.
- Commercial usability as a selection criterion. In production environments, we use only systems that grant us full commercial rights to the generated results and allow us to transfer these rights to our customers. In individual cases where a system imposes restrictions on these rights, we expressly inform our customers.
- No dual use. We do not use what we generate for one client for other clients. Technically induced similarities – such as when an identical prompt leads to a comparable result for another user – are beyond our control and cannot be ruled out. What we do rule out is the deliberate reuse of the same result.
- No promises we cannot keep. Under current law, AI-generated content is often not protected by copyright. We ensure that the systems we use guarantee commercial usability – we can only provide copyright guarantees to the extent permitted by law.
5. Accountability
Responsible AI use does not arise from rules alone. It requires people who actively champion it. That is why we have a clear internal structure.
The AI Steering Committee is the central body that steers our AI strategy, is responsible for tool approvals, and regularly assesses compliance issues. It ensures that no new system is put into production without first verifying data security, legal compliance, and terms of use.
AI Champions in all relevant areas embed AI expertise into everyday work. They serve as the first points of contact within the team, support the implementation of new tools, and ensure that the guidelines aren’t left to gather dust in a folder – but are actively put into practice.
At the same time, we continuously invest in our team’s AI expertise – through the DECAID Academy, internal training, and a learning culture where experiences with AI are actively shared.
For questions, feedback, or complaints, please contact the following individuals:
Kai Wanner, Head of E-Sales/Partners – k.wanner@diecrew.de
Felix Holzbaur, Project Manager E-Sales/AI & Automation – f.holzbaur@diecrew.de
Michael Frank, Executive Board Member/Partner – m.frank@diecrew.de
6. Transparency
- Labeling of AI-generated content. We label AI-generated content in accordance with legal requirements – specifically Article 50 of the EU AI Act and copyright law – as well as the labeling standards defined by our clients. Content that has undergone documented human editing and for which a person bears editorial responsibility is not subject to a separate labeling requirement under Article 50 of the EU AI Act.
- Standard vs. project-specific. AI tools are part of our standard professional toolkit. For client projects, we coordinate the use of AI separately if systems outside our standard are used or if data protection or contractual requirements necessitate it.
- Tech stack available upon request. We provide our current AI tool stack to clients and partners upon request.
Last updated: April 2026
Next scheduled review: September 2026
Revision history: Version 1.0, March 2026 (initial release) → Version 1.1, April 2026
Please direct any questions or comments regarding the policy to the contact persons listed above.