AI Usage Policy
Last updated: April 14, 2026
This Policy sets out the principles governing all AI systems designed, provided, and operated by Storyteller AI Inc. (hereinafter "the Company") in accordance with the EU AI Act (Regulation (EU) 2024/1689), Japan's Act on the Promotion of Research, Development and Utilization of AI-Related Technologies (Act No. 53 of 2025, hereinafter "Japan AI Act"), the AI Business Operator Guidelines v1.2 (March 2026) jointly published by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry, and ISO/IEC 42001:2023 (AI Management System standard).
Under an AI governance framework aligned with ISO/IEC 42001:2023, the Company designs and operates its AI systems around three pillars: transparency, explainability, and human-centricity. This Policy applies together with the Privacy Policy and Terms of Service.
Table of Contents
- 1. Scope and Regulatory Basis
- 2. AI Systems We Provide and Use
- 3. Transparency Obligations (EU AI Act Art. 50)
- 4. Prohibited AI Practices (EU AI Act Art. 5)
- 5. High-Risk AI Restrictions (Annex III / GDPR Art. 22)
- 6. Data Governance and Training Use
- 7. Human-Centricity and Human-in-the-Loop
- 8. AI Literacy and User Support (EU AI Act Art. 4)
- 9. Disclosure of Hallucination and Misinformation Risks
- 10. AI Impact Assessment and Governance
- 11. Incident Reporting and Contact
- 12. Annual Transparency Report
- 13. Amendments to This Policy
1. Scope and Regulatory Basis
This Policy rests on the following four regulatory and standards foundations:
- (1) EU AI Act (Regulation (EU) 2024/1689): Entered into force on August 1, 2024. Article 5 (prohibited practices) and Article 4 (AI literacy) have applied since February 2, 2025. Article 50 (transparency obligations), Annex III (high-risk AI), and penalties will apply from August 2, 2026. Non-compliance may result in fines of up to EUR 35 million or 7% of global annual turnover for prohibited practices, and up to EUR 15 million or 3% for most other infringements, in each case whichever is higher.
- (2) Japan AI Act (Act No. 53 of 2025): In full force since September 1, 2025. Under Article 6, the Company, as an AI business operator, uses AI in accordance with the Act's basic principles (transparency, international cooperation, risk response) and cooperates with national measures.
- (3) AI Business Operator Guidelines v1.2 (MIC / METI, March 2026): The Company declares its compliance with the ten principles reflecting the OECD AI Principles and the G7 Hiroshima AI Process: human-centricity, safety, fairness, privacy, security, transparency, accountability, education/literacy, fair competition, and innovation.
- (4) ISO/IEC 42001:2023 (AI Management System): The Company maintains an organizational governance framework aligned with the AIMS international standard and conducts AI System Impact Assessments in accordance with Clause 6.1.4. The Company has a track record of supporting client enterprises in building ISO/IEC 42001-aligned AI governance.
This Policy applies to the Company's corporate website, the AI Concierge, the AI School, and all AI development, operations, and consulting services the Company provides to client enterprises.
2. AI Systems We Provide and Use
For transparency, we disclose the principal AI systems, foundation models, and providers we rely on in our services:
- Large Language Models (LLMs): Anthropic Claude (Opus / Sonnet / Haiku), Google Gemini, OpenAI GPT series, etc. (selected per use case)
- AI Gateway: Multi-provider architecture via Google Vertex AI and Vercel AI Gateway (model redundancy and cost optimization)
- AI Agents: LangGraph-based Planner-Executor agents (AI Concierge)
- Retrieval-Augmented Generation (RAG): Information retrieval combining vector databases with internal knowledge bases (sketched at the end of this section)
- Observability stack: Trace collection, AI safety monitoring, and hallucination detection via LangSmith
All of the above systems currently operate within the scope of "limited-risk AI" under the EU AI Act. The Company does not currently operate any use cases that fall under high-risk AI (Annex III).
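For illustration, the retrieval-augmented generation pattern listed above works roughly as follows: relevant passages are retrieved from a vector database and spliced into the model prompt so that answers are grounded in internal knowledge. The sketch below is a minimal example, not the Company's production code; the collection name and documents are hypothetical, and the gateway call that would forward the grounded prompt to a selected model is omitted.

```python
# Minimal RAG sketch: retrieve relevant internal passages, then ground the
# model prompt in them. Illustrative only; names and documents are hypothetical.
import chromadb

client = chromadb.Client()  # in-memory vector store, sufficient for a demo
kb = client.create_collection(name="internal_knowledge")

# Index a few internal documents (in production, the internal knowledge base).
kb.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "The AI Concierge operates 24/7 and answers in Japanese and English.",
        "Support requests receive a reply within one business day.",
    ],
)

def build_grounded_prompt(question: str, n_results: int = 2) -> str:
    """Retrieve the most relevant passages and splice them into the prompt."""
    hits = kb.query(query_texts=[question], n_results=n_results)
    context = "\n".join(hits["documents"][0])
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you do not know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

# The grounded prompt would then be sent through the AI gateway to the model.
print(build_grounded_prompt("When is the AI Concierge available?"))
```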
3. Transparency Obligations (EU AI Act Art. 50)
Ahead of the August 2, 2026 application date, the Company complies with the following transparency obligations under EU AI Act Article 50:
- Disclosure of AI interaction (Art. 50(1)): Our AI Concierge chat UIs always display a "🤖 Chatting with AI" indicator so that users are aware, from their first interaction, that they are dealing with an AI system.
- Machine-readable labeling of AI-generated content (Art. 50(2)): Images, audio, and video generated by the Company will progressively be marked with C2PA (Coalition for Content Provenance and Authenticity) provenance metadata and watermarks by August 2, 2026 (a simplified manifest example follows this list).
- Deepfake and synthetic media disclosure (Art. 50(4)): Where synthetic media generation is offered, the Company clearly discloses that the content is artificially generated, and likewise fulfills the disclosure duty for AI-generated text published to inform the public on matters of public interest.
- Emotion recognition and biometric categorization (Art. 50(3)): No such systems are currently offered. Any future offering will be preceded by an amendment to this Policy and advance notice.
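As an illustration of the machine-readable labeling above, a C2PA manifest declares how an asset was produced. The sketch below shows a simplified manifest as a Python dictionary; the claim generator and title are hypothetical, cryptographic signing and embedding into the asset are omitted, and the assertion structure follows the C2PA "c2pa.actions" assertion with the IPTC digital source type vocabulary.

```python
# Simplified C2PA manifest labeling an image as AI-generated (illustration
# only; signing and embedding are omitted). claim_generator and title are
# hypothetical values, not the Company's actual identifiers.
manifest = {
    "claim_generator": "StorytellerAI/1.0",
    "title": "generated-image.png",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        # Declares provenance: the asset was created by a
                        # trained algorithmic (generative AI) process.
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}
```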
4. Prohibited AI Practices (EU AI Act Art. 5)
The Company does not operate, and will not accept client engagements whose purpose is, any of the following practices prohibited under EU AI Act Article 5:
- Subliminal manipulation or exploitation of human vulnerabilities
- Social scoring, by public or private actors, leading to detrimental or unfavorable treatment
- Predictive policing based solely on personal characteristics
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in the workplace or educational institutions (except for genuine safety or medical reasons)
- Biometric categorization based on sensitive attributes (race, political opinions, religion, etc.)
- Real-time remote biometric identification in publicly accessible spaces (save for limited law enforcement exceptions)
5. High-Risk AI Restrictions (Annex III / GDPR Art. 22)
The following uses are classified as high-risk AI under EU AI Act Annex III. The Company prohibits the use of AI output as the sole basis for decisions in these areas and always requires independent human review:
- Recruitment, performance evaluation, promotion, and dismissal
- Credit scoring and insurance premium calculation
- Evaluation and admissions at educational institutions
- Eligibility assessment for public benefits
- Law enforcement, judicial decisions, and refugee determination
- Safety assessment of critical infrastructure
In accordance with GDPR Article 22, the Company does not, as a matter of principle, engage in fully automated decision-making that produces legal or similarly significant effects on individuals. Where such processing becomes necessary, we will obtain the data subject's prior explicit consent and guarantee Human-in-the-Loop review and the right to explanation under EU AI Act Article 86.
6. Data Governance and Training Use
Under Article 17 of Japan's Act on the Protection of Personal Information (purpose specification), GDPR Article 5 (data minimization), and ISO/IEC 42001:2023 Clause 8, the Company implements the following data governance:
- Policy on training data use: The Company does not use personal data obtained from customers for the pre-training or fine-tuning of foundation models (LLMs). Should such use become necessary, we will obtain prior explicit consent and provide opt-out mechanisms.
- Treatment of prompt inputs: Prompts submitted to the AI Concierge are retained solely for service provision, quality improvement, and safety monitoring, and are covered by agreements (Zero Data Retention or equivalent) ensuring that foundation-model providers do not use them for training.
- Special categories of personal data: Under Article 20(2) of the Japanese APPI, the acquisition of sensitive personal information (medical history, criminal record, race, etc.) requires prior consent from the data subject.
- Cross-border transfers (APPI Art. 28): Transfers of personal data to overseas AI providers (Anthropic, Google, OpenAI, etc.) are only carried out with the data subject's consent or equivalent safeguards (Standard Contractual Clauses).
- Right to erasure from models: Because removing specific data from trained models is technically constrained, the Company relies on data minimization and anonymization as a matter of principle (see the sketch after this list), and responds to erasure requests as promptly as technically feasible.
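As a simplified illustration of the minimization step above: before a prompt leaves the Company's perimeter for an external provider, obvious identifiers can be masked. The patterns below are examples only and do not constitute an exhaustive PII filter; production systems would use a dedicated redaction service.

```python
# Illustrative data-minimization step: redact obvious personal identifiers
# before a prompt is forwarded to an external foundation-model provider.
# The patterns below are examples only, NOT an exhaustive PII filter.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"), "[PHONE]"),  # JP-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID_NUMBER]"),     # SSN-like identifiers
]

def minimize(prompt: str) -> str:
    """Return the prompt with known identifier patterns masked."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(minimize("Contact taro@example.jp or 03-1234-5678 about order 123-45-6789."))
# -> "Contact [EMAIL] or [PHONE] about order [ID_NUMBER]."
```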
7. Human-Centricity and Human-in-the-Loop
In line with the "human-centricity" principle of the AI Business Operator Guidelines v1.2 and the human-oversight controls of ISO/IEC 42001:2023 (Annex A), the Company implements the following governance:
- For significant decisions (contracts, legal matters, healthcare, hiring, credit, etc.), AI output is treated as reference information and final decisions are always made by humans (see the sketch after this list).
- AI Concierge and similar outputs are always accompanied by guidance recommending human verification.
- Users retain the right to request human intervention and re-evaluation of AI outputs (GDPR Art. 22(3)).
- The Company does not use AI output as the sole basis for any organizational decision.
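The review gate referred to above can be sketched as follows. The domain list, data model, and reviewer workflow are illustrative assumptions, not the Company's actual implementation; the point is that in significant-decision domains, AI output cannot become a final decision without a named human reviewer.

```python
# Sketch of a Human-in-the-Loop gate: in significant-decision domains, AI
# output is reference information only and a final outcome requires a named
# human reviewer. Domains and workflow are illustrative assumptions.
from dataclasses import dataclass

SIGNIFICANT_DOMAINS = {"contracts", "legal", "healthcare", "hiring", "credit"}

@dataclass
class Decision:
    domain: str
    ai_recommendation: str        # reference information for the reviewer
    approved_by: str | None = None

def finalize(decision: Decision, human_verdict: str, reviewer: str | None) -> str:
    """Record the final outcome; the human verdict always prevails."""
    if decision.domain in SIGNIFICANT_DOMAINS and not reviewer:
        raise PermissionError("significant decisions require a human reviewer")
    decision.approved_by = reviewer
    return human_verdict  # the AI recommendation never decides on its own

d = Decision(domain="hiring", ai_recommendation="advance candidate to interview")
print(finalize(d, human_verdict="advance candidate to interview", reviewer="hr_lead"))
```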
8. AI Literacy and User Support (EU AI Act Art. 4)
Under EU AI Act Article 4 (applicable since February 2, 2025), the Company provides the following literacy support to users and client enterprises:
- Documentation and help resources explaining the capabilities and limits of AI systems
- Best-practice guides for prompt engineering
- Notices regarding hallucination, bias, and privacy risks
- Structured AI literacy education through Storyteller AI School (for interested participants)
9. Disclosure of Hallucination and Misinformation Risks
Generative AI may produce output that is factually incorrect (hallucinations), biased, or based on outdated information. The Company expressly notes that:
- AI output is informational only, and users are responsible for verifying facts.
- AI output must not be used as a substitute for medical, legal, tax, financial, or investment advice.
- Before any significant decision, users should consult primary sources and appropriately qualified professionals.
- The Company makes no warranty, express or implied, as to the accuracy of AI outputs.
10. AI Impact Assessment and Governance
In accordance with ISO/IEC 42001:2023 Clause 6.1.4 (AI System Impact Assessment), the Company conducts an AI Impact Assessment when introducing new AI systems or making material changes to existing ones:
- Assessment of potential impacts on data subjects and society
- Analysis of bias, fairness, safety, and security risks
- Integration with Data Protection Impact Assessments (DPIA)
- Review of Human Oversight design
- Recording in the risk register and periodic re-assessment (an illustrative register entry follows this list)
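An illustrative shape for a risk-register entry produced by such an assessment is sketched below. The field names are assumptions chosen for this example; ISO/IEC 42001 does not prescribe a record format.

```python
# Illustrative risk-register entry created by an AI Impact Assessment
# (ISO/IEC 42001:2023 Clause 6.1.4). Field names are hypothetical; the
# standard does not prescribe a record format.
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    system: str                  # AI system under assessment
    trigger: str                 # "new system" or "material change"
    affected_parties: list[str]  # data subjects, users, wider society
    risks: dict[str, str]        # risk -> mitigation (bias, safety, security, ...)
    dpia_reference: str | None   # link to the integrated DPIA, if any
    human_oversight: str         # summary of the Human Oversight design
    assessed_on: date
    next_review: date            # periodic re-assessment

entry = ImpactAssessment(
    system="AI Concierge",
    trigger="material change",
    affected_parties=["website visitors"],
    risks={"hallucination": "LangSmith monitoring plus human-verification guidance"},
    dpia_reference=None,
    human_oversight="escalation to a human operator on request",
    assessed_on=date(2026, 4, 1),
    next_review=date(2027, 4, 1),
)
```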
11. Incident Reporting and Contact
Any concern, incident, or objection relating to our AI systems should be directed to the contacts below; as a rule, we respond within one month:
AI Ethics / Data Protection Officer: dpo@storytlr.ai
General inquiries: contact@storytlr.ai
In the event of a significant AI-related incident, the Company will report to the Japanese government as provided under the Japan AI Act and, where personal data is affected, will notify the competent supervisory authority within 72 hours under GDPR Article 33.
12. Annual Transparency Report
From fiscal year 2026 onward, the Company will publish an annual AI transparency report. The report will cover models used, summaries of training data, number of incidents, AI Impact Assessments performed, and the number of Human-in-the-Loop interventions.
13. Amendments to This Policy
This Policy may be amended in response to legal changes, technological developments, and improvements in the Company's AI governance framework. Material changes will be announced on this website in advance.
The latest version will always be published on this page. Earlier versions are available upon request.
Version 1.0 — Published April 14, 2026