CCHAC

Regulators are enforcing standards they haven’t defined. CCHAC fills the gap.

While Article 5 prohibitions theoretically triggered in February 2025, enforcement feels muted because regulators are still defining psychological harm.

Do not mistake this delay for safety. The EU AI Act creates a duty to prevent vertical system-to-user harms that your current safety stack cannot see.

Legacy classifiers detect lateral abuse like bullying but remain blind to parasocial attachment and developmental displacement. CCHAC operationalizes these clinical standards, giving you the taxonomy to measure interactional risk before high-risk obligations bind in August 2026.

Download framework

Fragmenting product strategy across divergent policy regimes is unsustainable. With the EU representing roughly 23% of global revenue, Brussels' standards inevitably become the global baseline. When California or New York follows suit, national product teams converge.

The question is not whether these harms will become compliance surfaces. The question is whether your policy architecture will be defensible when they do.

Your analysts haven’t flagged this because the specific statutory language does not yet exist. The liability is accumulating anyway.

Schedule strategy briefing

Emergent harms

Generative AI introduces harm vectors that legacy content moderation frameworks were never designed to address. These are not content problems.

They are interaction problems, design problems, and relationship problems that emerge from systems built for engagement operating on developing minds.

Four categories warrant immediate attention for foundation model product teams.

78
AIAAC harms evaluated

Transitional Object Liability

Children form attachment bonds with AI-enabled toys that respond, remember, and adapt. When the service terminates, the subscription lapses, or developers update the model's personality, the child experiences loss of a relationship. To a child in the preoperational stage, the AI was alive. Its disappearance is bereavement.

Find out more

Pediatric Administrative Routing

AI systems increasingly screen children for developmental conditions, authorize treatments, and determine special education eligibility. Pediatric classification errors compound across developmental time. A false negative on autism screening at age 4 means missed early intervention windows. The error propagates through years that cannot be recovered.

Find out more

Affect Analytics Encoding Bias

NYC Local Law 144 already requires bias audits for automated employment decision tools. Extend that logic to automated educational assessment tools and you have the regulatory template that California, Illinois, and the EU are converging toward.

Find out more

Executive Function Displacement

The Character.AI litigation has established that "foreseeable psychological harm from product design" is a viable theory of liability. Executive function displacement is a more diffuse harm but still a possible tort.

Find out more

Vincent Bish developed CCHAC at Harvard University, synthesizing platform governance strategy while studying with Dr. Noah Feldman (Meta Oversight Board) and in consultation with Dr. Bertrand Schneider and his research at the Graduate School of Education.

As visual AI systems become a more normalized part of everyday life, interdisciplinary experts keep us from flying in the dark without a net. Affective (emotion- and trauma-informed) product design will become a new compliance surface as content is increasingly age-restricted around the globe. The CCHAC (pronounced 'Jack') framework maps these emerging technical risks, drawing on my time as an aide in the Obama White House and at HHS, the primary jurisdiction for US child welfare.

Edge cases are where governance fails first. Children are the unrepresented constituency in model training. CCHAC exists to bring interdisciplinary technologists into commercial settings, developing research-informed guardrails that protect the most vulnerable.

Request a Briefing

CCHAC

Liability is accumulating in the gap between today's silence and tomorrow's enforcement. Legacy tools detect bullying; they do not detect developmental displacement or parasocial attachment.

We provide the clinical taxonomy you need to measure vertical harms before the 2026 high-risk mandates bind. Define the standard, or have it defined for you.
