Digital Focus

From Innovation to Obligation: Why Child-Centred AI Is Now a Safeguarding Duty

Inspired by the UNICEF Innocenti Guidance on AI and Children, Version 3

Artificial intelligence now sits quietly inside many ordinary moments of childhood. Children learn with AI tools, experiment with generative systems, and are shaped by recommendation engines that predict, nudge, and influence their choices. These systems are not neutral. They reflect values, incentives, and blind spots built into them long before a child ever logs in.

This reality changes the safeguarding conversation. Protection can no longer begin after harm occurs. It must be built into how AI is designed, governed, and used.

UNICEF’s updated Guidance on AI and Children, Version 3, offers a clear framework for doing exactly that. Grounded in the Convention on the Rights of the Child, it reframes AI not as a technical issue alone, but as a child rights issue with legal, ethical, and moral weight.

Why this Moment Matters

AI has moved faster than the policies meant to govern it. Generative AI, AI companions, automated profiling, and large-scale data extraction are now part of children’s lives, often without their understanding or meaningful consent. At the same time, new risks have emerged. These include AI-generated child sexual abuse material, non-consensual intimate images, exploitative data supply chains, environmental harms that fall hardest on younger generations, and the use of AI in conflict and cyber operations.

Opportunities exist as well. AI can support learning, expand access for children with disabilities, and strengthen child well-being. But opportunity without protection creates exposure. That gap is what the updated guidance seeks to close.

Regulation is Child Protection

Good intentions are not safeguards. When AI systems affect children, voluntary commitments are not enough. Clear laws, oversight bodies, and enforcement mechanisms are essential. Without them, children become unknowing test subjects for systems designed to optimize speed, profit, or scale rather than safety.

Child-centred AI requires governments to define responsibility for harm and ensure that accountability is enforceable. Regulation, in this context, is not a barrier to innovation. It is a boundary that protects children from being harmed in the name of progress.

Safety Must be Designed, not Retrofitted

Safeguarding added after harm is damage control. Systems that interact with children must be assessed for risk before deployment. Designers and deployers must anticipate misuse, manipulation, and exploitation, including sexual abuse, coercion, and psychological harm. If a system cannot demonstrate that it is safe for children in advance, it is not ready for use.

Children are not Data Sources

Children’s data deserves a higher standard of care. This means collecting less data, not more, limiting profiling, and placing firm boundaries around surveillance. A child’s curiosity, mistakes, or questions should not become a permanent digital record that follows them into adulthood.

Protecting privacy is not about hiding information. It is about preserving a child’s right to grow without being endlessly tracked, categorized, or predicted.

Fairness is a Safeguard

AI systems can scale bias quickly and quietly. Without deliberate checks, they can exclude or disadvantage children based on race, gender, disability, language, or geography. Children living with disabilities face particular risk of being overlooked or misrepresented by automated systems.

Auditing for discrimination and designing for inclusion are not optional extras. They are central to protecting dignity and equality.

Transparency Protects Trust

Children and caregivers have a right to know when AI is being used, how decisions are made, and who is responsible when something goes wrong. Opacity shields systems, not children. Transparency creates the conditions for challenge, redress, and accountability. Safeguarding fails when harm has no clear owner.

Child Rights Must be Explicit

Respect for children’s rights cannot be assumed. It must be clearly stated and embedded in governance, procurement, design, and deployment. Protection from exploitation and abuse, respect for dignity, and recognition of children as rights-holders must guide every stage of the AI lifecycle. What is not explicit is often ignored.

The Best Interests of the Child Must Come First

Efficiency, engagement metrics, and commercial gain cannot outweigh child well-being. Every system that affects children should be able to answer simple questions: Does this benefit the child? Does it support healthy development? Does it avoid addiction, manipulation, or harm?

If the answers are unclear, the system should not proceed.

Inclusion and Literacy are Part of Safeguarding

Children should not be passive users of AI. Digital and AI literacy give them tools to question outputs, recognize limits, and seek help. Inclusion ensures that children in low-resource settings are not left unprotected or unheard.

An informed child is a safer child.

A Shared Responsibility

Child-centred AI cannot be delivered by one sector alone. Governments, industry, schools, families, faith communities, and safeguarding professionals all have roles to play. When responsibility is fragmented, children fall through the gaps.

Safeguarding works when duty is shared and clearly defined.

From Guidance to Obligation

AI is shaping childhood now, not in some distant future. That makes child-centred AI an obligation, not an aspiration. Regulators must act. Industry must design responsibly. Educators must build literacy. Parents must stay engaged. Safeguarding professionals must lead with clarity and courage. Children should not have to adapt to unsafe systems. AI must be required to adapt to protect children.
