Hong Kong Teen Develops AI Tool to Help Detect Abuse in Childcare Centres

A 17-year-old student from Hong Kong has developed an artificial intelligence system designed to help identify signs of possible physical abuse inside childcare centres, using existing CCTV footage to strengthen child safety and protection.

The system, known as Kid AID, analyses live video feeds and flags unusual or potentially harmful interactions based on movement patterns, duration of contact, and behavioural cues. Rather than replacing human caregivers or supervisors, the technology is intended to act as an early-warning layer in environments where constant, close monitoring of every child is often difficult.
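
Technical details of Kid AID have not been published, so the sketch below is only a rough illustration of how an early-warning layer over such cues might be structured. It assumes hypothetical per-frame features (movement speed near a child and whether adult-child contact is detected) produced by some upstream video tracker; the field names and thresholds are placeholders, not values from the project.

```python
from dataclasses import dataclass


# Hypothetical per-frame detection assumed to come from an upstream
# pose/person tracker running over the CCTV feed. These fields only
# illustrate the kinds of cues the article mentions: movement patterns,
# duration of contact, and behavioural signals.
@dataclass
class Interaction:
    timestamp: float        # seconds since the start of the feed
    movement_speed: float   # speed of adult movement near the child (illustrative units)
    in_contact: bool        # whether adult-child contact is detected in this frame


def flag_interactions(frames, speed_threshold=2.5, max_contact_seconds=4.0):
    """Return timestamps that merit closer human review.

    A frame is flagged when movement near the child is unusually fast, or
    when continuous contact lasts longer than expected for routine care.
    Both thresholds are placeholders, not values from Kid AID.
    """
    flags = []
    contact_start = None
    for frame in frames:
        # Track how long an uninterrupted contact episode has lasted.
        if frame.in_contact:
            if contact_start is None:
                contact_start = frame.timestamp
            contact_duration = frame.timestamp - contact_start
        else:
            contact_start = None
            contact_duration = 0.0

        if frame.movement_speed > speed_threshold or contact_duration > max_contact_seconds:
            flags.append(frame.timestamp)
    return flags


if __name__ == "__main__":
    feed = [
        Interaction(0.0, 0.4, False),
        Interaction(1.0, 3.1, True),   # sudden fast movement while in contact
        Interaction(2.0, 0.5, True),
        Interaction(8.0, 0.5, True),   # contact has now lasted more than 4 seconds
    ]
    print(flag_interactions(feed))     # -> [1.0, 8.0]
```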

Kid AID was created by Chow Sze Lok, a student of St Mary’s Canossian College, who developed the project in just six months using an old laptop. Despite its modest beginnings, the system has already gained significant recognition, earning awards at the Hong Kong Science Fair and the international InfoMatrix competition.

At its core, the project responds to a critical child-protection challenge: incidents of abuse in childcare settings often go unnoticed until visible injuries or behavioural changes appear. By analysing CCTV footage in real time, the system aims to detect concerning physical interactions early, allowing authorities or administrators to intervene before harm escalates.

The technology focuses on patterns rather than isolated moments, examining repeated or forceful movements that may indicate risk. This approach helps reduce false alarms while drawing attention to interactions that merit closer human review.
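
Purely as an illustrative sketch, and not a description of Kid AID's actual logic, the snippet below shows one common way to prioritise patterns over isolated moments: an alert is escalated only when several flagged moments cluster inside a sliding time window, which helps suppress one-off false alarms. The window length and minimum event count are assumed placeholders.

```python
from collections import deque


def escalate_on_pattern(event_times, window_seconds=60.0, min_events=3):
    """Raise an alert only when several flagged moments cluster in time.

    A single flagged frame is ignored, since it may reflect play or routine
    handling; repeated flags inside one window suggest a pattern worth
    human review. Parameters are illustrative, not from the project.
    """
    recent = deque()
    alerts = []
    for t in sorted(event_times):
        recent.append(t)
        # Drop flags that fall outside the sliding window.
        while recent and t - recent[0] > window_seconds:
            recent.popleft()
        if len(recent) >= min_events:
            alerts.append(t)
    return alerts


# Example: isolated flags at 10 s and 300 s are ignored; a cluster between
# 500 s and 540 s triggers an alert on the third flag in the same minute.
print(escalate_on_pattern([10.0, 300.0, 500.0, 520.0, 540.0]))  # -> [540.0]
```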

Chow’s work reflects a growing trend of student-led innovation addressing real-world safety gaps through technology. In childcare environments, where children may lack the ability to report harm or clearly explain what happened to them, early detection tools can play a crucial role in safeguarding their rights to safety, dignity, and protection.

While Kid AID is still a prototype, its success highlights the potential for accessible, low-cost AI solutions to support childcare oversight, particularly in settings with limited staff or high child-to-caregiver ratios. It also raises broader questions about how technology can be ethically integrated into child-focused spaces to enhance protection without violating privacy.

As discussions around child safety, surveillance, and responsible AI continue globally, projects like Kid AID show how young innovators are contributing practical solutions to complex social problems, placing children's well-being at the centre of technological advancement.
