
 Message 2134 
 Mike Powell to All 
 Keep kids safe in AI 2/2 
 30 Dec 25 09:42:02 
 
TZUTC: -0500
MSGID: 1891.consprcy@1:2320/105 2db9221d
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed
 (continued)

What needs to happen?

Ideally, protecting children would involve parents, schools, governments, and
tech companies all working together. But after years of slow progress on
social media regulation, it's not hard to see why confidence in that happening
any time soon is low.

Many of the biggest problems could be addressed if the companies behind AI
tools and social platforms took more responsibility and enforced meaningful
safeguards. "Tech companies need to be subject to urgent, meaningful
regulation if we're going to protect children," Steele says. "At the moment,
far too much responsibility is falling on families, schools, and the goodwill
of industry, and that simply isn't safe."

Bartuski agrees that companies should be doing far more. "They have the
money, resources, and visibility to be able to do a lot more. Many social
media companies have used Fogg's Persuasive Design to get kids habituated to
be lifelong users of their platforms. Tech companies do this on purpose,"
she explains.

But this is where the tension lies. We can say tech companies should do more,
yet as the risks become clearer, corporate incentives are often moving in the
opposite direction. "With the guardrails being removed from AI development
(specifically in the US), there are some (not all) companies that are using
that to their advantage," Bartuski says. She has already seen companies push
ahead with features they know are dangerous.

Even so, experts agree that certain steps would have an immediate and
significant impact. "There need to be clear rules on what AI systems must not
be allowed to do, including creating sexualized images of children, promoting
self-harm, or using design features that foster emotional dependency," Steele
says.

This forms the basis of the Safe AI for Children Alliance's Non-Negotiables
Campaign, which outlines three protections every child should have. Alongside
banning the creation of sexualized images of children, the campaign states
that AI must never be designed to make children emotionally dependent and
must never encourage children to harm themselves.

But relying on tech companies alone won't cut it. Independent oversight is
essential. This is why Briercliffe believes stronger external checks are
needed across the industry. "There must be mandatory, independent,
third-party testing and evaluation before deployment," he says. "We also need
independent oversight, transparency about how systems behave in real-world
conditions, and real consequences when companies fail to protect children."

And this goes beyond individual platforms. "This is ultimately a question of
societal responsibility," Tara says. "We must set strong, enforceable
standards that ensure children's safety comes before commercial incentives."

What can parents do? 

Even with regulations slow to catch up, parents shouldn't feel at a loss.
There are meaningful steps you can take right now. "It's completely
understandable for parents to feel worried," Steele says. "The technology is
moving very fast, and the risks aren't intuitive. But it is important not to
feel powerless."

 1. Understand the basics 

"Parents don't need to learn how every AI tool works," Bartuski says. But
getting clear on the risks and benefits is important. Steele offers a free
Parent and Educator Guide at safeaiforchildren.org that lays out all the
major concerns in clear, accessible language; it is a good place to start.

 2. Create open, non-judgmental communication 

"If kids feel judged or are worried about consequences, they are not going to
turn to parents when something is wrong," Bartuski says. "If they don't feel
safe talking to you, you are placing them in potentially dangerous and/or
exploitative situations." Keep conversations calm, curious, and shame-free.

 3. Talk about the tech 

You might assume your children understand AI better than you do because they
use it more. But they may not grasp how it works, how often it gets things
wrong, or that fake content can look real. Bartuski says kids need to know
that chatbots can be wrong, manipulative, or unsafe, even when they sound
caring or convincing. 

 4. Use shared spaces 

This isn't about banning tech outright. It's about making it safer. Steele
suggests enforcing "shared spaces", which involves using AI tools in communal
areas, experimenting together, and avoiding private one-on-one use behind
closed doors. This could reduce the chance of harmful interactions going
unnoticed.

 5. Extend the conversation beyond the home 

Safety shouldn't stop at your front door. "If you are worried, ask your
child's school what they have in place," Briercliffe says. "Even ask your
employer to bring in a professional to give a talk." Experts agreed that
while parents play a key role here, this is a wider cultural challenge, and
the more openly we all discuss it, the safer children will be.

 6. Find more balance and reduce screen time 

We've been talking about limiting screen time for years, and it's just as
important now that AI is showing up across apps, games, and social platforms.
"Kids need to be taught balance," Bartuski says. "Play is essential for
growth and development." She also stresses that reducing screen time only
works if it's replaced with activities that are engaging, fun, and
cognitively challenging.
 
======================================================================
Link to news story:
https://www.techradar.com/ai-platforms-assistants/how-you-can-keep-your-kids-safe-in-this-ai-powered-world

--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470
SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426

