The AI revolution: A double-edged sword for children

The AI revolution is a double-edged sword for children. So says Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 AFRICA. The challenge, she says, is to maximise AI’s benefits for children’s education and growth while also safeguarding their privacy, healthy development, and emotional well-being.

In just two years, artificial intelligence has undergone a revolution. Generative AI tools like ChatGPT, Google’s Gemini, and Microsoft’s Copilot have rapidly become part of our daily lives. With Meta integrating AI chatbots into popular platforms like WhatsApp, Facebook, and Instagram, the technology is more accessible than ever before. For children growing up in this AI-powered world, the implications are both exciting and concerning, warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 AFRICA.

“These AI tools offer unprecedented opportunities for learning, creativity, and problem-solving. Children can use them to create art, compose music, write stories, and even learn new languages through engaging interactive methods,” Collard explains.

“The personalised nature of AI chatbots, with their ability to provide quick answers and tailored responses, makes them especially appealing to young minds.”

However, as with any transformative technology, AI brings with it a host of potential risks that parents, educators, and policymakers must consider carefully.

From privacy concerns and the danger of overtrust to the spread of misinformation and possible psychological effects, the challenges are significant. “As we step into this AI-driven era, we must carefully weigh the incredible potential against the genuine risks,” warns Collard. “Our challenge is to harness AI’s power to enrich our children’s lives while simultaneously safeguarding their development, privacy, and overall well-being.”

Privacy concerns

“Parents need to know that while they seem harmless, chatbots collect data and may use it without proper consent, leading to potential privacy violations.”

The extent of these privacy risks varies greatly. According to a Canadian Standards Authority report, the threats range from relatively low-stakes issues, such as using a child’s data for targeted advertising, to more serious concerns.

Because chatbots can track conversations, preferences, and behaviours, they can create detailed profiles of child users. In the wrong hands, this information can enable powerful manipulation tactics, from spreading misinformation and fuelling polarisation to grooming.

Collard further points out that large language models were not designed with children in mind. The AI systems that power these chatbots are trained on vast amounts of adult-oriented data, which may not account for the special protections needed for minors’ information.

Overtrust

Another concern for parents is that children may develop an emotional connection with chatbots and trust them too much, even though these tools are neither human nor their friends. “The overtrust effect is a psychological phenomenon that is closely linked to the media equation theory, which states that people tend to anthropomorphise machines, meaning they assign human attributes to them and develop feelings for them,” comments Collard. “It also means that we overestimate an AI system’s capability and place too much trust in it, thus becoming complacent.”

Overtrust in generative AI can lead children to make poor decisions because they may not verify information.

“This can lead to a compromise of accuracy and many other potential negative outcomes,” she explains. “When children rely too much on their generative AI buddy, they may become complacent in their critical thinking, and it also means they may reduce face-to-face interactions with real people.”

Inaccurate and inappropriate information

AI chatbots, despite their sophistication, are not infallible. “When they are unsure how to respond, these AI tools may ‘hallucinate’ by making up an answer instead of simply saying they don’t know,” Collard explains. This can lead to minor issues like incorrect homework answers or, more seriously, to giving minors the wrong diagnosis when they are feeling sick.

“AI systems are trained on information that includes biases, which means they can reinforce these existing biases and provide misinformation, affecting children’s understanding of the world,” she asserts.

From a parent’s perspective, the most frightening danger of AI for children is potential exposure to harmful sexual material. “This ranges from AI tools that can create deepfake images of them to tools that can manipulate and exploit their vulnerabilities, subliminally influencing them to behave in harmful ways,” Collard says.

Psychological impact and reduction in critical thinking

As with most new technologies, overuse can have negative outcomes. “Excessive use of AI tools by kids and teens can lead to reduced social interactions, as well as a reduction in critical thinking,” states Collard.

“We’re already seeing these negative psychological side-effects in children through overuse of other technologies such as social media: a rise in anxiety, depression, social aggression, sleep deprivation and a loss of meaningful interaction with others.”

Navigating this brave new world is difficult for children, parents and teachers, but Collard believes policymakers are catching up. “In Europe, while it doesn’t specifically relate to children, the AI Act aims to protect human rights by ensuring AI systems are safer.”

“By prioritising play and reading away from screens, parents will help boost their children’s self-esteem and critical-thinking skills.”

Until proper safeguards are in place, parents must monitor their children’s AI usage and counter its negative effects by introducing family rules.

THIS ARTICLE HAS BEEN COMPILED BY THE TEAM AT RED RIBBON COMMUNICATIONS
