If algorithms create the content, is it still speech? Is content moderation a euphemism for censorship? By the end of June 2024, the Supreme Court of the United States is expected to rule on the fraught issue of regulating freedom of speech in the digital era.

In some quarters, regulating points of view amounts to a violation of the First Amendment of the US Constitution, which upholds freedom of speech. The First Amendment gives the citizen vast freedoms precisely by limiting what the government can do.

Any discussion of the First Amendment must first consider what the state can regulate inside the modern public square. In the United States, the core of this discussion revolves around social media posts about the Capitol riot of 6 January 2021.

Around the world, governments have created legislation to manage harmful digital content. The United Kingdom’s Online Safety Act, the European Union’s Digital Services Act, and Canada’s proposed Online Harms Act are three watershed examples. The Canadian model would make technology companies responsible for confronting several categories of harmful content.

These include (1) content that induces a child to harm themselves, (2) intimate content communicated without consent, (3) content that foments hatred, (4) content that incites terrorism or violent extremism, (5) content that incites violence, and (6) content used to bully a child.

This legislation would establish a new Digital Safety Commission to enforce standards and provide guidelines for platforms to introduce guardrails for children. For far too long, countless children have been exposed to harm.

Like the United States, Canada respects freedom of speech, but the position of the government is that the online environment must be accessible to all, without fear for their safety or their lives.

The proposal is presently being deliberated. The hope is that platforms will introduce features to protect children, including parental controls and safe search settings, now that AI and algorithms are integrated seamlessly into enterprise solutions and platforms.

The agnostics argue that we will see neither AI utopia nor AI dystopia anytime soon. They even question whether the term AI should be retired as a relic, since it has its genesis in a distant, optimistic moment in the mid-1950s.

They contend that computers have reinforced many existing power structures and have played a deeply conservative role: rather than revolutionize society, computers have helped to maintain and manage existing hierarchies and relationships of ownership and power.

The evangelists, on the other hand, foresee that AI will change our relationships, both with ourselves and with others. Just as a divide exists now between “digital immigrants” and “digital natives”, so, too, will a divide emerge between “AI natives” and those who precede them. Children in schools in the West Indies will grow up with AI assistants like Bixby, Jasper, Rytr, and WriteSonic.

Bixby runs on smartphones and smart devices. It is voice-based and can be used for texting. Jasper can help writers optimize and rank their content, offers a wide range of writing templates for everything from one-off social media posts to long-form blogs, and can track voice and tone to build a brand over time.

Rytr can translate text into more than thirty different languages, rewrite existing content, check for plagiarism, and create multiple versions of a piece.

WriteSonic automatically generates SEO-friendly content for long-form articles, social media advertisements, and website landing pages. It also offers a customer support chatbot called Botsonic, an AI art generator named Photosonic, and a GPT-4-powered AI chatbot called ChatSonic.

AI assistants in education settings will be multilingual and will calibrate their features to individual students’ performance and learning styles, to enable every child to reach their full potential.

As bespoke, AI-driven occasions for learning emerge, the average human’s capability stands both to increase and to be challenged. The boundary between AI and the human is porous. Once children acquire digital assistants at an early age to support their understanding of the world, they will become habituated to them.

Digital assistants will evolve with their users, learning their proclivities and preferences. Relationships with digital assistants will blossom simply because, by comparison, humans are less intuitive and more disagreeable.

One consequence of this is that our dependence on human relationships may shrivel, and the ineffable qualities and lessons of childhood may vaporize. The omnipresent machine, which cannot feel or experience human emotion, will now shape the child’s perception and socialization.

How will AI assistants release the imagination of the university student? How will AI assistants change the way children make friends? AI has already transformed the educational and cultural experiences of a generation. It has also changed primary, secondary, and tertiary education systems in every country. We are on the edge of creating a new education system.

It will be one in which machines and AI assistants will, in countless ways, act as human teachers, constantly connected to the web. No small advance in knowledge, once it becomes knowable, will be out of the pupil’s reach. But such machines will not have human sensibilities, insight, and emotion. How, then, do we protect children from harm, and moderate digital content?