Since the outbreak of the COVID-19 pandemic, an increasing number of public school students have been using laptops, tablets, and similar devices provided by their schools.
According to a September 2021 report, the percentage of teachers who reported that their schools had provided students with such devices more than doubled from 43% before the pandemic to 86% during the pandemic.
In some ways, it’s tempting to applaud how schools are doing more to keep their students connected to the internet during the pandemic.
The problem is that schools do not simply provide computers for students to use in order to keep up with their schoolwork.
Instead, in an Orwellian trend, the vast majority of schools are using those devices to monitor what students are doing outside of school.
This student surveillance is being carried out – at taxpayer expense – in cities and school communities across the country.
For example, the Minneapolis school district paid over $355,000 to use Gaggle’s student surveillance tools until 2023.
Three-quarters of reported incidents – cases where the system flagged students’ online activity – occurred outside of school hours.
When the GoGuardian surveillance app detects students typing keywords related to self-harm in Baltimore’s public school system, police officers are dispatched to the children’s homes.
Vendors claim that these tools protect students from self-harm and potentially dangerous online activities.
Those claims, however, have been questioned by privacy advocates and news outlets.
Vendors frequently refuse to reveal how and what type of data were used to train their artificial intelligence programs.
Privacy advocates are concerned that these tools will harm students by criminalizing mental health issues and suppressing free speech.
As a researcher who studies privacy and security issues in a variety of settings, I am well aware that intrusive surveillance techniques harm students emotionally and psychologically, disproportionately penalize minority students, and compromise online security.
Even the most advanced AI systems cannot fully comprehend human language and context. This is why student surveillance systems generate many false positives rather than flagging real issues.
In some cases, these surveillance programs have flagged students who are talking about music that is deemed inappropriate.