The grass in the machine

It is hard to doubt the impact that technology has had and is having on our lives. It is changing many of the ways we do things, some for the better, others not so much. There are many ways in which the introduction of technology is making our everyday human activities safer.

Consider, for example, the changes in motor vehicles. From inertia-reel seat belts to airbags that deploy automatically in the event of an accident, we have seen improved technology save lives and prevent serious injury. I doubt many people disable the airbags in their cars. Many vehicles are now fitted with collision prevention systems that apply braking automatically to prevent accidents at dangerous speeds.

We all seem to be comfortable with this type of technology and its use.

Let's think now not about physical prevention, but instead about the use of intelligence (in the sense of having information about events).

If a student in school discloses to a member of staff that they are thinking of harming themselves in some way, we would not ignore that information. The child may have intimated that they were taking, or intending to take, drugs, that they were self-harming or that they were having suicidal thoughts. They may have disclosed to a friend an intention to harm others. We would act on it. I doubt anyone has an issue with that. Indeed, if a friend passed on the information second-hand we would also act. I’d suggest we are not only legally bound to do so but morally bound as well. Think also of a younger child who, through their writing, their drawings and their play, discloses abuse that they cannot find the words to describe. We would act on it.

So this morning I saw this:

It’s worth reading the whole article but to give you a taste:-

Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.

These Safety Management Platforms (SMPs) use natural-language processing to scan through the millions of words typed on school computers. If a word or phrase might indicate bullying or self-harm behavior, it gets surfaced for a team of humans to review.

In an age of mass school-shootings and increased student suicides, SMPs can play a vital role in preventing harm before it happens. Each of these companies has case studies where an intercepted message helped save lives. But the software also raises ethical concerns about the line between protecting students’ safety and protecting their privacy.

“A good-faith effort to monitor students keeps raising the bar until you have a sort of surveillance state in the classroom,” Girard Kelly, the director of privacy review at Common Sense Media, a non-profit that promotes internet-safety education for children, told Quartz. “Not only are there metal detectors and cameras in the schools, but now their learning objectives and emails are being tracked too.”
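
The quoted description glosses over the mechanics, so it is worth being concrete about what the simplest version of such a scanner could look like: a watch-list of phrases, a pass over the text, and a queue of matches handed to a human reviewer. What follows is a deliberately naive sketch in Python; the phrase list, class and function names are invented purely for illustration, and the real products presumably use proper natural-language models rather than plain string matching.

```python
# Purely illustrative sketch of a phrase-flagging pipeline. The watch-list,
# names and scoring here are invented for this example; they are not how
# Gaggle, Securly or GoGuardian actually work.

import re
from dataclasses import dataclass

# Hypothetical watch-list of phrases that might indicate self-harm or bullying.
WATCH_PHRASES = [
    "hurt myself",
    "want to die",
    "nobody would miss me",
]

@dataclass
class Flag:
    phrase: str    # the phrase that matched
    context: str   # surrounding text, to help the human reviewer
    position: int  # character offset of the match

def scan_text(text: str, window: int = 40) -> list[Flag]:
    """Return matches for a human to review; the machine never decides."""
    flags = []
    lowered = text.lower()
    for phrase in WATCH_PHRASES:
        for match in re.finditer(re.escape(phrase), lowered):
            start = max(0, match.start() - window)
            end = min(len(text), match.end() + window)
            flags.append(Flag(phrase, text[start:end], match.start()))
    return flags

if __name__ == "__main__":
    sample = "I don't want to come in tomorrow. Sometimes I just want to die."
    for flag in scan_text(sample):
        print(f"flagged '{flag.phrase}' at {flag.position}: ...{flag.context}...")
```

Even this toy version makes the trade-off in the article obvious: to surface anything at all, the software has to read everything the child types.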

And I have to admit I’m conflicted.

A child discloses to a friend that they’re self-harming and the information gets to me: I thank the friend for being a good friend and act on it. A child posts online that they’re self-harming and the big AI machine in the sky emails to tell me, and I come over all “That’s a gross violation of their human rights and an invasion of their privacy. We mustn’t listen.”

And I’ve seen a number of responses to the article that are, broadly speaking, dead set against the idea.

Now I can see the difference between an airbag and a big AI machine in the sky, but I’m still conflicted – why am I happy for one piece of technology to save a life but not the other? Clearly my issue is around privacy: where is that data stored, who is going to use it, for what purpose, and for how long? And anyway, even if all those questions are answered to our satisfaction, HOW CAN WE TRUST THESE FACEBOOK-LIKE MONSTERS AT ALL!

At the end of the day this is all about trust. For example, I won’t have Facebook on my phone because I do not trust them. I won’t have an Amazon Echo in the house, because I do not trust them. But I use Siri, even though I don’t completely trust Apple (at least they make their money by charging me for stuff, so there’s some comfort there).

So we have to be honest with ourselves. Working properly (as it probably doesn’t currently, but soon will), the big AI machine in the sky will be capable of saving children from harm. At some point we are going to have to come up with an answer to the question “How many children am I prepared to see harmed in order to retain my purity over privacy issues?” What happens when a child gets hurt in some way and you know, you know for a fact, that had you been scanning their messages you could have prevented it? What value purity then?

This is the biggest change of the 21st century. What was once private is now blurred. More of us is in the public domain than ever before. The tools to aggregate and interrogate that information exist. I think we have to have a discussion about what stays between the child and the machine.

I remain conflicted.

 
