I am seeing increasing calls for real-life identity to be a fundamental part of online presence, especially in places like Twitter, where disinformation thrives. These calls are a problem.
The claim is that tying posts to a real-life identity will help prevent hate speech, disinformation, and scams. That proposal lacks crucial nuance. It falls into the trap tech always seems to fall into: pushing for major change without first bringing marginalized people to the table and into the discussion.
I have heard these suggestions from all over, but most consistently from Scott Galloway. I value hearing from his experience, but my own career has given me some exposure to the knock-on effects [1] that show why this would put marginalized people in physical danger, as I lay out below. Enforcing identity is great at reducing problems for you as long as you’re straight, white, male, and American. For others it’s not quite as clear-cut. This has been extensively researched, and Jillian C. York maintains a list of much of that research.
To be clear, I’m not trying to land on one side of the identity debate or the other. I’m trying to inject nuance into the conversation and make sure we don’t leave certain groups out of it entirely, as we have so many times before. A perennial problem in almost all of tech is that we don’t stop to consider externalities, especially how things will affect marginalized folks.
I can’t speak for everyone affected, but I want to talk about how these policies have harmed members of the LGBT+ community in the past, and how they will inevitably do so again. Members of marginalized communities, such as LGBT+ people, often find support in online groups, but don’t want that support associated with their real-life identities. Exposing that identity can cause grief, and sometimes puts them at risk of violence or death.
We already have a case study here: Google+. It had a “real name” policy similar to current proposals, and ditched it in part because of the danger it caused. The policy outed a trans woman when Google decided to apply it to Hangouts. This wasn’t a one-off; Facebook saw almost identical results with a similar policy.
Insisting on identity in forums where marginalized people can currently gather in relative safety seems shortsighted, especially when we’re seeing increasing violence against LGBT+ people. We are seeing a similar rise in hate crimes against Asian-Americans, with incidents increasing 339% last year alone. This rise in violence against the marginalized, combined with the ongoing rise in swatting attacks [2], makes enforced identity exceptionally dangerous.
There are support and gathering groups online that will become hunting grounds for certain types of people, and for prosecutors. Legal cannabis is spreading across the country, for recreational use but also for medicinal use, in many cases providing a viable alternative to highly addictive opiates for veterans with PTSD as well as others.
Having a safe place to discuss this without fear of decades in prison has been, and continues to be, important. As the Supreme Court removes what were previously constitutional rights, it becomes even more crucial. What innocuous conversations are you having online today that might make you liable for a $10,000 civil lawsuit a year from now?
So far I’ve focused on America-oriented problems, but living under certain regimes makes this situation much worse. In the US, many (but not all) of the problems I listed involve fellow citizens using someone’s identity to commit a crime, usually a violent one, against them. That is terrible, but in some countries that violence is not only legal but perpetrated against the person by their own government, leaving them absolutely no recourse.
I don’t have a grand solution, unfortunately. Lack of identity seems to contribute to death threats, scams, and disinformation, but enforcing identity causes great risk and harm to the most vulnerable among us. Any solution in this space will not be a straightforward or simple one. [3]
One method that might help is prohibiting the amplification of any account in algorithmic feeds (e.g. Twitter, TikTok) unless its identity is verified. You could still follow anyone, but you would never see a “suggestion” from a non-verified account, removing the amplification effect.
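To make the idea concrete, here is a minimal sketch of what that rule could look like in a ranking pipeline. The `Account` type and `identityVerified` flag are hypothetical names of my own; this is an illustration of the filter, not any platform’s actual code.

```typescript
// Hypothetical sketch: unverified accounts stay fully usable and followable,
// they just never appear in algorithmic suggestions.

interface Account {
  handle: string;
  identityVerified: boolean; // passed some form of identity verification
}

interface Candidate {
  account: Account;
  relevanceScore: number; // whatever the ranking model produces
}

function pickSuggestions(candidates: Candidate[], limit: number): Candidate[] {
  return candidates
    .filter((c) => c.account.identityVerified) // amplification requires verification
    .sort((a, b) => b.relevanceScore - a.relevanceScore)
    .slice(0, limit);
}
```

Direct follows are untouched: an unverified account’s posts still reach its followers’ feeds; the account simply never gets recommended to strangers.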
Another might be stronger “liveness” testing. Some proposals already exist to make this better than the current captcha system, and it could also take advantage of the biometrics that modern devices provide. [4]
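For the device-biometric flavor, the standard WebAuthn browser API already supports this kind of check. Below is a rough sketch, assuming the user previously registered a platform authenticator (Touch ID, FaceID, Windows Hello, etc.) and that a hypothetical `verifyAssertionOnServer` call validates the signed response. Notably, the biometric itself never leaves the device, which matters for the reasons in footnote 4.

```typescript
// Sketch of a periodic liveness check via WebAuthn. Assumes a credential was
// registered earlier with navigator.credentials.create() using a platform
// authenticator. verifyAssertionOnServer is a hypothetical backend call.
async function proveLiveness(
  challenge: Uint8Array,    // fresh random bytes issued by the server
  credentialId: Uint8Array, // the credential previously registered for this account
): Promise<boolean> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,
      allowCredentials: [{ type: "public-key", id: credentialId }],
      userVerification: "required", // forces an on-device biometric or PIN check
      timeout: 60_000,
    },
  });
  if (!(assertion instanceof PublicKeyCredential)) {
    return false; // user cancelled or no authenticator available
  }
  // The server checks the signature against the public key it stored at
  // registration; the biometric data itself never leaves the device.
  return verifyAssertionOnServer(assertion);
}

// Hypothetical backend call that validates the assertion signature and challenge.
declare function verifyAssertionOnServer(
  assertion: PublicKeyCredential,
): Promise<boolean>;
```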
This could be rolled out in tandem with much more thorough bot registration; Twitter is already experimenting in this area. Making liveness testing much more frequent, but much easier than captchas, and forcing bots through a separate system that requires strong identity attestation to an individual or company would allow good bots to continue while limiting malicious ones.
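A sketch of that two-track split, with entirely hypothetical type names and an arbitrary time window: human accounts just need a recent liveness check, while declared bots must carry an identity attestation tied to a responsible person or company.

```typescript
// Hypothetical gate: humans need a recent liveness check, declared bots need a
// valid identity attestation. Names and the 30-day window are illustrative.
type Attestation = {
  subject: string;   // the individual or company the bot is attested to
  issuer: string;    // whoever performed the identity verification
  expiresAt: Date;
};

type Registration =
  | { kind: "human"; lastLivenessCheck: Date }
  | { kind: "bot"; attestation: Attestation | null };

const LIVENESS_WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // arbitrary 30-day window

function mayPost(reg: Registration, now: Date = new Date()): boolean {
  if (reg.kind === "human") {
    return now.getTime() - reg.lastLivenessCheck.getTime() < LIVENESS_WINDOW_MS;
  }
  // Bots without a valid, unexpired attestation are blocked outright.
  return reg.attestation !== null && reg.attestation.expiresAt > now;
}
```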
Hate speech, disinformation, death threats – these are all terrible things that happen online, and enforcing identity might reduce them for some people. But we know from experience that enforcing identity will also cause crime, violence, death, and legal trouble.
Tech has a long history of ignoring or downplaying the marginalized. These efforts around identity come from a good place – a desire to see social and online media improve and to counteract some of its most pernicious problems. But even in an attempt to do good with social media, we again risk leaving out the vulnerable and causing more harm than good. And, as we’ve seen, once information is public it’s close to impossible to claw back. If we get this wrong there’s no going back.
1. And others have much more, of course.
2. Swatting is calling law enforcement and reporting a false crime with the intent to have police send a SWAT team, often resulting in injury or death.
3. If someone suggests a “simple” solution here and uses the word “blockchain”, just run away.
4. It’s beyond the scope of this post, but OS-integrated biometrics stored in a secure enclave on your device (e.g. a fingerprint scanner or FaceID) are much safer than, and radically different from, centrally stored biometrics (e.g. using your face as a Delta boarding pass). Centrally stored biometrics are a bad idea almost 100% of the time.