How to save our social media by treating it like a city

The work of integrity teams provides a different solution. We may be in the spotlight now, but we have a long history in the industry: we’ve learned a lot from the fight against spam in email and search engines, and we borrow many concepts from computer security.

One of the best strategies for integrity we’ve found is to bring some real-world friction back into online interactions. I’ll focus on two examples, but there are many more such mechanisms: limits on group size, karma or reputation systems (think Google’s PageRank), a “neighborhood you’re from” indicator, structures for good conversation, a less powerful share button. Let’s start with two ideas that integrity workers have developed; call them driving exams and speed bumps.

First, we need to make it harder for people to run fake accounts. Imagine if, after being arrested for a crime, anyone could pop out of jail perfectly disguised as a whole new person. Imagine if it were impossible to tell whether you were talking to a bunch of people or to one person rapidly changing disguises. That lack of trust is corrosive. At the same time, we need to remember that pseudonymous accounts aren’t always bad. Perhaps the person behind the pseudonym is a gay teen who is not out to family, or a human rights activist living under a repressive regime. We don’t need to ban all fake accounts. But we can raise their costs.

One solution is analogous to the way, in many countries, you can’t drive a car until you’ve learned to operate it under supervision and passed a driving exam. Similarly, new accounts should not get immediate access to every feature on an app. To unlock the features that are more abusable (for spam, harassment, and so on), an account should have to pay a cost in time and effort. Maybe it just needs time to “ripen.” Maybe it needs enough goodwill accrued in some karma system. Maybe it needs to do a few things that are hard to automate. Only once the account has passed this “driving exam” will it be trusted with access to the rest of the app.
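To make this concrete, here is a rough sketch of what such a gate might look like in code. Everything in it is hypothetical (the feature names, the thresholds, the fields on the account), chosen only to illustrate the shape of the idea; a real platform would tune all of this from its own data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative only: these fields, features, and thresholds are
# made up for this sketch, not any platform's real policy.

@dataclass
class Account:
    created_at: datetime
    karma: int = 0                        # goodwill accrued in a reputation system
    verified_actions: set = field(default_factory=set)  # hard-to-automate steps completed

# Features ranked by how abusable they are, each with its own "exam."
FEATURE_REQUIREMENTS = {
    "comment":      {"min_age": timedelta(hours=24), "min_karma": 0,   "actions": set()},
    "create_group": {"min_age": timedelta(days=14),  "min_karma": 50,  "actions": {"phone_check"}},
    "bulk_invite":  {"min_age": timedelta(days=30),  "min_karma": 200, "actions": {"phone_check", "long_post"}},
}

def may_use(account: Account, feature: str, now: datetime) -> bool:
    """Return True once the account has 'ripened' enough for this feature."""
    req = FEATURE_REQUIREMENTS.get(feature)
    if req is None:
        return True  # low-risk features stay open to everyone
    old_enough = now - account.created_at >= req["min_age"]
    trusted_enough = account.karma >= req["min_karma"]
    proven_enough = req["actions"] <= account.verified_actions  # subset check
    return old_enough and trusted_enough and proven_enough
```

The specific numbers don’t matter. What matters is that each new account has to re-earn trust slowly, in ways that are hard to automate.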

Spammers could, of course, jump through those hoops. In fact, we expect them to. After all, we don’t want to make it too hard for the legitimate users of fake accounts. By requiring some effort to create each new “disguise,” however, we’re reintroducing some physics into the equation. Three fake accounts might be manageable. Hundreds or thousands would be too difficult to pull off.

Online, the worst harms almost always come from power users. This is intuitive: social apps generally encourage their members to post as much as possible, and power users can act far more often, reach more audiences, and do both at once in ways that are impossible in real life. In legacy cities, the harm one person can do is bounded by the physical need to be in one place, speaking to one audience, at a time. This is not true online.

Online, some actions are perfectly reasonable if done in moderation, but they become suspicious when done in volume. Think of creating two dozen groups at once, or commenting on a thousand videos an hour, or posting every minute for a whole day. When we see people using a feature too much, we think they’re probably doing something akin to driving at an unsafe speed. We have a solution: the speed bump. Lock them out from doing that thing for a while. There’s no value judgment here—it’s not a punishment, it’s a safety feature. Such measures would be an easy way to make things safer for everyone while inconveniencing only a small fraction of people.
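At heart, a speed bump is a rate limiter with a cooldown. The sketch below uses a simple in-memory sliding window; the limits, window, and lockout duration are made-up numbers for illustration, not real policy.

```python
import time
from collections import defaultdict, deque

class SpeedBump:
    """Temporarily locks an account out of an action it performs too fast.
    All limits and durations here are illustrative placeholders."""

    def __init__(self, max_actions: int, window_seconds: float, lockout_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.lockout = lockout_seconds
        self.history = defaultdict(deque)   # account_id -> timestamps of recent actions
        self.locked_until = {}              # account_id -> when the lockout expires

    def allow(self, account_id: str) -> bool:
        now = time.monotonic()
        if self.locked_until.get(account_id, 0) > now:
            return False                    # still riding out the speed bump
        recent = self.history[account_id]
        while recent and now - recent[0] > self.window:
            recent.popleft()                # drop actions outside the window
        if len(recent) >= self.max_actions:
            self.locked_until[account_id] = now + self.lockout
            return False                    # too fast: pause the feature, nothing more
        recent.append(now)
        return True

# e.g., at most 30 comments per hour; exceeding that pauses commenting for 15 minutes
comment_bump = SpeedBump(max_actions=30, window_seconds=3600, lockout_seconds=900)
```

Note the design choice: the lockout is temporary and automatic. The account isn’t flagged or punished; it just has to slow down, which is exactly what a speed bump does to a car.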
