Things of Your Day


I guess Rejected got a 4k restoration? It feels weird to watch it in better quality than a 240p avi.


Appropriate given recent events in my life.



White, of Strunk & White fame, is E.B. White, of Charlotte’s Web fame.

I just learned this.


50% Owl is the new ruler of the internet


Today is the 70th anniversary of the Universal Declaration of Human Rights:




Long but worth the read

That appears to be made up, though based on some true things. But still, very emotional, because you know shit like this went down with people who were gay back then.





This will of course only be used by humans and never bots, perish the thought.


I’m more interested in captchas that block humans who fail the Turing Test from interacting with me online.

Like, when I go nazi-hunting on twitter, I usually find nests of bots that I can clearly just block en-masse.

But every now and then I find an account that is clearly a real human. But their behavior is so bot-like that I report them anyway.


Is it possible to make such a test that won’t have some sort of inherent bias? You’re going to get a lot of false positives keeping out good people based on language, nationality, race, culture, etc.

Can you design software that can accurately determine whether the user is a nazi?


Almost definitely not.

But I would assume it has bias and embrace that: tune it to my own preferences for interaction, to my own biases, as much as possible.

True. But we see the kinds of “I want to join the FRC forum” messages we get today. I’m sure we’ve denied at least some good people who are real people and would have contributed. But that is a cost we paid to keep this place spam and nazi free.

We already collectively decided with email that we’ll accept even a high rate of false positives rather than have a single false negative.

I bet you could detect nazis with a high enough degree of accuracy with a regular expression…

I wouldn’t want to use something like this for anything large-scale. I’m thinking personal scale. A dynamic twitter blocklist that identifies people I don’t ever want to interact with based on heuristics.

Imagine it analyzes individual users’ other tweets. Then it uses some sort of “The Good Place” system.

  • Retweeted RWNJ site w/o comment? -50 points
  • RWNJ sites make up more than 20% of all retweets? -80 points
  • Statistically significant number of posts consisting of 14 words? -400 points
  • Uses the word “libtard”? -1 point per use
  • Followed by other low-score accounts? -20 per account

I then set a point threshold below which no account can interact with me.
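For fun, here’s what a subset of those heuristics might look like in code. This is a toy sketch, not a real Twitter integration: the account format, heuristic weights, and threshold are all made up to mirror the list above.

```python
# Toy sketch of the "Good Place" point system described above.
# The account dict shape, heuristics, and weights are illustrative only.

def score_account(account: dict) -> int:
    """Apply a few hand-picked heuristics and return a running score."""
    score = 0
    retweets = account.get("retweets", [])
    if retweets:
        rwnj = [r for r in retweets if r.get("rwnj_site")]
        # Retweeted an RWNJ site without comment: -50 points each.
        score -= 50 * sum(1 for r in rwnj if not r.get("comment"))
        # RWNJ sites make up more than 20% of all retweets: -80 points.
        if len(rwnj) / len(retweets) > 0.20:
            score -= 80
    # Uses the word "libtard": -1 point per use.
    score -= sum(t.count("libtard") for t in account.get("tweets", []))
    return score

THRESHOLD = -100  # arbitrary cutoff: accounts below this can't interact

def can_interact(account: dict) -> bool:
    return score_account(account) >= THRESHOLD
```

An account that retweeted one RWNJ site without comment (-50, and -80 since that’s 100% of its retweets) and used the word twice (-2) lands at -132, below the cutoff.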


The post you just made would lose you a point because you used the word. This post also loses a point, if we count instances where the word appears when quoting another user.


I think that would be fine.

If someone is constantly quoting shitheads and/or using their words, that’s just as annoying to me. Make it a low negative score for things like that, and intermittent callouts wouldn’t overcome a positive score.

But frankly, if someone ends up using that word a LOT, I probably don’t want to interact with them regardless of their intent.


Were I a nazi I’d have all my 1000 bots follow every single person you follow. (if this ever caught on, there’s little risk because you said this’d be a personal thing)


I’ve had this basic idea for a while.

We create a social network that has a very strict tree structure. The entire tree could even be publicly visible. We start with a single person, hypothetically me. It’s invite only. I invite people. They then invite people. And so on. Everyone is linked to the person that invited them. They are also dependent on the person that invited them. Likewise, they are responsible for the behavior of the people they invite.

We then use simple, tried-and-true human moderation. Users report spam, nazis, bots, harassment, etc. Then a team of trusted good people reviews the reports and takes common-sense action. This team can be a combination of paid employees and crowdsourced users.

The key difference is that if a bad account is found, we can cut off not just that account, but every account underneath it in the tree. If we ban a nazi, we will also not hesitate to recursively ban everyone the nazi has ever invited to the network, all the way down the tree. If a nazi happened to invite a good person, too bad. If we’re nice we can maybe give people the ability to get their accounts re-homed. Maybe they can find someone willing to take responsibility for them and adopt them into their tree.

Likewise, if a good account with quality content has somehow invited a bunch of nazis, we won’t hesitate to go one level up and take them out. If we wanted to be even nicer, which we don’t have to be, we could let them keep their good account, but remove their ability to invite new users.

The tree is also useful for disseminating content. Users can control exactly where on the tree, relative to their own location, content will be posted. Post to everyone below you. Post to everyone 1 space away. 2 spaces away. All kinds of possible options. We might also use it to restrict users’ ability to publish. Newer or untrusted users won’t be able to message people who are too far away. It should be relatively difficult to publish content that goes up, sideways, and then down another branch of the tree.
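The mechanics above are simple enough to sketch. This is a minimal in-memory model, with invented names (`User`, `invite`, `ban_subtree`, `distance`), just to show that recursive bans and distance-scoped posting fall straight out of the tree structure:

```python
# Minimal sketch of the invite tree described above. All names are
# hypothetical; a real network would obviously persist this somewhere.

class User:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter   # link to the person who invited you
        self.invitees = []       # everyone you've invited
        self.banned = False

    def invite(self, name):
        """Create a new account linked beneath this one."""
        child = User(name, inviter=self)
        self.invitees.append(child)
        return child

def ban_subtree(user):
    """Ban a user and, recursively, everyone they ever invited."""
    user.banned = True
    for child in user.invitees:
        ban_subtree(child)

def distance(a, b):
    """Tree distance between two users, for scoping who sees a post."""
    def chain(u):
        out = []
        while u is not None:
            out.append(u)
            u = u.inviter
        return out
    pa, pb = chain(a), chain(b)
    common = next(u for u in pa if u in pb)  # nearest shared ancestor
    return pa.index(common) + pb.index(common)
```

So if I invite alice and alice invites bob, banning alice also bans bob but leaves me intact, and a "1 space away" post from alice would reach both of us while a post scoped to distance 1 from me wouldn’t reach bob.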

What say other people of this?