I came upon this: blog.ericgoldman.org/archives/2019/06/once-again-section-230-protects-twitters-account-suspension-decisions-brittain-v-twitter.htm
Twitter is claiming §230 protection for suspending the accounts of Craig Brittain. This is an example of content moderation that is actually intended, in the original spirit of §230, to make Twitter more, well, "family friendly" by removing "offensive" speech.
From the Apenwarr blog, apenwarr.ca/log/?m=201902 (Avery Pennarun):
Forget privacy: you're terrible at targeting anyway
I don't mind letting your programs see my private data as long as I get something useful in exchange. But that's not what happens.
The state of personalized recommendations is surprisingly terrible. At this point, the top recommendation is always a clickbait rage-creating article about movie stars or whatever Trump did or didn't do in the last 6 hours. Or if not an article, then a video or documentary. That's not what I want to read or to watch, but I sometimes get sucked in anyway, and then it's recommendation apocalypse time, because the algorithm now thinks I like reading about Trump, and now everything is Trump. Never give positive feedback to an AI.
This is, by the way, the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic. There's no magic here. If you use ML to teach a computer how to sort through resumes, it will recommend you interview people with male, white-sounding names, because it turns out that's what your HR department already does.
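The resume example above can be sketched in a few lines. This is a hypothetical illustration, not anyone's real hiring pipeline: the names, the biased rule, and the "learner" are all made up. A trivial majority-vote "model" is trained on labels produced by a dumb heuristic, and unsurprisingly it learns back exactly that heuristic.

```python
import random
from collections import defaultdict

# Hypothetical "dumb heuristic" baked into historical hiring decisions.
# (Illustrative only; the names and rule are invented for this sketch.)
def hr_heuristic(resume):
    return resume["name"] in {"Greg", "Brad", "Todd"}

# Historical "training data": resumes labeled by the heuristic,
# i.e. examples of what the humans did.
names = ["Greg", "Brad", "Todd", "Lakisha", "Jamal", "Aisha"]
training = [{"name": random.choice(names), "skills": random.randint(1, 10)}
            for _ in range(1000)]
labels = [hr_heuristic(r) for r in training]

# A trivial "learner": majority vote of past labels, keyed on name.
votes = defaultdict(list)
for resume, label in zip(training, labels):
    votes[resume["name"]].append(label)
model = {name: sum(v) > len(v) / 2 for name, v in votes.items()}

def predict(resume):
    return model.get(resume["name"], False)

# The trained "model" reproduces the heuristic it was fed.
test = [{"name": n, "skills": 5} for n in names]
assert all(predict(r) == hr_heuristic(r) for r in test)
```

A real ML model would be fancier than a majority vote, but the point stands: if the training labels came from a biased rule, the model converges on that rule, and you could have written the rule by hand for far less money.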
And this comic from http://smbc-comics.com/comic/2011-11-17, though I would have said "government blanket surveillance" in the first panel: