hopfog
3 days ago
I built a multiplayer chatroom where all messages are transformed by an LLM (e.g. into pirate speak or corporate jargon):
I also built this incremental clicker game where you split words ad infinitum (like Infinite Craft but in reverse):
nextcaller
3 days ago
This is what I've thought the future would be like for years now. Everybody is going to be harmless, since LLMs will translate and cushion, or outright censor, any problematic communication.
KPGv2
2 days ago
For a long time I've wanted to write a self-censoring browser tool that sits between my social media forms and the HTTP call that sends what I type. It was going to be rudimentary: when you hit "post" on FB/TWT/etc., some quick sentiment analysis happens and, if it detects negative speech, you get prompted: "are you sure you want to send this?"
The idea is that you have actual triggers to remind you to be kind. Nextdoor has something like this: if you use profanity or other charged words, it will gently nudge you to remember to be kind.
(Obviously, if you know Nextdoor, this doesn't work. Lotta "random minority is scaring me by existing near my house")
But incorporating an LLM might be awesome. I am not wedded to the idea of censoring incoming speech, but I'd sure like to be nudged if I am being a problem.
There used to be a web-based tool where you could give it your Reddit username, and it would analyze your posts and give you statistics and a kindness score (or something like that).
I found that my enjoyment of the website went up when I ran that script regularly, because it reinforced that I should be kinder online (I find this more difficult than in meatspace), and by being kinder I was far less likely to get a mean response, which lowered my stress levels.
Maybe this would be a useful project to work on. A browser plugin of some kind, if Monkeyscript or something can use Rust-based web workers. I really don't know where browser tech is these days.
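The intercept-and-nudge idea above could be sketched as a userscript. Everything here is a hypothetical stand-in: the word list is a toy substitute for real sentiment analysis (or an LLM call), and a real version would need site-specific selectors per platform.

```javascript
// Toy stand-in for sentiment analysis: flags text containing words from a
// small list. A real tool would call a classifier or an LLM instead.
const NEGATIVE_WORDS = ["idiot", "stupid", "hate", "shut up"];

function looksNegative(text) {
  const lower = text.toLowerCase();
  return NEGATIVE_WORDS.some((word) => lower.includes(word));
}

// Listen for form submissions in the capture phase, before the site's own
// handlers run, and ask for confirmation when the text looks unkind.
// Only call this in a browser context (e.g. from a userscript manager).
function installNudge() {
  document.addEventListener(
    "submit",
    (event) => {
      const field = event.target.querySelector("textarea");
      if (!field || !looksNegative(field.value)) return;
      if (!window.confirm("This reads as unkind. Send it anyway?")) {
        event.preventDefault();
        event.stopPropagation();
      }
    },
    true
  );
}
```

The capture-phase listener matters: it lets the check run before the site's JavaScript fires off the HTTP request, which is the "sits between the form and the HTTP call" part of the idea.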
hawest
3 days ago
I explored whether this could be helpful in an online dispute resolution platform. The system could detect insulting or angry messages that threaten to derail a conversation, and suggest a more neutral way of formulating them. I think it's promising!
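The detect-and-suggest flow described above might look something like this. Both function names are hypothetical, and the model call is injected as a parameter so the sketch stays independent of any particular LLM provider.

```javascript
// Build a prompt asking a model to reformulate a hostile message neutrally,
// keeping the substantive complaint but dropping the insult or anger.
function buildRewritePrompt(message) {
  return [
    "The following message in an online dispute is insulting or angry.",
    "Rewrite it so the same concern is expressed neutrally and factually.",
    "Keep the sender's substantive points; drop the hostility.",
    "",
    "Message: " + JSON.stringify(message),
  ].join("\n");
}

// callModel(prompt) -> Promise<string> is supplied by the caller, wired to
// whatever LLM API the dispute-resolution platform uses.
async function suggestNeutralVersion(message, callModel) {
  return callModel(buildRewritePrompt(message));
}
```

The platform could then show the suggestion alongside a "send rewritten version instead?" option rather than silently replacing the sender's words.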
actionfromafar
3 days ago
The next step is just re-writing on the receiving end.
mdrzn
3 days ago
Tried the multiplayer chatroom for a while and it seems fun. Great idea.