Defeating the cancer of imageboards through asynchronous LLM post quality moderation
All of our moderation issues have historically stemmed from some kind of spam. We have begun to solve this with a system that effectively tricks spambots into taking themselves out while regular users remain unaffected.
Our actual constraint, now that we are taking steps toward defeating a common Internet enemy, is post quality, which has a great impact on Cyberix's outside reputation.
We are featured on AllChans. The implication is easy to see: 4channers who dislike 4chan migrate to altchans, but instead of assimilating into the existing culture or building something new, they spread their modern 4chan cancer culture there.
Cyberix is definitely affected by this, even though we aren't exactly an imageboard but a forum with imageboard elements. We appreciate and encourage open speech, but we are beginning to realize that you cannot just have totally unmoderated speech without it turning into a cesspool of flamewars and bad-faith arguments/posts. We appreciate quality.
Here is my proposal:
What if we deployed an LLM that was finetuned on Cyberix rules (and rules 3 and 4 from Lainchan just to be safe) and biased against the cancer that runs in the blood of traditional imageboards and short-form websites?
It'd process new posts in batches, just like the vision model processes attachments. There would be no need to deal with spam or low-effort catchphrase/'bloat' posters when the LLM handles that kind of cruft for us. I imagine a new mod panel section that would let us tune the model further to reduce false positives (if any...) and target specific kinds of unwanted posts (soyspeak, for example, if we really wanted to).
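The batch-screening flow could look something like the sketch below. Note that `llm_classify` is a hypothetical stand-in for the finetuned model, and the labels, word-count heuristic, and review-queue idea are all assumptions for illustration, not an existing Cyberix API.

```python
def llm_classify(posts):
    """Stand-in for the finetuned LLM: returns 'low' or 'ok' per post.
    Here we fake the model by flagging very short posts as low quality."""
    return ["low" if len(p.split()) < 5 else "ok" for p in posts]

def screen_batch(posts):
    """Classify a batch of new posts and return the ones needing mod review.
    Nothing is deleted automatically, which keeps false positives recoverable."""
    verdicts = llm_classify(posts)
    return [p for p, v in zip(posts, verdicts) if v == "low"]

flagged = screen_batch([
    "Brown hands wrote this",  # one-liner example from this proposal
    "A longer, substantive reply that engages with the thread topic.",
])
# flagged == ["Brown hands wrote this"]
```

Routing flagged posts to a mod queue rather than auto-deleting them is the safer default while the false-positive rate is still unknown.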
An LLM that analyzes recent posts and threads would be more effective at adhering to specific rules and examples than human moderators, since it applies the same criteria every time. It could enforce the rules and forum standards consistently, without human mood variation.
Its goal is NOT to inhibit good but otherwise negative speech (an extreme example: an anonymous poster over Tor writing a detailed thread about his life experiences and what led him to become a racist). Its goal is to inhibit nonproductive, "trashy" speech in otherwise serious or productive threads (e.g. an anonymous Tor poster dropping a one-liner like "Brown hands wrote this", or someone posting "is this site turning into an insane asylum" in a thread about religion).
I'd love to hear your thoughts on this idea. I believe this implementation would be yet another grand move in our work to conquer all of the long-standing problems that have plagued anonymous messageboards since their inception. We've stopped the CP problem, and we've begun to stop the spam problem. This would likely be the "part 2" of the solution to that problem.
I'm not against the idea of an LLM checking post quality and filtering out low- or zero-effort posts. In fact, I like it. The only questions I have are: how will it be trained? Are there already examples of high-quality and low-quality posts determined to train the LLM on?
Replies:
>>10622
probably not. a good way to gather bad posts would be to head to 4chan's /g/ board and copy and paste half the threads and posts there under 'bad', then go back to cyberix, put the good posts under 'good', and see how it rolls
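That labeling step might be assembled roughly as follows, assuming the scraped posts already sit in two plain-text files, one post per line. The file names and the JSONL output layout are guesses, not an existing pipeline.

```python
import json

def build_dataset(good_path, bad_path, out_path):
    """Merge two one-post-per-line text files into a labeled JSONL
    training set, skipping blank lines. Returns the row count."""
    rows = []
    for path, label in ((good_path, "good"), (bad_path, "bad")):
        with open(path, encoding="utf-8") as f:
            for line in f:
                text = line.strip()
                if text:
                    rows.append({"text": text, "label": label})
    with open(out_path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
    return len(rows)
```

A finetuning run would then consume the JSONL file, with each `text`/`label` pair serving as one training example.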
just ban everything to do with politics