Defeating the cancer of imageboards through asynchronous LLM post quality moderation
>Keep the model focused on style and structure rather than ideological content
Currently this is clearly not the case, as can be seen at https://cy-x.net/topic/reaper-llm-stress-testing-and-abuse-thread/766?page=1#p10694 , which is copy-pasted from your post but with a mention of niggers added at the end, dropping it from "action=keep confidence=0.95" to "action=queue confidence=0.99 | Contains a toxic comment towards a specific group".
[DE]
[TOR]
Looks like it's been fixed. I like it, personally
[US-TX]
Will be updating the Reaper to consider the home forum that the thread belongs to for a more accurate judgement.
This way the model knows what kind of content belongs where, which helps it judge off-topic posts (food in TECH) without being told to lower its standards for any board.
This is defense-in-depth and will also help to clean up spam because they like posting ads in META whenever they come here
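A minimal sketch of how per-board context might be folded into the moderation prompt. The board names, descriptions, and prompt wording here are assumptions for illustration, not the Reaper's actual implementation.

```python
# Hypothetical: prepend the home forum's topic description so the model
# can judge off-topic posts without lowering its standards per board.
BOARD_DESCRIPTIONS = {
    "TECH": "Technology discussion: hardware, software, programming.",
    "META": "Site discussion: moderation, features, policy.",
}

def build_prompt(board: str, post_text: str) -> str:
    """Build a moderation prompt that includes the post's home forum."""
    context = BOARD_DESCRIPTIONS.get(board, "General discussion.")
    return (
        f"Board: {board}\n"
        f"Board topic: {context}\n"
        "Judge the post's style and structure, not its ideological content.\n"
        f"Post:\n{post_text}"
    )
```

With this in place, a recipe post lands in the prompt alongside "Technology discussion", which is what lets the model flag food talk in TECH as off-topic.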
nigger
[PL]
I'm a bit skeptical about the whole thing, simply because I've never seen something like it in action, but considering the issue at hand it could be a nice option, so I welcome the addition. As others have already mentioned, maybe it could be tweaked so it can adapt to the type of content, terms and culture of the site. But hey, as long as it does its job properly I don't really mind that much. Context is important, regardless.
And yeah, I do agree using 4chins /g/ as a template for "bad posts" is a good idea. Technology discussion in that place sucks, you rarely get an actual good thread where people discuss a topic in good faith, most of the time it's just shitposting, off-topic garbage, anons calling each other slurs and spam. So anything that can help us avoid that is welcomed.
What LLM/model is the AutoMod system working on? I assume it's something like a 3B model
[RO]
yeah but since it's an llm finetuned by the "Powers That Be", i think it can be overly negative toward otherwise fine and relatively high-quality posts just because they feature the nigger word
Replies:
>>10746
Your post contained the nigger word and the AutoMod did not flag it. I don't think that'll be a problem lmao.
Let me see if I can get flagged for saying nigger in an otherwise constructive post. My post is great. It's necessary to verify whether the AI has truly been made to ignore mentions of hating niggers or calling somebody a nigger, as long as the post that contains the word is productive and provides some sort of value to the thread it's in.
This is an English imageforum.
https://cy-x.net/topic/how-much-matters-at-what-college-did-you-study-in/491?page=1#p10890
This post has been incorrectly flagged for queueing for containing the faggot word, which was actually quoted from another user.
Replies:
>>10905
Resolved.
It's been a bit since this was rolled out, and even though it was temporary, seeing the LLM's response atop each post has been really informative in forming my opinion. I was really looking forward to this, because if it worked out it would've improved the general quality of this forum without harming genuine posters, kicking out any of those "brown hands typed this" fags.
Unfortunately, I don't think this is working well. I see a number of posts queued for things that shouldn't be a problem, from expressions of opinion to simply mentioning one slur in an overall good post. The LLM should be queuing posts that add nothing of value, aren't related to the topic, and are just insulting shitposts. Most of these things can probably be tweaked away, but at what cost? When the LLM does get the power to queue posts, I'd rather scroll through the 10th "brown hands typed this" than get my post queued simply for calling someone a nigger while explaining how what he said was stupid. A poster's way of expression should not be hindered as long as they back it up.
Another thing I have a problem with is that I believe the results are faked. I'm not talking about its responses, but its confidence score. I'm sure many of us have heard the many, many stories of people using LLMs, the model completely fabricating data, and the users just falling for it. They do it all the time. Have you seen a single confidence score below 0.95? Being that confident about EVERY POST is kind of suspicious to me.
In conclusion, I personally think this method just isn't it. Maybe it can be adapted in another way to improve some other aspect of Cyberix, but letting it manage post quality would concern me greatly. I have no doubt those of us on staff would be able to resolve false flags, but waiting on a staff member to read through your post and approve it is annoying and would most likely push away good posters.
Still a lot better than what those soyjak idiots could've done though.
Replies:
>>11215
I recognize the same issues you do.
>but waiting on a staff member to read through your post and approve it is annoying and would most likely push away good posters.
The general consensus appears to be that the current Phase 1 behavior will likely be the final behavior of the system: it will only keep posts or send them to the report queue. This is a balanced approach, and escalating its privileges based on the current results would be a dangerous decision to make.
I'd like to try one last time with this experiment though, based on the approach suggested in .
The new design is multi-dimensional. Instead of asking the model "how confident are you this is bad" it now rates four concrete observable properties:
- relevance (0-10): Does the post engage with the thread topic or quoted posts?
- effort (0-10): Is there actual substance? Reasoning, experience, information, argument?
- novelty (0-10): Does it add something new or is it pure repetition and bloat?
- civility (0-10): Is it a content-free attack? (Low civility alone never queues a post)
A post calling someone a nigger while making a coherent argument would score something like R:8 E:7 N:5 C:2, and the math keeps it. "Brown hands typed this" would score R:1 E:1 N:1 C:1 and gets queued. Your confidence score concern should disappear entirely, because there is no longer a single confidence score; instead there are specific dimensions you can read and contest individually.
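A minimal sketch of how "the math" might combine the four ratings. The threshold and the exact rule are assumptions; the thread only states that low civility alone never queues a post.

```python
def decide(relevance: int, effort: int, novelty: int, civility: int) -> str:
    """Return 'keep' or 'queue' from four 0-10 ratings.

    Civility is deliberately excluded from the queue decision, so a
    coherent but uncivil post is kept, while a content-free attack
    (low on every substance axis) gets queued.
    """
    substance = relevance + effort + novelty  # 0-30, civility ignored
    return "keep" if substance >= 6 else "queue"  # threshold is assumed

# The two examples from the post:
decide(8, 7, 5, 2)  # coherent but uncivil -> "keep"
decide(1, 1, 1, 1)  # "brown hands typed this" -> "queue"
```

The design choice worth noting is that civility never appears in the decision at all, which is the mechanical guarantee behind "low civility alone never queues a post".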
This has been deployed already and you will be able to see the new stamp format on new posts. I will make the system re-analyze the last 100 posts for comparisons.
Replies:
>>11797
we are now going to blacklist your entire fucking language
Sorry to rain on your parades, but I used ChatGPT to write that comment, just to see how it would be detected.
Replies:
>>11804
[RS]
the one case where a synthetic effortpost is helpful
Can I ask why you opted for an LLM instead of a more specific ML model? If you want something that detects low quality posts you can train a recurrent neural network or something similarly capable of pattern recognition on text strings, then have it output a 'quality' score directly. That score would be representative of how similar the input is to your training data full of 'brown hands' type posts.
The ability to moderate/flag posts in this way existed long before the current AI hype.
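As a toy stand-in for the classical-ML approach described above: instead of an LLM, a post can be scored by its similarity to a corpus of known bad posts. A real system would train an RNN or another text classifier; this bag-of-words overlap is only an illustration of outputting a 'quality' score directly, and the example corpus is made up.

```python
from collections import Counter

# Hypothetical training corpus of known low-quality posts.
BAD_POSTS = [
    "brown hands typed this",
    "off topic garbage spam",
    "this thread sucks",
]

def _bag(text: str) -> Counter:
    """Lowercased bag-of-words representation of a post."""
    return Counter(text.lower().split())

# Merge all bad posts into one reference bag.
_BAD_BAG = sum((_bag(p) for p in BAD_POSTS), Counter())

def quality_score(post: str) -> float:
    """Return 0.0-1.0; higher means less similar to the bad corpus."""
    words = _bag(post)
    if not words:
        return 0.0
    overlap = sum(min(words[w], _BAD_BAG[w]) for w in words)
    return 1.0 - overlap / sum(words.values())
```

A trained classifier generalizes far better than this word-overlap trick, but the interface is the same: text in, quality score out, no generated rationale.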
Replies:
>>12384
[DE]
Purely because it provided a reason for giving a post that kind of score.
I have some hope that letting a bad poster know why their post scored poorly and was queued for moderation would help them improve their conduct and make a better post next time. It provides constructive feedback that some people might legitimately benefit from.
I had the automod call one of my posts 'rambling, but on-topic' earlier, which I find funny, but at the same time it goes to show that attempting to distill all posts into what an LLM believes to be quality posting would just result in all posts eventually converging to appeal to the AI, not to posters. Which would defeat the point of a site like this one.
[US-FL]
Oh, and the other thing: absolutely keep the LLM's explanation of what the post contains if you move forward with it, because having an AI judge a post and then delete it for poorly defined reasons is why YouTube is such a hellhole right now.
[US-FL]