Forum:IP reader restrictions
Recently a sysadmin, Ciencia Al Poder from Spanish Uncyc, implemented a restriction on IP readers in order to stop AI bots from slowing down Uncyclopedia. Some AI bots crawl through as many pages on the Internet as possible in order to "learn" or whatever it is they do. This has substantially increased traffic on our site, causing it to run slower, according to some people. I personally did not experience this slowdown, and I was active during the time it occurred, but maybe some others did.
The restriction simply blocks IP readers if they try to open a non-mainspace page, such as clicking "View history", giving them an error message instead of loading the page (a rough sketch of the mechanism is below). While this appears to have solved the problem of AI bots increasing traffic, it has made navigating our site virtually impossible for IP readers and editors, and it is certainly a turn-off to potential new Uncyclopedians. If someone new to our site wants to explore and see how everything works before creating an account, like I did, they will be stopped by this restriction.
Furthermore, this restriction was implemented without any discussion with the Uncyclopedia community. As far as I can tell, there were just a few brief messages from a few users in the help section of the Uncyc Discord. This was a huge change to how our site functions, one that should have required at least a discussion before being implemented. I believe we should remove this restriction and only implement it again if we see a more significant slowdown of the site, and I would like to hear what everyone else thinks of this restriction. Thank you. Cheers. MrX 22:46, 29 May 2025 (UTC)
TO BE CLEAR, the error messages I'm talking about are those "429 Too Many Requests" errors some of you have been getting when browsing Uncyc. They were implemented INTENTIONALLY. MrX 14:06, 30 May 2025 (UTC)
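For anyone wondering what the restriction looks like mechanically, here is a minimal sketch in Python (a generic WSGI middleware): logged-out requests to anything outside mainspace get answered with a 429 instead of the page. The URL layout, cookie check, and blocked-path markers are assumptions for illustration, not the actual configuration Ciencia deployed.

<syntaxhighlight lang="python">
# Hypothetical sketch of the IP-reader restriction -- NOT the deployed
# config; URL layout, cookie name, and path markers are assumptions.

MAINSPACE_PREFIX = "/wiki/"                      # assumed article URL layout
BLOCKED_MARKERS = ("Special:", "action=history", "action=edit")

def restriction_middleware(app):
    """Answer logged-out requests outside mainspace with 429."""
    def wrapper(environ, start_response):
        path = environ.get("PATH_INFO", "")
        query = environ.get("QUERY_STRING", "")
        # Assumption: a session cookie is what marks a logged-in user.
        logged_in = "UserID" in environ.get("HTTP_COOKIE", "")

        restricted = (not path.startswith(MAINSPACE_PREFIX)
                      or any(m in path or m in query for m in BLOCKED_MARKERS))
        if restricted and not logged_in:
            start_response("429 Too Many Requests",
                           [("Content-Type", "text/plain")])
            return [b"429 Too Many Requests"]
        return app(environ, start_response)
    return wrapper
</syntaxhighlight>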
- Could those bots be where all those recent 429 errors are coming from? Oᑭöᔑᔑᙀᙏ (vandalize my talk page) 22:56, 29 May 2025 (UTC)
- I believe that is possible. I hadn't gotten any in a long time, but to be fair, maybe others did. MrX 22:57, 29 May 2025 (UTC)
- Ciencia said that there wouldn't be a problem if the bots only crawled article pages, since those are cached. I don't know much about backend web development, but would it not be possible to serve IP users cached versions of non-mainspace pages? It sounds like that could work well based on what I heard from Ciencia (and if my intuition about how websites work isn't completely wrong). SystemPhantom (talk) 23:07, 29 May 2025 (UTC)
- Maybe that could work. In Discord, I've been asking about alternative solutions to the AI bot problem, but Ciencia doesn't seem to think anything else will work. MrX 13:53, 30 May 2025 (UTC)
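To make the caching point above concrete: article pages can be served from a cache, while histories and special pages are generated fresh on every request. Below is a toy sketch of the difference; every name in it is an assumption for illustration, and real wiki caching (e.g. a Varnish layer in front of MediaWiki) is far more involved.

<syntaxhighlight lang="python">
import time

CACHE_TTL = 300     # seconds; assumed value
_cache = {}         # url -> (rendered_html, cached_at)

def serve_anonymous(url, render_page):
    """Serve logged-out readers from a cache, re-rendering only on expiry.

    render_page stands in for the expensive, database-backed page render.
    History views and special pages change on every request, so caching
    them either serves stale data or never hits -- roughly why crawlers
    hammering only article pages were said to be harmless.
    """
    hit = _cache.get(url)
    if hit is not None and time.time() - hit[1] < CACHE_TTL:
        return hit[0]                   # cache hit: no database work at all
    html = render_page(url)             # cache miss: do the expensive work
    _cache[url] = (html, time.time())
    return html
</syntaxhighlight>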
- I've seen some folks using Anubis (a proof-of-work challenge) to keep the scrapers away; wouldn't this be viable, even though it requires server-side configuration? Coco (talk) 23:10, 29 May 2025 (UTC)
- Maybe. What all would Anubis entail? The link only offers a little information, unless I'm missing something. MrX 13:54, 30 May 2025 (UTC)
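As I understand it from the description above, Anubis-style tools sit in front of the site and make each new visitor's browser solve a small proof-of-work puzzle in JavaScript before pages are served: cheap for one human, costly for a crawler opening many sessions, and a hard stop for scrapers that don't run JavaScript at all. A toy sketch of such a puzzle, with assumed parameters rather than Anubis's actual ones:

<syntaxhighlight lang="python">
import hashlib
import secrets

DIFFICULTY = 16   # leading zero bits required; assumed, not Anubis's setting

def make_challenge():
    """Server issues a random challenge to each new visitor."""
    return secrets.token_hex(16)

def verify(challenge, nonce):
    """Cheap server-side check that the work was actually done."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def solve(challenge):
    """Brute-force a valid nonce -- the CPU cost the visitor pays once.

    At 16 bits this is ~65,000 hashes on average: unnoticeable for one
    person, but it adds up fast for a crawler opening many sessions.
    """
    nonce = 0
    while not verify(challenge, nonce):
        nonce += 1
    return nonce
</syntaxhighlight>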
- Perhaps we could force all IP users to do a captcha or smth like that so that we don't have to kneecap IP users? Data Devourer (talk) 23:22, 29 May 2025 (UTC)
- Yeah maybe. Captcha would be less than ideal but better than what we currently have. I still think we should remove the restriction entirely, then implement a captcha if the site slows down too much. MrX 13:56, 30 May 2025 (UTC)
- Just let my ballz IP run around for an hour or two. if it creates garbage articles then it is most certainly a robot. ColonelKurtz (talk) 23:24, 29 May 2025 (UTC)
- ColonelKurtz is a robot, got it 23:40, 29 May 2025 (UTC)
- nnonononono if someone accuses someone else of being a robot the accuser is psychologically projecting ColonelKurtz (talk) 23:59, 29 May 2025 (UTC)
- When browsing the site, I keep encountering "429 Too Many Requests". Imposing restrictions seems like a move of last resort? C780178 09:46, 30 May 2025 (UTC)
- @C780178: Are you getting those 429 errors while logged in? Those are the very error messages I'm talking about. Those were implemented intentionally. MrX 13:59, 30 May 2025 (UTC)
- Not so much anymore. Oᑭöᔑᔑᙀᙏ (vandalize my talk page) 13:58, 30 May 2025 (UTC)
@MrX: I have an account on the Spanish Uncyclopedia. Maybe I could ask the guy who made those restrictions if there are viable alternatives. (And yeah, I know the guy is on Discord too, but maybe it's better to reach out through the wiki, idk, it's just an idea I had.) 🎄🎄🎄DaniPine3 (talk)🎄🎄🎄 14:12, 30 May 2025 (UTC)
- That would be helpful and appreciated! I also have an account there (soy SrX) that I've made like 5 edits with and haven't used in a long time, maybe I'll chime in there too if it helps. MrX 14:14, 30 May 2025 (UTC)
- every time I'm logged out I can't go to recent changes or else my computer acts special and loads a 429 error for some reason (talk) 18:13, 30 May 2025 (UTC)
- Yeah, that's exactly what I'm talking about. The reason is the restriction that we have implemented. MrX 18:14, 30 May 2025 (UTC)
- but couldn't the bots just go through my contributions, for example, and just click on every page I have ever read or edited? (talk) 18:15, 30 May 2025 (UTC)
- No, because your contributions page is a "special page" and they get rate limited on those. 18:42, 30 May 2025 (UTC)
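For context, "rate limited" here typically means something like a per-IP token bucket: each request spends a token, tokens refill slowly over time, and an empty bucket yields a 429. A toy sketch with assumed numbers, not our actual limits:

<syntaxhighlight lang="python">
import time

RATE = 1.0      # tokens refilled per second (assumed)
BURST = 10.0    # bucket capacity (assumed)
_buckets = {}   # ip -> (tokens, last_seen)

def allow_request(ip):
    """Token bucket per IP: each request costs a token; empty bucket -> 429."""
    now = time.time()
    tokens, last = _buckets.get(ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)   # refill since last hit
    if tokens < 1.0:
        _buckets[ip] = (tokens, now)
        return False        # caller answers with 429 Too Many Requests
    _buckets[ip] = (tokens - 1.0, now)
    return True
</syntaxhighlight>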