Wikipedia:Village pump (all)

From Wikipedia, the free encyclopedia

This is the Village pump (all) page which lists all topics for easy viewing. Go to the village pump to view a list of the Village Pump divisions, or click the edit link above the section you'd like to comment in. To view a list of all recent revisions to this page, click the history link above and follow the on-screen directions.


Village pump sections
Policy – Discuss existing and proposed policies
Technical – Discuss technical issues about Wikipedia
Proposals – Discuss new proposals that are not policy-related
Idea lab – Incubate new ideas before formally proposing them
WMF – Discuss issues involving the Wikimedia Foundation
Miscellaneous – Post messages that do not fit into any other category
Other help and discussion locations
I want...	Then go to...
...help using or editing Wikipedia	→ Teahouse (for newer users) or Help desk (for experienced users)
...to find my way around Wikipedia	→ Department directory
...specific facts (e.g. Who was the first pope?)	→ Reference desk
...constructive criticism from others for a specific article	→ Peer review
...help resolving a specific article edit dispute	→ Requests for comment
...to comment on a specific article	→ Article's talk page
...to view and discuss other Wikimedia projects	→ Wikimedia Meta-Wiki
...to learn about citing Wikipedia in a bibliography	→ Citing Wikipedia
...to report sites that copy Wikipedia content	→ Mirrors and forks
...to ask questions or make comments	→ Questions


Discussions older than 7 days (measured from the most recent comment) are moved to a subpage of each section (called (section name)/Archive).

Policy

Should WP:Demonstrate good faith include mention of AI-generated comments?

Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.

Should WP:DGF be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)[reply]

No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)[reply]
Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)[reply]
And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)[reply]
Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt, asking the chatbot to argue against that comment, and just posting the output here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)[reply]
Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)[reply]
I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (uc) 14:31, 2 January 2025 (UTC)[reply]
I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)[reply]
By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)[reply]
I think bloodofox's comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)[reply]
Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Wikipedia to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)[reply]
My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)[reply]
Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Wikipedia. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)[reply]
I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
I'm not mocking anybody, nor am I advocating to let chatbots run rampant. I'm utterly confused why you think I might advocate for selling Wikipedia to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)[reply]
So we're in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way, because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)[reply]
No, this is not an everyone else is the problem, not me issue, because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
I'm not familiar with Linkedin threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)[reply]
In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Wikipedia's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article.
It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)[reply]
LLMs don't understand Wikipedia's policies and norms They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Wikipedia does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Wikipedia. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, WaltClipper -(talk) 14:33, 15 January 2025 (UTC)[reply]
You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagandizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto, that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)[reply]
The claim that the acronym "fear, uncertainty and doubt" is used in precisely two contexts is factually incorrect.
FUD both predates AI by many decades (indeed, if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s, and its use in technology contexts originated in 1975 with mainframe computer systems). The notion that its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk); examples can be found in these sprawling discussions from those opposing AI use on Wikipedia. Thryduulf (talk) 14:52, 14 January 2025 (UTC)[reply]
Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (uc) 02:35, 2 January 2025 (UTC)[reply]
  • Yes because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)[reply]
    Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)[reply]
Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)[reply]
That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)[reply]
I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Wikipedia. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)[reply]
How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI generated or not? If it doesn't make a good point, why does it matter if it was AI generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)[reply]
Wikipedia is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)[reply]
You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)[reply]
"why does it matter if it was AI generated or not?"
Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them.
"How will they be enforceable? "
WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)[reply]
The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)[reply]
Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "offering new insights or advancing scholarly understanding" and "merely" reiterating what other sources have written.
Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which looks ok. Except it only superficially looks ok, it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)[reply]
Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)[reply]
But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)[reply]
True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be [AI-generated]" part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)[reply]
All of which was discovered because of my suspicions from their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
"Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)[reply]
I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
But... do you actually think they're doing this for the purpose of intentionally harming Wikipedia? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)[reply]
The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below in your own words"
Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)[reply]
Wikipedia:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)[reply]
"Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)[reply]
It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful."
So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Wikipedia. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)[reply]
Trying to hurt Wikipedia doesn't mean they have to literally think "I am trying to hurt Wikipedia", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Wikipedia, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)[reply]
Sure, I'd count that as a case of "trying to hurt Wikipedia-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)[reply]
  • The issues with AI in discussions is not related to good faith, which is narrowly defined to intent. CMD (talk) 04:45, 2 January 2025 (UTC)[reply]
    In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥  05:02, 2 January 2025 (UTC)[reply]
    Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post being responded to by an LLM post, I believe both users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)[reply]
    All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. Remsense ‥  05:09, 2 January 2025 (UTC)[reply]
    Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)[reply]
    The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated", is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)[reply]
    ...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)[reply]
    Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)[reply]
    This is just semantics.
    For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
    The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)[reply]
    Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)[reply]
    Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)[reply]
    LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user posted in this thread earlier, as well as started a disruptive thread here and posted here, all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)[reply]
    LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)[reply]
    A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)[reply]
No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)[reply]
It's rarely productive to get mad at someone on Wikipedia for any reason, but if someone uses an LLM and it screws up their comment they don't get any pass just because the LLM screwed up and not them. You are fully responsible for any LLM content you sign your name under. -- LWG talk 05:19, 1 February 2025 (UTC)[reply]
No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)[reply]
  • Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Wikipedia. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)[reply]
    That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)[reply]
    WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)[reply]
    I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)[reply]
    ... but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia: That problem existed with some humans even prior to LLMs. —Bagumba (talk) 02:53, 20 January 2025 (UTC)[reply]
  • No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)[reply]
  • Yes Using a 3rd party service to contribute to the Wikipedia on your behalf is clearly bad-faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)[reply]
    It's a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)[reply]
    That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)[reply]
  • Comment Large language model AIs like ChatGPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)[reply]
  • No – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)[reply]
    There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
    We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
    The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)[reply]
    Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)[reply]
    Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not inline with general Wikipedia values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)[reply]
  • No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)[reply]
  • No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)[reply]
  • No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)[reply]
  • Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.
It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)[reply]
Indeed, most kinds of actions don't inherently demonstrate good or bad faith. The circumspect and neutral observation that AI use is not a demonstration of bad faith... but it is equally not a "demonstration of good faith", does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)[reply]
  • Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)[reply]
  • Yes. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseult Δx talk to me 01:26, 10 January 2025 (UTC)[reply]
    Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)[reply]
  • No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)[reply]
    Using a washing machine still results in washed clothes. Using LLMs results in communication failures because the LLM-using party isn't fully engaging. Hydrangeans (she/her | talk | edits) 04:50, 27 January 2025 (UTC)[reply]
    And before there's a reply of 'the washing machine-using party isn't fully engaging in washing clothes'—washing clothes is a material process. The clothes get washed whether or not you pay attention to the suds and water. Communication is a social process. Users can't come to a meeting of the minds if some of the users outsource the 'thinking' to word salad-generators that can't think. Hydrangeans (she/her | talk | edits) 05:00, 27 January 2025 (UTC)[reply]
  • No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)[reply]
To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but using AI should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)[reply]
@Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, WaltClipper -(talk) 14:08, 14 January 2025 (UTC)[reply]
  • No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, WaltClipper -(talk) 14:43, 13 January 2025 (UTC)[reply]
  • Yes. If I plug another user's comments into an LLM and ask it to generate a response, I am not participating in the project in good faith. By failing to meaningfully engage with the other user by reading their comments and making an effort to articulate myself, I'm treating the other user's time and energy frivolously. We should advise users that refraining from using LLMs is an important step toward demonstrating good faith. Hydrangeans (she/her | talk | edits) 04:55, 27 January 2025 (UTC)[reply]
  • Yes per Hydrangeans among others. Good faith editing requires engaging collaboratively with your human faculties. Posting an AI comment, on the other hand, strikes me as deeply unfair to those of us who try to engage substantively when there is disagreement. Let's not forget that editor time and energy and enthusiasm are our most important resources. If AI is not meaningfully contributing to our discussions (and I think there is good reason to believe it is not) then it is wasting these limited resources. I would therefore argue that using it is full-on WP:DISRUPTIVE if done persistently enough –– on par with e.g. WP:IDHT or WP:POINT –– but at the very least demonstrates an unwillingness to display good faith engagement. That should be codified in the guideline. Generalrelative (talk) 04:59, 28 January 2025 (UTC)[reply]
  • I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others.

I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith. When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them. It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it.

Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical. It's easy to detect for most, can be pointed out as needed. No need to add an extra policy. JayCubby

  • Questions: While I would agree that AI may be used as a tool for good, such as leveling the field for those with certain disabilities, might it just as easily be used as a tool for disruption? What evidence exists that shows whether or not AI may be used to circumvent certain processes and requirements that make Wiki a positive collaboration of new ideas as opposed to a toxic competition of trite but effective logical fallacies? Cheers. DN (talk) 05:39, 27 January 2025 (UTC)[reply]
    AI can be used to engage positively, it can also be used to engage negatively. Simply using AI is therefore not, in and of itself, an indication of good or bad faith. Anyone using AI to circumvent processes and requirements should be dealt with in the exact same way they would be if they circumvented those processes and requirements using any other means. Users who are not circumventing processes and requirements should not be sanctioned or discriminated against as if they were. Using a tool that others could theoretically use to cause harm or engage in bad faith does not mean that they are causing harm or engaging in bad faith. Thryduulf (talk) 08:05, 27 January 2025 (UTC)[reply]
    Well said. Thanks. DN (talk) 08:12, 27 January 2025 (UTC)[reply]
    As Hydrangeans explains above, an auto-answer tool means that the person is not engaging with the discussion. They either cannot or will not think about what others have written, and they are unable or unwilling to reply themselves. I can chat to an app if I want to spend time talking to a chatbot. Johnuniq (talk) 22:49, 27 January 2025 (UTC)[reply]
    And as I and others have repeatedly explained, that is completely irrelevant to this discussion. You can use AI in multiple different ways, some of which are productive contributions to Wikipedia, some of which are not. If someone is disruptively not engaging with discussion then they can already be sanctioned for doing so, what tools they are or are not using to do so could not be less relevant. Thryduulf (talk) 02:51, 28 January 2025 (UTC)[reply]
    This implies a discussion that is entirely between AI chatbots deserves the same attention and thought needed to close it, and can effect a consensus just as well, as one between humans, so long as its arguments are superficially reasonable and not disruptive. It implies that editors should expect and be comfortable with arguing with AI when they enter a discussion, and that they should not expect to engage with anyone who can actually comprehend them... JoelleJay (talk) 01:00, 28 January 2025 (UTC)[reply]
    That's a straw man argument, and if you've been following the discussion you should already know that. My comment implied absolutely none of what you claim it does. If you are not prepared to discuss what has actually been written then I am not going to waste more of my time replying to you in detail. Thryduulf (talk) 02:54, 28 January 2025 (UTC)[reply]
    It's not a strawman; it's an example that demonstrates, acutely, the flaws in your premise. Hydrangeans (she/her | talk | edits) 03:11, 28 January 2025 (UTC)[reply]
    If you think that demonstrates a flaw in the premise then you haven't understood the premise at all. Thryduulf (talk) 03:14, 28 January 2025 (UTC)[reply]
    I disagree. If you think it doesn't demonstrate a flaw, then you haven't understood the implications of your own position or the purpose of discussion on Wikipedia talk pages. Hydrangeans (she/her | talk | edits) 03:17, 28 January 2025 (UTC)[reply]
    I refuse to waste any more of my time on you. Thryduulf (talk) 04:31, 28 January 2025 (UTC)[reply]
    Both of the above users are correct. If we have to treat AI-generated posts in good faith the same as human posts, then a conversation of posts between users that is entirely generated by AI would have to be read by a closing admin and their consensus respected provided it didn't overtly defy policy. Photos of Japan (talk) 04:37, 28 January 2025 (UTC)[reply]
    You too have completely misunderstood. If someone is contributing in good faith, we treat their comments as having been left in good faith regardless of how they made them. If someone is contributing in bad faith we treat their comments as having been left in bad faith regardless of how they made them. Simply using AI is not an indication of whether someone is contributing in good or bad faith (it could be either). Thryduulf (talk) 00:17, 29 January 2025 (UTC)[reply]
    But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency, which is the problem with comments that are generated by AI rather than merely assisted by AI. Photos of Japan (talk) 00:31, 29 January 2025 (UTC)[reply]
    But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency exactly. It is the operator who acts in good or bad faith, and simply using a bot is not evidence of good faith or bad faith. What determines good or bad faith is the content not the method. Thryduulf (talk) 11:56, 29 January 2025 (UTC)[reply]
    But if the bot operator isn't generating their own comments, then their faith doesn't matter; the bot's does. Just like how if I hired someone to edit Wikipedia for me, what would matter is their faith. Photos of Japan (talk) 14:59, 30 January 2025 (UTC)[reply]
    A bot and AI can both be used in good faith and in bad faith. You can only tell which by looking at the contributions in their context, which is exactly the same as contributions made without the use of either. Thryduulf (talk) 23:12, 30 January 2025 (UTC)[reply]
    Not to go off topic, but do you object to any requirements on users for disclosure of use of AI generated responses and comments etc...? DN (talk) 02:07, 31 January 2025 (UTC)[reply]
    I'm not in favour of completely unenforceable requirements that would bring no benefits. Thryduulf (talk) 11:38, 31 January 2025 (UTC)[reply]
    Is it a demonstration of good faith to copy someone else's (let's say public domain and relevant) argument wholesale and paste it in a discussion with no attribution as if it was your original thoughts?
    Or how about passing off a novel mathematical proof generated by AI as if you wrote it by yourself? JoelleJay (talk) 02:51, 29 January 2025 (UTC)[reply]
    Specific examples of good or bad faith contributions are not relevant to this discussion. If you do not understand why this is then you haven't understood the basic premise of this discussion. Thryduulf (talk) 12:00, 29 January 2025 (UTC)[reply]
    If other actions where someone is deceptively appropriating, word-for-word, an entire argument they did not write, are intuitively "not good faith", then why would it be any different in this scenario? JoelleJay (talk) 16:57, 1 February 2025 (UTC)[reply]
    This discussion is explicitly about whether use of AI should be regarded as an indicator of bad faith. Someone deceptively appropriating, word-for-word, an entire argument they did not write is not editing in good faith. It is completely irrelevant whether they do this using AI or not. Nobody is arguing that some uses of AI are bad faith - specific examples are neither relevant nor useful. For simply using AI to be regarded as an indicator of bad faith then all uses of AI must be in bad faith, which they are not (as multiple people have repeatedly explained).
    Everybody agrees that some people who edit using mobile phones do so in bad faith, but we don't regard simply using a mobile phone as evidence of editing in bad faith because some people who edit using mobile phones do so in good faith. Listing specific examples of bad faith use of mobile phones is completely irrelevant to a discussion about that. Replace "mobile phones" with "AI" and absolutely nothing changes. Thryduulf (talk) 18:18, 1 February 2025 (UTC)[reply]
    Except the mobile phone user is actually doing the writing. Hydrangeans (she/her | talk | edits) 19:39, 1 February 2025 (UTC)[reply]
    I know I must be sounding like a stuck record at this point, but there are only so many ways you can describe completely irrelevant things as completely irrelevant before that happens. The AI system is incapable of having faith, good or bad, in the same way that a mobile phone is incapable of having faith, good or bad. The faith comes from the person using the tool not from the tool itself. That faith can be either good or bad, but the tool someone uses does not and cannot tell you anything about that. Thryduulf (talk) 20:07, 1 February 2025 (UTC)[reply]
    That is a really good summary of the situation. Using a widely available and powerful tool does not mean you are acting in bad faith, it is all in how it is used. PackMecEng (talk) 02:00, 28 January 2025 (UTC)[reply]
    A tool merely being widely available and powerful doesn't mean it's suited to the purpose of participating in discussions on Wikipedia. By way of analogy, Infowars is/was widely available and powerful, in the sense of the influence it exercised over certain Internet audiences, but its very character as a disinformation platform makes it unsuitable for citation on Wikipedia. LLMs are widely available and might be considered 'powerful' in the sense that they can manage a raw output of vaguely plausible-sounding text, but their very character as text prediction models—rather than actual, deliberated communication—makes them unsuitable mechanisms for participating in Wikipedia discussions. Hydrangeans (she/her | talk | edits) 03:16, 28 January 2025 (UTC)[reply]
    Even if we assume your premise is true, that does not indicate that someone using an LLM (which come in a wide range of abilities and are only a subset of AI) is contributing in either good or bad faith. It is completely irrelevant to the faith in which they are contributing. Thryduulf (talk) 04:30, 28 January 2025 (UTC)[reply]
    But this isn’t about if you think its a useful tool or not. This is about if someone uses one are they automatically acting in bad faith. We can argue the merits and benefits of AI all day, and they certainly have their place, but nothing you said struck at the point of this discussion. PackMecEng (talk) 13:59, 28 January 2025 (UTC)[reply]
Yes. To echo someone here, no one signed up here to argue with bad AI chat bots. If you're a non-native speaker running your posts through ChatGPT for spelling and grammar, that's one thing, but wasting time bickering with AI slop is an insult. Hydronym89 (talk) 16:33, 28 January 2025 (UTC)[reply]
Your comment provides good examples of using AI in good and bad faith, thus demonstrating that simply using AI is not an indication of either. Thryduulf (talk) 00:18, 29 January 2025 (UTC)[reply]
Is that a fair comparison? I disagree that it is. Spelling and grammar checking doesn't seem to be what we are talking about.
The importance of context in which it is being used is, I think, the part that may be perceived as falling through the cracks in relation to AGF or DGF, but I agree there is a legitimate concern for AI being used to game the system in achieving goals that are inconsistent with being WP:HERE.
I think we all agree that time is a valuable commodity that should be respected, but not at the expense of others. Using a bot to fix grammar and punctuation is acceptable because it typically saves more time than it costs. Using AI to enable endless debates, even if both opponents are using it, seems like an awful waste of space, let alone the time it would cost admins that need to sort through it all. DN (talk) 01:16, 29 January 2025 (UTC)[reply]
Engaging in endless debates that waste the time of other editors is disruptive, but this is completely irrelevant to this discussion for two reasons. Firstly, someone engaging in this behaviour may be doing so in either good or bad faith: someone intentionally doing so is almost certainly WP:NOTHERE, and we regularly deal with such people. Other people sincerely believe that their arguments are improving Wikipedia and/or that the people they are arguing with are trying to harm it. This doesn't make it less disruptive but equally doesn't mean they are contributing in bad faith.
Secondly, this behaviour is completely independent of whether someone is using AI or not: some people engaging in this behaviour are using AI, some are not. Some people who use AI engage in this behaviour, some do not.
For the perfect illustration of this see the people in this discussion who are making extensive arguments in good faith, without using AI, while having not understood the premise of the discussion - despite this being explained to them multiple times. Thryduulf (talk) 12:13, 29 January 2025 (UTC)[reply]
Would you agree that using something like grammar and spellcheck is not the same as using AI (without informing other users) to produce comments and responses? DN (talk) 22:04, 29 January 2025 (UTC)[reply]
They are different uses of AI, but that's not relevant because neither use is, in and of itself, evidence of the faith in which the user is contributing. Thryduulf (talk) 22:14, 29 January 2025 (UTC)[reply]
You are conflating "evidence" with "proof". Using AI to entirely generate your comments is not "proof" of bad faith, but it definitely provides less "evidence" of good faith than writing out a comment yourself. Photos of Japan (talk) 03:02, 30 January 2025 (UTC)[reply]
No, it provides no evidence of good or bad faith at all. Thryduulf (talk) 12:54, 30 January 2025 (UTC)[reply]
  • No per WP:CREEP. After reading the current version of the section, it doesn't seem like the right place to say anything about AI. -- King of ♥ 01:05, 29 January 2025 (UTC)[reply]
  • Yes, with caveats this discussion seems to be spiraling into a discussion of several separate issues. I agree with Remsense and Simonm223 and others that using an LLM to generate your reply to a discussion is inappropriate on Wikipedia. Wikipedia runs on consensus, which requires communication between humans to arrive at a shared understanding. Putting in the effort to fully understand and respond to the other parties is an essential part of good-faith engagement in the consensus process. If I hired a human ghost writer to use my Wiki account to argue for my desired changes on a wiki article, that would be completely inappropriate, and using an AI to replace that hypothetical ghost writer doesn't make it any more acceptable. With that said, I understand this discussion to be about how to encourage editors to demonstrate good faith. Many of the people here on both sides seem to think we are discussing banning or encouraging LLM use, which is a different conversation. In the context of this discussion demonstrating good faith means disclosing LLM use and never using LLMs to generate replies to any contentious discussion. This is a subset of "articulating your honest motives" (since we can't trust the AI to accurately convey your motives behind your advocacy) and "avoidance of gaming the system" (since using an LLM in a contentious discussion opens up the concern that you might simply be using minimal effort to waste the time of those who disagree with you and win by exhaustion). I think it is appropriate to mention the pitfalls of LLM use in WP:DGF, though I do not at this time support an outright ban on its use. -- LWG talk 05:19, 1 February 2025 (UTC)[reply]
  • No. For the same reason I oppose blanket statements about bans of using AI elsewhere, it is not only a huge overreach but fundamentally impossible to enforce. I've seen a lot of talk around testing student work to see if it is AI, but that is impossible to do reliably. When movable type and the printing press began replacing scribes, the handwriting of scribes began to look like that of a printing press. As AI becomes more prominent, I imagine human writing will begin to look more AI generated. People who use AI for things like helping them translate their native writing into English should not be punished if something leaks through that makes the use obvious. Like anywhere else on the Internet, I foresee any strict rules against the use of AI quickly being used in bad faith in heated arguments to accuse others of being a bot.
GeogSage (⚔Chat?⚔) 19:12, 2 February 2025 (UTC)[reply]

[tangent] If any of the people who have used LLMs/AI tools would be willing to do me a favor, please see the request at Wikipedia talk:Large language models#For an LLM tester. I think this (splitting a very long page – not an article – by date) is something that will be faster and more accurately done by a script than by a human. WhatamIdoing (talk) 18:25, 29 January 2025 (UTC)[reply]

  • Yes. The purpose of a discussion forum is for editors to engage with each other; fully AI-generated responses serve no purpose but to flood the zone and waste people's time, meaning they are, by definition, bad faith. Obviously this does not apply to light editing, but that's not what we're actually discussing; this is about fully AI-generated material, not about people using grammar and spellchecking software to clean up their own words. No one has come up with even the slightest rationale for why anyone would do so in good faith - all they've provided is vague "but it might be useful to someone somewhere, hypothetically" - which is, in fact, false, as their total inability to articulate any such case shows. And the fact that some people are determined to defend it regardless shows why we do in fact need a specific policy making clear that it is inappropriate. --Aquillion (talk) 19:08, 2 February 2025 (UTC)[reply]

Contacting/discussing organizations that fund Wikipedia editing

I have seen it asserted that contacting another editor's employer is always harassment and therefore grounds for an indefinite block without warning. I absolutely get why we take it seriously and 99% of the time this norm makes sense. (I'm using the term "norm" because I haven't seen it explicitly written in policy.)

In some cases there is a conflict between this norm and the ways in which we handle disruptive editing that is funded by organizations. There are many types of organizations that fund disruptive editing - paid editing consultants, corporations promoting themselves, and state propaganda departments, to name a few. Sometimes the disruption is borderline or unintentional. There have been, for instance, WMF-affiliated outreach projects that resulted in copyright violations or other crap being added to articles.

We regularly talk on-wiki and off-wiki about organizations that fund Wikipedia editing. Sometimes there is consensus that the organization should either stop funding Wikipedia editing or should significantly change the way they're going about it. Sometimes the WMF legal team sends cease-and-desist letters.

Now here's the rub: Some of these organizations employ Wikipedia editors. If a view is expressed that the organizations should stop the disruptive editing, it is foreseeable that an editor will lose a source of income. Is it harassment for an editor to say "Organization X should stop/modify what it's doing to Wikipedia?" at AN/I? Of course not. Is it harassment for an editor to express the same view in a social media post? I doubt we would see it that way unless it names a specific editor.

Yet we've got this norm that we absolutely must not contact any organization that pays a Wikipedia editor, because this is a violation of the harassment policy. Where this leads is a bizarre situation in which we are allowed to discuss our beef with a particular organization on AN/I but nobody is allowed to email the organization even to say, "Hey, we're having a public discussion about you."

I propose that if an organization is reasonably suspected to be funding Wikipedia editing, contacting the organization should not in and of itself be considered harassment. I ask that in this discussion, we not refer to real cases of alleged harassment, both to avoid bias-inducing emotional baggage and to prevent distress to those involved. Clayoquot (talk | contribs) 03:29, 22 January 2025 (UTC)[reply]

I'm not sure the posed question is actually the relevant one. Take as a given that Acme Co. is spamming Wikipedia. Sending Acme Co. a strongly worded letter to cut it out could potentially impact the employment of someone who edits Wikipedia, but is nonspecific as to who. I'd liken this to saying, "Amazon should be shut down." It will doubtless affect SOME Wikipedia editor, but it never targeted them. This should not be sanctioned.
The relevant question is if you call out a specific editor in connection. If AcmeLover123 is suspected or known to be paid by Acme Co. to edit Wikipedia, care should be taken in how it's handled. Telling AcmeLover123, "I'm going to tell your boss to fire you because you're making them look bad" is pretty unambiguous WP:HARASSMENT, and has a chilling effect like WP:NLT. Thus, it should be sanctioned. On the other hand, sending Acme Co. that strongly worded letter and then going to WP:COIN to say, "Acme Co. has been spamming Wikipedia lately. I sent them a letter telling them to stop. AcmeLover123 has admitted to being in the employ of Acme Co." This seems to me to be reasonable. So I think just as WP:NLT has no red-line rule of "using these words means it's a legal threat", contacting an employer should likewise be considered on a case-by-case basis. EducatedRedneck (talk) 14:20, 28 January 2025 (UTC)[reply]
Even if a specific editor is named when contacting an employer, we should be looking at it on a case-by-case basis. My understanding is that in the events that have burned into our collective emotional memory, trolls contacted organizations that had nothing to do with their employee's volunteer Wikipedia activity. Contacting these employers was a gross violation of the volunteer's right to privacy.
Personally, if Acme Co was paying me to edit and someone had a sincere complaint about these edits that they wanted to bring to AN/I, I would actually much prefer them to bring that complaint to Acme Co first to give us a chance to correct the problem with dignity. If a post about an Acme Co-sponsored project on AN/I isn't a violation of privacy, I can't see why sending exactly the same content to Acme Co via less-public channels like email would be one. Whether a communication constitutes harassment depends on the content. Clayoquot (talk | contribs) 00:30, 30 January 2025 (UTC)[reply]
Yes, what you described is why I don't think anyone here thinks contacting an employer is categorically forbidden. Though my concerns are, as I mentioned above, less about privacy (though HEB's comments below are well-taken), and far more about the chilling effect similar to WP:NLT. If there's even a whiff of such a chilling effect, I think it's reasonable to treat it the same. If it's vague, a stern caution is appropriate. If it reads as a clear intimidation, there should be a swift indef until it is clearly and unambiguously stated that there was no attempt to target the editor. Even that is a little iffy; it'd be easy for someone to do the whole, "That's a nice job you have there. It'd be a shame if something happened to it" shtick, then immediately apologize and insist it was expressing concern. The intimidation and chilling effect could remain well after any nominal retraction. EducatedRedneck (talk) 15:00, 30 January 2025 (UTC)[reply]
I think the main problem is we won't have access to the email to evaluate it unless one of the off-wiki parties shares it... We won't even know an email was sent. For accountability and transparency reasons these interactions need to take place on-wiki if they take place at all. Horse Eye's Back (talk) 15:04, 30 January 2025 (UTC)[reply]
@Horse Eye's Back That's fair. I think because off-wiki communication is a black box like you said, I figure we can't police that anyway, so there's no point in trying. The only thing we can police is mentioning it on-wiki. If I understand you right, your thinking is that there is a bright line of contacting an entity off-wiki about Wikipedia matters. It seems like that line extends beyond employers, too. (E.g., sending someone's mother an email saying, "Look what your (grown) child is doing to Wikipedia!")
I assume the bright line is trying to influence how they relate to Wikipedia. That is, emailing Acme Co. and saying, "Hey, your Wikipedia article doesn't have a picture of [$thing]. Can you release one under CC?" seems acceptable, but telling them, "Hey, someone has been editing your article in such-and-such a way. You should try to get them to stop." is firmly in the just-take-it-to-ANI territory. Am I getting that right? EducatedRedneck (talk) 15:29, 30 January 2025 (UTC)[reply]
More or less, for me the bright line is naming a specific editor or editors... However I would interpret "You should try to get them to stop." as an attempt at harassment by proxy, even with no name attached. Horse Eye's Back (talk) 15:38, 30 January 2025 (UTC)[reply]
I see. Okay, that makes sense to me. I'm sure there are WP:BEANS ways to try to game it, but at the very least it'd catch the low-hanging fruit of blatant intimidation. You've convinced me; thanks for taking the time to explain your reasoning to me. EducatedRedneck (talk) 10:49, 31 January 2025 (UTC)[reply]
Just in general you should not be attempting to unilaterally handle AN/I level issues off-wiki. That is entirely inappropriate. Horse Eye's Back (talk) 15:04, 30 January 2025 (UTC)[reply]

Another issue is that doing that can sometimes place another link or two in a wp:outing chain, and IMO avoiding that is of immense importance. The way that you posed the question, with the very high bar of "always", is probably not the most useful for the discussion. Also, a case like this almost always involves a concern about a particular editor or centers around edits made by a particular editor, which I think is a non-typical omission from your hypothetical example. Sincerely, North8000 (talk) 19:41, 22 January 2025 (UTC)[reply]

I'm not sure what you mean by placing a link in an outing chain. Can you explain this further? I used the very high bar of "always" because I have seen admins refer to it as an "always" or a "bright line" and this shuts down the conversation. Changing the norm from "is always harassment" to "is usually harassment" is exactly what I'm trying to do.
Organizations that fund disruptive editing often hire just one person to do it but I've also seen plenty of initiatives that involve money being distributed widely, sometimes in the form of giving perks to volunteers. If the organization is represented by only one editor then there is obviously a stronger argument that contacting the organization constitutes harassment. Clayoquot (talk | contribs) 06:44, 23 January 2025 (UTC)[reply]

What would be the encyclopedic purpose(s) of the communication with the company? You don't describe one and I'm having a hard time coming up with any. Horse Eye's Back (talk) 00:42, 30 January 2025 (UTC)[reply]

It would usually be to tell them that we have a policy or guideline that their project is violating. Clayoquot (talk | contribs) 01:07, 30 January 2025 (UTC)[reply]
And the encyclopedic purpose served by that would be? Also note that if there is no on-wiki discussion then there is no consensus that P+G are being violated, so you're not actually telling them that they're violating P+G; you're only telling them that you, as a single individual, think they are violating P+G. Horse Eye's Back (talk) 01:16, 30 January 2025 (UTC)[reply]
It serves the same encyclopedic purpose, and carries the same level of authoritativeness, as you or I dropping a warning template on a user's talk page. Clayoquot (talk | contribs) 03:08, 30 January 2025 (UTC)[reply]
Those are not at all the same (remember you aren't proposing to email the person, you're proposing to email someone you think is their employer)... At this point I think you want a license to harass; what you're proposing is unaccountable vigilante justice, and the fact that you think anything you do off-wiki carries on-wiki authority is bizarre and disturbing. How else would you like to be able to harass other editors? Nailing a printed-out warning template to someone's front door? Showing up at their place of work in person? Horse Eye's Back (talk) 14:55, 30 January 2025 (UTC)[reply]
Wikivoyage dealt with an apparent case of corporate-authorized spammy editing (or spam-adjacent) in 2020, and I thought that contacting the corporate office (a hotel chain) was a reasonable thing to do.
Paid editing isn't forbidden there, but touting is. Articles started filling up with recommendations to use that particular hotel chain. Contacting the editor(s) directly didn't seem to make a difference. Sending an e-mail message to the marketing department to ask whether they happened to have anybody working on this, and to see if we could get them to do the useful things (e.g., updated telephone numbers) without the not-so-useful things seemed to eventually have the desired effect.
Also, just to be clear, while a private e-mail is one way to go about this, I understand that there's this thing called social media, and I have heard that publicly contacting @CompanyName is supposed to be a pretty reliable way to get the attention of a corporate marketing department. "Hey, @CompanyName, do you know anything about why someone keeps pasting copyrighted content about your company into Wikipedia?" is not "contacting someone's employer"; it's "addressing the likely source of the problem".
In terms of history, I'm aware of two cases that made many editors quite uncomfortable. Without going into too many details, and purely from potentially fallible memory:
  • A banned editor was disrupting Wikipedia from IP addresses controlled by the US government. There was discussion on wiki about reporting this to the relevant agency. The disruption stopped (for a while). Some editors thought that a report could result in the editor losing his job, but (a) AFAICT nobody knows if that happened, and (b) if you have a contract that says misusing government computers could result in losing your job, then choosing to disrupt Wikipedia at work = choosing to lose your job.
  • An editor figured out someone's undisclosed real-world identity and phoned her up at work (i.e., called to talk to the editor herself, not her boss). This was taken as a much bigger deal. A stranger phoning you up at work to argue with you about Wikipedia is much more personal and threatening than a note being dropped in a government agency's public complaint box.
I don't think that either of these are equivalent to telling a company that its marketing plan is causing problems. WhatamIdoing (talk) 05:12, 1 February 2025 (UTC)[reply]

General reliability discussions have failed to reduce discussion, have become a locus of conflict with external parties, and should be curtailed

The original WP:DAILYMAIL discussion, which set off these general reliability discussions in 2017, was supposed to reduce discussion about it, something which it obviously failed to do since we have had more than 20 different discussions about its reliability since then. Generally speaking, a review of WP:RSNP does not support the idea that general reliability discussions have reduced discussion about the reliability of sources either. Instead, we see that we have repeated discussions about the reliability of sources, even where their reliability was never seriously questioned. We have had a grand total of 22 separate discussions about the reliability of the BBC, for example, 10 of which have been held since 2018. We have repeated discussions about sources that are cited in relatively few articles (e.g., Jacobin).

Moreover, these discussions spark unnecessary conflicts with off-wiki parties that harm the reputation of the project. Most recently we have had an unnecessary conflict with the Anti-Defamation League, sparked by a general reliability discussion about them, but the original Daily Mail discussion did this also. In neither case was usage of the source generally a problem on Wikipedia in any way that has been lessened by their deprecation - they were neither widely used, nor permitted under existing policy on using reliable sources to be used in a problematic way.

There is also some evidence, particularly from WP:PIA5, that some editors have sought to "claim scalps" by getting sources they are opposed to on ideological grounds 'banned' from Wikipedia. Comments in such discussions are often heavily influenced by people's impression of the bias of the source.

I think at the very least we need a WP:BEFORE-like requirement for these discussions, where the editors bringing the discussion have to show that the source is one whose reliability has serious consequences for content on Wikipedia, and that they have tried to resolve the matter in other ways. The recent discussion about Jacobin, triggered simply by a comment by a Jacobin writer on Reddit, would be an example of a discussion that would be stopped by such a requirement. FOARP (talk) 15:54, 22 January 2025 (UTC)[reply]

  • The purpose of this proposal is to reduce discussion of sources. I feel that evaluating the reliability of sources is the single most important thing that we as a community can do, and I don't want to reduce the amount of discussion about sources. So I would object to this.—S Marshall T/C 16:36, 22 January 2025 (UTC)[reply]
  • Yeah I would support anything to reduce the constant attempts to kill sources at RSN. It has become one of the busiest pages on all of Wikipedia, maybe even surpassing ANI. -- GreenC 19:36, 22 January 2025 (UTC)[reply]
  • Oddly enough, I am wondering why this discussion is here, and not at Wikipedia talk:Reliable sources/Noticeboard, as it now seems to be a process discussion (more BEFORE) for RSN? Alanscottwalker (talk) 22:41, 22 January 2025 (UTC)[reply]
    Dropped a notice both there and at WT:RSP but I think these are all reasonable venues to have the discussion at, so since it's here we may as well keep it here if people think there's any more to say. Alpha3031 (tc) 12:24, 27 January 2025 (UTC)[reply]
  • Some confusion about pages here, with some mentions of RSP actually referring to RSN. RSN is a type of "before" for RSP, and RSP is intended as a summary of repeated RSN discussions. One purpose of RSP is to put a lid on discussion of sources that have appeared at RSN too many times. This isn't always successful, but I don't see a proposal here to alleviate that. Few discussions are started at RSP; they are started at RSN and may or may not result in a listing or a change at RSP. Also, many of the sources listed at RSP got there due to a formal RfC at RSN, so they were already subject to RFCBEFORE (not always obeyed). I'm wondering how many listings at RSN are created due to an unresolved discussion on an article talk page—I predict it is quite a lot. Zerotalk 04:40, 23 January 2025 (UTC)[reply]
    “Not always obeyed” is putting it mildly. FOARP (talk) 06:47, 23 January 2025 (UTC)[reply]
  • I fully agree that we need a strict interpretation of RFCBEFORE for the big "deprecate this source" RfCs. It must be shown that 1. The source is widely used on Wikipedia. 2. Removal/replacement of the source (on individual articles) has been contested. 3. Talk page discussions on use of the source have been held and have not produced a clear consensus.
We really shouldn't be using RSP for cases where a source is used problematically a single-digit number of times and no-one actually disagrees that the source is unreliable – in that case it can just be removed/replaced, with prior consensus on article talk if needed. Toadspike [Talk] 11:42, 26 January 2025 (UTC)[reply]
The vast majority of discussions at RSN are editors asking for advice, many of which get overlooked due to other more contentious discussions. The header and edit notice already contain wording telling editors not to open RFCs unless there has been prior discussion (as with any new requirement there's no way to make editors obey it).
RSP is a different problem, for example look at the entry for Metro. Ten different discussions are linked and the source rated as unreliable, except if you read those discussions most mention The Metro only in passing. There is also the misconception that RSP is (or should be) a list of all sources. -- LCU ActivelyDisinterested «@» °∆t° 19:55, 26 January 2025 (UTC)[reply]
  • If our processes of ascertaining reliability have become a locus of conflict with external parties I'd contend this is a good and healthy thing. If Wikipedia is achieving its neutrality goal it will not be presenting the propagandized perspective of "external parties" with enough power to worry Wikipedia at all. That we are now facing opposition from far-right groups like the Heritage Foundation demonstrates we are being somewhat successful curtailing propaganda and bias. We should be leaning into this, not shrinking away. Simonm223 (talk) 13:01, 27 January 2025 (UTC)[reply]
    Really, we should be actively seeking out such conflicts, merely for the purposes of having them? Wikipedia is not an advocacy service.
    I don't understand why we are even having a discussion about the Heritage Foundation, because on any page where the question "should we be using the output of a think-tank for statements of fact about anything except itself, in the voice of WP?" came up, the outcome would inevitably be "no", so there's no actual need to make a blanket ban on using them for that purpose. FOARP (talk) 09:49, 31 January 2025 (UTC)[reply]
  • I agree with Simonm223. Regarding "these discussions spark unnecessary conflict with parties off wiki that harm the reputation of the project": it takes two to have a conflict, and Wikipedia is not a combatant. "Reputation" shouldn't be a lever external partisan actors can pull to exert influence. They will never be satisfied. There are incompatible value systems. Wikipedia doesn't need to compromise its values for the sake of reputation. That would be harmful. And it doesn't need to pander to people susceptible to misinformation about Wikipedia. It can just focus on the task of building an encyclopedia according to its rules. Sean.hoyland (talk) 13:45, 27 January 2025 (UTC)[reply]
  • I do note that the vast majority of these disputes relate to the reliability of news outlets. Perhaps what is needed is better guidance on the reliability and appropriate use of such sources. Blueboar (talk) 14:26, 27 January 2025 (UTC)[reply]
  • I'd favour something stronger than "curtailed", such as "stopped" or "rolled back". But in 2019 RFC: Moratorium on "general reliability" RFCs failed. The closer (ToThAc) said most opposers' arguments "basically boil down to WP:CONTEXTMATTERS" which I rather thought was our (supporters') argument; however, we were a minority. Peter Gulutzan (talk) 18:35, 27 January 2025 (UTC)[reply]
    @Peter Gulutzan: I still stand by that closure. I think the real problems are that 1) the credibility of sources changes over time, 2) there may be additional factors the original RfC did not cover, or 3) the submitter failed to check RSPS or related pages. Such discussions are bound to be unavoidable regardless of context. ToThAc (talk) 18:45, 27 January 2025 (UTC)[reply]
  • The current Heritage discussion is a real problem and (if anyone ever dares close it) should make us rethink policy. But I think this proposal overlooks the real value of the RSP system, which is preventing ordinary discussions from ever reaching RSN. I see appeals to RSP all the time on talk pages and in edit summaries, and they are usually successful at cutting off debate. RSN is active because editors correctly recognize that the system works and the consensuses reached there are very powerful. I do think that the pace of RFCs is much too fast. Some blame should be placed on the RSP format, which marks discussions as stale after 4 years. As there are now many hundreds of listings, necessarily there must be reconsiderations every week just to keep up.
I'm inclined to think that we should
1. Set 3 years as minimum and 5 as stale, and deny RFCs by default unless (A) 3 years have passed since the last discussion or (B) there's been a major development which requires us to reconsider. It's very rare for a source to slide subtly into unreliability. Generally there is a major shift in management or policy which is discussed in the press. Often RFCs start with only handwaving about what warrants a new discussion.
2. Split the RSP-feeder process off from the normal RSN, which should return to its old format. IMO the biggest problem with the constant political news RFCs is that they distract attention from editors who actually need help with a non-perennial source. GordonGlottal (talk) 16:26, 29 January 2025 (UTC)[reply]
I strongly disagree that the Heritage Foundation RfC requires us to rewrite our policies. And a blanket strict moratorium on new RFCs lasting 36 months is significant overreach. Simonm223 (talk) 15:54, 30 January 2025 (UTC)[reply]
The issue with the Heritage Foundation RFC is that it has little to do with reliability. The problem is that editors wanted a technical solution to the threat that HF poses and think that blacklisting is the solution. But blacklisting states a requirement that the source be discussed at RSN, and RSN says that discussions should only be about reliability.
The discussion should have stayed at the village pump. The community should have been able to make a decision there without the unnecessary bureaucracy. Technically all comments in that RFC that aren't about reliability should be ignored, which would be ridiculous but required by rigidly sticking to process. -- LCU ActivelyDisinterested «@» °∆t° 13:42, 1 February 2025 (UTC)[reply]
  • "General reliability discussions have failed at reducing discussion" is neither provable nor falsifiable, yet it's the core of your argument. You have no idea if that's true or not, and pretending otherwise is just insulting the rest of us. What I would support, along the lines of your argument, is a more efficient way to speedily close discussions which are near repeats. Horse Eye's Back (talk) 15:58, 30 January 2025 (UTC)[reply]
    I would also agree that would be a benefit. In general speedy clerking is good for noticeboards. Simonm223 (talk) 16:06, 30 January 2025 (UTC)[reply]
    I think we also need to make it clear that taking something to the noticeboard for the explicit purpose of generating an additional discussion to meet the perennial sources listing criteria is gaming the system. Those are the only discussions I see that really piss me off. Horse Eye's Back (talk) 16:12, 30 January 2025 (UTC)[reply]
    I hear you. As it is most of those should just be closed as lacking WP:RFCBEFORE. Simonm223 (talk) 16:13, 30 January 2025 (UTC)[reply]
    There's a couple of these currently on the noticeboard. I'd happily just close them (rather than commenting 'Bad RFC'), but there's no policy reason for doing so at the moment that I'm aware of. Unless I've missed something that says RFCs without RFCBEFORE can just be closed.
    An effort not to WP:BITE would be needed though. Due to misconceptions about the RSP, inexperienced editors see that the reliable sources for their country aren't on the RSP and, thinking it's a general list of sources, want to get those sources added. Making the description of WP:RSP clearer could help clear up those misconceptions. -- LCU ActivelyDisinterested «@» °∆t° 13:32, 1 February 2025 (UTC)[reply]
    Failure to have a prior discussion is not grounds for closing an RFC, just like failure to do a WP:BEFORE search is not grounds for closing an AFD. Sometimes an RFC is necessary because you're on such a low-traffic page that you need the RFC system to draw attention to it.
    An RFC with no prior attempts at discussion doesn't happen very often, and we are not overwhelmed with RFCs in general (it used to be about three a day; now it's about two), so keeping this option open isn't hurting us. WhatamIdoing (talk) 23:37, 1 February 2025 (UTC)[reply]
    Yeah, that's what I thought. Honestly the issue isn't the RFCs; the issue comes from editors believing they need to add sources to the RSP. Every few months there's a new editor who sees that the sources in their country aren't listed and starts an RFC, mistakenly thinking that getting them on the RSP is necessary for the sources to be considered reliable. However, the fact that there's no agreement on whether a generally reliable source that has additional considerations should be yellow or green doesn't make me hopeful that much will change. -- LCU ActivelyDisinterested «@» °∆t° 03:27, 2 February 2025 (UTC)[reply]
  • These general reliability discussions most often refer to something like a newspaper, magazine, or website (which have lots of distinct articles/webpages) rather than something like a book, so I'll limit my discussion to the former. I frequently see editors starting general reliability discussions at the RSN without giving any examples of previous (specific WP text)+(source = specific news article/opinion article/webpage) combinations that call the newspaper's/magazine's/website's general reliability into question, and without introducing an example of this sort. Yes, when we use something from a newspaper/magazine/website, we should be paying attention to its overall "reputation for fact-checking and accuracy," but also WP:RSCONTEXT. I think it's a mistake to launch into an RSN discussion of whether a newspaper/magazine/website is GREL/GUNREL without first having discussions of (specific text)+(specific article/webpage) combinations for that newspaper/magazine/website. I agree with @FOARP's last paragraph. FactOrOpinion (talk) 17:14, 30 January 2025 (UTC)[reply]
    We need some way to differentiate between "reliable in general but not for, you know, just anything" and "reliable for", which is the kind of "That politician's tweet is reliable for what he said, even though it's not reliable in general." WhatamIdoing (talk) 05:18, 1 February 2025 (UTC)[reply]
  • Certainly, one may argue for applying RFCBEFORE more strictly. However, the premise that the general reliability concept has "obviously failed" at reducing discussion is incorrect; the simple counts presented here are not sufficient, for multiple reasons. Using the Daily Mail as an example:
Extended analysis of discussion-counting approach
  • We don't inherently care about the number of discussions, but whether the number decreased. We would need a comparison to the amount of discussion before the Daily Mail RfC. This is perhaps the easiest issue to correct, but the RSP list is not necessarily comprehensive (e.g. older discussions might be under-represented, due to being out of date or because they occurred before 2018 when RSP was created).
  • The number of discussions is much less relevant than the length of the discussions, which is a more accurate measurement for the amount of time and effort spent by editors. Even if discussions were initiated at the same rate, future discussions on the same topic are likely to be shorter.
  • Discussions subsequent to the original 2017 RfC (numbers 28 through 54 on the current RSP list) are not automatically or inherently futile. It's implied that they're simply reiterating the same subjects that were being debated before 2017, but reviewing them shows that this is clearly incorrect.
  • From my quick review, only 3 of the discussions (including the 2019 RfC) were primarily about restarting debate on the Daily Mail's overall reliability. This is an entirely reasonable number, given that a certain amount of re-evaluation is expected in order to determine whether consensus has changed. In other words, the original disputes were resolved and have largely remained resolved.
  • Instead, the largest group of discussions (including the 2020 RfC) involves clarifications and refinements of the general principle. In other words, after consensus was determined, editors moved on to discussing other topics in a way that productively built on the prior consensus, which reflects the normal Wikipedia process. Other types of discussions addressed the implementation mechanisms, questions from relatively inexperienced editors, etc. In addition, many of the discussions were quite short, which I would attribute at least in part to the existence of the pre-existing consensus.
  • RSP only counts discussions on RSN, whereas most discussions on the use of sources happen on individual articles. In fact, this is potentially where we would expect the most benefit. For example, there are 462 direct links to WP:DAILYMAIL from article talk pages, all of which indicate cases where the amount of discussion was potentially reduced. This doesn’t include discussions on user talk (507 links), discussions that used other redirects, or discussions that linked directly to RSP. It also doesn’t include discussions that were pre-empted entirely, by the edit filter or by knowledge of the existing consensus.
Beyond that, of course, reduction in repetitive discussion is not the only possible type of benefit. As determined by consensus, the removal of Daily Mail references since 2017 reflects a major improvement in the quality of our content. Perhaps that is assumed, but it is a major advantage that needs to be included in the cost-benefit analysis.
One thing I do agree with is that Wikipedia's reputation is a relevant factor to consider; our purpose is to serve the readers, and to do that we need them to trust us. It's conceivable that the benefits from classifying or reclassifying a particular source could be outweighed by the risk of igniting a controversy or appearing partisan, especially if a source is rarely used or if its disadvantages could be mitigated in other ways. (And assuming that the alternative isn't likely to alienate a different population that's even larger, etc.) However, there are relatively few sources where this is likely to be an issue, so I would be more likely to support an initiative that applies specifically to the relevant sources. Sunrise (talk) 10:04, 1 February 2025 (UTC)[reply]
I'm not sure that I agree with you that "We don't inherently care about the number of discussions, but whether the number decreased". Sometimes we really do care about how many times ____ gets revisited, because the fact that people are starting discussions indicates that they are uncertain. If you see something from DubiousWebsite.com, and you are dubious about it, and the notes at RSP confirm your initial impression, then you will not start a discussion. If, however, you discover that Fox News is listed, and Fox News happens to be a main source of your own (and your friends' and neighbors') news information, then you are likely to start a discussion because you believe it is wrong and, in good faith and with what you perceive to be Wikipedia's best interests at heart, you want to try to fix the mistake. WhatamIdoing (talk) 23:46, 1 February 2025 (UTC)[reply]
In general terms, yes, but I was speaking in the context of evaluating the effectiveness of an intervention. If N discussions occurred, that can indeed be a relevant issue, and you've given a reasonable argument to that effect. However, the argument that was made is "N discussions occurred, therefore the intervention had no effect at all", which isn't a valid line of reasoning because it doesn't tell us whether there was an improvement over the alternative. Another way to describe this is that the measurement has no control group.
The problem being highlighted by the bullet point you quoted isn't that we never care about discussion counts at all. Instead, the issue is that the count on its own has no meaning for the intervention's effectiveness, because the necessary comparison is missing. Furthermore, even if a correction is made, this is only the first of multiple reasons why the overall logic is insufficient, as I have described in the analysis. Sunrise (talk) 08:42, 2 February 2025 (UTC)[reply]

Primary sources vs Secondary sources

The discussion above has spiralled out of control, and needs clarification. The discussion revolves around how to count episodes for TV series when a traditionally shorter episode (e.g., 30 minutes) is broadcast as a longer special (e.g., 60 minutes). The main point of contention is whether such episodes should count as one episode (since they aired as a single entity) or two episodes (reflecting production codes and industry norms).

The simple question is: when primary sources and secondary sources conflict, which do we use on Wikipedia?

  • The contentious article behind this discussion is at List of Good Luck Charlie episodes, in which Deadline, TVLine and The Futon Critic all state that the series has 100 episodes; this article from TFC, which is a direct copy of the press release from Disney Channel, also states that the series has "100 half-hour episodes".
  • The article has 97 episodes listed; the discrepancy comes from three particular episodes that are all an hour long (in a traditionally half-hour slot). These episodes receive two production codes, indicating two episodes, but each aired as one singular, continuous release. An editor argues that the definition of an episode means that these count as a single episode, and stands by these episodes being the important primary sources.
  • The discussion above centers on what an episode is. Should these be considered one episode (per the primary source of the episode itself), or two episodes (per the secondary sources provided)? This is where the primary conflict lies.
  • Multiple editors have stated that the secondary sources refer to the production of the episodes, despite the secondary sources not using this word in any format, and that the primary sources therefore override the "incorrect" information of the secondary sources. Some editors have argued that there are 97 episodes, because that's what's listed in the article.
  • WP:CALC has been cited; Routine calculations do not count as original research, provided there is consensus among editors that the results of the calculations are correct, and a meaningful reflection of the sources. An editor argues that there is not the required consensus. WP:VPT was also cited.

Another example was provided at Abbott Elementary season 3#ep36.

  • The same editor arguing for the importance of the primary source stated that he would have listed this as one episode, despite a reliable source[1] stating that there are 14 episodes in the season.
  • WP:PSTS has been quoted multiple times:
    • Wikipedia articles usually rely on material from reliable secondary sources. Articles may make an analytic, evaluative, interpretive, or synthetic claim only if it has been published by a reliable secondary source.
    • While a primary source is generally the best source for its own contents, even over a summary of the primary source elsewhere, do not put undue weight on its contents.
    • Do not analyze, evaluate, interpret, or synthesize material found in a primary source yourself; instead, refer to reliable secondary sources that do so.
  • Other quotes from the editors arguing for the importance of primary over secondary includes:
    • When a secondary source conflicts with a primary source we have an issue to be explained but when the primary source is something like the episodes themselves and what is in them and there is a conflict, we should go with the primary source.
    • We shouldn't be doing "is considered to be"s, we should be documenting what actually happened as shown by sources, the primary authoritative sources overriding conflicting secondary sources.
    • Yep, secondary sources are not perfect and when they conflict with authoritative primary sources such as released films and TV episodes we should go with what is in that primary source.

Having summarized this discussion, the question remains: when primary sources and secondary sources conflict, which do we use on Wikipedia?

  1. Primary, as the episodes are authoritative for factual information, such as runtime and presentation?
  2. Or secondary, which guide Wikipedia's content over primary interpretations?

-- Alex_21 TALK 22:22, 23 January 2025 (UTC)[reply]

  • As someone who has never watched Abbott Elementary, the example given at Abbott Elementary season 3#ep36 would be confusing to me. If we are going to say that something with one title, released as a single unit, is actually two episodes we should provide some sort of explanation for that. I would also not consider this source reliable for the claim that there were 14 episodes in the season. It was published three months before the season began to air; even if the unnamed sources were correct when it was written that the season was planned to have 14 episodes, plans can change. Caeciliusinhorto-public (talk) 10:13, 24 January 2025 (UTC)[reply]
    Here is an alternate source, after the premiere's release, that specifically states the finale episode as Episode 14. (Another) And what of your thoughts for the initial argument and contested article, where the sources were also posted after the multiple multi-part episode releases? -- Alex_21 TALK 10:48, 24 January 2025 (UTC)[reply]
    Vulture does say there were 14 episodes in that season, but it also repeatedly describes "Career Day" (episode 1/2 of season 3) in the singular as "the episode" in its review and never as "the episodes". Similarly IndieWire and Variety refer to "the supersized premiere episode, 'Career Day'" and "the mega-sized opener titled 'Career Day Part 1 & 2'" respectively, and treat it largely as a single episode in their reviews, though both acknowledge that it is divided into two parts.
    If reliable sources do all agree that the one-hour episodes are actually two episodes run back-to-back, then we should conform to what the sources say, but that is sufficiently unexpected (and even the sources are clearly not consistent in treating these all as two consecutive episodes) that we do need to at least explain that to our readers.
    In the case of Good Luck Charlie, while there clearly are sources saying that there were 100 episodes, none of them seem to say which episodes are considered to be two, and I would consider "despite airing under a single title in a single timeslot, this is two episodes" to be a claim which is likely to be challenged and thus require an inline citation per WP:V. I have searched and I am unable to find a source which supports the claim that e.g. episode 3x07 "Special Delivery" is actually two episodes. Caeciliusinhorto-public (talk) 12:18, 24 January 2025 (UTC)[reply]
@Caeciliusinhorto-public: That's another excellent way of putting it. Plans change. Sources like Deadline Hollywood are definitely WP:RS, but they report on future information and don't really update to reflect what actually happened. How are sources like Deadline Hollywood supposed to know when two or more episodes are going to be merged for presentation? To use a couple of other examples, the first seasons for both School of Rock and Andi Mack were reported to have 13 episodes each by Deadline Hollywood and other sources. However, the pilot for School of Rock (101) never aired and thus the first season actually only had 12 episodes, while the last episode of Andi Mack's first season (113) was held over to air in the second season and turned into a special and thus the first season only had 12 episodes. Using School of Rock, for example, would we still insist on listing 13 episodes for the season and just make up an episode to fit with the narrative that the source said there are 13 episodes? No, of course not. It's certainly worth mentioning as prose in the Production section, such as: The first season was originally reported to have 13 episodes; however, only 12 episodes aired due to there being an unaired pilot. But in terms of the number of episodes for the first season, it would be 12, not 13. Amaury 22:04, 24 January 2025 (UTC)[reply]
And what of the sources published later, after the finale, as provided, in which the producer of the series still says that there are 14 episodes? Guidelines and policies (for example, secondary sources vs primary sources) can easily be confused; for example, claiming MOS:SEASON never applies because we have to quote a source verbatim even if it says "summer 2016", against Wikipedia guidelines. So, if we need to quote a source verbatim, then it is fully supported that there are 14 episodes in the AE season, or there are 100 episodes in the GLC series. All of the sources provided (100 episodes, 14 episodes) are not future information. What would you do with this past information? -- Alex_21 TALK 23:56, 24 January 2025 (UTC)[reply]
Nevertheless, the question remains: does one editor's unsourced definition of an episode overrule the basic sourcing policies of Wikipedia? -- Alex_21 TALK 23:58, 24 January 2025 (UTC)[reply]
Usually we don't need to source the meaning of common English language words and concepts. The article at episode reflects common usage and conforms to this dictionary definition - "any installment of a serialized story or drama". Geraldo Perez (talk) 00:27, 25 January 2025 (UTC)[reply]
If a series had 94 half-hour episodes and three of one hour, why not just say that? Phil Bridger (talk) 11:04, 24 January 2025 (UTC)[reply]
What would you propose be listed in the first column of the tables at List of Good Luck Charlie episodes, and in the infobox at Good Luck Charlie?
Contentious article aside, my question remains as to whether primary or secondary sources are what we base Wikipedia upon. -- Alex_21 TALK 11:11, 24 January 2025 (UTC)[reply]
  • If only we could divert all this thought and effort to contentious topics.
    Infoboxes cause a high proportion of Wikipedia disputes because they demand very short entries and therefore can't handle nuance. The solution is not to use the disputed parameter of the infobox.
    None of these sources are scholarly analysis or high quality journalism and they're merely repeating the publisher's information uncritically, so none of them are truly secondary in the intended meaning of the word.—S Marshall T/C 13:11, 24 January 2025 (UTC)[reply]
    Yes, secondary sources "contain analysis, evaluation, interpretation, or synthesis of the facts, evidence, concepts, and ideas taken from primary sources", that is correct. -- Alex_21 TALK 23:57, 24 January 2025 (UTC)[reply]
    I agree with S Marshall: if putting "the" number on it is contentious, then leave it out.
    Alternatively, add some text to address it directly. You could say something like "When a double-length special is broadcast, industry standards say that's technically two episodes.[1] Consequently, sources differ over whether 'The Amazing Double Special' should be counted as episode 13 and 'The Dénouement' as episode 14, or if 'The Amazing Double Special' is episodes 13 and 14 and 'The Dénouement' is episode 15. The table below uses natural counting [or the industry counting style; what matters is that you specify, not which one you choose] and thus labels it as episode 13 and the following one as episode 14 [or the other way around]."
    Wikipedia doesn't have to endorse one or the other as the True™ Episode Counting Style. Just educate the reader about the difference, and tell them which one the article is using. WhatamIdoing (talk) 23:54, 1 February 2025 (UTC)[reply]

Request for research input to inform policy proposals about banners & logos

I am leading an initiative to review and make recommendations on updates to policies and procedures governing decisions to run project banners or make temporary logo changes. The initiative is focused on ensuring that project decisions to run a banner or temporarily change their logo in response to an “external” event (such as a development in the news or proposed legislation) are made based on criteria and values that are shared by the global Wikimedia community. The first phase of the initiative is research into past examples of relevant community discussions and decisions. If you have examples to contribute, please do so on the Meta-Wiki page. Thanks! --CRoslof (WMF) (talk) 00:04, 24 January 2025 (UTC)[reply]

@CRoslof (WMF): Was this initiative in the works before ar-wiki's action regarding Palestine, or was it prompted by that? voorts (talk/contributions) 02:03, 24 January 2025 (UTC)[reply]
@voorts: Planning for this initiative began several months ago. The banners and logo changes on Arabic Wikipedia were one factor in making this work a higher priority, but by no means the only factor. One of the key existing policies that relates to this topic is the Wikimedia Foundation Policy and Political Association Guideline. The current version of that policy is pretty old at this point, and we've found that it hasn't clearly answered all the questions about banners that have come up since it was last updated. We can also see how external trends, including those identified in the Foundation's annual plan, might result in an increase in community proposals to take action. Updating policies is one way to support decision-making on those possible proposals. CRoslof (WMF) (talk) 01:09, 25 January 2025 (UTC)[reply]

RfC: Amending ATD-R

Should WP:ATD-R be amended as follows:

A page can be [[Wikipedia:BLANKANDREDIRECT|blanked and redirected]] if there is a suitable page to redirect to, and if the resulting redirect is not [[Wikipedia:R#DELETE|inappropriate]]. If the change is disputed via a [[Wikipedia:REVERT|reversion]], an attempt should be made to reach a [[Wikipedia:Consensus|consensus]] before blank-and-redirecting again. Suitable venues for doing so include the article's talk page and [[Wikipedia:Articles for deletion]].
+
A page can be [[Wikipedia:BLANKANDREDIRECT|blanked and redirected]] if there is a suitable page to redirect to, and if the resulting redirect is not [[Wikipedia:R#DELETE|inappropriate]]. If the change is disputed, such as by [[Wikipedia:REVERT|reversion]], an attempt should be made to reach a [[Wikipedia:Consensus|consensus]] before blank-and-redirecting again. The preferred venue for doing so is the appropriate [[WP:XFD|deletion discussion venue]] for the pre-redirect content, although sometimes the dispute may be resolved on the page's talk page.

Support (Amending ATD-R)

  • As proposer. This reflects existing consensus and current practice. Blanking of article content should be discussed at AfD, not another venue. If someone contests a BLAR, they're contesting the fact that article content was removed, not that a redirect exists. The venue matters because different sets of editors patrol AfD and RfD. voorts (talk/contributions) 01:54, 24 January 2025 (UTC)[reply]
  • Summoned by bot. I broadly support this clarification. However, I think it could be made even clearer that, in lieu of an AfD, if a consensus emerges on the talkpage that it should be merged to another article, that suffices, and reverting a BLAR doesn't change that consensus without good reason. As written, I worry that the interpretation will be "if it's contested, it must go to AfD". I'd recommend the following: This may be done through a merge discussion on the talkpage that results in a clear consensus to merge. Alternatively, if a clear consensus on the talkpage does not form, the article should be submitted through Articles for Deletion for a broader consensus to emerge. That said, I'm not so miffed with the proposed wording as to oppose it. -bɜ:ʳkənhɪmez | me | talk to me! 02:35, 24 January 2025 (UTC)[reply]
    I don't see this proposal as precluding a merge discussion. voorts (talk/contributions) 02:46, 24 January 2025 (UTC)[reply]
    I don't either, but I see the wording of although sometimes the dispute may be resolved on the article's talk page closer to "if the person who contested/reverted agrees on the talk page, you don't need an AfD" rather than "if a consensus on the talk page is that the revert was wrong, an AfD is not needed". The second is what I see general consensus as, not the first. -bɜ:ʳkənhɪmez | me | talk to me! 02:53, 24 January 2025 (UTC)[reply]
  • I broadly support the idea; an AFD is going to get more eyes than an obscure talkpage, so I suspect it is the better venue in most cases. I'm also unsure how to work this nuance into the prose, but suspect that in the rare cases where another forum would be better, such a forum might emerge anyway. CMD (talk) 03:28, 24 January 2025 (UTC)[reply]
  • Support per my extensive comments in the prior discussion. Thryduulf (talk) 11:15, 24 January 2025 (UTC)[reply]
  • Support, although I don't see much difference between the status quo and the proposed wording. Basically, the two options, AfD or the talk page, are just switched around. It doesn't address the concerns that in some cases RfD is or is not a valid option. Perhaps it needs a solid "yes" or "no" on that issue? If RfD is an option, then that should be expressed in the wording. And since according to editors some of these do wind up at RfD when they shouldn't, then maybe that should be made clear here in this policy's wording, as well. Specifically addressing the RfD issue in the wording of this policy might actually lead to positive change. P.I. Ellsworth , ed. put'er there 17:26, 24 January 2025 (UTC)[reply]
  • Support the change in wording to state the preference for AFD in the event of a conflict, because AFD is more likely to result in binding consensus than simply more talk. Robert McClenon (talk) 01:04, 25 January 2025 (UTC)[reply]
  • Support Per Thryduulf's reasoning in the antecedent discussion. Jclemens (talk) 04:45, 25 January 2025 (UTC)[reply]
  • Support. AfD can handle redirects, merges, DABifies...the gamut. This kind of discussion should be happening out in the open, where editors versed in notability guidelines are looking for discussions, rather than between two opposed editors on an article talk page (where I doubt resolution will be easily found anyways). Toadspike [Talk] 11:48, 26 January 2025 (UTC)[reply]
  • Support firstly, because by "blank and redirect" you're fundamentally saying that an article shouldn't exist at that title (presumably either because it's not notable, or it is notable but it's best covered at another location). WP:AFD is the best location to discuss this. Secondly, because this has been abused in the past. COVID-19 lab leak theory is one example; and when it finally reached AFD, there was a pretty strong consensus for an article to exist at that title, which settled a dispute that spanned months. There are several other examples; AFD has repeatedly proven to be the best settler of "blank and redirect" situations, and the best at avoiding the "low traffic talk page" issue. ProcrastinatingReader (talk) 18:52, 26 January 2025 (UTC)[reply]
  • Support, my concerns have been aired and I'm comfortable with using AfD as a primary venue for discussing any pages containing substantial article content. Utopes (talk / cont) 22:30, 29 January 2025 (UTC)[reply]

Oppose (Amending ATD-R)

  • Oppose. The status quo reflects the nuances that Chipmunkdavis has vocalized. There are also other venues to consider: if the page is a template, WP:TFD would be better. If this is long-stable as a redirect, RfD is a better venue (as I've argued here, for example). -- Tavix (talk) 17:13, 24 January 2025 (UTC)[reply]
    The intent here is to address articles. Obviously TfD is the place to deal with templates and nobody is suggesting otherwise. voorts (talk/contributions) 17:28, 24 January 2025 (UTC)[reply]
    The section in question is about pages, not articles. If the proposed wording is adapted, it would be suggesting that WP:BLAR'd templates go to AfD. As I explained in the previous discussion, that's part of the reason why the proposed wording is problematic and that it was premature for an RfC on the matter. -- Tavix (talk) 17:35, 24 January 2025 (UTC)[reply]
    As a bit of workshopping, how about changing doing so to articles? -- Tavix (talk) 17:46, 24 January 2025 (UTC)[reply]
    Done. Pinging @Consarn, @Berchanhimez, @Chipmunkdavis, @Thryduulf, @Paine Ellsworth, @Tavix. voorts (talk/contributions) 22:51, 24 January 2025 (UTC)[reply]
    Gentle reminder to editor Voorts: as I'm subscribed to this RfC, there is no need to ping me. That's just an extra unnecessary step. P.I. Ellsworth , ed. put'er there 22:58, 24 January 2025 (UTC)[reply]
    Not everyone subscribes to every discussion. I regularly unsubscribe to RfCs after I !vote. voorts (talk/contributions) 22:59, 24 January 2025 (UTC)[reply]
    I don't. Just saving you some time and extra work. P.I. Ellsworth , ed. put'er there 23:03, 24 January 2025 (UTC)[reply]
    considering the above discussion, my vote hasn't really changed. this does feel incomplete, what with files and templates existing and all that, so that still feels undercooked (and now actively article-centric), hence my suggestion of either naming multiple venues or not naming any consarn (speak evil) (see evil) 23:28, 24 January 2025 (UTC)[reply]
    Agree. I'm beginning to understand those editors who said it was too soon for an RfC on these issues. While I've given this minuscule change my support (and still do), this very short paragraph could definitely be improved with a broader guidance for up and coming generations. P.I. Ellsworth , ed. put'er there 23:38, 24 January 2025 (UTC)[reply]
    If you re-read the RFCBEFORE discussions, the dispute was over what to do with articles that have been BLARed. That's why this was written that way. I think it's obvious that when there's a dispute over a BLARed article, it should go to AfD, not RfD. I proposed this change because apparently some people don't think that's so obvious. Nobody has or is disputing that BLARed templates should go to TfD, files to FfD, or miscellany to MfD. And none of that needs to be spelled out here per WP:CREEP. voorts (talk/contributions) 00:17, 25 January 2025 (UTC)[reply]
    If you want to be fully inclusive, it could say something like "the appropriate deletion venue for the pre-redirect content" or "...the blanked content" or some such. I personally don't think that's necessary, but don't object if others disagree on that score. (To be explicit neither the change that was made, nor a change to along the lines of my first sentence, change my support). Thryduulf (talk) 00:26, 25 January 2025 (UTC)[reply]
    Exactly. And my support hasn't changed as well. Goodness, I'm not saying this needs pages and pages of instruction, nor even sentence after sentence. I think us old(er) farts sometimes need to remember that less experienced editors don't necessarily know what we know. I think you've nailed the solution, Thryduulf! The only thing I would add is something short and specific about how RfD is seldom an appropriate venue and why. P.I. Ellsworth , ed. put'er there 00:35, 25 January 2025 (UTC)[reply]
    Done. Sorry if I came in a bit hot there. voorts (talk/contributions) 00:39, 25 January 2025 (UTC)[reply]
    Also, I think something about RfDs generally not being appropriate could replace the current footnote at the end of this paragraph. voorts (talk/contributions) 00:52, 25 January 2025 (UTC)[reply]
    @Voorts: That latest change moves me to the "strong oppose" category. Again, RfD is the proper venue when the status quo is a redirect. -- Tavix (talk) 01:00, 25 January 2025 (UTC)[reply]
    I'm going to back down a bit with an emphasis on the word "preferred". I agree that AfD is the preferred venue, but my main concern is if a redirect gets nominated for deletion at RfD and editors make purely jurisdictional arguments that it should go to AfD because there's article content in its history even though it's blatantly obvious the article content should be deleted. -- Tavix (talk) 01:22, 25 January 2025 (UTC)[reply]
    this is a big part of why incident 91724 could become a case study. "has history, needs afd" took priority over the fact that the history had nothing worth keeping, the redirect had been stable as a blar for years, and the folks at rfd (specifically the admins closing or relisting discussions on blars) having had zero issue for ages with blars being nominated and discussed there (with a lot of similar blars nominated around the same time as that one being closed with relatively little fuss, and blars nominated later being closed with no fuss), and at least three other details i'm missing
    as i said before, if a page was blanked relatively recently and someone can argue for there being something worth keeping in it, its own xfd is fine and dandy, but otherwise, it's better to just take it to rfd and leave the headache for them. despite what this may imply, they're no less capable of evaluating article content, be it stashed away in the edit history or proudly displayed in any given redirect's target consarn (speak evil) (see evil) 10:30, 25 January 2025 (UTC)[reply]
    As I've explained time and time again it's primarily not about the capabilities of editors at RfD it's about discoverability. When article content is discussed at AfD there are multiple systems in place that mean everybody interested or potentially interested knows that article content is being discussed, the same is not true when article content is discussed at RfD. Time since the BLAR is completely irrelevant. Thryduulf (talk) 10:39, 25 January 2025 (UTC)[reply]
    if you want to argue that watchlists, talk page notifs, and people's xfd logs aren't enough, that's fine by me, but i at best support also having delsort categories for rfd (though there might be some issues when bundling multiple redirects together, though that's nothing twinkle or massxfd can't fix), and at worst disagree because, respectfully, i don't have much evidence or hope of quake 2's biggest fans knowing what a strogg is. maybe quake 4, but its list of strogg was deleted with no issue (not even a relisting). see also quackifier, just under that discussion consarn (speak evil) (see evil) 11:03, 25 January 2025 (UTC)[reply]
    I would think NOTBURO/IAR would apply in those cases. voorts (talk/contributions) 02:41, 25 January 2025 (UTC)[reply]
    I would think that as well, but unfortunately that's not reality far too often. I can see this new wording being more ammo for process wonkery. -- Tavix (talk) 02:49, 25 January 2025 (UTC)[reply]
    Would a footnote clarifying that ameliorate your concerns? voorts (talk/contributions) 02:53, 25 January 2025 (UTC)[reply]
    Unless a note about RfD being appropriate in any cases makes it clear that it is strictly limited to (a) when the content would be speedily deleted if restored, or (b) there has been explicit consensus that the content should not be an article (or template or whatever), it would move me into a strong oppose. This is not "process wonkery" but the fundamental spirit of the entire deletion process. Thryduulf (talk) 03:35, 25 January 2025 (UTC)[reply]
    ^Voorts, see what I mean? -- Tavix (talk) 03:43, 25 January 2025 (UTC)[reply]
    See what I mean this attitude is exactly why we are here. I've spent literal years explaining why I hold the position I do, and how it aligns with the letter and spirit of pretty much every relevant policy and guideline. It shouldn't even be controversial for blatantly obvious the article content should be deleted to mean "would be speedily deleteable if restored", yet on this again a single digit number of editors have spent years arguing that they know better. Thryduulf (talk) 03:56, 25 January 2025 (UTC)[reply]
    both sides are on single digits at the time of writing this, we just need 3 more supports to make it 10 lol
    ultimately, this has its own caveat(s). namely, with the csd not covering every possible scenario. regardless of whether or not it's intentional, it's not hard to look at something and go "this ain't it, chief". following this "process" to the letter would just add more steps to that, by restoring anything that doesn't explicitly fit a csd and dictating that it has to go to afd so it can get the boot there for the exact same reason consarn (speak evil) (see evil) 10:51, 25 January 2025 (UTC)[reply]
    Thanks. That alleviates my concerns. -- Tavix (talk) 23:45, 24 January 2025 (UTC)[reply]
  • oppose, though with the note that i support a different flavor of change. on top of the status quo issue pointed out by tavix (which i think we might need to set a period of time for, like a month or something), there's also the issue of the article content in question. if it's just unsourced, promotional, in-universe, and/or any other kind of fluff or cruft or whatever else, i see no need to worry about the content, as it's not worth keeping anyway (really, it might be better to just create a new article from scratch). if a blar, which has been stable as a redirect, did have sources, and those sources were considered reliable, then i believe restoring and sending to afd would be a viable option (see purple francis for an example). outside of that, i think if the blar is reverted early enough, afd would be the better option, but if not, then it'd be rfd
    for this reason, i'd rather have multiple venues named ("Suitable venues include Articles for Deletion, Redirects for Discussion, and Templates for Discussion"), no specific venue at all ("The dispute should be resolved in a fitting discussion venue"), or conditions for each venue (for which i won't suggest a wording because of the aforementioned status quo time issue) consarn (speak evil) (see evil) 17:50, 24 January 2025 (UTC)[reply]
  • Oppose. The proper initial venue for discussing this should be the talk page; only if agreement can't be reached informally there should it proceed to AfD. Espresso Addict (talk) 16:14, 27 January 2025 (UTC)[reply]
  • Oppose as written to capture some nuances; there may be a situation where you want a BLAR to remain a redirect, but would rather retarget it. I can't imagine the solution there is to reverse the BLAR and discuss the different redirect-location at AfD. Besides that, I think the intention is otherwise solid, as long as it's consistent in practice. Moving forward it would likely lead to many old reversions of 15+ year BLAR'd content, but perhaps that's the intention; maybe only reverse the BLAR if you're seeking deletion of the page, at which point AfD becomes preferable? Article deletion to be left to AfD at that point? Utopes (talk / cont) 20:55, 27 January 2025 (UTC), moving to support, my concerns have been resolved and I'm happy to use AfD as a primary venue for discussing article content. Utopes (talk / cont) 22:29, 29 January 2025 (UTC)[reply]

Discussion (Amending ATD-R)

  • not entirely sure i should vote, but i should probably mention this discussion in wt:redirect that preceded the one about atd-r, and i do think this rfc should affect that as well, but wouldn't be surprised if it required another one consarn (speak evil) (see evil) 12:38, 24 January 2025 (UTC)[reply]
  • I know it's not really in the scope of this discussion but to be perfectly honest, I'm not sure why BLAR is still a thing. It's a cliche, but it's a hidden mechanism for backdoor deletion that often causes arguments and edit wars. I think AfDs and talk-page merge proposals where consensus-building exists produce much better results. It makes sense for duplicate articles, but that is covered by A10's redirection clause. J947edits 03:23, 25 January 2025 (UTC)[reply]
    BLARs are perfectly fine when uncontroversial, duplicate articles are one example but bold merges are another (which A10 doesn't cover). Thryduulf (talk) 03:29, 25 January 2025 (UTC)[reply]
    It is my impression that BLARs often occur without intention of an accompanying merge. J947edits 03:35, 25 January 2025 (UTC)[reply]
    Yes because sometimes there's nothing to merge. voorts (talk/contributions) 16:01, 25 January 2025 (UTC)[reply]
    I didn't say, or intend to imply, that every BLAR is related to a merge. The best ones are generally where the target article covers the topic explicitly, either because content is merged, written or already exists. The worst ones are where the target is of little to no (obvious) relevance, contains no (obviously) relevant content and none is added. Obviously there are also ones that lie between the extremes. Any can be controversial, any can be uncontroversial. Thryduulf (talk) 18:20, 25 January 2025 (UTC)[reply]
    BLARs are preferable to deletion for content that is simply non-notable and does not run afoul of other G10/11/12-type issues. Jclemens (talk) 04:46, 25 January 2025 (UTC)[reply]
  • I'm happy to align to whatever consensus decides, but I'd like to discuss the implications because that aspect is not too clear to me. Does this mean that any time a redirect contains any history and deletion is sought, it should be restored and go to AfD? Currently there are some far-future redirects with ancient history; how would this amendment affect such titles? Utopes (talk / cont) 09:00, 29 January 2025 (UTC)[reply]
    see why i wanted that left to editor discretion (status quo, evaluation, chance of an rm or histmerge, etc.)? i trust in editors who aren't that wonk from rfd (cogsan? cornsam?) to see a pile of unsourced cruft tucked away in the history and go "i don't think this would get any keep votes in afd" consarn (speak evil) (see evil) 11:07, 29 January 2025 (UTC)[reply]
    No. This is about contested BLARs, not articles that were long ago BLARed where someone thinks the redirect should be deleted. voorts (talk/contributions) 12:42, 29 January 2025 (UTC)[reply]
    then it might depend. is its status as a blar the part that is being contested? if the title is being contested (hopefully assuming the pre-blar content is fine), would "move" be a fitting outcome outside of rm? is it being contested solely over meta-procedural stuff, as opposed to actually supporting or opposing its content? why are boots shaped like italy? was it stable as a redirect at the time of contest or not? does this account for its status as a blar being contested in an xfd venue (be it for restoring or blanking again)? it's a lot of questions i feel the current wording doesn't answer, when it very likely should. granted, what i suggested isn't much better, but shh
    going back to that one rfd i keep begrudgingly bringing up (i kinda hate it, but it's genuinely really useful), if this wording is interpreted literally, the blar was contested a few years prior and should thus be restored, regardless of the rationales being less than serviceable ("i worked hard on this" one time and... no reason the other), the pre-blar content being complete fancruft, and no one actually supporting the content in rfd consarn (speak evil) (see evil) 13:54, 29 January 2025 (UTC)[reply]
    Well, that case you keep citing worked out as a NOTBURO situation, which this clarification would not override. There are obviously edge cases that not every policy is going to capture. IAR is a catch-all exception to every single policy on Wikipedia. The reason we have so much scope creep in PAGs is because editors insist on every exception being enumerated. voorts (talk/contributions) 14:51, 29 January 2025 (UTC)[reply]
    if an outcome (blar status is disputed in rfd, is closed as delete anyway) is common enough, i feel the situation goes from "iar good" to "rules not good", at which point i'd rather have the rules adapt. among other things, this is why i want a slightly more concrete time frame to establish a status quo (while i did suggest a month, that could also be too short), so that blars that aren't blatantly worth or not worth restoring after said time frame (for xfd or otherwise) won't be as much of a headache to deal with. of course, in cases where their usefulness or lack thereof isn't blatant, then i believe a discussion in its talk page or an xfd venue that isn't rfd would be the best option consarn (speak evil) (see evil) 17:05, 29 January 2025 (UTC)[reply]
    I think the idea that that redirect you mentioned had to go to AfD was incorrect. The issue was whether the redirect was appropriate, not whether the old article content should be kept. voorts (talk/contributions) 17:41, 29 January 2025 (UTC)[reply]
    sure took almost 2 months to get that sorted out lol consarn (speak evil) (see evil) 17:43, 29 January 2025 (UTC)[reply]
    Bad facts make bad law, as attorneys like to say. voorts (talk/contributions) 17:45, 29 January 2025 (UTC)[reply]
    Alright. @Voorts: in that case I think I agree. I.e., if somebody BLAR's a page, the best avenue to discuss merits of inclusion on Wikipedia would be at a place like AfD, where it is treated as the article it used to be, as the right eyes for content-deletion will be present at AfD. To that end, this clarification is likely a good change to highlight this fact. I think where I might be struggling is the definition of "contesting a BLAR" and what that might look like in practice. To me, "deleting a long-BLAR'd redirect" is basically the same as "contesting the BLAR", I think?
    An example I'll go ahead and grab is 1900 Lincoln Blue Tigers football team from cat:raw. This is not a great redirect pointed at Lincoln Blue Tigers from my POV, and I'd like to see it resolved at some venue, if not resolved boldly. This page was BLAR'd in 2024, and I'll go ahead and notify Curb Safe Charmer who BLAR'd it. I think I'm inclined to undo the BLAR, not because I think the 1900 season is particularly notable, but because redirecting the 1900 season to the page about the Lincoln Blue Tigers doesn't really do much for the people who want to read about the 1900 season specifically. (Any other day I would do this boldly, but I want to seek clarification).
    But let's say this page was BLAR'd in 2004, as a longstanding redirect for 20 years. I think it's fair to say that as a redirect, this should be deleted. But this page has history as an article. So unless my interpretation is off, wouldn't the act of deleting a historied redirect that was long ago BLAR'd be equivalent to contesting the BLAR that turned the page into a redirect in the first place, regardless of the year? Utopes (talk / cont) 20:27, 29 January 2025 (UTC)[reply]
    I don't think so. In 2025, you're contesting that it's a good redirect from 2004, not contesting the removal of article content. If somebody actually thought the article should exist, that's one thing, but procedural objections based on RfD being an improper forum without actually thinking the subject needs an article is the kind of insistence on needless bureaucracy that NOTBURO is designed to address. voorts (talk/contributions) 20:59, 29 January 2025 (UTC)[reply]
    I see, thank you. WP:NOTBURO is absolutely vital to keep the cogs rolling, lol. Often at RfD, there will be a "page with history" that holds up the process, only for the discussion to close with "restore and take to AfD". Cutting out the middle, and just restoring article content without bothering with an RfD that says "restore and take to AfD", would make the process and all workflows a lot smoother. @Voorts:, from your own point of view, I'm very interested in doing something about 1900 Lincoln Blue Tigers football team, specifically, to remove a redirect from being at this title (I have no opinion as to whether or not an article should exist here instead). Because I want to remove this redirect, do you think I should take it to RfD as the correct venue to get rid of it? (Personally speaking, I think undoing the BLAR is a lot more simple and painless, especially as I don't have a strong opinion on article removal, but if I absolutely didn't want an article here, would RfD still be the venue?) Utopes (talk / cont) 21:10, 29 January 2025 (UTC)[reply]
    I would take that to RfD. If the editor who created the article or someone else reversed the BLAR, I'd bring it to AfD. voorts (talk/contributions) 21:16, 29 January 2025 (UTC)[reply]
    Alright. I think we're getting somewhere. I feel like some editors may consider it problematic to delete a recently BLAR'd article at RfD under any circumstance. Like if Person A BLARs a brand new article, and Person B takes it to RfD because they disagree with the existence of a redirect at the title and it gets deleted, then this could be considered a "bypass of the AfD process". Whether or not it is, people have cited NOTBURO for deleting it. I was under the impression this proposal was trying to eliminate this outcome, i.e. to make sure that all pages with articles in their history should be discussed at AfD on their merits as articles instead of anywhere else. I've nommed redirects where people have said "take to AfD", and I've nommed articles where people have said "take to RfD". I've never had an AfD close as "wrong venue", but I've seen countless RfDs close that way for any amount of history, regardless of the validity of there being a full-blown article at the title, only to be restored and unanimously deleted at AfD. I have a feeling 1900 Lincoln Blue Tigers football team would close in the same way, which is why I ask; it seems restoring the article would just cut a lot of tape if the page is going to end up at AfD eventually. Utopes (talk / cont) 21:36, 29 January 2025 (UTC)[reply]
    I think the paragraph under discussion here doesn't really speak to what should happen in the kind of scenario you're describing. The paragraph talks about "the change" (i.e., the blanking and redirecting) being "disputed", not about what happens when someone thinks a redirect ought not to exist. I agree with you that that's needless formalism/bureaucracy, but I think that changing the appropriate venue for those kinds of redirects would need a separate discussion. voorts (talk/contributions) 21:42, 29 January 2025 (UTC)[reply]
Fair enough, yeah. I'm just looking at the definition of "disputing/contesting a BLAR". For this situation, I think it could be reasoned that I am "disputing" the "conversion of this article into a redirect". Now, I don't really have a strong opinion on whether or not an article should or shouldn't exist, but because I don't think a redirect should be at this title in either situation, I feel like "dispute" of the edit might still be accurate? Even if it's not for a regular reason that most BLARs get disputed 😅. I just don't think BLAR'ing into a page where a particular season is not discussed is a great change. That's what I meant about "saying a redirect ought not to exist" might be equivalent to "disputing/disagreeing with the edit that turned this into a redirect to begin with". And if those things are equivalent, then would that make AfD the right location to discuss the history of this page as an article? That was where I was coming from; hopefully that makes sense lol. If it needs a separate discussion I can totally understand that as well. Utopes (talk / cont) 21:57, 29 January 2025 (UTC)[reply]
In the 1900 Blue Tigers case and others like it where you think that it should not be a redirect but have no opinion about the existence or otherwise of an article then simply restore the article. Making sure it's tagged for any relevant WikiProjects is a bonus but not essential. If someone disputes your action then a talk page discussion or AfD is the correct course of action for them to take. If they think the title should be a red link then AfD is the only correct venue. Thryduulf (talk) 22:08, 29 January 2025 (UTC)[reply]
Alright, thank you Thryduulf. That was kind of the vibe I was leaning towards as well, as AfD would be able to determine the merits of the page's existence as a subject. This all comes together because not too long ago I was criticized for restoring a page that contained an article in its history. In this discussion for Wikipedia:Articles for deletion/List of cultural icons of Canada, I received the following message regarding my BLAR-reversal: For the record, it's really quite silly and unnecessary to revert an ancient redirect from 2011 back into a bad article that existed for all of a day before being redirected, just so that you can force it through an AFD discussion — we also have the RFD process for unnecessary redirects, so why wasn't this just taken there instead of being "restored" into an article that the restorer wants immediately deleted? I feel like this is partially comparable to 1900 Lincoln Blue Tigers football team, as both of these existed for approx a day before the BLAR, but if restoring a 2024 article is necessary per Thryduulf, but restoring a 2011 article is silly per Bearcat, I'm glad that this has the potential to be ironed out via this RfC, possibly. Utopes (talk / cont) 22:18, 29 January 2025 (UTC)[reply]
There are exactly two situations where an AfD is not required to delete article content:
  1. The content meets one or more criteria for speedy deletion
  2. The content is eligible to be PRODed
Bearcat's comment is simply wrong - RfD is not the correct venue for deleting article content, regardless of how old it is. Thryduulf (talk) 22:25, 29 January 2025 (UTC)[reply]
Understood. I'll keep that in mind for my future editing, and I'll move from the oppose to the support section of this RfC. Thank you for confirmation regarding these situations! Cheers, Utopes (talk / cont) 22:28, 29 January 2025 (UTC)[reply]
@Utopes: Note that is simply Thryduulf's opinion and is not supported by policy (despite his vague waves to the contrary). Any redirect that has consensus to delete at RfD can be deleted. I see that you supported deletion of the redirect at Wikipedia:Redirects for discussion/Log/2024 September 17#List of Strogg in Quake II. Are you now saying that should have procedurally gone to AfD even though it was blatantly obvious that the article content is not suitable for Wikipedia? -- Tavix (talk) 22:36, 29 January 2025 (UTC)[reply]
I'm saying that AfD probably would have been the right location to discuss it at. Of course NOTBURO applies and it would've been deleted regardless, really, but if someone could go back in time, bringing that page to AfD instead of RfD seems like it would have been more of an ideal outcome. I would've !voted delete on either venue. Utopes (talk / cont) 22:39, 29 January 2025 (UTC)[reply]
@Utopes: Note that Tavix's comments are, despite their assertions to the contrary, only their opinion. It is notable that not once in the literal years of discussions, including this one, have they managed to show any policy that backs up this opinion. Content that is blatantly unsuitable for Wikipedia can be speedily deleted, everything that can't be is not blatantly unsuitable. Thryduulf (talk) 22:52, 29 January 2025 (UTC)[reply]
Here you go. Speedy deletion is a process that provides administrators with broad consensus to bypass deletion discussion, at their discretion. RfD is a deletion discussion venue for redirects, so it doesn't require speedy deletion for something that is a redirect to be deleted via RfD. Utopes recognizes there is a difference between "all redirects that have non-speediable article content must be restored and discussed at AfD" and "AfD is the preferred venue for pages with article content", so I'm satisfied with their response to my inquiry. -- Tavix (talk) 23:22, 29 January 2025 (UTC)[reply]
Quoting yourself in a discussion about policy does not show that your opinion is consistent with policy. Taking multiple different bits of policy and multiple separate facts, putting them all in a pot and claiming the result shows your opinion is supported by policy didn't do that in the discussion you quoted and doesn't do so now. You have correctly quoted what CSD is and what RfD is, but what you haven't done is acknowledge that when a BLARed article is nominated for deletion it is article content that will be deleted, and that article content nominated for deletion is discussed at AfD, not RfD. Thryduulf (talk) 02:40, 30 January 2025 (UTC)[reply]

Question About No Quorum Redirect

I am confident that I can get a knowledgeable answer here quickly. There is a Deletion Review in progress, where the AFD was held in December 2023, and no one participated except the nominator, even after two Relists. After two relists, the closer closed it as a Redirect, which was consistent with what the nominator had written. In Deletion Review, the appellant is saying that the article should be restored. I understand that in the case of a soft delete, the article should be restored to user or draft space on request, but in this case, the article is already present in the history. So: Does the appellant have a right to have the article restored, or should they submit it to AFC for review, or what? I don't care, but the appellant does care (of course). Robert McClenon (talk) 20:44, 24 January 2025 (UTC)[reply]

Without a second participant, an uncontested AfD is not a discussion and so there is no mandated outcome and the redirect in question can be undone by any editor in good standing, and can be then taken to AfD again by any editor objecting to it. Draft isn't typically mandated in policies, because it's a relatively new invention compared to our deletion policies and isn't referenced everywhere it might be relevant or helpful to specify. Jclemens (talk) 07:04, 25 January 2025 (UTC)[reply]
Thank you, User:Jclemens. Is there an uninvolved opinion also? Robert McClenon (talk) 18:16, 25 January 2025 (UTC)[reply]
Uninvolved opinion: While I agree with Jclemens that the DR appellant can simply revert the redirect within policy, I have not looked at this specific article and it likely makes more sense to restore to draftspace. I believe the appellant can do this themselves and does not need to go through a DR to copy the contents of the article from its history to draftspace. Alternatively, they can revert the BLAR and move to draftspace. The only difference is that if/when the article is moved back from draft to mainspace, a histmerge might be needed. Toadspike [Talk] 11:56, 26 January 2025 (UTC)[reply]
WP:NOQUORUM indicates that such a close should be treated as an expired WP:PROD, which states that restoration of prodded pages can be done via admin or via Requests for undeletion - there's no identified expectation/suggestion that prods should go to DRV. WP:SOFTDELETE states that such a deleted article "can be restored for any reason on request", ie: restoration to mainspace is an expected possibility. It also states that redirection is an option since BLAR can be used by any editor if there are no objections. Putting those together, it's reasonable for a restoration from redirect to be treated as a belated objection, and this can be done by any editor without seeking permission (though it would be nice if valid issues identified in the original AFD were fixed as part of the restoration to avoid a second AFD). ~Hydronium~Hydroxide~(Talk)~ 12:08, 26 January 2025 (UTC)[reply]

Psychological research

In recent years, psychological research on social media users, and its undesirable side effects, has been discussed and criticized. Is there a specific policy on Wikipedia to protect users from covert psychological research? Arbabi second (talk) 00:22, 25 January 2025 (UTC)[reply]

For starters, try Wikipedia is not a laboratory and WP:Ethically researching Wikipedia. Robert McClenon (talk) 01:01, 25 January 2025 (UTC)[reply]
@Robert McClenon
That was helpful, thank you. Arbabi second (talk) 03:34, 26 January 2025 (UTC)[reply]
There are similarities and differences. With most social media, a corporation sets up a site to attract a community. The corporation wants to sell advertising to community members and gather the community members' personal data. The site doesn't have any other purpose. Wikipedia, on the other hand, has a clear purpose: we want to write an encyclopaedia together. Community members' personal data is not collected, except to the extent that we choose to share that data on our own userpages or by way of our contributions. Advertising is not sold, or at least, not by the WMF; some Wikipedians do try to sell pages to commercial interests but that's frowned upon.
We do rely on some of the legal protections meant for social media sites, which is important for a legal case currently in progress in India.
The fact that community members' personal data isn't collected (and, where someone does provide personal data, isn't verified) means that it's really hard to carry out many kinds of psychological research, because you don't have enough information about the Wikipedians involved. Some Wikipedians have more than one account (legitimately or otherwise); some accounts are shared (always illegitimately). All a psychologist can really do is analyze Wikipedians as a group, and even then, people writing an encyclopaedia have modified their behaviour (hopefully towards encyclopaedia-writing) compared to how they'd behave on a regular social media site.
How could you devise a valid piece of research that targets particular users?—S Marshall T/C 16:38, 1 February 2025 (UTC)[reply]
@S Marshall
I agree with you. You misunderstood me, and it's not your fault. I meant more to protect potential victims of Wikipedia's Breaching experiment.
I am not the author of this essay. I wrote my opinion on the article's talk page here Arbabi second (talk) 04:59, 2 February 2025 (UTC)[reply]
Which breaching experiment specifically?—S Marshall T/C 08:58, 2 February 2025 (UTC)[reply]
@S Marshall
Sorry. I prefer to keep my suspicions to myself for now. As much as it is possible that such a thing exists. I'm not talking about random playfulness by a new user. I'm talking about calculated, organized activity. But my specific question is, what measures does Wikipedia have in place to deal with this kind of harmful activity? Arbabi second (talk) 11:38, 2 February 2025 (UTC)[reply]
I remember a long time ago when "research" like this was performed on Usenet. The remedy now is as it was then. P.I. Ellsworth , ed. put'er there 12:28, 2 February 2025 (UTC)[reply]
Non-specific suspicions, Arbabi second? Are you concern trolling?—S Marshall T/C 16:08, 2 February 2025 (UTC)[reply]
This question inspires more questions:
  • How do you differentiate between "psychological" and "non-psychological" research?
  • What's the standard for "covert"? "Unknown to everyone"? "Something I don't remember agreeing to"?
  • Is a covert study more likely to be harmful? Is disclosed/non-covert research more likely to be harmless? (Generally, the potential for harm is a reason given for disclosing it and requiring informed consent, so it seems likely to be the other way around.)
  • Who do you think would be doing this research?
  • Do you think that a document saying "Covert psychological research is naughty" would stop them?
  • What kind of research do you think they would do on wiki? How do you imagine that harming people?
  • Do you think that an A/B software test is "psychological research"?
WhatamIdoing (talk) 00:06, 2 February 2025 (UTC)[reply]
@WhatamIdoing
I am not the author of this essay. I wrote my opinion on the article's talk page here. But regarding your questions, I should say that I was initially referring to organized Breaching experiment. Arbabi second (talk) 04:34, 2 February 2025 (UTC)[reply]
I'm only familiar with the regulations for research involving human subjects in the US. If you want to know relevant US law, you might start with this FAQ, especially the first section. Observational research on WP using public data (such as how articles change over time through edits, or what editors say on talk pages) is allowed and does not require informed consent. However, research like a "breaching experiment" that you referred to above would require informed consent. As Robert McClenon pointed out, WP's policy is also that "research projects that are disruptive to the community or which negatively affect articles—even temporarily—are not allowed." Below, you say "Who would conduct this research? Universities, tech companies, and independent researchers may conduct psychological studies, sometimes without users’ awareness." I don't know how tech companies handle research involving human subjects, but universities have institutional review boards (IRBs) and do not allow research without consent except in situations that are exempt by law (e.g., "the observation of public behavior when the investigator(s) do not participate in the activities being observed"). Could someone nonetheless covertly start a breaching experiment involving editors? I don't see how to prevent it, though blocking policies would likely interrupt it. FactOrOpinion (talk) 17:23, 2 February 2025 (UTC)[reply]
@FactOrOpinion
Thank you for your attention and information. The topic of human research is important but very broad. I am most interested in Wikipedia's rules on this matter. My intention is to find and translate these rules into Persian. For example: that WP policy is that "research projects that are disruptive to the community or which negatively affect articles—even temporarily—are not allowed." On which page is it? If possible, please link to that page. Arbabi second (talk) 20:23, 2 February 2025 (UTC)[reply]
@اربابی دوم, that quote came from Wikipedia is not a laboratory. The other page that Robert McClenon highlighted, WP:Ethically researching Wikipedia, is also useful, though it's an information page rather than a policy. This Wikimedia page might have some useful information, though again it's not a policy page. FactOrOpinion (talk) 22:06, 2 February 2025 (UTC)[reply]
@Robert McClenon@S Marshall@WhatamIdoing
Difference between psychological and non-psychological research
Psychological research focuses on behavior, emotions, cognition, and social interactions, while non-psychological research may study technical data, usage patterns, or system efficiency.
Standard for "covert"
Covert research typically means a study conducted without participants’ knowledge or informed consent. Forgetting prior consent is different from not being informed at all.
Is covert research more likely to be harmful?
Yes, because lack of informed consent can lead to ethical concerns or psychological harm. Open research is usually subject to ethical oversight.
Who would conduct this research?
Universities, tech companies, and independent researchers may conduct psychological studies, sometimes without users’ awareness.
Would a policy against covert research be effective?
A formal policy could discourage such research, but enforcement and oversight would be necessary to prevent violations.
What kind of research could be done on Wikipedia?
Studies on user behavior, editing patterns, social interactions, and how information influences decision-making.
Is an A/B software test considered psychological research?
It depends. If it only tests interface improvements, then no. But if it examines users' perceptions, emotions, or behaviors without their knowledge, then it could be psychological research. Arbabi second (talk) 12:23, 2 February 2025 (UTC)[reply]

RfC: Should the explanation of “self-published” in WP:SPS be revised?

 You are invited to join the discussion at Wikipedia_talk:Verifiability/SPS_RfC. FactOrOpinion (talk) 21:17, 26 January 2025 (UTC)[reply]

Loose Restrictions on Free Speech

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


I believe Wikipedians should be able to hold right wing political opinions without facing heavy discrimination. The site's policies are very much left wing, and due to that, Wikipedia should be more open to right wing opinions. What is allowed to be said here should be loosened. We should not listen to the 0.01% of people who are offended; otherwise Wikipedia would be an oligarchy. SimpleSubCubicGraph (talk) 04:01, 27 January 2025 (UTC)[reply]

I think people misunderstood what I meant here. I am not trying to promote an anarchist Wikipedia; I am trying to allow more speech but not make Wikipedia a free speech forum (despite the name). I am trying to remove certain limitations that censor right wing opinions. SimpleSubCubicGraph (talk) 05:50, 27 January 2025 (UTC)[reply]
I did change my suggestion but the main point for this suggestion is that right wing opinions are discriminated against and censored on Wikipedia. This violates NPOV as left wing opinions are accepted but right wing opinions are not. SimpleSubCubicGraph (talk) 05:53, 27 January 2025 (UTC)[reply]
This is just disruptive at this point. EvergreenFir (talk) 06:01, 27 January 2025 (UTC)[reply]
@EvergreenFir I'm not trying to be disruptive, I read over Wikipedia policies, I see a left wing bias in there that prevents religious and right wing people from expressing their opinions and I try to fix that. SimpleSubCubicGraph (talk) 06:16, 27 January 2025 (UTC)[reply]
We're not a venue meant to empower you (or anyone) in expressing your opinions. Remsense ‥  07:05, 27 January 2025 (UTC)[reply]
A distinction without a difference. We do not embrace free speech for its own sake, but to the degree it fosters building an encyclopedia. That is, explicitly, the point. Remsense ‥  07:03, 27 January 2025 (UTC)[reply]
@SimpleSubCubicGraph: I think a policy proposal needs to be much more concrete than what you've said here. Could you give a specific example of a "left-wing policy" Wikipedia currently has, and how you think it should be changed? jlwoodwa (talk) 06:02, 27 January 2025 (UTC)[reply]
@Jlwoodwa the ones on pronouns and incivility. It's very left wing to me and it goes against my morals and religion, and that's why I just want a site that is moderate, not liberal-leaning. SimpleSubCubicGraph (talk) 06:14, 27 January 2025 (UTC)[reply]
That's still not a policy proposal. Here's an example of what I think meets the minimum level of concreteness:

I don't like how the Wikipedia:Article titles policy says to use common names. I think official names should always be used when they exist.

Do you see what I mean? jlwoodwa (talk) 06:34, 27 January 2025 (UTC)[reply]
Wikipedia's policies and guidelines are a product of community consensus. Everyone is free to make proposals and attempt to establish a new consensus. —Bagumba (talk) 08:57, 27 January 2025 (UTC)[reply]
It is entirely possible to be right-leaning and civil, just as it is possible to be left-leaning and uncivil. Incivility is fairly universally condemned as unproductive and unprofessional, and I would much prefer the former over the latter. Also, keep in mind that your definition of "liberal" as "left-wing" is not how most of the world uses that word. Based on your description of your beliefs, you sound like a liberal to me. Toadspike [Talk] 09:07, 27 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

The role of ChatGPT in Wikipedia

Does ChatGPT play a role in Wikipedia's editorial and administrative affairs? To what extent is this role? If there is a policy, history, or notable case in this regard, please link to it. Arbabi second (talk) 17:29, 28 January 2025 (UTC)[reply]

This is not the right venue to post this topic on, a better place to put this would be the Teahouse. Regardless, WP:CHATGPT is a good starting point to learn about this. For the policy on using it in articles, see WP:RSPCHATGPT. Hope this helps! The 🏎 Corvette 🏍 ZR1(The Garage) 18:28, 28 January 2025 (UTC)[reply]
Not policy, guideline-ish. Gråbergs Gråa Sång (talk) 18:56, 28 January 2025 (UTC)[reply]
I agree the policy village pump isn't the right place to discuss general questions on ChatGPT's usage on Wikipedia, but just in case anyone's interested there's a study interviewing Wikipedians about their LLM usage which I think should shed some light on how users here are currently using ChatGPT and the like. Photos of Japan (talk) 18:46, 28 January 2025 (UTC)[reply]
@Gråbergs Gråa Sång@Photos of Japan@The Corvette ZR1
It was very useful information but unfortunately not enough. Thank you anyway. Arbabi second (talk) 20:29, 28 January 2025 (UTC)[reply]
We aren't allowed to sign things created by others with our user name. I think using AI generated contents without explicit disclosure should fall under that, either in discussion or article space. Graywalls (talk) 07:19, 31 January 2025 (UTC)[reply]
If you're interested, we also have WP:WikiProject AI Cleanup/Resources that has a list of relevant resources and discussions about that topic! (And an archive of the project's discussions at WP:AINB) Chaotic Enby (talk · contribs) 11:15, 31 January 2025 (UTC)[reply]

Policy on use of interactive image maps

There appears to be a slight conflict between MOS:ACCESSIBILITY and MOS:ICONS. The former says:

Do not use techniques that require interaction to provide information, such as tooltips or any other "hover" text. Abbreviations are exempt from these requirements, so the template (a wrapper for the <abbr> element) may be used to indicate the long form of an abbreviation (including an acronym or initialism).

And makes ample reference to ensuring accessibility for screen readers. The latter says

Image maps should specify alt text for the main image and for each clickable area; see Image maps and {{English official language clickable map}} for examples.

And the linked image map no longer has an interactive image map; I'm uncertain whether that resulted from a single editor or a wider discussion. This feels like one of those small places where policy may have evolved, but as image maps are used so rarely, there doesn't seem to be clear guidance here. A good example of this in action is Declaration of Independence (painting) and the monstrosity at Gale (crater)#Interactive_Mars_map. I'd personally interpret MOS:ACCESSIBILITY as dissuading image maps entirely, but that doesn't appear to be a clear policy directive. Warrenᚋᚐᚊᚔ 09:21, 29 January 2025 (UTC)[reply]

Is there any relevant distinction to be made here on which kind of device a user chooses to employ? Thanks. Martinevans123 (talk) 11:22, 29 January 2025 (UTC)[reply]
I can't imagine there isn't a policy somewhere that's basically "Don't break the mobile browsing experience". The problem with imagemaps is that they don't scale nicely to different-sized devices; at some point the size needs to stay fixed so the links map appropriately. This is why I sort of feel there may be a policy gap here, since several things would imply "don't use imagemaps", but we also have explicit guidance on how to use them. Warrenᚋᚐᚊᚔ 11:26, 29 January 2025 (UTC)[reply]
Ah thanks. So editors/ readers who habitually use only desktop or laptop devices may not ever realise there's a problem? Martinevans123 (talk) 11:31, 29 January 2025 (UTC)[reply]
Or even readers who use more recent phones. It's easy to forget that a high end iPhone/Android device may have a much higher resolution screen than the vast majority of phones globally. Even if it renders properly, the individual click points in an imagemap can get so compressed that they're not interactable. This puts us in a situation of populating articles with navigational elements that can only be utilized a: on desktop and b: by sighted users. Warrenᚋᚐᚊᚔ 11:34, 29 January 2025 (UTC)[reply]
The mobile interface is different. Would it be better to simply disable those kinds of images for mobile users (and maybe replace them with some kind of advice/apology), instead of taking them away for all users? Perhaps that's too difficult. Thanks. Martinevans123 (talk) 17:03, 29 January 2025 (UTC)[reply]
There's a way to do that with Template:If mobile, but that's apparently deprecated, so it seems like this policy overrides it, which seems like an even further call to avoid using imagemaps (without being exactly clear enough to be a policy guideline on imagemaps). Warrenᚋᚐᚊᚔ 17:12, 29 January 2025 (UTC)[reply]
It's possible to navigate between imagemap links using the Tab key. Hence it seems likely they are rendered by screen readers as if they are a sequence of image links, explaining why MOS:ICONS requires alt text to be specified for each clickable area. I suspect MOS:NOTOOLTIPS is intended to apply to mouse-only interactive elements such as tooltips, rather than Tab-interactive elements such as wikilinks and image links, in which case the two policies are mutually consistent.
Your point about mobile browsers is a good one. WMF's Principal Web Software Engineer briefly commented on this back in 2017 in T181756 and Template talk:Features and memorials on Mars#c-Jon_(WMF)-2017-11-30T22:50:00.000Z-Template not mobile friendly, suggesting wrapping the element [image map] with a scrollable container so as not to break mobile readability. Another possible approach would be to add custom CSS via WP:TemplateStyles. Template:Calculator#Best practices suggests [using] media queries with template styles to adjust the display depending on how wide the screen is, though to pursue this option, I think we'd need to call in someone with more expertise than me. Preimage (talk) 14:14, 31 January 2025 (UTC)[reply]
The problem with that, though, is if you appropriately scale imagemaps for mobile screens that have more than a couple of clickable elements, you've basically rendered it unusable just by virtue of the size of fingers and screens. Not sure that's a policy problem, but it is a problem. Warrenᚋᚐᚊᚔ 14:17, 31 January 2025 (UTC)[reply]

Adding the undisclosed use of AI to post a wall of text into discussions as disruptive editing

I think participating in discussion processes like AfD and consensus building by flooding them with "wall of text" responses generated by AI should be added to disruptive editing. Those kinds of responses are generated quickly with low effort by AI, but consume considerable time to read and process. Graywalls (talk) 07:22, 31 January 2025 (UTC)[reply]

Courtesy link to the above section § Should WP:Demonstrate good faith include mention of AI-generated comments? where a similar proposal is being discussed. Chaotic Enby (talk · contribs) 11:28, 31 January 2025 (UTC)[reply]
Use a chatbot to generate a reply; that way the two AIs can just talk with each other and it doesn't waste editors' time. -- LCU ActivelyDisinterested «@» °∆t° 14:51, 31 January 2025 (UTC)[reply]
Flooding discussions with walls of text is disruptive regardless of whether the walls are AI-generated or human-written. Equally-sized walls of equal relevance to the discussion are equally disruptive regardless of the method used to write them. Thryduulf (talk) 06:54, 1 February 2025 (UTC)[reply]
I agree that equal-sized text of equal quality is equally disruptive (that's tautologically true), however there's a difference in the effort required to create them. Human-generated walls of disruptive text are limited by the time and effort the disruptive human is willing to put in, which means that as long as our community continues to maintain a sufficiently healthy proportion of constructive vs disruptive editors, the problem is manageable. The problem with LLM wallspam is that it takes effort to process and respond to disruptive discussion, and a very small number of disruptive editors using LLMs can consume a very large amount of human bandwidth dealing with them. AI doesn't help the constructive response much, since even if you are using AI to summarize the disruptive text wall, and AI to craft your response to the disruptive text wall, you still need to put in the effort to internalize the content of the wall of text, decide if it is disruptive, and then craft a prompt for your own LLM to use in the response. All of which results in a situation where it takes a disproportionate amount of effort to respond to the disruption compared to the effort it takes to produce it. If LLM use is disclosed that would help the issue, but personally I would prefer that the other person I am communicating with simply send me the prompt they put into their LLM and let me use the LLM to elaborate and clarify it myself if I feel that is helpful. -- LWG talk 05:25, 2 February 2025 (UTC)[reply]
How much time and effort the author puts into writing a text wall is irrelevant, what matters is how much time and effort it takes other people to read it. It makes absolutely no difference to this whether it was written by a human or an AI. If someone writes text walls very quickly, they will simply get to the point where people advise them about it (and take action if necessary) sooner. Thryduulf (talk) 12:59, 2 February 2025 (UTC)[reply]
I agree with what matters is how much time and effort it takes other people to read it. The example cited below of the discussion at Wikipedia:Articles_for_deletion/Ribu is a good example of the cost to the project of low-content discussion - it takes a significant amount of time to even determine that the contribution is low-quality, so it's not always a viable option to ignore it. I'm coming to this conversation from a perspective of someone who has in the past spent a lot of effort engaging with newer editors who come here in good faith in the sense that they want to build a better encyclopedia, but who lack understanding of our community norms around consensus building and tend to view POV issues as a battleground. One option is to simply ignore such people's comments, revert their contributions, and wait for them to get frustrated and leave or do something bad enough that they get blocked. But if we make that our default stance towards problem editors the pool of active editors will continue to decline, which hurts the long-term health of the project. To give these editors a chance to develop into useful contributors requires wading through, understanding, and replying to a lot of low-quality comments, and if these comments are AI generated then I'm just wasting my time. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
  • Support: ban prompt-lawyering and treat it like sockpuppetry. It's obvious that widespread access to generative AI allows editors to flood discussions with prompt-generated text, and this is no doubt happening already on the site: it's just too easy to do. While it appears some editors here are keen to downplay the very real danger here (what's that about?), we need a statement noting that this is not OK and is a form of not only wikilawyering but outright abuse. When it can be identified, this needs to be treated just as severely as sockpuppetry. :bloodofox: (talk) 07:11, 1 February 2025 (UTC)[reply]
    I've seen so many human-generated walls of text that were repetitive and failed to move discussion forward through new analysis or points. Personally I feel the community needs to deal with this problem, no matter how the text was created. isaacl (talk) 17:38, 1 February 2025 (UTC)[reply]
    Exactly this. The problem is the wall, not how the wall was built. Thryduulf (talk) 19:03, 1 February 2025 (UTC)[reply]
    One solution is AI, which can summarize human generated walls of text. "Summarize the following in 2 sentences". -- GreenC 19:23, 1 February 2025 (UTC)[reply]
    AI summary of this section: The discussion thread debates whether AI-generated "walls of text" in Wikipedia discussions should be considered disruptive editing. While some argue for treating AI-generated content like sockpuppetry, others point out that human-generated walls of text can be equally problematic, suggesting the focus should be on addressing lengthy, unproductive contributions regardless of their source. GreenC 19:25, 1 February 2025 (UTC)[reply]
    True, but we shouldn't force everyone to rely on AI writing (potentially inaccurate) summaries if they wish to meaningfully participate in a discussion. Chaotic Enby (talk · contribs) 20:38, 1 February 2025 (UTC)[reply]
    +1 Hydrangeans (she/her | talk | edits) 04:40, 2 February 2025 (UTC)[reply]
    AI summation is a tool you can use, or not, it's your choice to generate and consume it. Obviously posting AI summation is not appropriate unless solicited. -- GreenC 05:03, 2 February 2025 (UTC)[reply]
    Unless the community decides to delegate decision-making to an AI program, repeated redundant verbose comments can swamp Wikipedia's current discussion format, and unnecessarily prolong discussion. This results in participants losing focus and no longer engaging, which makes it harder to build a true consensus. The problem is not trying to understand such comments, but how they slow down progress. isaacl (talk) 22:52, 1 February 2025 (UTC)[reply]
    There is WP:TEXTWALL ("The rush you feel in your veins as you type it"). It has varieties of walls of text. Maybe a new section for AI. -- GreenC 05:21, 2 February 2025 (UTC)[reply]
Oppose (although nothing is being proposed, but whatever). My opinion is unchanged from the last time this wall-of-text-producing topic came up. Gnomingstuff (talk) 22:49, 1 February 2025 (UTC)[reply]
Maybe we should consider a temporary moratorium on proposals to ban uses of AI. Just 30 days? It's the same thing over and over: "I'm worried that someone might use LLM to generate replies that don't represent their real thoughts. Please, let's have a rule that says they're bad, even though it's unenforceable and will result in false accusations." WhatamIdoing (talk) 00:20, 2 February 2025 (UTC)[reply]
Yes. Perhaps followed by a requirement that all future proposals explicitly state how they differ from the previous ones that have been rejected or failed to reach consensus, how/why the proposer believes that the differences will overcome the objections and/or why those objections do not apply to their proposal, and why AI needs to be called out specifically. This last point is poorly worded, I'm thinking of a requirement to explain why e.g. AI walls of text are a different problem to non-AI walls of text that mean we need a policy/guideline/whatever for AI walls of text specifically rather than walls of text in general. Thryduulf (talk) 06:52, 2 February 2025 (UTC)[reply]
The most recent major RfCs on generative AI were closed with strong consensuses supporting restrictions. We aren't going to put a moratorium on such discussions just because you and Thryduulf ardently opposed their outcomes. JoelleJay (talk) 17:59, 2 February 2025 (UTC)[reply]
Agree that such behavior is disruptive if it is in fact happening. I haven't personally seen it, but I think it is reasonable for the community to set expectations before the problem behavior becomes widespread. I would like the community to 1) encourage transparency in the use of LLMs to write content, and 2) recognize that it is unreasonable and disruptive to expect the other party to put a lot more effort into comprehending and replying to your comments than you put into creating them. If all you did to contribute to the discussion is spend 30 seconds putting the prompt "summarize the reasons to keep/delete this article" into your LLM of choice, then I shouldn't be expected to do more in response than spend 30 seconds saying "looks like they have an opinion about this, but couldn't be bothered to articulate it themselves." As Thryduulf has pointed out, this principle extends beyond LLMs: any low-effort, low-quality contribution to the discussion merits a similarly minimal response; however, the extreme ease of generating responses with LLMs and the difficulty of quickly identifying their use make them of special concern. -- LWG talk 05:34, 2 February 2025 (UTC)[reply]
I've previously written about being respectful of other editors, which includes being respectful of the time of others, such as making a concerted effort to be up-to-date on discussions when making a comment, copy editing one's remarks to be concise, avoiding comments that aren't germane to the discussion at hand, being understanding if no one responds to your inquiries, and considering how your actions affect the time spent by others on Wikipedia. Focusing on the time spent writing a comment is a distraction from the real problem of poor communication. I don't want editors to argue that their comments deserve a response because they spent a lot of time writing them. isaacl (talk) 06:16, 2 February 2025 (UTC)[reply]
Oh I definitely don't want to imply that comments deserve a response because of the time spent writing them. But I'm coming at this from the perspective of someone who has spent a lot of hours over the years responding to comments that don't deserve a response, because failing to respond will either result in escalating anger and continued disruption, or will drive editors away from the project. We can say "good riddance" to such editors, but a lot of us weren't the most consensus and culture savvy in our earlier days as editors, and we already struggle with editor retention and development. If someone is writing human-generated textwalls but is redeemable, I'd like to engage with them and try to mentor them into a better understanding of our culture, but I'm wasting my time if I'm doing that with LLMs, and it takes a lot of effort from me to tell the difference. Hence why I feel that the use of LLMs is acceptable, but should be disclosed. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
This is one example Wikipedia:Articles_for_deletion/Ribu. I believe signing your user name to your comment that was written by someone else (including AI) without attribution shouldn't be allowed in the first place. Graywalls (talk) 06:56, 2 February 2025 (UTC)[reply]
Oh it is definitely happening. Here's just the most recent example that I personally collapsed. JoelleJay (talk) 18:03, 2 February 2025 (UTC)[reply]
Thanks to Graywalls and JoelleJay for the examples. I agree that that behavior is a drain on valuable project resources. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
Seems a variation of WP:TEXTWALL. There is already an established AN/I practice for this, refuse to read it and ask for something shorter. CMD (talk) 11:26, 2 February 2025 (UTC)[reply]
We recently got strong consensus (a super-majority) that it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs, so it makes perfect sense to reflect this in DE policy. JoelleJay (talk) 17:27, 2 February 2025 (UTC)[reply]
That makes sense, although I don't recall that consensus being directly related to walls of text. CMD (talk) 18:58, 2 February 2025 (UTC)[reply]

Looking at RfCs in AP areas, I see a lot of very new editors, maybe EC should be required

I think that requiring EC in such RfCs is important - I've seen a lot of new editors on both sides of issues who clearly haven't much understanding of our policies and guidelines. Doug Weller talk 12:17, 31 January 2025 (UTC)[reply]

I would generally support such a measure at this time. We are certainly seeing a bunch of "chatter" from new and IP contributors. In any normal discussion, I'd say fine. But in CTs or meta discussions, requiring entry permissions for formal processes is not an unreasonable step in adding layers of protection to vital conversations. Generally speaking, contributors who have no stake in en.wiki are less concerned about its continued function than long-time contributors are. Any rando can hurl bombs with impunity. This impunity is not always great for civil disagreement. BusterD (talk) 13:57, 31 January 2025 (UTC)[reply]
That's too extreme imo. If these new editors are making non-policy based arguments, surely the RfC closer will take that into account when they make their close. Some1 (talk) 14:11, 31 January 2025 (UTC)[reply]
I'm with Some1 in thinking this is too extreme, especially when sometimes RfCs come about because new eyes see an article and mention something on the talk page. I do think that maybe a more explicit statement about sticking to established policy would be helpful, but not simply making EC privileges even more fundamental to being able to use Wikipedia. I do wish, though, that suggesting we simply ignore Wikipedia policy during RfCs were considered a policy violation that could warrant a minor sanction, unless it is very clearly a good-faith suggestion.
Is this an issue of RfCs disproportionately attracting new users whose bad suggestions junk things up, or more of a general thing? Warrenᚋᚐᚊᚔ 14:22, 31 January 2025 (UTC)[reply]
You think that suggesting that we follow the long-standing official policy that Wikipedia:If a rule prevents you from improving or maintaining Wikipedia, ignore it. should be sanctionable? If so, you'll be the first in line for punishment, because you just suggested that we ignore that policy. WhatamIdoing (talk) 00:35, 2 February 2025 (UTC)[reply]
The EC system for ARBPIA talkpages is a mess. Banning people from participating, but still leaving all the technical tools for them to participate, and so enforcing bans by reverting their contributions, both takes up editor time to enforce and seems a deeply poor way to treat good faith contributors. I would oppose this system being extended elsewhere in that way. It should only be considered if we first have agreed technical ways to manage it, for example we hold all AP RfCs in EC-protected subpages and have big labels informing editors of the situation at the top and bottom of the RfCs. CMD (talk) 14:36, 31 January 2025 (UTC)[reply]
Noting that, since a few days ago, editors don't have all the technical tools for them to participate anymore, as an edit filter disallows non-edit request posts. My bad, it looks like the edit filter is still being tested and doesn't block posts yet. Chaotic Enby (talk · contribs) 16:51, 31 January 2025 (UTC)[reply]
Thanks for the update, I wonder how it excepts requests. That said, this would block for the entire talkpage wouldn't it, not just RfCs as is being proposed? CMD (talk) 01:32, 1 February 2025 (UTC)[reply]
This does affect the entire page, so a similar edit filter for RfCs would likely need the RfC itself to be transcluded from a separate page. For the edit request part, we "just" had to make a regex looking for every single redirect of {{edit protected}} and {{edit extended-protected}} (a lot!) Chaotic Enby (talk · contribs) 01:47, 1 February 2025 (UTC)[reply]
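To illustrate the kind of pattern the edit filter involves, here is a hedged sketch in Python rather than AbuseFilter syntax. It matches only the two canonical template names named above; the actual filter also matches every redirect of those templates, which this illustration deliberately omits.

```python
import re

# Sketch only: matches {{edit protected}} and {{edit extended-protected}}.
# A production filter must also cover the many template redirects.
EDIT_REQUEST = re.compile(
    r"\{\{\s*edit\s+(extended-)?protected\s*[|}]",
    re.IGNORECASE,
)

def is_edit_request(wikitext: str) -> bool:
    """True if the post appears to transclude an edit-request template."""
    return bool(EDIT_REQUEST.search(wikitext))
```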
Just! Thanks for the work. CMD (talk) 01:51, 1 February 2025 (UTC)[reply]
Wait, you're actually planning to block people for using the Edit buttons? Even if they don't know what's going on? If you don't want non-EC folks participating on a page, then you really need to use page protection. Don't give them an Edit button and then block them for not noticing that they weren't supposed to use it. WhatamIdoing (talk) 00:37, 2 February 2025 (UTC)[reply]
This edit at Talk:Gulf of Mexico [2] is not unusual, see also [3] or [4] - I wish I could easily find out how many new editors there are there. — Preceding unsigned comment added by Doug Weller (talkcontribs) 14:52, 31 January 2025 (UTC)[reply]
New accounts whose sole (or essentially sole) purpose is to comment in contentious RFCs should be tagged with Template:Single-purpose account. Hemiauchenia (talk) 17:24, 31 January 2025 (UTC)[reply]
Well... I dunno about that. You shouldn't label someone as an SPA if they've only made one edit, because that's not sensible. We were all "single-purpose accounts" on our first edit. For example, your first four edits were about skunks.
Maybe we need two different SPA labels, one of which rather benignly says something like "Welcome to Wikipedia! If you have any questions, you can get answers at the Wikipedia:Teahouse" and the other says "This account has made more than n edits but has made few or no edits outside this topic area". WhatamIdoing (talk) 00:47, 2 February 2025 (UTC)[reply]
We can describe a temporal version of the latter now thanks to WP:ARBBER. CMD (talk) 02:16, 2 February 2025 (UTC)[reply]
So maybe retrofit the concept of an SPA to say that if you've made 11 edits, and "only" 7 are about American politics, then you're not an SPA? WhatamIdoing (talk) 02:22, 2 February 2025 (UTC)[reply]
The opposite, ARBBER expects no more than 3 of 11 edits to be about PIA. CMD (talk) 02:37, 2 February 2025 (UTC)[reply]
This rule is going to need a minimum number of edits. WhatamIdoing (talk) 22:55, 2 February 2025 (UTC)[reply]

Guideline against use of AI images in BLPs and medical articles?

I have recently seen AI-generated images added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform readers as to how that person actually looks, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?

To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)[reply]

What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)[reply]
Same with animals, organisms etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)[reply]
I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)[reply]
I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)[reply]
There hasn't been a full discussion yet, and we have a list of uses at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)[reply]
Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)[reply]
Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)[reply]
There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)[reply]
While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)[reply]
For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)[reply]
The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)[reply]
We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)[reply]
I wouldn't call it an upscale given that whatever was done appears to have removed detail, but we use that image specifically because it is the edited image which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)[reply]
Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)[reply]
Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)[reply]
I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)[reply]
For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin°Talk 19:12, 30 December 2024 (UTC)[reply]
I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)[reply]
Regarding some sort of brightline ban on the use of any such image in anything article medical related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux Talk 19:26, 30 December 2024 (UTC)[reply]
I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles such images are inappropriate, but it would be inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)[reply]
An AI-generated medical image. No idea if this is accurate, but if it is, I don't see what the problem would be compared to if this was made with ink and paper. — xaosflux Talk 00:13, 31 December 2024 (UTC)[reply]
I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)[reply]
AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)[reply]
AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)[reply]
I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)[reply]
AI-generated images should always say "AI-generated image of [X]" in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)[reply]
Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)[reply]
always end up with "no consensus" and no guidelines on use at all, even if most people are against it Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)[reply]
Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)[reply]
We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)[reply]
Do you really mean to ban single images showing the way birds use their wings?
Why wouldn't we want "fake Photoshop composites"? A composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)[reply]
Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)[reply]
Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge; at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)[reply]
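For concreteness, the determinism described in the comment above can be sketched in a few lines. This is a hypothetical illustration, not any editor's actual workflow: layer compositing is a fixed arithmetic rule (the standard alpha-over operator), with no sampling or prompt interpretation involved, so the same inputs always produce the same output.

```python
# Hypothetical sketch of deterministic layer compositing: the standard
# alpha-over operator on single RGBA pixels (all channels in 0.0-1.0).
# This is illustrative only; real editors would use Photoshop/GIMP layers.

def over(top, bottom):
    """Composite one RGBA pixel over another with the alpha-over rule."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    out_a = ta + ba * (1 - ta)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / out_a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), out_a)

# An opaque top layer replaces the bottom pixel entirely,
# while a fully transparent top layer leaves it untouched.
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))
print(over((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0)))
```

The point of the sketch is that the output is a pure function of the inputs; there is no model, training data, or randomness anywhere in the pipeline.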
"Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop". Others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)[reply]
I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
  1. Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
  2. Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects, one criterion we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
The potential harm I mentioned above is twofold: firstly, Wikipedia is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)[reply]
I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
"A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article." That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
"The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been." In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
"Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware." In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)[reply]
Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate) existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option, and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
"AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated." The key words here are "supposed to be" and "shouldn't"; editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.
Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)[reply]
For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.
Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)[reply]
"the guideline is mostly to take care of the 'prompt fed in model' BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)." There are only two possible scenarios regarding verifiability:
  1. The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
    • Verifiability is no barrier to using the image, whether it is AI generated or not.
    • If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
  2. The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
    • The only reasons we should ever use the image are:
      • It has been the subject of notable commentary and we are presenting it in that context.
      • The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
    This is already policy, whether the image is AI generated or not is completely irrelevant.
You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)[reply]
In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.
In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)[reply]
If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)[reply]
"AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated."
I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)[reply]
Yes, but that's a Commons thing. A guideline on English Wikipedia shouldn't decide of what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)[reply]
I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images on Wikipedia. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)[reply]
    Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images, such as AI-generated images of chemicals or mathematical structures, may potentially be too simple to be copyrightable. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
    Prompt-generated images are unquestionably a violation of WP:OR and WP:SYNTH: type in your description and you get an image scraped from who knows what and who knows where, often Wikipedia. Wikipedia isn't a WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)[reply]
    "Unquestionably"? Let me question that, @Bloodofox. ;-)
    If an editor were to use an AI-based image-generating service and the prompt is something like this:
    "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
    • 2014–15: played 34 games, won 25, tied 4, lost 5
    • 2015–16: played 34 games, won 28, tied 4, lost 2
    • 2016–17: played 34 games, won 25, tied 7, lost 2
    • 2017–18: played 34 games, won 27, tied 3, lost 4
    • 2018–19: played 34 games, won 24, tied 6, lost 4
    • 2019–20: played 34 games, won 26, tied 4, lost 4
    • 2020–21: played 34 games, won 24, tied 6, lost 4
    • 2021–22: played 34 games, won 24, tied 5, lost 5
    • 2022–23: played 34 games, won 21, tied 8, lost 5
    • 2023–24: played 34 games, won 23, tied 3, lost 8"
    I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that?
    We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)[reply]
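For comparison, the chart described in the prompt above can be produced with no model at all. A minimal, hypothetical pure-Python sketch that emits an SVG stacked bar chart directly from the stated data and team colors (the function name and layout constants are made up for illustration):

```python
# Hypothetical sketch: a deterministic SVG stacked bar chart built from the
# Bayern Munich data given in the prompt. (won, tied, lost) per season.
seasons = {
    "2014-15": (25, 4, 5), "2015-16": (28, 4, 2), "2016-17": (25, 7, 2),
    "2017-18": (27, 3, 4), "2018-19": (24, 6, 4), "2019-20": (26, 4, 4),
    "2020-21": (24, 6, 4), "2021-22": (24, 5, 5), "2022-23": (21, 8, 5),
    "2023-24": (23, 3, 8),
}
COLORS = ("#DC052D", "#0066B2", "#000000")  # team colors from the prompt

def stacked_bar_svg(data, bar_w=30, gap=10, scale=5):
    """Stack won/tied/lost segments bottom-up, one bar per season."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">']
    for i, (season, counts) in enumerate(data.items()):
        x, y = i * (bar_w + gap), 200
        for count, color in zip(counts, COLORS):
            h = count * scale
            y -= h  # next segment sits on top of the previous one
            parts.append(f'<rect x="{x}" y="{y}" width="{bar_w}" '
                         f'height="{h}" fill="{color}"><title>{season}</title></rect>')
    parts.append("</svg>")
    return "\n".join(parts)

svg = stacked_bar_svg(seasons)
print(svg[:60])
```

Nothing here is "synthesized" from unknown sources: every rectangle is a direct arithmetic function of the supplied numbers, which is the property at issue in the spreadsheet comparison.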
    Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here, and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)[reply]
    "We're discussing generating images of people, places, and objects here". The proposal contains no such limitation. "and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH". Do you have a citation for that? Other people have explained better than I can how it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)[reply]
    As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Wikipedia, which is already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)[reply]
    So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
    A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
    (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)[reply]
    Review WP:SYNTH; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editor retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)[reply]
    Please scroll down below SYNTH to the next section titled "What is not original research" which begins with WP:OI, our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)[reply]
    Yes, which explicitly states:
    It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
    Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)[reply]
    The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥  07:00, 31 December 2024 (UTC)[reply]
    100 dots: 99 chocolate-colored dots and 1 baseball-shaped dot
    @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
    I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)[reply]
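The hypothetical "100 dots" prompt in the comment above is another case where the image is a pure function of its specification. A made-up sketch (names and output format are illustrative, not anyone's actual tool) that produces the same grid deterministically, with one randomly placed distinct dot:

```python
# Hypothetical sketch: a 10x10 "1%" dot grid like the one described above.
# 99 plain dots plus 1 randomly placed special dot, seeded for reproducibility.
import random

def dot_grid(seed=0, size=10):
    random.seed(seed)                       # fixed seed = reproducible placement
    special = random.randrange(size * size) # index of the one "baseball" dot
    return ["O" if i == special else "." for i in range(size * size)]

grid = dot_grid()
for row in range(10):
    print(" ".join(grid[row * 10:(row + 1) * 10]))
print(grid.count("."), "plain dots,", grid.count("O"), "special")
```

Whether the renderer is a screenshot of emojis, this script, or an image model, the underlying content is the same countable specification; the disagreement in the thread is about the trustworthiness of the renderer, not the data.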
    As you know, Wikipedia has the unique factor of being entirely volunteer-run. Wikipedia has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Wikipedia editors do exist. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
    In addition, the Wikimedia Foundation's harebrained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
    Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
    As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (and been sent more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it need be said that the use of generative AI for content is especially dangerous because of its capabilities of fooling Wikipedia readers and Wikipedia editors alike.
    Wikipedia is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
    A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)[reply]
    A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Wikipedia articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Wikipedia editor could train an AI to convert their voice into Wikipedia-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)[reply]
    I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
    As a translator myself, I can only say: Oh please. Generative AI is notoriously terrible at translating beyond a very, very basic level, and that's not likely to change, ever. Due to the complexities of communication and little matters like nuance, all machine-translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Wikipedia itself).
    I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Wikipedia.
    Either you, a human being, can contribute to the project or you can't. Slapping a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Wikipedia in the first place!) onto the site isn't some kind of human substitute, it's just machine-regurgitated slop and is not helping the project.
    If people can't be confident that Wikipedia is made by humans, for humans, the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)[reply]
    I don't know how up to date you are on the current state of translation, but:
    In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
    Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
    88% of respondents use at least one CAT tool for at least some of their translation tasks.
    Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
    Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)[reply]
    You're barking up the wrong tree with the pro-generative-AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" — with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would know if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)[reply]
    "all machine translated material must be thoroughly checked and modified by, yes, human translators"
    You are just agreeing with me here.
    "If you're just trying to convey factual information in another language that machine translation engines handle well, AI/MT with a human reviewer can be a great option." (American Translation Society)
    There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)[reply]
    And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)[reply]
    I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Wikipedia article?" The question here is not "Shall we put AI-generating buttons on Wikipedia's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)[reply]
    I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
    Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
    I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
    But I'm not worried about a Wikipedia editor saying "Hey AI, give me a diagram of swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Wikipedia editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)[reply]
    Translators are not using generative AI for translation, the applicability of LLMs to regular translation is still in its infancy and regardless will not be implementing any generative faculties to its output since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)[reply]
    Translators are not using generative AI for translation this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)[reply]
    Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
  • Ban AI-generated from all articles, AI anything from BLP and medical articles is the position that seems it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥  06:53, 31 December 2024 (UTC)[reply]
    @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)[reply]
    I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥  07:02, 31 December 2024 (UTC)[reply]
    A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)[reply]
    Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥  07:18, 31 December 2024 (UTC)[reply]
    Like everyone said, there should be a de facto ban on using AI images in Wikipedia articles. They are effectively fake images pretending to be real, so they are out of step with the values of Wikipedia.--♦IanMacM♦ (talk to me) 08:20, 31 December 2024 (UTC)[reply]
    Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)[reply]
    @Remsense, exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)[reply]
    The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥  04:43, 2 January 2025 (UTC)[reply]
    How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)[reply]
    There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
    I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)[reply]
    I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)[reply]
  • I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)[reply]
    Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)[reply]
    That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)[reply]
    Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)[reply]
  • Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)[reply]
  • Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Wikipedia will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)[reply]
    For both issues AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI; if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)[reply]
  • Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
  • Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there, there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 uc 🎄 20:08, 31 December 2024 (UTC)[reply]
    It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Wikipedia is not going to be taken over by AI, AI is not out to subvert Wikipedia, we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)[reply]
    So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
    I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 uc 🎄 21:02, 31 December 2024 (UTC) Cremastra 🎄 uc 🎄 20:56, 31 December 2024 (UTC)[reply]
    Determining what benefits any image brings to Wikipedia can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
    The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Wikipedia's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Wikipedia images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)[reply]
  • Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade Talk to me 00:29, 1 January 2025 (UTC)[reply]
  • Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
Lachlan Macquarie?
  • Oppose blanket bans AI is just a new buzzword so, for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now (right). This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)[reply]
    So, you expect an AI, notoriously trained on Wikipedia (and whatever else is floating around on the internet), to correct Wikipedia where humans have failed... using the data it scraped from Wikipedia (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)[reply]
    I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology

To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps:

  1. Gathering information on the Opie portrait: This included details about its history, provenance, and any available information on its cost.
  2. Reviewing scholarly articles and publications: This step focused on finding academic discussions specifically addressing the attribution of the portrait to John Opie.
  3. Collecting expert opinions: Statements and opinions from art experts and historians were gathered to understand the range of perspectives on the certainty of the attribution.
  4. Examining historical documents and records: This involved searching for any records that could shed light on the portrait's origins and authenticity, such as Macquarie's personal journals or contemporary accounts.
  5. Exploring scientific and technical analyses: Information was sought on any scientific or technical analyses conducted on the portrait, such as pigment analysis or canvas dating, to determine its authenticity.
  6. Comparing the portrait to other Opie works: This step involved analyzing the style and technique of the Opie portrait in comparison to other known portraits by Opie to identify similarities and differences.
  • It was quite transparent in listing and citing the sources that it used for its analysis. These included the Wikipedia image but if one didn't want that included, it would be easy to exclude it.
    So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Wikipedia. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
    Andrew🐉(talk) 09:09, 2 January 2025 (UTC)[reply]
    They don't have to be black boxes but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Wikipedia is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)[reply]
    While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)[reply]
    Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)[reply]
  • Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)[reply]
    I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)[reply]
    That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)[reply]
    I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)[reply]
  • Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Wikipedia. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)[reply]
    Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)[reply]
    As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say if it changes the image), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)[reply]
    I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)[reply]
    Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)[reply]
  • Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)[reply]
  • Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)[reply]
  • Support a blanket ban to assure some control over AI-creep in Wikipedia. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)[reply]
  • Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)[reply]
    As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets in to WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Wikipedia, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Wikipedia editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)[reply]
  • Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)[reply]
  • Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Wikipedia is already an image desert and to reject something that is only at the cusp of development is unwise. scope_creepTalk 20:11, 5 January 2025 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)[reply]
  • Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)[reply]
Which parts of this photo are real?
  • Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon[6][7]. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Wikipedia should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)[reply]
  • Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts−except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view have no legitimate encyclopedic function whatsoever. Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)[reply]
    Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which have no legitimate encyclopedic function whatsoever. This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)[reply]
    That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)[reply]
    Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)[reply]
    Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
    "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)[reply]
    Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
    Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)[reply]
  • This was archived despite significant participation on the topic of whether AI-generated images should be used at all on Wikipedia. I believe a consensus has been/can be achieved here and should be closed, so I have unarchived it. JoelleJay (talk) 17:37, 2 February 2025 (UTC)[reply]

BLPs

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Are AI-generated images (generated via text prompts, see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora, a text-to-image model developed by xAI, to generate images... As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts.
AI-generated image of Laurence Boccolini
Some1 (talk) 12:34, 31 December 2024 (UTC)[reply]
AI-generated cartoon portrait of Germán Larrea Mota-Velasco

03:58, January 3, 2025: Note: that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model).

Some1 (talk) 11:10, 3 January 2025 (UTC)[reply]

notified: Wikipedia talk:Biographies of living persons, Wikipedia talk:No original research, Wikipedia talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)[reply]

  • No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)[reply]
    That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)[reply]
    There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)[reply]
  • No. Well, that was easy.
    They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 uc 🎄 20:00, 31 December 2024 (UTC)[reply]
    Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (uc) 19:17, 2 January 2025 (UTC)[reply]
  • No, with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLPs we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers, it would be fair to revisit any restrictions, but in this case I strongly believe that we should follow, not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)[reply]
  • No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)[reply]
  • No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)[reply]
  • Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)[reply]
    How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
    How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 uc 🎄 21:54, 31 December 2024 (UTC)[reply]
    How well can we determine how accurate a representation it is? in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)[reply]
    I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 uc 🎄 00:14, 1 January 2025 (UTC)[reply]
    I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)[reply]
    A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust".
    And even if you disagree with all that, can you not see the larger problem of AI images on Wikipedia getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)[reply]
    I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
    I think we're Wikipedia:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)[reply]
  • Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Wikipedia, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)[reply]
  • No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)[reply]
  • Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)[reply]
  • No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)[reply]
  • No. We don't permit falsifications in BLPs. Seraphimblade Talk to me 00:30, 1 January 2025 (UTC)[reply]
    For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade Talk to me 05:42, 3 January 2025 (UTC)[reply]
  • No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
    Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)[reply]
  • No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)[reply]
    Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)[reply]
    Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)[reply]
  • Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)[reply]
  • No not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)[reply]
  • No Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked, as it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)[reply]
    Under US law (per the Copyright Office), machine-generated images, including those by AI, cannot be copyrighted. That also means that AI images aren't treated as derivative works.
    What is still a legal concern is whether the use of bodies of copyrighted works, without any approval or license from the copyright holders, to train AI models constitutes fair use. There are multiple court cases where this is the primary challenge, and none has yet reached a decision. If the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or to delete their trained model and start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)[reply]
  • No, I'm in agreement with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI-generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)[reply]
    So you just said a portrait can be used because wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)[reply]
    To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
    However, I really want to stick to what you say at the end there: Heck, most AI looks closer to the real thing than any portrait.
    That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.

    Per the wording of the RfC of "depict BLP subjects," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)[reply]
  • No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)[reply]
    Gisèle Pelicot?
  • Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)[reply]
    Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (uc) 14:18, 1 January 2025 (UTC)[reply]
    Except it says right below it "AI-generated image of Laurence Boccolini." How much clearer can it be when it says point-blank "AI-generated image"? Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)[reply]
    Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)[reply]
    People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. Cremastra (uc) 14:15, 2 January 2025 (UTC)[reply]
    Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)[reply]
    Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (uc) 14:37, 1 January 2025 (UTC)[reply]
    Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)[reply]
    Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
    ...human [illustration] is not going to change or distort a person's appearance in the same way an AI image would. [Drawings] done by a [competent] person who is paying attention to what they are doing [...] by a person who is aware, while they are making [the drawing], that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra (uc) 20:56, 1 January 2025 (UTC)[reply]
    @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)[reply]
    I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above: The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. Cremastra (uc) 00:16, 2 January 2025 (UTC)[reply]
    Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)[reply]
    I think all AI-generated images, except simple diagrams as WhatamIdoing pointed out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (uc) 02:30, 2 January 2025 (UTC)[reply]
    To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (uc) 02:38, 2 January 2025 (UTC)[reply]
    Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)[reply]
    Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
    I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (uc) 15:30, 2 January 2025 (UTC)[reply]
    Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative diagram-rendering software that follows very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)[reply]
    Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)[reply]
    If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)[reply]
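    The reproducibility contrast drawn above can be sketched in a few lines. This is a minimal plain-Python illustration (the `render_scatter_svg` helper is hypothetical, standing in for any fixed-pipeline charting tool; real software like BioRender or GraphPad is far richer): a deterministic rendering path maps the same data to a byte-identical artifact, so anyone can re-run it and verify the figure against the raw data, a guarantee a generative model does not offer.

    ```python
    import hashlib

    def render_scatter_svg(points, size=100):
        """Render (x, y) points as an SVG string via a fixed, known path.

        Hypothetical stand-in for a deterministic charting pipeline."""
        circles = "".join(
            f'<circle cx="{x}" cy="{size - y}" r="2"/>' for x, y in sorted(points)
        )
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{size}" height="{size}">{circles}</svg>')

    data = [(10, 20), (35, 60), (80, 45)]

    # Deterministic: identical input always yields an identical artifact,
    # so the figure's provenance can be checked by re-running the pipeline.
    h1 = hashlib.sha256(render_scatter_svg(data).encode()).hexdigest()
    h2 = hashlib.sha256(render_scatter_svg(data).encode()).hexdigest()
    assert h1 == h2

    # And any change to the underlying data is visible in the output:
    assert render_scatter_svg([(10, 21), (35, 60), (80, 45)]) != render_scatter_svg(data)
    ```

    A generative model run twice on the same prompt gives no such byte-level (or even structural) equality guarantee, which is the distinction the comment above is making.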
    If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)[reply]
    The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)[reply]
    Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)[reply]
    And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex.
    And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)[reply]
    Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)[reply]
  • Comment: when you Google search someone (at least from the Chrome browser), often the link to the Wikipedia article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)[reply]
    This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)[reply]
  • Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
    Some editors might oppose a blanket ban on all AI-generated images, while at the same time, are against using AI-generated images (created by using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)[reply]
  • No For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)[reply]
  • I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery.
    That said, AI imagery is getting good enough that it can be mistaken for a photo… so… if an AI-generated image is the only option (i.e. there is no photo available), then the caption should clearly indicate that we are using an AI-generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)[reply]
    The issue with the latter is that Wikipedia images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)[reply]
    We're here to build an encyclopedia, not to protect commercial search engine companies.
    I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)[reply]
    You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)[reply]
    As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Wikipedia and especially on their own biography. WP:BLP says the bios must be written conservatively and with regard for the subject's privacy. Some1 (talk) 18:37, 3 January 2025 (UTC)[reply]
    Once we can no longer tell the difference, what's the point in banning them? Sounds like a wolf's in sheep's clothing to me. Just because the surface appeal of fake pictures gets better, doesn't mean we should let the horse in. Cremastra (uc) 18:47, 3 January 2025 (UTC)[reply]
    If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)[reply]
    Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)[reply]
    But we can assume good faith that a human isn't blatantly copying something. We can't assume that from a generative model like Stable Diffusion, which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)[reply]
    Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)[reply]
  • Oppose. Yes. I echo my comments from the other day regarding BLP illustrations:

    What this conversation is really circling around is banning entire skillsets from contributing to Wikipedia merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Wikipedia. [...] Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
    Additionally, [when I say "entire skillsets," I am not] referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Wikipedia is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.

    lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Wikipedia has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Wikipedia via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Wikipedia. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Wikipedia. lethargilistic (talk) 15:59, 1 January 2025 (UTC)[reply]
    By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)[reply]
    I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)[reply]
    Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)[reply]
    I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)[reply]
    Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
    A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
    Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)[reply]
    So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources. My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Wikipedia. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH. Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)[reply]
    "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass off prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)[reply]
    NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)[reply]
    This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)[reply]
  • Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)[reply]
    That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)[reply]
    It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)[reply]
    That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images, which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is a possibility, not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)[reply]
  • Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune [talk] 19:05, 1 January 2025 (UTC)[reply]
  • No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)[reply]
  • No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)[reply]
    While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)[reply]
    The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)[reply]
    That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)[reply]
  • No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)[reply]
  • No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ("Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant" is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
  • If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)[reply]
    we should be steering clear of copyvio we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion.
    if people upload faked images [...] the response should be as it is now in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)[reply]
    The idea that current policies are entirely adequate is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)[reply]
    I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)[reply]
    "in other words you are saying that the problem is faked images not AI" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt.
    "at least some AI images are legally acceptable for us" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)[reply]
    Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (uc) 19:15, 2 January 2025 (UTC)[reply]
    Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)[reply]
  • No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)[reply]
    Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)[reply]
  • No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Wikipedia, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talkcontribs) 15:25, 2 January 2025 (UTC)[reply]
    To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict[ing] a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)[reply]
    If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Wikipedia. ModernDayTrilobite (talkcontribs) 19:13, 2 January 2025 (UTC)[reply]
  • No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative; if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of original research)... - Nabla (talk) 18:02, 2 January 2025 (UTC)[reply]
  • Maybe I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Wikipedia. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Wikipedia against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a group shot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought out -- Colin°Talk 18:17, 2 January 2025 (UTC)[reply]
  • No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)[reply]
  • No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)[reply]
  • No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)[reply]
  • No. Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Wikipedia link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)[reply]
  • I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)[reply]
    A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo looks like a photo but is not. DS (talk) 02:44, 3 January 2025 (UTC)[reply]
    Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)[reply]
    Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Wikipedia would accept an analog substitute like a painting, there's no reason Wikipedia shouldn't accept an equivalent painting made with digital tools, and there's no reason Wikipedia shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)[reply]
    For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Wikipedia readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)[reply]
    Wikipedia's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Wikipedia. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Wikipedia and why our opposition to these immediate proposals comes from a desire to prevent harm to Wikipedia. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)[reply]
    Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Wikipedia, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Wikipedia is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)[reply]
    Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Wikipedia when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)[reply]
    To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Wikipedia, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talkcontribs) 05:57, 3 January 2025 (UTC)[reply]
    An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Wikipedia. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Wikipedia would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)[reply]
    Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
    These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)[reply]
    Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Wikipedia because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened; the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
    At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
    Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Wikipedia saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Wikipedia to reliable sources, and the contents of Wikipedia are considered verifiable if they cohere.
    I ask again: If Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Wikipedia, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)[reply]
    "Verifiable by comparing them to a reliable source" - comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.
    "Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" - Try presenting a paraphrasing as a quotation and see what happens.
    "Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." - This basically happened, and is the origin of WP:NOTGALLERY. Wikipedia is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)[reply]
    Comparing two images and saying that one looks like the other is not "verifying" anything. Comparing text to text in a reliable source is literally the same thing.
    The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style.
    Try presenting a paraphrasing as a quotation and see what happens. Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.)
    This basically happened, and is the origin of WP:NOTGALLERY. That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Wikipedia is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)[reply]
    Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (uc) 02:44, 7 January 2025 (UTC)[reply]
    Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)[reply]
    So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Wikipedia editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)[reply]
    +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (uc) 23:18, 7 January 2025 (UTC)[reply]
    You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
    But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
    Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)[reply]
    (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Wikipedia image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)[reply]
    We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)[reply]
  • Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added then removed from his article:
    AI-generated cartoon portrait of Germán Larrea Mota-Velasco by DALL-E
    Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)[reply]
    Still no; I thought I was clear on that, but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover.
    (this isn't even a good example, it looks more like Steve Bannon)
    Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)[reply]
    Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)[reply]
    Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)[reply]
    I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)[reply]
    No, those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)[reply]
    No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talkcontribs) 05:44, 3 January 2025 (UTC)[reply]
    Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)[reply]
    No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)[reply]
    The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)[reply]
    Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)[reply]
    No, that's an even worse approach. — Masem (t) 13:24, 3 January 2025 (UTC)[reply]
    No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (uc) 15:03, 3 January 2025 (UTC)[reply]
    I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear, the AI-generated image of Germán Larrea Mota-Velasco is not recognizable as such) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)[reply]
    I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)[reply]
    No Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)[reply]
    Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)[reply]
  • Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Wikipedia is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)[reply]
  • Comment The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been modified; I've only added a 03:58, January 3, 2025: Note clarifying that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the exact same [8] as it is now, so I don't think the addition of the Note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)[reply]
  • No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • No. Wikipedia is made by and for humans. I don't want to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)[reply]
  • No. Generative AI may have its place, and it may even have a place on Wikipedia in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)[reply]
  • No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)[reply]
  • No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)[reply]
    "There's no guarantee the images will actually look like the person in question" – there is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)[reply]
  • Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)[reply]
    This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)[reply]
  • No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)[reply]
  • Yes, depending on specific case. One can use drawings by artists, even such as caricature. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking of "medical articles"... One might actually use the AI-generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC)[reply]
    This is complicated, of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and cleverer than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)[reply]
  • No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)[reply]
  • No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)[reply]
  • No Too risky for BLPs. Besides, if people want AI-generated content over editor-made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)[reply]
  • No, as AI's grasp on the Internet takes hold stronger and stronger, it's important Wikipedia, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)[reply]
  • No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as that has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creepTalk 20:19, 5 January 2025 (UTC)[reply]
  • No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)[reply]
  • No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)[reply]
  • No I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI-generated and isn't representative of anything that the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)[reply]
  • No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)[reply]
    So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)[reply]
    At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)[reply]
  • Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)[reply]
No for AI-generated BLP images Mrfoogles (talk) 21:40, 7 January 2025 (UTC)[reply]
  • No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. Daß Wölf 23:25, 7 January 2025 (UTC)[reply]
  • NoWP:NFC says that Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people. While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)[reply]
  • No, AI images should not be permitted on Wikipedia at all. Stifle (talk) 11:27, 8 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Expiration date?

"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]

  • No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)[reply]
  • An end date is a positive suggestion. Consensus systems like Wikipedia's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Wikipedia goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)[reply]
  • Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)[reply]
    Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Wikipedia should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)[reply]
  • WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)[reply]
  • No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more in the past, but certainly not all retouched elements and all generated photos available right now, even if there was a readily accessible tool or app that enabled ordinary people to reliably do so.
Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)[reply]

Technical

Parent categories

An editor has requested a change to the way we display categories in the Category: namespace. The existing system, which looks approximately like this:

does not seem intuitive. @PrimeHunter figured out how to change the existing category footer to something that makes the meaning more obvious:

and to have this only appear in the Category: namespace (i.e., will not change/screw up any articles).

Could we please get this change implemented here? It would only require copying the contents of testwiki:MediaWiki:Pagecategories to MediaWiki:Pagecategories.

WhatamIdoing (talk) 20:18, 22 January 2025 (UTC)[reply]

This sort of sounds like it would be an overall general improvement - that is not something special for only the English Wikipedia, and for only users with their interface language in en. If so, this should be requested upstream. — xaosflux Talk 01:56, 23 January 2025 (UTC)[reply]
I think it'd be better to do this locally, where it's been requested. If it seems to be a net improvement, we could always suggest it for widespread use (which would require re-translation of the string for all 300+ languages – not something that can happen quickly). WhatamIdoing (talk) 03:44, 23 January 2025 (UTC)[reply]
+1 for doing it (it's an improvement), and +1 for doing it locally (no need to wait, and can easily undo the local change if and when upstream decides to do it). DMacks (talk) 19:55, 26 January 2025 (UTC)[reply]
 Done The local customisation can be removed if/when a gerrit patch has been merged to change the message across all wikis. – SD0001 (talk) 05:35, 31 January 2025 (UTC)[reply]
Thank you WhatamIdoing (talk) 05:37, 31 January 2025 (UTC)[reply]

Invisibly populated category redirects

Can anyone work out why Category:1951 events in Europe by month, Category:2007 events in Asia by month and Category:2008 events in Asia by month are appearing in Category:Wikipedia non-empty soft redirected categories? No contents are displayed, not even delayed caches, and yet they declare themselves non-empty. Timrollpickering (talk) 12:01, 27 January 2025 (UTC)[reply]

Probably the job queue being slow to update the categorylinks, or (less likely) it having dropped some jobs. When I null-edited one of the cats, it disappeared from Category:Wikipedia non-empty soft redirected categories. Anomie 12:10, 27 January 2025 (UTC)[reply]
Is there supposed to be a job for this? Category:1951 events in Europe by month has {{Category redirect}} which tests whether the category is non-empty and should be added to Category:Wikipedia non-empty soft redirected categories. If the category is emptied without editing the category page or any template it transcludes then I wouldn't expect the wikitext of the category page to be reparsed automatically but I don't know whether it happens. PrimeHunter (talk) 13:25, 27 January 2025 (UTC)[reply]
Yes, the MediaWiki servers should be re-parsing every page periodically, but they do not do so. See T132467, a long-standing feature request from 2016. (And the related T157670.) As far as I know, a cron job needs to be set up, but it has never been followed through on. I think Wbm1058 is still running a bot on the English Wikipedia to refresh stale pages, and that this query shows the current staleness of pages by date (the maximum appears to be 88 days right now). It is not great to be dependent on a bot for this critical maintenance, and 88 days of staleness is too much. It would be great to know that pages would never be more than X hours or days stale, with X being a small number. – Jonesey95 (talk) 15:07, 27 January 2025 (UTC)[reply]
I briefly discussed this matter with a Foundation employee at Wikiconference North America in Indianapolis last October. As the English wiki continues to grow, closing in on 7 million articles, it becomes technically more and more difficult to frequently work through the entire database and refresh each and every page, whether they need to be refreshed or not (the vast majority don't). At my bot's peak performance, I had the refresh lag down to about 30 days for mainspace and 80 days for all other namespaces. After the database was restructured last year, my bots struggled to keep up and the lag times increased substantially. Only recently, they've come back down to 41 and 87 days, and the "new normal" may be 40 and 90 days, rather than 30 and 80. My bots should be considered as equivalent to that "cron job" – basically, I think, if such an internal job were set up, I doubt it would be much more efficient or timely at refreshing links than my bots are. My bots should be viewed as a stopgap; the last line of defense ensuring that a link possibly still needing to be refreshed is refreshed after 90 days, and not nine years. The path forward is to identify the links refreshed by my bot that actually needed to be refreshed, determine why they failed to get refreshed before my stopgap bot refreshed them, and then fix that issue in order to refresh them a lot more quickly than my bot refreshes them. To that end, Phabs like T132467 are helpful, and I suggest that a higher priority be placed on T132467 than T157670. I'll look closer at what needs to happen with T132467 – maybe I can develop yet another bot to address that specific issue. – wbm1058 (talk) 16:57, 27 January 2025 (UTC)[reply]
Probably worth mentioning this issue to the WMF annual plan and the community wishlist since both are open. Snævar (talk) 19:09, 29 January 2025 (UTC)[reply]
This particular category is an easy case to manage. I just ran a script to purge the cache of each member of the category, which quickly reduced the category membership from 90 to 30. Then I noticed that there were still newly-empty categories in this category, so I ran the script again, which reduced membership to 25. There were still newly-empty members, so I ran the script a third time, and that kept the membership at 25, as just as many new members arrived as my script had just purged out. Is this category always so active, or is something special happening now to make it more active than usual? I can add this operation to my bot that runs twice hourly, or maybe run it even more frequently than twice an hour; that would keep the membership more current, with a minimal number of short-term empty members. – wbm1058 (talk) 01:10, 28 January 2025 (UTC)[reply]
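The repeated-run procedure described in the comment above amounts to purging until the category's membership reaches a fixed point. A rough sketch of that loop (hypothetical names; get_members and purge stand in for the real MediaWiki API calls, which are not shown here):

```python
def purge_until_stable(get_members, purge, max_rounds=10):
    """Repeatedly purge every member of a tracking category until
    its membership stops shrinking (or max_rounds is reached).

    get_members() returns the category's current member titles;
    purge(title) triggers a link update for one page. Each purge
    round can empty more soft-redirect subcategories, which is why
    a single pass (90 -> 30 in the example above) is not enough.
    """
    previous = None
    for _ in range(max_rounds):
        members = get_members()
        if previous is not None and len(members) >= len(previous):
            break  # no further shrinkage; remaining members are genuinely non-empty
        for title in members:
            purge(title)
        previous = members
    return previous if previous is not None else get_members()
```

This mirrors the 90 → 30 → 25 → 25 sequence described above: the loop stops once a round no longer reduces the member count.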
Looks like User:JJMC89 bot III is moving a bunch of categories for Wikipedia:Categories for discussion/Speedy#Current requests, which are apparently showing up in Category:Wikipedia non-empty soft redirected categories momentarily. Anomie 01:19, 28 January 2025 (UTC)[reply]
Yes. Basically, there's an ongoing WP:CFD/S process to rename categories of the form "Date events in Foo" to "Date in Foo", that is, to remove the word "events" and one adjacent space. So for example Category:March 1979 events in North America has been moved to Category:March 1979 in North America. I think that it should have been a full CFD and not a speedy, but there you go. --Redrose64 🌹 (talk) 10:54, 28 January 2025 (UTC)[reply]
Addendum: as I typed the above, Category:March 1979 events in North America was in Category:Wikipedia non-empty soft redirected categories, and its cat page was listing March 1979 in Canada as a subcat, whereas a visit to Category:March 1979 in Canada showed the cat box containing March 1979 in North America. Visiting Category:March 1979 in North America did not list March 1979 in Canada as a subcat. I tried a WP:PURGE of all three categories, which had no effect (as I suspected it wouldn't), and then performed a WP:NULLEDIT of Category:March 1979 in Canada, which did not itself change, but it did cause both Category:March 1979 events in North America and Category:March 1979 in North America to be corrected, and the former to drop out of Category:Wikipedia non-empty soft redirected categories. --Redrose64 🌹 (talk) 11:04, 28 January 2025 (UTC)[reply]
Right, looking at Special:Log/move/JJMC89 bot III, that's the culprit. My understanding is that my "null edit" cache-purging bot enters tasks into the "job queue", or rather, usually executes its tasks nearly instantaneously, and its tasks only spend time waiting in the job queue at times when the system is particularly busy and overwhelmed by too many task requests being pushed at it simultaneously. The fact that my bot's purges are happening right away indicates to me that the page-moving software, which should be purging categories right after it moves them, isn't doing that. Search Phabricator for something like "Special:MovePage needs to purge the cache of Category: namespace pages immediately after moving them". I'm adding this to-do item to my MediaWiki core developers thread. Foundation management hasn't assigned the page-moving code to any employee's responsibilities, as I guess they're waiting for volunteer me to push myself into the role. – wbm1058 (talk) 11:23, 28 January 2025 (UTC)[reply]
In the meantime, while waiting for Special:MovePage code fixes, maybe User:JJMC89 could enhance his bot to make it purge each category page right after it moves the category. Updating bot code is magnitudes easier than updating MediaWiki code. – wbm1058 (talk) 11:43, 28 January 2025 (UTC)[reply]
Looking at the timestamps of Redrose64's example, the category really was non-empty for a few seconds.
So for about 6 seconds from 23:41:02 to 23:41:08, Category:March 1979 events in North America really was a non-empty soft redirected category. Based on the mw.categorize entries in recentchanges, it looks like all three of the above edits did immediately update the category links. What didn't happen immediately is the re-parsing of Category:March 1979 events in North America to determine that it was now empty. If User:JJMC89 bot III was going to purge to have an effect here, it would have to have been after the Havana Jam edit emptied the category, not after the category was moved. Anomie 13:02, 28 January 2025 (UTC)[reply]
Oh, I see. This bot is editing at an incredibly high speed. 42 edits at 23:59, 27 January 2025, that's like an edit every 1.4 seconds, a majority of them being page moves. – wbm1058 (talk) 14:14, 28 January 2025 (UTC)[reply]
Here is the bot's edit log for the relevant time span. March 1979 events in North America-related activity seems to be co-mingled with Novels with lesbian themes-related activity. What's the algorithm here? Are two separate instances of the bot running in parallel? wbm1058 (talk) 14:14, 28 January 2025 (UTC)[reply]
There's some misunderstanding here. A purge doesn't work, it must be a WP:NULLEDIT; and doing that on the moved category isn't any good either, it needs to be performed on the category's member pages. --Redrose64 🌹 (talk) 22:12, 28 January 2025 (UTC)[reply]
@Redrose64: Indeed. I use User:RMCD bot/botclasses.php function purgeCache($page), which in turn uses mw:API:Purge with |forcerecursivelinkupdate=1, which is more or less functionally equivalent to what you call a null edit. The category's member pages are indeed categories themselves. – wbm1058 (talk) 23:06, 28 January 2025 (UTC)[reply]
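For anyone unfamiliar with the API call being described, a forcerecursivelinkupdate purge looks roughly like the following. This is a sketch in JavaScript rather than the bot's actual PHP botclasses code, and the category title is only an example:

```javascript
// Sketch of a "null edit"-style purge via the MediaWiki action API.
// forcerecursivelinkupdate also queues link-table updates for every
// page that transcludes or links the purged pages, which is what makes
// it roughly equivalent to a null edit for category membership.
function buildPurgeParams(titles) {
    return {
        action: 'purge',
        titles: titles.join('|'),
        forcerecursivelinkupdate: 1,
        format: 'json'
    };
}

var params = buildPurgeParams(['Category:March 1979 in Canada']);

// In a user script or gadget, this would be sent with mw.Api;
// guarded here so the pure part above runs anywhere.
if (typeof mw !== 'undefined') {
    new mw.Api().post(params).then(function (res) {
        console.log('purged', res);
    });
}
```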
There can be up to two instances running at the same time, one for WP:CFD/W and one for WP:CFD/W/L. This is so the large batches on CFD/W/L do not delay processing of the ones on CFD/W. Usually there is only one running since CFD/W/L is not used most of the time. — JJMC89(T·C) 08:05, 29 January 2025 (UTC)[reply]
The bot makes a follow-up edit to the category after the move. I've reordered that step to after it recategorizes the contents instead of immediately after the move. That should remove the need to purge. — JJMC89(T·C) 07:58, 29 January 2025 (UTC)[reply]
Thanks. An editor User:Gray eyes is creating category soft redirects (e.g., Category:Sports in Gdańsk, Category:Organizations based in Łódź, Category:Sports in Lublin) which are populating Category:Wikipedia non-empty soft redirected categories. I don't know why these empty soft redirects are populating the non-empty category, nor why they are being created in the first place, given that the template produces a message "Administrators: If this category name is unlikely to be entered on new pages, and all incoming links have been cleaned up, click here to delete." implying that these newly-created categories should be deleted. – wbm1058 (talk) 17:02, 29 January 2025 (UTC)[reply]
I had to use this template (Template:Sports clubs and teams in Fooland category header) to create a Category:Sports clubs and teams in Gdańsk. These categories will be automatically emptied. Gray eyes (talk) 06:22, 30 January 2025 (UTC)[reply]
OK, now there are hundreds of empty categories in Category:Wikipedia non-empty soft redirected categories. I'll add a twice-hourly purge/null-edit to my bot, to manage this issue as a stopgap, until the issue with the MediaWiki software is identified and fixed. Any time a category is removed from a page, I think a forcerecursivelinkupdate purge should be done. – wbm1058 (talk) 12:55, 30 January 2025 (UTC)[reply]

Proposal: Move User:Enterprisey/easy-brfa.js to the MediaWiki namespace

This proposal is not necessarily to turn User:Enterprisey/easy-brfa.js into a gadget, but rather to simply move it to that namespace. The idea behind this is so that people can go to Wikipedia:Bots/Requests for approval/request and just click a button, which would redirect them to that same page plus a parameter such as withJS=MediaWiki:Easy-brfa.js, allowing them to use the tool straight away without having to install it, similar to what we have at DRN. Enterprisey has expressed no objection to this idea off-wiki. JJPMaster (she/they) 01:51, 29 January 2025 (UTC)[reply]

Are you offering to maintain the script? If so, I'll move it. There's a brief earlier discussion at Wikipedia:Bots/Noticeboard/Archive 19#easy-brfa, where there weren't really any objections. – SD0001 (talk) 15:17, 29 January 2025 (UTC)[reply]
@SD0001: I don't know how I would be able to maintain it as a non-interface-admin, but if I could, then I would agree to do so. JJPMaster (she/they) 14:40, 30 January 2025 (UTC)[reply]
@Enterprisey: any comment? — xaosflux Talk 19:20, 30 January 2025 (UTC)[reply]
Sounds good to me. I appreciate that it'll be maintained :) Enterprisey (talk!) 03:39, 2 February 2025 (UTC)[reply]

UserHoverStats: Show the Edit Count and Number of Articles Created

I'm working on learning JavaScript and created a small script that will display the number of edits and articles an editor has made when they hover their mouse over an editor's name. I was wondering if anyone could give me some feedback, ideas, improvements, dire warnings etc. This was mostly a fun little coding exercise for me, so I don't know if people will find any use for it. Dr vulpes (Talk) 03:53, 29 January 2025 (UTC)[reply]

@Dr vulpes You might want to look at User:Chlod/Scripts/UserHighlighter, which has a similar hover text to show the user's groups. Might be something worth adding (minus the highlighting part). --Ahecht (TALK PAGE) 19:24, 29 January 2025 (UTC)[reply]
@Ahecht thanks I'll take a look! My background is in R so I'm still getting used to JavaScript in general. I've already found things I don't like about my script that I need to work out. Dr vulpes (Talk) 20:15, 29 January 2025 (UTC)[reply]
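As a point of reference for other script learners, the general approach such a hover script can take looks roughly like this. This is an illustrative sketch, not Dr vulpes's actual code; the function names are invented, and the "articles created" count is left as a placeholder since it needs a separate query:

```javascript
// Pure formatting helper: turn fetched stats into tooltip text.
function formatUserStats(name, editCount, articlesCreated) {
    return name + ': ' + editCount + ' edits, ' +
        articlesCreated + ' articles created';
}

// On-wiki wiring via mw.Api (list=users, usprop=editcount is a real
// API query); guarded so the pure helper above runs anywhere.
if (typeof mw !== 'undefined') {
    $('#mw-content-text a.mw-userlink').one('mouseover', function () {
        var link = this;
        var name = link.textContent;
        new mw.Api().get({
            action: 'query',
            list: 'users',
            ususers: name,
            usprop: 'editcount'
        }).then(function (res) {
            var user = res.query.users[0];
            // Articles created would need its own query; shown
            // here as a placeholder value.
            link.title = formatUserStats(name, user.editcount, '?');
        });
    });
}
```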

Cite errors

The 2024–25 Port Vale F.C. season article seems unable to recognise named references anymore when the reference name isn't in speech marks ([ref name = "quote"] works but not [ref name = quote]). I can't explain why that would be. EchetusXe 09:36, 29 January 2025 (UTC)[reply]

It looks like two pages stuck together with Pritt stick. Why are there two Reference sections, two lots of defaultsort, two lots of categories? I suspect this is the source of your problems. DuncanHill (talk) 10:51, 29 January 2025 (UTC)[reply]
It's very odd, if I try to edit the whole page I only get down to the first reflist and set of nav templates and cats. DuncanHill (talk) 11:11, 29 January 2025 (UTC)[reply]
It looks like it was a problem on a transcluded page, this edit by @SKennedy157: seems to have fixed it. DuncanHill (talk) 11:26, 29 January 2025 (UTC)[reply]
ah thank you! EchetusXe 15:43, 29 January 2025 (UTC)[reply]

Tool enabled without approval?

People here may be interested in or may shed light on Wikipedia talk:Short description#Wikimedia Apps/Team/Android/Machine Assisted Article Descriptions. Feel free to cross-post elsewhere or ping editors / WMF people if that seems useful. Fram (talk) 12:13, 29 January 2025 (UTC)[reply]

Massive, un-asked for, blanking

This has happened a few times, most recently here. I edit an article to fix an error, add my edit summary, preview, and then when I click "Publish changes" a gert lump of the article has disappeared. Edge on Win 11, Monobook. Any ideas what is happening and how to prevent it? — Preceding unsigned comment added by DuncanHill (talkcontribs) 12:16, 29 January 2025 (UTC)[reply]

Was there a delay before you published? I've sometimes done something similar when I make an edit and preview but then get distracted by real life. When I go back the publish only saves the section I'm working on. It seems associated with reloading the page (or the browser restarts) as the section isn't part of the url.  —  Jts1882 | talk  13:43, 29 January 2025 (UTC)[reply]
Maybe a slight delay, not more than a minute or two range, long enough to double-check I haven't missed or broken anything else. DuncanHill (talk) 17:55, 29 January 2025 (UTC)[reply]
I had the same thing happen at Template:SEC baseball record vs. opponent, which doesn't have sections. It was fine in the preview, but published with most of the text missing. Took a few days before Gonnym spotted the missing content. --Ahecht (TALK PAGE) 19:26, 29 January 2025 (UTC)[reply]

Watchlist query

Resolved

I keep copies of my watchlist in Notepad++; do timed entries remain active and unchanged? - FlightTime (open channel) 17:04, 29 January 2025 (UTC)[reply]

@FlightTime: Not sure I understand the question. We got Help:Watchlist#Temporarily_watching_pages and mw:Help:Watchlist_expiry. Polygnotus (talk) 18:37, 29 January 2025 (UTC)[reply]
When an entry expires it is removed from your watchlist. If you are exporting from /raw, the expiration time isn't included in the export - so if you were to clear and re-import from a text file the expiration time would be lost. — xaosflux Talk 19:55, 29 January 2025 (UTC)[reply]
@Xaosflux: Thank you. - FlightTime (open channel) 19:58, 29 January 2025 (UTC)[reply]

No deletion log for long-ago-deleted article

When I went to https://en.wikipedia.org/wiki/Resource_discovery, I was surprised to see a MediaWiki:thisisdeleted notice (View or undelete 2 deleted edits? (view logs for this page | view filter log)) but no deletion log entry, nothing like what you'll see if you visit the recently deleted https://en.wikipedia.org/wiki/Snape_kills_Dumbledore. (Sorry for external-style links, but the message there is different from the message you see on the edit screen.) Turns out that the article was deleted in 2004, when its entire content was:

{{delete}} I LOVE ALEXANDER DESPATIE

Is this normal behaviour for a page that was deleted so, so long ago and never recreated? Nyttend (talk) 20:28, 29 January 2025 (UTC)[reply]

Creation of new citation template for the U.S. Gov Damage Assessment Toolkit (DAT)

Image 1; A screenshot of the DAT, specifically showing the 2024 Greenfield tornado
Image 2; Another screenshot of the DAT, showing part of the 2011 Super Outbreak
Image 3; DAT information on a water tower hit by the 2023 Rolling Fork–Silver City tornado

The U.S. Government has a website called the Damage Assessment Toolkit (DAT). This website is an interactive map and database, where the National Oceanic and Atmospheric Administration uploads information regarding any tornado in the United States from roughly 2011 to 2025.

Note: This was directed to VPT by administrators after a decent discussion on the Wikipedia Discord Server.

Background of issue

The DAT (screenshot of it seen to the right; Image 1 & Image 2) is cited on hundreds of articles, including GAs and FAs. At several GANs/FACs, as well as on general article talk pages (and at the WikiProject Weather talk page), several users have expressed the desire for a separate citation template for the DAT. Why? Well, the screenshot to the right (Image 1) is a good example. The red line and subsequent triangles along the red line represent the U.S. government's information regarding the 2024 Greenfield tornado (92,000 page view article). The red line represents the track of the tornado and the triangles along the red line represent every "Damage Point" documented by the National Weather Service.

Each of these "Damage Point" triangles is clickable and by clicking the triangle, you can see it contains information. Image 3 to the right shows the information regarding a water tower hit by the 2023 Rolling Fork–Silver City tornado. This specific water tower is (1) actually discussed and mentioned directly in the Wikipedia article and (2) used as a photograph on the Tornadoes of 2023#March 24–27 (United States) article. In fact, that photograph is the photograph of it on the DAT. Since the DAT has photographs, the Commons has a stand-alone copyright-related template for it ({{PD-USGov-DAT}}). However, as seen in Image 3, the DAT does not just contain photographs. Specifically, information from the DAT is cited in the article including: The rating ("EF4") and the comments, "Collapsed water tower, bent just above near base, with anchoring pulled from concrete. Tank contained water, caused crater on ground impact. Potentially compromised by flying debris."

Now, why is this a problem? Editors and readers alike have to manually change the date in the top right corner of the website (Image 1, Image 2) to match the date desired. The DAT is always being updated/changed, since hundreds of tornadoes occur in the U.S. every year. Because of this, the DAT automatically shows only the last week. Everything from more than a week ago is stored and accessible, by anyone, as long as the date is changed. For example, to see the DAT information for the 2013 Moore tornado (an article with 263,000 yearly views), users need to change the date range from May 19, 2013 to May 21, 2013. After the date is changed, users have to manually zoom into the area desired. The DAT shows the entire U.S. when it is first loaded up. Once loaded, users can zoom (just like on Google Maps) into the desired area.

To see this, I recommend setting the date range from May 19, 2013 to May 21, 2013 and then zooming in on southern Oklahoma City, Oklahoma, to see the entire 2013 Moore tornado.

Due to the interactivity of the DAT, there are no "triangle-specific" or even "tornado-specific" URLs to cite; just the base DAT URL from above. This has led to some incidents of reviewers being unable to instantly verify the information and another user having to explain how they can verify it (Talk:2024 Greenfield tornado#Failed verification is an example of this issue and the subsequent discussion, where Sumanuil, a non-weather editor, was unable to verify the information in the article and another user (myself) had to explain how to verify it).

What is being requested?

Since URL-specific citations cannot be created, a citation template is being requested for it (one was even requested in the past at Wikipedia:Requested templates by Departure– in November 2024, which led nowhere).

The main things the DAT is used as a citation for on Wikipedia articles are the following:

  • Tornado Tracks
    • Tornado Length {how long was it on the ground for; distance}
    • Tornado Width {how wide was the tornado; distance}
    • Tornado Track comments {statements by the U.S. government on the tornado; press releases}
  • "Damage Points"
    • The rating of the location(s) on the Enhanced Fujita scale
    • Estimated wind speed at the location(s) {in miles-per-hour}
    • Damage Point Comments {statements by the U.S. government on the tornado; press releases}

Is there a way for a template to be made which would allow users to cite the base DAT URL, with options to specify the date, location, and the different items above? The current citation for the DAT (as seen on the Tornadoes of 2024 article) is this: [1] The Weather Event Writer (Talk Page) 18:46, 30 January 2025 (UTC)[reply]
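Purely as a hypothetical sketch of what an invocation could look like, given the items listed above (the template name and every parameter below are invented for illustration; no such template exists yet):

```
{{cite DAT
 |start-date  = May 19, 2013
 |end-date    = May 21, 2013
 |location    = Moore, Oklahoma
 |item        = Damage Point
 |rating      = EF5
 |access-date = January 20, 2024
}}
```

The idea would be for the template to expand to a citation of the base DAT URL while telling the reader which date range to set and where to zoom, addressing the verification problem described above.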

+1 - DAT is likely the most cited resource in the tornado editor community. Without a citation template, it will continue to cause confusion. Wildfireupdateman :) (talk) 16:35, 31 January 2025 (UTC)[reply]
+1, this would be extremely beneficial to the tornado-space as a whole. A few articles that use the Damage Assessment Toolkit as a reference:

I could name several more, but my point is proven. EF5 13:37, 31 January 2025 (UTC)[reply]

+1 This came up in the FAC for Belvidere Apollo Theatre collapse - the DAT is a bit of a pain to work with, and it came up for sourcing an image in the aftermath. Ideally, an established reliable source wouldn't require this much explaining to FAC reviewers, so a template is definitely needed. The FAC passed, by the way, so now we've officially got a featured article to add to the articles that would benefit from this template. Departure– (talk) 16:07, 31 January 2025 (UTC)[reply]

References

  1. ^ Branches of the National Oceanic and Atmospheric Administration; National Weather Service; National Severe Storms Laboratory (2024). "Damage Assessment Toolkit". DAT. United States Department of Commerce. Archived from the original on 2020-04-23. Retrieved 2024-01-20.

Hi, there's a new button on the IP contributions page, labeled "global contributions", that brings you to Special:GlobalContributions/whatever the IP is, and that page is broken (use Special:Contributions/127.0.0.1 as an example). I believe this is a new MediaWiki feature, as the special page does exist on Meta-Wiki (m:Special:GlobalContributions) but not yet on the English Wikipedia.

At mw:Trust and Safety Product/Temporary Accounts/Updates in section December it says

"Special:GlobalContributions will be able to display information about cross-wiki contributions from registered users, IP addresses, IP ranges, and temporary accounts in the near future. (T375632)".
Myrealnamm (💬Let's talk · 📜My work) 21:27, 30 January 2025 (UTC)[reply]

@Myrealnamm This is a known bug, see phab:T385086. The special page only exists on wikis with temporary accounts enabled. 86.23.109.101 (talk) 21:43, 30 January 2025 (UTC)[reply]
great… Thanks for the reference! Myrealnamm (💬Let's talk · 📜My work) 22:00, 30 January 2025 (UTC)[reply]
I have used MediaWiki:Nospecialpagetext to add a message to pages like Special:GlobalContributions/86.23.109.101 which is linked on Special:Contributions/86.23.109.101. PrimeHunter (talk) 00:30, 31 January 2025 (UTC)[reply]
The local page id:Special:GlobalContributions/Taylor_49 is broken since today on all wikis; it says "No such special page. You have requested an invalid special page." The global page m:Special:GlobalContributions/Taylor_49 shows something, but says "Error loading data from some wikis. These results are incomplete. It may help to try again." and the displayed information is blatantly incorrect (huge increases in page size far above my merits). This is broken not only for IPs but also for registered users. It used to work until about yesterday, thus this is NOT a new feature that "will be able to display information" but an old feature that recently stopped working. Taylor 49 (talk) 21:07, 2 February 2025 (UTC)[reply]
Special:GlobalContribs is a new feature. It is not completely developed even if it is deployed to some wikis. And by some I mean about a dozen. It is fine to say it is broken but it is by no means "old". Bugs should be expected in such a case. Izno (talk) 21:43, 2 February 2025 (UTC)[reply]
Well I confused the too prominently visible new id:Special:GlobalContributions/Taylor_49 with good old id:Special:CentralAuth/Taylor_49 that still works as it used to. Nothing is broken, just confusing. Taylor 49 (talk) 21:52, 2 February 2025 (UTC)[reply]

Help with selective transclusion

Resolved
 – Resolved. ⇌ Jake Wartenberg 18:09, 31 January 2025 (UTC)[reply]

As pointed out here, this template needs to be edited so that the floating link appears on Wikipedia:Requests for permissions, but not Template:Admin dashboard. Any takers? ⇌ Jake Wartenberg 16:00, 31 January 2025 (UTC)[reply]

Tool for listing template param usage

Is there any tool to list pages that transclude a specific template and use (i.e. do not leave it empty) a specific parameter of that template? Janhrach (talk) 19:41, 31 January 2025 (UTC)[reply]

For any template with a TemplateData section in the documentation, click the "monthly report" for this information. – Jonesey95 (talk) 20:44, 31 January 2025 (UTC)[reply]

Next steps towards OWID visualization within MediaWiki

We at Wiki Project Med have built a method to visualize Our World in Data with all material coming from Commons. You can see it functional at mdwiki:WikiProjectMed:OWID#Way_3_(current_effort).

Wondering if we can get this and this copied to EN WP so we can begin testing here.

On MDWiki you should be able to:

  • scroll through the years of data,
  • if you put your cursor over a country it should highlight and give you the name,
  • if you put your cursor over the ranges bar, it should highlight all the countries in that range,
  • if you click on a country it should pull up a graph of how data has changed in that country over time
  • if you select a region of the world it will zoom into that region

It is built from about 500 separate images. Doc James (talk · contribs · email) 05:06, 1 February 2025 (UTC)[reply]

We are working on improving functionality on mobile, as currently it is poor. Just wanting to begin testing here; it is not ready for use in mainspace. Doc James (talk · contribs · email) 05:15, 1 February 2025 (UTC)[reply]

Global watchlist (for wikis in different languages)

Hi everyone,

I was wondering: what is the status of using the GlobalWatchlist extension on Wikipedia to have a unified watchlist across different wikis (all Wikipedia, but different languages)?

It seems like there is on-going development work on the extension itself, but I am not finding anything recent on its use for Wikipedia. Is there a trail of this somewhere?

Thanks a lot in advance!

Best, Julius Schwarz (talk) 08:39, 1 February 2025 (UTC)[reply]

It's on Meta-Wiki: m:Special:GlobalWatchlist. Nardog (talk) 09:29, 1 February 2025 (UTC)[reply]
Oh that's neat, thanks a lot! Julius Schwarz (talk) 15:21, 1 February 2025 (UTC)[reply]

I'm trying to replace the map attribute in Oak Creek Canyon's Template:Infobox valley with this:

|map = {{maplink-road|from=Oak Creek (AZ).map}}

Unfortunately, when I try I get this:

Lua error in Module:Location_map at line 526: "?'\"`UNIQ--mapframe-0000000D-QINU`\"'?" is not a valid name for a location map definition

It works fine with Template:Infobox river. Any ideas? TerraFrost (talk) 17:08, 1 February 2025 (UTC)[reply]

@TerraFrost The map parameter of {{Infobox valley}} is hardcoded to use {{Location map}}; the infobox fills in the inputs of the location map template with the data from various infobox fields. The map parameter of {{Infobox river}} is much more complex and can take an image, a location map, or a mapframe based map, and therefore it can handle a {{maplink-road}} input.
The error you are seeing is due to the location map template having its parameters filled in with a half-parsed maplink-road, which causes the location map template to try to load a nonsense map title.
You need to check the template documentation - the infobox system is a bit of a mess and different templates can have the same parameter name doing different things or having different valid inputs. 86.23.109.101 (talk) 19:42, 2 February 2025 (UTC)[reply]

"this section could not be found" notifiactions

I've been getting those little pop-up notifications saying a section cannot be found when saving the creation of said section. That seems ....a little off. Beeblebrox Beebletalks 22:51, 1 February 2025 (UTC)[reply]

@Beeblebrox: It's because you're using templates in section headings. These are never a good idea. --Redrose64 🌹 (talk) 23:17, 1 February 2025 (UTC)[reply]
Huh. I knew you aren't supposed to do that in articles, but I guess I thought it was just an MOS thing, not a technical issue. Thanks for the reply. Beeblebrox Beebletalks 18:26, 2 February 2025 (UTC)[reply]

Unused categories script no longer functioning

Hi, I've been using the script User:Qwerfjkl/scripts/unusedCategories.js for almost three years without issue. A few days ago, the script suddenly stopped working, but only here on my laptop at home. At my work desktop, it functions perfectly fine. Any idea what could be causing an issue on my laptop? See also User talk:Qwerfjkl#Unused categories script. plicit 00:51, 2 February 2025 (UTC)[reply]

Proposals

Transclusion of peer reviews to article talk pages

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hello,

First time posting here.

I would like to propose that peer reviews be automatically transcluded to talk pages in the same way as GAN reviews. This would make them more visible to more editors and better preserve their contents in the article/talk history. They often take a considerable amount of time and effort to complete, and the little note near the top of the talk page is very easy to overlook.

This also might (but only might!) raise awareness of the project and lead to more editors making use of this volunteer resource.

I posted this suggestion on the project talk page yesterday, but I have since realized it has less than 30 followers and gets an average of 0 views per day.

Thanks for your consideration, Patrick (talk) 23:07, 2 January 2025 (UTC)[reply]

I don't see any downsides here. voorts (talk/contributions) 01:55, 4 January 2025 (UTC)[reply]
Support; I agree with Voorts. Noting for transparency that I was neutrally notified of this discussion by Patrick Welsh. TechnoSquirrel69 (sigh) 21:04, 6 January 2025 (UTC)[reply]
So far this proposal has only support, both here and at the Peer review talk. Absent objections, is there a place we can request assistance with implementation? I have no idea how to do this. Thanks! --Patrick (talk) 17:23, 13 January 2025 (UTC)[reply]
It might be useful to have a bot transclude the reviews automatically like ChristieBot does for GAN reviews. AnomieBOT already does some maintenance tasks for PR so, Anomie, would this task be a doable addition to its responsibilities? Apart from that, I don't think any other changes need to be made except to selectively hide or display elements on the review pages with <noinclude>...</noinclude> or <includeonly>...</includeonly> tags. TechnoSquirrel69 (sigh) 17:28, 13 January 2025 (UTC)[reply]
Since ChristieBot already does the exact same thing for GAN reviews, it might be easier for Mike Christie to do the same for peer reviews than for me to write AnomieBOT code to do the same thing. If he doesn't want to, then I'll take a look. Anomie 22:41, 13 January 2025 (UTC)[reply]
I don't have any objection in principle, but I don't think it's anything I could get to soon -- I think it would be months at least. I have a list of things I'd like to do with ChristieBot that I'm already not getting to. Mike Christie (talk - contribs - library) 22:54, 13 January 2025 (UTC)[reply]
I took a look and posted some questions at Wikipedia talk:Peer review. Anomie 16:14, 18 January 2025 (UTC)[reply]
Support -- seems like a good idea to me. Talk pages are for showing how people have discussed the article, including peer review. Mrfoogles (talk) 20:51, 23 January 2025 (UTC)[reply]

Support. This would be very, very helpful for drafts, so discussions can be made in the Talk pages to explain a problem with a draft in more detail rather than only showing the generic reason boxes. Hinothi1 (talk) 12:56, 18 January 2025 (UTC)[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Good Article visibility

I think it would be a good idea to workshop a better way to show off our Good, A-class and Featured articles (or even B-class too), and especially in the mobile version, where there is nothing. At present, GA icons appear on the web browser, but this is it. I think we could and should be doing more. Wikipedia is an expansive project where page quality varies considerably, but most casual readers who do not venture onto talk pages will likely not even be aware of the granular class-based grading system. The only visible and meaningful distinction for many readers, especially mobile users, will be those articles with maintenance and cleanup tags, and those without. So we prominently and visibly flag our worst content, but do little to distinguish between our best content and more middling content. This seems like a missed opportunity, and poor publicity for the project. Many readers come to the project and can go away with bad impressions about Wikipedia if they encounter bad or biased content, or if they read something bad about the project, but we are doing less than we could to flag the good. If a reader frequents 9 C-class articles and one Good Article, they may simply go away without even noticing the better content, and conclude that Wikipedia is low quality and rudimentary. By better highlighting our articles that have reached a certain standard, we would actually better raise awareness about A) the work that still needs to be done, and B) the end results of a collaborative editing process. It could even potentially encourage readers who become aware of this distinction to become editors themselves and work on pages that do not carry this distinction when they see them. In this age of AI-augmented misinformation and short-attention spans, better flagging our best content could yield benefits, with little downside. 
It could also reinject life and vitality into the Good Article process by giving the status more tangible front-end visibility and impact, rather than largely back-end functionality. Maybe this has been suggested before. Maybe I'm barking up the wrong tree. But thoughts? Iskandar323 (talk) 15:09, 11 January 2025 (UTC)[reply]

With the big caveat that I'm very new to the GA system in general and also do not know how much technical labor this would require, this seems like a straightforwardly helpful suggestion. The green + sign on mobile (and/or some additional element) would be a genuinely positive addition to the experience for users - I think a textual element might be better so the average reader understands what the + sign means, but as it stands you're absolutely right, quality is basically impossible to ascertain on mobile for non-experts, even for articles with GA status that would have a status icon on desktop. 19h00s (talk) 16:43, 11 January 2025 (UTC)[reply]
While GA articles have been approved by at least one reviewer, there is no system of quality control for B class articles, and no system to prevent an editor from rating an article they favor as B class in order to promote or advertise it. A class articles are rare, as Military History is the only project I know of that uses that rating. Donald Albury 17:16, 11 January 2025 (UTC)[reply]
I totally agree we should be doing more. There are user scripts that change links to different colours based on quality (the one I have set up shows gold links as featured, green as GA, etc).
If you aren't logged in and on mobile, you'd have no idea an article has had a review. Lee Vilenski (talkcontribs) 20:15, 11 January 2025 (UTC)[reply]
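For readers curious how the link-colouring idea works, a minimal sketch of the approach might look like this. This is illustrative only, not any particular existing user script; the colour values and example title are invented, and on English Wikipedia the assessment data could come from the PageAssessments API:

```javascript
// Map an assessment class to a colour (values here are arbitrary
// choices for illustration).
var CLASS_COLOURS = {
    FA: '#8a6d00',  // gold-ish for Featured articles
    GA: '#008000'   // green for Good articles
};

function colourForClass(assessment) {
    return CLASS_COLOURS[assessment] || '';
}

// On-wiki, fetch assessments and recolour matching links; guarded
// so the pure mapping above runs anywhere.
if (typeof mw !== 'undefined') {
    new mw.Api().get({
        action: 'query',
        prop: 'pageassessments',
        titles: 'Example article'  // hypothetical example title
    }).then(function (res) {
        // Walk res.query.pages, look up each page's assessment
        // class, and set link.style.color = colourForClass(cls).
    });
}
```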
A discussion was held on this about two years ago and there was consensus to do something. See Wikipedia talk:Good Article proposal drive 2023#Proposal 21: Make GA status more prominent in mainspace and Wikipedia:Good Article proposal drive 2023/Feedback#Proposal 21: Make GA status more prominent in mainspace. Thebiguglyalien (talk) 04:20, 12 January 2025 (UTC)[reply]
@Thebiguglyalien: Is that feedback discussion alive, dead, or just lingering in half-life? It's not obviously archived, but has the whole page been mothballed? So basically, there's community consensus to do something, but the implementation is now the sticking point. Iskandar323 (talk) 04:57, 12 January 2025 (UTC)[reply]
Basically, most of the progress made is listed on that feedback page and the project has moved on from it. There were a few options, like the visibility one, where it was agreed upon and then didn't really go anywhere. So there are some ideas there, but we'd basically need to start fresh in terms of implementation. Thebiguglyalien (talk) 05:16, 12 January 2025 (UTC)[reply]
  • You're barking up exactly the right tree, Iskandar323. Regarding showing the icons on mobile, that's a technical issue, which is tracked at phab:T75299. I highlighted it to MMiller (WMF) when I last saw him at WCNA, but there's ultimately only so much we can push it.
    Regarding desktop, we also know the solution there: Move the GA/FA topicons directly next to the article name, as was proposed in 2021. The barrier there is more about achieving consensus — my reading of that discussion is that, while it came close, the determining factor of why it didn't ultimately pass is that some portion of editors believed (wrongly, in my view) that most readers notice/know what the GA/FA symbols mean. The best counterargument to that would be some basic user research, and while ideally that would come from the WMF, anyone could try it themselves by showing a bunch of non-Wikipedian friends GAs/FAs and asking if they notice the symbols and know what they mean. Once we have that, the next step would be running another RfC that'd hopefully have a better chance of passing. Sdkbtalk 06:50, 12 January 2025 (UTC)[reply]
    It's great that I've got the right tree, since I think that's a village pump first for me. It seems that the proposer of that original 2021 discussion already did some basic research. Intuitively, it also seems just obvious that an icon tucked away in the corner, often alongside the padlocks indicating permission restrictions, is not a high-visibility location. Another good piece of final feedback in the GA project discussion mentioned earlier up this thread by TBUA is that the tooltip could also be improved to say something more substantial and explanatory than simply "this is a good article". On the subject of the mobile version and the level of priority we should be assigning to it, we already know that per WP:MOBILE, 65% of users access the platform via mobile, which, assuming a roughly even spread of editors and non-editors, implies that 2/3 of contemporary casual visitors to the site likely have no idea about the page rating system. Iskandar323 (talk) 07:31, 12 January 2025 (UTC)[reply]
    "my reading of that discussion is that, while it came close, the determining factor of why it didn't ultimately pass is that some portion of editors believed (wrongly, in my view) that most readers notice/know what the GA/FA symbols mean". This is not my reading of the discussion. To me it looks as though a major concern among opposers is that making GA/FA status more prominent for readers is likely to mislead them, either by making them think that GAs/FAs are uniformly high-quality even for those which were assessed many years ago when our standards were lower and have neither been maintained nor reassessed, or by making them more doubtful about the quality of articles which have never gone through the GA/FA process but are nonetheless high quality. By my count at least ten of the 15 oppose !voters cite this reason either explicitly or "per X" where X is someone else who had made this point. Caeciliusinhorto (talk) 16:18, 12 January 2025 (UTC)[reply]
    I've also encountered a fair few instances of older, lower-standard GA articles. But I think greater visibility (effectively, greater transparency) could benefit that area as well. If GA status is more prominent, it provides greater cause to review and reassess older GAs for possible quality issues. Also, most of the worst GAs I have seen have come from around 2007, so it seems like one sensible solution would be for GA status to come with a sunset clause whereby a GA review is automatically required after a decade. Maybe I'm getting a little sidetracked there, but this sort of concern is also exactly what I mean by greater visibility potentially reinjecting life and vitality into the process. Iskandar323 (talk) 17:15, 12 January 2025 (UTC)[reply]
    I think you're right about that being the most major source of opposition, but most major is different from determining — I don't think those !voters will be open to persuasion unless the quality of GAs/FAs improves (which, to be fair, it definitely has somewhat since 2021). But the "they already know" !voters might be more persuadable swing !voters, and it would have passed with their support. Sdkbtalk 19:02, 12 January 2025 (UTC)[reply]
    @Sdkb: So, is there any way to poke the mobile issue a little harder with a stick? And do you think it is worth re-running the 2021 proposal or a version of it? What format should such a discussion take? Is there a formal template for making a proposal more RFC-like? Iskandar323 (talk) 12:59, 20 January 2025 (UTC)[reply]
    @Sdkb: I see that it got moved to "Incoming" after you flagged it to Miller, but then it got sent back to the "Freezer", and yesterday shunted altogether: Per the web team's quarterly grooming, these tasks are being removed from the team's backlog. Iskandar323 (talk) 12:46, 24 January 2025 (UTC)[reply]
    @MMiller (WMF) and @Jdlrobson, can you explain? Sdkbtalk 00:32, 25 January 2025 (UTC)[reply]
    I think that's a fair reading of the discussion. But, I suppose the best way to be more transparent is to tell a user that it has been rated GA after a peer review, but that doesn't mean that the article is perfect... Which is what GA (and FAs) also say. Lee Vilenski (talkcontribs) 19:54, 12 January 2025 (UTC)[reply]
  • My radical proposal would be to get rid of the whole WP:GA system (which always came across to me as a watered-down version of WP:FA). Some1 (talk) 16:31, 12 January 2025 (UTC)[reply]
    Why? TompaDompa (talk) 16:38, 12 January 2025 (UTC)[reply]
    It is a watered-down process from an FA, but it is also the first rung on the ladder for some form of peer-review and a basic indicator of quality. Not every subject has the quality sources, let alone a volunteer dedicated enough, to take it straight from B-class to Featured Article. Iskandar323 (talk) 17:17, 12 January 2025 (UTC)[reply]
    That's literally the point of it. Lee Vilenski (talkcontribs) 19:52, 12 January 2025 (UTC)[reply]

Replace abbreviated forms of Template:Use mdy dates with full name

I propose that most[a] transclusions of redirects to {{Use mdy dates}} and {{Use dmy dates}} be replaced by bots with the full template name.

Part of the purpose of {{Use mdy dates}} is to indicate to editors what they should do. Thus, readability is important. I propose all of these redirects be replaced with their target, which is:

  1. More easily understood even the first time you see it.
  2. Standardized, and thus easier to quickly scan and read.

The specific existing redirects that I suggest replacing are:

  1. ^ I would probably leave alone the redirects that differ only in case, namely {{Use MDY dates}} and {{Use DMY dates}}, which are sufficiently readable for my concerns.

Daask (talk) 20:30, 18 January 2025 (UTC)[reply]

In principle I like this idea (noting my suggestion to bring it here). My only concern would be about watchlist spam, given that, while this may not technically be a cosmetic edit, it's only a hair above one. But there's only a few thousand transclusions of these redirects, so if the bot goes at a rate of, say, one per minute, it'd be done in a few days. -- Tamzin[cetacean needed] (they|xe|🤷) 21:09, 18 January 2025 (UTC)[reply]
It looks like most or all of these are already listed at Wikipedia:AutoWikiBrowser/Template redirects, so whenever anyone edits an article with AWB, they'll already be replaced. No strong view about doing so preemptively.
However, if our goal is to ensure that these templates are actually meaningfully used, then we have some bigger fish to fry. First of all, even the written-out form isn't sufficiently readable/noticeable — many newcomers may not know what it means, and many experienced editors may miss it if they aren't happening to look at the top of the article. Ideally, we would either offer to correct the date format if anyone enters the incorrect one via mw:Edit check (task) or we'd include it in an editnotice of some sort.
Second of all, roughly 2/3 of all articles still don't have a date tag, so we need to figure out better strategies for tagging en masse. There are surely some definable groups of articles that are best suited to a particular format (e.g. I'd think all U.S. municipality articles would want to use MDY) that we could agree on and then bulk tag. Sdkbtalk 21:50, 18 January 2025 (UTC)[reply]
"Ideally, we would either offer to correct the date format if anyone enters the incorrect one via mw:Edit check (task) or we'd include it in an editnotice of some sort."
This could also feasibly be done with a regex edit filter, which is better than Edit check in that specific case as the latter doesn't work with the source editor as far as I know. Chaotic Enby (talk · contribs) 07:01, 20 January 2025 (UTC)[reply]
However it's done technically, it will need human supervision as some instances shouldn't be changed, e.g. in quotes and the titles of sources. Thryduulf (talk) 07:08, 20 January 2025 (UTC)[reply]
A filter could only flag an issue, not fix it. And any time a user gets a warning screen when they click "publish", there is a significant chance they will abandon their edit out of confusion or frustration, so we should not be doing that for a relatively minor issue like date format. -- Tamzin[cetacean needed] (they|xe|🤷) 07:11, 20 January 2025 (UTC)[reply]
I do believe that just flagging it would be better than giving an explicit warning (that might scare the user) or automatically fixing it (which, like Thryduulf mentioned, might not be optimal for direct quotes and the likes). Chaotic Enby (talk · contribs) 07:17, 20 January 2025 (UTC)[reply]
Concur with Tamzin — the main point of Edit Check is to introduce an option to alert an editor of something without requiring a post-edit warning screen, which is all edit filters can do. The ideal form would be a combo of a flag and an automatic fix — for instance, dates not detected to be within quotes would be highlighted, and clicking on one would say "this article uses the MDY date format; would you like to switch to that?", with "learn more" and "convert" options. Sdkbtalk 16:38, 20 January 2025 (UTC)[reply]
That could be great indeed! Chaotic Enby (talk · contribs) 22:14, 20 January 2025 (UTC)[reply]
Courtesy pinging @PPelberg (WMF) of the Edit Check team, btw, just in case you have anything to add. Sdkbtalk 05:11, 21 January 2025 (UTC)[reply]
To be doubly sure I'm accurately understanding the behavior y'all are trying to promote, @Sdkb, can you please give the below a read and share what I might be missing/misunderstanding?
Many Wikipedia articles include templates that specify the format that dates present within them are written in. To increase the likelihood that people follow these guidelines (on a per article basis), we propose that when people fail to format dates in the way the consensus specifies, Edit Check presents them with a suggestion that invites them to convert the date they've written into the desired format.
And hey, I'm glad you pinged, @Sdkb! PPelberg (WMF) (talk) 21:59, 24 January 2025 (UTC)[reply]
Yes, that's correct! And no problem! Sdkbtalk 23:42, 24 January 2025 (UTC)[reply]
Excellent – documented! PPelberg (WMF) (talk) 22:01, 27 January 2025 (UTC)[reply]
It's definitely a cosmetic edit, in that it only changes the wikitext without changing anything readers see. But consensus can decide that any particular cosmetic edit should be done by bots. As proposed, there are currently 2089 transclusions of these redirects, 1983 in mainspace. Anomie 14:21, 19 January 2025 (UTC)[reply]
Agree with this. Also regarding "many newcomers may not know what it means" (in reference to the full template names): as a reminder, we do have to opt in to display maintenance categories, many of which are far less scrutable to the uninitiated. Categories can be clicked on for explanation.
As to the proposal itself, I don't really see the value in bypassing a bunch of redirects. Redirects exist to be used, and there's nothing wrong with using them. Blowing up people's watchlists for this type of change seems inconsiderate.
Articles without a prescribed date format are a non-issue. There's no need to implement any standard format at every article, and I augur that an attempt to do so would create far more problems than it would solve. Folly Mox (talk) 16:15, 21 January 2025 (UTC)[reply]
It is a problem (albeit a small one) if an article has some dates MDY and others DMY or YMD, per MOS:DATERET, since it introduces inconsistency. Tagging the article with its preferred format helps retain it, so it's something we should ultimately strive for (particularly at GAs/FAs, but also in applicable categories as I suggested above). Sdkbtalk 17:14, 21 January 2025 (UTC)[reply]
Knowing how much each redirect is transcluded, relative to its most-used cousins, would be a valuable point to include in this discussion.
The more valuable change with respect to these templates would recognize that they're clearly metadata. It would be great if we could move them over to mediawikiwiki:MCR, though IDK how much effort it would take to get that done. (And perhaps along with the settings for citations and English variety.) Izno (talk) 22:32, 23 January 2025 (UTC)[reply]

Forbid Moving an Article During AFD

There is currently a contentious Deletion Review, at Wikipedia:Deletion_review/Log/2025_January_19#Raegan Revord, about an article about a child actress, Raegan Revord. Some editors think that she is not biographically notable, and some editors think that she is biographically notable. There is nothing unusual about such a disagreement; that is why we have AFD to resolve the issue. What happened is that there were a draft version of her biography and a mainspace version of her biography, and that they were swapped while the AFD was in progress. Then User:Liz reviewed the AFD to attempt to close it, and concluded that it could not be closed properly, because the statements were about two different versions of the article. So Liz performed a procedural close, and said that another editor could initiate a new AFD, so that everyone could be reviewing the same article.

This post is not about that particular controversy, but about a simple change that could have avoided the controversy. The instructions on the banner template for MFD are more complete than those on the banner template for AFD. The AFD template says:

Feel free to improve the article, but do not remove this notice before the discussion is closed.

The MFD template says:

You are welcome to edit this page, but please do not blank, merge, or move it, or remove this notice, while the discussion is in progress.

Why don't we change the banner template on an article that has been nominated for deletion to say not to blank, merge, or move it until the discussion is closed? If the article should be blanked, redirected, merged, or moved, those are valid closes that should be discussed and resolved by the closer. As we have seen, even a move made in good faith, as this one clearly was, can confuse the closer, and here it did. I have also seen articles that were nominated for deletion moved in bad faith to interfere with the deletion discussion.

I made the suggestion maybe two or three years ago to add these instructions to the AFD banner, and was advised that it wasn't necessary. I didn't understand the reason then, but accepted that I was in the minority at the time. I think that this incident illustrates how this simple change would prevent such situations. Robert McClenon (talk) 06:06, 20 January 2025 (UTC)[reply]

  • Seems like a reasonable proposal. Something similar occurred at Wikipedia:Articles for deletion/2025 TikTok refugee crisis. AfD was initiated, then the article was renamed, an admin had to move it back, and now it has been renamed again while the AfD is still ongoing. Some1 (talk) 06:32, 20 January 2025 (UTC)[reply]
    • Thank you for the information, User:Some1. Both my example and yours are good-faith, but taking unilateral bold action while a community process is running confuses the community. I have also, more than once, seen bad-faith moves of articles during AFD. An editor who is probably a COI editor creates an article that is poorly sourced or promotional. A reviewer draftifies it. The originator moves it back to mainspace. Another reviewer nominates it for deletion, which is the proper next step after contested draftification. The originator then moves it back to draft space so that the AFD will be stopped. Sometimes an admin reverses the move, but sometimes this stops the discussion and leaves the page in draft space. I think that any renaming should be considered within the AFD. Robert McClenon (talk) 06:52, 20 January 2025 (UTC)[reply]
      • "Renaming" and "draftifying" may be technically the same operation, but they are quite different things. I don't mind outlawing draftify during AFD, as it pre-empts the outcome, but fixing a nontrivial typo or removing a BLP-noncompliant nickname from a page title should be done immediately by anyone who notices the problem, independent of whether the page is at AFD or not. —Kusma (talk) 09:15, 20 January 2025 (UTC)[reply]
  • Oppose. Improving an article during AfD is encouraged and we must resist anything that would make it harder. Following the proposal would have meant that cut-and-paste moves/merges would have had to happen in order to use the existing draft, making the situation more difficult to understand than a clear page swap. —Kusma (talk) 06:49, 20 January 2025 (UTC)[reply]
  • Support, the AfD deals with notability, and moving can impact the scope and thus the notability. In that specific case, during the AfD, sources from both could've been considered, as AfD is about the sources that exist rather than the current content of the article. Not sure how a merge would've made it more difficult to understand than what actually happened. Chaotic Enby (talk · contribs) 06:55, 20 January 2025 (UTC)[reply]
    • It would have hidden the actual revision history for no benefit whatsoever. —Kusma (talk) 07:25, 20 January 2025 (UTC)[reply]
      • When merging, the other article's history should be linked in the edit summary for attribution anyway. The benefit of avoiding the massive confusion for the closer (and the later deletion review) far outweighs the need for a few more clicks to find the history. Chaotic Enby (talk · contribs) 07:41, 20 January 2025 (UTC)[reply]
        • If people are discussing version A before 13 January and version B after 13 January, this may result in confusion for the closer. But the confusion arises from people discussing two different versions of the article. I am all for clearly stating in the AFD when anything like moving or merging has happened, but outlawing moves is not solving the unsolvable problem that articles can change during an AFD. —Kusma (talk) 09:11, 20 January 2025 (UTC)[reply]
Comment: In this case the AFD was closed when the content swap happened. And since people already before it made sure to say "please do include the draft text in your consideration" the swap was universally welcomed (both !deleters and !keepers agreed the draft article was far superior to the mainspace article). I would argue nobody was confused that the swap had happened when the AFD was reopened. (The impetus for this proposal is based on wrong facts. That doesn't mean the proposal in itself is bad, so do carry on) Cheers CapnZapp (talk) 12:27, 27 January 2025 (UTC)[reply]
  • Inclined to support as a draft swap seems rare, and seems somewhat at odds with the stated principle that AfD is about notability, which would not differ between a mainspace article and a draft article. In situations when there is a draft, the AfD could come to consensus to use the draft, or to keep on the topic and the draft can be moved in post-AfD. That said, regarding blanking, I have seen articles at least partially blanked due to BLP or copyright concerns. Those seem correct actions to take even during an AfD, and I suspect other instances of blanking are rare enough, and likely to be reverted if disruptive. CMD (talk) 09:31, 20 January 2025 (UTC)[reply]
  • Weak oppose forbidding the kind of move made here. We encourage improving an article during the AFD, and separately it is often said during AFDs that an article should be TNT'ed and started over. Replacing the article with a new version, whether through moving a draft or simply rewriting it in place, is a valid (if ham-handed) attempt to do both of those things to save an article from deletion. Support forbidding moving the article to a new title with no content changes, as that could be disruptive (you'd have to move the AFD for one, and what if it gets reverted?). Pinguinn 🐧 10:57, 20 January 2025 (UTC)[reply]
    • You do not have to move the AFD (and you should not, it is unnecessary and causes extra work). All you need is to make a note on the AFD what the new page title is. Of course you should almost never suppress the redirect while moving a page that is at AFD. —Kusma (talk) 14:06, 20 January 2025 (UTC)[reply]
  • @Robert McClenon Look at the timeline again, in the Revord case it did not happen while the AFD was in progress. The swapping happened while the afd was closed keep. The afd was then reopened. Gråbergs Gråa Sång (talk) 10:58, 20 January 2025 (UTC)[reply]
  • I can see the benefit of forbidding moving between namespaces, but this proposal would also catch simple renames. I've seen plenty of deletion discussions for articles with simple typos or spacing errors in their titles, where the nominating user has not corrected things before nominating. We should not forbid moving them to the correct title. Phil Bridger (talk) 13:49, 20 January 2025 (UTC)[reply]
  • I don't see the benefit of retaining poorly worded article titles for seven days or more. I'd support a prohibition on moving between namespaces during an AfD, but not on all renaming.
  • This could actually cause an issue if someone were to move an article to a title that someone else wants to move a different article to (in the case of an obvious PRIMARY TOPIC/Dab change). Lee Vilenski (talkcontribs) 14:57, 20 January 2025 (UTC)[reply]
  • Oppose There are some rare cases where this is a problem, but in many or most of the cases I have seen it is helpful. In the given example, let's say the move was disallowed and the article was deleted. Now wait a few weeks and make the article again with the new content. People will complain no matter what. You've got to be reasonable. If there was a major effort to redo the article it should be discussed during the AfD. -- GreenC 18:27, 20 January 2025 (UTC)[reply]
  • Based on the comments above I think the best we can get will be a policy that requires any change of title be clearly and explicitly noted in an AfD, supplemented by a guideline that discourages controversial and potentially controversial changes in title while discussion is ongoing. Any change that would alter the scope of the article or which has been rejected by discussion participants (or rejected previously) is potentially controversial. On the other hand, a suggested change that has significant support and no significant objection among discussion participants is usually going to be uncontroversial. Thryduulf (talk) 19:02, 20 January 2025 (UTC)[reply]
  • How about we limit such moves to admins? If there is an overriding good reason to move a page as part of editing and improvement of the encyclopedia, it should be movable. BD2412 T 22:20, 20 January 2025 (UTC)[reply]
    • Not sure that restricting editorial/content choices to the discretion of admins is a good thing. While it will definitely help in case of overriding good reason, it also means an individual admin can enforce a potentially controversial choice of page title for their own reasons, and can't be reverted by another editor. And, of course, there's the wheel-warring aspect to that.
      An alternative could be to limit such moves to closing the discussion with a consensus to move – that way, we still limit spurious moves even more, but the editorial choices are still made by the community. Chaotic Enby (talk · contribs) 22:29, 20 January 2025 (UTC)[reply]
    • Would the described swap be possible without special tools? I know that the title of this thread is "move", but that was more than a move (and much harder, or impossible, for a regular editor to undo). North8000 (talk) 22:34, 20 January 2025 (UTC)[reply]
  • Comment. I would be chary of preventing this completely. There are quite a few cases where it rapidly emerges that the article is clearly at the wrong title (eg a transliteration error or a woman who exclusively publishes under another form of her name) so that the results of searches for sources are completely different between the two titles; moving the article even mid-AfD might be a good response in such cases. Espresso Addict (talk) 05:33, 21 January 2025 (UTC)[reply]
    • I note that the text of the AfD notice used to read "Feel free to improve the article, but this notice must not be removed until the discussion is closed, and the article must not be blanked. For more information, particularly on merging or moving the article during the discussion, read the guide to deletion." until it was shortened in March 2021 by Kusma and then further shortened by Joe Roe in October 2023. Espresso Addict (talk) 05:47, 21 January 2025 (UTC)[reply]
      • If you can find a concise replacement for the text that actually gives pertinent information, please do edit the notice. —Kusma (talk) 08:31, 21 January 2025 (UTC)[reply]
        • I think sometimes clarity is more important than concision. Espresso Addict (talk) 09:44, 21 January 2025 (UTC)[reply]
          • If the text is restored, the guide to deletion should feature the promised information more prominently. —Kusma (talk) 10:02, 21 January 2025 (UTC)[reply]
            • Given that the current basis for the recommendation against moving is the relatively weak wording in WP:AFDEQ (While there is no prohibition against moving an article while an AfD or deletion review discussion is in progress, editors considering doing so should realize such a move can confuse the discussion greatly), highlighting this specifically in the template seems out of proportion. Perhaps we could revisit that if the consensus here is to strengthen the guidance, which would also allow us to be more concise (i.e. "do not move this page"). – Joe (talk) 18:37, 21 January 2025 (UTC)[reply]
  • Oppose. Moving an article to a new title can be confusing during an AfD, but otherwise good edits are good edits. In particular, rewrites or replacements by drafts to address concerns raised in the discussion shouldn't wait, because they can make clear that a reasonable article can be (because it has been) created. Eluchil404 (talk) 06:09, 21 January 2025 (UTC)[reply]
  • Weak support I think this should be formally discouraged, but I don't think we should ban it entirely. Certainly some moves during an AfD may be tendentious. SportingFlyer T·C 06:11, 21 January 2025 (UTC)[reply]
  • Strong support This has been a problem for years. The solution is simple, there is no requirement to make such moves during an AfD duration, there is no downside to this proposal. Andy Dingley (talk) 19:30, 21 January 2025 (UTC)[reply]
  • Oppose as a blanket rule, and strongly oppose this wording. Even if it is not intended as a blanket rule, and even if there are "obvious exceptions" as detailed above, wording like this will cause people to interpret it as one even when those "obvious exceptions" apply. "Well damn looks like the New York Times just reported that the shooting of Dudey McDuderson was a hoax, but sorry, we can't fix the title, template says so." (Example chosen since it's a plausible WP:NOTNEWS AfD.) Gnomingstuff (talk) 19:46, 21 January 2025 (UTC)[reply]
    • If it's that clear and obvious that something needs to be fixed, then obtain consensus for it at the AfD (and if you can't, then it's not "clear and obvious"), speedy resolve it (close and re-open as needed, or even some sort of partial consensus for one aspect) and then do it. But we still can't do renames when we don't yet have agreement as to need and new target. Andy Dingley (talk) 20:12, 21 January 2025 (UTC)[reply]
      • What I am saying is that wording like "please do not blank, merge, or move it, or remove this notice, while the discussion is in progress" will result in people arguing "the template says don't move it so don't move it, no exceptions allowed." Gnomingstuff (talk) 00:08, 22 January 2025 (UTC)[reply]
        • The problem is less moving things during an AfD than moving them unilaterally, without consensus. We can surely demonstrate consensus during an AfD, or quickly, in order to resolve and close it, if it's that clear. Andy Dingley (talk) 12:03, 22 January 2025 (UTC)[reply]
          • Yes, I agree. But that's not what the proposed wording says. The wording proposed is "please do not blank, merge, or move it, or remove this notice, while the discussion is in progress" (bolding mine), not "please do not move it unless you have consensus or it's obvious." There are no carve-outs in the wording and so the wording is bad. Gnomingstuff (talk) 01:37, 26 January 2025 (UTC)[reply]
  • Oppose (except as to unilateral draftification). Renaming should be left to editors' judgment. This includes their judgment of whether the new name is likely to be controversial, or whether any past or present discussion is actually related to the new name and shows opposition to it. In other words, ordinary principles of WP:BOLDMOVE apply. There should not be a general prohibition or consensus-in-advance requirement, nor should editors revert moves solely "procedurally" because of AFD. (Editors can of course revert if they disagree on the merits of the name.) Reader-facing improvement efforts should not be held back by an overriding concern for internal administrators' confusion. That's getting priorities backward. Adumbrativus (talk) 01:21, 22 January 2025 (UTC)[reply]
  • Hard cases make bad law. I don't know if that's always true, actually, but this discussion does strike me as an overreaction to an extremely unusual set of facts. --Trovatore (talk) 04:44, 22 January 2025 (UTC)[reply]
  • Qualified support. I generally agree that editors who move pages in the middle of a consensus-building discussion should be trouted, but there can be good reasons to perform such a move. We should prohibit only controversial moves during an AfD/RM/etc., while allowing uncontroversial moves. This is the same framework used at WP:RM/TR, where it works well. Toadspike [Talk] 09:13, 24 January 2025 (UTC)[reply]
  • Definitely and most emphatically not. I have addressed deletion concerns by renaming and refactoring articles during discussion many times over the past two and a bit decades, and as the person who wrote much of the guidelines on this stuff, including helping on the AFD notices, I can report that it is perfectly fine and a practice that has worked for decades. It has happened many times, entirely uncontroversially, and not just when done by me (although I've often wikignomed the links in the AFD discussions to follow the page moves, as people forget that part).

    Years ago, I used to put a horizontal rule and a small note when doing rewrites and refactors, to mark the point in the discussion where the article changed. But I reduced that to just the horizontal rule, and sometimes stopped doing even that, when people started recognizing my name and stopped checking things for themselves once my name came up. (The name recognition might have dropped off a bit, nowadays. I could probably get away with the horizontal rules again.) I recommend that others simply mark where a rewrite or a page move has happened, in the discussion. I didn't have trouble with that before the name recognition set in, and you probably will not either. And please make my life (on AFD patrol) and those of closing administrators easier by remembering to fix the page title links in the discussions, because the real problem for closing administrators is which edit history to delete or not delete, which remains clear as long as the discussion continues to point to the same edit history that it pointed to at nomination. Edit histories are not titles, and it is the edit history that gets deleted with the administrator deletion tool.

    Uncle G (talk) 09:39, 26 January 2025 (UTC)[reply]

Proposal to prohibit the creation of new "T:" pseudo-namespace redirects without prior consensus

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Around this time last year, in 2024, Phabricator ticket T363757 created a brand-new alias for the template namespace. From that point on, it has been possible to get to any template by prepending the letters "TM:" to a search. If I want to reach the centralized discussion template, I can type TM:CENT and it works like a charm, for any template on the site. Back in the day, though, typing eight characters to reach a page became somewhat exhausting, especially for titles that needed to be navigated to frequently. As a helpful tool, a pseudo-namespace called "T:" was deployed to let people quickly reach pages in the template namespace. (Never mind the fact that "T" apparently ALSO stands for the talk namespace (T:MP) and the template talk namespace (T:DYKT).) Regardless, in practice, pseudo-namespaces are great tools for navigation, but they have a flaw: the software does not really support them. All pseudo-namespace redirects occupy mainspace, which means that any PNRs which exist should be maintained with care and diligence, to avoid interfering with regular readers searching for articles.

Anyway, among the four PNRs currently in use today, "T:" has by and large been the most controversial. While CAT:, P:, and H: all have some usage in different circumstances, according to WP:Shortcut#Pseudo-namespaces, "T:" titles are for "limited and specific uses only". Generally speaking, the only justification for creating a T: title is a template that sees regular use and maintenance by members of the project. If it's not a template one would need to return to on a regular basis, there's no need to occupy mainspace with a "T:" title, further obscuring genuine articles that also start with "T:", such as T:kort, T: The New York Times Style Magazine, and many others listed at Special:PrefixIndex/T:.

With regard to controversy, T: titles have been the subject of persistent RfDs since 2009, with variable results. Several RfCs have been held on pseudo-namespace redirects, including one from 2014, Wikipedia:Village pump (policy)/Archive 112#RFC: On the controversy of the pseudo-namespace shortcuts, which suggested that "new T: titles should be generally discouraged". Yet, despite the multiple RfCs and RfDs, new "T:" titles continue to crop up regardless, whether from people who misinterpret or misunderstand pseudo-namespaces, or from anyone who might not have noticed WP:Shortcut saying "T:" titles are for "limited uses only"; these are frequently monitored, and the number always grows.

In any case, with the advent of the [[TM:]] alias, there is little to no need for new "T:" titles. Shrinking a two-letter prefix into a one-letter one is not important enough, so there's really no reason to have NEW titles that start with "T:". In 2022, the "WikiProject:" pseudo-namespace was added to the disallow-list for new article titles. I don't think that "T:" as a starter should be added to such a list, but I don't think there should be any new ones of this type now that [[TM:]] is a safer alternative that works for 100% of templates and doesn't affect mainspace searches.

I propose that on WP:Shortcut, "T:" is moved to a new classification indicating that new titles should not be created without prior consensus, and/or that "new titles do not enjoy broad community support", i.e. the category that the WikiProject prefix is listed at currently. (For that matter, I think that the WikiProject prefix should be removed from Shortcuts because no pages contain that prefix anywhere on Wikipedia; at least not any from the last 3 years). I also propose that "T:" be removed from the shortlist on WP:PNR, because I feel that contributes to the creation of new T: titles, and we should not encourage the creation of T: titles when TM: now exists. Utopes (talk / cont) 22:17, 20 January 2025 (UTC)[reply]

Question: Is Special:PrefixIndex/T: all there is? I support at least a moratorium (consensus needed) for creating new T:, and also reeval existing T: in light of the new TM: alias. -- GreenC 14:45, 21 January 2025 (UTC)[reply]
Yes, that's all there is. —Cryptic 23:22, 22 January 2025 (UTC)[reply]
I would also support a moratorium outside of the DYK space. I note other main page uses are currently up for discussion at Wikipedia:Redirects for discussion/Log/2025 January 16#T:Pic of the day and etc., which would leave just DYK. Ideally if T: is deprecated, the DYK instructions would shift to TM: as well. I'll create a note at WT:DYK pointing to this proposal. CMD (talk) 15:57, 21 January 2025 (UTC)[reply]
We have managed a rapid consensus at WT:DYK to shift to TM as well, so my proposed exception is moot. CMD (talk) 01:15, 26 January 2025 (UTC)[reply]
  • Support I've always found "T:" titles confusing. In particular, I never understood why sometimes it worked (e.g. T:DYK) and sometimes it didn't (T:Cite journal). At some point I gave up trying to figure it out and just resigned myself to typing out "template" all the time (and occasionally typing "templare" by accident). I wasn't even aware that TM: existed.
    It's absurd that there should be namespaces, aliases, pseudo-namespaces, all of which have slightly different behaviors (not to mention Help:Transwiki). You should be able to understand what something is by looking at it, i.e. if it has a ":" after it, it's a namespace. So yeah, I wholeheartedly support getting rid of T. Getting rid of the existing T links may be painful, but it's pain we will endure once and be done with. That's better than continuing to have something that's inconsistent and confusing forever.
I ran into this recently when writing some code that handles matching template names. It turns out that if I give you a link foo:bar, you can't know if the "foo" part is case sensitive or not if you don't know what namespaces are configured on the particular wiki it came from. That's just stupid. RoySmith (talk) 16:25, 21 January 2025 (UTC)[reply]
PS, as a follow-up to You should be able to understand what something is by looking at it, I suggest people watch Richard Feynman's comments on this subject. When I'm seeking wonder and amazement at discovering a deeper understanding of the world around me, I can turn to quantum mechanics. I'd prefer wiki-syntax to be a bit less wondrous. RoySmith (talk) 16:49, 21 January 2025 (UTC)[reply]
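RoySmith's parsing point can be sketched with a toy resolver (hypothetical namespace list, not MediaWiki's actual title parser): whether the text before the colon in `foo:bar` is a case-insensitive namespace prefix, or just part of a case-sensitive mainspace title, depends entirely on which namespaces and aliases the particular wiki has configured.

```python
# Toy illustration only; real MediaWiki title normalization is far more involved.
# Namespaces and aliases configured on some hypothetical wiki:
CONFIGURED = {"template", "tm", "talk", "help"}

def split_title(link: str):
    """Return (namespace, page) if the prefix is configured, else (None, link)."""
    prefix, sep, rest = link.partition(":")
    if sep and prefix.lower() in CONFIGURED:
        return prefix.lower(), rest   # configured prefix: matched case-insensitively
    return None, link                 # pseudo-namespace: just a mainspace title

print(split_title("TM:CENT"))   # ('tm', 'CENT') -- alias resolves
print(split_title("tm:CENT"))   # ('tm', 'CENT') -- case does not matter
print(split_title("T:DYK"))     # (None, 'T:DYK') -- plain mainspace page
```

The same input string parses differently under a different `CONFIGURED` set, which is exactly why code handling links from arbitrary wikis cannot decide case sensitivity locally.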
Support – if we already have TM: as a perfectly functional pseudonamespace alias that automatically redirects to Template:, we don't need to encourage the use of T: which only works for hardcoded redirects and adds another level of confusion. After the moratorium, we can leave DYK some additional time to shift to TM: if needed. (edited 15:14, 22 January 2025 (UTC): mixed up alias and pseudonamespace again) Chaotic Enby (talk · contribs) 17:10, 21 January 2025 (UTC)[reply]
  • Oppose. "TM:" is not an intuitive redirect for "template", and longstanding usage - which I use frequently - is for "T:", e.g. T:ITN, T:DYK etc. If need be, we should tell the software to use "T:" universally for templates rather than "TM:". Using it for "Talk:" doesn't really make sense either, it's very rare to need a shortcut to a talk page, whereas templates are frequent targets. We should also add "TT:" for template talk. Editors drive how we work on the project, not suits at the Wikimedia Foundation.  — Amakuru (talk) 19:49, 21 January 2025 (UTC)[reply]
    Despite your claim, the decision wasn't made by suits at the Wikimedia Foundation, but by this very community here at VPP (link), where "TM:" was chosen over "T:". Chaotic Enby (talk · contribs) 20:15, 21 January 2025 (UTC)[reply]
    Even the code patch was written by an enwiki volunteer and the deployment was done by another volunteer developer lol. The claim of suits at the Wikimedia Foundation has no basis here. Literally nobody from the WMF was involved in this. Sohom (talk) 06:15, 23 January 2025 (UTC)[reply]
    And I'm not entirely sure there's any basis in the assumption the programmers at WMF are wearing suits. CapnZapp (talk) 13:01, 31 January 2025 (UTC)[reply]
    What one person finds intuitive isn't always necessarily what another person finds intuitive. But the link Chaotic Enby posted above shows there's a consensus that TM: is a suitable alias, so I don't think we should reinvigorate that debate. The question here isn't whether we like TM, it's whether we should get rid of T now that we have TM. Cremastra (talk) 20:56, 21 January 2025 (UTC)[reply]
Technically, the proposal doesn't say anything about getting rid of existing T's. It only proposes curtailing new ones. CapnZapp (talk) 13:02, 31 January 2025 (UTC)[reply]
Support. As Utopes points out, the advantage of writing "t" rather than "tm" is a single character; the cons far outweigh that. Gonnym (talk) 09:22, 22 January 2025 (UTC)[reply]
@Wbm1058: You correctly note that the number has indeed gotten smaller. The explanation for the decrease is that a minimum of 7 pages have been deleted. The exact number should not actually matter though, given that this proposal seeks to formally prevent new titles, so any such database wouldn't register an increase if they get sent to discussion as soon as they are spotted, hence this discussion. This doesn't seek to rid the existing titles that have been around. But if you would like examples of discussions that ended in deletion, see Wikipedia:Redirects for discussion/Log/2025 January 3#T:Partner, Wikipedia:Redirects for discussion/Log/2025 January 3#T:WPBIO, Wikipedia:Redirects for discussion/Log/2025 January 9#T:Uw-move3, Wikipedia:Redirects for discussion/Log/2025 January 9#T:, Wikipedia:Redirects for discussion/Log/2025 January 16#T:Pic of the day and etc., which have cumulatively deleted ten redirects, three of which were brand new since your last assessment, which is perhaps why you noticed such a decline in pages. Utopes (talk / cont) 20:03, 27 January 2025 (UTC)[reply]
I see, Utopes, thanks. You're continuing to knock off a few of these, here and there. Alas, it's difficult to legislate away the constant arrival of disruptive editors like this guy (I know we should assume good faith, but editors like these stretch the limits of my assumptions). They're not gonna bother reading whatever laws you add to the policy pages. You're still going to need to occasionally deal with them, and it seems you've been doing a good job of handling it. The goal here is another criterion for speedy deletion, so you won't need to take the trickle of new ones to RfD as they drift in? wbm1058 (talk) 22:37, 27 January 2025 (UTC)[reply]
@Wbm1058: I don't think the rate of new titles is enough to justify a speedy-deletion criterion. 😅 Currently it's averaging "one new T: title a month", which thankfully is fairly manageable. That being said though, the text of WP:Shortcut and WP:PNR I think gives a bit too much validity towards T: titles.
"T:" is currently listed front and center as one of the "four PNRs" on WP:PNR, which gives people the idea of making T: titles for their favorite templates, despite them not being necessary in the year 2025 with the advent of the TM alias. Taking T:Uw-move3 as an example, the creator said the reason they created it was because "'T:' is listed at WP:PSEUDONAMESPACE". I think that removing the mention of "T:" from guideline pages such as WP:Shortcut and WP:PNR will be helpful in discouraging editors from making these, and will help diminish the new ones specifically while we work to resolve the older titles. Utopes (talk / cont) 22:48, 27 January 2025 (UTC)[reply]
  • Support. The pseudo-namespaces overall should be deprecated, as these pages bring nothing but future technical debt, but especially in cases where the community has agreed on a similar alias, the existing pages should just be fixed to match it and ideally removed altogether. I would note that the Big WMF does not actually prohibit the T: prefix for the template namespace, as it already exists fine on the Russian Wikipedia (but we thankfully never adopted enWP's practice of solving namespace aliases with futile manual labour, so we never had to deal with something like phab:T363538). stjn 16:09, 26 January 2025 (UTC)[reply]
    They're actually more than technical debt. They're an attractive nuisance. Most things on the wiki happen by cargo culting. You see something that does what you want and you copy it. That's why new ones keep popping up. Only a very, very small number of the technical cognoscenti understand how a namespace differs from an alias differs from a pseudo-namespace. As long as there are bad examples around for people to copy, they will do so. RoySmith (talk) 21:14, 27 January 2025 (UTC)[reply]
  • Support. But the proposal is so hair-splitting I don't think we really needed a central-notice discussion to decide it. I've seen much, much more significant guideline changes get boldly implemented, in the project's history. – wbm1058 (talk) 23:38, 28 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

I have added "T:" prefix to the title blacklist, so only template editors, page movers, and admins can create new redirects with it. * Pppery * it has begun... 22:50, 31 January 2025 (UTC)[reply]

@Pppery: Why? The discussion above mostly does not discuss title blacklisting, but from those comments that mention it there seems to be a slight consensus against. Thryduulf (talk) 02:58, 1 February 2025 (UTC)[reply]
After taking a closer look at the discussion I've reverted that, but it seems absurd to me to set a rule like this and then refuse the obvious method of enforcing it - of course if T: redirects shouldn't be created it should be added to the blacklist, anything else is silly. * Pppery * it has begun... 03:03, 1 February 2025 (UTC)[reply]
I agree that blacklisting was not part of the consensus, but also agree that it's absurd :-) Perhaps some kind of filter could be created which detects and flags any new creations for human review? RoySmith (talk) 03:14, 1 February 2025 (UTC)[reply]
Not too sold on this, as the discussion showed that there were other existing T: redirects that didn't use it as a pseudonamespace, but just as the abbreviated form of a mainspace article title. Agree with @RoySmith that an edit filter could work, shouldn't be too hard to code one. Chaotic Enby (talk · contribs) 03:15, 1 February 2025 (UTC)[reply]
page_namespace == 0 &&
page_title rlike "T:.*" &&
page_age == 0
Chaotic Enby (talk · contribs) 03:26, 1 February 2025 (UTC)[reply]
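One caveat on the conditions sketched above (this is an illustration, not the filter as actually deployed): AbuseFilter's `rlike`, like an unanchored regex search, matches anywhere in the string, so a pattern such as `T:.*` would also flag mainspace titles that merely contain "T:". Anchoring with `^` limits the match to the prefix. In Python terms:

```python
import re

# re.search, like AbuseFilter's rlike, matches anywhere in the string.
# Unanchored: "T:.*" also hits titles that merely contain "T:".
assert re.search(r"T:.*", "NOT:a real namespace") is not None  # false positive
# Anchored: "^T:" only matches the prefix.
assert re.search(r"^T:", "NOT:a real namespace") is None
assert re.search(r"^T:", "T:DYK") is not None                  # intended match
```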
I'll admit to raising an eyebrow when I saw the edit to the blacklist, but didn't think it worth reverting. There's enough legitimate mainspace titles starting with T: that it made sense not to make that the alias for Template:, but not enough that we can't handle new ones with edit requests (bear in mind that blacklisting is much more permissive than salting; pagemovers and template editors can create these titles as well). What this really called for was a custom blacklist message. —Cryptic 03:32, 1 February 2025 (UTC)[reply]
That could also work. For the edit filter side, I've posted the proposal at WP:EFR so we can have more eyes on that option too. Chaotic Enby (talk · contribs) 03:34, 1 February 2025 (UTC)[reply]
I've added filter 1342 (hist · log) based on Chaotic Enby's request at EFR. Currently it very simply checks (log only) for any new page in the T: namespace. It would be fairly trivial to add a check whether it's a redirect to a template, so I'm here to seek clarification on whether that's what's really wanted, whether there should be a warn action/message, and if so whether anyone would like to write the purported message. -- zzuuzz (talk) 22:56, 1 February 2025 (UTC)[reply]
I suspect these are created infrequently enough that worrying about whether it's a redirect or not is not necessary, as long as this gets the attention of one or more humans who can look at it. If I understand the situation correctly, this will probably trigger once a month or so? RoySmith (talk) 23:08, 1 February 2025 (UTC)[reply]

Would it be a good idea to build a scraper and a bot that scrape tweets and then replace each link to a tweet with a link to a site populated with the scraped tweets? That way we don't send traffic to Twitter, or whatever it's called these days. Polygnotus (talk) 00:38, 22 January 2025 (UTC)[reply]

  • Wouldn't scraping be a copyright violation? —Jéské Couriano v^_^v threads critiques 00:48, 22 January 2025 (UTC)[reply]
    • @Jéské Couriano: I do not know (I am not a lawyer). I do know that Google's cache, the Wayback Machine, and various other services would also infringe on copyright, if that counts as copyright infringement. If the Wayback Machine can archive tweets, we could ask it to index every tweet and then remove every direct link to Twitter. Maybe meta:InternetArchiveBot can do this and we only have to supply a list of tweets and then replace the links? Polygnotus (talk) 00:52, 22 January 2025 (UTC)[reply]
  • No. Wikipedia is not the place to try to attempt to voice your concerns with Elon Musk. Unless or until the site becomes actually harmful itself, more than others (i.e. scraping user data or similar), then there is no need to replace those links. Nobody is advocating for replacing links to Reuters, which requires you to sign up for an account and accept email ads/etc. to read articles for free. -bɜ:ʳkənhɪmez | me | talk to me! 01:00, 22 January 2025 (UTC)[reply]
    • until the site becomes actually harmful itself, more than others It is already, right? WP:RGW is about WP:OR and WP:RS, so it is unclear why you linked to it and it appears to be offtopic. Reuters, which requires you to sign up for an account and accept email ads/etc. to read articles for free. It does? I have never seen that (but I am using ublock and pihole and various related tools). Polygnotus (talk) 01:05, 22 January 2025 (UTC)[reply]
      • Why should Wikipedia be concerned what websites get traffic? If it's about the political views or actions of its owner or its userbase, then that's absolutely against the spirit of "righting great wrongs" in a literal sense, even if it's not what's specifically covered in WP:RGW. Thebiguglyalien (talk) 05:00, 23 January 2025 (UTC)[reply]
        • We already do apply, and for a long time have applied, this concept of concerning ourselves about the behaviour of the target site, to external hyperlinks that lead to copyright violating WWW sites. See Project:External links#Restrictions on linking. So the question becomes one of whether we should start concerning ourselves with behaviours other than copyright violation, spam, and harassment. I think that the answer is that no, we should not start concerning ourselves with sending traffic, especially as we had that discussion years ago when MediaWiki was changed to tell WWW browsers to not automatically pre-load the targets of external hyperlinks. Rather we should concern ourselves with whether we should be hyperlinking to Twitter posts, copied elsewhere or no, at all. That is a source reliability issue, and we already have the answer at Wikipedia:Reliable sources/Perennial sources#Twitter. Given that the blue checkmark is no longer a marker of account verification, and that it is possible to impersonate people who have since left Twitter since wholly deleted account names become available for re-use, what is said there about the problems confirming the identity of the author is now an even greater factor in source unreliability than it was a few years ago. Then there's what has been pointed out in this discussion about archiving services being unable to archive much of Twitter now. Uncle G (talk) 09:56, 25 January 2025 (UTC)[reply]
  • <s>Agree that it's better not to send traffic to Twitter, but I don't know if Twitter is exactly getting a lot of traffic through Wikipedia, and in any case linking to the actual tweet (the actual source) is important.</s> Other users suggested archives. I oppose replacing links with links to a scraper, but I wouldn't oppose replacing links with links to the Internet Archive, for example -- something reputable. Mrfoogles (talk) 21:22, 22 January 2025 (UTC)[reply]
  • The disagreement of some editors with Twitter and Elon Musk do not constitute a reason for getting rid of it.--Wehwalt (talk) 22:33, 22 January 2025 (UTC)[reply]
  • Was this idea prompted by the banning of Twitter/X links by subreddits on reddit? https://www.theverge.com/2025/1/22/24349467/reddit-subreddit-x-twitter-link-bans-elon-musk-nazi-salute I'm not opposed to the idea of doing this on Wikipedia (replacing the links with an archived version of the tweets), but it does come off as somewhat like virtue signalling, considering that links to Twitter/X aren't commonly found on Wikipedia. Some1 (talk) 00:04, 23 January 2025 (UTC)[reply]
    • Personally I'm not sure it's a good idea, but I don't think it's just "virtue signaling". Obviously the effect will not be enormous, but it will help slightly (all the subreddits together, even though they're small, have some effect) and it's good to have sort of statements of principle like this, in my opinion. As long as the goal is to actually not tolerate Nazism, rather than appear to not tolerate Nazism, I don't think it's virtue signaling. Mrfoogles (talk) 20:48, 23 January 2025 (UTC)[reply]
  • @Polygnotus what is the specific reason you are suggesting this is something that should be implemented? I'm a terrible mind reader, and wouldn't want to make presumptions of your motives for you. TiggerJay(talk) 01:21, 23 January 2025 (UTC)[reply]
  • There is clear and obvious value in ensuring all {{cite twitter}} or {{cite web}} URLs have archive URLs, what with Musk's previously shortly-held opinion about the value of externally accessible URLs. Other than that, I see little reason to "switch" things. Izno (talk) 22:23, 23 January 2025 (UTC)[reply]
    • There is also the fact that for the past two and a bit years there has been a movement amongst erstwhile Twitter users to delete all of their posts. So ignoring whether the URLs become walled off behind a forced site registration, there's the fact that they might nowadays point to posts that no longer exist, the same issue that we have with link rot in general. And others have observed in this discussion that archiver services do not ameliorate this, as they have various difficulties themselves with Twitter, which they themselves report. Twitter militates against archive services. In the end, I doubt that any sort of newly grown archiving service could do better, as it would be quickly discovered and countered by Twitter as the existing ones already are. Uncle G (talk) 09:56, 25 January 2025 (UTC)[reply]
  • Most archiving services don’t work with Twitter anymore. Archive.org doesn’t and archive.is does it poorly. The only one that works consistently is GhostArchive which has been removed before over copyright concerns. For similar reasons, existing Twitter mirrors like Nitter are either defunct or broken. This would amount to removing all Twitter links then. PARAKANYAA (talk) 22:35, 23 January 2025 (UTC)[reply]
    • This however wouldn't be terrible. Simply removing all links to Twitter would be valuable for multiple content reasons in the direction of WP:WEIGHT, WP:OR, and so on. Izno (talk) 22:38, 23 January 2025 (UTC)[reply]
      • There are already tight guidelines on where and how tweets can be used in articles, and I don't think their use is any more prevalent than that of any other primary-source website. While the use of such primary sources needs to be closely monitored in any article, there are places where inclusion is appropriate and helpful, though it certainly is on the rare side of things. I would also proffer that if the main reason to prevent linking directly to Twitter is some sort of virtue signaling, we're going to get into a world of problems, as the values and moralities of people on Wiki differ greatly. Should we then drop all links to Russian websites to support Ukraine? What about when it comes down to PIA issues or other areas of great contention? These are murky waters best avoided altogether. TiggerJay(talk) 22:47, 23 January 2025 (UTC)[reply]
      • Unless you want to remove WP:ABOUTSELF broadly I don’t see the reason to apply it to Twitter instead of every other social media website there is. PARAKANYAA (talk) 22:48, 23 January 2025 (UTC)[reply]
  • Having to build and maintain our own scraping service would have high costs in terms of software engineers to build the service, then software engineers to maintain it forever. We'd also basically be reinventing the wheel since FOSS organizations like Internet Archive already do scraping. Would recommend continuing with the status quo, which is linking to Twitter, and having Internet Archive do the scraping in case the main link dies. –Novem Linguae (talk) 00:34, 24 January 2025 (UTC)[reply]
    • Note what is written above about archivers not working with Twitter. Various archiving services themselves warn about their greatly limited ability, or even outright inability, to archive Twitter nowadays. See Blumenthal, Karl. "Archiving Twitter feeds". archive-it.org. for one example, where an archive service notes that it is greatly limited to archiving only what can be seen by people without Twitter accounts. Uncle G (talk) 09:56, 25 January 2025 (UTC)[reply]
  • I think we need to be taking a harder line on citations and external links to Tweets, but not because of any recent actions by its owner. I rarely come across citations/links to tweets that aren't flagrant violations of WP:RSPTWITTER, WP:SPS, WP:ABOUTSELF and WP:TWITTER-EL. If recent events give impetus to a crackdown on overuse of tweets, I won't be opposed to it. But scraping and changing links, when there's not yet been any indication of an urgent need to do so (unlike, say, with THF), then I think that would be a bit overkill. --Grnrchst (talk) 10:36, 25 January 2025 (UTC)[reply]

Why in the world would we do this? Sure, Twitter/X is routinely not a good source, but that's because of WP:ELNO on blogs (remember, it's a micro-blogging site) and WP:RS in general, not because of some problem with the site itself. Citing a Twitter/X post by an account verified to belong to a prominent person is a great way to verify the statement "Prominent person said such-and-such on Twitter/X". Worse, it would cause major issues in places where a Twitter/X link is important to the article, e.g. Social media use by Barack Obama, which covers Obama's use of Twitter, or NJGov, which is about the official Twitter account of the state of New Jersey. For the latter item, WP:ELOFFICIAL is unquestionably applicable; it would be preposterous for an article about a Twitter account not to link the account in question. Nyttend (talk) 20:24, 29 January 2025 (UTC)[reply]

@Nyttend: If those are the best examples you can find we perhaps need to block all mentions of twitter in mainspace. Polygnotus (talk) 20:39, 29 January 2025 (UTC)[reply]
Why should an encyclopedic topic not be mentioned at all just because its CEO is an awful person? Chaotic Enby (talk · contribs) 20:55, 29 January 2025 (UTC)[reply]
@Chaotic Enby: What I mean is that those articles are not great. Polygnotus (talk) 20:58, 29 January 2025 (UTC)[reply]
NJGov is a good article. Since there aren't many articles of this sort, probably there aren't any featured articles about social media accounts or "so-and-so on social media". Nyttend (talk) 21:22, 29 January 2025 (UTC)[reply]
Agreed. It contains only 2 twitter refs, and both could be replaced with a link to an archived copy of that tweet without any problem. Polygnotus (talk) 22:02, 29 January 2025 (UTC)[reply]
What about the official URL link in the infobox and in the external links section? The only way we should serve archived pages in external links is if the official link doesn't exist anymore. Official links are exempted from many external-links requirements because they should always be included if possible. We shouldn't be imposing technical prohibitions that get in the way of such official links. Nyttend (talk) 10:25, 31 January 2025 (UTC)[reply]
Yeah I wasn't really talking about external links, only references. And a single external link on a single article is not very important. Polygnotus (talk) 13:50, 31 January 2025 (UTC)[reply]
You talked about avoiding sending traffic there, which happens when we serve an external link. And from your words it sure sounds like you're attempting to enforce a subtle non-neutral point of view. We all have our own points of view, but if you attempt to drive the site toward yours, it's not acceptable. Nyttend (talk) 09:12, 1 February 2025 (UTC)[reply]
  • No. While the site has fallen far from what it used to be, it's not serving malware or anything harmful like that which would support automatically removing all links, and replacing links to archives is problematic as already noted. It may be (likely is) collecting user data for nefarious purposes, but so do many sites we use as sources anyway, and there's only so far we can go to protect readers from the internet before we're righting great wrongs instead of making an encyclopedia. But maybe it's a good idea to add code to {{cite tweet}} so that all uses of Twitter in citations are flagged with a {{better source needed}} or {{unreliable source?}} tag, so that editors are prompted to review and replace links that are problematic? We really shouldn't be relying on Twitter or any social media as citations - if something said on Twitter needs to be used as a citation we should look for a proper reliable source quoting it, rather than linking to it directly. That's been the case since twttr first launched, but definitely more of a problem since 2021. Ivanvector (Talk/Edits) 20:57, 29 January 2025 (UTC)[reply]
    Are you sure that hate is not more harmful than malware? Polygnotus (talk) 20:59, 29 January 2025 (UTC)[reply]
    In the context of the external links guideline, absolutely yes. Wikipedia is not the Thought Police. Ivanvector (Talk/Edits) 21:04, 29 January 2025 (UTC)[reply]
  • No. We are not reddit, mercifully. — Czello (music) 13:55, 31 January 2025 (UTC)[reply]

I concur with Uncle G on the value of archiving tweets given migration out of Twitter; account deletion removes material from the record, and that is particularly unhelpful for Twitter. Two other concerns: (1) Twitter content looks very different to Twitter users than to people who don't have accounts, so an [https://x.com/CarwilBJ/status/1126300200212021255 old tweet of mine] appears in the context of a thread to signed-in users, but as a disconnected solo tweet to those who aren't logged in. This could easily generate confusion both for editors seeking to add material and for readers. (2) Numerous government accounts reset when there is a change of government, taking thousands of tweets offline. Standard practice for this case is to use an archive.

Some thoughts:

  • The Library of Congress has a complete archive of public tweets from 2006 to 2017.[9] I'm not sure if this is in a linkable format, but it is likely to endure.
  • The Chicago Manual of Style has as standard practice (18th Ed., 14.106: Citing social media content ) to cite the entire text of tweets in the bibliographic reference. We could make it Wikipedia policy to do so as well.
  • The Internet Archive still seems committed to this work; see [https://help.archive.org/help/how-to-archive-your-tweets-with-the-wayback-machine/ this page].

--Carwil (talk) 17:24, 1 February 2025 (UTC)[reply]
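Carwil's second bullet, about quoting the tweet's full text in the reference itself, is essentially what {{cite tweet}} already encourages through its |title= parameter, which holds the tweet text. A sketch with invented values (the user, ID, dates, and text are placeholders, not a real tweet), combined with the archiving practice discussed above:

```wikitext
<!-- Illustrative only: every value below is a placeholder. The |title= field
     carries the tweet's full text, Chicago-style; |archive-url= preserves it
     against account deletion or reset. -->
{{cite tweet |user=example |number=1234567890123456789 |title=Full text of the tweet, quoted verbatim in the citation itself |date=1 February 2025 |access-date=1 February 2025 |archive-url=https://web.archive.org/web/20250201000000/https://x.com/example/status/1234567890123456789 |archive-date=1 February 2025}}
```

Because the quoted text and archive link live in the citation, the reference remains verifiable even if the original tweet goes offline.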

Reviving / Reopening Informal Mediation (WP:MEDCAB)

OK, so this is a little bit of a long read, and for some, a history lesson. So, most of my time on Wikipedia, I've been involved in our dispute resolution processes, including MedCab, talk page mediation and other works. Back in June of 2011, I created the dispute resolution noticeboard, which I proposed in this discussion. I designed this as a lightweight process to make dispute resolution more accessible to both volunteers and editors, providing a clearer entry point to dispute resolution, referring disputes elsewhere where necessary.

For a time, this was quite effective at resolving content disputes. I stayed involved in DR, eventually doing a study on Wikipedia and our dispute resolution processes (WP:DRSURVEY), and out of that, at a high level, we found that too many forums for dispute resolution existed and that dispute resolution was too complex to navigate. So a few changes were made. Wikipedia:Dispute resolution requests and the associated guide were created to help editors understand the forums that existed for resolving disputes, and a few forums were closed: Wikiquette assistance was closed in 2012, and as many now found MedCab redundant to the lighter-weight DRN and formal mediation, a conversation began on the MedCab talk page and Mediation Committee talk page in favour of closing it. This is something that, as one of the coordinators of MedCab at the time, I supported. It truly was redundant to DRN, and there was some agreement at the time that more difficult cases could be referred to MedCom.

However, back in 2018, MedCom was closed as the result of a proposal here, the thought process being that it was too bureaucratic, not very active (it did not accept many cases), and of limited effectiveness. While RFCs do exist (and can be quite effective), the remaining dispute resolution forum (DRN) was never designed to handle long, complex disputes, so those had to be shifted elsewhere. This has, in some ways, required DRN to morph into a one-size-fits-all approach, with some mediations moved to talk pages (Talk:William Lane Craig/Mediation, Wikipedia:Dispute resolution noticeboard/Autism) among others. The associated complexity and shift away from its lightweight original structure and ease of providing assistance on disputes has had an impact on the number of active volunteers in dispute resolution, especially at DRN.

So, my thoughts boil down to a review of Wikipedia dispute resolution and where some sort of structured process, like mediation, could work. My initial thoughts on how content DR could work are:

  • Third opinion - content issue between 2 editors on a talk page, limited responses on the talk page by a third party
  • Dispute resolution noticeboard - Simple content disputes between editors that can generally be resolved in ~2 weeks
  • Mediation: Complex content disputes where assistance from a DR volunteer/mediator can help resolve the issues, or on occasion, frame the issues into a few cohesive proposals for further community input / consensus building
  • WP:RFC: Where broader community input is required, generally on a well defined proposal (and the proposal may have come organically, or formed as a result of another dispute resolution process)

The idea would be that DRN would be returned to its lightweight original format, which could encourage its use again (there's been feedback that DRN is now also too bureaucratic and structured, which may discourage editors and potential volunteers alike), and informal mediation (or MedCab; I'm not decided on the name at this stage) could take on the more complex issues. While RFCs have value, not every dispute is suitable for an RFC, as guidance is needed on some disputes to form cohesive proposals or build consensus. Having mediation as an option was, historically, a benefit, with many successes out of the process. I think it's time for us to consider reviving it. Steven Crossin Help resolve disputes! 09:57, 25 January 2025 (UTC)[reply]

  • Oppose. The proposal is unclear and DRN is already the dispute resolution venue based on the idea that "assistance from a DR volunteer/mediator can help resolve the issues, or on occasion, frame the issues into a few cohesive proposals for further community input / consensus building".—Alalch E. 17:23, 25 January 2025 (UTC)[reply]
    The header on DRN was changed over time with little discussion. It was originally quite barebones, and is one of the items that will be changed back to how it was originally (see User:Steven Crossin/DRNHeader for an example). I’d encourage you to read over the history of informal mediation (MedCab), as it will give some context to how it worked (it was closed quite some time ago). The proposal is to simplify DRN to its original design (lightweight, with simple processes and minimal structure) and re-establish our informal mediation process. MedCab was quite successful as a process back in the day, but DRN performs the role of complex dispute resolution poorly: a noticeboard was never going to be the best way to handle these sorts of disputes (which is why DRN was intended to be lightweight). Steven Crossin Help resolve disputes! 23:19, 25 January 2025 (UTC)[reply]
  • Support as a working mediator at DRN. This idea can be seen as defining two tracks for content disputes, a lightweight track and a somewhat more formal track for more difficult disputes. I do not really care whether we have one name for the forum for the two weights of content disputes or two names, but I think that it will be useful to recognize that some cases are simpler than others. It is true that the parties and the volunteer may not know until starting into a dispute whether it is simple or difficult, so maybe most content disputes should start off being assumed to be simple, but there should be a way of changing the handling of a dispute if or when it is seen to be complex. This proposal is a recognition that content disputes are not one size, and one-size-fits-all dispute resolution is not available. Robert McClenon (talk) 03:58, 26 January 2025 (UTC)[reply]
  • Support per Steven and Robert. My outsider's perspective of DRN is that it is very bureaucratic, but also not great at handling complex, intractable cases, especially where animosity has built up between involved editors. (Please correct me if this assessment is inaccurate.) I think it makes sense to "split" it into two venues as proposed. Toadspike [Talk] 09:48, 26 January 2025 (UTC)[reply]
    I do see that it can be perceived as a bit bureaucratic, yes, and can struggle with some more difficult disputes. It used to be much simpler and less rules-focused; ideally, re-establishing MedCab would allow DRN to return to its simple origins, perhaps even allowing DRN's simplified structure to be more conducive to new volunteers participating. A possible style of how a dispute at DRN could look, with perhaps even less structure, is Wikipedia:Dispute resolution noticeboard#Jehovah's Witnesses (which, full disclosure, is one that I handled and is an example of my style of dispute resolution). Steven Crossin Help resolve disputes! 09:58, 26 January 2025 (UTC)[reply]
    How you handled that dispute does not require a new project page for a new process. Everything can take place at the existing WP:DRN. The DRN volunteers can opt for the less or more formal process at their discretion, just like you did here. —Alalch E. 13:04, 26 January 2025 (UTC)[reply]
    No, it didn’t. This is a simple one. But disputes like Talk:William Lane Craig/Mediation and some others that were forked/moved away from previous DRN discussions would benefit from this revived forum, as a noticeboard is not conducive to dispute resolution for drawn-out, complex issues. How disputes are handled on DRN is open for interpretation by the volunteers, but there’s agreement among at least Robert and me (two of the main DRN volunteers) that having distinct dispute resolution processes for simple versus complex disputes would be of benefit. Steven Crossin Help resolve disputes! 13:10, 26 January 2025 (UTC)[reply]
    I agree that the dispute that was processed there was a complex one, but so is Wikipedia:Dispute resolution noticeboard/Autism. So, here, we are discussing two approaches to resolving complex disputes. The approach exhibited in your example is like a three-sided peer review (similar to Wikipedia:Peer review, except it isn't just the requester and the reviewer, there's also the "other side"; but the reviewer does indeed break content down sentence by sentence as in a peer review, and make editorial assessments), and the approach taken by Robert McClenon is more like a formal debate. Do you think there's something wrong with the ongoing autism dispute resolution? Can both methods not coexist as different approaches to problems of similar complexity? —Alalch E. 14:11, 26 January 2025 (UTC)[reply]
  • Oppose, I guess? Frankly, I don't think structured mediation works on Wikipedia. What I have seen work (constantly) is: 1) talk with the other editor, but not uncommonly people will just have fundamentally different views. 2) if so, advertise to a noticeboard to get more editors to weigh in. If a specialised noticeboard captures the dispute, then any of: WP:FTN, WP:NPOVN, WP:BLPN, etc; otherwise WP:3O. That seems to work for most small to medium trafficked articles. 3) Failing that, WP:RFC.
    I can't remember when I last saw a dispute that mediation resolved. I mean, here's a random DRN archive: Wikipedia:Dispute_resolution_noticeboard/Archive_252. The discussion closures are: "opened by mistake", "premature", "withdrawn by filer", "not an issue for DRN", "closed - RFC is being used", "closed - not discussed in the proper forum", "participation declined by one editor, withdrawn by filer", "closed - one participant is an IP editor with shifting IPs", "closed as abandoned", "closed due to lack of response", "filed in wrong venue", "closed - pending at ANI", "closed as pending at WP:RSN", "closed as DRN can't be helpful here", "other editor hasn't replied", "apparently resolved - one editor has disappeared", "premature", "wrong venue", ... I kid you not, I haven't skipped any sections out, I just went off the top of the archive. Given this, it's hard to seriously say that mediation works. And it sort of lines up with my anecdotal experiences: it's pretty common for editors to never really come to a compromise agreement that all parties are happy with. Ultimately, a lot of content disputes are decided by '(maybe some compromise) and majority wins' or 'one participant disappears / gives up' or 'some compromise and universal agreement'. Though, the cases where 'some compromise and universal agreement' works appears so much like a discussion that we wouldn't even call it a dispute, and I think any cases that could be successful through mediation, the editors could've just figured it out among themselves anyway. ProcrastinatingReader (talk) 18:16, 26 January 2025 (UTC)[reply]
    I wish mediation would work more effectively in more complex disagreements on English Wikipedia. Unfortunately, as I discussed elsewhere, it doesn't scale up well to handle disputes with a lot of participants (in the real world, the mediation participants are limited to representatives for the relevant positions), and it requires sustained participation over a substantial period of time, which doesn't work well with Wikipedia's volunteer nature. For mediation to work, the community has to be willing to change something in its decision-making approach to overcome these challenges. For better or worse, I don't see sufficient signs of desire to change in this manner. isaacl (talk) 18:41, 26 January 2025 (UTC)[reply]
    My limited experience with 3O is that it's a very nice idea, but very rarely actually resolves the dispute. I've handled two, and I don't think I did a particularly bad job, but this one looks like it was solved by itself/by other uninvolved editors and this one was the typical outcome, where the 3O outcome was simply not accepted and the dispute remained unresolved. Another example (not handled by me) ended up at DRN regardless. And in all of these examples, the editors were acting in good faith. Toadspike [Talk] 09:57, 27 January 2025 (UTC)[reply]
Support - Informal mediation removes the heavy bureaucracy costs of Wikipedia's processes and can lead to faster resolution. SimpleSubCubicGraph (talk) 00:32, 27 January 2025 (UTC)[reply]
I've no strong feelings about how we organize mediation, though I've recommended DRN to editors and I have participated in the current /Autism case at DRN. However, I do strongly believe that when editors say that something isn't working for them, especially when they're the main editors running the process in question, we should believe editors. If they think that splitting complex cases off into a separate process would help, then we should let them. WhatamIdoing (talk) 17:29, 27 January 2025 (UTC)[reply]
  • Ambivalent - I was very active on MedCab and briefly chaired MedCom, but that was 15 years ago. Steve and I (and of course others, but just speaking from experience) have been on-and-off, come-and-go in the intervening years. Since then we've been reduced to one long-running mediator on the whole project: Robert McClenon. So you can imagine the problem this poses if we introduce another project and it comes down to mainly Robert again. Mediating is frustrating, requires a ton of patience, and it's subject to rapid burnout. Props to him; his endurance is incredible.
But let's say that this doesn't happen, that reopening MedCab brings in a bevy of new volunteers (a big if). Now let's say there's some big dispute somewhere, and it's filed at DRN. The volunteers at DRN say "this is too big for us, file it at MedCab". OK, so we have two filings now - and these are annoying to file: there's a lot of boilerplate, and even the way you file one is different from the way you file the other. But OK fine, they're filed. Now this is a particularly difficult dispute, and one or two editors say "no, I don't want to be involved in this mediation". MedCab didn't have a policy for what to do in this situation (unlike MedCom, and that policy absolutely was its death knell), but some MedCab volunteer comes along and closes it anyway because there's no consensus for mediation. What now? Well, you could refile at DRN (a third filing) and maybe a volunteer there suggests that an RfC is maybe the way to go. So that's four filings. tbqh, this might actually resolve the dispute, from the burnout of the filer alone.
I'm marking this as ambivalent because I preferred the way MedCab handled DR. To me, a lot of this "DRN is too bureaucratic" talk is just a symptom of all of DRN being on a single page; MedCab/Com subpaged its cases, so it wasn't obvious how bureaucratic they could be. How (eg) Robert currently mediates is not very different from what a typical MedCab or (especially) MedCom case looked like; it just wasn't out in the open.
I do not see why we can't formally change DRN's mandate to include lengthier mediation, and possibly subpage those cases that are more complicated. I say "formally change" because DRN already does lengthier mediations and has done so for years, it being the only option once MedCab closed and MedCom accepted zero cases for literal years.
Anyway, in conclusion: MedCab is just DRN with subpages, and I'm not convinced this doesn't solve a seeming bureaucracy with an actual one. Xavexgoem (talk) 20:49, 27 January 2025 (UTC) I'm also marking this "ambivalent" because I like the name The Mediation Cabal. I honestly believe (don't laugh) that we'd have more volunteers not because we've made another process that better suits certain editors, but because that process would be called The Mediation Cabal. It's what drew me in, anyway.[reply]
I'd say that dispute resolution on Wikipedia can be what we as volunteers make it. Part of the idea of reviving MedCab is to give dispute resolution some distinction: make DRN for the easy stuff and informal mediation for the complex. The rationale behind this has a few parts. The perceived bureaucratic, structured nature of DRN likely hinders participation by other volunteers (I base this mostly on anecdotal feedback I've received, reviewing talk page discussions, and the fact that DRN had more volunteers historically when it was largely unstructured; while I realise that correlation doesn't always equal causation, it's a factor). This doesn't necessarily mean that editors would have to re-file at MedCab if DRN volunteers decided a case was better suited to it: early on when DRN was instituted, there was an idea that DRN volunteers could refer cases to MedCab/MedCom, minimising that work for the editors involved. Mediators can and should help editors draft an RFC if that's an intended way to resolve the dispute (and sometimes that's what mediation can be: helping participants boil down the issues into a few structured proposals for an RFC), which is something I've done in the past with good outcomes. And while mediation is voluntary, Wikipedia has always had the idea that if one editor refuses to participate or work with others to form a consensus, and then comes back and says "no, I disagree with you ten editors, I'm gonna edit war my way out of this", that becomes a conduct issue. MedCab doesn't necessarily need to have all the boilerplate it did in the past; as I said, we can make DR what we want. But I do see the value in trying to split out the processes, to allow us to emphasise the intended lightweight nature of DRN (hopefully encouraging volunteers to get involved, i.e. "that just looks like a normal noticeboard discussion, I'll chime in"), while keeping a venue for those challenging disputes, which is why I think just subpaging cases we later decide are challenging isn't the right approach. Steven Crossin Help resolve disputes! 21:05, 27 January 2025 (UTC)[reply]
Very basically, I don't think we have the resources or volunteers to spare to make this process smooth from the outset. I do not understand this desire to return to something more "ideal" -- as you had envisaged -- so far into the lifespan of this particular project.
My above comment was too wordy. I'll reiterate: MedCab is just DRN with subpages. Does that serve the initial purpose of DRN? No. Is it years and years later? Yes. Xavexgoem (talk) 06:02, 1 February 2025 (UTC)[reply]
  • Disclosure that I was invited to participate here.
    Weak oppose but support iterative improvements to DRN to make it easier to use. It's true that DRN started as a triaging process, with the secondary objective of resolving simpler disputes. Doing mediations under a separate project page might make them seem more structured/less off-the-cuff/whatever, but I'm not convinced that this is worth administering a whole separate process. Nor will removing mediation likely improve DRN. The trend has been towards consolidating processes (MedCab, MedCom, WA, and more having fallen away over the years). That should not be reversed without good reason. I do think that DRN should move mediations off the main noticeboard and onto subpages. Some of the mediations are also very difficult to follow, with threaded statements in the style of ArbCom and the adoption of rules like a tribunal's rules of procedure. The noticeboard instructions could be slimmed too. I would support those iterative improvements. I'm not absolutely opposed to starting a new mediation process – I just don't think it matters too much. Where the best result of a change is likely to be the same number of successful cases, editors volunteering to mediate, and users agreeing to participate in mediation, we should probably default to the status quo.

    I also support the underlying enthusiasm to get more users doing mediation. I am unconvinced by the argument that because something like 0/20 DRN threads show a successful mediation, we shouldn't do mediation at all. Almost no disputes will be suited to mediation; it's a niche solution for use where a dispute has not been resolved by the ordinary wiki way (which includes attrition, disinterest, or removing the bad actors). Mediation can do what perhaps only a structured community RFC can achieve, and for a fraction of the time cost. arcticocean ■ 10:21, 28 January 2025 (UTC)[reply]

    Arcticocean, thanks for your comments here (and, for disclosure to others, I notified them of this discussion to see if they were interested in providing their thoughts, due to their role as a former chair of the Mediation Committee and their involvement, on and off, in Wikipedia dispute resolution for as long as I have been). I'm not opposed to trying to see whether slimming down DRN's main structure and paring down the rules would have an impact, combined with subpaging cases we decide need mediation (or just longer disputes). I'm just not sure how to provide visibility of those disputes on the main DRN page, or how to still track their progress (at present, if we subpage a dispute, it completely disappears into the ether), and this was part of the reason why I thought splitting these two different dispute resolution styles would make the most sense. But I'm not overly fussed on the where, just the how. Do you (or others here) have any ideas on how we could implement this two-track system on the one forum? Steven Crossin Help resolve disputes! 11:12, 28 January 2025 (UTC)[reply]
    What about this, which doesn't need much to be changed…? If a dispute regarding Moon is at DRN and enters mediation, then:
    1. At WP:DRN, under the header == Moon ==, replace the noticeboard discussion with a link to the mediation page: [[/Mediation/Moon]].
    2. On the case status tracker, update the status to "in mediation".
    That would allow DRN volunteers to deliver a full mediation service where appropriate, while allowing DRN to continue functioning as a noticeboard (providing basic advice, signposting, and assistance to disputants). If I've picked you up correctly, this addresses your concerns that the noticeboard has become bloated and that delivering full mediation through it has become difficult. arcticocean ■ 11:56, 28 January 2025 (UTC)[reply]
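In wikitext terms, the replacement described in step 1 could be as simple as the following sketch (the blurb wording is invented; only the subpage convention comes from the proposal above):

```wikitext
== Moon ==
''This dispute has been referred to mediation. The discussion continues at [[/Mediation/Moon]].''
```

Because [[/Mediation/Moon]] is a relative link, it resolves to a subpage of WP:DRN itself, keeping the mediation within the noticeboard's page hierarchy and visible from the main listing.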
    See, this is why I was hoping to get your thoughts. This, I think, is a great idea. We could even have a short blurb on the /Mediation page for reference. I’ll possibly start working on some draft amendments, as I’d like the DRN bot to still be able to handle tracking them. @Robert McClenon: - do you think this approach could work? Steven Crossin Help resolve disputes! 12:36, 28 January 2025 (UTC)[reply]
    User:Steven Crossin, User:Arcticocean - I am not sure, but I think that either I do not understand the question or I agree with the idea. As I have said earlier, I do not have a strong opinion as to whether MedCab, or something similar to it, should have a separate door from DRN or be something that is entered via DRN. I think that it is important that we have a streamlined procedure for handling simple issues and a more structured procedure for handling more complex or more stubborn issues. Now: What was the question? Robert McClenon (talk) 20:31, 28 January 2025 (UTC)[reply]
    What was the question? It's "do you think this approach could work?" And the 'approach' is right above that question, in my comment (11:56, 28 January 2025 (UTC)). In short, when a DRN case gets referred to mediation, the discussion would move onto a subpage of DRN and the DRN report would be replaced with a link to the subpage. arcticocean ■ 08:35, 31 January 2025 (UTC)[reply]
  • Support Med Cab was awesome, especially in the quality of facilitators it attracted. A successful mediation generally takes the form of whittling down the issues through discussion; gathering 'evidence' that each side looks at critically and often comes to agreement on (at least as to it being decent evidence on the issue), such deep dives sometimes even changing minds(!); and constructing really useful RfCs (often with reference to evidence) through discussion and monitored drafting for what remains to be determined.
As a side benefit, if there are behavioral issues: 1) the presence of the mediator often cabins them, and 2) it regularly becomes clearer for the entire project what the problem behavior is (even if it's just failure by one side to even try to work it out in good faith). Alanscottwalker (talk) 16:34, 28 January 2025 (UTC)[reply]
  • Questions from a content editor Why would DRN now be limited to disputes of a (seemingly arbitrary) time period? You don't know how long a debate will last at the outset. Further, this solution seems like it would add another rule, another layer of complexity, to our on-wiki processes, whereas I like the simplicity of our current processes. JuxtaposedJacob (talk) | :) | he/him | 16:12, 31 January 2025 (UTC)[reply]
    I also think that arcticocean makes a good point regarding the trend being towards the consolidation of processes; this community consensus exists for good reason. JuxtaposedJacob (talk) | :) | he/him | 16:13, 31 January 2025 (UTC)[reply]
  • Why do we always immediately jump to voting? Perhaps the reason dispute resolution is difficult on Wikipedia is because our first instinct in a discussion is to create and affiliate with factions? Anyway, I think revisiting and discussing where our dispute resolution processes succeed and fail is a good first step to improving them. My thoughts fall somewhere between Whatamidoing and ProcrastinatingReader: I think we should trust editors active in an area to iterate on processes, but I worry that a solution based on more bureaucracy could create more problems than it solves. To me that suggests a trial period would be useful to get more info and keep iterating. Wug·a·po·des 18:59, 31 January 2025 (UTC)[reply]

Addressing Two Concerns

I will try to address the comments of User:Alalch E. and of User:ProcrastinatingReader separately, since they seem to have separate, almost opposite issues. In particular, one of them seems to be saying that content dispute resolution is working reasonably well and should not be changed, and the other one is saying that content dispute resolution works poorly, and is not worth improving.

First, I am not sure whether I understand the concerns of User:Alalch E., but I think that they are saying that DRN is currently where editors go when they have content disputes, and should continue to be able to go to DRN when they have content disputes. That will still be possible after MedCab is restarted. I do not have a strong opinion on whether DRN should be the front door to MedCab, or whether MedCab should have its own front door. However, DRN is able and will be able to refer disputes to appropriate forums. DRN sometimes refers issues to the Reliable Source Noticeboard if they are questions about the reliability of sources, and sometimes refers issues to the biographies of living persons noticeboard if BLP violations are the main concerns. If MedCab is a separate dispute resolution service, a DRN volunteer will be able to send a case to MedCab if it is either too complex for a lightweight process or the editors are too stubborn to use a lightweight process. I will point out that if the users are stubborn, the dispute is likely to go to an RFC after mediation. Although I close a dispute that ends with an RFC as a general close rather than as resolved, I consider the dispute resolution a success. The dispute likely would not have gone to an RFC in an orderly fashion without volunteer assistance.

Perhaps Alalch E. is saying either that a one-stop approach to resolution of content disputes will work better, or that there is no need for a two-track approach to content disputes, or that defining two tracks will interfere with dispute resolution. If so, I would be interested in the reason. My opinion is that the current one-size-fits-all approach works about as well as one size of clothing. On the one hand, some users have said that DRN is too bureaucratic. Moving the complex or difficult cases to another forum will allow DRN to be more informal. On the other hand, I have found the statement that DRN is mostly for cases that will be resolved in two to three weeks inconsistent with some of the more difficult cases that we have had. I would prefer not to have to ignore a guideline or to develop a special procedure for difficult cases, and those cases would fit better in MedCab.

Second, ProcrastinatingReader appears to be saying either that mediation does not work well in Wikipedia, or that content dispute resolution does not work well in Wikipedia. I may have misunderstood, but they seem to be saying that the state of dispute resolution in Wikipedia is so hopeless that it is not worth trying to improve. I will comment briefly that I consider some of the closures that they cite as successes rather than failures. An RFC resolves a content dispute. A referral to the Reliable Source Noticeboard resolves the question of the reliability of a source. I am aware that content dispute resolution does not always work. I think that recognizing that there are at least two tracks for content disputes, a lightweight track and a more formal track, will improve its functioning. Also, some of the disputes that were closed as not having met preconditions might have been helped if DRN were made more lightweight by transferring the responsibility for difficult cases to MedCab. I think that ProcrastinatingReader may have shown that some of those disputes could have been handled somehow if dispute resolution were improved, and I think that the two-track concept outlined here is likely to result in improvement.

These two comments appear to be almost opposite reasons for disagreeing with the plan. I have tried to address both of them. Robert McClenon (talk) 05:39, 27 January 2025 (UTC)[reply]

Thanks Robert. I'll briefly summarise my thoughts on some of the comments here. As someone that's been involved in Wikipedia content dispute resolution for over a decade, I know it's not perfect. Some disputes get logged at DRN that might be better suited for another forum, or might merit further discussion at the talk page. Some editors might decide not to participate, and that's fine - participation on Wikipedia is voluntary. DRN was never designed to be able to fix every single content dispute on Wikipedia - the original proposal was to handle lightweight content disputes, or act as traffic control for a dispute that might be better suited to somewhere like RSN or BLPN, and in my mind, that's completely fine. Does DRN close disputes a little early sometimes, where perhaps we could have helped the dispute a little better? I'm sure that's happened. But again, we're acknowledging improvements are needed and proposing change.
Mediation, both informal and formal, was never perfect either, and indeed had cases that were not successful. But it also had its successes, just as DRN does, such as this recent example that I handled, and there are others in the archives too. MedCab had its share of successes, as did MedCom. And every mediator has their own style of handling disputes: mine is often more freeform, while others implement a bit more structure. The discussion here is not a suggestion that mediation is the magic bullet that will fix all of Wikipedia's dispute resolution problems, or that DRN is a complete mess; it all needs improvement. One of the primary reasons I decided to return to Wikipedia after more than two years away is that I saw the state of dispute resolution on Wikipedia and decided to do something about it. Re-establishing informal mediation as a process would allow DRN to return to its lightweight original style, likely encouraging new, uninvolved editors to participate and volunteer, while providing the structure that's sometimes needed for more difficult content disputes that can benefit from an experienced hand to guide editors towards a consensus. As one of the people that pushed to close MedCab as a redundant process (I was one of its co-ordinators at the time), I agreed back then that it wasn't needed. But there's now a gap that I think it could fill. Heck, it could even be re-established on a trial basis. DRN was started as a one-month trial 14 years ago, and it endures today. It needs improvements. Everything on Wikipedia does. I think, with many of the dispute resolution volunteers willing to try, it's worth giving it a crack. Steven Crossin Help resolve disputes! 09:51, 27 January 2025 (UTC)[reply]
The former - that mediation doesn't work well in Wikipedia. I'm not quite a nihilist :) -- I think dispute resolution in Wikipedia is a bit counter-intuitive, but I do think it works. IME it works the way I outlined, and further I do think outcomes like "one party gives up and disappears from the dispute" is a form of "dispute resolution", in that the dispute ends. I'm not sure if it's for the better, but oftentimes in these cases, another editor will asynchronously pick up where the first left off, so the end result is more or less the same. I think the (long) comment isaacl linked above has some truth in it, particularly (for this context) the comments regarding the effects of the volunteer nature of Wikipedia.
There are certainly shortcomings in dispute resolution here that can be improved, but I don't think it's through mechanisms like expanding voluntary mediation, which has (IMO) proven not to work here. I think we need to start with acknowledging how dispute resolution actually works on this site, and thus what works here and in communities like this one, as opposed to how dispute resolution works in the office.
I agree that referrals to RSN are good at solving the issue. But in this case, DRN is just acting as a very longwinded redirect, perhaps primarily useful for newer editors who aren't familiar with noticeboards here. An experienced WP:3O volunteer could've also just told the parties "hey, go to RSN for this", and it'd be much smoother. ProcrastinatingReader (talk) 09:57, 27 January 2025 (UTC)[reply]
I wonder how often DRN gets disputes involving two editors. Wikipedia:Arbitration/Requests/Enforcement has just been reduced to reports of two editors in Wikipedia:Arbitration/Requests/Case/Palestine-Israel articles 5#AE reports limited to two parties. (If you need to complain about three people, you have to file three separate reports.) This was because multiparty reports were complicated. BRD (in its original version, not Wikipedia:What editors mean when they say you have to follow BRD) similarly recommended one-on-one negotiations instead of trying to talk to a bunch of people at once. It's easier for "you and me" to agree on something than for "you and me and him and her and them", because in larger groups, one unreasonable person could prevent everyone else from discovering that they could reach an agreement. Does DRN have the same struggle? WhatamIdoing (talk) 23:26, 27 January 2025 (UTC)[reply]
Correction: AE will only consider reports about one person. The second person being considered is the filer. Best, Barkeep49 (talk) 11:20, 31 January 2025 (UTC)[reply]

Just a minor take on the relative success of mediation: A lot of the value of mediation, imo, is retention. Mediators can exercise some control on participants' behavior, which can keep them from getting blocked. You can argue that, well, maybe these people should be indef'd or banned or whatever; but we're so frequently dealing with complicated, hot-under-the-collar issues that require from some people just a capital-G Godly amount of patience. I would in general prefer editors not be blocked if despite their civility problems they are otherwise contributing solidly to the project. So it's not just the success of the case. We are never going to have, say, an Israel-Palestine case get marked "resolved" without simultaneously winning a Nobel. Xavexgoem (talk) 21:05, 27 January 2025 (UTC)[reply]

Pages (non-articles) that are not neutral must have a template disclosing their non-neutrality

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Articles have to be neutral; however, essays and other pages do not have to be. While I think opinions should have a place on Wikipedia, people should be aware that the page they are looking at has opinions and does not follow a neutral point of view. The most popular opinionated essay that I can think of is WP:Trump. Obviously, Wikipedians who support Trump will not like this essay and think "how could this be on the site after all these years?". They may blank the page, then nominate it for deletion, believing it to be an attack page. That is why I propose that a template be created informing new Wikipedians that non-articles on Wikipedia do not have to follow NPOV and can contain opinions. If the article has the humor template, the template I am proposing does not need to be added. SimpleSubCubicGraph (talk) 00:44, 27 January 2025 (UTC)[reply]

What is "non-neutral" about WP:Trump? O3000, Ret. (talk) 01:05, 27 January 2025 (UTC)[reply]
@Objective3000 Did you even click the wikilink? These revisions: https://en.wikipedia.org/w/index.php?title=Wikipedia:Not_every_single_thing_Donald_Trump_does_deserves_an_article&oldid=1266665671 and the current WP:NTRUMP are very opinionated and that is why I request a template to be created that informs the wikipedian of non neutrality. SimpleSubCubicGraph (talk) 01:08, 27 January 2025 (UTC)[reply]
that informs a wikipedian that essays and other non-articles don't need to be neutral.* SimpleSubCubicGraph (talk) 01:08, 27 January 2025 (UTC)[reply]
That's what the essay tag already on the page is for. MrOllie (talk) 01:12, 27 January 2025 (UTC)[reply]
@MrOllie The essay tag just says, this is a wikipedia policy from an editor that has not been thoroughly vetted by the community. This does not say that essays don't have to be NPOV. SimpleSubCubicGraph (talk) 01:17, 27 January 2025 (UTC)[reply]
What it says is This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, as it has not been thoroughly vetted by the community. Some essays represent widespread norms; others only represent minority viewpoints.. There is no need to list every policy that it hasn't been vetted to follow. MrOllie (talk) 01:25, 27 January 2025 (UTC)[reply]
The essay tag does not say "this is a wikipedia policy". And if you read WP:NPOV, you'll see it's about articles. Gråbergs Gråa Sång (talk) 09:36, 27 January 2025 (UTC)[reply]
Of course I clicked on the link you provided. No, I didn't look at every previous version of the essay. And as MrOllie says, essay is the key. Essays are opinions by definition. O3000, Ret. (talk) 01:17, 27 January 2025 (UTC)[reply]
Uh, why would "wikipedians who support Trump" think this? The WP:TDS essay is very clearly suggesting not to fill Wikipedia with reflexive anti-Trump outrage cycles. Is this something "wikipedians who support Trump" think should happen? It's even acronymed "trump derangement syndrome"! CMD (talk) 01:52, 27 January 2025 (UTC)[reply]
@MrOllie @Objective3000 Well some people don't know this, I certainly didn't. That is why I nominated for MfD. SimpleSubCubicGraph (talk) 03:54, 27 January 2025 (UTC)[reply]
What exactly didn't you know? CMD (talk) 05:06, 27 January 2025 (UTC)[reply]
@SimpleSubCubicGraph: not knowing something is a really bad motivation for then taking action "against" the target of your ignorance or to even criticize it. You are acting from a position of ignorance, and your inexperience here is quite evident. I suggest you follow some very experienced editors and make a habit of asking (not accusing) them about things you don't understand. Even your use of "neutral" (referring to NPOV policy) reveals you don't know that we do not mean it in the normal sense. Feel free to ask me about things on my talk page. -- Valjean (talk) (PING me) 17:52, 30 January 2025 (UTC)[reply]
The {{essay}} tag already says that essays don't necessarily represent the views of the community. I suggest you drop the stick regarding the Trump essay. voorts (talk/contributions) 05:22, 27 January 2025 (UTC)[reply]
@Voorts I'm not mad that I wasn't able to get a consensus on deleting the WP:NTRUMP article, it was just a good example for the proposal I am making. SimpleSubCubicGraph (talk) 05:48, 27 January 2025 (UTC)[reply]
@SimpleSubCubicGraph, were you surprised when you originally discovered that essays (i.e., ones not tagged as being humorous) did not have to comply with NPOV? WhatamIdoing (talk) 23:28, 27 January 2025 (UTC)[reply]
@WhatamIdoing Of course I was. I thought I was on a fake version of Wikipedia when I first saw it. SimpleSubCubicGraph (talk) 03:43, 30 January 2025 (UTC)[reply]
If I had not stopped and looked around I would have probably been blocked for edit warring. SimpleSubCubicGraph (talk) 03:44, 30 January 2025 (UTC)[reply]
Well, I'm glad that you stopped and looked around, then. We need to think about how people learn about Wikipedia, because it's pretty complicated.
As a step towards gathering information, perhaps you'd like to think about your overall experience, and post it at User:Clovermoss/Editor reflections. This is a page where we're trying to collect information about the differing experiences that a wide variety of editors have had. You're coming up on your two-year anniversary, so I think you'd be a perfect candidate: new enough to remember what your first edits felt like, but experienced enough to have seen some of the back end. WhatamIdoing (talk) 04:23, 30 January 2025 (UTC)[reply]
@SimpleSubCubicGraph: above you wrote: "on deleting the WP:NTRUMP article". Even in this discussion about that ESSAY!!!, you call it an article. Learn the difference. Essays are not bound by the NPOV policy (except the BLP part of it) or many other policies and are not even part of the encyclopedia. While the public can access them, they are more like inhouse communication between editors. It's one way we share our opinions with each other. -- Valjean (talk) (PING me) 18:01, 30 January 2025 (UTC)[reply]
The kind of person who's going to blank a Trump page isn't going to care about the notice. If anything it might make them more likely to, because now we're suppressing ideas and freedom of speech blah blah blah. It's like putting notices on every article reading "please don't vandalize this" and expecting people to go "oh golly gee I was going to vandalize this page but now I know not to, thanks!" Gnomingstuff (talk) 03:20, 29 January 2025 (UTC)[reply]
Is this a follow-up to WP:VPI § Ban mainstream media, two days prior? I'm not sure what this kind of activity is supposed to indicate, but {{essay}} works fine outside mainspace. Maybe I'm just grumpy, but also maybe you could get a better feel for the community and our processes before opening more threads at the Village Pumps. Folly Mox (talk) 13:34, 29 January 2025 (UTC)[reply]
@Folly Mox It is not. "Ban mainstream media" is entirely separate from this one. I just figured a tiny update to the essay template, disclosing their non-neutrality, would be good so as not to catch newer editors off guard. SimpleSubCubicGraph (talk) 03:44, 30 January 2025 (UTC)[reply]
To be fair, the essay template does say It contains the advice or opinions of one or more Wikipedia contributors. [...] Some essays represent widespread norms; others only represent minority viewpoints. Do you have a rewording in mind to emphasize more that essays might follow a particular point of view that isn't consensus?
"Not neutral" is a bit of a clumsy wording for essays about, say, writing advice. They aren't necessarily a part of a wider debate with opposing sides in which we could be meaningfully talking about "neutrality" like in a political controversy. Even then, "neutrality" could give an impression that there is a need for a false balance between opposing sides, rather than norms being a question of community consensus. Chaotic Enby (talk · contribs) 04:28, 30 January 2025 (UTC)[reply]
No it's a follow up to SSCG trying to get the page deleted at MfD and since that didn't work, they're now trying to get it censored. voorts (talk/contributions) 13:40, 30 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Redirects not mentioned in the articles

I have proposed expanding the directions at the top of Category:Redirects to an article without mention. Please see Category talk:Redirects to an article without mention. WhatamIdoing (talk) 06:41, 27 January 2025 (UTC)[reply]

US cabinet nominees

Which is the better format for using "Nominee"?

Pam Bondi
United States Attorney General
Nominee
Assuming office
TBD
SucceedingMerrick Garland
Pam Bondi
United States Attorney General
Nominee
Assuming office
TBD
SucceedingMerrick Garland

The first example doesn't use the 'status' bar, whereas the second example does. GoodDay (talk) 17:33, 1 February 2025 (UTC)[reply]

Contacting @TimeToFixThis, Mazerks, and Tomrtn:, who've also edited this area of the infoboxes-in-question. GoodDay (talk) 17:39, 1 February 2025 (UTC)[reply]

I don't see any reason for it not to go in the status bar, as the title bar simply displays the title of the office the person is nominated to, is assuming, or holds. Putting nominee in the status bar seems reasonable to me as it distinguishes the person from being an incumbent, acting or interim official. Mazerks (talk) 18:16, 1 February 2025 (UTC)[reply]
So… Suppose a nominee isn’t confirmed by the Senate… would we put “failed confirmation” in the status bar or something? Blueboar (talk) 18:29, 1 February 2025 (UTC)[reply]
As far as I am aware they would still be considered the nominee until the president removes the nomination or picks someone else. Don't take my word as fact though. TimeToFixThis (talk) 03:22, 2 February 2025 (UTC)[reply]
As TimeToFixThis said, if the President withdraws the nomination or they fail the senate, they no longer have any relation to the position and it simply comes off of their page, as happened with Matt Gaetz. Mazerks (talk) 19:48, 2 February 2025 (UTC)[reply]
The first one seems better visually for readers; the separate shaded boxes in the status bar example look clunky and disconnected, and like really old web design. If the status bar didn't have the same shading and was just text like the rest, it would be fine. Schazjmd (talk) 18:34, 1 February 2025 (UTC)[reply]
I can see that perspective but to me it makes sense to utilize the status bar. When they are the incumbent it is disconnected also, so I don't know. TimeToFixThis (talk) 03:20, 2 February 2025 (UTC)[reply]
I would disagree with your view that it makes the web design look old, personally I think it's more aesthetic to use the status bar. Mazerks (talk) 19:49, 2 February 2025 (UTC)[reply]
Readers don't know that's something we call a "status bar". They just see two gray boxes. Schazjmd (talk) 21:04, 2 February 2025 (UTC)[reply]
I have been in support of the second option since these edit wars started. It makes sense to utilize the status bar - why else would it be there if not for that? When I started to edit these pages as more nominees were announced, I was thwarted by some editor who seemed very convinced it had to be the first option. I forget what his name was. He said initially it was because they were "presumptive nominees", but never made a good argument as to why they couldn't be in the status bar. He went and changed all of the presumptive nominees' infoboxes to the first option and I figured he knew what he was talking about, so I just started editing them like that.
If there is no rule on this, I'd vote to use the second option. TimeToFixThis (talk) 03:16, 2 February 2025 (UTC)[reply]
I don't know about the "presumptive nominee" bit. The first option is best, as it's more compact. GoodDay (talk) 19:09, 2 February 2025 (UTC)[reply]

Idea lab

The prominence of parent categories on category pages

The format of category pages should be adjusted so it's easier to spot the parent categories.

Concrete example:

I happen to come across the page: Category:Water technology

I can see the Subcategories. Great. I can see the Pages in the category. Great. No parent categories. That's a shame --- discovering the parent categories can be as helpful as discovering the subcategories.

Actually, the parent categories are there (well, I think they are --- I'm not sure because they're not explicitly labelled as such). But I don't notice them because they're in a smaller font in the blue box near the bottom of the page: Categories: Water | Chemical processes | Technology by type

I think the formatting (the typesetting) of the parent categories on category pages should be adjusted to give the parent categories the same prominence as the subcategories. This could be done by changing: Categories: Water | Chemical processes | Technology by type to: Parent categories: Water | Chemical processes | Technology by type and increasing the size of the font of `Parent categories', or, perhaps better, by having the parent categories typeset in exactly the same way as the subcategories. D.Wardle (talk) 22:21, 22 December 2024 (UTC)[reply]

Parent categories are displayed on Category: pages in exactly the same way that categories are displayed in articles. WhatamIdoing (talk) 04:26, 26 December 2024 (UTC)[reply]
The purpose of an article page is to give a clear exposition of the subject. Having a comprehensive presentation of the categories on such a page would be clutter --- a concise link to the categories is sufficient and appropriate.
The purpose of a category page is to give a comprehensive account of the categories. A comprehensive presentation of the categories would not clutter the subject (it is the subject).
Therefore, I do not expect the parent categories to be presented the same on article and category pages --- if they are presented the same, that only reinforces my opinion that some change is necessary. D.Wardle (talk) 20:15, 27 December 2024 (UTC)[reply]
I think the purpose of a category page is to help you find the articles that are in that category (i.e., not to help you see the category tree itself). WhatamIdoing (talk) 21:40, 27 December 2024 (UTC)[reply]
Is there any research on how people actually use categories? —Kusma (talk) 21:48, 27 December 2024 (UTC)[reply]
I don't think so, though I asked a WMF staffer to pull numbers for me once, which proved that IPs (i.e., readers) used categories more than I expected. I had wondered whether they were really only of interest to editors. (I didn't get comparable numbers for the mainspace, and I don't remember what the numbers were, but my guess is that logged-in editors were disproportionately represented among the Category: page viewers – just not as overwhelmingly as I had originally expected.) WhatamIdoing (talk) 22:43, 27 December 2024 (UTC)[reply]
I'm fine with parent categories being displayed the same way on articles and categories but I think it's a problem that parent categories aren't displayed at all in mobile on category pages, unless you are registered and have enabled "Advanced mode" in mobile settings. Mobile users without category links probably rarely find their way to a category page but if they do then they should be able to go both up and down the category tree. PrimeHunter (talk) 15:39, 28 December 2024 (UTC)[reply]
Am I missing something? Is there a way of seeing the category tree (other than the category pages)?
If I start at:
https://en.wikipedia.org/wiki/Wikipedia:Contents#Category_system
... following the links soon leads to category pages (and nothing else?). D.Wardle (talk) 20:20, 28 December 2024 (UTC)[reply]
I'd start with Special:CategoryTree (example). WhatamIdoing (talk) 20:49, 28 December 2024 (UTC)[reply]
You can click the small triangles to see deeper subcategories without leaving the page. This also works on normal category pages like Category:People. That category also uses (via a template) <categorytree>...</categorytree> at Help:Category#Displaying category trees and page counts to make the "Category tree" box at top. PrimeHunter (talk) 20:59, 28 December 2024 (UTC)[reply]
Now there are three words I would like to see added to every category page. As well as `parent' prefixing `categories' in the blue box (which prompted this discussion), I would also like `Category tree' somewhere on the page with a link to the relevant part of the tree (for example, on:
https://en.wikipedia.org/wiki/Category:Water_technology
... `Category tree' would be a link to:
https://en.wikipedia.org/wiki/Special:CategoryTree?target=Category%3AWater+technology&mode=categories&namespaces=
).
I can only reiterate that I think I'm typical of the vast majority of Wikipedia users. My path to Wikipedia was article pages thrown up by Google searches. I read the articles and curious to know how the subject fitted into wider human knowledge, clicked on the category links. This led to the category pages which promised so much but frustrated me because I couldn't find the parent categories and certainly had no idea there was a category tree tool. This went on for years. Had the three additional words been there, I would have automatically learned about both the parent categories and the category tree tool, greatly benefitting both my learning and improving my contributions as an occasional editor. Three extra words seems a very small price to pay for conferring such a benefit on potentially a huge fraction of users. D.Wardle (talk) 03:43, 30 December 2024 (UTC)[reply]
I think it would be relatively easy to add a link to Special:CategoryTree to the "Tools" menu. I don't see an easy way to do the other things. WhatamIdoing (talk) 07:33, 30 December 2024 (UTC)[reply]
It's possible to display "Parent categories" on category pages and keep "Categories" in other namespaces. The text is made with MediaWiki:Pagecategories in both cases but I have tested at testwiki:MediaWiki:Pagecategories that the message allows a namespace check. Compare for example the display on testwiki:Category:4x4 type square and testwiki:Template:4x4 type square/update. PrimeHunter (talk) 18:01, 30 December 2024 (UTC)[reply]
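For illustration, the namespace-aware message described above might read something like the following. This is a hypothetical sketch, not the exact wikitext tested on testwiki; it assumes (as in the default en message) that $1 is the number of categories on the page:

```wikitext
{{#ifeq: {{NAMESPACE}} | Category
| Parent {{PLURAL:$1|category|categories}}
| {{PLURAL:$1|Category|Categories}}
}}
```

On a Category: page the first branch would render "Parent categories:" in the blue box; everywhere else the label would stay as it is now.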
How much evidence of community consensus do you need to make that change here? WhatamIdoing (talk) 19:16, 30 December 2024 (UTC)[reply]
I've looked at what you've done (and hopefully understood). MediaWiki:Pagecategories puts some of the words in the blue box at the bottom of all category pages. But what code makes the category pages (what code calls MediaWiki:Pagecategories)? I think the changes I'm suggesting should be made to that calling code... D.Wardle (talk) 23:35, 9 January 2025 (UTC)[reply]
Is the answer to your question "MediaWiki"?
Every page has certain elements. You can see which ones are used on any given page with the mw:qqx trick, e.g., https://en.wikipedia.org/wiki/Category:Water_technology?uselang=qqx WhatamIdoing (talk) 01:58, 10 January 2025 (UTC)[reply]
I looked at the MediaWiki Help and Manual. How the formatting of namespaces is controlled might be discussed somewhere, but, at the very least, it's not easy to find (I didn't find it). I've requested this be addressed (https://www.mediawiki.org/wiki/Help_talk:Formatting#The_formatting_of_namespaces) but, thus far, no one has volunteered.
Returning to the issue here, my inference is that `normal' Wikipedia editors would not be able to implement the changes I'm suggesting (adding the word `parent' and a link to the category tree) assuming the changes were agreed upon. I therefore also conclude that the changes I'm suggesting do need to go to Village_pump_(proposals). Do you agree? D.Wardle (talk) 23:29, 17 January 2025 (UTC)[reply]
@PrimeHunter already worked out how to do this change. Go to testwiki:Category:4x4 type square and look for the words "Parent categories:" at the bottom of the page. If that's what you want, then the technical end is already sorted. WhatamIdoing (talk) 00:12, 18 January 2025 (UTC)[reply]
You are right that PrimeHunter's solution works but (not wishing to criticize PrimeHunter in any way --- I'm grateful for their input) I don't think it's the right way to do it. To explain: When an editor adds a section to an article, the edit box is initially blank. There is no code to specify e.g. the font, the size of the font, the colour of the font, the indentation from the margin, etc. These things must be specified somewhere but they are hidden from the editor. And that's a good feature (it enables the editor to do their work without having to wade through a whole heap of code specifying default formatting which isn't relevant to them). PrimeHunter's solution goes against that principle --- it's adding formatting code to the editor's box. You might argue that it's only a very small piece of code, but, if changes are routinely made in this way, over time the small pieces of code will accumulate and the editor's boxes will become a mess. D.Wardle (talk) 21:00, 18 January 2025 (UTC)[reply]
Look at the page history. PrimeHunter has never edited that page. It does not add any code to the editor's box. WhatamIdoing (talk) 21:12, 18 January 2025 (UTC)[reply]
Would a simpler cat page be easier for you to look at? Try testwiki:Category:Audio files or testwiki:Category:Command keys instead. All of the cats on that whole wiki are showing "Parent categories" at the bottom of the page. WhatamIdoing (talk) 21:18, 18 January 2025 (UTC)[reply]
Agreed. And (I think you already understand this) that is because PrimeHunter's edit of testwiki:MediaWiki:Pagecategories affects all pages on https://test.wikipedia.org.
Comparing:
https://en.wikipedia.org/w/index.php?title=Category:Wikipedia&action=edit
and:
https://test.wikipedia.org/w/index.php?title=Category:Wikipedia&action=edit
...adds weight to two of my previous comments:
  • The test.wikipedia page has this text:
Categories: Root category
...at the bottom of the edit window (my apologies --- it's not actually in the edit window) --- this is not helpful for novice editors --- they could be confused and/or deterred by it --- it should be hidden from them.
  • The en.wikipedia page has nothing analogous to the just mentioned text, suggesting that PrimeHunter's solution might not actually work in en.wikipedia.
D.Wardle (talk) 23:59, 20 January 2025 (UTC)[reply]

If editors can't see the list of categories that the page is in, how will they add or remove the categories?

On the testwiki page, the example has only one category, so this is what you see in wikitext:

[[Category:Root category]]

The analogous text in the en.wikipedia page you link is this:

[[Category:Creative Commons-licensed websites]]
[[Category:Online encyclopedias| ]]
[[Category:Virtual communities]]
[[Category:Wikimedia projects]]
[[Category:Wikipedia categories named after encyclopedias]]
[[Category:Wikipedia categories named after websites]]

I thought your concern was about what readers see. You said "But I don't notice them [i.e., the parent categories] because they're in a smaller font in the blue box near the bottom of the page: Categories: Water | Chemical processes | Technology by type".

Now you're talking about a completely different thing, which is what you see when you're trying to change those parent categories. WhatamIdoing (talk) 02:10, 21 January 2025 (UTC)[reply]

The "pre" formatting doesn't appear to play well with ::: formatting. WhatamIdoing (talk) 02:12, 21 January 2025 (UTC)[reply]
Sorry about that.
To begin again, I think it would be a good idea if all category pages had:
  • a heading `Parent categories' similar to `Subcategories' (the current `Categories' in the blue box is ambiguous and too inconspicuous).
  • a small link near the bottom of the page, the link having text `Category tree' and target the category's entry in the category tree.
I don't have the technical competence to make either of these changes. Also, given that they would affect every category page (which is a large part of the encyclopedia), before making the changes it would be prudent to check others agree (or, at least, that there is not strong opposition).
So how to make progress? (It would be great if a Wikipedian more experienced than myself would pick it up and run with it.) D.Wardle (talk) 23:46, 21 January 2025 (UTC)[reply]
We currently have something like this:
I think we can get this changed to:
I do not think we can realistically get this changed to:
Parent categories
Category name 1, Category name 2, etc.
Do you want to have the middle option, or is the third option the only thing that will work for you? WhatamIdoing (talk) 00:06, 22 January 2025 (UTC)[reply]
The middle option is definitely a step in the right direction so if you could implement it that would be great.
With regard to the third option (and also the link to the category tree), maybe the desirability of these could be put forward for discussion at a meeting of senior Wikipedians (and if they are deemed desirable but difficult to implement maybe that difficulty of implementation could also be discussed --- if the MediaWiki software does not allow desirable things to be done easily, it must have scope for improvement...)
Thank you for your assistance. D.Wardle (talk) 19:55, 22 January 2025 (UTC)[reply]
We don't have meetings of senior Wikipedians. The meetings happen right here, and everyone is welcome to participate.
I'll go ask the tech-savvy volunteers at Wikipedia:Village pump (technical) if one of them would make the change to the middle setting. WhatamIdoing (talk) 20:11, 22 January 2025 (UTC)[reply]

Break

Perhaps I don't understand what PrimeHunter has done. It's hard for me to follow: If I explore the https://en.wikipedia.org domain, I find that one of PrimeHunter's references (https://en.wikipedia.org/wiki/MediaWiki:Pagecategories) has been deleted, while, if I explore the https://test.wikipedia.org domain, I find that I cannot see what's in the edit box of one of the pages (https://test.wikipedia.org/wiki/Category:4x4_type_square) because `only autoconfirmed users can edit it'.
Given that https://en.wikipedia.org/wiki/MediaWiki:Pagecategories has been deleted, maybe PrimeHunter's solution only works in the testsite? D.Wardle (talk) 23:14, 20 January 2025 (UTC)[reply]
PrimeHunter's solution has only been created in the testsite. Nobody has ever posted it here.
You do not need to be autoconfirmed to see what's in the edit box. You just need to scroll down past the explanation about not being able to change what's in the edit box.
That said, I suggest that you stop looking at the complicated page of 4x4 type square, and start looking at a very ordinary category page like testwiki:Category:Command keys, because (a) it does not have a bunch of irrelevant stuff in it and (b) anyone can edit that cat page. WhatamIdoing (talk) 23:33, 20 January 2025 (UTC)[reply]
Maybe I'm naive, but I think it must be easy to do the two things I'm suggesting. There is a piece of code somewhere that takes the content entered by a Wikipedian using `Edit' and creates the category page. It's just a case of modifying that code to add one word and two words which are also a link. It must be similar to changing a style file in LaTeX or a CSS in html.
Again, maybe I'm naive, but it would seem to me appropriate to move this discussion to Village pump (proposals). Any objection? D.Wardle (talk) 21:07, 4 January 2025 (UTC)[reply]
If @PrimeHunter is willing to make the change, then there's no need to move the discussion anywhere. WhatamIdoing (talk) 23:19, 4 January 2025 (UTC)[reply]
We should still have an RFC before changing something for everyone, so a formal proposal sounds like a good idea. Otherwise it may be reverted on the opinion of one person. Graeme Bartlett (talk) 21:41, 22 January 2025 (UTC)[reply]
Do you personally object? Or know anyone who objects? WhatamIdoing (talk) 03:45, 23 January 2025 (UTC)[reply]

Moving categories to the top of a page

@D.Wardle I looked at your original request and it reminded me that Commons has a gadget (optional user preference) to move the categories box to the tops of all pages. That gadget is at c:MediaWiki:Gadget-CategoryAboveAll.js, and I've found it quite useful when working with files there. It's not quite what you're asking for, but it feels like it might help and be quite an easy win?

I've tested a local version of it at User:Andrew Gray/common.js - it's the last section on that page, lines 22-30, and I've set it up so that it only triggers when you're looking at a category page. If you copy that bit to your own common.js file (User:D.Wardle/common.js) then it should, touch wood, also work for you. Andrew Gray (talk) 18:31, 23 January 2025 (UTC)[reply]

Hi Andrew, thanks very much for the info, but it doesn't quite address the point I'm making: if Wikipedia were perfectly designed, complete newcomers to the site would discover all the useful features rapidly and by accident (without having to read help pages or similar). At the moment, that's true for the category pages. (A newcomer starts with an article. At the end of the article is 'Categories'. Curious, they click on it and discover the category pages.) From the category pages they rapidly discover subcategories. But they are unlikely to discover parent categories (the parent categories being relegated to a small, ambiguous heading at the end of the page). And they certainly won't discover the category tree tool (it being missing altogether). So, from my perspective, it's what newcomers see that needs to be changed, not what I see. D.Wardle (talk) 21:25, 23 January 2025 (UTC)[reply]

Implementing "ChatBot Validation" for sentences of Wikipedia

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hi, I propose to define a "Validation process" using Chatbots (e.g. ChatGPT) in this way:

  1. The editor or an ordinary user, presses a button named "Validate this Sentence"
  2. A query named "Is this sentence true or not? + Sentence" is sent to ChatGPT
  3. If the ChatGPT answer is true, then tick that sentence as valid, otherwise declare that the sentence needs to be validated manually by humans.

I think the implementation of this process would be very fast and convenient. I really think that "chatbot validation" would be a very helpful capability for users to be sure about the validity of the information in Wikipedia articles. Thanks, Hooman Mallahzadeh (talk) 10:34, 6 January 2025 (UTC)[reply]
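For concreteness, the three-step process above could be sketched as follows. This is a minimal illustration only: the chatbot call is a placeholder, since the proposal does not specify any real API.

```python
# Minimal sketch of the proposed "Validate this Sentence" flow.
# query_chatbot is a placeholder: a real implementation would call an
# LLM service here, which this proposal does not specify.

def query_chatbot(prompt: str) -> str:
    """Placeholder for the LLM call; should answer 'true' or 'false'."""
    raise NotImplementedError

def validate_sentence(sentence: str, ask=query_chatbot) -> str:
    """Tick the sentence as valid only if the chatbot answers 'true';
    otherwise flag it for manual validation by humans."""
    answer = ask(f"Is this sentence true or not? {sentence}").strip().lower()
    return "valid" if answer == "true" else "needs human validation"
```

Passing the chatbot in via the `ask` parameter keeps the flow testable without a live service; any answer other than an exact "true" falls through to human review.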

While it would certainly be convenient, it would also be horribly inaccurate. The current generation of chatbots are prone to hallucinations and cannot be relied on for such basic facts as what the current year is, let alone anything more complicated. Thryduulf (talk) 10:48, 6 January 2025 (UTC)[reply]
@Thryduulf The question is

Is Wikipedia hallucinating, or is ChatGPT?

This type of validation (validation by ChatGPT) may be inaccurate in assessing the correctness of Wikipedia, but when ChatGPT declares that "Wikipedia's information is wrong!", a very important process named "validate manually by humans" is activated. This second validation is the main application of this idea: finding possibly wrong data on Wikipedia to be investigated more accurately by humans. Hooman Mallahzadeh (talk) 11:02, 6 January 2025 (UTC)[reply]
The issue is, ChatGPT (or any other LLM/chatbot) might hallucinate in both directions, flagging false sentences as valid and correct sentences as needing validation. I don't see how this is an improvement compared to the current process of needing verification for all sentences that don't already have a source. Chaotic Enby (talk · contribs) 11:13, 6 January 2025 (UTC)[reply]
If there were some meaningful correlation between what ChatGPT declares true (or false) and what is actually true (or false), then this might be useful. As it is, this would just waste editor time. Thryduulf (talk) 11:15, 6 January 2025 (UTC)[reply]
@Chaotic Enby@Thryduulf Although ChatGPT may give wrong answers, it is very powerful. To assess its power, we need to carry out this research:
  1. Give ChatGPT a sample containing true and false sentences, but hide true answers
  2. Ask ChatGPT to assess the sentences
  3. Compare actual and ChatGPT answers
  4. Count the ratio of answers that are the same.
I really propose that if this ratio is high, then we start to implement this "chatbot validation" idea. Hooman Mallahzadeh (talk) 11:24, 6 January 2025 (UTC)[reply]
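Steps 1–4 above amount to computing an agreement ratio against a labelled sample. A sketch (the function name and sample data are mine, not part of the proposal):

```python
# Sketch of the proposed evaluation: compare the chatbot's true/false
# judgements against the hidden known answers and report the fraction
# that match.

def agreement_ratio(model_answers, true_answers):
    """Fraction of sentences where the model's judgement matches the label."""
    if len(model_answers) != len(true_answers):
        raise ValueError("answer lists must have the same length")
    matches = sum(m == t for m, t in zip(model_answers, true_answers))
    return matches / len(true_answers)

# Hypothetical sample: the model agrees with 3 of 4 labels, a ratio of 0.75.
ratio = agreement_ratio([True, False, True, True],
                        [True, False, False, True])
```

Note that a single ratio hides where the errors fall: as discussed below, accuracy on easy questions says little about accuracy on the hard cases where automation would actually help.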
There are many examples of people doing this research, e.g. [10] ranks ChatGPT as accurate "88.7% of the time", but (a) I have no idea how reliable that source is, and (b) it explicitly comes with multiple caveats about how that's not a very meaningful figure. Even if we assume that it is 88.7% accurate at identifying what is and isn't factual across all content on Wikipedia, that's still not really very useful. In the real world it would be less accurate than that, because those accuracy figures include very simple factual questions that it is very good at ("What is the capital of Canada?" is the example given in the source), which we don't need ChatGPT to verify because it's quicker and easier for a human to verify them. For more complex things, especially those relating to information that is not commonly found in its training data (which is heavily biased towards information in English easily accessible on the internet), where there would be the most benefit to automatic verification, the accuracy gets worse. Thryduulf (talk) 11:38, 6 January 2025 (UTC)[reply]
Have you read, for example, the content section of OpenAI's Terms of Use? Sean.hoyland (talk) 10:53, 6 January 2025 (UTC)[reply]
@Sean.hoyland If OpenAI does not consent to this application, we can use other chatbots whose terms do allow it. Nowadays, many chatbots are free to use. Hooman Mallahzadeh (talk) 11:04, 6 January 2025 (UTC)[reply]
I'm sure they would be thrilled with this kind of application, but the terms of use explain why it is not fit for purpose. Sean.hoyland (talk) 11:17, 6 January 2025 (UTC)[reply]
Factual questions are where LLMs like ChatGPT are weakest. Simple maths, for example. I just asked "Is pi larger than 3.14159265?" and got the wrong answer "no" with an explanation why the answer should be "yes":
"No, π is not larger than 3.14159265. The value of π is approximately 3.14159265358979, which is slightly larger than 3.14159265. So, 3.14159265 is a rounded approximation of π, and π itself is just a tiny bit larger."
Any sentence "validated by ChatGPT" should be considered unverified, just like any sentence not validated by ChatGPT. —Kusma (talk) 11:28, 6 January 2025 (UTC)[reply]
I get a perfect answer to that question (from the subscription version of ChatGPT): "Yes. The value of π to more digits is approximately 3.141592653589793… which is slightly larger than 3.14159265. The difference is on the order of a few billionths." But you are correct; these tools are not ready for serious fact checking. There is another reason this proposal is not good: ChatGPT gets a lot of its knowledge from Wikipedia, and when it isn't from Wikipedia it can be from the same dubious sources that we would like to not use. One safer use I can see is detection of ungrammatical sentences. It seems to be good at that. Zerotalk 11:42, 6 January 2025 (UTC)[reply]
It's a good example of the challenges of accuracy. Using a different prompt "Is the statement pi > 3.14159265 true or false?", I got "The statement 𝜋 > 3.14159265 is true. The value of π is approximately 3.14159265358979, which is greater than 3.14159265." So, whatever circuit is activated by the word 'larger' is doing something less than ideal, I guess. Either way, it seems to improve with scale, grounding via RAG or some other method and chain of thought reasoning. Baby steps. Sean.hoyland (talk) 11:51, 6 January 2025 (UTC)[reply]
I do not think we should outsource our ability to check whether a sentence is true and/or whether a source verifies a claim to AI. This would create orders of magnitude more problems than it would solve... besides, as people point out above, facts are where chatbots are weakest. They're increasingly good at imitating tone and style and meter and writing nicely, but are often garbage at telling fact from fiction. Cremastra (uc) 02:22, 7 January 2025 (UTC)[reply]
Writing a script that would automatically give a "validation score" to every article—average probability of True vs. False across all sentences—would be helpful. (Even if it completely sucks, we can just ignore it, so there's no harm done.) Go ahead and do it if you know how! However, WMF's ML team is already very busy, so I don't think this will get done if nobody volunteers. – Closed Limelike Curves (talk) 04:41, 11 January 2025 (UTC)[reply]
Further Reading: Wikipedia:Village pump (proposals)/Archive 211§AI for WP guidelines/ policies. ExclusiveEditor 🔔 Ping Me! 06:34, 25 January 2025 (UTC)[reply]

Using ChatBots for reverting new edits by new users

Even though the previous idea may have issues, I really think that one factor in reverting new edits by new users could be a chatbot's verdict that an edit fails verification. If the accuracy is near 88.7%, we can use that to verify new edits, possibly by new users, and find vandalism conveniently. Hooman Mallahzadeh (talk) 13:48, 6 January 2025 (UTC)[reply]

Even if we assume the accuracy to be near 88.7%, I would not support having a chatbot review edits. Many editors do a lot of editing, and having one in every ten edits reverted due to an error would be annoying and demotivating. The bot User:Cluebot NG already automatically reverts obvious vandalism with a 99%+ success rate. Ca talk to me! 14:11, 6 January 2025 (UTC)[reply]
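The scale of that concern is easy to quantify with back-of-the-envelope arithmetic. All numbers below are made up for illustration (100,000 edits, 5% vandalism), and it is an assumption that the quoted 88.7% accuracy applies symmetrically to good edits:

```python
# Back-of-the-envelope estimate of good edits wrongly flagged by an
# imperfect classifier. All numbers here are hypothetical.

def expected_false_flags(total_edits, vandalism_rate, accuracy):
    """Good edits wrongly flagged, assuming a symmetric error rate."""
    good_edits = total_edits * (1 - vandalism_rate)
    return good_edits * (1 - accuracy)

# 100,000 edits, 5% vandalism, 88.7% accuracy:
# 95,000 good edits x 11.3% error rate = 10,735 good edits wrongly flagged.
flags = expected_false_flags(100_000, 0.05, 0.887)
```

Because the overwhelming majority of edits are good-faith, even a small error rate translates into thousands of wrongly flagged edits, dwarfing the vandalism actually caught.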
@Ca Can User:Cluebot NG check a semantically wrong sentence such as this?

Steven Paul Jobs was an American engineer.

Instead of an inventor, this sentence wrongly declares that he was an engineer. Can User:Cluebot NG detect this sentence automatically as a wrong sentence?
So I propose to rewrite User:Cluebot NG in a way that uses chatbots, somehow, to semantically check new edits, and tag semantically wrong edits like the above sentence as "invalid by chatbot" for other users to correct. Hooman Mallahzadeh (talk) 14:22, 6 January 2025 (UTC)[reply]
Can Cluebot detect this sentence automatically as a wrong sentence? No. It can't. Cluebot isn't looking through sources. It's an anti-vandalism bot. You're welcome to bring this up with those that maintain Cluebot; although I don't think it'll work out, because that's way beyond the scope of what Cluebot does. SmittenGalaxy | talk! 19:46, 6 January 2025 (UTC)[reply]
I think you, Hooman Mallahzadeh, are too enamoured with the wilder claims of AI and chatbots, both from their supporters and the naysayers. They are simply not as good as humans at spotting vandalism yet; at least the free ones are not. Phil Bridger (talk) 20:46, 6 January 2025 (UTC)[reply]
The number of false positives would be too high. Again, this would create more work for humans. Let's not fall to AI hype. Cremastra (uc) 02:23, 7 January 2025 (UTC)[reply]
Sorry, this would be a terrible idea. The false positives would just be too great; there is enough WP:BITING of new editors, and we don't need LLM hallucinations causing more. -- LCU ActivelyDisinterested «@» °∆t° 16:26, 7 January 2025 (UTC)[reply]
Dear @ActivelyDisinterested, I didn't propose to revert all edits that the chatbot detects as invalid. My proposal says:

Use ChatBot to increase accuracy of User:Cluebot NG.

User:Cluebot NG does not check the semantics of sentences at all. These semantics can only be checked by large language models like ChatGPT. Please note that any Wikipedia sentence can be "semantically wrong", just as it can be syntactically wrong.
Because building large language models for semantic checking is very time-consuming and expensive, we can use them online via service-oriented techniques. Hooman Mallahzadeh (talk) 17:18, 7 January 2025 (UTC)[reply]
But LLMs are not good at checking the accuracy of information, so Cluebot NG would not be more accurate, and in being less accurate would behave in a more BITEY manner to new editors. -- LCU ActivelyDisinterested «@» °∆t° 17:24, 7 January 2025 (UTC)[reply]
Maybe ChatGPT should add a capability for "validation of sentences", whose output would be only one word: True/False/I don't know. Especially for the purpose of validation.
I don't know whether ChatGPT has this capability or not, but if it lacks it, it could be implemented easily. Hooman Mallahzadeh (talk) 17:33, 7 January 2025 (UTC)[reply]
Validation is not a binary thing that an AI would be able to do. It's a lot more complicated than you make it sound (as it requires interpretation of sources - something an AI is incapable of actually doing), and may require access to things an AI would never be able to touch (such as offline sources). —Jéské Couriano v^_^v threads critiques 17:37, 7 January 2025 (UTC)[reply]
@Hooman Mallahzadeh: I refer you to the case of Varghese v. China South Airlines, which earned the lawyers citing it a benchslap. —Jéské Couriano v^_^v threads critiques 17:30, 7 January 2025 (UTC)[reply]
@Jéské Couriano Thanks, I will read the article. Hooman Mallahzadeh (talk) 17:34, 7 January 2025 (UTC)[reply]
(edit conflict × 4) For Wikipedia's purposes, accuracy is determined by whether it matches what reliable sources say. For any given statement there are multiple possible states:
  1. Correct and supported by one or more reliable sources at the end of the statement
  2. Correct and supported by one or more reliable sources elsewhere on the page (e.g. the end of paragraph)
  3. Correct and self-supporting (e.g. book titles and authors)
  4. Correct but not supported by a reliable source
  5. Correct but supported by a questionable or unreliable source
  6. Correct according to some sources (cited or otherwise) but not others (cited or otherwise)
  7. Correct but not supported by the cited source
  8. Incorrect and not associated with a source
  9. Incorrect and contradicted by the source cited
  10. Incorrect but neither supported nor contradicted by the cited source
  11. Neither correct nor incorrect (e.g. it's a matter of opinion or unproven), all possible options for sourcing
  12. Previously correct, and supported by contemporary reliable sources (cited or otherwise), but now outdated (e.g. superseded records, outdated scientific theories, early reports about breaking news stories)
  13. Both correct and incorrect, depending on context or circumstance (with all possible citation options)
  14. Previously incorrect, and stated as such in contemporary sources, but now correct (e.g. 2021 sources stating Donald Trump as president of the US)
  15. Correct reporting of someone's incorrect statements (cited or otherwise).
  16. Predictions that turned out to be incorrect, reported as fact (possibly misleadingly or unclearly) at the time in contemporary reliable sources.
And probably others I've failed to think of. LLMs simply cannot correctly determine all of these, especially as sources may be in different languages and/or not machine readable. Thryduulf (talk) 17:44, 7 January 2025 (UTC)[reply]
I believe someone else had a working implementation of a script that would verify whether a reference supported a claim using LLMs - I think I saw it on one of the Village Pumps a while back. They eventually abandoned it because it wasn't reliable enough, if I remember correctly. — Qwerfjkltalk 16:46, 20 January 2025 (UTC)[reply]
It probably struggles to understand meaning. On the other hand, I reckon you could get a working implementation to look for copyvio. CMD (talk) 18:02, 20 January 2025 (UTC)[reply]
It could be great to have an LLM-supported system to detect potential close paraphrasing. —Kusma (talk) 18:06, 20 January 2025 (UTC)[reply]
Even professional-grade plagiarism detectors are poor at that, generating both false positives and false negatives. That's fine in the environment where they are used with full understanding of the system's limitations and it is used only as one piece of information among multiple sources by those familiar with the topic area. Very little of that is true in the way it would be used on Wikipedia. Thryduulf (talk) 18:49, 20 January 2025 (UTC)[reply]

Implementation of this idea in gemini.google.com

Hi, I noticed that this idea is pretty much implemented at https://gemini.google.com . Here are some sentences to validate that I gave to Gemini, and the results of validation:

Sentence to validate

Steven Paul Jobs was German.

The result:

No, Steven Paul Jobs was not German. He was born in San Francisco, California, and was an American citizen. His parents were American college students, and he was adopted shortly after his birth.

Sentence to validate:

The United States of America (USA), commonly known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal union of 60 states and a federal capital district, Washington, D.C.

The answer:

The United States of America consists of 50 states, not 60.

Sentence to validate:

Steven Paul Jobs was an American engineer.

Answer:

While Steven Paul Jobs was not a formally trained engineer, he was a visionary and a key figure in the development of personal computers. He was a businessman, inventor, and investor who co-founded Apple Inc. and was instrumental in the creation of iconic products like the Macintosh, iPod, iPhone, and iPad.

Finally, I should note that this idea can be implemented quite well with https://gemini.google.com . I really propose to implement it and wait for possible feedback and bug reports. Hooman Mallahzadeh (talk) 14:59, 28 January 2025 (UTC)[reply]

Have you read the above discussion? There is clear consensus that this is a bad idea. Cremastra (talk) 15:01, 28 January 2025 (UTC)[reply]
Gemini even has a giant disclaimer repeatedly telling you not to trust it. Cremastra (talk) 15:03, 28 January 2025 (UTC)[reply]
@Cremastra It does not need to be trusted. My purpose in making this proposal was to detect "suspicious wrong sentences". The purpose of the idea is only to find "suspicious wrong sentences" for humans to inspect afterwards. Hooman Mallahzadeh (talk) 15:07, 28 January 2025 (UTC)[reply]
Here, take a look.

Is the lesser-toothed pademelon common in Sydney suburbs?
The lesser-toothed pademelon is not common in Sydney suburbs. It is found in the rainforests of eastern Australia, including New South Wales, but it prefers undisturbed habitats. It is not likely to be found in areas that have been heavily modified by human activity.

Needless to say, that animal does not exist.

Rhympophorus gigas can be dangerous if approached.
Yes, Rhympophorus gigas can be dangerous if approached. They are large, aggressive beetles that can bite if they feel threatened. Their bites are not venomous, but they can be painful and cause infection.

Ditto.

Red-toed frogs are endangered.
Red-toed frogs are not endangered. However, they are considered a species of special concern in Canada. This means that they are at risk of becoming endangered in the future.

It is not hard to fool. Cremastra (talk) 15:09, 28 January 2025 (UTC)[reply]
@Hooman Mallahzadeh Cremastra (talk) 15:10, 28 January 2025 (UTC)[reply]
Here's another one:

The brown elbow galaxy can be seen by amateur observers with larger telescopes.
The brown elbow galaxy is a popular name for the spiral galaxy M83. It is located about 15 million light-years away in the constellation Hydra.

Cremastra (talk) 15:20, 28 January 2025 (UTC)[reply]
Dear @Cremastra. If you look at the Gemini site's answers, there is a three-dot menu after each response. For example, to validate this wrong sentence:

Jobs was born in San Francisco in 1956 and adopted shortly afterwards.

Under the three-dot menu, there is a hyperlink named "Double-check response". Through this link, the correct birth year of Steve Jobs is given as 1955, taken from "www.cccco.edu".
I checked your sentence too, "The brown elbow galaxy can be seen by amateur observers with larger telescopes." Only some parts of the answer were a "double-checked response".
The answer I was given was

"The "Brown Elbow Galaxy" isn't a commonly used name for any celestial object. It's possible that it's a nickname given to a specific galaxy by amateur astronomers in a particular region."

Likewise,

The brown elbow galaxy is a popular name for the spiral galaxy M83

is not a "double-checked response". Am I wrong? So we should only rely on the parts of the response which are a "double-checked response". Hooman Mallahzadeh (talk) 15:33, 28 January 2025 (UTC)[reply]
@Cremastra The discussion was about "false positives". Please try checking for false positives yourself at https://gemini.google.com and give me feedback after checking multiple sentences. Thanks, Hooman Mallahzadeh (talk) 15:03, 28 January 2025 (UTC)[reply]
Cremastra did try it four times above, and each time it said that something exists when it doesn't exist. Free AI is nowhere near as good as a human editor yet, so just give up on this silly idea. Phil Bridger (talk) 15:49, 28 January 2025 (UTC)[reply]
@Phil Bridger As I mentioned above, in the answers that Cremastra got, the first parts of the sentences were not "double-checked responses"; they are just dreams of the AI. If I am wrong, please tell me. Hooman Mallahzadeh (talk) 15:57, 28 January 2025 (UTC)[reply]
You are wrong. It presented those dreams as if they were true. Phil Bridger (talk) 16:02, 28 January 2025 (UTC)[reply]
Dear @Phil Bridger, there is a "three dots" menu in the Gemini site's answers where you can click "Double-check response". Those "dreams" are not double-checked. Please try again.
If the "dreams" are "double-checked", then I really will "just give up on this silly idea". Hooman Mallahzadeh (talk) 16:06, 28 January 2025 (UTC)[reply]
I tried Cremastra's Pademelon question, and asked for a double check. It lit up some of the text in green, which indicates that it thinks it passed the double check. Can we end this now? MrOllie (talk) 16:14, 28 January 2025 (UTC)[reply]
@MrOllie Yes, if you are sure that such "dreams" are "double-checked", I am convinced. Please close the thread and archive it. Thanks, Hooman Mallahzadeh (talk) 16:18, 28 January 2025 (UTC)[reply]
(edit conflict) If someone or something tells me in English that the lesser-toothed pademelon or Rhympophorus gigas or Red-toed frogs or the brown elbow galaxy exists I expect to be able to believe them without checking whether it has three dots after it or that it doesn't come with "Double-checked response" or that they haven't got their fingers crossed behind their back. Phil Bridger (talk) 16:16, 28 January 2025 (UTC)[reply]

@MrOllie: @Rosguill: Final question: You said "some part of it was green". My final question is: what part exactly was in green? Was that part the "dreams" part? Note that "some part" does not imply the total answer. Please mention exactly what question you asked and what answer you got on Gemini; I should see exactly what you applied and got. Hooman Mallahzadeh (talk) 16:37, 28 January 2025 (UTC)[reply]

Hooman Mallahzadeh I threw pademelons at it again with the "double check". It highlighted this text in green: The lesser-toothed pademelon is a small marsupial native to Australia.. Which is, um, false.
I also tried it with the statement, Greater-toothed pademelons, Thylogale gigadonti, can be seen in wildlife preserves in Queensland.
It responded with Yes, greater-toothed pademelons are found in wildlife preserves in Queensland. They are shy and nocturnal, so it may be difficult to see them during the day. However, you may be able to see their tracks or hear them rustling in the bushes at night.
When I ran doublecheck, it highlighted this sentence in green (the rest was in grey): They are shy and nocturnal, so it may be difficult to see them during the day.
The source it "cited" for that claim (I clicked the little green down arrow) was an Australian Museum article about a real species [11]. This whole project is a wild goose chase, or, rather, some lesser-toothed pademelon tracks. (According to Gemini, I can see lesser-toothed pademelon tracks in Ku-ring-gai Chase National Park – this thing lies so pathologically I'm amazed that someone thought that even with the "double check" it was a useful tool.) Cremastra (talk) 16:58, 28 January 2025 (UTC)[reply]
It's all WP:PADEMELONS. Cremastra (talk) 17:02, 28 January 2025 (UTC)[reply]
@Cremastra I think this wrong answer is due to application of taxonomy. This "double checked" sentence:

They are shy and nocturnal, so it may be difficult to see them during the day.

is derived from behavior of "pademelons" in general, maybe from this sentence:

All pademelons are shy and nocturnal, so it may be difficult to see them during the day.

In fact, all pademelons behave this way, and "greater-toothed pademelons", as a specific kind, should obey this rule as well. Hooman Mallahzadeh (talk) 17:23, 28 January 2025 (UTC)[reply]
It is still wrong.
Besides, here's another one, free of marsupials. I gave it only the scientific name, with a genus that doesn't exist, so it has no information to go on:
Kemptorus henryii was named in honour of Sir Charles Henry.
Its answer (emphasis added):
That's correct! Kemptorus henryii is a species of extinct reptile that was named after Sir Charles Henry. Sir Charles Henry was a British colonial governor who served in several colonies, including Jamaica, the Cape Colony, and Victoria. He was known for his progressive policies and his support for education and social reform. Kemptorus henryii is a small, lizard-like reptile that lived during the Triassic period. It was first discovered in South Africa in the 1960s.
You know how liberal those British colonial governors are.
The point is I gave it no information and it still hallucinated that this made up name was that of an extinct Triassic reptile. Double check gave the last two sentences a "consider searching further" (but it still generated this lie in the first place!!), but okayed Sir Charles Henry was a British colonial governor who served in several colonies, including Jamaica, the Cape Colony, and Victoria and provided this source [12] which is about a real person with a similar name.
This tool is useless. Cremastra (talk) 17:30, 28 January 2025 (UTC)[reply]

Defining reliable resources for Chatbots to validate Wikipedia sentences and implementing a chatbot-resourcing mechanism

I propose to define a set of reliable sources for chatbots like Google Gemini to validate Wikipedia sentences against, and then, in the "double-checking phase", the chatbot would automatically add some references for the sentence as proof of its validity.

I really think that, with the current way Wikipedia sources its content, readers do not have the opportunity to conveniently access reliable sources. This idea could make sourcing very fast, linking directly to the page and sentence of the reliable source. Please discuss the idea. Hooman Mallahzadeh (talk) 13:25, 30 January 2025 (UTC)[reply]

Ye gods. Please WP:DROPTHESTICK. No one here wants chatbots on the encyclopedia, and I've shown above that they're easy to mislead. Cremastra (talk) 13:30, 30 January 2025 (UTC)[reply]
@Cremastra I should add that this idea could be implemented as a browser extension, applying not only to Wikipedia but to all web content. Hooman Mallahzadeh (talk) 13:31, 30 January 2025 (UTC)[reply]
@Cremastra Please archive this thread. Thank you. Hooman Mallahzadeh (talk) 13:34, 30 January 2025 (UTC)[reply]
The thread will be archived automatically if you simply stop posting for a few days. You had an idea, but it turned out to be an extraordinarily bad idea. Just drop it. Phil Bridger (talk) 15:06, 30 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

AfDs taking too long

I've noticed that a lot of AfDs get relisted because of minimal participation, sometimes more than once. This means that when the article does get deleted in the end, it takes too long, and when it doesn't, there's a massive AfD banner at the top for two, sometimes three or more weeks. What could be done to tackle this? How about some kind of QPQ where any editor who nominates an article for deletion is strongly encouraged to participate in an unrelated AfD discussion? -- D'n'B-📞 -- 06:59, 7 January 2025 (UTC)[reply]

I feel WP:RUSHDELETE is appropriate here. I don't understand why the article banner is a problem? Am I missing something? Knitsey (talk) 07:41, 7 January 2025 (UTC)[reply]
The banners signal to a reader that there's something wrong with a page - in the case of an AfD there may well not be. -- D'n'B-📞 -- 06:30, 8 January 2025 (UTC)[reply]
There's often a concern, and all relisted nominations seem to have reason to debate that concern, whether because someone registered an objection or the article was already nominated in the past. Aaron Liu (talk) 12:25, 8 January 2025 (UTC)[reply]
We already have WP:NOQUORUM which says that if an AfD nomination has minimal participation and meets the criteria for WP:PROD, then the closing admin should treat it like an expired PROD and do a soft deletion. I remember when this rule was first added, admins did try to respect it. I haven't been looking at AfD much lately—have we reverted back to relisting discussions? Mz7 (talk) 08:10, 7 January 2025 (UTC)[reply]
From what I saw when I was active there in November, PROD-like closures based on minimal participation were quite common. Aaron Liu (talk) 22:47, 7 January 2025 (UTC)[reply]
Based on recent samples, I think somewhere over a quarter of AfD listings are relistings. (6 Jan - 37 / 144, 5 Jan - 35 / 83, 4 Jan - 36 / 111, 3 Jan - 27 / 108). -- D'n'B-📞 -- 06:43, 8 January 2025 (UTC)[reply]
Those relisted have more than minimal participation in the soft deletion sense. Aaron Liu (talk) 12:22, 8 January 2025 (UTC)[reply]
So: more participation than allows for soft deletion, but not enough to reach consensus, then. -- D'n'B-📞 -- 02:53, 11 January 2025 (UTC)[reply]
yes. IMO that means they have reason for discussion and debate. Aaron Liu (talk) 23:31, 11 January 2025 (UTC)[reply]
Okay, and I'm talking about encouraging that discussion to actually happen rather than fizzle out - so we're on the same page here? -- D'n'B-📞 -- 08:58, 12 January 2025 (UTC)[reply]
And that's why there's a banner on the article. Aaron Liu (talk) 16:35, 12 January 2025 (UTC)[reply]
In my experience relisting often does lead to more comments on the AFD, in practice. So the system works, mostly -- as long as the nominator doesn't have to stick around for the whole time, I don't think there's a problem. And if the page is well-frequented enough for the banner to be a problem, the AFD will probably be relatively well-attended. Mrfoogles (talk) 20:40, 23 January 2025 (UTC)[reply]

Better methods than IP blocks and rangeblocks for completely stopping rampant recurring vandals

So, I intend for this thread to be about the discussion of various theoretical methods other than IP blocks / rangeblocks that could be used to mitigate a persistent vandal highly effectively while causing little to no collateral damage.

Some background

Wikipedia was founded in 2001, a time when the great majority of residential IP addresses were static, due to the much smaller number of internet users back then. IP blocks probably made a lot of sense at that time because of that fact - you couldn't just reboot your modem to obtain a new IP address and keep editing, and cell phones had essentially no usable web browsing capability.

Today, the only type of tool used to stop anonymous vandals and disruptors, despite dynamic and shared IP addresses being very common, is still the same old IP address block and range block. While IP blocks are effective at stopping the "casual" or "one-off" type of vandal from editing again, when it comes to the more dedicated disruptors and LTAs, IP blocks simply don't seem to hinder them at all, because their IP addresses are so highly dynamic. Okay, but range blocks exist, right? Well, unfortunately not all IP address allotment sizes are the same, and they vary a lot from ISP to ISP - some ISPs just seem to put literally all their customers on one gigantic (i.e. /16 or bigger for IPv4, /32 or bigger for IPv6) subdivision, making it straight up impossible to put a complete stop to an LTA vandal without also stopping thousands upon thousands of innocent other people from being able to edit.
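The collateral-damage arithmetic behind those prefix sizes is easy to check; for instance, Python's standard ipaddress module can count the addresses a range block covers (the example ranges below are standard documentation ranges, not real blocks):

```python
# How many individual addresses a CIDR range block covers.
import ipaddress

def addresses_in(cidr: str) -> int:
    """Number of addresses in a CIDR block."""
    return ipaddress.ip_network(cidr).num_addresses

# An IPv4 /16 covers 65,536 addresses, and an IPv6 /32 covers 2**96
# (roughly 7.9e28) addresses - far more than one vandal.
v4_block = addresses_in("10.0.0.0/16")
v6_block = addresses_in("2001:db8::/32")
```

Each halving of the prefix length doubles the number of addresses caught, which is why a block wide enough to contain a vandal on such an ISP inevitably catches enormous numbers of bystanders.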

I've long had thoughts about what the Wikimedia team could implement to halt long-term abusers more accurately and effectively. But now really feels like the time we could use a better method to stop LTAs: there are just so many of them today, and so much admin time and effort is spent stopping them, only for them to come back again and again, because pretty much the only way to stop them is to block an entire ISP from editing Wikipedia.

The first thing that might come to mind, and probably the most controversial method, is disabling anonymous editing entirely and making it so that only registered editors can edit English Wikipedia. Someone pointed out to me before that the Portuguese Wikipedia is a registration-only wiki. I tried it out for myself, and indeed, when you click the edit button while not logged in, you are brought to an account login page. I'm guessing ENwiki will never become like this, because it would eliminate a large and thriving culture of "casual" editors who don't want to register an account and simply want to fix a typo, update a table's data, or add a small sentence. It's probably not 100% effective either: a registered-only wiki still wouldn't stop someone from creating a bunch of throwaway accounts to keep vandalising, and account-creation blocks on IP addresses could still be dodged by, you know, the modem power-plug dance or good ol' proxies/VPNs.

I've noticed some other language wikis, like the German Wikipedia, have "pending changes"-type protection enabled on pretty much every single page. I imagine this won't work on the English Wikipedia because of its comparatively high volume of edits from anonymous editors: it would overload the pending-changes review queue, and there will never be enough active reviewers to keep up with the volume of edits.

Now here are some of my own thoughts, which I don't think I've seen anyone discuss here on Wikipedia before. The first is hardware ID (HWID) bans, or "device bans". The reason popular free-to-play video games like League of Legends, Overwatch 2 and Counter-Strike 2 aren't overrun with non-stop cheaters and abusers despite being free-to-play is that they employ anti-cheat systems that ban the serial numbers of the computer, rather than just the user or their IP address. I have heard of HWID spoofing before, but cheating isn't rampant in these games anyway, so I guess the bans are effective in some form. Besides replacing hardware, one could theoretically use a virtual machine to evade a HWID ban, but virtual machines don't provide the performance, graphics acceleration and special features a modern multiplayer video game needs. I could see virtual machines being a rather big weakness for Wikipedia HWID bans, though: a web browser doesn't need a powerful dedicated video card or any of those special features, and browsers run easily in virtualised environments. But I'd guess not many LTAs are technologically competent enough to do that, and even if they were, spinning up a new VM is significantly slower than switching countries in a VPN.

The second, and probably the craziest, is employing some form of mandatory personal ID system, where even if you're only going to edit anonymously, you would be forced to enter a social security number, passport number or other ID number that is completely unique to you in order to edit. In South Korea, some gaming companies like Blizzard make you enter an SSN when signing up for an account, which makes it virtually impossible for a person to go to an internet cafe ("PC bang"), make a bunch of throwaway accounts, and jump from computer to computer as each account/device gets banned, to keep on cheating (see PC bang § Industry impact). One could theoretically use the IDs of family members and friends after being "ID banned", but there are only so many other people's IDs one can obtain - nowhere near the order of magnitude of the number of available IP addresses on a large subnet or VPN. I'm guessing this method isn't feasible for English Wikipedia either, as it completely goes against the simple, "open" and "anonymous" nature of Wikipedia, where not only can you edit anonymously without entering any personal details, but even when signing up for an account you don't have to enter an email address, just a password.

A third theoretical method: what if ISPs' customer ID numbers were visible to Wikimedia, so that Wikimedia could ban a specific ISP customer, making them completely unable to edit Wikipedia even if they jump to a different IP address or subnet on that ISP? Or maybe the reverse, where the ISP itself bans the customer from accessing Wikipedia after enough abuse? Perhaps ISPs need to wake up and implement such a site-level blocking policy.

Here's a related "side question": how come other popular online services like Discord, Facebook and Reddit aren't overly infested with people who spam, attack, or otherwise make malicious posts every day? Could Wikimedia implement whatever methods these services use to stop potential "long-term abusers"? — AP 499D25 (talk) 13:29, 12 January 2025 (UTC)[reply]

I just thought of yet another theoretical solution: AI has gotten good enough to write stories and poems, analyse a 1000-page book, make songs, realistic pictures, and more. Wikipedia already uses AI (albeit a rather primitive and simple one) in the famous anti-vandal bot User:ClueBot NG. What if we deployed an edit filter based on the latest and greatest AI model, to filter out edits based on past vandalism/disruption patterns? — AP 499D25 (talk) 13:37, 12 January 2025 (UTC)[reply]
I'll preface this by saying that I have quite a few problems with this idea (although I may be biased because I'm strongly opposed to the direction that modern AI is going); but I'd like to hear why and how you think this would work in more detail. For instance, would the AI filter just block edits outright? Would they be flagged like with WP:ORES? What mechanisms would the hypothetical AI use to detect LTA? How would we reduce false positives? And so on. Thanks, /home/gracen/ (they/them) 17:24, 13 January 2025 (UTC)[reply]
The AI idea I have in mind is a rather "mild" system that only acts on edits matching past patterns of disruption. Take, for example, MAB's posts. They are quite easily recognisable from a distance, even with the source-code obscuring that makes it impossible for traditional edit filters to detect them. Maybe an AI could perform OCR on the rendered text and filter on that?
The AI would not filter out new types of vandalism, or disruptive edits that it isn't "familiar" with. There would be an "input text file" where admins can add examples of LTA disruption, and the AI would then watch for edits that closely resemble those examples. It would not look for, or revert, edits that aren't close to those samples. That way I think false positives would be minimised a lot, and of course there would be a system for reporting false positives, much like WP:EFFP. — AP 499D25 (talk) 22:44, 13 January 2025 (UTC)[reply]
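As a rough sketch of the "only flag edits resembling admin-supplied examples" idea, here is what the matching step could look like using plain string similarity from Python's standard library rather than any actual AI model. The sample texts, the function name, and the 0.8 threshold are all made up for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical admin-curated samples of past LTA disruption
# (the "input text file" described above)
LTA_SAMPLES = [
    "example of a known long-term abuser's boilerplate rant",
    "another recurring vandalism pattern pasted across pages",
]

def looks_like_lta(edit_text: str, threshold: float = 0.8) -> bool:
    """Flag an edit only if it closely resembles a known sample,
    so novel (possibly constructive) edits are never touched."""
    normalized = " ".join(edit_text.lower().split())
    return any(
        SequenceMatcher(None, normalized,
                        " ".join(sample.lower().split())).ratio() >= threshold
        for sample in LTA_SAMPLES
    )

print(looks_like_lta("Example of a known long-term abuser's boilerplate rant!"))  # True
print(looks_like_lta("Fixed a typo in the infobox"))  # False
```

A real deployment would need something far more robust than edit-distance similarity (obfuscated text defeats it easily), but the shape of the system - match against curated examples, leave everything else alone - would be the same.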
Ah, thanks! I'm immediately hesitant whenever I hear the word "AI" because of the actions of corporations like OpenAI, among others. However, given what you've just said, I actually think this might be an interesting idea to pursue. I'm relatively new to WP and I've never looked at WP:SPI, so I'd rather leave this to more experienced editors to discuss, but this does seem like a good and ethical application of neural networks and is within their capabilities. /home/gracen/ (they/them) 16:16, 14 January 2025 (UTC)[reply]
AI techniques have been used here for about 15 years already. See Artificial intelligence in Wikimedia projects and ClueBot. Andrew🐉(talk) 20:53, 25 January 2025 (UTC)[reply]

The second, and probably the craziest, is employing some form of mandatory personal ID system, where even if you're only going to edit anonymously, you would be forced to enter a social security number, passport number or other ID number that is completely unique to you in order to edit.

This means that editors will have to give up a large amount of privacy, and the vast majority of people casually editing Wikipedia aren't ready to give their passport number in order to do so. Plus, editors at risk might be afraid of their ID numbers ending in the wrong hands, which is much more worrying than "just" their IP address.

Here's a related "side question": how come other popular online services like Discord, Facebook and Reddit aren't overly infested with people who spam, attack, or otherwise make malicious posts every day?

They are, it's just that the issue is more visible on Wikipedia as the content is easy to find for all readers, but it doesn't mean platforms like Discord or Reddit aren't full of bad actors too. Chaotic Enby (talk · contribs) 13:38, 12 January 2025 (UTC)[reply]
Portuguese Wikipedia is not a registration-only wiki. They require registration for the mainspace, but not for anything else. See RecentChanges there. (I don't think they have a system similar to our Wikipedia:Edit requests. Instead, you post a request at w:pt:Wikipédia:Pedidos/Páginas protegidas, which is a type of noticeboard.) I'm concerned that restricting newbies may be killing their community. See the editor trends for the German-language Wikipedia; that's not something we really want to replicate. Since editors are not immortal, every community has to get its next generation from somewhere. We are getting fewer new accounts making their first edit each year. The number of editors who make 100+ edits per year is still pretty stable (around 20K), but the number of folks who make a first edit is down by about 30% compared to a decade ago.
WMF Legal will reject any sort of privacy invasion similar to requiring a real-world identity check for a person. A HWID ban might be legally feasible (i.e., I've never heard them say that it's already been considered and rejected). It would require amending the Privacy Policy, but that happens every now and again anyway, so that's not impossible. However, I understand that it's not very effective in practice (outside of proprietary systems, which is not what we're dealing with), and the whole project involves a significant tradeoff with privacy: Everything that's possible to track a Wikipedia vandal is something that's possible to track you for advertising purposes, or that could be subpoenaed for legal purposes. Writing a Wikipedia article (in the mainspace, to describe what it is and how it works) about that subject, or updating device fingerprint, might actually be the most useful thing you could do, if you thought that was worth pursuing. If a proposal is made along these lines, then the first thing people will do is read the Wikipedia article to find out what it says.
I understand that when Wikipedia was in its early days, a few ISPs were willing to track down abusive customers on occasion. My impression now is that basically none of them are willing to spend any staff time/expense doing this. We can e-mail their abuse@ addresses (they should all have one), but they are unlikely to do anything. A publicly visible approach on social media might work in a few cases ("Hey, @Name-of-ISP, one of your customers keeps vandalizing #Wikipedia. See <link to WP:AIV>. Why don't you stop them?"). However, if the LTA is using a VPN or similar system, then the ISP we claim they're using might be the wrong one anyway. WhatamIdoing (talk) 03:58, 13 January 2025 (UTC)[reply]
I don't know exactly what is meant by hardware ID (something like [13]?), but generally speaking most things that come under that heading require a native app rather than a web browser. Web Environment Integrity is a possible exception, but it was abandoned. Bawolff (talk) 00:13, 14 January 2025 (UTC)[reply]
I was thinking that it might be something like a MAC address (for which we had MAC spoofing). WhatamIdoing (talk) 08:00, 21 January 2025 (UTC)[reply]
We do not have access to the MAC addresses when a user is accessing from a web browser. For mobile apps you generally need special permissions to access it, and I suspect our app would be rejected from the app store if we tried. Bawolff (talk) 13:04, 24 January 2025 (UTC)[reply]
@AP 499D25
Web browsers (Chrome, Edge, Firefox) do not allow a site to access your HWID information. That would be a huge invasion of privacy.
Submitting IDs, SSNs, etc. is a massive invasion of privacy as well, and would make people not want to use Wikipedia. Why submit your ID every time you start a session on Wikipedia? Not to mention it's very inconvenient. It's also ineffective, as people can use fake IDs - unless you want to verify everything, which would cost millions to employ recognition software, query the databases, wait for a response, and then authorize editing. Manual checking is even worse.
Advanced artificial intelligence to scan Wikipedia pages for vandalism, trained on previous vandalism incidents, could work, but it would be very inefficient. The site is approaching 7 million articles; to be lenient, let's say only pages with no protection or auto-confirmed protection are scanned. That would still be the majority of articles, and it would cost billions yearly to constantly check each page for vandalism. Abuse filters already cover a lot of common vandalism (replacing words with swear words, blanking, spamming the article with letters, etc.), and volunteers check pages for vandalism and revert it.
Registration-only editing is not a good idea. Trolls that are even mildly dedicated only need a couple of minutes to sign up and vandalize the page again.
Browsers will not hand over MAC address info.
Getting ISPs to block a user just because they vandalized a page is only going to cause controversy. It would be a huge invasion of privacy - "why is Wikipedia reaching beyond their site, attacking me, and invading my privacy?" would likely be the response. Also, VPNs are a thing, and ISPs don't want to waste money tracking down a petty vandal.
In conclusion, the best defense against vandalism is the one we have right now: applying higher protection levels whenever vandalism occurs. SimpleSubCubicGraph (talk) 04:11, 25 January 2025 (UTC)[reply]
Even with all that in mind, I still think the least we could do is implement an OCR-based edit filter that checks the rendered text of an edit rather than matching its source code against regular expressions. Some of the non-stop vandalism that happens on this site every day uses strange text or Unicode characters to circumvent what a traditional regex-based edit filter can do. I'm not sure if you have seen the LTA called 'MAB', but they are the vandal responsible for making just about every centralised discussion page and noticeboard on Wikipedia semi-protected. Edit filters simply don't seem able to stop them.
"it would cost billions yearly to constantly check each page for vandalism" - why would it cost billions of dollars for an AI to scan every edit? — AP 499D25 (talk) 04:23, 25 January 2025 (UTC)[reply]
Part of what motivated me to make this VPIL post is this:
As someone who does recent changes patrolling from time to time and spends quite a bit of wiki-time fighting LTAs and sockpuppets, I am getting quite tired of vandal-fighting in general - the more I do it, the more it feels like I'm a "man fighting a machine" rather than someone actually stopping another person's destructive actions. In this present-day world of highly dynamic IP addresses, it's become very difficult to fight vandals who edit using IPs: block a single IP address and they'll often come back and vandalise from another, sometimes within hours or even mere minutes.
I already know rangeblocks exist as a further solution, and I make rangeblock calculations regularly when reporting certain IP-hopping editors, but you probably already know their issue - accuracy - both in how well they actually stop the malicious editor and in how many constructive editors in the range are affected. The accuracy of rangeblocks varies wildly from ISP to ISP. When they're actually feasible (i.e. the person's various IPs all fall in a range with few to no legitimate editors tipping the scale) they work great, but when they're impracticable due to the high potential for collateral damage, it becomes a huge pain in the rear, as you then have yet another IP range whose contributions you must monitor like a watchlist.
As for page protections, they are practical when the person is only attacking one page or a small number of pages. But what if they regularly disrupt dozens or hundreds of different pages every day? I've also never really been a big fan of page protection in general - semi-protection shuts out pretty much every newbie and 'casual' editor, and those groups make up a significant percentage of the Wikipedia editor base. I remember seeing a statistic showing the number of new editors joining Wikipedia has dropped significantly over the last 20 years; protection certainly doesn't help with that. Excluding these groups of editors also further contributes to the already bad systemic bias we have on Wikipedia. Furthermore, have you not seen how pages like the Teahouse, Help desk, and the various admin noticeboards (e.g. WP:AN) are protected seemingly every day? All caused by one person hopping from proxy to proxy and circumventing edit filters with special characters. If that wasn't bad enough, take a look at their talk pages. All in all: page protection comes with its own "collateral damage", much like shared IP range blocks. — AP 499D25 (talk) 04:52, 25 January 2025 (UTC)[reply]
The solution to that is probably better character normalization for the regexes. Not OCR. OCR would probably be easier to trick than the current system. Bawolff (talk) 08:05, 26 January 2025 (UTC)[reply]
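A minimal sketch of what such character normalization could look like, using Python's standard unicodedata module: NFKD folds fullwidth and accented lookalikes back toward ASCII before the regex runs, although it would not catch cross-script confusables (e.g. Cyrillic "а"), which would need an additional mapping table:

```python
import re
import unicodedata

def normalize_for_filter(text: str) -> str:
    """Fold Unicode lookalikes toward plain ASCII before regex matching.
    NFKD decomposes compatibility characters (fullwidth, circled, etc.);
    stripping combining marks then removes diacritics."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

# Fullwidth and accented obfuscation of a filtered word
obfuscated = "ＶÁNDAＬÏSＭ"
pattern = re.compile(r"vandalism")
print(bool(pattern.search(normalize_for_filter(obfuscated))))  # True
```

The existing regex patterns stay unchanged; only the input text is normalized first, which is much cheaper and harder to trick than running OCR on rendered output.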

An interesting idea I saw on the internet (which is probably not viable here, but interesting nonetheless) is https://github.com/zk-passport/openpassport, which the blockchain people have been working on. Blockchain projects have a similar problem, in that they worry about people having sockpuppets. They've devised a system where you can use your passport to prove that you don't have any sockpuppets without revealing any private data from the passport. It's hard to imagine that gaining traction here, but it's one of the first genuinely new ideas to solve the sockpuppet problem that I have heard in a long time. Bawolff (talk) 13:48, 26 January 2025 (UTC)[reply]

Give patrollers the suppressredirect right?

As part of New Page Patrol, a lot of articles are draftified, which is done by moving them to the Draft: or User: namespace. The problem is that without page mover rights, patrollers are forced to leave redirects behind, which are always deleted under speedy deletion criterion R2. Giving patrollers the suppressredirect right would make the process easier and reduce the workload for admins. What do you think? '''[[User:CanonNi]]''' (talkcontribs) 11:02, 13 January 2025 (UTC)[reply]

Draftifying is happening far too much. But the idea has merit, as then the last log entry will say the page was moved, rather than a redirect deleted. Graeme Bartlett (talk) 11:11, 13 January 2025 (UTC)[reply]
Note: This has been proposed before. See Wikipedia:Village pump (proposals)/Archive 203 § Give NPR additional rights? JJPMaster (she/they) 14:55, 13 January 2025 (UTC)[reply]
The other option would be to not have it automatically given, but to make it easy to grant to new page reviewers frequently doing draftifications, and encourage them to apply. Chaotic Enby (talk · contribs) 15:36, 13 January 2025 (UTC)[reply]
I don't think this is a good idea. Suppressing the redirect right away (whether you're an admin or not) makes it harder for people to find the page they were editing. WhatamIdoing (talk) 18:52, 13 January 2025 (UTC)[reply]
Opening up the page will show the log entry that the page was moved (allowing people to easily find it). Current policy does not place a time limit on when to delete pages that qualify for WP:R2 (beyond the standard wait an hour before draftifying). Once that happens, it's nominated for speedy deletion if the patroller isn't a page mover or an admin. R2s are usually dealt with immediately, so it's not like forcing people to nominate them for speedy deletion is going to accomplish much other than make their workflow slightly longer. Clovermoss🍀 (talk) 23:18, 17 January 2025 (UTC)[reply]
This is de facto already the case. It's quite easy for an NPR to become a page mover on those grounds alone. JJPMaster (she/they) 19:16, 13 January 2025 (UTC)[reply]
Reluctantly oppose not per WhatamIdoing but because the suppressredirect right has too much ancillary power for me to be comfortable bundling it in like this. * Pppery * it has begun... 18:59, 13 January 2025 (UTC)[reply]
I also oppose bundling it with anything else beyond pagemover, per both Pppery and WAID. I'm also minded to agree with Graeme Bartlett that draftifying is happening too often (but I realise that it's been a while since I looked at this in detail). Nobody should be granted the suppressredirect right without it being clear that they understand the policy on when redirects should and should not be suppressed. Thryduulf (talk) 14:21, 14 January 2025 (UTC)[reply]
I agree with JJPMaster that NPPers who qualify for the right don't have much trouble gaining it. I think each case should be examined individually, because draftifying on a frequent basis isn't required to be a new page patroller. User right requests also provide a chance to double-check that such draftifications are actually being done correctly. Clovermoss🍀 (talk) 23:25, 17 January 2025 (UTC)[reply]
I think each case should be examined individually because draftifying on a frequent basis isn't required to be a new page patroller. Fully agree. I have both NPP and pagemover but I rarely (never?) draftify, so NPP would've been a terrible reason for me to request pagemover. I am not a prolific reviewer, but it proves Clovermoss's point. Toadspike [Talk] 11:17, 26 January 2025 (UTC)[reply]
Just give reviewers with a good track record of draftifying the pagemover right. The R2 deletions can be used as evidence for the good track record. —Kusma (talk) 11:40, 26 January 2025 (UTC)[reply]

Using a Tabber for infoboxes with multiple subjects

There are many articles that cover closely related subjects, such as IPhone 16 Pro which covers both the Pro and Pro Max models, Nintendo Switch which covers the original, OLED, and Lite models, and Lockheed Martin F-35 Lightning II which covers the A, B, C, and I variants. Most of these articles use a single infobox to display specifications and information about all of the covered subjects, leading to clutter and lots of parentheticals.

I propose that a tabber, like Tabber Neue, be used to instead create distinct infobox tabs for each subject. This would allow many benefits, such as clearly separating different specifications, providing more room for unique photos of each subject, and reducing visual clutter. An example of good use of tabs is one of my personal favorite wikis, https://oldschool.runescape.wiki, which uses tabs effectively to organize the many variants of monsters, NPCs, and items. A great example is the entry for Guard, a very common NPC with many variants. It even uses nested tabs to show both the spawn location grouped by city, and the individual variants within each city. While this is an extreme example in terms of the raw number of subjects, it provides a good look at how similar subjects can be effectively organized using tabs. Using Wikipedia's system instead, it would be substantially more cluttered, with parentheticals such as: Examine: "He tries to keep order around here" (Edgeville 1, Edgeville 2, Falador (sword) 1...) If you tried to save space using citations, it becomes very opaque: Examine: "He tries to keep order around here" [1][2][7][22]...

Overall I think this would make infoboxes more easily readable and engaging. It encourages "perusing" by clicking or tapping through the tabs, as opposed to trying to figure out what applies where. DeklinCaban (talk) 18:42, 16 January 2025 (UTC)[reply]

That would be an interesting idea! To go back to your iPhone 16 Pro example, a lot of information gets repeated in both tabs – maybe there could be a way for it to only have to be added to the article in one place (even if shown in both tabs), to make the tabs easier to keep in sync? Chaotic Enby (talk · contribs) 18:46, 16 January 2025 (UTC)[reply]
There definitely is - a lot of tab implementations allow for this, for example by using "value1", "value2", etc. to specify individual tabs, and "value" to specify all tabs. DeklinCaban (talk) 14:43, 27 January 2025 (UTC)[reply]
Only if it can print and display effectively without JS. From my testing in those environments, Tabber(Neue) produces awkward line/paragraph breaks and doesn't display the header at all. $wgTabberNeueUseCodex may be promising, but at least with the examples at wmdoc:codex/latest/components/demos/tabs.html it's even worse: the tabs don't expand in the printing view at all, and the info under the other tabs is simply inaccessible on paper. Aaron Liu (talk) 20:21, 16 January 2025 (UTC)[reply]
A couple points at first blush: first, having a tabbed infobox seems like it's a usability nightmare. Secondly, it seems to be doing an end run around the overarching problem, which is that the infobox for iPhone 16 Pro is terrible. Software and tech articles are often like this (bad) where they try and cram an entire spec sheet into the infobox, and that's a failing of the infobox and the editors maintaining it. Trying to create a technical solution rather than the obvious one (just edit what's in the infobox to the most important elements) seems like a waste of everyone's time. Der Wohltemperierte Fuchs talk 20:33, 16 January 2025 (UTC)[reply]
I suspect that our users would not even realise that they could click the tabs to see other info. So it will make it harder for our readers. Alternatives are to have multiple infoboxes, but this does take up space, particularly on mobile. Another way is to use parameter indexing as in the Chembox. Parameters can have a number on the end to describe variations on related substances in the one infobox. Graeme Bartlett (talk) 20:37, 16 January 2025 (UTC)[reply]
Tabs are widely used even on amateur wikis like 90% of Fandom Wikia. I'm sure readers know how to use them. (In fact, the "Article/Talk" "Read/Edit/View history" thing on the top is a tab.) Aaron Liu (talk) 21:27, 16 January 2025 (UTC)[reply]
Judging by how few readers understand we have or ever see the talk pages, I'm not sure that's exactly a good argument. Der Wohltemperierte Fuchs talk 22:10, 16 January 2025 (UTC)[reply]
[citation needed] for that. I started out processing semi-protected edit requests and there were a ton of clueless readers' requests. Aaron Liu (talk) 00:00, 17 January 2025 (UTC)[reply]
Readers and potential editors don't know what the protection, good article, featured article, and other icons mean. I'm just one person but I'd never heard of tabs like that until I read this. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 01:35, 17 January 2025 (UTC)[reply]
Sorry. That should read "Some readers..." CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 01:37, 17 January 2025 (UTC)[reply]

dissensus as an alternative to consensus

For contentious pages, from what I can tell, there is no way on Wikipedia to come to a consensus when both camps are not making a good-faith effort, and maybe not even then. My proposal: could an expert start an alternative page for one they think is flawed, with the same protection from further editing as the original? Then there could be a competition of narratives. Iuvalclejan (talk) 19:32, 17 January 2025 (UTC)[reply]

We call those WP:POVFORKs and we try to prevent them from happening. Simonm223 (talk) 19:42, 17 January 2025 (UTC)[reply]
Honestly, the consensus system works especially well on contentious pages, even if the discussions can sometimes get heated. Having content forks everywhere would not really be preferable, as, not only would you not have a single place to link the reader to, but you would quickly end up with pages full of personal opinions or cherry-picking sources if each group was given its own place to write about its point of view. A competition of narratives could be interesting as a website concept, but it would be pretty far from an encyclopedia. Chaotic Enby (talk · contribs) 19:43, 17 January 2025 (UTC)[reply]
The competition would not be the last step. Selection of alternatives could happen by votes, with some cutoffs: if a fork does not get votes above a cutoff, it is eliminated. That would prevent a proliferation of narratives. Or you could make the selection criteria differential instead of absolute: if one narrative gets 2x (for example) more votes than another, the other is eliminated. Consensus does not work if pages become protected but the disagreement is still strong. Iuvalclejan (talk) 19:48, 17 January 2025 (UTC)[reply]
Honestly, the consensus system works especially well on contentious pages,
I'd agree, but I'd also say we don't actually use the consensus system for contentious pages in practice—the more controversial the topic, the more I notice it devolving into straight voting issue-by-issue. (Even though that's the situation where you actually need to identify a consensus that all sides can live with.) – Closed Limelike Curves (talk) 21:42, 20 January 2025 (UTC)[reply]
Interestingly, it's been theorized ([14], pg 101) that we already have a "community of dissensus" whereby contentious and poorly-supported claims are weeded out from our articles until only that which can be verified remains. signed, Rosguill talk 19:45, 17 January 2025 (UTC)[reply]
The problems I see are not due to poorly supported claims. They are due to biased reporting that is technically correct (e.g. "hostilities erupted" rather than "side A attacked side B"), or to outright omissions (e.g. the leader of one group is not mentioned because of his shady associations with Nazis, whereas the leader of the other group is mentioned many times). Iuvalclejan (talk) 20:29, 17 January 2025 (UTC)[reply]
In that case, we should stick to what sources say, rather than making multiple versions trying to please each editor. If sources mention the names of both leaders, then we should have them both in the article, rather than hiding one in a separate article. Chaotic Enby (talk · contribs) 20:36, 17 January 2025 (UTC)[reply]
So that addresses one issue, but even there, if the page is protected, you can't "mention them both". And what about a way of presenting a phenomenon that, while technically correct, is misleading by omitting important details? Iuvalclejan (talk) 20:42, 17 January 2025 (UTC)[reply]
For both cases: page protection doesn't mean that no one can propose any changes, it just means that you have to go to the talk page and discuss them with other editors (usually, to avoid someone else coming just after you and reverting it). If you feel like the discussion isn't going anywhere, we have channels for Wikipedia:Dispute resolution. Chaotic Enby (talk · contribs) 20:49, 17 January 2025 (UTC)[reply]
That said, there are special restrictions on articles related to Palestinian–Israeli conflicts, and you shouldn't attempt to edit them or discuss them until you have made 500+ edits elsewhere. This will give you a chance to learn our processes, jargon, and rules in a less fraught context. WhatamIdoing (talk) 08:13, 21 January 2025 (UTC)[reply]
This might be a good idea for social media, but this is an encyclopedia. Phil Bridger (talk) 20:45, 17 January 2025 (UTC)[reply]
Even more important then, so as not to deceive Iuvalclejan (talk) 20:48, 17 January 2025 (UTC)[reply]
Our existing POV-fork articles are effectively a trial of this idea, and demonstrate that it doesn't work well in practice. People create forks when they feel the original article is being gate-kept by someone with ownership issues who's pushing a particular POV. Having two articles is then very confusing for our readers, and there's a real risk that they will find only one or the other, thereby missing half the story. But worse, the proponents of each article often feel that the other article is still misrepresenting the subject, so they inevitably want it deleted or edited to reflect their viewpoint - the conflict remains! At the other extreme, sometimes the proponents of each article seem to go into denial about the other article (or maybe just don't want to draw readers to it), so they avoid all cross-referencing between the articles - making it still more misleading for the reader. If there is disagreement, it's better to get it all in one article, with sources and discussion, so the reader sees the whole picture - even if it means some fairly heated stuff in the talk page. Elemimele (talk) 17:19, 27 January 2025 (UTC)[reply]

More levels of protection and user levels

I think the jump from 4 days and 10 edits to 30 days and 500 edits is far too extreme and takes a really long time, when there are many editors with just 100 or 200 edits (including me) who are not vandals, do not have strong opinions on the usual controversial topics, and just want to edit. Which is why I want the possibility for more user levels to be created. For example, one for 200 edits and 15 days that can be applied whenever some vandalism happens; in that case normally ECP would be applied, but I think that is far too extreme, and a more moderate protection would be more useful. Vandals dedicated enough to make 200 edits and wait 30 days will be dedicated enough to get past Extended Confirmed Protection anyway. Though I want to see what the community thinks of sliding another protection level in between autoconfirmed and ECP. Two levels should suffice to bridge the gap between 10 edits and 500 edits, and would allow low-edit-count editors to edit while still blocking out vandalism. This is surprisingly not a perennial proposal. SimpleSubCubicGraph (talk) 02:19, 21 January 2025 (UTC)[reply]

It's more that editors who have 500/30 generally have been in enough situations to hold Wikipedian knowledge that's in-depth enough. That doesn't necessarily hold true for those you've proposed. Time is part of the intention. Aaron Liu (talk) 02:28, 21 January 2025 (UTC)[reply]
"possibility for more user levels to be created" – I had thought about this before and think more levels (or at least an additional level with tweaks to the current ones) would be a good idea. Something along the lines of:
1. WP:SEMI - 7 days / 15 edits
2. WP:ECP - 30 days / 300 edits
3. WP:??? - 6 months / 750 edits (reserved for pages with rampant sockpuppetry problems, such as those in the WP:PIA topic area). Some1 (talk) 02:50, 21 January 2025 (UTC)[reply]
@Aaron Liu Yes, that may be a part of the intention, but I feel like there are editors with under 500 edits who can make good enough edits not to get them instantly reverted. Also, protection is there mainly for vandalism; if we lived in a perfect society, anyone could edit Wikipedia pages without needing accounts and making tons of edits.
@Some1 I think 180/750 would be far too harsh, not even the most divisive topics and controversial issues get vandalized often with ECP.
My idea generally was keeping ECP the same but inserting another type of protection level in-between for mildly controversial topics and pages that are vandalized infrequently. SimpleSubCubicGraph (talk) 03:25, 21 January 2025 (UTC)[reply]
Can you give some specific examples of "controversial topics and pages that are vandalized infrequently"? Is there a particular article you want to edit but are unable to? Some1 (talk) 03:29, 21 January 2025 (UTC)[reply]
SimpleSubCubicGraph, if this is regarding Skibidi Toilet (per the comments below), then under my proposed ECP level requirements (30 day/300 edits), you would be able to edit that article. Some1 (talk) 12:35, 21 January 2025 (UTC)[reply]
There is not too much utility to creating a variety of new levels, as it generally gets clunky trying to define everything, and it makes the system less easy to grasp. What differentiates 100 edits from 200 from 300? ECP is not usually for vandalism, it is deployed for topics that receive particular levels of non-vandalistic (WP:VAND is very narrow) disruption. These are topics where experience is usually quite helpful, where editors who just want to edit are more likely to get in trouble. However, it is also a very narrow range of topics, apparently only affecting 3,067 articles at the moment, or less than 0.05% of articles. CMD (talk) 03:39, 21 January 2025 (UTC)[reply]
Isn't EC protection just for contentious topics? I didn't think we were using it just to protect against common or garden vandalism. Espresso Addict (talk) 05:59, 21 January 2025 (UTC)[reply]
@Espresso Addict even though there are 3,000 articles that have ECP protection, many articles are often upgraded to ECP in light of infrequent vandalism (once a day, a few times a week, etc.). I know Skibidi Toilet was upgraded to ECP when the page was vandalized a few times. It was quite hilarious, but it demonstrates a wider problem of liberally putting ECP on everything that gets even remotely vandalized. SimpleSubCubicGraph (talk) 07:06, 21 January 2025 (UTC)[reply]
Now, are there that many people who care about Skibidi Toilet? No. But ECP is also liberally applied to other wiki pages that are infrequently vandalized, and editors can be there, wanting to edit, but they have to wait until an admin removes the protection, which can vary depending on how active they are: it can be a day, a week, or up to a month if you are really unlucky and the article is not that well known/significant. Which is why another type of protection could allow these editors to edit their favorite subject while still preventing vandalism. There are very few ECP users, and that is counting alternate accounts, so this change would affect a lot of how Wikipedia works. SimpleSubCubicGraph (talk) 07:09, 21 January 2025 (UTC)[reply]
ECP is not liberally applied. Admins are usually very cautious about applying it, and if there is a particular case where you think it is no longer needed, raise it and it will very likely be looked at. CMD (talk) 08:11, 21 January 2025 (UTC)[reply]
It wasn't "infrequent" vandalism. Just look at the page history. Though I would use PC protection instead. Aaron Liu (talk) 15:40, 21 January 2025 (UTC)[reply]
500 edits is also when you earn access to Wikipedia:The Wikipedia Library.
Editors who make it to about ~300 edits without getting blocked or banned usually stick around (and usually continue not getting blocked or banned). So in that sense, we could reduce it to 300/30 without making much of a difference, or even making the timespan a bigger component (e.g., 300 edits + 90 days). But it's also true that if you just really want to get 500, then you could sit down with Special:RecentChanges and get the rest of your edits in a couple of hours. You could also sort out a couple of grammar problems. Search, e.g., on "diffuse the conflict": diffuse means to spread the conflict around; it should say defuse (remove the fuse from the explosive) instead. I cleaned up a bunch of these a while ago, but there will be more. You could do this for anything in the List of commonly misused English words (so long as you are absolutely certain that you understand how to use the misused words!). WhatamIdoing (talk) 08:36, 21 January 2025 (UTC)[reply]
[to SimpleSubCubicGraph] Sorry, I must have missed the various RfCs that extended the use outside contentious topics. SimpleSubCubicGraph, if you find pages that could safely be reduced in protection level, and that don't fall within contentious topics, then you should ask the protecting admin on their talk page to reduce the level. But if you have an urge to edit Skibidi Toilet, then the simplest thing to do is make small improvements to mainspace for a couple of hundred edits. If you don't have a topic you are interested in that isn't protected, just hit random article a few times, or do a wikilink random walk until you find something you can improve. Espresso Addict (talk) 08:47, 21 January 2025 (UTC)[reply]
For anyone who wants to run up their edit count: Search for "it can be argued that", and replace them with more concise words, like "may" ("It can be argued that coffee tastes good" → "Coffee may taste good"). WhatamIdoing (talk) 00:27, 22 January 2025 (UTC)[reply]
I'm one of those affected by it, and I'm all for an open encyclopedia, but I'd honestly say that the 500/30 ECP makes sense. There is a great depth to this project, from the philosophy (e.g. standards for inclusion, notability, reliability) and practice (a million gray areas in PAG) of building an encyclopedia, to the philosophy (e.g. idea darwinism and convergence to a good result) and practice (the heavy bureaucracy and politics) of running a productive wiki project. If an editor comes in unfamiliar with these ideas, encountering and absorbing them organically takes time. spintheer (talk) 06:20, 25 January 2025 (UTC)[reply]
This thread seems dead. So I am reviving it. SimpleSubCubicGraph (talk) 00:04, 31 January 2025 (UTC)[reply]
It seems to me that there is consensus that the existing levels are enough. Aaron Liu (talk) 00:58, 31 January 2025 (UTC)[reply]

Disambiguation

I don't know if this is technically feasible or not (advice sought) but would it be possible to create a shortcut for disambiguation? Something like [[Joseph Smith (general)!]] where the bang causes it to display as Joseph Smith rather than having to write [[Joseph Smith (general)|Joseph Smith]] which can be error prone. (I am not attached to the form in the example, it is the functionality I am interested in.) Hawkeye7 (discuss) 21:33, 21 January 2025 (UTC)[reply]

Isn't that how Wikipedia:Pipe trick works? Schazjmd (talk) 21:46, 21 January 2025 (UTC)[reply]
Yes. Phil Bridger (talk) 21:52, 21 January 2025 (UTC)[reply]
I did not know that! I was aware of the pipe trick suppressing the namespaces but not the disambiguation. Thanks for that! Hawkeye7 (discuss) 23:16, 21 January 2025 (UTC)[reply]
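A quick illustration for anyone else who was unaware of it: leaving the pipe empty makes the software expand the link at save time, stripping the parenthetical disambiguator (or the namespace prefix):

```wikitext
[[Joseph Smith (general)|]]  <!-- saved as [[Joseph Smith (general)|Joseph Smith]] -->
[[Wikipedia:Pipe trick|]]    <!-- saved as [[Wikipedia:Pipe trick|Pipe trick]] -->
```

Note that the expansion happens when the page is saved, so the stored wikitext contains the full piped form.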

Editors' using multiple pronouns

I have an idea: What if there were a more concrete way to let users set their pronouns, e.g. for users who use multiple pronouns like me? This would go beyond what is in WP prefs (feminine, masculine, neuter terms) and allow users to specifically set their pronouns, and allow for multiple (e.g. she/they, he/xem, he/they/she, etc.). This data could be used by user scripts and could be displayed on a user's user page/talk page. thetechie@enwiki (she/they | talk) 03:48, 24 January 2025 (UTC)[reply]

I suspect that the best way to achieve this would be to improve MediaWiki's pronoun options. phab:T61643 is possibly the relevant task for that. Thryduulf (talk) 11:28, 24 January 2025 (UTC)[reply]
That's probably not the best task for that. The existing preference is to let MediaWiki know things like "if the software needs to reference you to someone who has their user interface set to German, should it refer to you with 'Benutzer' or 'Benutzerin' or possibly 'Benutzer/in'?". To avoid making things unnecessarily complicated for translators, adding more values to that preference should only be done if it would further that goal. Past discussions like phab:T61643 have largely been a huge mess because some people can't seem to accept that that specifically is the use case rather than signalling gender identity.
If you want to have a way to specify your pronouns and/or gender identity as a preference (rather than just using a userbox or other wikitext on your user page), you'd probably do better to ask for one or more new preferences to hold that information, explicitly separate from the existing "How should MediaWiki refer to you?" preference.
Also, keep in mind that if you're really hoping scripts might actually use these (versus just display the preference to a human), just saying "xe/xem" is probably insufficient. You'd also need to specify for the script whether the person uses "xyr" or "xir" for the possessive, and whether "xyrs"/"xirs" is used (compare "his" versus "their"/"theirs"), and whether they use "xyrself"/"xirself" or "xemself", and possibly whether it's morphosyntactically plural (compare "they are" rather than "they is"). And that's just for English, I don't know what considerations there might be if you want it to be usable for other languages too. Anomie 13:29, 24 January 2025 (UTC)[reply]
I personally use they/them for everyone here, as it is how we may informally show respect to a person of any sex in India's English (not Indian English). ExclusiveEditor 🔔 Ping Me! 17:16, 24 January 2025 (UTC)[reply]
Since there may be no limit on what users may desire, and the main use of pronouns is by other users, the best way is just to explain the wished-for pronoun use on the user page. My POV is that it is up to the language user, not the subject of discussion, but that we should be kind and respectful to others. So "they" is less respectful than honouring someone's wish to be called "she" or "he". Inventing new words for pronouns is disrespectful to everyone. And I think I should not tell others what pronouns I desire. Graeme Bartlett (talk) 04:32, 25 January 2025 (UTC)[reply]
If someone has expressed a preference that people use certain pronouns when referring to them, it is generally regarded as respectful to use those pronouns when referring to them. It would certainly be useful to make it easy to discover that someone has expressed such a preference without having to remember to visit their userpage and hunting for the existence of a declaration that may be in almost any format in almost any location on the page. {{they}} and related templates (e.g. {{they|TheTechie}} → she) has some of this functionality but it is limited to just three options. Thryduulf (talk) 05:01, 25 January 2025 (UTC)[reply]
If a user has explicitly mentioned their pronouns in their signature (e.g. 'they/them'), then those should be used when replying to them wherever possible. But for most users, who do not put pronouns in their signature, finding out each time what pronouns they prefer from their user page could be tedious, especially when replying to a group of users. Although some may not find it that difficult, there surely are others who will. Additionally, different cultures might see pronouns differently. Using they/them for a person may be disrespectful in one place and a friendly greeting in another. Creating complex rules for these may itself be unfriendly to a few.
I however think that this task could be easier for automated messages, and a custom character-limited pronoun field is a good idea. Where the grammar is unclear, as Anomie said, we can use gender-neutral pronouns, but then some may see this inconsistency as unfriendly in itself. ExclusiveEditor 🔔 Ping Me! 06:13, 25 January 2025 (UTC)[reply]
I wonder whether we could/should create a more flexible pronoun infrastructure. If every user who cares has a /pronoun subpage of their userpage, we could have {{they}} or a variant check that page and output the correct pronoun. Basically {{they|Kusma}} could transclude User:Kusma/pronoun or, if that page does not exist, just default to its current behaviour. —Kusma (talk) 20:01, 25 January 2025 (UTC)[reply]
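A rough, untested sketch of how that subpage fallback could look inside such a template (the subpage name and fallback behaviour here are assumptions, not an existing implementation), using the #ifexist parser function:

```wikitext
{{#ifexist: User:{{{1|}}}/pronoun
 | {{User:{{{1|}}}/pronoun}} <!-- transclude the user's declared pronoun -->
 | they                      <!-- no subpage: fall back to the current default -->
}}
```

One caveat: #ifexist is an "expensive" parser function with a per-page limit, so this would need care on heavily transcluded discussion pages.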

Ban mainstream media

Not going anywhere. Discuss individual sources at WP:RSN. —Kusma (talk) 20:33, 25 January 2025 (UTC)[reply]
The following discussion has been closed. Please do not modify it.

It's obvious and open that all mainstream media is bought out by billionaires who support the Democrat party. This may not seem like a big issue at first, but there's this tiny thing called "ESG" and "DEI" that is very common in workplaces. Due to this, news media will lean heavily liberal and broadcast left-wing politics. This is further amplified by the fact that these news sources are owned by billionaires who lean to the Democrat party and therefore force left-wing politics to be the "highlight" of the media when they talk about politics. I think we should ban mainstream media from being used as citations as they are not reliable. Only nonpartisan, moderate news sources that are not funded by big corporations or billionaires or governments should be used as news sources. SimpleSubCubicGraph (talk) 03:55, 25 January 2025 (UTC)[reply]

You have a USA-centric view there, and there are other POVs even in the USA, such as FOX. Anyway, we just have to take biases in whatever media into account. Non-mainstream media will also have biases, but they may not be so well known. Graeme Bartlett (talk) 04:22, 25 January 2025 (UTC)[reply]
@Graeme Bartlett Fox News is one conservative outlet out of all the other mainstream ones, which all lean liberal. There need to be more reliable sources, as the billionaires openly control the narrative. Look what they did to all of the Republican candidates: slandered them, defamed them, made up lies, compared them to Nazis, and more. All of those are fake and a demonstration of the dirty tactics billionaires will use to further their own interests. SimpleSubCubicGraph (talk) 17:54, 25 January 2025 (UTC)[reply]
"nonpartisan,... that are not funded by big corporations or billionaires or governments" sources may themselves have their own biases. In multiple parts of the world, you see such small media being biased toward their local area, portraying it better compared to other areas, where their audience doesn't lie (something like audience-centered bias), and because the writers work locally they are not able to provide strong coverage beyond their own locality without seeking information from the large media outlets you seek to ban. There are so many biases that if we deselect all of them, we may have only a handful of news outlets left, whose future biases will directly shape Wikipedia. ExclusiveEditor 🔔 Ping Me! 06:24, 25 January 2025 (UTC)[reply]
It's hard to even take this seriously when the immediate counterpoint to the first sentence is "Fox News". This is tosh EvergreenFir (talk) 06:34, 25 January 2025 (UTC)[reply]
Academia and intellect has a liberal bias (though only in US terms). Wikipedia is a repository of intellect. Aaron Liu (talk) 14:27, 25 January 2025 (UTC)[reply]
Adding to the other replies, it's a bit baffling to imagine that billionaires would lean more towards left-wing politics (who might not really be into them) than the average population. Chaotic Enby (talk · contribs) 14:40, 25 January 2025 (UTC)[reply]
It looks like the OP would be against anything supported by the richest person in the world. And I would point out that Rupert Murdoch is a billionaire. Hardly left-wing. Phil Bridger (talk) 15:32, 25 January 2025 (UTC)[reply]
@ExclusiveEditor But their biases are not as extreme compared to left wing liberal bought out media. Sure they may favor their own place, present it better than others. But in the grand scheme of things, they are far better than billionaires who donate to kamala and who own the media. Which one is more likely to be biased? Obviously the big news corporation. SimpleSubCubicGraph (talk) 17:51, 25 January 2025 (UTC)[reply]
Which one is more likely to be biased? Obviously the big news corporation. Not sure about your point, since Fox Corporation is also a "big news corporation" (and not one famous for its reliability, although again biased and unreliable are different things). Chaotic Enby (talk · contribs) 18:11, 25 January 2025 (UTC)[reply]
@Chaotic Enby@Phil Bridger More billionaires have donated to left wing politics than to right wing ones (republican party). From what I recall, about 70 billionaires donated to Kamala, 50 to Trump. As it just so happens, those billionaires control most of the media. SimpleSubCubicGraph (talk) 17:52, 25 January 2025 (UTC)[reply]
Why are you talking about Kamala and Trump (it was your choice to use the first name for one and the surname for the other)? Less than 5% of the world's population could vote for either of them. Phil Bridger (talk) 18:31, 25 January 2025 (UTC)[reply]
@SimpleSubCubicGraph: I think you should have started with bias in general, rather than directing it to a single side on political spectrum, especially when asking to ban any media. 𝓔xclusive𝓔ditor Ping Me🔔 18:15, 25 January 2025 (UTC)[reply]
The opposite is also possible – the Wikipedia community knows that most sources are biased, and our policy on this is that biased sources are acceptable provided that they're reliable and correctly attributed when needed. You should either show that some of the "mainstream" sources are not only biased but actually unreliable (you can take a look at Wikipedia:Reliable sources/Noticeboard), or, like said above, start a discussion to disallow biased sources in general. Chaotic Enby (talk · contribs) 18:25, 25 January 2025 (UTC)[reply]
I am not into it. 𝓔xclusive𝓔ditor Ping Me🔔 18:27, 25 January 2025 (UTC)[reply]
@ExclusiveEditor@Chaotic Enby There is proof that the media lies, slanders, and is biased. I don't know how the Wikipedia community could accept it. I've seen many articles on post-1992 US politics that are wrong, but whenever I try to do something about it I get struck down. SimpleSubCubicGraph (talk) 18:46, 25 January 2025 (UTC)[reply]
If you have actual example of fabricated stories or lies, you can put it at the talk page of the article in question, or, if it is a recurring pattern for a news source, you can post it at Wikipedia:Reliable sources/Noticeboard so it can be reevaluated. Again, bias alone is not unreliability, and multiple sources can and will often present the same facts with different spins on them without any of them outright lying. Chaotic Enby (talk · contribs) 19:45, 25 January 2025 (UTC)[reply]
This is pretty cool I guess. How about we hat this thread for total impossibility and wrong venue for AP2 drama? Folly Mox (talk) 20:27, 25 January 2025 (UTC)[reply]

WP:CRITICISM's status as an essay

I was a bit surprised last night to discover that WP:CRITICISM is "only" an essay. I see people try to follow it on a somewhat frequent basis for best practices and was under the impression that it must be a guideline. But it's not. Should it be? I've never tried to "upgrade" the status of something before and I'm assuming to some extent that would be controversial, but input would be welcome. I'm assuming some things might need to be finetuned if it does get that extra status. Clovermoss🍀 (talk) 17:31, 26 January 2025 (UTC)[reply]

Regarding the process, Wikipedia:Policies and guidelines § Life cycle describes the process of establishing consensus for guidance to be designated as a guideline or policy. Before having a request for comment discussion, it would probably be good to have a discussion reviewing its current content and establishing consensus amongst interested editors, before moving to a broader sampling of the community in an RfC. isaacl (talk) 17:46, 26 January 2025 (UTC)[reply]
That's why I came here. Clovermoss🍀 (talk) 18:05, 26 January 2025 (UTC)[reply]
Maybe put a pointer on the talk page for Wikipedia:Criticism then? The idea lab page is usually more for brainstorming than establishing consensus, but probably not a big deal if the discussion happens here or, say, the miscellaneous village pump, as long as they're pointers at the other places. isaacl (talk) 18:10, 26 January 2025 (UTC)[reply]
I don't think the talk page has that many watchers. I came here for brainstorming and wider community input. I thought that's what one should do before even attempting an RfC. It seems to fit exactly with the stated purpose of this page. Clovermoss🍀 (talk) 18:15, 26 January 2025 (UTC)[reply]
Sure; I gave my brainstorming thoughts that it would probably be good to have a discussion to do that finetuning you described to ensure that the page was a good representation of in-practice consensus, before having an RfC. A village pump is a fine place to have a discussion. I was just suggesting that it might be helpful to attract interested editors with pointers on the corresponding talk page and the miscellaneous village pump, since the idea lab typically discusses less fully-formed ideas, and so its set of page watchers might not cover enough of the desired audience. isaacl (talk) 18:24, 26 January 2025 (UTC)[reply]
I've posted on the talk page about this too now. Clovermoss🍀 (talk) 18:36, 26 January 2025 (UTC)[reply]
I think your instinct is correct; Special:PageInfo says that 9 editors who have the page on their watchlist looked at the WP: page during the last 30 days, and 10 of them looked at the talk page. That's not a lot.
A note at the Wikipedia talk:Manual of Style main talk page would also be appropriate. WhatamIdoing (talk) 19:20, 26 January 2025 (UTC)[reply]
(edit conflict) × 2. Often too much notice is taken of which word an essay, guideline or policy has at the top. How binding it is depends more on how widely it is accepted than on its formal status. Having said that, I would follow Isaacl's advice before "upgrading" anything. Phil Bridger (talk) 18:12, 26 January 2025 (UTC)[reply]
I would support a process to bring CRITICISM to at least a guideline. This might mean an initial stage to review and revise the text to make it appropriate for a guideline before bringing an RFC to make it a guideline. Masem (t) 18:42, 26 January 2025 (UTC)[reply]
It might be interesting to see whether editors actually support the content. For example, I tend to favor the approach of Wikipedia:Criticism#Integrated throughout the article, but when I have suggested that, other editors generally want to have a place where the Correct™ POV can be easily found. See, e.g., my suggestion a few months ago, and yet Talk:Cass Review#Criticism section has been recreated. WhatamIdoing (talk) 19:24, 26 January 2025 (UTC)[reply]
I don't see why, in that particular case, there are separate "reception" and "criticism" sections. Surely any criticism is part of the reception? Maybe if this was made a guideline it would help. Phil Bridger (talk) 19:41, 26 January 2025 (UTC)[reply]
"Criticism" is a type of "reception", so it doesn't seem reasonable to have them be separate, but what I really mean in that instance is that it doesn't make sense to say in the first or second section something like "It proposes changing the rules for this class of medications" and then you have to scroll through 14 other sections to get to a sentence that says "And this advocacy group thinks that changing the rules for that class of medications is a really bad idea". Those two sentences are on the same subject, so they belong together. WhatamIdoing (talk) 23:36, 27 January 2025 (UTC)[reply]
I think it would be useful as a guideline, after the community signs off on the wording. A few years ago, @HaeB: ran a query to identify BLPs with controversy sections, then I went through those to see which could be integrated or at least more appropriately titled. (It was fun, actually.) I couldn't come up with solutions for all of them, but we cleaned up quite a few. I think some readers/editors like "controversy" sections because that's where the "juiciest" content is. Schazjmd (talk) 20:39, 26 January 2025 (UTC)[reply]
Frequently, the content I see in criticism sections for businesses would be better integrated into the history sections for the company, so the events can be placed into context. As touched upon in Wikipedia:Criticism § Organizations and corporations, though, there are some cases where there is an ongoing criticism that spans across an extended period of time, and it's more easily described in a separate section that pulls together various threads.
I do think there are some editors who take any negative news, and describe it as a controversy or criticism when it's not really either. This type of info usually should be integrated into the sections for the overall history of the subject. isaacl (talk) 00:38, 27 January 2025 (UTC)[reply]
I agree. I can also see how a cohesive "counter-arguments" section might be helpful to readers, such as on an article about a theory or concept. Schazjmd (talk) 23:49, 27 January 2025 (UTC)[reply]
WP as a whole has a larger problem that editors want to include every bit of negative content they can find about a topic when documented in reliable sources, particularly for BLPs, and more particularly for certain types of BLP due to recent events. Criticism and controversy get added far faster, and with far less regard to integrating them well, than other types of content, and we really need stronger guidelines, stemming from NPOV, that not all negative content or criticism is appropriate or needs to be included, and that when it is included, it generally should be integrated into the article rather than standing alone as a separate section or the like. — Masem (t) 03:57, 27 January 2025 (UTC)[reply]
At some level, we teach them to do this. If editors, especially new editors, add positive content, then someone comes around and smacks them for being "promotional". Take a look at the history of Pickathon. An inexperienced editor tried to add some content, and got told off for, among other things, adding facts that were reported in major newspapers. How dare you say that free things are free. How dare you say that it's plastic-free. How dare you say that they offer childcare services, because "many, many" festivals – none of which anyone can find or name, and we did look – do the same. But do feel free to add anything negative you can find, because that's "balance". WhatamIdoing (talk) 23:50, 27 January 2025 (UTC)[reply]
Criticism sections should already be disallowed per WP:STRUCTURE, but for some reason there's a footnote in there saying that actually it doesn't count. They're almost always problems when it comes to NPOV, and their fiercest advocates tend to be people who want to present a negative point of view on the subject. Adding them to BLPs is even worse and in my opinion should be considered a serious policy violation. Thebiguglyalien (talk) 21:37, 26 January 2025 (UTC)[reply]
The fact that WP:STRUCTURE already points to WP:Criticism as a source for further guidance on the subject supports the idea that WP:Criticism is more weighty than a mere essay. Especially when dealing with BLP situations, there should usually be a better option than grouping and labeling content by the POV it represents. -- LWG talk 23:40, 26 January 2025 (UTC)[reply]
Related thought: the criticism section template only has 405 transclusions in mainspace, so it might be a valid option to just make a push to clean up those 405 articles and then retire/rework the template. -- LWG talk 23:46, 26 January 2025 (UTC)[reply]
I've never actually seen a criticism section with that tag so I suspect that the total amount of these sections is a much higher number. Clovermoss🍀 (talk) 23:50, 26 January 2025 (UTC)[reply]
That's fair, and also related to a conversation happening on the talk page for that template about the appropriate use of the tag. Currently that tag falls under the broad heading of "POV dispute tags", which means it is only meant to be used to mark the presence of an ongoing discussion/dispute. In the absence of an active consensus-building process, the appropriate thing to do is not to tag the article but to fix it. But an argument could be made that the presence of a badly structured criticism section is less a matter of dispute and more a matter of content quality, in which case the time and effort involved in fixing the article might merit some amount of "drive-by tagging". My general thoughts on the matter are that for topics whose controversial nature is itself a subject of comment by RSs, it may be appropriate to dedicate a section in the article to opposing viewpoints, but in that case the section should usually be given a more informative title than just "Criticism". -- LWG talk 00:22, 27 January 2025 (UTC)[reply]
New editors and IPs use it a lot, specifically WP:CSECTION, to remove criticism or controversial items from articles. Most of the time this is a COI/NPOV issue, and the criticism they tend to wrongly remove is justified by WP:DUE. Turning it into a guideline or policy, as in its current version, could just empower them more. We need to fix this for sure. --𝓔xclusive𝓔ditor Ping Me🔔 07:25, 27 January 2025 (UTC)[reply]

WikiRPG Gadget/Extension

I had this idea a good bit ago. A gadget or extension for Wikipedia that turns it into an RPG, where you can like, get XP for making good edits and stuff, or maybe even fight enemies on wikipedia articles to make it a real rpg. This is obviously a non-serious idea, and it's just an idea to make editing a bit more fun for some, but I do think it'd be cool. Discuss in the comments, I'm excited to see what y'all add!
From Rushpedia, the free stupid goofball (talk) 18:18, 28 January 2025 (UTC)[reply]

Thinking about it now, this would probably be an extension. It wouldn't fit in as a gadget, since it wouldn't be useful. From Rushpedia, the free stupid goofball (talk) 18:19, 28 January 2025 (UTC)[reply]
It sounds like you'd be interested in Wikipedia:Wikipedia is an MMORPG. WhatamIdoing (talk) 02:09, 29 January 2025 (UTC)[reply]
I don't want to encounter grinders spamming minor edits to level up quick. It would also buff trolls because they'd be treated as a proper enemy instead of something to deny and clean up after.[Humor] ABG (Talk/Report any mistakes here) 05:56, 29 January 2025 (UTC)[reply]

Ask the chatbot

Have you seen the "Ask the chatbot" feature in Britannica? Honestly I am a bit surprised that they developed something like that. I think that it's a good way to consume the encyclopedia content for quick questions. One of the most frequent uses for ChatGPT and similar tools (hi DeepSeek R1) is Q&A, and they tend to reply using our content (their models are trained partially on Wikipedia, after all), so why don't we develop our own chatbot? What do you think? Regards. emijrp (talk) 18:18, 28 January 2025 (UTC)[reply]

I'm not really a fan of AI, being an artist and all, but this could maybe help students and stuff. I kinda like this idea. From Rushpedia, the free stupid goofball (talk) 18:21, 28 January 2025 (UTC)[reply]
Considering all of the discussion above, what would you consider an acceptable error rate for answers? Donald Albury 18:48, 28 January 2025 (UTC)[reply]
Ideally 0, but that's probably impossible. I am not an expert in the field, though IMHO we could train the model excluding articles with maintenance tags, all sentences without a reference, pages written by newbies or few editors, etc. Also, adding the "I don't know" sentence to the chatbot vocabulary could be a feature; it's not bad to say it. Furthermore, more than a purely conversational chatbot that can hallucinate, I propose one which replies to your questions by pasting the relevant sentences from the articles, with minimum originality. Other features could be "please summarize this article, or all articles in this category", "tell me three writers born in France in the 17th century", "the most important Van Gogh paintings", etc. In short, an improved search engine which helps to consume the content. emijrp (talk) 19:10, 28 January 2025 (UTC)[reply]
That would mean training the model to exclude nearly all of our content. WhatamIdoing (talk) 02:10, 29 January 2025 (UTC)[reply]
To be fair, "pages written by few editors" isn't necessarily something we should exclude: GAs and FAs are usually written by a few dedicated editors, rather than in a slow incremental way (my own example).
Regarding the proposal of adding the "I don't know" sentence to the chatbot vocabulary, while the idea is certainly good, there isn't a specific "chatbot vocabulary" that can be edited: rather, that's something that has to be pushed for during training. However, I do like your proposal of one which replies to your questions pasting the relevant sentences in the articles (there's something similar that can be found in the literature, namely retrieval-augmented generation, which directly adds the relevant sentences to the prompt and answers from there). Chaotic Enby (talk · contribs) 02:17, 29 January 2025 (UTC)[reply]
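The retrieval idea mentioned above (answering by pasting relevant article sentences rather than generating free-form text) can be illustrated with a toy sketch. This is not a real retrieval-augmented system, just a hypothetical word-overlap ranker to show the "extract, don't hallucinate" principle; a production system would use embeddings and a proper index:

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_sentences(question, sentences, k=2):
    """Return up to k sentences ranked by word overlap with the question,
    dropping sentences with no overlap at all (the "I don't know" case)."""
    q = tokens(question)
    ranked = sorted(sentences, key=lambda s: len(q & tokens(s)), reverse=True)
    return [s for s in ranked[:k] if q & tokens(s)]

article = [
    "Vincent van Gogh was a Dutch Post-Impressionist painter.",
    "He created about 2,100 artworks in a decade.",
    "The Starry Night is among his best-known paintings.",
]
print(best_sentences("Which paintings is Van Gogh known for?", article, k=1))
```

An empty result here is the moment where a real chatbot would say "I don't know" instead of inventing an answer.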
I don't see any particular need why we should integrate chatbots – a rapidly changing and frequently flawed technology – into our own rapidly changing and frequently flawed encyclopedia. It'll only make matters worse. Cremastra (talk) 20:10, 29 January 2025 (UTC)[reply]
Agree with this. I did give some technical advice above, but it doesn't mean I'm sold on the proposal at all. Chaotic Enby (talk · contribs) 20:15, 29 January 2025 (UTC)[reply]

Adding a TLDR section for AFC submissions

Currently, a lot of AFC drafts get stuck in the middle of the queue due to refbombing by inexperienced editors. Would it make sense to modify the AFC process so that the new article wizard asks the user to provide three reliable sources and a blurb? The blurb and three refs would be added to the top of the article, which would then be used by AFC reviewers to assess notability (after which they can work with the submitter to move the rest of the draft to mainspace). Sohom (talk) 02:49, 29 January 2025 (UTC)[reply]

also cc @Chaotic Enby with whom I was discussing this idea on the Wikimedia Discord -- Sohom (talk) 02:50, 29 January 2025 (UTC)[reply]
Thanks! For the specific implementation, I was thinking that we could help the new writers evaluate what is and isn't a reliable source, by pointing them towards something like WP:THREE or directly having a clear wording similar to WP:42. Something such as:

Here, please link what you think are the three best reliable sources that are independent and provide significant coverage of the topic:

(with or without piped links to WP:RS, WP:SIGCOV, etc.)
We can clarify that it is the quality of sources, and not the quantity, that matters, and that highlighting your best sources makes the article easier to assess, and thus more likely to be accepted soon. Chaotic Enby (talk · contribs) 02:56, 29 January 2025 (UTC)[reply]
In fact, now that I've had a look at the AfC wizard itself, I'm realizing that the code part might be a bit scary and not necessarily conducive to adding the important sources. We could have the new users enter the sources before (between Wikipedia:Article wizard/Referencing and Wikipedia:Article wizard/CommonMistakes), which would also help them not write the article backwards. Chaotic Enby (talk · contribs) 03:00, 29 January 2025 (UTC)[reply]
I made a prototype of what I had in mind at User:Chaotic Enby/Article wizard, although the buttons aren't functional yet. My idea for the technical part is to have the first two link to the same page with themselves as added hyperlink parameters, and the third send to the next page with all the hyperlink parameters, which can then be passed to the draft's source editor (in a suitable template we can create). Chaotic Enby (talk · contribs) 03:14, 29 January 2025 (UTC)[reply]
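For anyone implementing the "pass the sources to the draft's source editor" step: MediaWiki's edit screen supports preloading a page via the `preload` URL parameter and filling `$1`, `$2`, `$3` placeholders via repeated `preloadparams[]` parameters. A sketch of building such a URL, assuming a hypothetical preload page name (the template name here is illustrative, not an existing page):

```python
from urllib.parse import urlencode

def draft_edit_url(draft_title, sources):
    """Build an edit URL that preloads a template and substitutes the
    three sources into its $1, $2, $3 placeholders via preloadparams[]."""
    params = [("action", "edit"),
              ("preload", "Template:Best sources/preload")]  # illustrative name
    params += [("preloadparams[]", s) for s in sources]
    return ("https://en.wikipedia.org/w/index.php?title="
            + draft_title.replace(" ", "_") + "&" + urlencode(params))

url = draft_edit_url("Draft:Example topic", [
    "https://example.org/source1",
    "https://example.org/source2",
    "https://example.org/source3",
])
print(url)
```

The wizard's final button could simply link to such a URL, so the sources entered in the earlier form land in the draft automatically but remain editable.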
Nice mock-up! I'd replace the word "link" with "cite" though, to be inclusive of offline sources. Ca talk to me! 11:40, 29 January 2025 (UTC)[reply]
Good idea, I made the change! Chaotic Enby (talk · contribs) 13:38, 29 January 2025 (UTC)[reply]
Strongest possible support from an AFD and lapsed NPP reviewer perspective. I'll highlight Wikipedia:Articles for deletion/Patrick Bet-David (3rd nomination) as an AFD where this would have been useful. I would also suggest an additional step during finalisation where the TLDR sources are reviewed and replaced if better ones are identified during the process. ~Hydronium~Hydroxide~(Talk)~ 03:21, 29 January 2025 (UTC)[reply]
That could also be a great idea! Since the last phase of the article wizard is the source editor where the draft is written and published, we could have the WP:THREE sources be in a template that is clearly visible in the source editor, so that the user can review and edit the sources if needed.
The current code displayed in the editor is:
{{subst:AfC submission/draftnew}}<!-- Important, do not remove this line before article has been created. -->

== References ==
<!-- Inline citations added to your article will automatically display here. See en.wikipedia.org/wiki/WP:REFB for instructions on how to add citations. -->
{{reflist}}
We could have it become something like:
{{subst:AfC submission/draftnew}} <!-- Important, do not remove this line before article has been created. -->
{{best sources <!-- Your three best sources -->
| $1
| $2
| $3
}}

== References ==
<!-- Inline citations added to your article will automatically display here. See en.wikipedia.org/wiki/WP:REFB for instructions on how to add citations. -->
{{reflist}}
With the three fields being autofilled by the parameters passed in the previous form, but still editable by the user. Chaotic Enby (talk · contribs) 03:38, 29 January 2025 (UTC)[reply]
To make it more practical for future reviewers, I'm also thinking that the template {{best sources}} should allow the reviewer to fill a {{source assess table}} to help point out to the nominator and future reviewers what the issues with the sources are. Chaotic Enby (talk · contribs) 03:41, 29 January 2025 (UTC)[reply]
Sounds good! Depending on how energetic code support is feeling, the tool could actually step through the "best sources" one at a time, with an explanatory checklist for the editor to complete (for various aspects of independence, reliability, significant coverage, etc.) ~Hydronium~Hydroxide~(Talk)~ 03:49, 29 January 2025 (UTC)[reply]
I've made {{User:Chaotic Enby/Best sources}} which either shows the sources (if not yet reviewed) or generates a source assess table (if reviewed), that's definitely a functionality that could be added to the AfC helper script. Chaotic Enby (talk · contribs) 14:10, 29 January 2025 (UTC)[reply]
  • I like this proposal, but I'd caution against 100% relying on this system when reviewing. Most newbies don't have a good idea of what a GNG-compliant source is, a skill that comes through experience. I'd imagine this would be helpful for obvious cases, but borderline cases will require closer examination than just three sources. Ca talk to me! 11:38, 29 January 2025 (UTC)[reply]
Strong support for an excellent idea. qcne (talk) 17:31, 29 January 2025 (UTC)[reply]

Reading and going through normal webpages and sites, I started to wonder: Do we really need links to shine blue these days? We are all so used to just "test clicking" on anything, since most things online are clickable these days, so I'm just wondering if we still need to differentiate our links by having them shine blue. This topic might have been discussed before and I've simply missed it.

We have several rules on how to limit the number of links in an article, simply because too many of them disrupt the reading experience. It also takes a lot of time to weed out duplicate links or anything that doesn't fall inside the guidelines. With black links, there wouldn't be any problem with over-linking.

Sure, this is just a very rudimentary idea that would need to be sorted out for editors. Perhaps the links could turn blue (and red) when you open a page in the editing window, black links could be the default when you are not logged in, and when logged in you could select the color in your settings. Or: the second, third, etc. link in an article could automatically be displayed in black, leaving only the first instance blue (surely we have the tech for that now). It needs to look good on all platforms. And what about all the menus on wiki pages, do they really need to be blue? What normal website these days has its menus in a "click color"? I think that a reduction of colored links would be a way to give the layout of Wikis a bit of an update.

This question might belong on another forum, but this place is as good as any to start. Cart (talk) 20:21, 29 January 2025 (UTC)[reply]

Respectfully, I don't think that this would be a very practical navigating experience. The idea of having blue links is specifically so the readers don't have to "test click" on every word, and I don't think Wikipedia readers are actually doing it. Having links be black by default would make for a more painful navigating experience, as you'd have to try to click on every word to see if there is a link hidden there.
However, I do agree that making the menus (and only the menus) be black could be a possibility (although not sure if it would be an improvement) as the readers already expect them to be links. Chaotic Enby (talk · contribs) 20:28, 29 January 2025 (UTC)[reply]
I am not used to test clicking. Not differentiating before hovering is just bad UX and has not been and should not be normalized. Not to mention the print view.

With black links, there wouldn't be any problem with over-linking.

I don't understand this rationale at all. We should make it easy for readers to go places. "Solving" navigational issues by breaking navigation entirely is the "nuke the world and just die out" solution. Aaron Liu (talk) 13:28, 30 January 2025 (UTC)[reply]
"We are all so used to just "test clicking" on anything" [citation needed]--User:Khajidha (talk) (contributions) 13:39, 30 January 2025 (UTC)[reply]
Not an expert, but I believe there has to be some way to distinguish between links and plain text to meet accessibility standards (this is why link colors were lightened between Vector 2010 and Vector 2022 – to increase contrast between text colors). RunningTiger123 (talk) 14:11, 30 January 2025 (UTC)[reply]
Do you have any evidence of "We are all so used to just 'test clicking?'" I certainly do not do that and looking over peers shoulders, I don't think I've ever seen anyone do that, period. This is anecdote vs anecdote, but I feel that you're making an extraordinary claim. Most people don't click something unless there's something to differentiate it from surrounding text (or it's obviously part of a menu.) Nebman227 (talk) 14:23, 30 January 2025 (UTC)[reply]
Ok, I was obviously unclear in calling it 'test clicking'. But say that you go on a news website like BBC or CNN, you see no links at all. On a computer the links will show up as underlined when you move your cursor over them, but not so on a phone. You just assume that the headlines are clickable and so you 'test click' on them, and that usually works. You are directed to the article you are looking for.
As for having the second, third, etc. links to the same article on a page turn black automatically: it would reduce the visual number of links, making the page easier to read. The links would be there, just not so much in your face. And if you happen to click on such a word it will link as usual. Cart (talk) 15:04, 30 January 2025 (UTC)[reply]
True, but people know that headlines are clickable. People don't know in advance which words inside a chunk of text will be clickable. Chaotic Enby (talk · contribs) 15:07, 30 January 2025 (UTC)[reply]
Those headlines fall under Nebman227's point about "differentiate[d] from surrounding text (or it's obviously part of a menu)". Nobody just randomly clicks on words in the middle of an article.--User:Khajidha (talk) (contributions) 15:09, 30 January 2025 (UTC)[reply]

proposed template for justified criticism

Please share your thoughts on the draft of this template, which could be named {{Care}} or something else.

Arbabi second (talk) 19:41, 30 January 2025 (UTC)[reply]

I think that sort of thing is better conveyed with a personalised message rather than a template. Phil Bridger (talk) 19:55, 30 January 2025 (UTC)[reply]
That message is making a judgment about a user's internal state. We warn users when their conduct does not conform with policies and guidelines, and may sanction them when their behavior continues to not conform with policies and guidelines, but we should never be commenting on their intentions, beliefs, or other internal states. Donald Albury 22:10, 30 January 2025 (UTC)[reply]
@Donald Albury
I basically agree with your opinion, but "Discussion of behavior in an appropriate forum (e.g. user's talk page or Wikipedia noticeboard) does not in itself constitute a personal attack." Anyway, in practice, there are far more severe criticisms in messages between users, and a mild and humorous example might have a place to test. Arbabi second (talk) 22:24, 30 January 2025 (UTC)[reply]
There is a difference between discussing overt behavior and discussing internal intentions. We assume good faith, but we may sanction problematic behavior even if it may have been done in good faith. We don't know why an editor does something, we only know what they did. Telling someone that they don't care is not assuming good faith. I'm sorry to jump on you like this, but I think AGF is a very important principle to maintain. Donald Albury 23:15, 30 January 2025 (UTC)[reply]
Personally I think it's better to focus on what behaviour is desired, rather than any internal motivation for someone exhibiting poor behaviour. For instance, it's helpful when commenters acknowledge the viewpoints of others to let them know that their points have been considered, even if the commenters disagree with the conclusions being drawn. isaacl (talk) 23:19, 30 January 2025 (UTC)[reply]
You think that the recipient doesn't care about other people's opinions, but you still expect this to accomplish anything? --User:Khajidha (talk) (contributions) 22:13, 30 January 2025 (UTC)[reply]
@Khajidha
Like many others, I often forget that there is another side to an argument, and now, in my old age, I have finally learned not to be offended by others' reminders and to try to listen to the other side. Arbabi second (talk) 22:41, 30 January 2025 (UTC)[reply]
Perhaps your meaning would be closer to "You are very precise and fluent in explaining your points and opinions, but right now, I wish you were paying attention to my points and opinions." WhatamIdoing (talk) 23:49, 30 January 2025 (UTC)[reply]
@WhatamIdoing
Your wording is definitely better than mine. It didn't occur to me. I will correct the message according to your guidance. Arbabi second (talk) 02:24, 31 January 2025 (UTC)[reply]

I modified the message text with the guidance of WhatamIdoing. Is this template now usable? Arbabi second (talk) 10:07, 31 January 2025 (UTC)[reply]

In my view, it would be better to demonstrate the desired behaviour by example first (or refer to where the desired behaviour has already been exhibited), and then ask for a response to your expressed viewpoints. By itself, just asking for attention can come across as being self-centred. isaacl (talk) 15:19, 31 January 2025 (UTC)[reply]
Note by modifying your original post, you've made it difficult for new people joining the thread to understand the previous responses. Perhaps you can restore the original post, and post the modified message separately? isaacl (talk) 15:22, 31 January 2025 (UTC)[reply]
For posterity, the previous text was "You are very precise and fluent in explaining your points and opinions, but unfortunately you don't care much about the opinions of others." The current text is "You express your points and opinions very precisely and fluently, but at this moment, I wish you would pay more attention to mine." I personally find both examples to be rude, and the first one probably counts as a personal attack. Thebiguglyalien (talk) 17:37, 31 January 2025 (UTC)[reply]
@Thebiguglyalien
In my experience after several years on Wikipedia, it is rare for a user to personally attack others or to be offended by the normal, mild sarcasm of others. It is not unlikely that there are other intentions behind extreme actions and reactions. I invite you to read Wikipedia:"Breaching experiment" considered harmful to understand the complexity of behavioral issues. A template like this may be a useful tool for new users who have unknowingly been exposed to "Breaching experiment". Arbabi second (talk) 08:46, 1 February 2025 (UTC)[reply]
In my longer experience I have seen numerous people feel offended or even attacked by harmless phrases. --User:Khajidha (talk) (contributions) 21:23, 1 February 2025 (UTC)[reply]
@Khajidha You might be right. Arbabi second (talk) 05:27, 2 February 2025 (UTC)[reply]

Creating a template named "Template:High visit is predicted"

Hi, a high visit count (a jump in visits) for an article on Wikipedia may be caused by different things:

  1. TV and satellite
  2. News web sites
  3. Social media (e.g. Instagram)
  4. Reaching a milestone (for example, birthday of a scientist)

and other reasons. So in my opinion, a jump in page views can be predicted from these causes a couple of minutes/hours/days in advance.

So I propose creating a template named "Template:High visit is predicted", to alert editors to pay more attention to such articles in advance and try to improve their quality as much as they can, given the predicted increase in page views. This template should have a category named "Category:Most predicted visit articles" that shows all such articles at a glance.

Please discuss the idea. Thanks, Hooman Mallahzadeh (talk) 17:41, 2 February 2025 (UTC)[reply]

I don't think enough people would notice a template / maintenance category for this to be helpful. Better to post at a non-dead relevant discussion page (for example, a WikiProject talk page). —Kusma (talk) 17:51, 2 February 2025 (UTC)[reply]
Number four seems to be the only one in that list (I understand that the list is not exclusive) that is predictable a couple of days in advance. Phil Bridger (talk) 17:52, 2 February 2025 (UTC)[reply]
Yes, those triggers (Instagram and TV) impact page views much faster; maybe the prediction is that they impact page views a couple of hours, minutes, or seconds later. Hooman Mallahzadeh (talk) 18:04, 2 February 2025 (UTC)[reply]
Awards where nominees are known in advance but winners will get lots of attention (say, Oscars) would also qualify I guess. —Kusma (talk) 17:56, 2 February 2025 (UTC)[reply]
For more sudden changes, we already have Template:Current. CMD (talk) 18:18, 2 February 2025 (UTC)[reply]

Change Wikipedia themes using View > Page Style in Firefox

This would make it so logged-out or logged-in users of Wikipedia could change the theme of Wikipedia by pressing Alt > View > Page Style and selecting from a drop-down of available themes. This is a standard feature in HTML: https://developer.mozilla.org/en-US/docs/Web/CSS/Alternative_style_sheets Northpark997 (talk) 19:27, 2 February 2025 (UTC)[reply]

This is not really what I would call a "standard feature" – it has been deprecated from several browsers (Chrome, Opera), and doesn't appear to have ever been implemented in other browsers, besides Firefox. Chaotic Enby (talk · contribs) 19:35, 2 February 2025 (UTC)[reply]
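For context, the mechanism discussed above is declared with alternate stylesheet links in a page's head, as described on the MDN page linked in the proposal. A minimal sketch, with file names and titles purely illustrative:

```html
<!-- Persistent (default) stylesheet: always applied. -->
<link rel="stylesheet" href="vector.css" title="Vector">
<!-- Alternate stylesheets: offered under View > Page Style in Firefox;
     only one titled sheet is active at a time. -->
<link rel="alternate stylesheet" href="dark.css" title="Dark">
<link rel="alternate stylesheet" href="high-contrast.css" title="High contrast">
```

As noted, browser support is the sticking point: without UI exposure in Chromium-based browsers, these links are effectively invisible to most readers.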

WMF

Taking stock of the new Community Wishlist process

Over on the Meta talk page of the new Community Wishlist process I've done a post taking stock of the changes so far. Followers of this page may be interested in that discussion. Best, Barkeep49 (talk) 17:48, 7 January 2025 (UTC)[reply]

From The Forward. Any comment/advice from the WMF on this? Gråbergs Gråa Sång (talk) 10:52, 8 January 2025 (UTC)[reply]

I see Wikipedia:Village_pump_(miscellaneous)#Heritage_Foundation_intending_to_"identify_and_target"_editors is ongoing. Gråbergs Gråa Sång (talk) 11:09, 8 January 2025 (UTC)[reply]

WMF annual planning: How can we help more contributors connect and collaborate?

Hi all - the Wikimedia Foundation is kicking off our annual planning work to prepare for next fiscal year (July 2025-June 2026). We've published a list of questions to help with big-picture thinking, and I thought I'd share one of them here that you all might find interesting: We want to improve the experience of collaboration on the wikis, so it’s easier for contributors to find one another and work on projects together, whether it’s through backlog drives, edit-a-thons, WikiProjects, or even two editors working together. How do you think we could help more contributors find each other, connect, and work together? KStineRowe (WMF) (talk) 20:27, 10 January 2025 (UTC)[reply]

@KStineRowe (WMF), by providing more funding for scholarships to Wikimania and other conferences, for one thing. Sdkbtalk 22:57, 10 January 2025 (UTC)[reply]
Anyone is invited to collaborate and provide feedback on the page, Meta:Meta:Neuro-inclusive event strategies. I think working on this could go a long way. Hexatekin (talk) 19:33, 28 January 2025 (UTC)[reply]

We want to buy you books

I've opened a discussion on Wikipedia talk:WikiProject Resource Exchange/Resource Request to get your input on a pilot project that would fund resource requests to support you in improving content on Wikipedia. The project is very much in its early stages, and we're looking for all of your thoughts and suggestions about what this pilot should look like. Best, RAdimer-WMF (talk) 23:36, 22 January 2025 (UTC)[reply]

Wikimedia Foundation Bulletin 2025 Issue 1


MediaWiki message delivery 16:58, 27 January 2025 (UTC)[reply]

WMF annual planning: What information or tools could help you choose how you spend your time?

Hey everyone, I'm Sonja. I lead some of the teams at WMF that design and build tools for contributors. One of the things we're thinking about for next (fiscal) year is ways we can make it easier for volunteers to find meaningful tasks to focus on. What information or tools could help you choose how you spend your time? And how do you currently organize and prioritize your on-wiki activity? This is just one of many questions we look forward to talking with you about. SPerry-WMF (talk) 22:14, 29 January 2025 (UTC)[reply]

Hi @SPerry-WMF! One of the considerations I'd have in mind for finding meaningful work is how to prioritize the most important articles, since focusing attention on them will lead to more meaningful impacts for readers. This applies both to quasi-automated tasks (e.g. I feel like AWB's default sorting does a pretty good job of it, although I'm not sure what algorithm they use) and finding articles to improve manually. We have crude metrics like pageviews (that are easily influenced by recency/systemic/pop culture bias), as well as lists like Vital Articles, but there is room for improvement. Sdkbtalk 23:55, 29 January 2025 (UTC)[reply]
Thanks for your reply, @Sdkb, good to hear from you! Some of the newcomer tools we've been investing in, such as Structured Tasks, are getting to what you're suggesting, and I think there is a big opportunity for us to expand that concept to recommend tasks to more experienced editors as well, for example by featuring things like vital articles that require updates. As you're suggesting, there are some tools for that out there already, but the burden to find them is on the volunteer, taking up precious time. If you had recommendations available like that, how or where would you like to receive them? SPerry-WMF (talk) 23:11, 30 January 2025 (UTC)[reply]
One thing to keep in mind is that different people have different definitions of "most important articles". Wikipedia:WikiProject Vital Articles is only one project among many, for instance. And I suspect most people consider "topics I want to write about" the most important. Jo-Jo Eumerus (talk) 10:03, 31 January 2025 (UTC)[reply]
In most cases, within the task I'm already working on. So e.g. within structured tasks, the first suggested task. But it'd also be useful to have the ability to customize the list, similar to AWB filtering, so that I could easily make a query like "what are the most important articles that have X maintenance tag?" Sdkbtalk 16:41, 31 January 2025 (UTC)[reply]
Thank you @Sdkb and @Jo-Jo Eumerus for weighing in here - I totally agree that customization is key for these types of recommendations. One way to do that is to enable customization for each volunteer individually, but I also see an opportunity for wikis to nudge their community in specific directions by making it possible to set some recommendation parameters for the entire community, for example by promoting projects or articles that could help close specific content gaps. Where do you think customization could be most impactful? SPerry-WMF (talk) 23:50, 31 January 2025 (UTC)[reply]
Hi @SPerry-WMF, if you want to build these tools for helping volunteers find tasks en.wiki, the best thing you could spend time on by far is rethinking and rebuilding the infrastructure that developed around WikiProjects. WikiProjects are on average dead, but their technical existence is needed to track and monitor articles. A WikiProject is needed to enable the generation of Wikipedia:Version 1.0 Editorial Team/Index summaries of article number and quality, which could direct editors to pages they are interested in that need help. A WikiProject is needed for Wikipedia:Article alerts to allow people to be aware of significant discussions within its topic. A WikiProject is needed to generate maintenance categories of issues editors can look for within topics. These tools are all useful for helping editors find meaningful tasks to focus on, but keeping these tools around means leaving in place a system of ghost towns that serve mostly to mislead new editors. CMD (talk) 10:27, 31 January 2025 (UTC)[reply]
Thank you @Chipmunkdavis, that’s a very valid point. In fact, we recently completed some research on WikiProjects with really interesting findings that support what you’re highlighting. For example, we found strong validation that WikiProjects serve a variety of purposes, and especially English contributors reported getting value from backlog drives. However, we also learned that WikiProjects experience common challenges, particularly: finding participants, engaging newcomers, and keeping people continually engaged. We have recently developed some new features that can help people discover WikiProjects (the Collaboration List) and be invited to WikiProjects based on their edit history (Invitation List), through the CampaignEvents extension. We're currently exploring ways to potentially generalize tools like Event Registration, so that it's easier for WikiProjects to develop contests, events, and drives that are friendly to newcomers and that can be broadly promoted on the wikis. This makes me wonder: What do you think are the biggest challenges that prevent people from creating or sustaining WikiProjects? How do you think our current (or future) tools could help in these efforts, so WikiProjects can stop feeling like “ghost towns”? SPerry-WMF (talk) 23:38, 31 January 2025 (UTC)[reply]
The point I was making was that the best use of time would be to make the infrastructure available without needing a WikiProject. The challenges to Wikiprojects are social, although having tools already available would contribute to removing a technical barrier and perhaps a social barrier regarding momentum, if you want to look at it that way. CMD (talk) 01:10, 1 February 2025 (UTC)[reply]
Some engineering effort was spent a couple years ago on improving WikiProject software. Please see mw:Extension:CollaborationKit and https://www.mediawiki.org/w/index.php?oldid=6590981. Sadly it was never deployed and interest in it seems like it was low. Perhaps the process of going "we need to improve WikiProjects" to an actual concrete thing that improves WikiProjects that will actually be used and the community will be excited about is a bit harder than it appears. –Novem Linguae (talk) 04:02, 1 February 2025 (UTC)[reply]

Miscellaneous

The "Related articles" section at Lepa Brena has as its second item 'Hajde da se volimo (film series)', with this description below it: "1987 [[SFRY|Yugoslavia]] film".

I could not find the source of that text despite trying; the article itself has no short description template, and the Wikidata description was not good ("1987 film by Aleksandar Đorđević"; now changed to "1987–1990 Yugoslav film series"). I expect a refreshed import from Wikidata in a while; can someone find where exactly "1987 [[SFRY|Yugoslavia]] film" comes from? 5.43.67.103 (talk) 03:50, 25 January 2025 (UTC)[reply]

Got a specific question for us? How can we help? –Novem Linguae (talk) 09:37, 26 January 2025 (UTC)[reply]
Please remove the unwikilinked text displayed below the item 'Hajde da se volimo (film series)' and replace it with "1987–1990 Yugoslav film series" (the current description at Wikidata). 5.43.67.103 (talk) 12:38, 26 January 2025 (UTC) [e][reply]
 Done, though I reworded the short description a bit. Dsuke1998AEOS (talk) 03:46, 27 January 2025 (UTC)[reply]

Help

Hi, what happened with this (File:Logo Jubilee 2025.png)? It's my first time uploading a non-free file, but why is there a deletion template? AbchyZa22 (talk) 08:28, 26 January 2025 (UTC)[reply]

It sounds like only an old revision of the file will be deleted. The file itself (the current revision) will be kept. I imagine that's probably an acceptable outcome. –Novem Linguae (talk) 09:36, 26 January 2025 (UTC)[reply]
This is routine for non-free pics uploaded on en-WP. There is a "proper size", and a bot comes by to impose it. The bot-approved version of the pic will remain, and the old version will be automatically deleted after awhile. Gråbergs Gråa Sång (talk) 10:11, 27 January 2025 (UTC)[reply]
@Novem Linguae @Gråbergs Gråa Sång: Thanks, but I didn't see my notifications in Miscellaneous (via Google Translate). AbchyZa22 (talk) 17:47, 30 January 2025 (UTC)[reply]

Regarding the name

Is "Wikipedia" the actual name, or is it something like "WikiPedia" or "WikipediA"?

By the way, this may be placed in the wrong place, if so, tell me to please move my question to a different place. SCiteguy1024 (talk) 03:51, 27 January 2025 (UTC)[reply]

Wikipedia is the standard capitalization. I think one of our old logos stylizes it as WikipediA, but I've never seen that written in text. –Novem Linguae (talk) 10:02, 27 January 2025 (UTC)[reply]
That would have been the Usemod era, I would guess that WikipediA was a play on article titles sometimes having that final capital letter. You can see some such titles at Wikipedia:Usemod article histories. Very old history though, I wonder how it's come up now. CMD (talk) 10:17, 27 January 2025 (UTC)[reply]
The logo currently at the top of this page (at least in Vector 2022 and Monobook) mixes normal caps and small caps in a way that makes it look like WikipediA. Anomie 12:48, 27 January 2025 (UTC)[reply]

SEO impact of wikipedia citations

Hello,

I would like to understand your perspectives on SEO and Wikipedia, particularly regarding these points:

  1. What are your thoughts on the relationship between SEO and Wikipedia citations?
  2. Does secretly adding a website as a reference in Wikipedia articles have any positive SEO impact on that website?
  3. How can we avoid this type of manipulation?

In my community (fawiki), there is a significant number of such manipulated links. Additionally, there are numerous websites that offer "Wikipedia SEO and link building services" for a fee, essentially monetizing the manipulation of Wikipedia citations. Could you please share any relevant links, discussions, or resources where I can learn more about this topic?

Thank you for your insights. WASP-Outis (talk) 16:12, 27 January 2025 (UTC)[reply]

Of course this goes on, and frequently. If we had a proper database of citations (User:Harej) it might be possible to build applications to statistically check for possible SEO abuse of citations. -- GreenC 16:32, 27 January 2025 (UTC)[reply]
It's already being avoided. It has been since at least 2007 and I'm guessing it has been from the beginning. As you can see if you examine the source HTML behind a Wikipedia page in your browser, Wikipedia adds a rel="nofollow" attribute to references and external links, in the same way that any decent blogging platform does to reader comments. Mainstream search engines, on encountering these, don't count the links in page rankings, so they're of no use at all for SEO. This is explained at Wikipedia:Spam, as it is in the level 2 and level 3 warnings that can be posted on the talk pages of users who appear to be spamming. Largoplazo (talk) 17:56, 27 January 2025 (UTC)[reply]
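The rel="nofollow" behavior described above can be checked directly against a page's rendered HTML. Below is a minimal sketch using only Python's standard library; the HTML snippet is hand-written for illustration (not real Wikipedia output), though external links in MediaWiki-rendered article HTML do carry this attribute.

```python
# Sketch: counting rel="nofollow" links in a fragment of rendered HTML,
# the way one might audit external links on a saved Wikipedia page.
from html.parser import HTMLParser

# Hypothetical sample, mimicking MediaWiki's markup for an external
# reference link versus an internal wikilink.
SAMPLE_HTML = """
<a rel="nofollow" class="external text" href="https://example.com/source">Source</a>
<a href="/wiki/Spam_(electronic)">internal link</a>
"""

class NofollowCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow = 0
        self.total_links = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        self.total_links += 1
        attr_map = dict(attrs)
        # rel can hold multiple space-separated tokens, e.g. "nofollow ugc"
        if "nofollow" in (attr_map.get("rel") or "").split():
            self.nofollow += 1

parser = NofollowCounter()
parser.feed(SAMPLE_HTML)
print(parser.nofollow, parser.total_links)  # 1 of the 2 links is nofollow
```

In real MediaWiki output the attribute is added automatically to external links when the default `$wgNoFollowLinks` setting is enabled, which is why spammed references gain no conventional link-ranking benefit.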
Yeah, but that doesn't affect spammers who see a positive in putting their link everywhere. Adding spam is low effort with a potential payoff of more traffic. Johnuniq (talk) 22:42, 27 January 2025 (UTC)[reply]
Very little more traffic.
@WASP-Outis, I don't know if the numbers will be different at fawiki, but here, the research shows that a reader clicks a link in one ref on 1 out of 300 page views. For the median enwiki article (4 refs, 1 page view per week), that means the spammer's link will probably get clicked on once every 25 years. For a "higher traffic" article, it might be once a month.
Perhaps if we wrote an article about Wikipedia and SEO, more people would discover how pointless it is. WhatamIdoing (talk) 23:03, 27 January 2025 (UTC)[reply]
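The back-of-the-envelope arithmetic above works out as follows (assuming, as the comment implies, that the 1-in-300 click rate is per page view and that clicks are spread evenly across an article's references; the input numbers come from the comment itself, not an independent source):

```python
# Rough estimate of how often a single spammed reference link gets clicked.
CLICKS_PER_VIEW = 1 / 300   # one ref-link click per 300 page views (quoted research)
REFS_PER_ARTICLE = 4        # median enwiki article
VIEWS_PER_WEEK = 1          # median article traffic

# Expected clicks per week on *this particular* reference link
clicks_per_week = VIEWS_PER_WEEK * CLICKS_PER_VIEW / REFS_PER_ARTICLE

weeks_between_clicks = 1 / clicks_per_week        # 1200 weeks
years_between_clicks = weeks_between_clicks / 52  # about 23 years
print(round(years_between_clicks, 1))
```

About 23 years between clicks, consistent with the "once every 25 years" figure quoted above.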
I was answering the question, about whether it influences SEO. It doesn't, regardless of whatever other benefit they get out of spamming, or think they're getting out of it. Largoplazo (talk) 23:20, 27 January 2025 (UTC)[reply]
@GreenC,@Johnuniq,@Largoplazo,@WhatamIdoing:
Based on my internet research, nofollow links can significantly impact a website’s SEO. Moreover, if a link remains on a page for more than a month, it has a positive effect as a backlink on search engines. From what I understand, if a website is cited as a reference in Wikipedia, Google eventually recognizes that website as a credible source in the long term.
These are the findings I discovered through my research on backlinks and SEO, and you’ll find similar information when searching on Google.
In WikiFa, we have questionable links that have been cleverly embedded in articles and may have remained in the wiki for years.
What made this topic interesting to me was a conversation I had yesterday with an SEO expert. He mentioned that he uses Wikipedia for link building and has methods to prevent his links from being removed. No matter how much I tried to convince him that Wikipedia has no effect on his website’s SEO, he wouldn’t accept it and claimed he had seen its positive impact firsthand.
I really don’t know how to combat this issue. Perhaps instead of having a blacklist for untrustworthy links, if we had a whitelist for verified links in the wiki, this problem could be resolved. WASP-Outis (talk) 07:57, 28 January 2025 (UTC)[reply]
Now I wonder: Did you ask him to show you an example of how he did it? WhatamIdoing (talk) 08:11, 28 January 2025 (UTC)[reply]
@WhatamIdoing: Of course, since he knew I was a Wikimedian, he wouldn't answer such a question:) WASP-Outis (talk) 08:32, 28 January 2025 (UTC)[reply]
Too bad. It would have been interesting to see what he was doing. WhatamIdoing (talk) 20:42, 28 January 2025 (UTC)[reply]
I know nothing about how Google works but what you are saying makes sense. My point earlier was that the details don't matter to most spammers. They just take every opportunity to post links because it is a very low cost and has a potential for a good benefit. However, I agree that it makes sense that Google would have algorithms which notice the longevity of links at Wikipedia. Johnuniq (talk) 08:22, 28 January 2025 (UTC)[reply]
First you wrote that based on your own research, nofollow links can significantly impact a website's SEO. Then you wrote that an SEO expert claimed this and you tried to convince him that it isn't true. Can you clarify?
Anyway, I see what's going on: the convention that held for years, that the major search engines would ignore "nofollow" links, was exited by Google in 2019. It says it now treats "nofollow" only as a "hint", whatever that means. It's a wishy-washy statement saying more or less that they'll count the links toward rankings if they feel like it. It also added two new hints, "ugc" (user-generated content) and "sponsored", with the same lack of commitment to treat them any particular way. So, basically, Google said, "Hey, go ahead and spam websites, you might get somewhere!" Thanks, Google. Largoplazo (talk) 00:10, 29 January 2025 (UTC)[reply]

Interested in participating in an interview study regarding LLMs?

Dear Wikipedia editors,

It is our pleasure to invite you to join a study at the University of Minnesota! The objective of the study is to understand how large language models (LLMs) impact the collaborative knowledge production process, by investigating knowledge contributors’ interactions and experience with LLMs in practice.

If you have used LLMs (e.g., GPT, Llama, Claude...) in the process of contributing to Wikipedia (e.g., grammar checking, finding resources, writing scripts...), we'd love for you to join the study! You will be engaging in a 45-60 min interview, talking and reflecting about your experience with Wikipedia and your perception/usage of LLMs in Wikipedia. Your valuable input will not only help us understand practical ways to incorporate LLMs into the knowledge production process, but also help us generate guardrails for these practices. All participation would be anonymous.

In addition, if you know any editor who may have used LLMs during their edits, we highly appreciate it if you could share their contact with us, as we can reach out to them.

To learn more, please feel free to start a talk page discussion with me or send me an email or take a look at https://meta.wikimedia.org/wiki/Research:How_LLMs_impact_knowledge_production_processes or directly sign up: https://umn.qualtrics.com/jfe/form/SV_bqIjhNRg9Zqsuvs

Thank you so much for your time and consideration!

All the best, LLMs and knowledge production Research Team

Phoebezz22 (talk) 20:39, 27 January 2025 (UTC)[reply]

Have not… and will not. Thanks. Blueboar (talk) 21:42, 27 January 2025 (UTC)[reply]
I think that by limiting your survey to people who have actually used LLMs you are completely invalidating your study. Many people on Wikipedia have suffered at the hands of LLMs rather than using them. Phil Bridger (talk) 21:47, 27 January 2025 (UTC)[reply]

Feminism and Folklore 2025 starts soon

Please help translate to other languages.

Dear Wiki Community,

You are humbly invited to organize the Feminism and Folklore 2025 writing competition from February 1, 2025, to March 31, 2025 on your local Wikipedia. This year, Feminism and Folklore will focus on feminism, women's issues, and gender-focused topics for the project, with a Wiki Loves Folklore gender gap focus and a folk culture theme on Wikipedia.

You can help Wikipedia's coverage of folklore from your area by writing or improving articles about things like folk festivals, folk dances, folk music, women and queer folklore figures, folk game athletes, women in mythology, women warriors in folklore, witches and witch hunting, fairy tales, and more. Users can help create new articles, expand or translate from a generated list of suggested articles.

Organisers are requested to work on the following action items to sign up their communities for the project:

  1. Create a page for the contest on the local wiki.
  2. Set up a campaign on CampWiz tool.
  3. Create the local list and mention the timeline and local and international prizes.
  4. Request local admins for site notice.
  5. Link the local page and the CampWiz link on the meta project page.

This year, the Wiki Loves Folklore Tech Team has introduced two new tools to enhance support for the campaign. These tools include the Article List Generator by Topic and CampWiz. The Article List Generator by Topic enables users to identify articles on the English Wikipedia that are not present in their native language Wikipedia. Users can customize their selection criteria, and the tool will present a table showcasing the missing articles along with suggested titles. Additionally, users have the option to download the list in both CSV and wikitable formats. Notably, the CampWiz tool will be employed for the project for the first time, empowering users to effectively host the project with a jury. Both tools are now available for use in the campaign. Click here to access these tools

Learn more about the contest and prizes on our project page. Feel free to contact us on our meta talk page or by email us if you need any assistance.

We look forward to your immense coordination.

Thank you and Best wishes,

Feminism and Folklore 2025 International Team

Stay connected  

--MediaWiki message delivery (talk) 02:35, 29 January 2025 (UTC)[reply]

Wiki Loves Folklore is back!

Please help translate to other languages.

Dear Wiki Community, You are humbly invited to participate in Wiki Loves Folklore 2025, an international media contest organized on Wikimedia Commons to document folklore and intangible cultural heritage from different regions, including folk creative activities and many more. It is held every year from the 1st till the 31st of March.

You can help in enriching the folklore documentation on Commons from your region by taking photos, audios, videos, and submitting them in this commons contest.

You can also organize a local contest in your country and support us in translating the project pages to help us spread the word in your native language.

Feel free to contact us on our project Talk page if you need any assistance.

Kind regards,

Wiki loves Folklore International Team --MediaWiki message delivery (talk) 02:35, 29 January 2025 (UTC)[reply]

Smithsonian has made millions of images available under Creative Commons Zero

Not sure if this is already well known, but I just came across this info and it seemed worth sharing: Smithsonian Open Access allows people to "download, share, and reuse millions of the Smithsonian’s images ... more than 5.1 million 2D and 3D digital items from our collections—with many more to come. This includes images and data from across the Smithsonian’s 21 museums, nine research centers, libraries, archives, and the National Zoo." FAQ here. FactOrOpinion (talk) 03:10, 29 January 2025 (UTC)[reply]

Possibly a lot at Commons:Category:Smithsonian Institution CMD (talk) 05:55, 29 January 2025 (UTC)[reply]
We should download all of these to preserve them before somebody decides to shut it down. RoySmith (talk) 16:45, 2 February 2025 (UTC)[reply]

Again with Alexander McQueen?

Of all the possible articles, we get another about superficial trash. Is it random? 97.126.180.47 (talk) 19:16, 30 January 2025 (UTC)[reply]

Which Alexander McQueen, who are "we", who/what gave it to you and is what random? Gråbergs Gråa Sång (talk) 19:48, 30 January 2025 (UTC)[reply]
They might be talking about Today's featured article. XtraJovial (talkcontribs) 22:33, 30 January 2025 (UTC)[reply]
To paraphrase an old adage, one man's superficial trash is another's substantial treasure. Largoplazo (talk) 22:42, 30 January 2025 (UTC)[reply]
Keep an eye out for tomorrow's, which is about an eccentric old man. WhatamIdoing (talk) 23:53, 30 January 2025 (UTC)[reply]
So Stanley Green died in 1993. It is really true that the passage of time speeds up as you get older, as it seems to me that I saw him only recently. Phil Bridger (talk) 07:30, 31 January 2025 (UTC)[reply]
Old? I'm an eccentric older man than James Joyce ever was. —Tamfang (talk) 21:03, 1 February 2025 (UTC)[reply]
Related discussion at Talk:Main_Page#Alexander_McQueen. Gråbergs Gråa Sång (talk) 10:15, 31 January 2025 (UTC)[reply]
Interesting how the ones about women's fashion are the ones that are "superficial trash". Kind of says more about you, friend. Thebiguglyalien (talk) 17:41, 31 January 2025 (UTC)[reply]

Global ban proposal for Shāntián Tàiláng

Hello. This is to notify the community that there is an ongoing global ban proposal for User:Shāntián Tàiláng who has been active on this wiki. You are invited to participate at m:Requests for comment/Global ban for Shāntián Tàiláng. Wüstenspringmaus talk 12:19, 2 February 2025 (UTC) Hope, that this message is well placed here. If not, please feel free to move it[reply]