This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Addition of more options in visual uploader
Latest comment: 10 months ago · 4 comments · 3 people in discussion
I have noticed there is a lack of commonly used public domain options in the visual uploader.
I have seen logos licensed wrongly.
I propose adding {{PD-textlogo}}. Cyberwolf (talk) 15:48, 10 June 2025 (UTC)
This is intentional, as there would be too many incorrectly licensed uploads. If we could get a special mode only available to users with autopatrol rights, such a mode could include tags like {{PD-textlogo}}. GPSLeo (talk) 17:13, 10 June 2025 (UTC)
But all logos end up incorrectly licensed as the ultimate result. We are actually in violation of copyright law by letting people upload such material under CC, as they have no right to set a license. I estimate 70% of logos are licensed wrong. So I don't think the intended effect even works; as far as I can see, it increases mislicensed images. Cyberwolf (talk) 22:22, 11 June 2025 (UTC)
Creating a Copyright Template for the Federal Reserve Bank of Saint Louis
Latest comment: 10 months ago · 5 comments · 4 people in discussion
Following several discussions and Deletion requests/Undeletion requests (two main discussions/DRs: 1, 2), it has been determined that graphs and charts made by the Federal Reserve Bank of Saint Louis (FRED) are all in the public domain, and that FRED's Legal Page, which states, "You can do a lot of things with FRED data, graphs, maps, software, and mobile apps for your own personal, non-commercial use…Series with a copyright notice are owned by third parties and have special restrictions. Before using data with a copyright notice for anything other than your own personal use, you must contact the data owner to obtain permission. Unfortunately, the Federal Reserve Bank of St. Louis cannot give you such permission." (bolding all done by FRED), is in fact an invalid legal statement for their charts/graphs.
As such, I am proposing the creation of a stand-alone licensing template, to help editors and uploaders understand the decisions made in these discussions regarding FRED's invalid legal disclaimer. In my opinion, this would be much nicer and more beneficial than slapping a generic {{PD-chart}} (or, as some have, an invalid {{PD-USGov}}) on files as the licensing template for a website with numerous images on the Commons. Pings for the users involved in those discussions: @Josve05a, @Jameslwoodward, @Abzeronow, @Ymblanter. WeatherWriter (talk) 13:55, 12 June 2025 (UTC)
I would support creating a new template {{PD-Federal Reserve Chart}} that explained that while the FED can create copyrighted material because it is independent of the Federal Government, time series charts rarely can have a copyright as they are almost always created by a computer and in most cases the underlying data are facts that cannot have a copyright. I would not support a template that did not limit itself to charts. Perhaps:
While the United States Federal Reserve System can create copyrighted material and claims copyright in this chart, the underlying data are facts that cannot have a copyright and the chart itself is almost certainly created by a computer and therefore cannot have a copyright. Even if created by hand, there is no creativity in plotting a time series.
I don't see how this needs its own template. If the underlying argument is that these images are ineligible for copyright because they are charts with no creative content, {{PD-chart}} seems to cover the matter adequately. Omphalographer (talk) 19:12, 12 June 2025 (UTC)
They could, but a more specific template would be better (e.g. if some later research or court ruling causes us to want to delete all the FRED charts). One thing that happened in the last few years, which unfortunately may be a bit too late to reverse, is that {{PD-algorithm}} has been stretched to cover all AI-generated images, which is IMO an abuse of the original purpose of the tag. -- King of ♥ ♦ ♣ ♠ 19:29, 23 June 2025 (UTC)
Is there a summary of all filters that run on commons?
Latest comment: 10 months ago · 18 comments · 11 people in discussion
I am a staff member of Wikimedia Taiwan. While conducting a workshop last Sunday, I found out that PDF files uploaded by new accounts are rejected by a filter. Fortunately, this only reduced the planned results of the workshop by half; if my only goal had been contributing to Commons, nothing would have been accomplished that day. There was no way for me to be aware of this filter when planning the course, because my own account no longer triggers it.
I know the filter rules can't be made public, as that would allow people to bypass them. But is it possible to get some summary so that organizers can avoid this situation? Reke (talk) 23:26, 10 June 2025 (UTC)
@Reke: I believe you should be able to see all public information at Special:AbuseFilter. Let me know if that doesn't work.
By the way, as a staff member with a reasonable amount of on-wiki activity, you can almost certainly request and receive autopatroller rights (Commons:Requests for rights#Autopatrol). I'm sure in that circumstance we'd waive the usual minimum number of edits. - Jmabel ! talk 00:18, 11 June 2025 (UTC)
Thank you. I could upload PDF files without problems, so getting autopatroller rights for myself would not help the workshop. I was successful when I demonstrated it, but all the students who had just registered on site were blocked by the filter at the last step. Reke (talk) 05:30, 11 June 2025 (UTC)
May I ask what you're doing in the workshop which involves uploading PDFs? Use of this file type on Commons should generally be pretty uncommon. Omphalographer (talk) 00:19, 11 June 2025 (UTC)
w:Wikisource commonly uses PDF files, and about 5% of Commons files are PDFs. In terms of number of files it is not as common as JPG, but it is more common than SVG or PNG. --Zache (talk) 02:53, 11 June 2025 (UTC)
Perhaps I misspoke. We've certainly got PDF files, but the vast majority of them are uploaded en masse by a very small number of users - most users will never upload a PDF. Omphalographer (talk) 03:16, 11 June 2025 (UTC)
@Grand-Duc Thank you. Yes, the file is part of the set of documents we plan to upload.
I agree that PDF is not a common format here. However, the museum that provided these files originally used this format to digitize the documents. After this experience, when I hold similar workshops in the future, I will convert PDFs into JPGs first. But I have to know in advance what filters exist, because my own account's permissions mean I won't trigger them. Reke (talk) 05:54, 11 June 2025 (UTC)
It is generally advised to let people create their accounts before the workshop, so that they are autoconfirmed during the workshop, or if that is not possible, to just request confirmed rights for the participants. GPSLeo (talk) 05:05, 11 June 2025 (UTC)
This can be difficult for some events. My event was held in Penghu, an island off the coast of Taiwan. Some participants are older and lack the skills to use 3C (electronics) products. They need to try things themselves immediately after watching the demonstration before they understand how to register an account.
If this is a collaboration with a university, I wouldn’t worry about this issue because, in addition to asking students to pre-register for an account, even if they run into problems, I can help them apply for permissions after the demonstration course, allowing them to complete the work from memory in subsequent courses. But for the participants of this workshop, they won't have the next step if they don't complete the work on that day. Reke (talk) 05:41, 11 June 2025 (UTC)
Then you need to request confirmed rights. As we will soon get an event organizer user group anyway, I will propose that this group also gets the right to grant confirmed rights to users. I also started creating a list of the filtered actions: Commons:User access levels#Specific actions. GPSLeo (talk) 06:43, 11 June 2025 (UTC)
@GPSLeo, that doc is out of date; it says registered users are not allowed to edit other people's userpages (among many other errors). All the Best -- ChuckTalk 05:13, 20 June 2025 (UTC)
I've read the message that day, and understood the reason why the filter was set. So I don't want to challenge its necessity. I just wish I had known about the filters so I could have planned around them.
The only thing I find unreasonable is that the block should happen when the file is uploaded, but the filter does not display the blocking message until all the data has been filled in. Reke (talk) 07:10, 23 June 2025 (UTC)
Maybe something similar to Wikipedia's en:WP:Event coordinator role should be implemented, so that event coordinators can allow participants to bypass filters (since their uploads are being supervised at the event). Zanahary (talk) 07:51, 24 June 2025 (UTC)
What to do with mass uploaded PDF/DJVU which are not in use or verified?
Latest comment: 10 months ago · 20 comments · 9 people in discussion
Back in 2020 Fae and others uploaded a substantial proportion of the IA's collections. There was a reasonable good faith assumption at the time that these would be integrated into Commons and ultimately Wikisource, and that at some point the files would be checked for compatibility with Commons and Wikimedia objectives.
This didn't happen, and it has subsequently been found that, due to the metadata provided and aspects of the upload process used, there are widespread license errors, a number of in-copyright works (PDF, DjVu, and image media), and faulty metadata.
So what is the sensible response to this, given that asking Commons license reviewers to check every single upload is a farcical task, as is requesting a wipe of all unused files?
I'm against a massive 'nuke' delete, given that I have found compatible items which HAVE subsequently been useful to Wikisource, without needing to be re-uploaded from IA directly. However, something has to be done to avoid the 'problem' files becoming a time-bomb of potential issues. Would a more nuanced removal of post-1930 files be a possible solution? (I.e., we adopt the HathiTrust policy approach of only considering pre-1930 media 'safe', and only allow media after this date on a case-by-case review by appropriate experts.)
ShakespeareFan00 (talk) 13:41, 8 June 2025 (UTC)
A lot of the work of getting rid of these, without throwing out the baby with the bathwater, does not need to be done by people who are officially license reviewers. In particular, anyone can mark clearly problematic files for deletion. - Jmabel ! talk03:57, 9 June 2025 (UTC)
Specific howlers include clearly post-1930 works marked in error as PD-USGov, which they clearly are not.
@Jmabel: Are you able to help provide some hints or filters? I favour the removal of anything post-1930, on the grounds that the date can be checked relatively easily without getting into specific nuances (or needing to check external catalogs). ShakespeareFan00 (talk) 10:14, 9 June 2025 (UTC)
I've stopped short of calling for a CCI on Fae, given that all their uploads were made in good faith, based on the information (and tools) they had. But if it's going to take a CCI to solve issues that have persisted since they left, I think there should be one. Is there a process for initiating one? Files tagged in error as PD-USGov are the largest problem. (An aside is the quality of the scans in places, but that's not something Commons generally worries about.) ShakespeareFan00 (talk) 10:14, 9 June 2025 (UTC)
Some key areas I've noted:
Seed catalogs - pre-1930 ones are PD-US and the license can be updated. I did some recatting this morning to diffuse the relevant category.
USPTO Library items, which are not US Gov works, but were tagged as such because they were tagged as FEDLINK collection items in the metadata. Some were PD-US, but not all.
Post-1978 Naval Postgraduate School student theses. These are not PD-USGov works, despite some on Commons arguing that the writers of some of them are serving military in some instances. Many deleted items of this type were not and have not been authored by Federal entities (typically these are civilian, state, or foreign military). Pre-1978 items have been considered no-notice (when no notice was found), but the collection includes items from various other institutions and requires review.
Clearly post-1930 journals tagged as PD in error, because IA had followed a library practice of recording a serial's date in metadata as the date of the first issue of the serial, as opposed to the cover date of the publication, and this was used in error during the upload process.
Flickr 'Commons' images tagged as PD by default, although examining the source publication proved the publication and images were not.
and others...
Fae uploaded at least 500,000 IA items, I think, and it's far easier to make a bold decision and conclude that the process to check these pre- or post-upload failed, or never happened, than it is to expect Commons users and viewers to do audits themselves that they would reasonably have expected a responsible library or archive to have undertaken beforehand. A broad post-1930 cutoff deletion would quickly resolve the issue, without the extended period of non-action from Commons processes. ShakespeareFan00 (talk) 10:33, 9 June 2025 (UTC)
Not to say I think there should be a mass deletion of everything uploaded by Fae, but they imported a ton of stuff from GLAMs that has questionable origins, licenses, etc. And a good percentage of the files have never been organized or checked to make sure they are PD; like photographs that have no evidence of prior publication, but which aren't being deleted because they come from a library or wherever. Deleting everything post-1930 would obviously be an option, but then there's a lot of post-1930 stuff that's still PD in the USA, and it wouldn't deal with pre-1930 works that haven't been published before either. Individual DRs for a million files to make sure it's done right obviously wouldn't scale. But then again, I'm probably against a mass culling. So I don't know. It doesn't seem like there's a good solution here. --Adamant1 (talk) 15:49, 9 June 2025 (UTC)
All books which are in the public domain are also in scope. So I oppose deleting any of them. I have quite a number of reservations about Fae's uploads (notably poor categorization), but I don't see any reason to delete them. Yann (talk) 16:48, 9 June 2025 (UTC)
Books have to have some plausible educational value. If I took a random assortment of non-educational freely-licensed photos off of Flickr and made them into a book, they wouldn't ipso facto become educational or in scope. That said, to your main point, I would certainly prefer if we didn't delete Fae's uploads en masse, but I can understand why someone would want to, as copyright violations need to be taken seriously and Fae evidently just noped out of the project years ago and will not be around to fix the mess personally. —Justin (koavf)❤T☮C☺M☯16:57, 9 June 2025 (UTC)
'Books have to have some plausible educational value' - you are treading on DANGEROUS GROUND right there :P WHO decides? & your "if i made a book of random flickr pics" argument is a straw man at best, particularly in that we are discussing historical published works collected & uploaded on here. Lx 121 (talk) 10:43, 30 June 2025 (UTC)
There's definitely a problem of some of these government collections including works authored by non-government entities, probably by accident. I seem to recall finding a couple of PDFs of mid-20th century novels (unambiguously under copyright) in one of these collections, and I wouldn't doubt for a moment that there's more buried in there. Omphalographer (talk) 20:09, 10 June 2025 (UTC)
Strongly Oppose. I understand the problem, but files should never be prejudged because of file format. Of course, every detected copyvio must be deleted. But no file should ever be deleted "just in case" without any evidence against it. In fact, given certain problems with the Internet Archive (low budget, judicial cases against it, bad backup policy while located in an earthquake-prone area, etc.), I think that the more IA books we host on Commons, the better.
If we don't mass delete JPG images "because they are not verified", the same applies to PDF files. Moreover, most public domain PDF files are "autoverified" (date of publication, lack of copyright notice for pre-1978 works from the USA, etc), so files should only be deleted on evidence. MGeog2022 (talk) 09:54, 21 June 2025 (UTC)
Sorry if I "became passionate" too soon :-). I see now that we are talking only about a very specific mass upload, not about PDF files from IA generally. This is a really complex situation. I maintain that we should keep all public domain (or freely licensed) books imported from IA, but the possibility of having thousands of copyvios is not a minor problem. A possible solution would be to add a template to all those files, saying that there is some possibility that the work is copyrighted, so it is to be used at the user's own risk. Perhaps a software script (specifically developed for this purpose) could be used to find most copyvios (based on author, date, country, presence of copyright notice, or copyright registrations and renewals where applicable, etc.). It would be really complex (though not impossible; AI tools could help, and ChatGPT can tell you that for any well-known book or author), but the script could be kept to do the same with every book uploaded in the future. MGeog2022 (talk) 10:10, 21 June 2025 (UTC)
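As a rough illustration of what such a triage script might look like: the metadata field names below are assumptions (not an actual IA or Commons schema), and the rules are a deliberately simplified subset of US copyright terms, so this is a sketch of the idea rather than a working copyvio detector.

```python
# Illustrative triage of scanned-book metadata against simplified US
# public-domain rules. Field names ("pub_year", "has_notice", "renewed")
# and the rule set are assumptions for the sake of the sketch.

def triage(record):
    """Return 'likely-pd', 'needs-review', or 'likely-copyrighted'."""
    year = record.get("pub_year")
    if year is None:
        return "needs-review"        # no date: a human must look
    if year < 1930:
        return "likely-pd"           # US term expired (as of 2025)
    if record.get("country") == "US":
        # Pre-1978 US works published without a copyright notice
        # entered the public domain immediately.
        if year < 1978 and record.get("has_notice") is False:
            return "likely-pd"
        # 1930-1963 US works whose copyright was not renewed are PD,
        # but renewal status must be checked against renewal records.
        if year < 1964 and record.get("renewed") is False:
            return "likely-pd"
    return "likely-copyrighted" if year >= 1978 else "needs-review"

print(triage({"pub_year": 1925}))                                     # likely-pd
print(triage({"pub_year": 1950, "country": "US", "renewed": False}))  # likely-pd
print(triage({"pub_year": 1985}))                                     # likely-copyrighted
```

Anything the rules cannot decide falls through to "needs-review", which matches the point above: the script narrows the pile for human reviewers rather than deciding deletions on its own.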
@Omphalographer: of course. But evidence about why its status is unclear is needed, case by case. We shouldn't delete any >100-year-old work, any work that is clearly a work of the US federal government, etc. The precautionary principle is applied case by case; it's not "delete everything except the few things that have been carefully reviewed". MGeog2022 (talk) 11:33, 22 June 2025 (UTC)
The precautionary principle is that where there is significant doubt about the freedom of a particular file, it should be deleted. I understand that this doubt arises after having a look at the file, never before doing so. MGeog2022 (talk) 11:35, 22 June 2025 (UTC)
STRONGLY Oppose - any mass deletion; we can & should go through the material item-by-item (especially since the heavily favoured probability is that most of the material in question will be FINE, & we are talking about only a limited fraction that might be problematic). if we find a big enough problem, we can create a PROJECT for sorting it. Lx 121 (talk) 10:48, 30 June 2025 (UTC)
Comment - btw, what is a CCI? i get that IA = internet archive, but i just spent a few minutes on a commons search for cci and didn't get anything useful (thus far) xD Lx 121 (talk) 11:13, 30 June 2025 (UTC)
Latest comment: 9 months ago · 10 comments · 9 people in discussion
The Collections extension is, for some reason, still enabled on Commons, and even has its own "create a book" tool link in the Tools menu (or in the sidebar on older skins). It probably should not be.
This extension was designed for creating printable collections of pages on Wikipedia. It does not serve any useful purpose on Commons. Some users are occasionally using it to create completely useless books, typically consisting mostly or entirely of pages unsuitable for print such as Main Page, the user's own user page, or Special:Upload.
Even in the rare instances where a user has created a collection of gallery pages (such as User:Mrchristian/Books/berlin), there is currently no way for the user to obtain a printable copy of that collection through Wikimedia - Wikimedia's PDF renderer for collections was disabled in 2017, and there is no indication that it will ever be reenabled. The currently available "download as PDF" tool only renders the content of a single page, not a collection of pages. PediaPress is a commercial service, and does not allow the download of PDFs at all.
Support I've deleted all the books with nonsense test edits and self-promotion. A grand total of 6 plausible uses are left - that means this extension has (at most) been used correctly 6 times on Commons. Even if the technical issues are resolved, this is simply not useful for Commons. Pi.1415926535 (talk) 21:22, 30 June 2025 (UTC)
Comment - i am not clear on why this feature is problematic? as long as it is NOT being mis-used in the mainspace, why do we care what or how ppl organize materials in their userspaces (OR for off-wiki end uses)? is it a data-storage &/or data-processing issue? the finished pdf's might gulp a lot of bytes, but the CODE to assemble them surely does not?
the lack of tech support & development is unfortunate, as is the apparent difficulty in exporting the results. but these are technical challenges, & fixable ones. it does not seem like a good reason to completely kill off a potentially very useful feature. Lx 121 (talk) 16:01, 5 July 2025 (UTC)
This feature was being misused in the mainspace until Pi recently deleted the books which users had created there (none of which were meaningful books). More broadly, though - the Collections feature is largely broken as it currently exists. Its objective was to allow users to create downloadable PDF books; it does not currently allow users to do that, it has not allowed users to do that in the last eight years, and there is no indication that this is likely to change at any point in the future. Users can continue to create lists of pages in their userspace, should they desire to do so, but I don't see the continued purpose in having a button on every page that offers to let users "Create a book", but which doesn't follow through on that promise. Omphalographer (talk) 20:14, 5 July 2025 (UTC)
Proposal to convert Fake SVG to timed deletion template
Latest comment: 9 months ago · 10 comments · 7 people in discussion
There are just under 2,500 fake SVGs - which Commons defines as files that use the .svg (vector file) extension but are entirely composed of raster (non-vector) data.
If I have some time, I'd like to go through them and either reupload them under raster image formats or nominate them for deletion. Once we get the number down though, I think it makes sense to convert {{FakeSVG}} to a timed deletion notice (a "fix this or it will be deleted 7 days after being tagged" template like {{Dw no source since}}). I can't think of any good reason to host raster graphics under a vector format. The Squirrel Conspiracy (talk) 00:31, 3 June 2025 (UTC)
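To illustrate what the definition above means in practice: a fake SVG is a file whose only drawable content is a single embedded `<image>` element carrying base64 raster data, with no vector primitives at all. A minimal sketch of such a check (the heuristic here is an assumption for illustration, not the actual criterion used to populate the Commons category):

```python
# Heuristic check for a "fake" SVG: a file whose only drawing element is
# an embedded raster <image>, with no vector primitives. Illustrative
# only; Commons' actual tagging criteria may differ.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"
VECTOR_TAGS = {"path", "rect", "circle", "ellipse", "line",
               "polyline", "polygon", "text"}

def is_fake_svg(svg_source):
    root = ET.fromstring(svg_source)
    # Collect local tag names, stripping the SVG namespace prefix.
    tags = {el.tag.removeprefix(SVG_NS) for el in root.iter()}
    # Fake: contains an embedded raster image and no vector primitives.
    return "image" in tags and not (tags & VECTOR_TAGS)

fake = ('<svg xmlns="http://www.w3.org/2000/svg">'
        '<image href="data:image/png;base64,AAAA"/></svg>')
real = ('<svg xmlns="http://www.w3.org/2000/svg">'
        '<rect width="10" height="10"/></svg>')
print(is_fake_svg(fake), is_fake_svg(real))  # True False
```

A real cleanup pass would also need to handle edge cases this sketch ignores, such as SVGs that mix one large raster with a few trivial vector elements.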
Oppose. Fake SVG files are legitimate images; they will have a modest bloat because the underlying JPEG or PNG file goes through another level of encoding. A few years back some Commons users debated ripping out the underlying bitmap image, uploading it, and then deleting the original SVG. That seems an enormous waste of effort. The files work right now, so they can be left alone. If they are used, then great. If they are not used, then leave them around and do not spend any effort on them. Many of the bitmap-only files could be (or should be) improved to use the SVG vector format, so you get into this strange cycle of uploading the embedded PNG file but then immediately tagging that PNG with {{Convert to SVG}}. The proper course is to consider {{FakeSVG}} as an invitation to improve that SVG file by uploading better versions rather than a slightly misused format that should be eliminated. See Category:Fake_SVG#Replacing_fakes_by_real_SVG. Glrx (talk) 01:34, 3 June 2025 (UTC)
Support The files that are in use should be removed from other projects and replaced with better versions first. Otherwise, I'd probably support it. I'm actually kind of surprised there aren't already efforts being made to phase the files out. I'm willing to support this when, or if, that happens though. --Adamant1 (talk) 07:23, 3 June 2025 (UTC)
Oppose any arbitrary deletions for this reason. KEEPING or DELETING files should be based on the merits of the file, not whether it has been "correctly labelled" or not. I HAVE NO PROBLEM WITH sorting out & correcting the mislabeled items though (& i might be willing to help doing so).
STRONGLY Oppose the idea of automatically deleting files ("timed" or not) simply because they are mis-labeled as svg's. that just seems really dumb & arbitrary & problematic, in so many ways... Lx 121 (talk) 11:45, 30 June 2025 (UTC)
Oppose. I'm fully in favor of deleting "fake SVGs" which have been converted to real raster images (or to better SVGs). But we shouldn't delete newly uploaded images without replacement just because they're technically imperfect. Omphalographer (talk) 23:15, 20 July 2025 (UTC)
Oppose As others have mentioned, the files are totally usable, just could be improved. If they're the best we have of a subject, then keep until such time as we have a better one--then delete as a poorer technical version compared to another we have. That said, I would support (semi-)automated "un-wrapping" process, whereby the internal raster is extracted and uploaded itself, with preservation of metadata. but I wouldn't support blind (semi-)automated vectorizing, because I've seen numerous examples where someone who doesn't actually understand the subject gets actually poorer results due to "better looking graphics" not realizing the meaning is weaker or less-standardly presented. DMacks (talk) 23:56, 20 July 2025 (UTC)
As the given arguments prevail, I'm repeating the proposal:
Commons:Username policy since Special:Diff/355439634 says: "Use of the names of organizations is allowed on Commons only if you verify your account, proving that you are or represent the respective organization."
It has never actually been common practice since then for account verification to be mandatory at Commons. There are currently only 550 transclusions of Template:Verified account.
In the Wikipedias, by comparison, company accounts are either discouraged completely, or account verification is done to lay open conflicts of interest, or to grant company accounts some leeway when updating employee numbers or similar without proof. Nothing of all that applies to Commons; account verification doesn't make any sense here, not least as it doesn't and cannot replace file permissions. The Volunteer Response Team doesn't have procedures for account verification at Commons, nor any capacity to handle them.
At Commons:Volunteer Response Team/Noticeboard, every once in a while there are discussions about such verification, sometimes requested by third-party users, sometimes requested by the affected users themselves, but in nearly all cases for no practical reason, just because the policy says so.
To adjust the username policy to common practice and common sense, I suggest replacing:
Use of the names of organizations is allowed on Commons only if you verify your account, proving that you are or represent the respective organization.
with:
Use of the names of organizations is allowed on Commons. Account verification, proving that you are or represent the respective organization, shall happen only in controversial cases.
Support I agree with the stated proposal to remove the current text. I care so much about this issue that I wrote manifestos for reform at en:Wikipedia:Identity verification and Commons:Role account. My further opinions are beyond the scope of this stated proposal, and I could accept this proposal as is, but I do not like perpetuating the current system of identity verification even though this proposal would reduce it. As an alternative or supplement to this proposal, I would like to reform the identity verification process to be public with what users post on-wiki rather than private in VRT email, especially for corporate identity verification. If the process were more public and automated, then organizations could verify their own identities at will instead of getting a haphazard private review, and the wiki community could better review their identity evidence. This proposal is not about identity verification in general, but only about corporate identity verification. I think it would be too disruptive to quickly reform all of our processes, but starting only with corporate identity verification is a great start. Perhaps we could have a recommended user page template for corporations, and if they want to verify identity, then they fill it out and post to their userpage. Bluerasberry (talk)15:09, 4 June 2025 (UTC)
If "There are currently only 550 transclusions of Template:Verified account", then where is the problem? If companies and other orgs don't want to verify, which doesn't take all that much effort, then they could just use a name that isn't their company name.
Could you or Krd, please respond to the concern and reason for why this policy exists, namely the potential problem of impersonation and otherwise misleading people (for example making them think badly about the org)?
I saw the policy as grounded in the wish to alleviate any fears about mischievous impersonations, as it is actually already worded "[...] proving that you are or represent the respective organization." This is IMHO sensible and should be kept; I do not see the verification as a proxy for copyright statements or the like. Regards, Grand-Duc (talk) 15:12, 4 June 2025 (UTC)
What could such mischievous impersonations look like at Commons? Is there any known case? Who can handle this in the future, if VRT cannot? Krd 15:35, 4 June 2025 (UTC)
Before I could construct a hypothetical case, Jmabel said something below (-> "doing some borderline shitposting") that is likely in the same vein as my thinking. I do not know about an actual case, and as VRT handles the username verifications ("Benutzernamensverifizierungen") in DE-WP, I do not see any technical hurdle to doing the same for Commons (may it be a manpower issue?). Correct me please if I'm wrong. Regards, Grand-Duc (talk) 22:51, 4 June 2025 (UTC)
As I already mentioned in my initial post, the fact that other projects do it is a moot argument. Perhaps it makes sense for them, perhaps not, but Commons is not Wikipedia. If one user misbehaves, they will be blocked, and then perhaps renamed, but that doesn't mean all users have to verify just because one might do wrong. Krd 04:11, 5 June 2025 (UTC)
Oppose per Grand-Duc's explanation. And maybe one could allow/better-enable orgs to prove they actually run the account without VRT involvement. Prototyperspective (talk) 21:33, 5 June 2025 (UTC)
Re "The Volunteer Response Team doesn't have procedures for account verification at Commons, nor any capacity to handle them": We do occasionally do account verifications (see {{Verified account}}) so that prolific uploaders who also upload their work elsewhere do not have to repeatedly send permissions to VRT. To me, whether verification is required does not depend on whether the account represents an organization or a person - it depends on whether the account represents some person/entity that is famous or has an alternate online presence (such as social media or personal website) that would make it theoretically possible for someone to impersonate them by downloading their images from their website and uploading them to Commons. -- King of ♥ ♦ ♣ ♠17:00, 4 June 2025 (UTC)
I'm trying to understand how this fits in with other policies. (1) If an account is verified as belonging to an organization, and that account uploads content for which that organization clearly holds copyright, am I correct that we can skip the VRT process for those uploads? Because that is the main context in which I have seen accounts with organizational names used. (2) If we do not require that accounts with names of organizations be verified, am I correct that we will still have to have a process by which an organization can say, "That account does not represent us" and have it renamed or blocked? For example, someone could wreak quite a bit of havoc opening an account as, say, User:General Motors or User:UNESCO and doing some borderline shitposting. - Jmabel ! talk18:30, 4 June 2025 (UTC)
Per all VRT experience, account verification cannot replace any file permission at all. That a file is uploaded by User:General Motors doesn't at all mean the GM company is the copyright holder of the files they upload. For exactly that scenario VRT does have a permission procedure, i.e. Category:Custom license tags with ticket permission. Krd 04:06, 5 June 2025 (UTC)
If they upload content with a free license on their website we normally trust them that they have the permission to do so. If they do not upload it to their website but directly on Commons we require additional proof beyond who operates the account? This does not make sense. Independent of this I think account verification should not be done on every Wiki but centralised on meta. GPSLeo (talk) 05:12, 5 June 2025 (UTC)
Yes, the VRT offers them additional support to make them aware that they are mistaken and are not the copyright holder, which is the case in the majority of such cases per my VRT experience. Creating the impression that an upload by a verified account is more trustworthy than one by an unverified account puts additional risk on the re-user. You can of course say that it's not our business to care. That's a valid argument, but I'd disagree. We can achieve a better result and at the same time have less work, without user verifications. Krd 07:28, 5 June 2025 (UTC)
Support admittedly this isn't something I have much experience on. Nor do I necessarily understand how it fits with other policies per Jmabel's comment. That said, Krd has enough experience on here to know what they're doing and there aren't any obvious issues with the proposal from what I can tell. So it makes sense. Except for the parts that don't, but whatever. I trust Krd knows what they're proposing even if I don't completely understand it myself. --Adamant1 (talk) 07:20, 6 June 2025 (UTC)
Support I have never requested verification differently than in the proposed phrasing. As a VRT agent, however, I did process requests from institutions that requested verification because the username policy says so.
In general, it makes uploads done by accounts with, for instance, the names of GLAM orgs and (well-known) artists more trustworthy, and those users don't have to request VRT permission for every image, or every batch of images, anymore since the account is already verified through VRT. The verification that the account actually is who they say they are helps them to re-use images they own the copyrights to, and that, for instance, they might have used on their websites or in communications before. This brings down the workload in image patrolling, in deletion requests, in restoring deleted images, and in VRT.
However, it does not mean they are knowledgeable on the topic of copyright, nor does it mean that, for instance, if a GLAM org were to upload an image of a work by Karel Appel, there would be no need to request validation for that specific image. Commons users need to be aware that the verification tag is for the account, and they can still raise concerns about the individual image copyrights as they see them. Ciell (talk) 07:40, 8 June 2025 (UTC)
CONDITIONAL Support - the wording on the proposed change is a bit sloppy (no offense intended). i would suggest changing the wording to THIS instead (the all caps is optional, using it to draw attention to the alteration):
Use of the names of organizations is allowed on Commons. Account verification, proving that you are or represent the respective organization, shall *BE REQUIRED* only in controversial cases.
i see no reason to stop orgs from VOLUNTARILY verifying their username, which is what the original text of the proposed change would seem to mandate:
Use of the names of organizations is allowed on Commons. Account verification, proving that you are or represent the respective organization, shall happen only in controversial cases.
A reason for not offering voluntary verifications could be, as stated above, that there is no defined procedure and no resources. Who will be handling these voluntary verifications? Krd 17:56, 20 July 2025 (UTC)
Oppose as worded The reason why we allow organization accounts on Commons is because we want to make it easy for organizations to upload files under free licenses. The reason why we want to verify such accounts is because, if there's any question as to whether an account is actually controlled by the organization it claims to be, per the precautionary principle, we have to delete. I'm not sure what "controversial cases" actually means here, but I can see it being misinterpreted in ways that are counterproductive (interpretations of "controversial cases" where the organization is controversial in the opinion of the person demanding enforcement.) What I would support is a revision that doesn't make it mandatory, but that's much more explicit about why verification for organization exists. For example, Use of the names of organizations is allowed on Commons. Whether an account uses the name of an organization or not, if an account uploads a work that was previously published elsewhere not under a free license, or if there is another reason to doubt that the account is authorized to release the work under a free license, the account will be required to verify that they are authorized by the copyright holder to release that work. When an organization account goes through this verification process for the first time, their account will be marked as verified, so that they won't need to go through the same process for every upload. Organization accounts can also proactively verify before they begin uploading, to make the process smoother. (The wording can get tweaked / condensed; I'm more fussed about the rationale for verification being explicit). The Squirrel Conspiracy (talk) 11:06, 4 July 2025 (UTC)
Please see above: There seldom is any doubt that an organization account is operated by the organization. Even if there is some marginal doubt, it is not relevant, and proving that is not the point. The point is that organizations often think they are copyright holders of their stuff, i.e. their CEO photo or their shop-front photo, while they are not. Having such an account verified not only addresses the wrong part of the problem, but also creates the wrong impression of a verified permission, which it isn't at all.
Copyright cases are different from one file to the next. It doesn't make any sense at all to verify an account because the first file is a work-for-hire portrait when the next photo has no permission from the photographer and is a DW / has FOP issues. If an upload of an organization is doubtful, permission should be obtained for the doubtful file, regardless of whether the account is verified. If it doesn't matter anywhere whether an account is verified, account verification is useless work. --Krd 06:31, 6 July 2025 (UTC)
Comment - i have no problem with tightening up the wording on "controversial". i do think it would certainly apply in any case where the claim to represent a particular org is disputed & there is a clear need for verification (which covers most of the big legal concerns), &/or when the user account in question is acting strangely or problematically. (whether the org itself is "controversial" is a MUCH stickier problem...) Lx 121 (talk) 15:51, 5 July 2025 (UTC)
I don't believe that corresponds to what Krd said is how the VRT views the matter. @Krd: would you care to comment? - Jmabel ! talk00:21, 5 July 2025 (UTC)
Oppose I would like the current policy to be more diligently enforced. Any account that uploads on behalf of an organization should be verified. Yann (talk) 15:41, 4 July 2025 (UTC)
Support as Túrelio. Trusting Krd here if he says that the Commons VRT team does not have any capacity to handle account verifications at Commons and regular file permissions would be needed anyway. --Rosenzweigτ15:57, 4 July 2025 (UTC)
Hence my position that the verification should be tacked onto the permission request. An org account sends in permission, and when the VRT agent tags the file as having permission, they just tag the account as being verified. The Squirrel Conspiracy (talk) 03:50, 5 July 2025 (UTC)
Support with LX 121 wording changes. I'd prefer to have corporate accounts verified, but I'll trust in what VRT agents are saying and the policy can be changed. (I'll note that Krd left me a talk page message asking for feedback, which I don't consider canvassing because I hadn't decided before I read all the responses). Abzeronow (talk) 21:28, 4 July 2025 (UTC)
Comment I am still not able to comprehend what "shall happen only in controversial cases" implies: how do we identify what is controversial and what is not? Imagine me creating an organisational account for the university I studied at (hitting no controversy), and then uploading stuff under free licenses. This would be misconduct, yet "controversial cases" fails to recognise it. VRT definitely does not have the capacity to handle organisational account verification requests, and I believe it should be pushed to the Foundation to curate a designated email queue to officially verify such accounts. TSC makes some good points. Even though it is not worded quite right, I will probably think of it as Accounts with organisational names are allowed on Wikimedia Commons subject to mandatory account verification. (How? That's a challenge, because VRT does not have that capacity.) I will support a re-phrase rather than an omission. I deem organisational account verifications mandatory because there's an inherent risk, as Yann points out above. signed, Aafi (talk) 04:36, 5 July 2025 (UTC)
Can you elaborate on what the inherent risk is in your opinion? Yann said organizations should be verified but didn't explain why. (Or did you perhaps link the wrong username?) whym (talk) 10:23, 10 July 2025 (UTC)
The inherent risk that I perceive is all around impersonation (pretty much clear in the example I mentioned). We can't check which email an account is registered with (unless we have CU/lookinfo permissions - VRT doesn't usually have those). I don't want to put the load of verifying accounts on my co-VRT agents, but making verification mandatory for controversial cases alone seems odd. How do we decide what is controversial? Isn't this controversy in itself a result of the inherent doubts and risks that are plainly visible? I quoted Yann rightly about mandatory organisational verifications. However, how the verifications should happen is a question to debate. I don't have a very clear way around that, except that probably the Foundation helps the community. signed, Aafi (talk) 17:25, 20 July 2025 (UTC)
Oppose per Yann. The need for account verification is a must as anyone can create an account that "represents" an organization and this will be a problem with licensing files. --Min☠︎rax«¦talk¦»06:39, 5 July 2025 (UTC)
I would not block an organizational account which doesn't upload anything, but what's the point to have an account on Commons which can't upload anything? Yann (talk) 07:41, 5 July 2025 (UTC)
Oppose At least in the currently proposed form. I think we should try the following first: 1. A procedure for transferring a verification done at another wiki to Commons, or (better) global verification. 2. A call for more people to join the VRT. GPSLeo (talk) 07:57, 5 July 2025 (UTC)
Which VRT queue is supposed to be used for this now (and in the past)? I have access to permissions queues on VRT, and I cannot view the ticket linked from User:Jeff_G.. ([1]) --whym (talk) 13:30, 5 July 2025 (UTC)
Support I believe the proposal will reflect the reality more closely than the existing one. If people wish more proactive verification of organizations than now, that could be a good thing, but it would need to be done by a new set of volunteers who commit to perform the verification. A wording change (back) alone won't accomplish what they want. Also, I think it won't have to be mainly done using the VRT system - organizations often can and do use their official websites to verify accounts on external platforms. whym (talk) 12:13, 17 July 2025 (UTC)
Support per Túrelio and using the better wording as suggested by Lx 121. As of now we have 19,403 verified accounts at de:wp with a lot of tracking behind it. Commons and the VRT team handling Commons are absolutely unprepared for a mandatory process of this scale. We can and should ask for verifications in selected cases whenever serious concerns arise about a particular user account. A few cases are easier to handle than all such user names. And I also agree that a verification can never be a shortcut to handle permissions in individual cases. --AFBorchert (talk) 07:20, 19 July 2025 (UTC)
Support as per Krd. I note that almost all opposing votes, if not all, come from non-VRT members. They can of course state their opinion, but that is quite interesting. Christian Ferrer (talk) 18:30, 20 July 2025 (UTC)
Proposal: Tighten upload process and related policy.
Latest comment: 9 months ago18 comments7 people in discussion
Given that uploaders are expected to know and understand the status of works they upload, I am asking for opinions about tightening the upload process and related policy for post-June-2025 uploads.
Namely :
Unless a (revocable) trusted uploader status is held by a user, any upload which has a date between 1930 and 2005 (essentially the pre-Commons era) is automatically flagged for license review and indicated as such, the relevant template being added during the upload process. Bot users, GLAMs and bulk uploaders would be able to apply for "trusted" status.
Commons should keep a list of Users granted 'trusted' status.
Uploads utilising specific 'exemption' licenses such as {{PD-USgov}} should include a rationale as to why the exemption applies. In respect of 'non-renewal' of US works, this rationale should ideally include original registration numbers in the Catalog of Copyright Entries or Copyright Record Books. (Commons trusts its users to have undertaken a reasonable effort to identify the status of a work.)
Uploads from Internet Archive, Hathi and Google should not use bare links but the appropriate templates, with bare links being converted by an appropriate bot. As part of the use of these templates, the additional reviewed template should be added either by trusted uploaders or during the license review. On new uploads, identifiable bare links are bot-migrated to the templated forms.
Works uploaded without a license are auto-flagged for deletion during the upload process, or by a bot subsequently.
Unchallenged speedy/copyvio deletions are deleted on expiry of the time period in the relevant template (admin-bot enforced). Striking this as comments below say this already happens.
Uncontested deletion requests (those that get no replies) are, after the expiration of the consultation period, automatically closed, with the files being removed but with undeletion information being given when the request is auto-closed.
Abolish "no consensus to delete" outcomes. A file is either 'Delete' or 'Keep' with a decision reached.
On uploads of books or publications, if multiple volumes, printings or editions exist, then it is the date of the most recent applicable edition that is used to determine the license, if there is no information about earlier ones. (This would mean that if a 'revised' edition of a work by the same authors exists, it is the revised edition that is used to determine whether the status is acceptable on Commons, even if the earlier edition would otherwise be permitted. This is to ensure text integrity.)
Didn't understand all the points you propose but Oppose for now on the point Abolish "no consensus" outcomes. A file is either 'Delete' or 'Keep' with a decision reached. – if there is no consensus either way then it's not good to claim or call it otherwise; moreover, I don't know what you even mean by that, since files are already either kept or deleted. Also #5 Works uploaded… is already done. Prototyperspective (talk) 10:39, 6 June 2025 (UTC)
A firm decision has to be reached about retaining (Keep) or removing (Delete) a file, with a clear rationale for either outcome. "No consensus to delete" cannot be used as a reason for closing the DR, which typically favours a file being retained. If a file/media is retained, a compelling rationale for doing so should be stated on closure (which could include a 'Withdrawn (Kept)' if the nominator found new information, for example). Most DRs I've been involved with end in a clear outcome.
Commons needs to have a "can we realistically prove its status (and usability)?" culture, rather than a "retain because we couldn't work out what was wrong" culture. Quite a lot of the contributors I've dealt with have the former rather than the latter stance, though, and such a shift would, in my view, further the sort of academic approach Wikimedia projects aim for. ShakespeareFan00 (talk) 12:30, 6 June 2025 (UTC)
"No consensus for deletion" is a necessary rationale for DR closures that concern COM:SCOPE rather than questions concerning copyright. Not all DRs are about copyright issues and therefore consensus might matter. Some cases of De Minimis might also fall under "no consensus" (while others might fall under COM:PCP). Nakonana (talk) 13:11, 6 June 2025 (UTC)
1. There's already the user group "autopatrolled" for that, I think?
7. Oppose Such DRs shouldn't be auto-closed / automatically deleted, because a lack of replies doesn't necessarily mean that there's no opposition to the DR. It could just mean that the DR in question did not attract anyone's attention and that nobody has looked at it (yet). It still requires at least one human user to assess the validity of the DR rationale. A bot cannot make such an assessment based on such a non-indicative criterion as "no replies". "No replies" doesn't say anything about copyright, SCOPE, etc. Nakonana (talk) 13:21, 6 June 2025 (UTC)
Not a specific tag for license review, but autopatrolled means that the upload doesn't show up in "recent uploads" (or at least that's how it was explained to me here). Nakonana (talk) 17:11, 6 June 2025 (UTC)
Oppose Although purely procedurally. The upload/deletion process clearly needs to be tightened and there are some good ideas here about how to do that. I just don't think it's a good idea to combine 9 different ideas into the same proposal. Each one needs its own separate discussion and consensus. Otherwise it just risks turning into a potshot of random support and oppose votes without a clear outcome and no way to deal with it. --Adamant1 (talk) 13:56, 6 June 2025 (UTC)
@Adamant1: Another idea not listed here is going to be controversial, and that is that the upload process should look for specific authors, titles or publishers, and warn during the upload process. PG has a set of renewals for post-1964 "Books" (Commons also has a complete run of the CCE to 1978!), and if those sets of data were in Wikidata, it would not only be useful bibliographic data to have, it would potentially be a means of having a more friendly Wikimedia way of doing pre-upload screening, warning uploaders, rather than the "Never upload anything by Disney!" filters more aggressive rights holders want to mandate on hosting sites. Of course getting that data into Wikidata would be a Mars-shot project of itself. :( ShakespeareFan00 (talk) 15:55, 6 June 2025 (UTC)
That's an interesting idea. I thought about something similar a while back, where the uploader could put in the creator's name and it would check the copyright status of their works on Wikidata, since that information exists for a lot of people, if not for individual works yet. Although it would require them to put in the name to begin with, it's better than nothing. Maybe a lot of this stuff can be semi-automated once AI gets better. Who knows. Something needs to be done about the amount of copyvio that gets uploaded on here, though. --Adamant1 (talk) 16:01, 6 June 2025 (UTC)
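To make the screening idea discussed in the two comments above concrete, here is a minimal sketch: check an upload's author and title against a local dataset of post-1964 renewal records (e.g. one derived from the Catalog of Copyright Entries). The RenewalRecord fields and the exact-match logic are hypothetical illustrations, not an existing Commons, Wikidata or Project Gutenberg API; a real screener would also need fuzzy matching and date filtering.

```python
# Illustrative sketch only: the data model and matching rules are assumptions,
# not any existing Commons/Wikidata/PG interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class RenewalRecord:
    author: str
    title: str
    renewal_id: str  # e.g. a CCE renewal registration number


def normalize(s: str) -> str:
    """Case-fold and collapse whitespace for loose matching."""
    return " ".join(s.lower().split())


def screen_upload(author: str, title: str, renewals: list) -> list:
    """Return renewal records matching the proposed upload, so the
    uploader can be warned before the file is transferred."""
    a, t = normalize(author), normalize(title)
    return [r for r in renewals
            if normalize(r.author) == a and normalize(r.title) == t]


renewals = [RenewalRecord("A. Author", "Example Novel", "R123456")]
# Matches despite differences in case and spacing:
hits = screen_upload("a. author", "Example  Novel", renewals)
```

If the CCE data were ever imported into Wikidata, the in-memory list would instead be a query against that dataset, but the warn-before-upload flow would stay the same.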
Oppose - too restrictive, & also off-putting to users. what we have now works reasonably well; & if/when/as new problems develop, we can fine-tune our approach. there is no need to "bring the hammer down" rn; too much of a hammer/blanket/one-size-fits-all solution for a problem that we already have a process for.
also, on the matter of usa pre-1978 copyright NON-RENEWAL; ultimately the legal onus would be to show that a copyright WAS renewed, not to prove that it wasn't.... {insert joke about proving the negative here} ;P - Lx 121 (talk) 10:14, 30 June 2025 (UTC)
AND i do not like the "countdown" to automatic deletion if no one opposes IN TIME. that is pretty much ALWAYS a terrible idea. we should not have to babysit commons' files to make sure that they do not get deleted "by default", NOR should we have to waste our time lurking/stalking deletion discussions in order to prevent it. it MAKES EXTRA WORK for everybody.
& the list-keeping of "trusted" users just for uploading is another "CREEP" in administrative overhead & privacy-intrusiveness (& a fairly massive one at that). & we would STILL have to police those same "trusted" users to make sure that they aren't violating that "trust". so the net result is MORE WORK (again), & barely any real gain in the copyright-quality of the uploaded material as a whole, AND we make things @ commons a little bit MORE off-putting to noob users. Lx 121 (talk) 19:49, 5 July 2025 (UTC)
AND in practical terms, eliminating "no consensus to delete" outcomes means that either: a) unresolved deletion discussions GO ON & ON & ON & ON...... OR b) (more than likely) "no consensus" becomes DELETE BY DEFAULT. neither of which is an attractive result. Lx 121 (talk) 19:54, 5 July 2025 (UTC)
and the user who drafted the proposal basically ADMITTED that deletion by default IS their goal, in some of their ^above^ comments. Lx 121 (talk) 19:59, 5 July 2025 (UTC)
ALSO - the proposal as written has way too many separate proposals all lumped together(!) it would have been better to deal with these item-by-item, one at a time. Lx 121 (talk) 20:17, 9 July 2025 (UTC)
There are many different things proposed here. Support the basic idea behind the first point (of course, only if it doesn't result in a big work overload for administrators or license reviewers), but only for non-own-work files, and with a date far earlier than 2005 (let's say, 1940?). Totally Oppose point 7: if for whatever reason nobody looks at the DR, this could even lead to a vandal being able to get a file deleted by nominating it for deletion. MGeog2022 (talk) 10:38, 20 July 2025 (UTC)
Deemed approval of proposals
Latest comment: 9 months ago32 comments17 people in discussion
There are many proposals with very low discussion and voting participation. We currently do not have a clear guideline on whether these proposals are considered accepted or not. I therefore suggest that we introduce a deemed-approval policy: if there is no opposition to a proposal, it is considered approved after a given time. The steps needed would be the following:
There is no opposition against the proposal indicated by an Oppose mark.
The last comment on the proposal section was 14 days ago.
After the 14 days there is a "deemed approval warning" to be posted on the Commons:Village pump.
If there is no opposition after 7 more days the proposal is considered accepted.
if your proposal were adopted, would my comment here be considered as not opposing because I didn't use the {{O}} template?
what if someone responds with a comment requesting a clarification, and the original poster ignores it? Does that not put things on hold unless someone overtly opposes? - Jmabel ! talk18:41, 12 June 2025 (UTC)
For the first point I would suggest that the user has to be asked whether this should be considered a veto against a deemed approval or not. Same for the second: the user with the question should be asked whether this should be considered as opposing as long as the question is not answered. If the proposer and the user who had the question do not react, I would consider this a dead proposal if there are no other users supporting it and not even the proposer reacts to comments. GPSLeo (talk) 19:56, 12 June 2025 (UTC)
Oppose. Policy? Why not a deemed denied policy? A proposal with little participation does not sound like something that should get automatic approval. Foundationally, deemed approval is dubious. We do not want involved parties closing a discussion; this lets involved parties do that. What about proposals that have been shot down in the past but are ignored today? Is this an opportunity for forum shopping? What about technical proposals that may have implications that few understand? If a proposal does not find good traction, then it should just fade away. Glrx (talk) 14:13, 14 June 2025 (UTC)
Support. The de facto situation is that it seemed that only Admins were allowed to approve, and all active Admins were allowing many sections to just wither on the vine. In parliamentary proceedings, proposals sometimes start with "without objection ...", perhaps we could consider that here, with one "support" considered as "seconded". — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:05, 14 June 2025 (UTC)
I would take lack of response as "not sure". Why not have a small scale trial when there is no substantial reaction? A trial might provide some data for people to discuss about. By trial, I mean something like "let's try holding this event just one time at a limited scale" or "let's try applying this rule for the next 2 weeks", without making it a done deal. whym (talk) 00:54, 15 June 2025 (UTC)
I've always understood a lack of response as tacit approval. Otherwise nothing could be done on here due to the inherently low participation with most discussions. But to me, if no one disapproves of something then it should be treated the same as approval. --Adamant1 (talk) 20:49, 15 June 2025 (UTC)
In my experience, Centralized Discussion is basically useless. It's not uncommon for discussions to sit on the list for years, long after any actual discussion has ended. (For instance, until a few minutes ago, there was an 18-month-old discussion on the list, with the last comment just over a year ago.) Omphalographer (talk) 19:47, 19 June 2025 (UTC)
It's not about whether actual discussion "has ended" in your view but whether the issue has been decided/settled or not. If it hasn't been closed and the question is valid and of significance, then it makes sense to let it sit there. The problem would be just with too few users weighing in on these. Prototyperspective (talk) 20:29, 19 June 2025 (UTC)
The template describes itself as "a compact index to active discussions of potential community-wide interest" (emphasis mine). It's not meant as an index of every discussion ever which failed to reach a definitive conclusion; that would make it even less useful than it currently is. Omphalographer (talk) 20:37, 19 June 2025 (UTC)
It's still active if it is of substantial large-scale community-wide interest and the last comment is from over 1 year ago if it's not yet solved, especially if it's a complex difficult subject. Such need more users to participate and links to these belong there which has few links anyway. Prototyperspective (talk) 11:36, 20 June 2025 (UTC)
Oppose No need for this; a case of a solution in search of a problem that may cause problems when, e.g., people don't comment on something because they didn't see it as a proposal, or because there are technical difficulties making people unable to comment, or when it's something absurd on which people don't comment so it gets archived off the page. It's not too much to require at least one support comment. If there is an approval policy then it would need to be far more nuanced and, e.g., consider account age/experience, so that lots of new non-Wikimedia accounts commenting with that template doesn't lead to proposals being accepted; however, there is no need for this. Prototyperspective (talk) 11:41, 20 June 2025 (UTC)
So you propose that if a proposal has one support and no oppose vote it is accepted as long as these accounts are not new? GPSLeo (talk) 05:42, 21 June 2025 (UTC)
There is no need to define that. It depends on the case, maybe usually yes but that's pretty rare. (Also, it's not just whether accounts are new but whether or not or how much they are experienced contributors who for example understand what the implications of the proposed changes will be and some other things.) Prototyperspective (talk) 12:26, 21 June 2025 (UTC)
Oppose I am sympathetic to the problems of getting the Commons community to turn out for discussions. But a silent consensus is not sufficient to show the broad consensus required to create policy. --AntiCompositeNumber (talk) 20:17, 21 June 2025 (UTC)
replacing human decision (that is based on reason, logic and common sense) with mechanical stringent rules is bad. RoyZuo (talk) 13:45, 23 June 2025 (UTC)
If this doesn't go anywhere then there should at least be a requirement that proposals with low or no turnout after 7 days are announced on the Village Pump to increase participation. Otherwise, what's the alternative here? Just accept that most, if not all, proposals are going to be rejected on their face just because no one votes on most of them? If so, that seems kind of meh. --Adamant1 (talk) 17:24, 23 June 2025 (UTC)
Oppose lack of enthusiasm is a silent “oppose” vote. I’ve closed lots of project proposals on Meta not because there was a consensus against them, but a complete lack of enthusiasm for them. Unopposed DRs (which seem to be the precedent here) get closed as “delete” because deletion is cheap (files can be undeleted at any time and we have over 121 million files, 90% of which are wholly redundant and never see use). Implementing a new rule is not cheap. There should be widespread unambiguous support for the rule. Dronebogus (talk) 22:08, 24 June 2025 (UTC)
Unopposed DRs (which seem to be the precedent here) get closed as “delete” because deletion is cheap (files can be undeleted at any time. Really sad, unless copyvio is reasonably suspected. For example, Wikipedia page history allows one to see past versions of articles, but their images may be absent if they were deleted. In addition, deleted files don't save storage space on WMF servers (they are kept in the same way as if they were not deleted). I understand that the thousandth photo of a person's intimate parts is deleted. I can't understand that the thousandth photo of a cocker dog may be deleted only because there are better ones. Aside from copyvios and legal matters, the reason for out-of-scope deletions should only be to prevent Commons from becoming what it isn't (a site for amateur art, porn, social networking, etc). Other than that, it's destructive nonsense. MGeog2022 (talk) 09:36, 27 July 2025 (UTC)
This is supposing that "out of scope" is the proposed reason: it may be the best image on its topic, but, if unused, it may be deleted only because nobody noticed the DR, according to what you said. Yes, undeletion is easy, but someone must notice the file's deletion first. It should not be taken for granted that the file's uploader will remain here, let's say, 25 years after the upload, to protect his/her file or request its undeletion. If what you said, as you said it, is right, this seems to be one of the worst problems facing Commons. What's the point in taking all precautions with speedy deletion requests (something that is fully needed, no doubt), when even a vandal can get a file deleted by using a normal DR? MGeog2022 (talk) 09:50, 27 July 2025 (UTC)
Sorry for so many replies, but this has ignited me :-). In the same way we have a precautionary principle to delete copyvios, perhaps another precautionary principle is needed to keep non-copyvio files. Deletion or non-deletion is not based on non-opposition, nor on votes: it's based on policies. Non-opposition to a DR doesn't mean that the file doesn't comply with policies. Even when policies are subjective, as is the case with project scope or redundant files, the opinion of 1 or 2 persons isn't enough to settle it. So caution should be applied: there is no damage in keeping a file that has some remote chance of being out of scope or being totally redundant. There is huge damage in doing the opposite. MGeog2022 (talk) 10:08, 27 July 2025 (UTC)
I can't understand that the thousandth photo of a cocker dog may be deleted only because there are better ones. When I say this, I'm talking about having variety. If lots of people start uploading photos of their own pet, house, car, etc., of course it should be stopped, but always respecting all pre-existing files, unless they are of extremely low quality. If a topic is really well covered, including all the variety that is deemed convenient (and that's a very large number of files for most topics), new uploads can be restricted to high-quality files, but decent files that have been here for years, maybe decades, shouldn't be removed because of this. Wikipedia takes care of its own past (the full edit history is available for all pages, except for deleted revisions); why shouldn't Commons do the same? MGeog2022 (talk) 10:58, 27 July 2025 (UTC)
Keep in mind that not all images are photographic. We delete unused diagrams somewhat regularly, typically because they're of low quality, outdated, inaccurate, or are otherwise unlikely to be reusable. Omphalographer (talk) 16:54, 27 July 2025 (UTC)
Inaccurate or very low quality: out of scope.
Outdated diagram: if formerly used in a Wikipedia article, it is very useful when viewing past versions of the article. It's a pity how little thought is given to historic preservation. It's similar to when some TV stations reused video tapes, erasing historically relevant footage. Maybe even worse here, since storage doesn't cost the WMF that much, and no space is saved by deleting anyway, since deleted files are (fortunately) kept in storage.
Returning to the original topic, I refuse to believe that this is true:
Unopposed DRs (which seem to be the precedent here) get closed as “delete” because deletion is cheap (files can be undeleted at any time)
In fact, this very conversation will be preserved and publicly accessible indefinitely, as part of Village Pump's archived pages. I think that any image that was used for years in Wikipedia has far more historical value. MGeog2022 (talk) 20:01, 27 July 2025 (UTC)
Oppose with the same reasoning as Dronebogus/Christian Ferrer. However, I think it should be common sense that approved policies can be opened up for discussion again when enough opposition has accumulated after they were originally approved/adopted. Meaning, a hypothetical 6:3 policy should be open for debate again on this proposal page once at least three opposing voices have gathered on the policy talk page, given that the post-adoption votes turn it into a 6:6 policy. Such a rule would allow proposals with very low support to be adopted, but rescinded later if they turn out to be disadvantageous. --Enyavar (talk) 19:08, 27 June 2025 (UTC)
Oppose - respectfully, this suggestion is "ass-backwards"; any proposal should be considered "rejected" by default IF/UNLESS there is some clear indication of consensus-support. if one person says "hey let's set the building on fire" & nobody responds/objects, that DOES NOT mean the proposal should be accepted.... :P Lx 121 (talk) 09:25, 29 June 2025 (UTC)
Strongly Oppose, terrible things could be approved this way. Maybe some kind of "This proposal needs attention" warning could be used, or, simply, the closing admin could judge whether the proposal is right or wrong and approve or reject it accordingly (this is far from ideal, but it would be far better than approval by default, or than approving or rejecting based on only 1 vote from an ordinary user). MGeog2022 (talk) 09:54, 27 July 2025 (UTC)
Enforce Cross-Wiki upload restrictions
Latest comment: 8 months ago43 comments14 people in discussion
Proposal passes. If cross-wiki uploads are not turned off in software for non-autoconfirmed users by August 26 (two months from today), the edit filter will be activated. Pi.1415926535 (talk) 22:50, 26 June 2025 (UTC) Edited 23:05, 2 July 2025 (UTC) to clarify. EDIT: Start date changed to August 16 to have more time between this change and temporary account activation. GPSLeo (talk) 15:47, 18 July 2025 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
In August 2024 we had a clear consensus that we need to restrict cross-wiki uploads using the mw:Upload dialog. That decision was announced on meta multiple times (most recent) and also tracked on phabricator phab:T370598. But nothing has happened to implement this properly. Therefore I propose that we demand a technical implementation to restrict uploads using the Upload dialog. Upload via this feature should only be possible for users who are (auto)confirmed either on Commons or on the wiki where they are using the Upload dialog, or even more restricted. If there is no solution by August, we change our already existing AbuseFilter to block all uploads via the Upload dialog by users who are not (auto)confirmed on Commons. This would mean that users see the upload feature when editing, but the message that they do not have sufficient rights is only shown after the upload process. GPSLeo (talk) 11:09, 15 May 2025 (UTC)
Support as long as we can provide clear messages explaining what is going on when people are refused this possibility. - Jmabel ! talk 17:50, 15 May 2025 (UTC)
Support unfortunately, the interface used to upload through the visual editor is inadequate to handle/screen uploads -- information about licensing/ownership isn't sufficient, and it will upload even if the user never actually saves their edits (with no communication to the user that this is the case). Hopefully there will be improvements, but in the meantime it's too much of a risk to continue allowing this. — Rhododendrites talk | 23:17, 25 May 2025 (UTC)
Support as it currently is, as everyone has already mentioned, it's a hazard waiting to cause confusion again. I'd actually disable it even for people who are autoconfirmed on only the non-Commons wiki, as it doesn't take much to become autoconfirmed and you may still have no idea what Commons is. The Tduk (talk) 02:36, 26 May 2025 (UTC)
And I noticed that warning messages are not possible. When the warning message is shown there is no save button visible. GPSLeo (talk) 21:08, 28 May 2025 (UTC)
This is the filter in production (I set it to block uploads by the account TestLeo), and it works. The thing that does not work is showing a message in the filter's warning mode that allows uploading after confirming the warning. The warning mode displays the same way as the blocking mode, with the proceed button greyed out after closing the message via the dismiss button. GPSLeo (talk) 09:10, 30 May 2025 (UTC)
Thanks, I think I understand it now. I wonder if it's a bug to be filed and resolved in the AbuseFilter side, if not in the upload dialog. whym (talk) 09:48, 1 June 2025 (UTC)
The problem is in the Upload dialog and if someone touches this tool to make warning abuse filters working they could just implement the hiding of the tool for users with insufficient rights. GPSLeo (talk) 16:09, 1 June 2025 (UTC)
The idea with the warning was to notify users that this tool will be limited soon. But it would not be that informative anyway, as most users who see the message will be autoconfirmed until the block on non-autoconfirmed users is in force. GPSLeo (talk) 17:51, 1 June 2025 (UTC)
Ah, now I understand. And I agree that the warning is not necessary, because the blocking will only affect newbies, and they are often editing for the first time when they also try to upload an image. Do you know if it's possible to translate the filter notice to other languages or is it only going to be in English? kyykaarme (talk) 17:06, 2 June 2025 (UTC)
I think a better wording for the "headline" would be "You do not have sufficient rights on Commons to perform cross-wiki uploads." Clearer about what rights are in question, and about what exactly they cannot do. - Jmabel ! talk 23:27, 28 May 2025 (UTC)
Shall I create a Phab task asking implementation to restrict the Upload dialog then? I see consensus so far. —George Ho (talk) 12:42, 1 June 2025 (UTC)
Nonetheless, I think I made the task broader than intended, didn't I? From what I read, the upload dialog and FileEx/Imp are different tools, and there is not yet support for restricting tools other than the upload dialog, especially per the Meta RFC discussion. I'm thinking about creating a child subtask (to that existing parent task). George Ho (talk) 00:12, 2 June 2025 (UTC)
I think there are no other tools that work like the Upload dialog. File import is already restricted. GPSLeo (talk) 05:06, 2 June 2025 (UTC)
It seems that the assumption behind the proposal is that cross-wiki uploads by new users are worse than average. How do we know that is true? I compared uploads by new users with cross-wiki-upload tag and all uploads by new users in the same date range. I found no evidence that CWU is worse in terms of the ratio of deleted uploads. One example: 15110 uploads and 5193 deleted (34.4% deleted) regardless of tags vs 3455 uploads and 968 deleted (28.0% deleted) for the cross-wiki-upload tag, both between January 11 ("20250111") and January 30 ("20250130"). This is only a quickly done database query. I'd be happy to correct any mistake I might have made (or you can fork it and run a modified query, if you want). --whym (talk) 06:11, 8 June 2025 (UTC)
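For anyone who wants to double-check the percentages quoted above without re-running the database query, the arithmetic is simple division; a minimal sketch (the figures are the ones given in the comment above):

```python
# Deletion-ratio comparison for new-user uploads, Jan 11-30 2025 sample.
# Numbers are copied from the quarry query results quoted above;
# this snippet only redoes the division to verify the stated percentages.
all_uploads, all_deleted = 15110, 5193   # all uploads by new users
cwu_uploads, cwu_deleted = 3455, 968     # uploads with cross-wiki-upload tag

all_ratio = all_deleted / all_uploads
cwu_ratio = cwu_deleted / cwu_uploads

print(f"all new-user uploads: {all_ratio:.1%} deleted")   # 34.4%
print(f"cross-wiki uploads:   {cwu_ratio:.1%} deleted")   # 28.0%
```

This confirms the 34.4% vs. 28.0% figures; whether out-of-scope deletions should be excluded from these counts, as suggested below, is a separate question.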
The statistic would need to be cleaned of out-of-scope deletions, which are much less problematic than copyright violations. All files uploaded through the cross-wiki upload are in use and therefore in scope, unless they are removed from the article. But that is complicated to evaluate. GPSLeo (talk) 08:11, 8 June 2025 (UTC)
In my opinion the issue is not whether they're worse than the average: the issue is that they're problematic and that's why something needs to be done. The WMF's own study showed that newbies trust that because they're able to upload an image with a couple of clicks it must be fine, because otherwise Wikipedia wouldn't let them do it. Then they get slapped with a deletion notice and told they're infringing on someone's copyright. If newcomers upload deletion-worthy images with the UW as well, then that can be addressed by modifying the Wizard. With the cross-wiki tool the newbies don't even have a chance of finding out what they're doing wrong before they've already done it. kyykaarme (talk) 09:01, 8 June 2025 (UTC)
I see this message in the cross-wiki upload dialog: "If you do not own the copyright on this file, or you wish to release it under a different license, consider using the Commons Upload Wizard" (link), and you are supposed to agree to "I attest that I own the copyright on this file, and agree to irrevocably release this file to Wikimedia Commons under the Creative Commons Attribution-ShareAlike 4.0 license, and I agree to the Terms of Use" (link) before uploading. Newbies might ignore it or fail to understand, but I don't think it's completely accurate to say they are not given a chance to know. It's not too difficult to modify these MediaWiki messages to include more information - we don't need the apparently non-existent developer resources for just changing text. whym (talk) 09:20, 17 June 2025 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Activation date
@Pi.1415926535: you have scheduled the activation for August 26. I have some concerns with that date because of the planned activation of temporary accounts in September. There is no exact date, but it could be at the very beginning of September. Both are potentially major changes to moderation procedures. I therefore think we should not have both changes within 7 days of each other. Do you think it would be fine to activate this one or two weeks earlier? GPSLeo (talk) 07:29, 3 July 2025 (UTC)
Latest comment: 8 months ago27 comments16 people in discussion
Hello,
I’d like to propose two amendments of Commons:AI-generated media. Firstly, that it gets upgraded from guideline to policy.
Then, a content change: in my opinion, Commons should ban the vast majority of AI-generated media, at least for the time being. I'm advocating a wait-and-see approach.
This follows directly from ongoing judicial challenges in the US and from our existing policies Commons:Fair use and Commons:Precautionary principle. The purpose is to avoid harm to Commons, should courts deem the current practices of AI companies to be against the law. I'll detail my reasoning below.
I became aware of several court cases through articles in several news outlets (partly in German, but they usually cite their sources) about the broad subject of AI-generated media:
https://www.theguardian.com/technology/2025/jun/25/anthropic-did-not-breach-copyright-when-training-ai-on-books-without-permission-court-rules “Anthropic did not breach copyright when training AI on books without permission, court rules“ – but this relies on US fair use provisions, according to the judge. Excerpt: „Alsup added, however, that Anthropic’s copying and storage of more than 7m pirated books in a central library infringed the authors’ copyrights and was not fair use – although the company later bought “millions” of print books as well. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement.“
There’s ample further evidence that LLM training may eventually be admissible under US fair use, but first and foremost constitutes copyright infringement. That means that output from these LLMs can be seen as fruit of the poisonous tree and would go against our two policies stated above, COM:Fair use and COM:PRP.
Hence, at least imagery that was generated in full by DALL-E, Midjourney, ChatGPT, etc. is certainly problematic.
So, while it’s possible that future court cases may establish the legality of AI-generated media and hence their admissibility here, the current quagmire is such that we should, even if not strictly constrained by law, adopt a narrow interpretation of existing site policies and generally disallow AI-generated media site-wide for the time being. That may change in the future.
I can envision (and thus propose) a single exception: modern versions of Photoshop and other photo editing software offer AI-assisted tools. Their usage can mark the borderline of current admissibility, as I think that any AI-generated parts of such imagery are secondary to the human-controlled parts and should be COM:De minimis. Regards, Grand-Duc (talk) 02:03, 27 June 2025 (UTC)
This is not about the resulting images. See also, e.g., here: “Legal experts agree that AI-generated works do not normally have an author in the legal sense, says lawyer Joerg Heidrich, who also advises AI companies: ‘For the user, this means that they can freely use an AI-generated image and post it on their website, for example - but on the other hand, anyone else can do the same with the same image.’” This has been discussed many times, and you're not even talking about the copyright license of the images themselves. Prototyperspective (talk) 13:46, 27 June 2025 (UTC)
Yes, this is a key distinction. The law with respect to copyright on holding and training on copyrighted materials is distinct from the copyright status of materials generated from those acts. Zanahary (talk) 17:28, 29 August 2025 (UTC)
Comment The "fruits of the poisonous tree" argument resonates strongly with me. I've long held that it's ethically dubious/hypocritical for a WMF project to take a strong stance on respecting copyright and licensing, while also accepting images from software tools that explicitly state that they do not respect copyright and licensing, and in fact are only able to exist because they knowingly ignore copyright and licensing.
That said, the legal landscape is a mess at the moment. There are a lot of cases working through a lot of different courts, and the results that have trickled in have been contradictory, meaning that we're almost certainly going to see cases reach higher courts. It's premature to draw any conclusions about how AI images are ultimately going to be classified vis-a-vis fair use.
In the meantime, my observation has been that the overwhelming majority of AI images uploaded to the project thus far have been out of scope, but that a small number of users have fought tooth and nail to keep them, using what I view to be weak arguments about hypothetical future use. I think Just because an AI image is interesting, pretty, or looks like a work of art, that doesn't mean that it is necessarily within the scope of Commons. needs to be given much more consideration. We should continue to aggressively cull the existing pool of AI images and stem the tide of new images based on scope, rather than legal principles. The Squirrel Conspiracy (talk) 16:01, 27 June 2025 (UTC)
That's not a good reason to shoehorn restrictions outside of their intended scope. The precautionary principle is about committing a known copyright violation on the assumption that we can get away with it, not about a deletionist understanding of what is and what isn't a copyright violation. Cambalachero (talk) 19:59, 1 July 2025 (UTC)
Oppose: There are trials, yes, but how many of those trials ruled that AI images are copyright violations, and how many reached the point where no further appeals are possible? If the law rules that, all AI images can be deleted in minutes by a bot (they are all tagged with {{PD-algorithm}}), but until then, according to the law AI images are free. Cambalachero (talk) 16:39, 27 June 2025 (UTC)
Comment Legal opinions about this should come from a lawyer in the US, preferably one representing the WMF. Apocheir (talk) 21:10, 27 June 2025 (UTC)
Comment This is a meta comment, but why not use Commons talk:AI-generated media for this discussion? I realize not every topic has an obvious centralized talk page to use, but this topic seems to have one. I think using dedicated places helps to save everyone's time. One giant page is more difficult to skim through than separate pages. That said, I think it's fine to leave short pointers here to advertise newly started discussions elsewhere, as written in Template:Village pump/Proposals/Header, if you want to get attention. whym (talk) 02:34, 28 June 2025 (UTC)
I was referring to what the template says about "significant discussions taking place elsewhere". It's been there since 2011. I think it was always something you can choose to do, while I don't know how common it was. I just wanted to raise awareness for the possibility. whym (talk) 09:39, 30 June 2025 (UTC)
Oppose: - ai images are NOT copyrightable in the usa. that part of the law is pretty solid. the ROUNDABOUT arguments about IP rights on using materials for training models are "in play" & tbd (& imho actually not very valid, just a money grab, & likely to Be DEEPLY problematic in all sorts of ways, if the rent-seekers do prevail) BUT that decision would affect TRAINING MODELS/MATERIALS, not the "end-product" directly. (AND the big gorilla in the room is that, for many / most purposes the ai-builders could pretty easily switch to PUBLIC DOMAIN materials for training; with a whole lot of open-source material as well). so tl;dr - whatever the outcome of these trials/legal disputes, it is UNLIKELY to change the fact that ai created content is NOT COPYRIGHT-ABLE, & it is also pretty unlikely that all pre-existing ai-created content would suddenly be banned. the lawsuits are about the "old-school" content-creation industry trying to wring money out of the AI boom. (hence rent-seeking :P) Lx 121 (talk) 09:15, 29 June 2025 (UTC)
You are much mistaken in thinking "for many / most purposes the ai-builders could pretty easily switch to PUBLIC DOMAIN materials for training". In fact, since ~2023 or 2024 the big players have been experiencing with data the same thing conglomerates in the petroleum business are experiencing with peak oil. There is simply not enough native raw material to train and improve the LLMs any further, even including the whole available internet. So, the limited subset of public domain material is woefully inadequate for the business of OpenAI, Meta, Anthropic, etc. Because of that, they are actually trying to train their LLMs with synthetic data generated by other LLMs, reinforcing (cultural) biases, racism, and other errors in the process.
The issue with problematic training processes in regard to the outputs is that a potentially unlawful "machine" can produce "contaminated" output, meaning that an LLM trained on unlawful sources makes unlawful derivatives. That's IMHO the difference between a human, who produces copyrightable content, and an LLM, whose output is not protected in the US. The human, who learned about e.g. textual or musical content lawfully (with bought or licensed books, movies, songs, art...) or unlawfully (by consuming e.g. pirated movies, books or songs...), and who bases their own creativity upon these sources, always adds their own brain-processed contribution to anything they make. The LLM doesn't work like this; it only ever makes mash-ups, derivatives, of existing things (from the point of view of the law).
I would rather put the brakes on uploading AI material now and wait to see where the judicial challenges end. That's IMHO better than incurring a much larger clean-up effort later, should higher courts deem AI-generated media unlawful derivatives of protected materials. I'm open to allowing such media again once the legal framework has been clarified (in regard to LLM training), provided the material is in scope. Regards, Grand-Duc (talk) 18:47, 29 June 2025 (UTC)
respectfully i must DISAGREE with your assertion that: THE ENTIRE HISTORY OF HUMAN ART & LITERATURE up to & before 1930 is inadequate as material for an(y) AI LLM training models (PLUS the entire universe of open source materials available!).
& "waiting for the judicial challenges to end" is like waiting for the weather to stop happening, or waiting for gravity to stop working, or at least waiting for the next geological epoch to begin. i.e.: NOT GOING TO HAPPEN ANYTIME SOON. i see no valid reason to DENY/IGNORE/BLOCK all AI-created content/media-files on this basis. it sounds more like a backdoor-roundabout arguement from the "old school" flesh & blood content-creators who want to ban/block AI works as much as possible, for as long as possible... Lx 121 (talk) 10:27, 30 June 2025 (UTC)
May I suggest that you google "Growth of human knowledge"? The "THE ENTIRE HISTORY OF HUMAN ART & LITERATURE up to & before 1930" and the remainder of FOSS stuff is actually only a drop in the bucket for LLM training purposes - the human knowledge grows nearly exponentially with a doubling rate between 24-12 months in the current decade. Furthermore, everything that is related to information technologies, genetics, oncology, antibiotics, modern pop culture, etc., cannot be trained with raw materials from 1930 and before - these ideas didn't exist then or only barely.
About "waiting for the judicial challenges to end" - OK, that may have been poorly worded. I meant to say to wait for the development of a sound case law base, meaning several appellate court rulings or one from the Supreme Court. That is a limited time frame. (BTW, some scientists actually do advocate for the establishing of a new geological epoch, setting the border between the Holocene to the suggested Anthropocene around the 1950s. So, your example of "at least waiting for the next geological epoch to begin" has somewhat backfired...). Regards, Grand-Duc (talk) 23:31, 30 June 2025 (UTC)
comment - HA! i knew you were going to bring that up, BUT if we accept the anthropocene, then: a) we are already in it, the starting point is already in the past & b) the idea has been pretty thoroughly rejected by geologists, especially with an arbitrary 20th-century start date (the most notable change in rock strata being radioisotopes). an earlier date like the neolithic revolution might have merits, but then it is basically a rename of most of the holocene.
& as for llm's; technical specialisations might need more, newer material. but general language, arts, philosophy, etc. are more than adequately represented by the world up to dec 31st, 1929 (& counting, plus open source!). so a lot of ai work, esp creative arts stuff, can be done with that. & you could probably train a pretty good AGI on that too (then take it to school for inadequately represented/updated subjects).
AND there is the whole "quantity vs quality" thing. re: growth of human knowledge. especially for "creative content" (which is what we are mostly talking about here re: ai-generated materials). nearly ALL human art-forms (of all types, including music, literature, etc.) are building on previous works. IF you trained a creative content ai on ALL AVAILABLE works that are pre-1930, & cut it off there (&/or only included subsequent works that are also pd or open source), you would STILL have covered almost all of the critical source material for human artistic inspiration; you would only be missing the last ~95 years of "iterations" (& you would even get early modern art, cinema, & scifi into the mix, plus jazz). Lx 121 (talk) 15:23, 5 July 2025 (UTC)
Oppose This proposal overreaches both legally and practically. None of the cited court cases have resulted in a final ruling that AI-generated media are inherently unlawful, nor that such output is retroactively tainted as "fruits of the poisonous tree". The training methods of some LLMs may be under scrutiny, but Commons evaluates media files, not machine learning pipelines. Commons' role is not to preemptively ban entire classes of media based on speculative future outcomes. We should stick to evaluating files individually under existing policy (especially COM:SCOPE and COM:LICENSE), not rewrite site-wide rules based on uncertainty or worst-case hypotheticals. A blanket ban would be premature, overly broad, and inconsistent with Commons' precedent of requiring concrete legal reasons, not conjecture, to exclude file types. --Jonatan Svensson Glad (talk) 18:56, 29 June 2025 (UTC)
+1 to Jonatan. We should not stop scrutinizing uploads. If AI files are evaluated as OoS and/or Licence violations, we must delete them. --Enyavar (talk) 10:08, 30 June 2025 (UTC)
Comment neutral; I'll just make a comment about the following wording within the UK section: "it may therefore be necessary to determine whether the work was generated in the United Kingdom". Note that copyright protection generally comes with publication (i.e. content being made available to the public), so the word "generated" does not look adequate to me when we talk about copyright. E.g. if I generate an artwork (with AI or not) on my computer in the UK (or France, etc.), and I first publish it online, e.g. on Wikimedia Commons, then (if eligible) it is potentially under US copyright protection, and only under UK copyright protection via international convention(s). Of course, if it is generated online, and therefore automatically published, then this is indeed the first publication, and I guess that in that case the country of origin depends on where the servers are. I'm not a native English speaker, but I think that "generated" and "published" are two different things, especially in matters of copyright. Christian Ferrer (talk) 12:20, 1 July 2025 (UTC)
Comment support. Unlike some of the previous contributors, I still fail to see Commons as a repository for any free stuff that can be scrounged from the net. "Original" AI works have failure built-in, either by getting us in legal trouble at some point, or by flooding us with trash. We should limit the exposure of the project to it as much as possible. Alexpl (talk) 09:42, 4 July 2025 (UTC)
It does not, because SCOPE gets trumped by INUSE. You can't get e.g. a trashy AI hallucination of the likeness of some long-dead historical figure (a Viking, a Maya ruler, a Persian scientist...) deleted via SCOPE when some guy has added it to a Wikipedia article on a smaller project that doesn't have a policy of its own governing AI-generated media, despite it adding ZERO knowledge and only providing eye candy. Regards, Grand-Duc (talk) 14:56, 4 July 2025 (UTC)
Actually, INUSE clearly says "It should be stressed that Commons does not overrule other projects about what is in scope". What you want to do has already been considered, and it is already rejected in Commons' policies. Cambalachero (talk) 18:43, 4 July 2025 (UTC)
Oppose yet another attempt to purge as many AI generated images from Commons as possible as sloppily as possible. There is no actual legal basis here, in actuality coming down to “I don’t like it”. Dronebogus (talk) 21:58, 7 July 2025 (UTC)
A new desk or task force for contacting rightsholders asking them to release under WMC licenses?
Latest comment: 6 months ago59 comments18 people in discussion
What does this community think about a “help desk” or volunteer team (I imagine something like Wikipedia’s Resource Exchange) dedicated to taking requests to contact the owners of specific media works to ask them to release the work under a WMC license, and facilitating that release?
So, someone posts a video of a never-before-filmed beetle molting on Instagram. Someone on Commons makes a post at the help desk referring to that video, with a link. A volunteer reaches out to the videographer, explaining that the video is of great encyclopedic value, and encouraging the release of the video under an open license. If the owner is willing, the volunteer can help them release it, either through the VRT process or by making a declaration on the original platform of posting (as simple as commenting under one’s own social media post).
There’s so much valuable media shared online, and every time I have reached out to a poster to ask if they’d be willing to release, they have responded with great enthusiasm and done so. I think that a desk to streamline this and process requests from volunteers could lead to a lot of amazing encyclopedic material getting added to Commons. Eager to hear the community’s thoughts. Zanahary (talk) 02:48, 24 June 2025 (UTC)
I would love to do that! I have reached out a few times to people for their photos and videos—both scientists and ordinary online posters—to facilitate their release via VRT and publicly-declared CC licenses, and I’d be very happy to be a part of a more systematized effort to help free media online. Zanahary (talk) 03:06, 24 June 2025 (UTC)
Support Although I feel like it should be more of an informal Wikiproject/group than something like the Volunteer Response Team. But having a central place for people to chat about and coordinate contacting rightsholders to see if they will freely license their works is a pretty good idea. I think there's something similar for lobbying governments to create better FOP laws. --Adamant1 (talk) 08:25, 24 June 2025 (UTC)
Support And we should also establish best practices and scripts (not JavaScript scripts but pre-written words to copy and paste) for this purpose. I've gotten some great work off of Reddit (which I no longer use) and Flickr by asking. —Justin (koavf)❤T☮C☺M☯21:03, 24 June 2025 (UTC)
Strong support This is really needed. Often, only the awareness of free license is missing, and raising awareness can lead to new opportunities --PantheraLeo1359531 😺 (talk) 09:53, 25 June 2025 (UTC)
I'm glad people like this idea. I've initiated a draft for the desk, based on the defunct WikiProject Permission requests, here. I encourage others to edit and workshop it, directly and via this discussion. @Adamant1 @Jmabel @Koavf @PantheraLeo1359531 User:Chaotic Enby, I know you're a wizard—if this idea interests you at all, please take a crack! Zanahary (talk) 08:47, 29 June 2025 (UTC)
Hooray! I really know nothing about coding and wizardry, so please (a message to you and everyone): run wild. My dream is that this is linked in the community portal. Zanahary (talk) 17:50, 29 June 2025 (UTC)
Just completed it at User:Chaotic Enby/Request desk.js – to test it, you can install my script and the wizard should then display on your page. Ideally, it should be made into a MediaWiki namespace script so it can be called through the URL (and not require installing). This way, we can add a default button to User:Zanahary/Request desk that reloads the page while activating the JS through a URL parameter. Chaotic Enby (talk) 19:04, 4 July 2025 (UTC)
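The URL-parameter activation described above could look something like the following minimal sketch. This is a hypothetical illustration, not the actual Request desk.js code: the parameter name `withRequestWizard` and the function name are assumptions for the example, and a real gadget would use MediaWiki's own helpers (such as `mw.util.getParamValue`) rather than parsing the URL by hand.

```javascript
// Hypothetical sketch: decide whether to launch the request-desk wizard
// based on a URL query parameter, so a plain link can activate it without
// the visitor installing a user script. Parameter name is an assumption.
function shouldActivateWizard(url) {
  // Parse the query string and check for the activation flag.
  const params = new URL(url).searchParams;
  return params.get('withRequestWizard') === '1';
}

// A button on the desk page would then link to something like:
// https://commons.wikimedia.org/wiki/Commons:Request_desk?withRequestWizard=1
// and a sitewide (MediaWiki-namespace) script would only load the wizard
// when shouldActivateWizard(location.href) returns true.
```

A real implementation would live in the MediaWiki namespace so it runs for everyone, and would lazy-load the wizard module only when the flag is present, keeping page load light for ordinary visitors.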
Since the new request desk is still in userspace, it doesn't make sense to add a link to it right now, but I'll do it once the discussion is closed and all is "officially" moved into place! Thanks for the reminder. Chaotic Enby (talk) 00:27, 5 July 2025 (UTC)
& comment - if we are doing this, we should give some thought to how to prioritise both sources to ask, & materials to seek permission for. :) Lx 121 (talk) 15:29, 5 July 2025 (UTC)
Another thought: it would be good if, in cases of publicly declared free licensing, requested materials could be uploaded either normally (with a link to the post or comment by the author indicating release) or via the VRT process, so that the uploader could verify for the VRT without doxxing their social media account (by publicly sharing a link to a post under which they have commented inquiring about releasing under a CC license). Zanahary (talk) 23:35, 6 July 2025 (UTC)
Don't we already have this? A desk for contacting rightsholders and email template resources? What would be different? I know the former is pretty dead, but that leads to the question: why would this succeed where that failed? IMO the most valuable thing we could do is to create a centralized, named "team" that's vetted and can introduce themselves as "Hi, I'm from the Wikimedia License Requests Team" rather than "Hey, I'm a random person from Wikipedia". That would require some thought as to who we would want on that front line, and I wonder about our ability to maintain an active volunteer base of people willing to take responsibility for not just requests but follow-up questions and the upload process. Really, most requests can be as simple as a boilerplate message to send the rights holder and a boilerplate release for them to submit to VRT, in which case I'd say efforts are best spent revamping instructions for volunteers. It's just in those rare cases where you have to make a good sales pitch to overcome objections or answer technical licensing questions that you need some advanced knowledge/skills. I could see a noticeboard being useful to get help with those questions, but don't know why a VP wouldn't work well enough. — Rhododendrites talk | 21:09, 14 July 2025 (UTC)
I think a reboot of WikiProject Permission Requests that is more heavily featured (qua desk rather than WikiProject) rather than tucked away, would have a much easier time building momentum. I really like your idea of creating a centralized team. Zanahary (talk) 21:39, 14 July 2025 (UTC)
Support: The more avenues that may lead to more high-quality media, the better. Might even bring a few more good people to the project. Don't see any harm in trying. -- Cl3phact0 (talk) 07:09, 15 July 2025 (UTC)
Strong oppose If one wants a file to be on Commons, they are free to obtain permission themselves. No additional team is needed for that, and no need exists for a new repository of ideas that never will be worked on. If I'm mistaken, will the list of supporters be the list of volunteers for the new team? How much capacity do they have, and why isn't that better spent on existing backlogs? --Krd 16:19, 26 July 2025 (UTC)
I think one problem is that some institutions may not know about Commons or the advantages of free licenses (or that they even exist), and some may not know how to work on Commons (too complicated etc.; I've already seen similar cases). Many of us here are experts in the topics "Commons" and "IT", but many in general are not. I think we have to remember this --PantheraLeo1359531 😺 (talk) 19:06, 12 August 2025 (UTC)
Sure. The other side of the coin is that we need to protect the qualified users from being saturated by routine tasks offloaded to them by those who are not unable but unwilling to do it themselves. Krd 05:30, 27 August 2025 (UTC)
Support, though I doubt it would work out well, and with a certain condition: users can already reach out to rightsholders to ask for explicit permission to upload the media, to have the rightsholder upload it themselves, or to change the license so it can be uploaded – so this is somewhat redundant, or not nearly as impactful as you may assume. The existence and promotion of this task force may also make it appear as if users shouldn't ask rightsholders directly themselves. So the condition of my support is that it is clearly communicated, wherever the project is linked and/or at the top of the task force page, that users can just do all of this themselves (which would often be more efficient and effective, because the task force may not see the value the given files would add right away, or may have no interest in or relation to the specific topic). It could be impactful nevertheless, because few users actually do this, or look for and collect media that may be valuable to have on Commons in order to contact rightsholders en masse or systematically. Prototyperspective (talk) 10:35, 15 September 2025 (UTC)
Implementation
Right, time to worry about how to do this now. Proposing the following:
Mention it on the Main Page after everything else is done, by changing "To fulfill the free license requirements, please read our Reuse guide. You can also request a file." to "To fulfill the free license requirements, please read our Reuse guide. You can also request a file or request permission for a file already on the internet." Also advertise at VP, the usual.
Put some advice for volunteers at Commons:Permission requests (e.g., don't forward emails yourself; ask the rightsholder to forward them to VRT). This should be done in consultation with VRT agents on Commons so the two systems can work together.
Set some requirements to join. The most obvious is that license reviewers qualify automatically, and other people can go through a manual application process.
That's fine. The old project was moribund enough that there is nothing to lose. Do look for links to that and update them, though. - Jmabel ! talk 20:44, 28 August 2025 (UTC)
I support this initiative in principle, and am optimistic that it might lead to positive outcomes. However, one barrier for some users may be breaking the implicit "fourth wall" of anonymity by using one's private (or professional) email address to communicate (essentially "on behalf of" Wikimedia Commons) with rights holders, etc. There are many people or organisations that one might wish to contact, but not necessarily be inclined to do so – at risk of opening what could conceivably be a Pandora's box of unknown unknowns. With this in mind, I'm wondering if there might be a way for editors who wish to help with this initiative to do so without necessarily using their own email address? One idea would be to create a tool that allows this communication to take place via the Commons platform itself (i.e., similarly to what happens here on this Talk page). Essentially, a "Send message" tool that would serve as the point of contact, and then, ideally, collate the email exchange as a quasi-Talk page thread. If there are insurmountable technical complexities to that, it could be as simple as a pre-approved message from "xyz@wikimedia.commons" that invites people to visit a specific Commons Talk page to open a discussion. In addition to preserving anonymity, this method would also make the communication open and visible to other editors. -- Cl3phact0 (talk) 08:18, 5 September 2025 (UTC)
I don’t think the process via this new desk needs to be more open than the normal VRT clearance process, which takes place through normal email addresses. But @wikimedia email addresses would be awesome! Zanahary (talk) 08:35, 5 September 2025 (UTC)
I'm not too sure about collating the email exchange as a talk page thread – from the point of view of the rights holder receiving the email, there is no expectation of the communication being publicly visible on Commons, and that could be seen as a breach of privacy. I do also like the idea of having @wikimedia email addresses for volunteers, if that is technically possible, although I don't think that wikimedia.commons would work since .commons isn't a TLD. Chaotic Enby (talk) 23:18, 5 September 2025 (UTC)
The core idea is simply to give project participants the means to contact rights holders on behalf of the project (rather than using their IRL identity). If this is something that is desirable, then the question is how could it be accomplished from a technical standpoint. The "xyz@..." example was only meant to illustrate a concept, not as an actual TLD. (As such, .commons was a poor choice of syntax to express this concept.) Sorry to introduce unnecessary confusion. -- Cl3phact0 (talk) 12:09, 6 September 2025 (UTC)
Hmm, I think we would have to at least ask legal if we were allowed to do that - they don't like it when people conflate VRT for the actual WMF, for example (which is why VRT can't ask people directly). —Matrix(!) ping one when replying {user - talk? - useless contributions} 10:41, 6 September 2025 (UTC)