Jean-Pierre JANUEL | 22 Jun 17:04 2016

Re: [fr] problems with .SVG graphics

Here is a link to the graphic I uploaded the day before yesterday:

[[File:Flux d'énergie automobile.svg|thumb|Flux d'énergie automobile]]

Thanks in advance to whoever can offer me a solution.

On 22/06/2016 at 14:10, Lionel Allorge wrote:
> Hello,
>
>> Hello,
>> Can someone explain to me why, when I upload an SVG graphic to
>> Commons, all the labels end up shifted to the left edge of the frame?
>> Yet when I click the "Original file" button, the graphic appears
>> exactly as I created it.
>> For the record: I am using Open Draw from the Open Office suite.
> Can you give us a link to an example?
>
> All the best.
>

---
L'absence de virus dans ce courrier électronique a été vérifiée par le logiciel antivirus Avast.
https://www.avast.com/antivirus

_______________________________________________
Commons-l mailing list
Commons-l <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/commons-l

Jean-Pierre JANUEL | 22 Jun 10:20 2016

[fr] problems with .SVG graphics

Hello,
Can someone explain to me why, when I upload an SVG graphic to
Commons, all the labels end up shifted to the left edge of the frame?
Yet when I click the "Original file" button, the graphic appears
exactly as I created it.
For the record: I am using Open Draw from the Open Office suite.

Jpjanuel


Tuszynski, Jarek W. | 21 Jun 18:53 2016

Commons , Wikidata and page refresh

The categories in Category:Authority_control_maintenance are used to help with the transition of Template:Authority_control identifiers from Commons to Wikidata, so that the Authority control template does not have to keep and maintain all the identifiers, but only a link to Wikidata, where they are held and shared among all the projects. The categories there depend on a comparison of the Commons identifiers with the Wikidata ones. However, I ran into a curious problem: some pages are in a category, but that category does not list them. For example, Creator:Titus_Livius says it belongs to Category:Pages_using_authority_control_with_all_identifiers_matching_Wikidata, but that category does not list it as one of its pages. Other pages like that are Creator:Alonzo_Rodriguez and Creator:Perrin_Remiet.
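The comparison behind those maintenance categories could be sketched roughly like this. This is a hypothetical illustration only: the function name, data shapes, and category labels are mine, not the template's actual code.

```python
def matching_state(commons_ids, wikidata_ids):
    """Classify a page by comparing its local Authority Control
    identifiers against the ones held on Wikidata.

    Both arguments are dicts mapping identifier names (e.g. 'VIAF',
    'GND') to values.  Returns a rough maintenance state; the real
    template's category names differ from these strings.
    """
    if not commons_ids:
        return "no local identifiers"
    # Identifiers present on both sides but with different values
    mismatched = {k for k, v in commons_ids.items()
                  if k in wikidata_ids and wikidata_ids[k] != v}
    # Identifiers only Commons knows about
    missing = set(commons_ids) - set(wikidata_ids)
    if mismatched:
        return "identifiers differing from Wikidata"
    if missing:
        return "identifiers not in Wikidata"
    return "all identifiers matching Wikidata"
```

The limbo Jarek describes would then be a page whose classification changed on the Wikidata side without Commons re-running this comparison.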

 

The only solution to the problem I have found is to make an edit to the creator page, or to run Pywikibot/touch.py if I had a list of the files that need it. I guess some kind of event is needed to trigger a page refresh, and with the activation of Wikidata arbitrary access we run into the problem that the event happens on Wikidata and does not trigger a refresh on Commons. A page purge does not seem to help. In the meantime I think I have about 100 pages in such a limbo state, but I have no way of finding them other than running "touch" on a large number of pages.

 

Any ideas what to do? Is this a bug that should be reported, or the new norm? Do Wikipedia and the other projects that have been using Wikidata for a while run into it?

 

Jarek T.

(user:Jarekt)

Steinsplitter Wiki | 21 Jun 17:27 2016

Reiss-Engelhorn Museum (REM) of the City of Mannheim v. Wikimedia Foundation

Unfortunately, the German trial court in the Reiss-Engelhorn Museum of the City of Mannheim v. Wikimedia Foundation case has ruled in favor of REM.


See https://commons.wikimedia.org/wiki/Commons:Village_pump/Copyright#Info_about_Reiss-Engelhorn_Museum_.28REM.29_of_the_City_of_Mannheim_v._Wikimedia_Foundation for details,


and https://blog.wikimedia.org/2015/11/23/lawsuit-public-domain-art/ for the background.

Romaine Wiki | 17 Jun 01:38 2016

Freedom of panorama today approved by Belgian parliament

Hi all,

Great news!

Freedom of panorama was approved today in the Belgian parliament.
A majority voted in favour of freedom of panorama, including commercial use.

Soon, images of artworks and modern buildings in Belgium can be restored on Commons.

But first the law needs to be published in the Staatsblad; ten days later it becomes official, but that is just a formality. (I will keep you updated on that.)


Article in the news in Dutch:
http://deredactie.be/cm/vrtnieuws/politiek/1.2685852


Over the past weeks, as well as during the campaign in Europe last year, we at Wikimedia Belgium have worked hard on this subject and kept members of parliament informed about what it means for Wikipedia.

Since the founding of Wikimedia Belgium in 2014, this subject has been a priority for us.

Thanks all for the support!

Let's get this implemented elsewhere too!

Greetings from Belgium,
Romaine


Abdeali Kothari | 11 Jun 19:29 2016

GSoC 2016 | Porting catimages to pywikibot Updates - Release v0.1.0


Hello,

About a month ago I sent a mail[0] about my GSoC project on porting catimages to pywikibot-core with my mentors DrTrigon[1] and jayvdb[2]. As a step towards that, we have made a library to analyze files on Commons using EXIF data and computer vision techniques, which will be used in the bot. We recently released v0.1.0.

Currently, the library is able to identify mimetypes, detect barcodes, detect faces, read EXIF data, and measure the average color. You can read more about the library at User:AbdealiJK/file-metadata[3]. That page contains installation instructions and a simple pywikibot script that can be used to analyze files on Commons.
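Of those checks, mimetype identification is the one Python's standard library can already approximate on its own. A toy sketch, for orientation only: this sniffs the extension, whereas file-metadata inspects actual file contents, so it is not the library's real behaviour.

```python
import mimetypes

def guess_mime(filename):
    """Guess a file's mimetype from its extension alone.

    Extension sniffing is only a quick first approximation;
    content-based detection (as file-metadata does) is more robust.
    """
    mime, _encoding = mimetypes.guess_type(filename)
    return mime or "application/octet-stream"

print(guess_mime("photo.jpg"))  # image/jpeg
print(guess_mime("README"))     # application/octet-stream (no extension)
```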

We have been running the library on a large number of files (35,000+) on Commons to test for corner cases and to check its validity. You can find the logs of that analysis at https://commons.wikimedia.org/wiki/User:AbdealiJKTravis/logs . Some discrepancies have come up, and it would be great to hear your comments on them.

It would also be immensely helpful if users could install the library and test it out. If any problems arise, please file an issue at the bug tracker[4] on GitHub or on the talk page, so that we can help you out and make the library more robust.

Regards,
Abdeali JK

References:


| 10 Jun 19:11 2016

Re: [Wikimedia-l] Picture of the Year 2015 Results

I'm surprised that one of my UK Ministry of Defence uploads is on the list. A nice reward to start the weekend.

Fae

Steinsplitter Wiki | 10 Jun 18:29 2016

Picture of the Year 2015 Results

Dear Wikimedians,


The tenth Picture of the Year competition (2015) has ended and we are pleased to announce the results:
In both rounds, people voted for their favorite media files.
  • In Round 1, there were 1322 candidate images.
  • In the second round, people voted for the 56 finalists (the R1 top 30 overall and top 2 in each category).


We congratulate the winners of the contest and thank them for creating these beautiful media files and sharing them as freely licensed content:
  1. 658 people voted for the winner, File:Pluto-01 Stern 03 Pluto Color TXT.jpg (https://commons.wikimedia.org/wiki/File:Pluto-01_Stern_03_Pluto_Color_TXT.jpg)
  2. In second place, 617 people voted for File:Nasir-al molk -1.jpg (https://commons.wikimedia.org/wiki/File:Nasir-al_molk_-1.jpg)
  3. In third place, 582 people voted for File:Heavens Above Her.jpg (https://commons.wikimedia.org/wiki/File:Heavens_Above_Her.jpg)


See https://commons.wikimedia.org/wiki/Commons:Picture_of_the_Year/2015/Results to view the top images »


We also sincerely thank all voters for participating. We invite you to continue participating in the Commons community by sharing your work.


Thanks,
Steinsplitter on behalf of the Picture of the Year committee
Federico Leva (Nemo) | 2 Jun 13:58 2016

Gotthard Base Tunnel

The tunnel under the Gotthard has just opened, as you have surely heard 
in the news: https://en.wikipedia.org/wiki/Gotthard_Base_Tunnel

Is someone going to visit the tunnel on June 4 or 5 to take photos?
http://www.gottardo2016.ch/

Nemo

| 24 May 13:32 2016

Re: GSoC 2016 | Porting catimages to pywikibot-core

Replies in-line.

On 24 May 2016 at 06:57, Dr. Trigon <dr.trigon <at> surfeu.ch> wrote:
>> * incomplete uploads resulting from
>> server failures. Checksum
>> comparisons would mean re-
>> downloading files, which would be
>> unnecessarily bandwidth expensive, but
>> local image analysis would
>> highlight these.
>
> What about local checksum comparison?

Yes, we have SHA1 values for the Commons-hosted images; however, a
local checksum is not normally available from the source (e.g. NYPL),
which means re-downloading the original to do the comparison. As some
of my uploads are over 100 MB for a single page, that is an expensive solution.
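For the local half of such a comparison, hashing an already-downloaded copy is cheap even for large scans. A sketch using Python's hashlib (fetching the Commons-side SHA-1 via the API is not shown here):

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-1 in 1 MB chunks, so even a
    multi-hundred-MB scan never has to fit in memory at once."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting hex digest can be compared directly against the `sha1` field the Commons API reports for the uploaded file.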

>> * uploads that are mostly blank pages
>> in old scanned books. I have a
>> simple detection process, but it would
>> be neat to have a more common
>> standard way of doing this.
>
> Depends on the format. For PDF you can try to use Poppler/poppler-utils or
> MuPDF. For images it will be a bit more involved ... but interesting.

Formats are normally JPEG or TIFF. My blank detection analyzes pixel
colour deviations over parts of the image to deduce whether it looks
blank. It uses the basic Python Image Library rather than any
sophisticated math, and can happen pre-upload by testing a
client-side image. See
<https://commons.wikimedia.org/wiki/User:Fae/Project_list/Internet_Archive#Blank_pages>
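A hedged sketch of that deviation idea, using only the standard library on a bare list of grayscale values. Real code would pull the pixels from PIL, and the threshold here is an invented placeholder, not the value Fæ's script actually uses.

```python
from statistics import pstdev

def looks_blank(pixels, threshold=5.0):
    """Decide whether a grayscale image is (mostly) a blank page.

    `pixels` is a flat list of 0-255 grayscale values.  A nearly
    blank scan varies very little, so a small population standard
    deviation suggests an empty page.  The threshold is a made-up
    placeholder for illustration.
    """
    return pstdev(pixels) < threshold

blank_page = [250, 252, 251, 249] * 100   # near-uniform light paper
printed_page = [250, 30, 245, 12] * 100   # strong ink/paper contrast
print(looks_blank(blank_page))    # True
print(looks_blank(printed_page))  # False
```

In practice one would run this per region of the page, as the original description suggests, so a small stamp or margin note does not get averaged away.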

...
>> Hi Fæ,
>>
>> Thanks a lot for the ideas!
>> The ideas you mentioned are awesome, and something I'll definitely look
>> into!
>>
>> The second and third ideas mentioned are, I believe, doable within the
>> scope of my GSoC. For the first idea, as you mentioned, local image
>> analysis would be needed, which we've not planned (but I'll add it to
>> the "to plan" list :) ). Currently we're planning on downloading the
>> image and performing the analysis on ToolsLab or a personal computer.
>>
>> Thank you for the project list! I was looking for a good dataset to test
>> things out on, and this will be immensely helpful.
>>
>> Regards
>> Abdeali JK
>>
>> On Wed, May 18, 2016 at 5:25 PM, Fæ <faewik <at> gmail.com> wrote:
>>>
>>> (Just replying on Commons-l with a non-tech observation. If more tech
>>> stuff arises I'll add it to Phabricator instead)
>>>
>>> This looks like a useful contained project, though a lot to be done in
>>> 12 weeks. :-)
>>>
>>> I was not familiar with catimages.py. It would be great if using the
>>> module for the preparation or housekeeping of large batch uploads were
>>> easy and not time consuming to try. As Commons grows we are seeing
>>> more donations over 10,000 images and have had a few with over 1m.
>>> Uploads of this size make manual categorization a huge hurdle, so
>>> automatic 'tagging' of image characteristics would be a useful way of
>>> breaking down such a large batch to highlight the more interesting
>>> outliers or mistakes, which can then be prioritized on a backlog for
>>> human review.
>>>
>>> For example, in my upload projects I have problems detecting:
>>> * incomplete uploads resulting from server failures. Checksum
>>> comparisons would mean re-downloading files, which would be
>>> unnecessarily bandwidth expensive, but local image analysis would
>>> highlight these.
>>> * uploads that are mostly blank pages in old scanned books. I have a
>>> simple detection process, but it would be neat to have a more common
>>> standard way of doing this.
>>> * distinguishing between scans with diagrams and line
>>> drawings/cartoons, printed old photographs, newsprint and text pages.
>>>
>>> It would be great if the testing routines you use during the project
>>> could tackle any of these and be written up as practical case studies.
>>>
>>> As well as the Phabricator write-up/tracking of the project, it would
>>> be useful to have an on-wiki Commons or Mediawiki user guide. Perhaps
>>> this can be sketched out as you go along during the project, giving an
>>> insight into what other users or amateur Python programmers might do
>>> to customize or make better use of the module? Having an easier-to-find
>>> manual might stop others from going off on their own tangents using
>>> various off-the-shelf image modules, when they could just plug in
>>> catimages with a small amount of configuration.
>>>
>>> P.S. If you would like to test the tool on some large collections with
>>> predictable formats, try looking through <
>>> https://commons.wikimedia.org/wiki/User:Fae/Project_list >. The 1/2
>>> million images in the book plates project would be an interesting
>>> sample set.
>>>
>>> Thanks,
>>> Fae
>>>
>>> On 18 May 2016 at 02:53, Abdeali Kothari <abdealikothari <at> gmail.com>
>>> wrote:
>>> > Hi,
>>> >
>>> > I'm a student from Chennai, India and my project is going to be related
>>> > to
>>> > performing image processing on the images on commons.wikimedia to
>>> > automate
>>> > categorization. DrTrigon had made the script catimages.py a few years
>>> > ago
>>> > which was made in the old pywikipedia-bot framework. I'll be working
>>> > towards
>>> > updating the script to the pywikibot-core framework, updating its
>>> > dependencies, and using newer techniques when possible.
>>> >
>>> > catimages.py is a script that analyzes an image using various computer
>>> > vision algorithms and allots categories to the image on commons. For
>>> > example, consider algorithms that detect faces, barcodes, etc. The
>>> > script
>>> > uses these to categorize images to Category:Unidentified People,
>>> > Category:Barcode, and so on.
>>> >
>>> > If you have any suggestions and categorizations you think might be
>>> > useful to
>>> > you, drop in at #gsoc-catimages on freenode or my talk page[0]. You can
>>> > find
>>> > out more about me on User:AbdealiJK[1] and about the project at
>>> > T129611[2].
>>> >
>>> > Regards
>>> >
>>> > [0] - https://commons.wikimedia.org/wiki/User_talk:AbdealiJK
>>> > [1] - https://meta.wikimedia.org/wiki/User:AbdealiJK
>>> > [2] - https://phabricator.wikimedia.org/T129611
>>> >
>>> >
>>> > _______________________________________________
>>> > Commons-l mailing list
>>> > Commons-l <at> lists.wikimedia.org
>>> > https://lists.wikimedia.org/mailman/listinfo/commons-l
>>> >
>>>
>>>
>>>
>>> --
>>> faewik <at> gmail.com https://commons.wikimedia.org/wiki/User:Fae
>>> Personal and confidential, please do not circulate or re-quote.
>>
>>
>
> Dr. Trigon

-- 
faewik <at> gmail.com https://commons.wikimedia.org/wiki/User:Fae
Personal and confidential, please do not circulate or re-quote.

Sunder Thadani | 17 May 16:12 2016

acknowledged with thanks.

I am proud to be a part of such a huge Commons family.
Thanks,
sunderkt-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
https://www.youtube.com/Thesunderkt
