Pine W | 20 Oct 20:27 2014

MediaViewer on mobile

Hi Multimedia and mobile folks,

I have found that I like MV on mobile, particularly when browsing galleries.

I have a request. While I can zoom in on an image that is loaded in MV on mobile, I can't pan the zoomed image. Is that a feature in the queue?
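For what it's worth, panning mostly comes down to clamping the drag offset so a zoomed image can't be pushed out of view. A minimal sketch of the idea (all names are invented here, not MediaViewer's actual code):

```javascript
// Hypothetical helper: keep a dragged, zoomed image within bounds.
// offset is the drag position relative to the centered image, in px.
function clampPan( offset, scale, view, img ) {
	var scaledW = img.width * scale;
	var scaledH = img.height * scale;
	// How far the image may move from center; zero if it fits the viewport.
	var maxX = Math.max( 0, ( scaledW - view.width ) / 2 );
	var maxY = Math.max( 0, ( scaledH - view.height ) / 2 );
	return {
		x: Math.min( maxX, Math.max( -maxX, offset.x ) ),
		y: Math.min( maxY, Math.max( -maxY, offset.y ) )
	};
}

// A touchmove handler would then apply the clamped offset, e.g.:
//   el.style.transform = 'translate(' + p.x + 'px,' + p.y + 'px) scale(' + scale + ')';
```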



This is an Encyclopedia
One gateway to the wide garden of knowledge, where lies
The deep rock of our past, in which we must delve
The well of our future,
The clear water we must leave untainted for those who come after us,
The fertile earth, in which truth may grow in bright places, tended by many hands,
And the broad fall of sunshine, warming our first steps toward knowing how much we do not know.
—Catherine Munro

Multimedia mailing list
Multimedia <at>
Mark Holmquist | 16 Oct 16:46 2014

Enable/disable panel errors in Media Viewer

Especially  <at> pginer, but others can chime in -

The disable/enable dialog is coming, but we need to be able to display
errors. Is there a decent way to show errors in the dialog(s)? Do we need
a new design for that? I don't think there's any particularly prevalent
error we need to worry about, but I'd rather not get caught without info
when people do get errors for whatever reason.



Mark Holmquist
Software Engineer, Multimedia
Wikimedia Foundation
Fabrice Florin | 16 Oct 01:33 2014

Structured Data Update | IRC chat tomorrow

Hi folks,

Here's a quick update on the Structured Data project, which proposes to make multimedia data easier to
search, view, edit, curate and re-use on Wikimedia Commons.

Today, information about media files on Wikimedia sites is stored in unstructured formats that cause a
range of issues: for example, file information is hard to search, some of it is only available in English,
and it is difficult to edit or re-use files to comply with their license terms.

Last week, a first bootcamp was held in Berlin to discuss this project and explore possible solutions,
based on the same technology as Wikidata. Participants included community
volunteers, as well as the Wikidata and Multimedia teams. This blog post gives an overview of what was
discussed and accomplished. (1)

Some good ideas came out of this event, but many questions remain unanswered. We would now like to invite
more community members to help plan next steps for this project: everyone is welcome to join the
discussion and/or subscribe to the newsletter on the new Structured data hub on Commons. (2)

We also invite you to join tomorrow's live IRC chat about Structured Data: this Thursday, October 16 at
18:00 (UTC), on #wikimedia-office (3). The development teams would love to discuss this project with you.

Going forward, our community liaison Keegan Peterzell will be managing communications for this project.
You will be hearing from him about our next discussions and other ways you can get involved in this
important initiative. 

We look forward to working with you to better support the needs of our users and modernize our multimedia
infrastructure together. 

Best regards,

Fabrice -- for the Structured Data team





Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation

Brion Vibber | 13 Oct 23:32 2014

Review needed for mobile video overlay (native and ogv.js playback)

I've gotten some great review feedback from Gilles on the desktop-web integration of my ogv.js JavaScript & Flash compatibility layers for Ogg Theora/Vorbis media files -- thanks Gilles!

* libraries:
* desktop integration:

These are getting pretty close to ready to land, I think.

I would love to get some review on the mobile overlay I've whipped up as well. This supports both native WebM playback (Android Chrome, Android Firefox, Firefox OS) and ogv.js playback (iOS 7/8 Safari).

* mobile overlay:
* Live demo:

A few open questions:

1) Is this the right way to do mobile overlay code? (It's basically a rip of the existing photo viewer overlay in MobileFrontend, but lives in TimedMediaHandler.) Is the overlay interface stable enough for other extensions to use it for mobile-specific features? (I had to make updates for object-model and template things that changed since this summer.)

2) Is the inline icon too huge/ugly here for audio files? Should it be arranged differently, or display the player inline instead of as an overlay for audio?

3) Should more controls be added to the overlay's bottom toolbar, such as manual resolution selection or an 'Open in VLC' link to support HD playback on iOS?

4) Should we autoplay when opening the overlay, or require a second tap?

5) How should we handle devices with no native playback that are either too slow (iOS 6 Safari) or lack features needed for the player (Windows Phone)?

Current known bugs in the mobile overlay:

* CPU speed check not yet integrated to force the lowest resolution on old iPhones/iPads (this exists in the desktop integration and just needs to be moved to common code)

* autoplay doesn't seem to work with native playback right now
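On the CPU speed check, one plausible shape for the shared code (a sketch only; the tier numbers and names here are made up, not what the desktop integration actually uses) is to time a short decode benchmark and map the result to a default resolution:

```javascript
// Hypothetical: pick a default playback height from a decode benchmark.
// tiers is sorted from highest to lowest resolution; each carries the
// per-frame decode budget (ms) a device must meet to get that tier.
function pickResolution( benchmarkMs, tiers ) {
	for ( var i = 0; i < tiers.length; i++ ) {
		if ( benchmarkMs <= tiers[ i ].maxDecodeMs ) {
			return tiers[ i ].height;
		}
	}
	// Slower than every budget: force the lowest available resolution.
	return tiers[ tiers.length - 1 ].height;
}
```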

-- brion
Mark Holmquist | 8 Oct 22:48 2014

Enable/disable dialogs now on alpha


Open a lightbox, click on the settings/cog icon near the top right of
your screen, and be amazed.

That is all.

(warning: alpha quality, don't expect it to be perfect, meant for testing
by our developers and designers, but no reason not to let y'all try it!)


Mark Holmquist
Software Engineer, Multimedia
Wikimedia Foundation
Fabrice Florin | 8 Oct 16:59 2014

Join our Structured Data Q&A on IRC - Thu. Oct. 16 at 18:00 UTC


We invite you to join our next Structured Data Q&A during IRC office hours next Thursday, Oct. 16, at 18:00 UTC.

Our Multimedia team and the Wikidata team will be on hand for this discussion, as well as some of the community volunteers who are helping guide this project, such as Multichill and TheDJ.

During this hour-long IRC chat, we will discuss our next steps for this Structured Data project, and give you an update on our bootcamp in Berlin. Please RSVP here, so we know who plans to attend:

Early next week, we will update our Structured Data pages with our latest work on this project, and send another email to invite you to review them. 

And if you are based in Europe, we also invite you to join the Amsterdam Hackathon on November 14-16, 2014. Many of us will be at this event, and plan to give more updates as well as do some hacking together. You can register here:

Please spread the word in your community, and invite them to join this chat, and/or the hackathon.

We look forward to a productive discussion with many of you tomorrow.

Regards as ever,

Fabrice — for the Structured Data team


Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation

Brian Wolff | 2 Oct 23:57 2014

Enabling VIPS experimentally for tiff files > 50 MP

Hi everyone.

tl;dr: Can we do

Now that the prerequisite patches for using VIPS with TIFF have been
merged (woo!), let's, umm, use it.

For those who don't know, VIPS is an alternative to ImageMagick that
can scale certain file formats in essentially constant memory (or, to
be pedantic, memory linear in the number of pixels in the resulting
file, rather than in the source). This means we would be able to make
thumbnails no matter how big the source file is, which is good because
we have lots of very high resolution TIFF files, such as
[[File:Zoomit2.tif]] and [[File:Zentralbibliothek Zürich -
Mittelalterliche Stadt - 000005203.tif]]. We already use VIPS to scale
PNG files larger than 20 megapixels, and non-progressive JPEG files
can be scaled efficiently with ImageMagick, so TIFF is the current
pain point in terms of scaling limits (although GIF is also painful).

I would like to propose the following:

First we experiment with turning it on for files > 50 megapixels.
Currently we do not even try to render such files, so I doubt this
will cause any community angst. To that end I proposed a patch ( ) that uses the following configuration:

                       array(
                               'conditions' => array(
                                       'mimeType' => 'image/tiff',
                                       'minShrinkFactor' => 1.2,
                                       'minArea' => 5e7,
                               ),
                               'sharpen' => array( 'sigma' => 0.8 ),
                       ),

This will turn the feature on for big files (which currently do not
render at all), and also enable sharpening. Most TIFF images benefit
from sharpening and the community has asked for it repeatedly; I think
it's less disruptive to enable sharpening at the same time as VIPS
than to make two separate changes to TIFF rendering.

I would propose we let that sit for a little bit. We should then have
a community discussion (with the Commons community, since it's hard to
have a discussion with every community, and Commons folks, especially
GLAMs, are the people who care most about this) to see if people like
it. If all is well, we could then move to stage 2, which would be
something like:

                       array(
                               'conditions' => array(
                                       'mimeType' => 'image/tiff',
                                       'minShrinkFactor' => 1.2,
                               ),
                               'sharpen' => array( 'sigma' => 0.8 ),
                       ),
                       array(
                               'conditions' => array(
                                       'mimeType' => 'image/tiff',
                               ),
                       ),

Anyways, thoughts? Does this sound like a good plan? Does someone want
to be bold and deploy my change? ;)


Fabrice Florin | 1 Oct 21:23 2014

Warning users when they click to enlarge huge images

Hi guys, 

On our talk page, Geni brought up a good point: Media Viewer doesn't provide a warning when users click to enlarge huge files (e.g. 400 MB):

This is not a new issue: the File: page has worked the same way for years. But Media Viewer makes it a lot easier for users to accidentally load a huge file, so I think we should seriously consider providing a warning, if it is easy to implement and if we can identify a threshold that is based on data and acceptable to our communities.

Do any of you have data on what the threshold might be for identifying file sizes that might crash your browser? Or do you know what best practices are on that point? It would be good if we could agree on a limit that is at least partly informed by data. 

If there is no reliable data or best practices, we might have to determine this threshold together arbitrarily, based on common sense. In that case, what do you think would be a reasonable threshold when we would start giving the warning? 50 MB or above? 100 MB or above?
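Whatever threshold we settle on, the check itself is trivial; a sketch (the 50 MB figure is just a value floated in this thread, not a recommendation, and the names are invented):

```javascript
// Sketch: warn before fetching the original if it exceeds a size threshold.
// The file's byte size is already available from imageinfo metadata.
var WARN_BYTES = 50 * 1024 * 1024; // 50 MB, one of the values discussed

function needsSizeWarning( fileSizeBytes, thresholdBytes ) {
	return fileSizeBytes >= ( thresholdBytes || WARN_BYTES );
}
```

The interesting part is not the code but choosing the threshold; a confirmation dialog would then gate the enlarge action when this returns true.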

For now, I just filed this ticket #933 to track this issue:

Thanks for any recommendations you might have,



Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation

Fabrice Florin | 1 Oct 01:19 2014

Media Viewer Update: First Improvements

Hi folks,

I am happy to announce that we have just released a first round of improvements to Media Viewer, based on community feedback.

The goal for these improvements is to make Media Viewer easier to use by readers and casual editors, our primary target users for this tool. 

To that end, we created a new "minimal design", with these features:
* "More Details" button: a more prominent link to the File: page
* separate icons for "Download" and "Share or Embed" features
* an easier way to enlarge images by clicking on them
* a simpler metadata panel with fewer items
* faster image load with thumbnail pre-rendering

These features are now live on Wikimedia Commons and sister projects (1), and will be deployed on all Wikipedias this Thursday by 20:00 UTC.

Next, we plan to work on these other improvements:
* an easier way to disable Media Viewer for personal use
* a caption or description right below the image

Learn more about these features on the Media Viewer Improvements page (2). They are based on findings from our recent community consultation (3) and ongoing user research (4). For more information, visit the Help FAQ page (5).

Please let us know what you think of these new features on the Media Viewer talk page (6). 

We would like to thank all the community members who suggested these improvements. Our research suggests that they offer a better user experience that is both clearer and simpler -- and that clarifies the relationship between Media Viewer and the File: description page. 

We will send another update in October, once the next round of improvements has been released.


Fabrice and the Multimedia Team

(1) Pictures of the Day on Commons: 

(2) Improvements page:

(3) Community suggestions:

(4) User Research:

(5) Help page:

(6) Talk page:


Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation

James Heald | 30 Sep 02:21 2014

Inclusion criteria for Wikidata items for paintings, engravings, illustrations, manuscript folios, photographs, old postcards, etc ?

Hi everybody,

With the Structured Data for Commons project about to move into high 
gear, it seems to me that there's something the Wikidata community needs 
to have a serious discussion about, before APIs start getting designed 
and set in stone.

Specifically: when should an object have an item with its own Q-number 
created for it on Wikidata?  What are the limits?  (Are there any limits?)

The position so far seems to have been, essentially, that a Wikidata 
item is only created when an object either already has a fully-fledged 
Wikipedia article written for it, or reasonably could have one.

So objects that aren't particularly notable typically have not had 
Wikidata items made for them.

Indeed, practically the first message Lydia sent to me when I started 
trying to work on Commons and Wikidata was to underline to me that 
Wikidata items should generally not be created for individual Commons 
files.

But, if I'm reading the initial plans and API thoughts of the Multimedia 
team correctly, eg

there seems to be the key assumption that, for any image that contains 
information relating to something beyond the immediate photograph or 
scan, there will be some kind of 'original work' item on main Wikidata 
that the file page will be able to reference, such that the 'original 
work' Wikidata item will be able to act as a place to locate any 
information specifically relating to the original work.

Now in many ways this is a very clean division to be able to make.  It 
removes any question of having to judge "notability"; and it removes any 
ambiguity or diversity of where information might be located -- if the 
information relates to the original work, then it will be stored on 
Wikidata.

But it would appear to imply a potentially *huge* widening of the 
inclusion criteria for Wikidata, and of the number of Wikidata items 
potentially creatable.

So it seems appropriate that the Wikidata community should discuss and 
sign off just what should and should not be considered appropriate, 
before things get much further.

For example, a year ago the British Library released 1 million 
illustrations from out-of-copyright books, which increasingly have been 
uploaded to Commons.  Recently the Internet Archive has announced plans 
to release a further 12 million, with more images either already 
uploading or to follow from other major repositories including eg the 
NYPL, the Smithsonian, the Wellcome Foundation, etc, etc.

How many of these images, all scanned from old originals, are going to 
need new Q-numbers for those originals?  Is this okay?  Or are some of 
them too much?

For example, for maps, cf this data
, each map sheet will have separate Northernmost, Southernmost, 
Easternmost, and Westernmost bounding co-ordinates. Does that mean each 
map sheet should have its own Wikidata item?

For book illustrations, perhaps it would be enough just to reference 
the edition of the book.  But if individual illustrations have their own 
artist and engraver details, does that mean the illustration needs to 
have its own Wikidata item?  Similarly, if the same engraving has 
appeared in many books, is that also a sign that it should have its own 
Wikidata item?

Similarly, what about old photographs or old postcards? When should 
these have their own Wikidata item?  If they have their own known 
creator, and creation date, then is it most simple just to give them a 
Wikidata item, so that such information about an original underlying 
work is always looked for on Wikidata?  What if multiple copies of the 
same postcard or photograph are known, published or re-published at 
different times?  But the potential number of old postcards and 
photographs, like the potential number of old engravings, is *huge*.

What if an engraving was re-issued in different "states"? (E.g. a 
re-issued engraving of a place might have been modified if a tower had 
been built.) When should these get different items?

In the discussion where I raised some of these issues a couple of weeks ago, there has 
even been the suggestion that particular individual impressions of an 
engraving might deserve their own separate items; or even everything 
with a separate accession number, so if a museum had three copies of an 
engraving, we would make three separate items, each carrying their own 
accession number, identifying the accession number that belonged to a 
particular File.

(See also other sections at for 
further relevant discussions on how to represent often quite complicated 
relations with Wikidata properties).

With enough items, we could re-create and represent essentially the 
entire FRBR tree.

We could do this.  We may even need to do this, if the MM team's outline 
for Commons is to be implemented in its apparent current form.

But it seems to me that we shouldn't just sleepwalk into it.

It does seem to me that this represents (at least potentially) a 
*very* large expansion in the number of items, and a widening of the 
inclusion criteria, for what Wikidata is going to encompass.

I'm not saying it isn't the right thing to do, but given the potential 
scale of the implications, I do think it is something we do need to have 
properly worked through as a community, and confirmed that it is indeed 
what we *want* to do.

All best,


(Note that this is a slightly different discussion, though related, to 
the one I raised a few weeks ago as to whether Commons categories -- eg 
for particular sets of scans -- should necessarily have their own 
Q-number on Wikidata.  Or whether some -- eg some intersection 
categories -- should just have an item on Commons data.   But it's 
clearly related: is the simplest thing just to put items for everything 
on Wikidata?  Or does one try to keep Wikidata lean, and no larger than 
it absolutely needs to be; albeit then having to cope with the 
complexity that some categories would have a Q-number, and some would not.)

Gergo Tisza | 27 Sep 17:40 2014

UploadWizard funnel - findings and next steps

Hi all,

a little more detail from the funnel analysis of UploadWizard (if you haven't been following the other funnel thread, [[mw:UploadWizard/Funnel_analysis]] has a quick summary).

Users repeat the upload process many times

The main thing I am trying to understand at this point is why people use the "upload another file" button so much. UploadWizard allows uploading up to 50 files at the same time, which should be more than enough for the average user, but our click-tracking data shows that most people click through the tutorial-file-deed-details-thanks screens, then click on the upload more button (which effectively resets the process and starts again from the file screen), then click through the screens again, then click on the upload more button again, and again, and again. (Doing this fifty times in a row is not uncommon.) This suggests some fundamental failing in UW - Sage suggested it is the instability of uploading more than a few files at the same time. I wonder if others have relevant experience?

Errors do not seem to be the main problem

I have tried to identify the reason for failed UploadWizard sessions (a series of UploadWizard events logged on the same page which are not terminated by reaching the thanks page) by checking what the last event was, and assuming that for failed sessions caused by errors, that error would be the last event. Assuming this is sound, errors do not seem to be the main problem - they only appear at the end of ~25% of the failed sessions (which is ~8% of the total sessions).
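To make that heuristic concrete, here is roughly how the classification works (an illustrative sketch; the event names are simplified stand-ins for the real logging schema):

```javascript
// A session is the ordered list of events logged on one page.
// It succeeded if it reached the thanks screen; otherwise we attribute
// the failure to an error only when an error was the *last* event.
function classifySession( events, errorCodes ) {
	if ( events.indexOf( 'thanks' ) !== -1 ) {
		return 'success';
	}
	var last = events[ events.length - 1 ];
	return errorCodes.indexOf( last ) !== -1 ? 'failed-error' : 'failed-abandoned';
}
```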

Top errors

That said, here is a list of error codes (these are mostly API error codes, but a few are internal to UploadWizard) sorted by frequency, collected over ~1000 sessions:

| filename             |    20 |
| badtoken             |    19 |
| missingresult        |    14 |
| title                |    13 |
| publishfailed        |    11 |
| stasherror           |     7 |
| server-error         |     3 |
| fileexists-forbidden |     2 |
| filetype-banned-type |     1 |
| unknown              |     1 |
| verification-error   |     1 |
| unknownerror         |     1 |

A little explanation about the more frequent ones:
  • filename: these seem to be user errors - most often invalid filetype (doc, bmp etc), sometimes no extension at all or trying to add the same file twice.
  • badtoken: some sort of CSRF token expiration; bug 69691
  • missingresult: returned by the upload API in the details step when the uploaded file has gone missing; bug 43967
  • title: an error about duplicate files (i.e. the same file already exists on Commons) that somehow happens in the details step instead of the file step.
  • publishfailed: this seems to be some sort of race condition: the first API call to publish a file from stash puts it into the job queue and sets its status to pending; a second call will then throw this error.
  • stasherror: could be lots of things; bug 56302, bug 54028 and more.

Some suggestions based on the findings so far

Quick wins:
  • review UX for "fatal user errors" (i.e. when UploadWizard says "you can't upload this file type") - is the error message helpful?
  • review and improve API error messages (api-error-*), possibly overriding them with UW-specific ones. Do they identify next steps? Do they even exist? (e.g. api-error-publishfailed does not.)
  • renew token on badtoken error (bug 69691)
  • make sure that the specific error message thrown by ApiUpload::dieUsage gets logged somewhere. Currently we only log a generic message derived from the API error code, so e.g. all the dozen different UploadStashException subclasses are reported with the same message.
  • poll for success on publishfailed error (despite what its name suggests, it seems to actually mean something like "publish in progress")
Medium wins:
  • understand better why people repeat the upload process so often. This might reveal serious UX deficiencies or functional errors (e.g. in an older thread about funnel analysis, Sage claims uploading more than three files at the same time is too unreliable for him).
  • Investigate if there is a low-effort way to recover entered details when the upload process has to be restarted. (There are drop-in solutions like garlic.js or sisyphus.js but the very dynamic nature of UW forms might be a problem.)
  • figure out why some title errors are only reported in the details step
  • log information about uploaded files to better identify size- or filetype-specific issues
Bigger / longer-term effort:
  • figure out a way to retry when the user already entered all the details but publishing the file failed. (This points towards the per-file-workflow-instead-of-global-workflow direction.)
  • make stashed / async uploads rely on the database instead of the session (bug 43967)
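For the badtoken quick win above, the retry shape is simple; a sketch with an invented `api` stand-in (the real client and method names will differ):

```javascript
// Retry a publish call once with a fresh CSRF token on 'badtoken'.
// `api` is a hypothetical promise-based API client, not the real one.
function publishWithRetry( api, params ) {
	return api.post( params ).catch( function ( code ) {
		if ( code !== 'badtoken' ) {
			throw code;
		}
		// The cached token expired; fetch a new one and retry once.
		return api.getToken().then( function ( token ) {
			params.token = token;
			return api.post( params );
		} );
	} );
}
```

A second badtoken after a fresh token would propagate as a real failure, which is probably the right behavior for bug 69691.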