Brion Vibber | 24 May 18:44 2016

Variable bit rate encoding for video transcodes?

In the past, we've had a mixture of fixed bitrates and quality-based settings for producing video transcodes.

Each has its advantages: fixed bitrates are more predictable for watching while streaming, while fixed quality settings allow for reducing the bitrate on low-complexity scenes to save bandwidth (and increasing it on high-complexity scenes to keep quality up!)

Since "download and watch it later" is less of a thing on today's internet than "stream it right now!", I'd been leaning for a while towards moving more things to fixed bitrates. However, I'm starting to come down on the side of a fixed quality setting with a variable bitrate...

Overall, variable-rate encoding should lead to lower bandwidth usage for most parts of most files, while still maintaining high quality on the scenes that need it.


The downside is that a high-complexity scene encoded at a higher bitrate might exhaust the playback buffer, even though playback had been keeping up fine on earlier, lower-bitrate scenes.

Once we support adaptive streaming (using MPEG-DASH, or something like it), the system should be able to provide a detailed enough manifest[1] to show which segments of the file are low-bandwidth and which are high-bandwidth. If a bandwidth limitation stops us from viewing one particular segment at the current resolution, we can bump the resolution down, then bump it back up again when the bandwidth usage goes down.

If there's no strong objection, I'm going to tinker with the quality settings for WebM and Ogg Theora video transcodes to try to find quality settings I'm happy with that result in reasonable bandwidth averages.
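
For the curious, here's roughly what the two modes look like at the ffmpeg level. This is only a sketch of the knobs involved (assuming ffmpeg built with libvpx, libtheora and libvorbis), not the actual TimedMediaHandler settings:

import subprocess

def encode_webm_fixed_bitrate(src, dst, kbps=512):
    """Predictable bandwidth: every scene gets the same budget."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libvpx", "-b:v", str(kbps) + "k",
        "-c:a", "libvorbis", "-q:a", "4",
        dst,
    ], check=True)

def encode_webm_constant_quality(src, dst, crf=30):
    """Fixed quality: bitrate floats with scene complexity
    (-b:v 0 enables libvpx constant-quality mode)."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libvpx", "-crf", str(crf), "-b:v", "0",
        "-c:a", "libvorbis", "-q:a", "4",
        dst,
    ], check=True)

def encode_ogv_constant_quality(src, dst, q=7):
    """Theora's equivalent knob is a 0-10 quality scale."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libtheora", "-q:v", str(q),
        "-c:a", "libvorbis", "-q:a", "4",
        dst,
    ], check=True)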


[1] An MPEG-DASH manifest (.mpd) specifies a target bitrate on each resolution representation, but the actual segments can be different sizes. When they're specified as byte ranges of a source file, the exact segment size is conveniently available!
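
To make that concrete, here's a rough sketch (not tied to any real TMH output) of how a tool or player could derive per-segment bitrates from such a manifest, assuming a SegmentList whose SegmentURL entries carry mediaRange byte ranges:

import xml.etree.ElementTree as ET

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def segment_bitrates(mpd_path):
    """Yield (representation id, segment index, kilobits/sec) tuples."""
    root = ET.parse(mpd_path).getroot()
    for rep in root.iter("{urn:mpeg:dash:schema:mpd:2011}Representation"):
        seg_list = rep.find("mpd:SegmentList", NS)
        if seg_list is None or seg_list.get("duration") is None:
            continue
        timescale = float(seg_list.get("timescale", "1"))
        seconds = float(seg_list.get("duration")) / timescale
        for i, seg in enumerate(seg_list.findall("mpd:SegmentURL", NS)):
            # exact segment size falls straight out of the byte range
            first, last = (int(x) for x in seg.get("mediaRange").split("-"))
            kbps = (last - first + 1) * 8 / seconds / 1000
            yield rep.get("id"), i, kbps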

-- brion

_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Brion Vibber | 5 May 15:49 2016

Reviving SVG client-side rendering task

For the last decade we've supported uploading SVG vector images to MediaWiki, but we serve them as rasterized PNGs to browsers. Display resolutions keep going up and up, but so does concern about low-bandwidth mobile users.


This means we'd like sharper icons and diagrams on high-density phone displays, but are leery of adding extra srcset entries with 3x or 4x size PNGs, which could become very large. (In fact, MobileFrontend currently strips even the 1.5x and 2x renderings we have now, making diagrams very blurry on many mobile devices. See https://phabricator.wikimedia.org/T133496 - a fix is in the works.)


Here's the base bug for SVG client side rendering: https://phabricator.wikimedia.org/T5593
I've turned it into an "epic" story tracking task and hung some blocking tasks off it; see those for more details.

TL;DR stop reading here. ;)


One of the basic problems in the past was reliably showing SVGs natively in an <img>, with the same behavior as before, without using JavaScript hacks or breaking the HTML caching layer. This is neatly resolved for current browsers by using the "srcset" attribute -- the same one we use to specify higher-resolution rasterizations. If, instead of PNGs at 1.5x and 2x density, we specify an SVG at 1x, the SVG will be loaded instead of the default PNG.

Since all srcset-supporting browsers allow SVG in <img>, this should "just work", and it will be more compatible than using the experimental <picture> element or the classic <object>, which handles events differently. Older browsers will still see the PNG, and we can tweak the jquery.hidpi srcset polyfill to test for SVG support to avoid breaking on some older browsers.
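
To illustrate (this is a hypothetical helper, not MediaWiki's actual thumbnail/Linker code), the markup change amounts to something like:

def thumb_img_tag(png_url, svg_url, width, height, alt=""):
    # Keep the rasterized PNG as the default src for older browsers; offer the
    # SVG as the 1x srcset candidate, which srcset-capable browsers will pick.
    return (
        '<img src="{png}" srcset="{svg} 1x" '
        'width="{w}" height="{h}" alt="{alt}">'
    ).format(png=png_url, svg=svg_url, w=width, h=height, alt=alt)

# e.g. thumb_img_tag("/thumb/Example.svg/220px-Example.svg.png",
#                    "/images/Example.svg", 220, 140)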

This should let us start testing client-side SVG via a beta feature (with parser cache split on the user pref) at which point we can gather more real-world feedback on performance and compatibility issues.


Rendering consistency across browser engines is a concern. Supposedly modern browsers are more consistent than librsvg, but we haven't done a compatibility survey to confirm this or to identify problematic constructs. This is probably worth doing.


Performance is a big question. While clean simple SVGs are often nice and small and efficient, it's also easy to make a HUGEly detailed SVG that is much larger than the rasterized PNGs. Or a fairly simple small file may still render slowly due to use of filters.

So we probably want to provide good tools for our editors and image authors to help optimize their files: show the renderings and the bandwidth balance versus rasterization; maybe provide an in-wiki implementation of svgo or other lossy optimizer tools; warn about things that are large or render slowly; maybe provide a switch to always run particular files through rasterization.
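
As a toy example of that bandwidth-balance check (the file names and the comparison rule are made up for illustration), something along these lines could flag files where client-side SVG is actually a loss:

import gzip
import os

def bandwidth_balance(svg_path, png_paths):
    # Compare what the gzipped SVG would cost on the wire against each of the
    # PNG renderings we'd otherwise serve.
    with open(svg_path, "rb") as f:
        svg_wire_size = len(gzip.compress(f.read()))
    png_sizes = {p: os.path.getsize(p) for p in png_paths}
    return svg_wire_size, png_sizes

# svg_size, pngs = bandwidth_balance("Diagram.svg",
#                                    ["Diagram-220px.png", "Diagram-440px.png"])
# if svg_size > max(pngs.values()):
#     print("warning: the SVG is heavier than the largest raster fallback")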

And we'll almost certainly want to strip comments and whitespace to save bandwidth on page views, while retaining them in the source file for download and re-editing.
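
A minimal sketch of that stripping step, assuming a server-side transformation that's allowed to rewrite the served copy (ElementTree's default parser already drops comments; real files with CDATA or embedded CSS would need a more careful pass):

import xml.etree.ElementTree as ET

def minify_svg(src_path):
    # Comments are silently discarded by the default parser; on top of that,
    # squeeze out whitespace-only text nodes between elements.
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    ET.register_namespace("xlink", "http://www.w3.org/1999/xlink")
    root = ET.parse(src_path).getroot()
    for el in root.iter():
        if el.text is not None and not el.text.strip():
            el.text = None
        if el.tail is not None and not el.tail.strip():
            el.tail = None
    return ET.tostring(root, encoding="unicode")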


Feature parity also needs more work. Localized text in SVGs is supported by our server-side rendering, but this won't be reliable in the client, which means we'll want to perform a server-side transformation that creates per-language "thumbnail" SVGs. Fonts for internationalized text are a big deal, and may require similar transformations if we want to serve them, which may mean additional complications and bandwidth usage.


And then there are longer-term goals of taking more advantage of SVG's dynamic nature -- making things animated or interactive. That's a much bigger question and has implementation and security issues!

-- brion
_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Brion Vibber | 25 Apr 13:41 2016

Hypermedia linking in new media types ("Myst for Wikipedia")

At Wikimedia Conference in Berlin I met with Felix from Wikimedia Ghana, who is super interested in getting more immersive media available, such as 360-degree panoramic photos ("photo spheres"). I showed him the Tool Labs widget that uses Pannellum to do WebGL spherical photo viewing -- see https://phabricator.wikimedia.org/T70719#2204864 -- and he was very excited to see that it's something we could probably work out how to integrate in the nearish term.


That got me thinking more generally about new media types (video, panos, stereoscopic photos/videos/panos, 3D models, interactive diagrams, etc) and how we can extend them to support annotations and linking in a way that could create immersive visual experiences with the same kind of rich information and interlinking that Wikipedia is famous for in the world of text articles.

Ladies and gentlemen, I give you: "Epic saga: immersive hypermedia (Myst for Wikipedia)"

I would be real interested to hear y'all's ideas on medium to long term feasibility and desirability of this sort of system, and what we can pull more directly into the short term.

For instance, I would love to get the panoramic / spherical viewers integrated into MMV, which is much easier than figuring out how to do clickable annotations in a 3D environment. ;)


Medium term, I would also love to see us look at the annotation system on Commons that's currently done in site JS, and see if we can build a future-extensible system that's more integrated into the wiki and can be used in MMV.

Longer term, I think it'll just be nice to have these kinds of long-term goals to work towards.

Thoughts? Ideas? Am I crazy, or just crazy enough? ;)

-- brion
_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Brion Vibber | 13 Apr 21:41 2016

Preparing for VP9/Opus video/audio transcodes

In addition to the smaller bandwidth requirements for VP9 video encoding versus Theora or VP8, Microsoft is adding support for VP9 video and Opus audio in WebM to Windows 10 in the summer 2016 update.

Currently in Win10 preview builds this only works in Edge when using Media Source Extensions, and VP9 is disabled by default if not hardware-accelerated, but it's coming along. :)

If the final version lands with suitable config, users of Edge version 15 and later shouldn't need the ogv.js JavaScript decoding shim to get media playback on Wikipedia. Neat!

Things still to do in TimedMediaHandler:
* add transcode output for audio-only files as Opus in WebM container (Brion)
* keep working on the Kaltura->VideoJS front-end switch to make our lives easier fixing the UI (Derk-Jan & Brion) and to prep for...
* eventually we'll want to use MPEG-DASH manifests and Media Source Extensions to implement playback that's responsive to network and CPU speed and can switch resolutions seamlessly. This may or may not be a prerequisite for Win10 Edge playback if MS sticks with the MSE requirement.
* consider improving the transcode status overview at Special:TimedMediaHandler; it reports errors in a way that doesn't scale well.

Things still to do in Wikimedia site config:
* add VP9/Opus transcodes to our config (audio, 240p, 360p, 480p, 720p, 1080p definitely; consider 1440p and 2160p for Ultra-HD videos)
* consider dropping some VP8 sizes (desktop browsers that support VP8 should all support VP9 now; old Android versions that don't grok VP9 might be the main remaining target for VP8)

Things to consider:
* VP9 is slower to encode than VP8, and a transition will require re-running transcodes over a lot of existing files. We *will* need to assign more video scalers, at least temporarily.
* I started writing a client-side bot to trigger new transcodes on old files (a rough sketch follows this list). Would you prefer I finish that, or prep a server-side script that someone in ops will have to babysit?
* in future, the ogv.js JS decoder shim will still be used for Safari and IE 11, but I may be able to shift it from Theora to VP9 after making more fixes to the WebM demuxer. Decoding is slower per pixel, but at a lower resolution you often get higher quality because of better compression and handling of motion -- and bandwidth usage is much better, which should make it a win on iPhones. This means eventually we may be able to reduce or drop the Ogg output. It will also tie in with MPEG-DASH adaptive streaming, so we should be able to pick the best size for a device's CPU speed more reliably than with the current heuristic.
* longer term, the AOMedia codec will arrive (an initial code drop, based on VP10, came out recently) with definite support from Google, Mozilla, and Microsoft. It should end up supplementing VP9 in a couple of years, and should be even more bandwidth-efficient at high resolutions.
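
Here's roughly what the client-side bot would look like, assuming TimedMediaHandler's transcodereset API module and an account with the necessary rights; the category and derivative key below are placeholders:

import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()  # assume login/cookies are already handled

def csrf_token():
    r = session.get(API, params={"action": "query", "meta": "tokens",
                                 "type": "csrf", "format": "json"})
    return r.json()["query"]["tokens"]["csrftoken"]

def video_titles(category="Category:Videos"):  # placeholder category
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": category, "cmtype": "file",
              "cmlimit": "max", "format": "json"}
    while True:
        data = session.get(API, params=params).json()
        for page in data["query"]["categorymembers"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def reset_transcodes(title, key=None):
    data = {"action": "transcodereset", "title": title,
            "token": csrf_token(), "format": "json"}
    if key:
        data["transcodekey"] = key  # a specific derivative; key format assumed
    return session.post(API, data=data).json()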

-- brion

_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Gilles Dubuc | 13 Apr 15:11 2016

Image scaling on the fly (closed source, AWS only)

It claims to resize JPEGs to "any size" in 25ms or less on m3.medium AWS instances.

While this is closed source, and is part of a worrying trend of closed SaaS frameworks one has to pay by the hour, the author reveals enough technical details to figure out how it works. Namely:

- the inspiration for the code is an unnamed Japanese paper which describes how to process the Y, U and V components of a JPEG in parallel
- it uses a technique similar to the jpeg:size option of ImageMagick, whereby only parts of the JPEG are read, instead of every pixel, according to the needed target thumbnail size
- it leverages "vector math" in the processor, which I assume means AVX instructions and registers

Essentially, it's parallelized decoding and resizing of JPEGs, using hardware-specific instructions for optimization.
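
Not their code, obviously, but the underlying trick is easy to demonstrate with Pillow's JPEG draft mode, which asks the decoder to downscale by 1/2, 1/4 or 1/8 in the DCT domain so that most pixels are never fully decoded:

from PIL import Image

def fast_jpeg_thumbnail(src_path, dst_path, size=(320, 240)):
    img = Image.open(src_path)
    img.draft("RGB", size)                 # decoder-level approximate downscale
    img = img.resize(size, Image.LANCZOS)  # precise final resample
    img.save(dst_path, "JPEG", quality=80)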

Of course writing something similar would be a large undertaking. Let's hope that the folks who work on ImageMagick/GraphicsMagick take note and try to do just that :)

I have confirmation from pals at deviantArt (whose infrastructure is on AWS) who tried it out that it's extremely fast -- to the point that they're likely getting rid of their storage of intermediate resized images.

I have a feeling that we'll be seeing more of this sort of hardware-optimized JPEG decoding/transcoding once Intel releases their first CPUs with integrated FPGAs, which is supposed to happen soon-ish. Unfortunately these Xeon CPUs will be released "in limited quantities, to cloud providers first". Here's that annoying trend again...
_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Gilles Dubuc | 13 Apr 14:54 2016

Facebook surround 360

Facebook just announced this 360 video camera, whose hardware and stitching software will be open-sourced and released this summer.
_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Daniel Mietchen | 18 Mar 12:54 2016

Candidature for the European Commission Open Science Policy Platform

Dear all,

The European Commission is setting up an advisory group for its Open
Science Agenda:
https://ec.europa.eu/research/openscience/index.cfm?pg=open-science-policy-platform

This group - called 'Open Science Policy Platform' (OSPP) - has an
open call for candidates, and I am amongst those who have publicly
declared their interest in serving on this group.

Since many of my Open Science activities have a Wikimedia component, I
posted my candidacy at
https://meta.wikimedia.org/wiki/User:Daniel_Mietchen/European_Commission_Open_Science_Policy_Platform
and would welcome endorsements from individuals and groups, especially
if you are active at the interface of Open Science with Wikimedia or
other open movements.

I also set up
https://meta.wikimedia.org/wiki/Research:European_Commission_Open_Science_Policy_Platform
in case others here would like to serve as candidates as well, which I
would welcome.

Thanks and cheers,

Daniel

_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Claudia Garád | 10 Mar 13:23 2016

[Wikivideo-l] Invitation: Explainer video workshop during Wikimania Pre-Conference

Hi everyone,

Are you coming to Esino Lario in June and looking for some interesting things to do during the Wikimania Pre-Conference? Well, perhaps this is for you: like last year, we plan to host a workshop for creating animated explainer videos for Wikimedia projects:
https://wikimania2016.wikimedia.org/wiki/Explainer_Video_Workshop

Videos have become a popular format for the dissemination of information - especially short animated explainer videos, which provide a quick overview of complex topics. The same principle applies to Wikipedia: videos can enrich existing content and provide entertaining and easily comprehensible access to free knowledge. Together with experts from the simpleshow foundation, Wikimedia Österreich offers workshops for Wikimedians, designed to teach the tricks of the trade of screenwriting for explainer videos.
The workshop might also be interesting for people who attended last year, as we will also introduce a new freeware tool that enables you to produce the videos yourself!

Like the other submissions, we need at least 3-5 interested Wikimedians to sign up for the workshop, so please do so at the link above if you are interested :-)

Content of the workshop

   * Interested participants will learn the basic skills for creating plots for explainer videos: how to explain complex topics in a short and comprehensible way, how to work with cut-out animation, storyboard conception, and visualization.
   * The participants can create and refine their scripts and storyboards together with the simpleshow foundation after the workshop.
   * Depending on the topic and previous research, participants can turn their scripts directly into an illustrated video using the “mysimpleshow” freeware.
   * The realization and production of the finished storyboard could be done either with the “mysimpleshow” freeware or by the simpleshow foundation. The result will be published under the CC BY-SA license on Wikimedia Commons.
   * The screenwriters can use the videos for Wikipedia articles and other purposes.

What do you need?

   * Interest in storytelling
   * A topic for an explainer video
   * Initial research on the topic (at the level of a Wikipedia article)
   * No previous knowledge of screenwriting or video production is necessary

Please sign up here:
https://wikimania2016.wikimedia.org/wiki/Explainer_Video_Workshop

Claudia
--
Claudia Garád
Executive Director

Wikimedia Österreich - Verein zur Förderung Freien Wissens
Siebensterngasse 25/15
1070 Wien
0699 141 28615
www.wikimedia.at
_______________________________________________
Wikivideo-l mailing list
Wikivideo-l <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikivideo-l
Mark Holmquist | 16 Feb 18:55 2016

ImageTweaks demo now on display at Wikimedia Labs

Hi Multimedia enthusiasts, Commonists, and Wikitechers,

The Multimedia team has been hard at work building a new extension for 
editing images on-wiki, and we believe we now have a workable demo 
running on Labs! You can find it on our Multimedia Alpha Wiki[0], where 
there are also instructions for testing.

Note that we have a list of known bugs and failings on that wiki, and we 
are working on getting those fixed before we push the extension into any 
kind of deployment - our next steps will likely be to put it on 
test2wiki, then to push a BetaFeature to Commons if all goes well. We 
will keep you updated with the status of the project as we progress.

If you find more bugs, or have concerns about this extension, you can 
share them on the Village Pump[1]. You can also file a Phabricator task 
against ImageTweaks[2] if you prefer to be in more direct contact with 
the team about a technical issue.

Thanks for helping us test new stuff, and I look forward to getting this 
great tool out to you soon!

[0] http://multimedia-alpha.wmflabs.org/wiki/index.php/Main_Page
[1] 
https://commons.wikimedia.org/wiki/Commons:Village_pump#ImageTweaks_extension_now_on_display_at_Labs
[2] 
https://phabricator.wikimedia.org/maniphest/task/create/?projects=ImageTweaks

-- 
Mark Holmquist
Lead Engineer
Multimedia Team
Wikimedia Foundation
http://marktraceur.info

_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Mark Holmquist | 25 Jan 15:58 2016

Multimedia data being crunched, expanded - first look

Hello, friends!

We have some preliminary numbers and graphs for Commons, English 
Wikipedia, and German Wikipedia on the following:

* Uploads per month
* Unique uploaders per month
* New uploaders per month
* Cross-wiki uploads per month (currently wonky; a patch is in to fix it)
* UploadWizard uploads per month (based on categories, might be flawed)

You can find the graphs here:

https://edit-analysis.wmflabs.org/multimedia-health

The raw numbers are available, if you're into it:

http://datasets.wikimedia.org/limn-public-data/metrics/multimedia-health/

These numbers will automatically update each month, and we have 
historical data as far back as is necessary (but feel free to disagree 
with that assessment).

Upcoming numbers:

* Uploaders by tool per month (i.e. people using UW, CWU, etc.)
* New uploaders by tool per month
* Deletions

Numbers I want but haven't totally sussed out how to find (but I'm close!):

* Number of pages with images per month
* Number of images on pages per month

All of those numbers and graphs will show up in the same places (see 
links above) and will also be updated automatically, so we never have to 
think about implementing metrics ever again.

If you want to mess up my code, you can try to do so in the 
analytics/limn-multimedia-data repository on gerrit, and the 
configurations for Dashiki are here:

https://meta.wikimedia.org/wiki/Config:MultimediaHealth
https://meta.wikimedia.org/wiki/Dashiki:CategorizedMetrics

Let me know if you have any questions, suggestions, complaints, or 
praise for these efforts - I'm available on- or off-list, on 
Phabricator, or on IRC in the #wikimedia-multimedia channel as always :)

And, side plug, the wonderful Analytics humans who brought you the 
reportupdater and Dashiki tools can be found on the analytics list (one 
of the addressees of this message) or in #wikimedia-analytics.

Thanks everyone, here's to more great numbers this year!

-- 
Mark Holmquist
Lead Engineer, Multimedia
Wikimedia Foundation
mtraceur <at> member.fsf.org
http://marktraceur.info

_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
Pine W | 15 Jan 01:24 2016

Commons upload wizard

Hi Multimedians,

Are there any significant interface or capability changes planned in the near future for the Commons upload wizard? The reason that I ask is that I plan to demonstrate the use of the wizard in my IEG-funded educational video project, and I would like to future-proof the content to the extent that it's possible to do so.

Thank you,

Pine
_______________________________________________
Multimedia mailing list
Multimedia <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/multimedia
