Amir Ladsgroup | 26 Jul 17:00 2016

The Revision Scoring weekly update

This is the 14th weekly update from the revision scoring team sent to this
mailing list.

New developments:

   - We have a new Grafana dashboard for the requests made by the ORES beta
   feature [1] [2]

   - There is a Grafana chart for the ORES service showing the total number
   of requests (including requests that hit the cache). It shows we had 20
   million requests last month! [3]

   - The ORES extension can now be set to score edits only in a given set of
   namespaces. This reduced the job failure rate in the ORES extension from
   4% to 1% [4]

   - Scap3 now automatically deploys ORES to a canary node first
   (scb2001.codfw.wmnet), so we can better test deployments beforehand. [5]

Maintenance and robustness:

   - We are removing a redundant index on ores_classification. It does no
   harm, but removing it will save disk space. [6]

   - The footer said "Hosted in Wikimedia Labs". This has now been
   fixed. [7]

   - Cache.php in the ORES extension, which stores scores in the database,
   now has a better structure for handling errors. [8]
(Continue reading)

Purodha Blissenbach | 25 Jul 23:34 2016

How to find a repository?

Hi, it says it belongs to Tool-Labs-tools-Pageviews, but how do I find the
repository for it if I want to work on it? Gerrit does not find anything
when I enter this string in its repository search.
Is there any tool or tutorial for this question?


Wikitech-l mailing list
Wikitech-l <at>
Tyler Cipriani | 25 Jul 21:35 2016

Canary Deploys for MediaWiki

tl;dr: The next version of Scap (to be released Soon™) will deploy to canary servers first and check for error-log spikes.

In light of recent incidents[0] that caused outages accompanied by large, easily detectable
error-rate spikes, a patch has recently landed in Scap[1] that will:

    1. Push changes to a set of canary servers[2] before syncing to proxy servers
    2. Wait a configurable length of time (currently 20 seconds[3]) for any errors to have time to make
themselves known
    3. Query Logstash (using a script written by Gabriel Wicke[4]) to determine if the error rate has increased
over a configurable threshold (currently 10-fold[5])
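For illustration only, the three steps above can be sketched in Python. Everything here is a hypothetical stand-in, not Scap's actual API: `sync_to` and `error_rate` are injected callbacks, and the constants mirror the configurable wait and threshold described above.

```python
import time

CANARY_WAIT_SECONDS = 20     # configurable: time for errors to surface
ERROR_SPIKE_THRESHOLD = 10   # configurable: abort on a 10-fold increase


def canary_deploy(sync_to, error_rate, canaries, targets,
                  force=False, wait=CANARY_WAIT_SECONDS):
    """Sketch of the canary-check flow: sync to canaries, wait,
    compare error rates, then either continue or abort."""
    baseline = error_rate(canaries)        # errors/sec before the sync
    sync_to(canaries)                      # step 1: push to canary servers
    time.sleep(wait)                       # step 2: let errors surface
    if not force:                          # --force skips the check entirely
        current = error_rate(canaries)     # step 3: query the error rate
        if baseline and current / baseline >= ERROR_SPIKE_THRESHOLD:
            raise RuntimeError("error rate spiked on canaries; aborting sync")
    sync_to(targets)                       # canaries look healthy: full sync
```

The real check queries Logstash rather than a local callback, and both the wait and the threshold are read from configuration.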

Big thanks to the folks that helped in this effort: Gabriel Wicke, Filippo Giunchedi and Giuseppe
Lavagetto, Bryan Davis and Erik Bernhardson (for their mad Logstash skillz)!

It is noteworthy that in instances where expedience is required—we're in the middle of an outage and who
cares what Logstash has to say—the `--force` flag can be added to skip canary checks altogether (i.e.
`scap sync-file --force wmf-config/InitialiseSettings 'Panic!!'`).

The RelEng team's eventual goal is still to move MediaWiki deployments to the more robust and resilient
Scap3 deployment framework. There is some high-priority work that has to happen before the Scap3 move. In
the interim, we are taking steps (like this one) to respond to incidents and keep deployments safe.

Hopefully, this work and the error-rate alert work from Ori last week[6] will allow everyone to be more
conscientious and more keenly aware of deployments that cause large aberrations in the rate of errors.

Your Friendly Neighborhood Release Engineering Team

[0]. is the most recent example I could find, but there have been others.
(Continue reading)

Adam Baso | 25 Jul 21:18 2016

Wednesday, 3-August-2016 CREDIT


The next CREDIT showcase is Wednesday, 3-August-2016 at 1800 UTC (1100 SF).

Please add your demos to the Etherpad.

See you soon!
Greg Sabino Mullane | 22 Jul 16:29 2016

Re: MediaWiki pingback

> The configuration variable that controls this behavior ($wgPingback) will
> default to false (that is: don't share data). The web installer will
> display a checkbox for toggling this feature on and off, and it will be
> checked by default (that is: *do* share data). This ensures (I hope) that
> no one feels surprised or violated.

Sounds sane, as long as the installer makes it quite clear what it is going 
to be doing.

> - The chosen database backend (e.g., "mysql", "sqlite")

Would love to have DB version information as well (getServerVersion)

Lua version?

> Please chime in if you have any thoughts about this. :)

Many of the wikis I install are on intranets behind heavy firewalls. However, I'd be
happy to submit this data if there were an optional method to do so.


Greg Sabino Mullane greg <at>
End Point Corporation
PGP Key: 0x14964AC8
(Continue reading)

Birgit Müller | 22 Jul 15:53 2016

RevisionSlider: First round of deployment as a beta feature

(sorry for cross posting)

Hi all,

on July 21, 23:00-00:00 (UTC), RevisionSlider was deployed as a beta feature
on the German, Arabic, and Hebrew Wikipedias.

The RevisionSlider extension adds a slider interface to the diff view, so
that you can easily move between revisions. It lets users view the edit
summaries and other metadata of all revisions while hovering over the
slider interface. At present, the last 500 revisions can be
loaded. [1]

The RevisionSlider extension was developed by Wikimedia Deutschland's TCB
team and fulfills a wish from the German-speaking community's Technical
Wishlist. [2] It is based on a rough prototype by the Community Tech team,
whom we love collaborating with. [3]

The feature was already presented at WMF's last metrics meeting, so if you
are interested in hearing more about it, you can also have a look at the
video recording. [4]

Why German, Arabic and Hebrew Wikipedia in the first round?

As a first step, we want to see whether RevisionSlider works well on both
LTR and RTL Wikipedias. The decision for German Wikipedia is probably
obvious, as RevisionSlider addresses a wish from the German-speaking
community. But the team also put some work into optimizing the feature for
RTL languages, and we talked to people from the Arabic and Hebrew
Wikipedias at Wikimania. Both communities created a site request ticket to deploy
(Continue reading)

Ori Livneh | 22 Jul 02:29 2016

MediaWiki pingback

What proportion of MediaWiki installations run on 32-bit systems? How much
memory is available to a typical MediaWiki install? How often is the Oracle
database backend used?

These are the kinds of questions that come up whenever we debate changes
that impact compatibility. More often than not, the questions go
unanswered, because we don't have good statistical data about the
environments in which MediaWiki is running.

Starting with version 1.28, MediaWiki will provide operators with the
option of sharing anonymous data about the local MediaWiki instance and its
environment with MediaWiki's developer community via a pingback to a URL
endpoint on

The configuration variable that controls this behavior ($wgPingback) will
default to false (that is: don't share data). The web installer will
display a checkbox for toggling this feature on and off, and it will be
checked by default (that is: *do* share data). This ensures (I hope) that
no one feels surprised or violated.

The information that gets sent is described in <>. Here is a
summary of what we send:

- A randomly-generated unique ID for the wiki.
- The chosen database backend (e.g., "mysql", "sqlite")
- The version of MediaWiki in use
- The version of PHP
- The name and version of the operating system in use
- The processor architecture and integer size (e.g. "x86_64")
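As a rough illustration, a payload carrying the fields summarized above could be assembled like this. The key names and the helper function are hypothetical, not MediaWiki's actual pingback schema (which is the one described in the linked document):

```python
import json
import platform
import struct
import uuid


def build_pingback_payload(db_type="mysql", mw_version="1.28.0",
                           php_version="7.0.8"):
    """Illustrative pingback payload; field names are made up here."""
    return {
        "wiki_id": uuid.uuid4().hex,                # random unique ID for the wiki
        "database": db_type,                        # e.g. "mysql", "sqlite"
        "mediawiki": mw_version,                    # MediaWiki version in use
        "php": php_version,                         # PHP version
        "os": f"{platform.system()} {platform.release()}",  # OS name and version
        "arch": platform.machine(),                 # e.g. "x86_64"
        "int_size_bits": struct.calcsize("P") * 8,  # native pointer/integer width
    }


body = json.dumps(build_pingback_payload())  # what would be POSTed to the endpoint
```

Nothing in this sketch identifies the operator: the wiki ID is random and the rest is coarse environment data.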
(Continue reading)

Pine W | 22 Jul 01:19 2016

How widely is Flow being used?

Hi Wikitech,

I believe that Flow is being used on an opt-in basis on Wikidata and
Catalan Wikipedia, and is used on user talk pages by default on MediaWiki.

Is that correct?

Are there any wikis other than MediaWiki where Flow is enabled on all talk
pages by default?

Are there any wikis where Flow is widely used on an opt-in basis, even if
it's not the default?


Chad Horohoe | 21 Jul 17:08 2016

Gerrit downtime this Monday!


So the time is upon us to finally upgrade Gerrit. I thank everyone in
advance for all of your patience
and good work testing things out for us. The plan is outlined in detail on
Phabricator[0], but I'll give
everyone the short version here.

The downtime will be on Monday, July 25th from 01:00 to 04:00 UTC; that's
Sunday night for those of us who are US-based. This time was picked because
it's one of our lowest-traffic times on Gerrit. There's never a *good* time
to bring it down, it's always being used, but this window should affect the
fewest users. I do not anticipate the process actually taking the full 3
hours, but I'm giving us a lot of extra time just in case.

Jaime will be on hand to assist with a final DB snapshot of the old version
and possible rollback,
and Daniel Z. is going to assist me with the puppet work. Most of this is
already prepared so the
amount of "change" to do the swap has been kept to a minimum.

Gerrit is a critical service to all developers, so the plan includes a
generous provision to roll back if things are not operating properly--I'd
rather we be on the old, working version on Monday morning than be stuck
broken going into the work week.

(Continue reading)

James Forrester | 20 Jul 23:53 2016

Anyone use the ImageMetrics extension's data?


From what we in Multimedia can tell, the ImageMetrics extension was put
into production as an EventLogging source to measure data about users and
images, to make decisions about MediaViewer whilst it was being actively
developed.
As that's now in the past, is there any interest in continuing to store
this data, or can we kill it and reduce the number of extensions in
production by one?

Follow-up on please.


James D. Forrester
Lead Product Manager, Editing
Wikimedia Foundation, Inc.

jforrester <at> |  <at> jdforrester
Daniel Barrett | 20 Jul 21:58 2016

Help with unit test involving MovePage and caching in 1.27?

In a custom extension, I have a unit test that was working fine in MediaWiki 1.26, but it fails in MediaWiki
1.27. I'd appreciate any pointers in the right direction.

The extension performs a page move, more or less like this (but with error checking):

  $mp = new MovePage($source, $destination);
  $result = $mp->move($context->getUser(), $reason, false);

The code works fine on a running wiki, but in a unit test under MW 1.27, I get this error:

  MWException: No valid null revision produced in MovePage::moveToInternal

The test seems to be running into issues with the LinkCache. The call to MovePage::move hits this line:

  $pageid = $this->oldTitle->getArticleID( Title::GAID_FOR_UPDATE );

which returns zero instead of the page's proper article ID. (This did not happen in MediaWiki 1.26.) Inside
of Title::getArticleID we find these lines:

  if ( $flags & self::GAID_FOR_UPDATE ) {
    $oldUpdate = $linkCache->forUpdate( true );
    $linkCache->clearLink( $this );
    $this->mArticleID = $linkCache->addLinkObj( $this );
    $linkCache->forUpdate( $oldUpdate );
  }

The call to LinkCache::addLinkObj($this) returns zero. As a result, MovePage::moveInternal fails and
we get the error message shown above.

So, I am wondering if anyone has any insights into why this began happening with MW 1.27, and what directions I
(Continue reading)