Brion Vibber | 1 Oct 04:42 2007

Re: Arabic OTRS encoding

Muhammad Alsebaey wrote:
> Actually, I just found out you are correct, I just scanned the messages and
> almost all problematic ones come from Yahoo mail :) , however it seems like
> more than 75% of our traffic on the Arabic queue comes from Yahoo? that's
> why I had the impression it is a general problem.
[snip]
> One note though, when I change the encoding to Windows-1256 in the plain
> view I can read it, that shouldn't be the case if the encoding is ISO-8859-1?

There are two parts to the problem:

1) The original mailer marks the mail with the wrong encoding

2) OTRS believes the incorrect header and thus converts the text to
Unicode incorrectly for web output.

As a result, you can't change the encoding setting of the web browser to
fix it.

However, OTRS also does its 'plain view' incorrectly by outputting the
raw original bytes instead of converting to Unicode to match the
surrounding page -- which has the convenient property that you *can*
change the encoding setting in the browser.

So I'd recommend using that as a workaround for the moment until we
figure out a way to patch up OTRS to handle it better.
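
For what it's worth, the damage is reversible in principle: if the raw
bytes really were Windows-1256 but got decoded as strict ISO-8859-1, you
can re-encode back to the original bytes and then decode them with the
right charset. A rough PHP sketch of the idea (OTRS itself is Perl, and
the variable names here are invented):

// undo the wrong ISO-8859-1 decode to recover the original bytes,
// then decode those bytes as Windows-1256
$bytes = iconv('UTF-8', 'ISO-8859-1', $garbledUtf8);
$fixed = iconv('WINDOWS-1256', 'UTF-8', $bytes);

This only works if the intermediate decode really was plain ISO-8859-1
rather than, say, Windows-1252.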

[snip]
> The ticket Mido posted has someone who replied back that he received
> gibberish.

Mohamed Magdy | 1 Oct 05:50 2007

Re: Arabic OTRS encoding

Will OTRS change the encoding of the outgoing message if we change the
encoding to Windows-1256 (or whatever) in the send form?
brion | 1 Oct 09:16 2007

MediaWiki automated test run failure 2007-10-01

An automated run of parserTests.php showed the following failures:

This is MediaWiki version 1.12alpha (r26265).

Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
Reading tests from "extensions/LabeledSectionTransclusion/lstParserTests.txt"...

  17 still FAILING test(s) :(
      * URL-encoding in URL functions (single parameter)  [Has never passed]
      * URL-encoding in URL functions (multiple parameters)  [Has never passed]
      * Table security: embedded pipes
(http://lists.wikimedia.org/mailman/htdig/wikitech-l/2006-April/022293.html)  [Has never passed]
      * Link containing double-single-quotes '' (bug 4598)  [Has never passed]
      * message transform: <noinclude> in transcluded template (bug 4926)  [Has never passed]
      * message transform: <onlyinclude> in transcluded template (bug 4926)  [Has never passed]
      * BUG 1887, part 2: A <math> with a thumbnail- math enabled  [Has never passed]
      * HTML bullet list, unclosed tags (bug 5497)  [Has never passed]
      * HTML ordered list, unclosed tags (bug 5497)  [Has never passed]
      * HTML nested bullet list, open tags (bug 5497)  [Has never passed]
      * HTML nested ordered list, open tags (bug 5497)  [Has never passed]
      * Inline HTML vs wiki block nesting  [Has never passed]
      * Mixing markup for italics and bold  [Has never passed]
      * dt/dd/dl test  [Has never passed]
      * Images with the "|" character in the comment  [Has never passed]
      * Parents of subpages, two levels up, without trailing slash or name.  [Has never passed]
      * Parents of subpages, two levels up, with lots of extra trailing slashes.  [Has never passed]

Passed 527 of 544 tests (96.88%)... 17 tests failed!

vasilvv | 1 Oct 14:07 2007

Re: MediaWiki automated test run failure 2007-10-01

brion writes:
> An automated run of parserTests.php showed the following failures:
[snip]

VasilievVV | 1 Oct 14:26 2007

Re: MediaWiki automated test run failure 2007-10-01

vasilvv-Re5JQEeQqe8AvxtiuMwx3w@... writes:
> brion writes:
>> An automated run of parserTests.php showed the following failures:
[snip]

Platonides | 1 Oct 14:27 2007

Re: MediaWiki automated test run failure 2007-10-01

> Is there any documentation on all tests? Maybe any bugs?
> --VasilievVV

The tests are available at 
http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/maintenance/parserTests.txt?view=markup

When there's a related bug, it's noted in the test name.
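
For anyone who hasn't looked at the file: each case is a plain-text block
roughly in the form below (this one is invented for illustration, not
copied from the file), and when a test relates to a bug the bug number is
simply appended to the test name.

!! test
Italics and bold: simple case
!! input
''italic'' and '''bold'''
!! result
<p><i>italic</i> and <b>bold</b>
</p>
!! end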
Luca de Alfaro | 1 Oct 18:13 2007

Wiki splitting code available

Dear All,

I posted at http://trust.cse.ucsc.edu/Code a tiny bit of code that enables
you to split a Wikipedia .xml dump into n-page chunks, for a given n.  The
chunks are then immediately (on the fly) compressed with a compression
algorithm you can choose (default: gzip).

We are using this to split a dump, to be able to analyze it in pieces in a
more manageable way.  We hope the code is useful to others as well. (It is a
tiny and trivial piece of code, btw).
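
For anyone who would rather roll their own, the general idea is just to
stream the dump and start a new compressed output file every n <page>
elements. A rough PHP sketch of the same approach (this is not the code
posted above, and the file names are made up):

$n = 1000;                              // pages per chunk
$reader = new XMLReader();
$reader->open('pages-articles.xml');
$out = null; $chunk = 0; $count = 0;
while ($reader->read()) {
    if ($reader->nodeType != XMLReader::ELEMENT || $reader->name != 'page') {
        continue;                       // only care about <page> elements
    }
    if ($count % $n == 0) {             // time to start a new chunk
        if ($out) gzclose($out);
        $out = gzopen(sprintf('chunk-%04d.xml.gz', $chunk++), 'w');
    }
    gzwrite($out, $reader->readOuterXML() . "\n");
    $count++;
}
if ($out) gzclose($out);
$reader->close();

A real splitter also has to copy the <mediawiki>/<siteinfo> wrapper into
each chunk so the pieces remain valid dumps on their own.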

Luca
Travis Derouin | 1 Oct 18:22 2007

creating an extension that extends QueryPage

Hi,

I'm trying to extend QueryPage into an extension, which should be
pretty simple. If I declare the class in a new file located in the
extensions subdirectory to extend QueryPage:

class BuddyPage extends PageQueryPage {
}

I get:

Fatal error: Class 'PageQueryPage' not found in
/var/www/html/mediawiki-1.11.0/extensions/BuddyPage.php on line 21

But if I include QueryPage.php:

require_once('QueryPage.php');

I get the error:

Fatal error:  Call to undefined function wfRunHooks() in
/var/www/html/mediawiki-1.11.0/includes/QueryPage.php on line 46

Seems like a bit of a chicken and egg problem, any ideas?

Travis
Brion Vibber | 1 Oct 20:00 2007

Re: creating an extension that extends QueryPage

Travis Derouin wrote:
> I'm trying to extend QueryPage into an extension, which should be
> pretty simple. If I declare the class in a new file located in the
> extensions subdirectory to extend QueryPage:
> 
> class BuddyPage extends PageQueryPage {
> }
> 
> I get:
> 
> Fatal error: Class 'PageQueryPage' not found in
> /var/www/html/mediawiki-1.11.0/extensions/BuddyPage.php on line 21

Generally what you should be doing is adding your file to
$wgAutoloadClasses in the extension's loader file; then the file gets
loaded on-demand when the class actually gets used. Something like:

$wgAutoloadClasses['PageQueryPage'] =
dirname(__FILE__).'/PageQueryPage.php';

QueryPage.php will be similarly loaded on demand when actually needed,
at a time when things are initialized properly.
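
To make that concrete, a minimal sketch of how the split could look (the
file names are made up, and this only covers the autoloading part, not
how the page itself gets registered):

// BuddyPage.php -- loader file included from LocalSettings.php
$wgAutoloadClasses['BuddyPage'] = dirname(__FILE__) . '/BuddyPage.body.php';

// BuddyPage.body.php -- only read when the class is first used, by which
// point core has set up QueryPage.php and functions like wfRunHooks()
class BuddyPage extends PageQueryPage {
    // ...
}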

[snip]

Brion Vibber | 1 Oct 20:01 2007

Re: Wiki splitting code available

Luca de Alfaro wrote:
> I posted at http://trust.cse.ucsc.edu/Code a tiny bit of code that enables
> you to split a Wikipedia .xml dump into n-page chunks, for a given n.  The
> chunks are then immediately (on the fly) compressed with a compression
> algorithm you can choose (default: gzip).
> 
> We are using this to split a dump, to be able to analyze it in pieces in a
> more manageable way.  We hope the code is useful to others as well. (It is a
> tiny and trivial piece of code, btw).

Thanks!

-- brion vibber (brion  <at>  wikimedia.org)
