Daniel Hernández | 1 Nov 21:49 2011

#If function is not computed

Hello,

I'm having problems with parser functions. I wrote the following code on a
wiki page:

{{{#if: 0 | yes | not }}}

{{{#if: | yes | not }}}

{{#if: 0 | yes | not }}

And I get

<pre>yes
</pre>
<pre>yes
</pre>
<p>{{#if: 0 | yes | not }}
</p>

But I expected to get "yes" in the third example, and I thought parser
functions were enclosed in only two braces.

I have tested it on Wikipedia and on another wiki that I installed
recently (using MediaWiki 1.16.5). Why is the function not evaluated? Do I
need to configure some parameter in the wiki?

Thanks,
Daniel


Platonides | 1 Nov 23:32 2011

Re: #If function is not computed

On 01/11/11 21:49, Daniel Hernández wrote:
> Hello,
> 
> I'm having problems with parser functions...

You need to use just two {{, not three.
And you need to have the ParserFunctions extension installed.
http://www.mediawiki.org/wiki/Extension:ParserFunctions
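
With the extension installed, #if tests whether its first parameter is
non-empty (so "0" counts as non-empty), and the two-brace examples behave
as expected:

  {{#if: 0 | yes | not }}    renders as: yes
  {{#if: | yes | not }}      renders as: not

The triple-brace form is template-parameter syntax, not a parser-function
call, so {{{#if: 0 | yes | not }}} rendered its first fallback value
("yes") instead of being evaluated; the leading space in that fallback is
presumably what produced the <pre> wrapping in the output above.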

Daniel Hernández | 2 Nov 03:20 2011

Re: #If function is not computed

On Tue, 2011-11-01 at 23:32 +0100, Platonides wrote:
> On 01/11/11 21:49, Daniel Hernández wrote:
> > Hello,
> > 
> > I'm having problems with parser functions...
>
> You need to use just two {{, not three.
> And you need to have the ParserFunctions extension installed.
> http://www.mediawiki.org/wiki/Extension:ParserFunctions

Thanks!
Daniel

Sumana Harihareswara | 4 Nov 14:56 2011

citation generator

Oliver Keyes asked whether the visual editor will include a citation
generator, and Trevor's response is useful enough that I figured it should
go on this mailing list for reference.

-Sumana

-------- Original Message --------
Subject: Re: Question for the devs...
Date: Wed, 2 Nov 2011 09:46:37 -0700
From: Trevor Parscal <tparscal <at> wikimedia.org>
To: Oliver Keyes <okeyes <at> wikimedia.org>
CC: Sumana Harihareswara <sumanah <at> wikimedia.org>

Our plan for this kind of thing is pretty simple: initially, we are just
going to have a way to create and edit <ref> tags. Because citations are
important, you might expect we would want to integrate a citation solution
directly into the editor. However, citation templates are templates, which
means they exist on-wiki, not in the core software. Because of this, it's
important to make sure that the software that helps users make use of these
templates and the templates themselves live in the same places and can be
changed by the same people.

This is our plan to support the kind of work you are talking about:

   - Templates will be editable as forms, automatically generated by
   inspecting the transclusion code
      - Not all possible parameters will be shown
      - The order of named parameters may vary from one transclusion to
      another
      - The labels will be crudely converted to title case (zip_code
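
A rough sketch of the "crude title case" conversion mentioned in the last
item (helper name hypothetical, not taken from the actual editor code):

  // Crude title-casing of a template parameter name: "zip_code" -> "Zip Code".
  // Hypothetical helper, for illustration only.
  function labelFromParamName( paramName ) {
      return paramName
          .split( /[_\s]+/ )
          .map( function ( word ) {
              return word.charAt( 0 ).toUpperCase() + word.slice( 1 );
          } )
          .join( ' ' );
  }

  // labelFromParamName( 'zip_code' );  // → "Zip Code"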

Trevor Parscal | 4 Nov 22:33 2011

Re: WikiDom serializers

The other day I took the time to migrate the serializers into the latest codebase.


There are lots of things we can do to finish and complete them, but they are a good way to hit the ground running.

Some key things that need to be worked on:
  • es.HtmlSerializer needs an algorithm that expands the flat list items in WikiDom into a tree structure more suitable for HTML rendering
  • es.HtmlSerializer and es.WikitextSerializer need support for more things, namely definition lists, but other gaps may exist as well (and we need to define what definition lists look like in WikiDom)
  • es.AnnotationSerializer needs some nesting smartness, so that overlapped regions open and close properly (<b>a<i>b</b>c</i> should become <b>a<i>b</i></b><i>c</i>; es.ContentView does this correctly, but it works from the linear data model) - see the sketch below
  • We need some sort of context that can be asked for the HTML of a template, whether a page exists, etc. Initially this work is all done on the client, which means this is a wrapper for a lot of API calls, but either way, having a firm API between the renderer and the site context will help keep things clean and flexible
The serializers depend on some static methods in es and es.Html - but are otherwise very stand-alone. We may even want to move es and es.Html (which are very general purpose libraries) to a shared library that the parser and es code can both depend on.
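
To make the nesting point concrete, here is a minimal sketch (function name
and data shapes are hypothetical, not the real es.AnnotationSerializer API):
each annotation is a tag over a [start, end) range of the plain text, and
tags are closed and reopened whenever the open stack stops matching the
active set:

  // Overlap-resolving serialization sketch; annotations cover [start, end).
  function serializeAnnotated( text, annotations ) {
      var out = '', open = [];
      for ( var i = 0; i <= text.length; i++ ) {
          // Tags that should be open at position i, in input order.
          var active = annotations.filter( function ( a ) {
              return a.start <= i && i < a.end;
          } ).map( function ( a ) { return a.tag; } );
          // Keep the longest still-active prefix of the stack; close the rest.
          var keep = 0;
          while ( keep < open.length && active.indexOf( open[ keep ] ) !== -1 ) {
              keep++;
          }
          for ( var j = open.length - 1; j >= keep; j-- ) {
              out += '</' + open[ j ] + '>';
          }
          open = open.slice( 0, keep );
          // (Re)open any active tags that are not on the stack.
          active.forEach( function ( tag ) {
              if ( open.indexOf( tag ) === -1 ) {
                  out += '<' + tag + '>';
                  open.push( tag );
              }
          } );
          if ( i < text.length ) {
              out += text.charAt( i );
          }
      }
      return out;
  }

  // serializeAnnotated( 'abc', [ { tag: 'b', start: 0, end: 2 },
  //                              { tag: 'i', start: 1, end: 3 } ] )
  // → '<b>a<i>b</i></b><i>c</i>'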

- Trevor

On Thu, Oct 27, 2011 at 1:38 PM, Gabriel Wicke <wicke <at> wikidev.net> wrote:
Hi,

today I started to look into generating something closer to WikiDom from the
parser in the ParserPlayground extension. For further testing and parser
development, changes to the structure will need to be mirrored in the
current serializers and renderers, which likely won't be used very long once
the editor integration gets underway.

The serializers developed in wikidom/lib/es seem to be just what would be
needed, so I am wondering if it would make sense to put some effort into
plugging those into the parser at this early stage while converting the
parser output to WikiDom. The existing round-trip and parser testing
infrastructure can then already be run against these serializers.

The split codebases make this a bit harder than necessary, so maybe this
would also be a good time to draw up a rough plan for how the integration
should look. Swapping the serializers will soon break the existing
ParserPlayground extension, so a move to another extension or the wikidom
repository might make sense.

Looking forward to your thoughts,

Gabriel


Trevor Parscal | 5 Nov 00:22 2011

Re: WikiDom serializers

As an update, definition lists are now supported; they are simply defined as "term" and "definition" in the WikiDom listItem styles attribute, in addition to the already supported "bullet" and "number" styles.



It was disturbingly easy to add this functionality.
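
For illustration, "; bla : blub" might then be represented along these
lines (field names are assumptions based on the description above, not
checked against the actual WikiDom schema):

  // Hypothetical WikiDom fragment for "; bla : blub"; field names assumed.
  var list = {
      type: 'list',
      children: [
          {
              type: 'listItem',
              attributes: { styles: [ 'term' ] },
              children: [ { type: 'paragraph', content: { text: 'bla' } } ]
          },
          {
              type: 'listItem',
              attributes: { styles: [ 'definition' ] },
              children: [ { type: 'paragraph', content: { text: 'blub' } } ]
          }
      ]
  };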

- Trevor

On Fri, Nov 4, 2011 at 2:33 PM, Trevor Parscal <tparscal <at> wikimedia.org> wrote:
The other day I took the time to migrate the serializers into the latest codebase...

Gabriel Wicke | 7 Nov 19:59 2011

Re: WikiDom serializers

Hello Trevor and list,

since last week's integration in the VisualEditor extension, things are 
progressing well. Lists including definition lists and tables are parsed to 
WikiDom and rendered in the HTML serializer. The parser is still quite rough 
and in flux at this stage, but the general structures mostly work when 
running the parser tests using node.js. The Wikitext serializer is not yet 
wired up, as I am currently concentrating on the parser and its WikiDom 
output, but it will be added for round-trip testing.

Apart from general grammar tweaking I am now working on a conversion of 
inline elements into WikiDom annotations. The main challenge is the 
calculation of plain-text offsets. I am trying to avoid building an 
intermediate structure, but might fall back to it if things get too messy 
when interleaving this calculation with parsing.
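
As a toy illustration of that offset bookkeeping (names and structure
hypothetical, not the actual parser code), one can walk inline content,
accumulate the plain text, and record each formatting element as a
[start, end) range into it:

  // Walk inline nodes (strings, or elements with children), accumulating
  // plain text and recording annotation offsets. Hypothetical structure.
  function collectAnnotations( nodes, state ) {
      state = state || { text: '', annotations: [] };
      nodes.forEach( function ( node ) {
          if ( typeof node === 'string' ) {
              state.text += node;
          } else {
              var start = state.text.length;
              collectAnnotations( node.children, state );
              state.annotations.push( {
                  type: node.type,
                  start: start,
                  end: state.text.length
              } );
          }
      } );
      return state;
  }

  // collectAnnotations( [ 'a', { type: 'bold', children: [ 'b' ] }, 'c' ] )
  // → { text: 'abc', annotations: [ { type: 'bold', start: 1, end: 2 } ] }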

>>    - es.AnnotationSerializer needs some nesting smartness, so
>>    that overlapped regions open and close properly (<b>a<i>b</b>c</i>
>>    should be <b>a<i>b</i></b><i>c</i> - es.ContentView does this
>>    correctly but is working from the linear data model)

Parsing these overlapped annotations is not supported very well right now, 
but should be doable using a multi-pass or shallow (token-only) parsing 
strategy for inline content. Pushing nesting and content-model fix-ups to 
the serializer should also make it easier to approximate the parsing rules 
in the HTML5 specification [1] without forcing too much normalization. 
Mostly, the HTML5 parsing spec is a somewhat more systematic version of 
what Tidy does right now after the MediaWiki parser has tried its best.

Some early fix-ups seem to be needed to allow proper editing, particularly 
of block-level elements, so I am currently a bit sceptical about avoiding 
normalization completely.

>>    - We need some sort of context that can be asked for the HTML of a
>>    template, whether a page exists, etc. Initially this work is all done
>>    on the client, which means this is a wrapper for a lot of API calls,
>>    but either way, having a firm API between the renderer and the site
>>    context will help keep things clean and flexible

Brion already implemented a simple context object for transclusion tests. 
This is probably not yet the final API, but it is already a good start.

Gabriel

[1]: HTML5 parsing spec: http://dev.w3.org/html5/spec/Overview.html#parsing
Sumana Harihareswara | 10 Nov 15:34 2011

nested definition lists

https://bugzilla.wikimedia.org/show_bug.cgi?id=6569

Gabriel mentioned that he'd like the list's input on this patch,
regarding how we treat nested lists like

; bla : blub

-- 
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
Gabriel Wicke | 10 Nov 17:00 2011

Re: nested definition lists

Sumana,

the regular

> ; bla : blub

is actually not the issue. More problematic are, for example:

  ;; bla :: blub

  *; bla : blub

or even the simple

  ;; bla

Right now the behavior is quite inconsistent:
  http://www.mediawiki.org/wiki/User:GWicke/Definitionlists

The bug discussing this is
  https://bugzilla.wikimedia.org/show_bug.cgi?id=6569

Treating '; bla : blub' as a tightly-bound special-case construct seems
to me the simplest way to make this area more consistent while avoiding
very ugly syntax. This would mean that

  *; bla : blub

is treated as equivalent to
  *; bla
  *: blub

and
  ;; bla :: blub

is equivalent to
  ;; bla
  ;: :blub
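
Under that reading, the first example would presumably render along these
lines:

  <ul><li><dl><dt>bla</dt>
  <dd>blub</dd></dl></li></ul>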

What are your preferences on this? Are any of these cases commonly used
today?

Gabriel
Trevor Parscal | 10 Nov 19:34 2011

Re: nested definition lists

Can we deconstruct the current parser's processing steps and build a set of rules that must be followed?

This strikes me as an area where this kind of strange mixed-style nesting is rare enough that we may even be able to introduce a little bit of reform without much ill effect on the general body of wikitext out there.

I think we need to get a dump of English Wikipedia and start using a simple PEG parser to scan through it looking for patterns and figuring out how often certain things are used - if ever.
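
A rough sketch of that kind of survey in node.js (file name and patterns
are illustrative assumptions, and a full dump would need to be streamed
rather than read into memory at once):

  // Count lines that use mixed or nested definition-list prefixes.
  var fs = require( 'fs' );

  var patterns = {
      mixedBulletTerm: /^\*+;/,       // *; bla : blub
      nestedTerm: /^;;/,              // ;; bla
      inlineDefinition: /^;[^:\n]*:/  // ; bla : blub
  };

  var counts = { mixedBulletTerm: 0, nestedTerm: 0, inlineDefinition: 0 };

  fs.readFileSync( 'sample-pages.txt', 'utf8' ).split( '\n' ).forEach( function ( line ) {
      Object.keys( patterns ).forEach( function ( name ) {
          if ( patterns[ name ].test( line ) ) {
              counts[ name ]++;
          }
      } );
  } );

  console.log( counts );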

Ward Cunningham had a setup that could do this sort of thing on a complete en-wiki dump in 10-15 minutes, and on a fraction of the dump (still tens of thousands of articles in size) in under a minute. We supposedly have access to him and his mad science laboratory - now would be a good time to get that going.

- Trevor

On Thu, Nov 10, 2011 at 8:00 AM, Gabriel Wicke <wicke <at> wikidev.net> wrote:
Sumana,

the regular "; bla : blub" is actually not the issue. More problematic are...


