John Nowak | 13 Feb 14:37 2012

[stack] Jon Purdy: Why Concatenative Programming Matters

 
http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html
William Tanksley, Jr | 13 Feb 18:08 2012

Re: [stack] Jon Purdy: Why Concatenative Programming Matters

 

John Nowak <john@johnnowak.com> wrote:
> http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html

Not bad at all.

We've talked about the Wikipedia page... It's an accurate description,
but doesn't communicate very well. I don't know what else to say.

-Wm

    eas lab | 13 Feb 18:27 2012

    Re: [stack] Jon Purdy: Why Concatenative Programming Matters

     

    It's going to take me a few days to work through:
    http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html
    but I'm sending some immediate feedback, because I was grappling with this
    exact issue lately.

    IMO one should state up-front:
    BECAUSE WE WANT MORE RESULTS WITH LESS EFFORT.
    That's what Backus's paper did.
    And then one should backwards-chain, e.g.: because 'serial
    data-transformation' means that when you do stage N, you don't need
    to be concerned about any earlier stage except N-1.
    Minimal-coupling's value is founded on human psychology: less to remember.

    And [for high level programming] each data-transformer is a stand-alone
    module, whose cost can be amortized over multiple projects.

    When you explain how a TV receiver works, you start from the picture,
    not from the RF amplifier.

    I particularly appreciate the ASCII diagrams of how the 'impedance
    matching' of the functions works. Last week I wasted many hours
    researching how monads achieve this, but right now I haven't got a clue.

    Thanks,

    == Chris Glur.

    On 2/13/12, John Nowak <john@johnnowak.com> wrote:
    > http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html
    >

      John Nowak | 13 Feb 20:05 2012

      Re: [stack] Jon Purdy: Why Concatenative Programming Matters

       

      On 02/13/2012 12:08 PM, William Tanksley, Jr wrote:

      > We've talked about the wikipedia page... It's an accurate
      > description, but doesn't communicate very well. I don't know
      > what else to say.

      Apologies: this email is a bit of a brain dump and poorly proofread.

      I think maybe the most interesting thing about concatenative
      languages isn't stressed enough in the article: unlike almost
      every other type of programming language on the planet, there is
      no syntax for function application. All you get is composition
      and, if the language is higher-order, quotation.
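
      To make this concrete, here is a quick Haskell sketch (my own
      rendering, loosely following the article's types, not any
      particular concatenative language): each word is a function from
      stack to stack, the stack is a nested pair growing to the right,
      and a program is nothing but the composition of its words.

      import Control.Category ((>>>))

      -- (s, x) is a stack with x on top of the rest-of-stack s
      push :: a -> s -> (s, a)
      push x s = (s, x)

      add :: ((s, Int), Int) -> (s, Int)
      add ((s, x), y) = (s, x + y)

      mul :: ((s, Int), Int) -> (s, Int)
      mul ((s, x), y) = (s, x * y)

      dup :: (s, a) -> ((s, a), a)
      dup (s, x) = ((s, x), x)

      -- "2 3 + dup *", read left to right; no application anywhere
      program :: s -> (s, Int)
      program = push 2 >>> push 3 >>> add >>> dup >>> mul

      main :: IO ()
      main = print (snd (program ()))   -- prints 25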

      I think the second most interesting thing is that all functions
      operate on possibly empty sequences containing arbitrary amounts
      of program state. It's this "all the state" element that gives
      concatenative languages their character. For example, Backus's FL
      does not do this; functions that need one number take just that
      number and functions that need two take a pair; never do
      functions take the entire state of a program as they usually do
      in Forth or Factor.

      For example, a tail-recursive 'length' function in a
      concatenative language might be:

      length = [0] dip loop where
        loop = [null?] [drop] [[1 +] dip tail loop] if
      end
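
      (A rough Haskell rendering of that, using the nested-pair stacks
      from the sketch above; my transliteration:)

      lengthW :: (s, [a]) -> (s, Int)
      lengthW (s, xs) = loop ((s, 0), xs)   -- [0] dip
        where
          -- [null?] [drop] [[1 +] dip tail loop] if
          loop ((s', n), ys)
            | null ys   = (s', n)
            | otherwise = loop ((s', n + 1), tail ys)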

      And in FL:

      def length ≡ f ∘ [id, ~0] where {
        def f ≡ isnull ∘ xs → n; f ∘ [tl ∘ xs, + ∘ [~1, n]]
        def xs ≡ s1
        def n ≡ s2
      }

      A quick guide to the FL program:

      - '∘' is composition
      - '~' is constant (i.e. \x y -> x)
      - the square brackets denote "construction": construction returns
        a function that, when applied to an argument, returns a list
        with one element per function provided, each element being the
        result of applying the corresponding function to that same
        argument (e.g. '[f, g]:x == <f:x, g:x>', where ':' is FL's
        syntax for application and angle brackets denote lists)
      - conditionals are written 'a → b; c', where 'b' and 'c' are the
        functions conditionally applied to the input to get the result
      - 's1' and 's2' select the first and second elements of a list
      - 'tl' is the tail operation
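
      (Similarly, my Haskell transliteration of the FL program, on the
      assumption that FL sequences become Haskell lists and that the
      construction [f, g] becomes \x -> (f x, g x):)

      lengthFL :: [a] -> Int
      lengthFL = f . (\x -> (x, 0))            -- f ∘ [id, ~0]
        where
          f (xs, n)
            | null xs   = n                    -- isnull ∘ xs → n
            | otherwise = f (tail xs, 1 + n)   -- f ∘ [tl ∘ xs, + ∘ [~1, n]]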

      These two programs are, to me, radically different. The
      concatenative program is decidedly "flat" and feels very
      imperative. The FL program, on the other hand, is "nested" like
      applicative programs usually are and feels, at least to me,
      rather declarative; the fact that you can consider each function
      within the two constructions to be completely independent is
      probably the main reason for this. The independence of functions
      also allows you to write aliases for selectors as I did above. In
      fact, FL offers a form of pattern matching, via double-square
      brackets, for conveniently defining such selector functions
      locally:

      def length ≡ f ∘ [id, ~0] where {
        def f ≡ 〚xs, n〛 →
          isnull ∘ xs → n; f ∘ [tl ∘ xs, + ∘ [~1, n]]
      }
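
      (In the Haskell rendering above, this corresponds to nothing more
      than pattern-matching the pair, as in 'f (xs, n) = ...'.)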

      This is very different from concatenative languages where
      combinators like 'bi@' allow the functions provided to step on
      each other as the second function runs on top of the stack
      returned from the first. This is the reason I gave up on
      concatenative languages; the equational reasoning possible when
      all functions operate on "the stack" just isn't great. I think
      the fact that faking variables in FL is trivial while faking
      variables in something like Factor requires a rather complex
      transformation is evidence of this. I also think the lack of
      useful equational reasoning is why things feel so imperative once
      you get past the pretty examples in the Joy literature. Maybe I
      just need more combinators.

      To be fair, I should point out that having all functions work on
      "the stack" does have its advantages. In particular, higher-order
      programming in FL is pretty gnarly. In fact, doing it requires
      pervasive use of "lifting", which is, essentially, a way of
      modifying functions so that they take an additional argument full
      of all the other stuff you want to use (and hence reminds me a
      bit of stack-based programming). More info here:

      http://theory.stanford.edu/~aiken/publications/trs/FLProject.pdf

      I guess what I'm trying to say is that "every function takes the
      entire program state as input", or at least a good chunk of it
      (as with Joy's 'infra'), is *really important*. While this fact
      is arguably implicit in the definition on Wikipedia, it's not
      spelled out at all. Maybe I can add some insight to it in the
      next few days.

      - jn

        William Tanksley, Jr | 13 Feb 21:20 2012

        Re: [stack] Jon Purdy: Why Concatenative Programming Matters

         

        John Nowak <john@johnnowak.com> wrote:
        > I think the second most interesting thing is that all functions
        > operate on possibly empty sequences containing arbitrary amounts
        > of program state. It's this "all the state" element that gives
        > concatenative languages their character.

        Indeed. And you're right when you elsewhere say that this gives
        concatenative languages an imperative character.

        You're right that any arbitrary function must be able to access all of
        the state; but it need not be true that any specific function must be
        able to access all the state. Modern concatenative languages use
        typechecking to make sure that no function attempts to consume stack
        state that doesn't match the function's declaration; but other types
        of checking don't have such an obvious typechecking solution. (Were
        you the one who designed a concatenative language with side-effect
        checking?)

        It seems obvious to me that in order to add effect checking, you have
        to formally define the effect. Forth's dictionary could be formally
        defined, and the checking implemented from there.

        Of course, this is easier to say than to do, and I haven't even said it.

        I also admit that by saying this, I have conceded your point. This has
        the result of making reordering hard.

        > This is very different from concatenative languages where
        > combinators like 'bi@' allow the functions provided to step on
        > each other as the second function runs on top of the stack
        > returned from the first. This is the reason I gave up on
        > concatenative languages; the equational reasoning possible when
        > all functions operate on "the stack" just isn't great. I think

        That specific example need not retain all of its problems, obviously
        -- you can define similar combinators to require the forks to have
        identical static types, and then effectively isolate the forks from
        each other. Factor chooses not to do that, and thereby adds semantics
        which are useful for its purposes.

        > (as with Joy's 'infra'), is *really important*. While this fact
        > is arguably implicit in the definition on Wikipedia, it's not
        > spelled out at all. Maybe I can add some insight to it in the
        > next few days.

        Fair point. We await your input. Fthagn.

        > - jn

        -Wm

          John Nowak | 13 Feb 22:23 2012

          Re: [stack] Jon Purdy: Why Concatenative Programming Matters

           

          On 02/13/2012 03:20 PM, William Tanksley, Jr wrote:

          > You're right that any arbitrary function must be able to access
          > all of the state; but it need not be true that any specific
          > function must be able to access all the state. Modern
          > concatenative languages use typechecking to make sure that no
          > function attempts to consume stack state that doesn't match the
          > function's declaration;

          This is certainly true; or at least it would be if the languages
          actually excited. With types, something like 'bi' can be
          restricted so that the second function uses no more than one
          element and thus you can consider both functions independently.
          (For the record, I mistakenly referred to 'bi@' in a previous
          email. I meant 'bi'.)
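
          (In stack-model terms, one possible typing of such a
          restricted 'bi' -- my sketch in Haskell, not an existing
          language -- where the second function sees only a copy of the
          top element on an otherwise empty stack:)

          safeBi :: ((s, a) -> s')        -- first: may use the whole stack
                 -> (((), a) -> ((), b))  -- second: only the copied x
                 -> (s, a) -> (s', b)
          safeBi p q (s, x) = (p (s, x), snd (q ((), x)))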

          Even without types, this is possible if you have 'infra' to run a
          function on a stack-on-the-stack and some sort of 'concat'
          operation to join a stack-on-the-stack onto the main stack
          itself. Assuming curly braces denote stacks, we need only the
          following primitives:

          {A} [B] infra == {A B}
          A {B} concat == A B
          {A} B push == {A B}

          This is enough to write a "safe" 'bi':

          bi = [keep] dip [{} swap push] dip infra concat

          With this version of 'bi', you'll get a stack underflow error if
          the second quotation to 'bi' attempts to use more than one
          element. Since we're not using types, you might as well pass an
          argument to some generic version of 'bi' that says how many
          arguments to reserve for the second function; just push that
          many onto the stack before calling 'infra'.
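
          (To make that concrete, here is a worked trace of this 'bi'
          on a small example; my trace, assuming a Factor-style 'keep':)

          1 [2 +] [3 *] bi
          == 3 1 [3 *]     ( after '[keep] dip' )
          == 3 {1} [3 *]   ( after '[{} swap push] dip' )
          == 3 {3}         ( after 'infra' )
          == 3 3           ( after 'concat' )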

          > Were you the one who designed a concatenative language with
          > side-effect checking?)

          Yes, although it is just a little trick of attaching a variable
          to function types that can either be "effectful" or "not
          effectful". You can infer it via unification. Not much to it.
          It's a bit coarse though because it doesn't offer any form of
          local effects with a pure interface as you can get with the ST
          monad or with linear or uniqueness types (though they could be
          added if you wanted).
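
          (A minimal Haskell sketch of that trick -- my reconstruction,
          not the actual design: arrows carry an effect that is pure,
          impure, or a variable to be solved during unification:)

          data Eff = Pure | Impure | EVar String deriving (Eq, Show)

          -- unify two effects, yielding a substitution on effect variables
          unifyEff :: Eff -> Eff -> Maybe [(String, Eff)]
          unifyEff (EVar v) e        = Just [(v, e)]
          unifyEff e        (EVar v) = Just [(v, e)]
          unifyEff e1 e2
            | e1 == e2  = Just []
            | otherwise = Nothing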

          - jn

            John Nowak | 13 Feb 22:25 2012

            Re: [stack] Jon Purdy: Why Concatenative Programming Matters

             

            On 02/13/2012 04:23 PM, John Nowak wrote:

            > This is certainly true; or at least it would be if the languages
            > actually excited.

            Existed. Sorry.

            - jn

              William Tanksley, Jr | 13 Feb 23:05 2012

              Re: [stack] Jon Purdy: Why Concatenative Programming Matters

               

              John Nowak <john@johnnowak.com> wrote:
              > William Tanksley, Jr wrote:
              >  > You're right that any arbitrary function must be able to access
              >  > all of the state; but it need not be true that any specific
              >  > function must be able to access all the state. Modern
              >  > concatenative languages use typechecking to make sure that no
              >  > function attempts to consume stack state that doesn't match the
              >  > function's declaration;

              > This is certainly true; or at least it would be if the languages
              > actually excited.

              Fair point. (I misread this word as "exited", and thought you were
              talking about termination proofs. Yow.)

              > With types, something like 'bi' can be
              > restricted so that the second function uses no more than one
              > element and thus you can consider both functions independently.
              > (For the record, I mistakenly referred to 'bi@' in a previous
              > email. I meant 'bi'.)

              Well, you can also copy the needed elements into place -- type
              checking can handle the rest. That way you can provide every branch
              with the data it needs according to the documented semantics of the
              forking word (whether each branch sees the same stack, or the branches
              see sequential data from the initial stack).

              And you're right that the easy way is to use "infra"... Much less
              infrastructure.

              >  > Were you the one who designed a concatenative language with
              >  > side-effect checking?)
              > Yes, although it is just a little trick of attaching a variable
              > to function types that can either be "effectful" or "not
              > effectful". You can infer it via unification. Not much to it.
              > It's a bit coarse though because it doesn't offer any form of
              > local effects with a pure interface as you can get with the ST
              > monad or with linear or uniqueness types (though they could be
              > added if you wanted).

              Ah, I see. Yes, I was thinking more of defining structures on which
              effects could occur, and checking those effects.

              Forth has a dictionary on which effects can occur, for example.

              > - jn

              -Wm

                eas lab | 14 Feb 06:01 2012

                Re: [stack] Jon Purdy: Why Concatenative Programming Matters

                 

                This commenter appropriately acknowledges the important cognitive-load aspect:
                ]the code was always *intensely* write-only since you had to have a
                ]mental model of exactly what was on the stack where to even hope to
                ]follow it.

                *nix piping is beautifully simple yet powerful, since the stack only
                contains "it" [the previous output], so at each stage you only need
                to know about your single supplier.

                ] 2 :: ∀A. (A) -> (A, int)
                ] 3 :: ∀B. (B) -> (B, int)
                ok

                ] We match up the types:
                ]
                ] (A, int) = (B)

                ?! does this mean 'assume they match and see what that implies'?
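
                My attempt to render that step in Haskell, to check my
                understanding (my code, not from the post): composing the two
                stack-functions forces B to be instantiated to (A, int).

                two :: s -> (s, Int)
                two s = (s, 2)

                three :: s -> (s, Int)
                three s = (s, 3)

                -- '2 3': the composition typechecks by matching B = (A, Int)
                twoThree :: s -> ((s, Int), Int)
                twoThree = three . two
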
                -------------
                Apparently, if the output of stage N includes item/s in addition to
                what stage N+1 needs, they can be left on the stack for stage N+2.
                But he and Haskell/monads seem to 'wrap' the extra item/s and pass
                them on, and imply that some automagic relieves you from the mental
                load of remembering what was left on the stack for all possible
                combinations of stages after N+1?

                  eas lab | 14 Feb 07:05 2012

                  Re: [stack] Jon Purdy: Why Concatenative Programming Matters

                   

                  ] By substituting, we get a new polymorphic type for 3 within the
                  ] expression:
                  ? does this mean '3 is polymorphic because it can be written as
                  infinitely typed'?
                  ? what does "within the expression" mean ?

                  ] 3 :: ∀C. (C, int) -> (C, int, int)
                  => let 3 be a function which, for all C, maps (C, int) to (C, int, int)

                  ] This matches the non-polymorphic type:

                  ] 3 :: ∀A. (A, int) -> (A, int, int) = ∀B. B -> (B, int)
                  ? is there a bracket missing here ?

                  ] So the final type of the expression becomes:

                  ] 2 3 :: ∀A. (A) -> (A, int, int)

                  ? why/how ? is this formal maths or Haskell-poetry ?
