3 May 2004 12:37

### What's the core of an inference system?

```
Hello Frogs,

so far, we informally defined the core of an inference system as the set
of those of its inference rules that are needed in order to reduce both
cut and identity to atomic form. (That's not yet a formal
definition, for example there could be two different ways to do this
reduction, but this has never happened for any of our systems.) Atomic
cut and identity were not included in this definition of core.

I noticed that it would be elegant to include atomic cut and atomic
identity in the definition of core, because then cut = core-up rather
than cut = core-up + atomic cut. Several lemmas, e.g. the replacement of
a rule by its dual+core, become more succinct.

On a more conceptual level, I also like the idea of being able to derive
cut and identity in the core, which is impossible with the current
notion of core. Every system of inference rules that I know and that I'd
call "logical" has some form of an identity axiom and some form of cut.
To me, reflexivity and transitivity are a minimum requirement for a
logical consequence relation. Don't they fully deserve to be part of
whatever it will be that we call "the core"?

I'd like to include atomic cut/identity in the core. I'm looking for
arguments against doing so.

Peace!

-Kai

```
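
For concreteness, the atomic rules under discussion can be written down. In an SKS-style calculus-of-structures presentation (an assumption here; the email does not fix a particular system, and the notation with explicit connectives is mine), atomic identity and atomic cut read:

```latex
% Atomic identity (ai-down) and atomic cut (ai-up), SKS-style.
% S{ } is an arbitrary context, a an atom, t and f the units true/false.
\frac{S\{\mathrm{t}\}}{S[a \vee \bar{a}]}\;\mathsf{ai}\!\downarrow
\qquad\qquad
\frac{S(a \wedge \bar{a})}{S\{\mathrm{f}\}}\;\mathsf{ai}\!\uparrow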

3 May 2004 03:58

### Re: Lambda abstraction

```
Hi,

>  for several reasons (Elaine and intersection types, Kai and
>  intuitionistic logic, etc.), we got to the point in which we should
>  formalisms.

>  I just noticed a trivial fact, I don't know whether it might be
>  important or not.

>  We all know that in natural deduction we can make the deduction
>  transformation

>               [A]
>     A          |
>     |   ~>     B    ,
>     B        ------
>              A -> B

>  which corresponds to forming the term

>     t   ~>   lambda x.t .

>  There are a few cumbersome technicalities one has to deal with, like
>  discharging hypotheses and taking care of `parcels' of them, which
>  means using some care in variable bindings, etc.

Yep, I've thought quite a bit about these issues; they are one of the
driving forces behind the design of my partial sharing diagrams.  At
```
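
The "parcels" bookkeeping mentioned above has a direct term-level reading. Here is a minimal sketch (all names are illustrative, not from the thread): discharging the hypothesis x corresponds to binding every free occurrence of x at once.

```python
# Minimal lambda-term syntax: variables, applications, abstractions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

def free_vars(t):
    """Free variables of t.  Abstraction removes *every* free occurrence
    of the bound variable at once -- the term-level image of discharging
    a whole parcel of hypotheses in natural deduction."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, App):
        return free_vars(t.fun) | free_vars(t.arg)
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}
    raise TypeError(f"not a term: {t!r}")

def abstract(x, t):
    """The step t ~> lambda x. t (discharge the hypothesis x : A)."""
    return Lam(x, t)

# A derivation of B that uses the hypothesis x twice (one parcel of two):
t = App(Var("x"), App(Var("x"), Var("y")))
print(sorted(free_vars(t)))                 # ['x', 'y']
print(sorted(free_vars(abstract("x", t))))  # ['y']
```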

4 May 2004 17:48

### Re: Lambda abstraction

```
Hi Charles,

from your answer I understand that I didn't manage at all to make
myself clear. Let me try again; this time, I'm writing the email
while listening to Arvo Pärt's Te Deum (by Kaljuste, of course), so
I'm sure I'll be clearer.

1) I'm not proposing any term calculus: in fact, I'd be happy if
partial sharing diagrams were used, you know I like them.

2) I'm just making an observation; in formalism A one has:

a) `local' discharging of hypotheses, like in natural deduction;
b) all the good proof theoretical properties of the sequent calculus.

So, if what I'm saying is true and there are no problems of a different
kind, I would expect that finding a term calculus for formalism A
will be more productive than doing the same for CoS. Since the
relation between A and CoS is simple, doing it for A would also mean
doing it for CoS, and then for the logics which are better
represented in CoS than in other formalisms.

At 1:58 +0000 3.5.04, Charles Stewart wrote:
>>   We all know that in natural deduction we can make the deduction
>>   transformation
>>
>>                [A]
>>      A          |
```

5 May 2004 18:24

### Re: proof nets for MLL with units

```
On Monday 26 April 2004 15:34, Alessio Guglielmi wrote:
> I'd like to know whether people believe or not that identity of
> proofs in sophisticated logics, like classical logic, can effectively
> be captured by simple `decorations' on top of the formula tree.

I believe that a proof is the formula tree plus "additional information".
This is extremely vague, and what "additional information" means depends
on the logic in question (it could even be the full sequent proof on
top).

It is true that in our proof nets for MLL with units, and also for
classical logic, this additional information turns out to be some sort of
linking of the leaves of the formula tree. However, I consider this a
(lucky) coincidence. I take it by no means for
granted that "simple decorations" do the job. I guess that as soon as we go
to the first order (or second order) case, we will get much more
sophisticated things.

> My (certainly naif!) point of view is that proof nets should *easily*
> correspond to normal deductive proofs, but in a formalism that
> doesn't force unnecessary permutations. As a further property, they
> should preserve `identity' through some form of normalisation, sure.
> And, by the way, this will definitely not be cut elimination!! This
> is another perversion dictated by our current (soon to be over)
> misery. (What I'm trying to do with my formalisms `A' and `B' is
> setting up a principled approach to this problem.)

I agree that a priori cut elimination has nothing to do with the identity of
proofs. However, it turns out that for simple logics, like MLL or classical
propositional, there is a simple notion of identity that is based on cut
elimination, and that does the job. For MALL, MELL, full LL, of first order
```
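
For unit-free propositional MLL, the "additional information" Lutz mentions can be made concrete as an axiom linking on the leaves. The encoding below is illustrative (names and syntax are mine, not from the thread), and it only checks the local conditions; a real correctness check also needs a global criterion such as Danos-Regnier switching, which is omitted here.

```python
# A proof candidate = formula tree + linking: a perfect matching of the
# leaves that pairs dual atom occurrences.

def dual(lit):
    """'a' <-> '~a'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def is_axiom_linking(leaves, links):
    """leaves: literals at the leaves of the formula tree, left to right.
    links: set of index pairs.  True iff links is a perfect matching of
    the leaves in which every linked pair is dual."""
    used = set()
    for i, j in links:
        if i in used or j in used:
            return False                      # some leaf linked twice
        used.update((i, j))
        if leaves[j] != dual(leaves[i]):
            return False                      # linked leaves must be dual
    return used == set(range(len(leaves)))    # every leaf is linked

# Leaves of (a * b) % (~b % ~a), writing * for tensor and % for par:
leaves = ['a', 'b', '~b', '~a']
print(is_axiom_linking(leaves, {(0, 3), (1, 2)}))  # True
print(is_axiom_linking(leaves, {(0, 1), (2, 3)}))  # False: a, b not dual
```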

5 May 2004 09:21

### Re: proof nets for MLL with units

```
Hi,

> I'd like to know whether people believe or not that identity of
> proofs in sophisticated logics, like classical logic, can effectively
> be captured by simple `decorations' on top of the formula tree.

I think so, and I'd go so far as to say that if a particular notion of
proof can be captured at all, it can be captured this way, but I think
this is a very risky way in which to proceed, because *any* information
at all might be added as `decoration'.  Indeed you never need to
decorate proofs, you can just decorate sequents, and make the rules
conditional on this decoration.  If you follow this route to its
ultimate end, you get labelled deduction.  There is then no way
internal to the framework to tell good from bad.

> My gut feeling is that some more `deductive' information will be
> necessary or convenient. In other words, encoding the entire
> deductive information of a, say, sequent calculus proof, in linkings
> and boxes might be possible, but could very easily be perverse, and
> so inconvenient, one way or another.

Advisable, yes, never necessary.  Perversity is exactly the risk.

Charles

```

5 May 2004 05:15

### Re: Clarification on rule schemes

```
Dear all, dear sundry,

Just a brief addendum to what Alessio says, that I shall take in
a different direction.  He writes that:

the most important principle is of course `to define
connectives in isolation', meaning that you ask yourself
how you can introduce the main connective of a formula,
and then you try to come up with appropriate premises
from which to conclude the desired conclusion. This works
well for classical and intuitionistic logic, but not so
well for linear and modal logics, and it doesn't work at
all for some other logics like pomset logic. In linear
logic, for example, the promotion rule does not obey this
principle, because you define the `!' but you need to check
for `?'s, so the `!' is not in isolation.

The above, known as Dosen's principle, is only one of a number of
holy cows we need to violate in order to get real progress in proof
theory.  Perhaps more contentiously, we need to recognise that
the subformula property is a nice, but superficial property that has
no non-accidental relationship to the fundamental notion of analytic
proof.  The ironically named "analytic cut" so beloved of lazy
workers in the ATP community ensures that proofs that bear it are
not analytic; the fact that one still can perform some weak kinds
of proof analysis in its presence should not blind one to this
fact.  More importantly, in the presence of deep inference the
subformula property may be incompatible with the most revealing
presentations of proof analyticity.
```

5 May 2004 23:23

### Re: proof nets for MLL with units

```
Charles:

At 7:21 +0000 5.5.04, Charles Stewart wrote:
>>  I'd like to know whether people believe or not that identity of
>>  proofs in sophisticated logics, like classical logic, can effectively
>>  be captured by simple `decorations' on top of the formula tree.
>
>I think so, and I'd go so far as to say that if a particular notion of
>proof can be captured at all, it can be captured this way, but I think
>this is a very risky way in which to proceed, because *any* information
>at all might be added as `decoration'.  Indeed you never need to
>decorate proofs, you can just decorate sequents, and make the rules
>conditional on this decoration.  If you follow this route to its
>ultimate end, you get labelled deduction.  There is then no way
>internal to the framework to tell good from bad.

I'm not following, I need a more explicit explanation. You seem to
formulas.

You take a formula, you decorate it and that captures all `identical'
proofs. This means: you check the decoration and you know you have a
proof of the formula; moreover, two `identical' proofs yield the same
decoration.

This is my (probably wrong) take on the mysticism surrounding proof
nets. Do you believe in this?

Lutz:

```

6 May 2004 01:50

### Re: proof nets for MLL with units

```
Dear Alessio,

> >> I'd like to know whether people believe or not that identity of
> >> proofs in sophisticated logics, like classical logic, can effectively
> >> be captured by simple `decorations' on top of the formula tree.

> >I think so, and I'd go so far as to say that if a particular notion of
> >proof can be captured at all, it can be captured this way, but I think
> >this is a very risky way in which to proceed, because *any* information
> >at all might be added as `decoration'.  Indeed you never need to
> >decorate proofs, you can just decorate sequents, and make the rules
> >conditional on this decoration.  If you follow this route to its
> >ultimate end, you get labelled deduction.  There is then no way
> >internal to the framework to tell good from bad.

> I'm not following, I need a more explicit explanation. You seem to
> formulas.

I had the idea that you shifted from talking about one to the other.
I think both are equally expressive.

> You take a formula, you decorate it and that captures all `identical'
> proofs. This means: you check the decoration and you know you have a
> proof of the formula; moreover, two `identical' proofs yield the same
> decoration.

```

6 May 2004 15:34

### Re: Clarification on rule schemes

```
On Wed, 5 May 2004, Charles Stewart wrote:

>
> Dear all, dear sundry,
>
> Just a brief addendum to what Alessio says, that I shall take in
> a different direction.  He writes that:
>
> 	the most important principle is of course `to define
> 	connectives in isolation', meaning that you ask yourself
> 	how you can introduce the main connective of a formula,
> 	and then you try to come up with appropriate premises
> 	from which to conclude the desired conclusion. This works
> 	well for classical and intuitionistic logic, but not so
> 	well for linear and modal logics, and it doesn't work at
> 	all for some other logics like pomset logic. In linear
> 	logic, for example, the promotion rule does not obey this
> 	principle, because you define the `!' but you need to check
> 	for `?'s, so the `!' is not in isolation.
>
> The above, known as Dosen's principle, is only one of a number of
> holy cows we need to violate in order to get real progress in proof
> theory.

Just a clarification:
I think you talk about different principles. Alessio says that logical
rules (rules which introduce logical connectives) should exhibit only one
connective (but possibly more than one occurrence). In other words, the
introduction of a connective should not depend on the presence of
different connectives (I think Wansing already named this principle).
```

8 May 2004 11:29

### Re: Computability Logic

```
Alessio - I'm back. Sorry for the late response.

> You were mentioning a couple other fragments of computability logic.
> Can we have a look?

Ok. Let us call this one M. Its language is exactly that of classical propositional logic
(as an aside: computability logic has two sorts of atoms depending on what
interpretations they allow; the atoms of M are different from those of CL1, and
hence CL1 and M may disagree on some formulas).

The valid formulas of M are exactly those that are substitutional instances of binary
tautologies. "Binary tautology" means a classical tautology which, for every atom p,
contains at most one positive and at most one negative occurrence of p.

For example,  -aV(aVa)  is in M because it is a substitutional instance of the binary
tautology  -pV(qVp). On the other hand,  -aV(a&a)  is not in M.

M is a fragment of logic CL2 described in Section 6 of "Computability Logic: A
Formal Theory of Interaction". *Fragment* in the sense that CL2 is a conservative
extension of M. There is an  axiomatization for CL2 in a style similar to CL1, but
even if we are happy with that axiomatization, it cannot be mechanically restricted to
M because it essentially requires a stronger language than that of M.

So, the question now is: can we come up with a reasonable axiomatization directly
for M?

A first naive thought could be that Affine logic (linear logic + weakening) might fit
the bill. But this is not so: M can be shown to be strictly stronger. And, roughly
speaking, the reason - or one of the reasons - is that Affine logic only uses shallow
inference. Here is an example of a formula that is in M but not in Affine logic:
```