5.4. Partial Fixpoint Recursion🔗

All definitions are fundamentally equations: the new constant being defined is equal to the right-hand side of the definition. For functions defined by structural recursion, this equation holds definitionally, and there is a unique value returned by application of the function. For functions defined by well-founded recursion, the equation may hold only propositionally, but all type-correct applications of the function to arguments are equal to the respective values prescribed by the definition. In both cases, the fact that the function terminates for all inputs means that the value computed by applying the function is always uniquely determined.

In some cases where a function does not terminate for all arguments, the equation may not uniquely determine the function's return value for each input, but there are nonetheless functions for which the defining equation holds. In these cases, a definition as a partial fixpoint may still be possible. Any function that satisfies the defining equation can be used to demonstrate that the equation does not create a logical contradiction, and the equation can then be proven as a theorem about this function. As with the other strategies for defining recursive functions, compiled code uses the function as it was originally written; like definitions in terms of eliminators or recursion over accessibility proofs, the function used to define the partial fixpoint is used only to justify the function's equations in Lean's logic for purposes of mathematical reasoning.

The term partial fixpoint is specific to Lean. Functions declared partial do not require termination proofs, so long as the type of their return values is inhabited, but they are completely opaque from the perspective of Lean's logic. Partial fixpoints, on the other hand, can be rewritten using their defining equations while writing proofs. Logically speaking, partial fixpoints are total functions that don't reduce definitionally when applied, but for which equational rewrite rules are provided. They are partial in the sense that the defining equation does not necessarily specify a value for all possible arguments.
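
For illustration, the following sketch (the names spinP and spinF are made up here) contrasts the two: both definitions are accepted without a termination proof, but only the second exposes a defining equation that proofs can rewrite with, while the first remains opaque to the logic.

partial def spinP (n : Nat) : Nat :=
  if n = 0 then 0 else spinP (n - 1)

def spinF (n : Nat) : Nat :=
  if n = 0 then 0 else spinF (n - 1)
partial_fixpoint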

While partial fixpoints do allow functions to be defined that cannot be expressed using structural or well-founded recursion, the technique is also useful in other cases. Even in cases where the defining equation fully describes the function's behavior and a termination proof using well-founded recursion would be possible, it may simply be more convenient to define the function as a partial fixpoint to avoid having to write a termination proof.

Defining recursive functions as partial fixpoints only occurs when explicitly requested by annotating the definition with partial_fixpoint.

There are two classes of functions that can be defined as partial fixpoints:

  • Tail-recursive functions whose return type is an inhabited type

  • Functions that return values in a suitable monad, such as the Option monad

Both classes are backed by the same theory and construction: least fixpoints of monotone equations in chain-complete partial orders.

Just as with structural and well-founded recursion, Lean allows mutually recursive functions to be defined as partial fixpoints. To use this feature, every function definition in a mutual block must be annotated with the partial_fixpoint modifier.

Definition by Partial Fixpoint

The following function finds the least natural number for which the predicate p holds. If p never holds, then this equation does not specify the behavior: the function find could return 42 or any other Nat in that case and still satisfy the equation.

def find (p : Nat → Bool) (i : Nat := 0) : Nat :=
  if p i then i else find p (i + 1)
partial_fixpoint

The elaborator can prove that functions satisfying the equation exist. Within Lean's logic, find is defined to be an arbitrary such function.
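
As a small sketch of such rewriting (assuming the defining equation is exposed under the usual name find.eq_def), a single unfolding step followed by simplification suffices when p holds immediately:

example (p : Nat → Bool) (h : p 0 = true) : find p 0 = 0 := by
  rw [find.eq_def]  -- one rewriting step with the defining equation
  simp [h]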

5.4.1. Tail-Recursive Functions🔗

A recursive function can be defined as a partial fixpoint if the following two conditions hold:

  1. The function's return type is inhabited (as with functions marked partial); either a Nonempty or Inhabited instance works.

  2. All recursive calls are in tail position of the function.

An expression is in tail position in the function body if it is:

  • the function body itself,

  • the branches of a match expression that is in tail position,

  • the branches of an if expression that is in tail position, and

  • the body of a let expression that is in tail position.

In particular, the discriminant of a match expression, the condition of an if expression, and the arguments of functions are not tail positions.

Loops are Tail Recursive Functions

Because the function body itself is a tail position, the infinitely looping function loop is tail recursive. It can be defined as a partial fixpoint.

def loop (x : Nat) : Nat := loop (x + 1)
partial_fixpoint

Tail Recursion with Branching

Array.find could also be constructed using well-founded recursion with a termination proof, but it may be more convenient to define it using partial_fixpoint, where no termination proof is needed.

def Array.find (xs : Array α) (p : α → Bool) (i : Nat := 0) : Option α :=
  if h : i < xs.size then
    if p xs[i] then some xs[i] else Array.find xs p (i + 1)
  else
    none
partial_fixpoint

If the result of the recursive call is not just returned, but passed to another function, it is not in tail position and this definition fails.

def List.findIndex (xs : List α) (p : α → Bool) : Int :=
  match xs with
  | [] => -1
  | x::ys =>
    if p x then
      0
    else
      have r := List.findIndex ys p
      if r = -1 then -1 else r + 1
partial_fixpoint

The error message on the recursive call is:

Could not prove 'List.findIndex' to be monotone in its recursive calls:
  Cannot eliminate recursive call `List.findIndex ys p` enclosed in
    let_fun r := ys✝.findIndex p;
    if r = -1 then -1 else r + 1
  

5.4.2. Monadic functions🔗

Defining a function as a partial fixpoint is more powerful if the function's return type is a monad that is an instance of Lean.Order.MonoBind, such as Option. In this case, recursive calls are not restricted to tail positions, but may also occur inside higher-order monadic functions such as bind and List.mapM.

The set of higher-order functions for which this works is extensible (see the monotonicity lemmas and the partial_fixpoint_monotone attribute below), so no exhaustive list is given here. The aspiration is that a monadic recursive function definition that is built using abstract monadic operations like bind, but that does not open the abstraction of the monad (e.g. by matching on the Option value), is accepted. In particular, using do-notation should work.

Monadic functions

The following function implements the Ackermann function in the Option monad, and is accepted without an (explicit or implicit) termination proof:

def ack : (n m : Nat) → Option Nat
  | 0, y => some (y+1)
  | x+1, 0 => ack x 1
  | x+1, y+1 => do ack x (← ack (x+1) y)
partial_fixpoint
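
Compiled code runs the recursion as written, so the function can be evaluated directly; a small usage sketch:

#eval ack 2 3  -- some 9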

Recursive calls may also occur within higher-order functions such as List.mapM, if they are set up appropriately, and within do-notation:

structure Tree where
  cs : List Tree

def Tree.rev (t : Tree) : Option Tree := do
  return Tree.mk (← t.cs.reverse.mapM (Tree.rev ·))
partial_fixpoint

def Tree.rev' (t : Tree) : Option Tree := do
  let mut cs := []
  for c in t.cs do
    cs := (← c.rev') :: cs
  return Tree.mk cs
partial_fixpoint

Pattern matching on the result of the recursive call will prevent the definition by partial fixpoint from going through:

def List.findIndex (xs : List α) (p : α → Bool) : Option Nat :=
  match xs with
  | [] => none
  | x::ys =>
    if p x then
      some 0
    else
      match List.findIndex ys p with
      | none => none
      | some r => some (r + 1)
partial_fixpoint

Could not prove 'List.findIndex' to be monotone in its recursive calls:
  Cannot eliminate recursive call `List.findIndex ys p` enclosed in
    match ys✝.findIndex p with
    | none => none
    | some r => some (r + 1)
  

In this particular case, using Functor.map instead of explicit pattern matching helps:

def List.findIndex (xs : List α) (p : α → Bool) : Option Nat :=
  match xs with
  | [] => none
  | x::ys =>
    if p x then
      some 0
    else
      (· + 1) <$> List.findIndex ys p
partial_fixpoint
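
As a small usage sketch, the compiled function can be evaluated as usual:

#eval [1, 2, 3].findIndex (· == 2)  -- some 1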

5.4.3. Partial Correctness Theorems🔗

For every function defined as a partial fixpoint, Lean proves that the defining equation is satisfied. This enables proofs by rewriting. However, these equational theorems are not sufficient for reasoning about the behavior of the function on arguments for which the function specification does not terminate. Code paths that lead to infinite recursion at runtime would end up as infinite chains of rewrites in a potential proof.

Partial fixpoints in suitable monads, on the other hand, provide additional theorems that map the undefined values arising from non-termination to suitable values in the monad. In the Option monad, the partial fixpoint equals Option.none on all function inputs for which the defining equation specifies non-termination. From this fact, Lean proves a partial correctness theorem for the function which allows facts to be concluded when the function's result is Option.some.
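
As a sketch of this principle, consider a function whose defining equation never specifies termination (the names loopO and loopO_ne_some are illustrative, and the generated theorem is assumed to have the same shape as List.findIndex.partial_correctness shown below). Choosing False as the motive shows that the function never returns some, which is to say it always returns none:

def loopO (x : Nat) : Option Nat := loopO (x + 1)
partial_fixpoint

theorem loopO_ne_some (x r : Nat) : loopO x ≠ some r := by
  intro h
  exact loopO.partial_correctness (motive := fun _ _ => False)
    (fun _loopO ih x r hsome => ih (x + 1) r hsome) x r h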

Partial Correctness Theorem

Recall List.findIndex from an earlier example:

def List.findIndex (xs : List α) (p : α → Bool) : Option Nat :=
  match xs with
  | [] => none
  | x::ys =>
    if p x then
      some 0
    else
      (· + 1) <$> List.findIndex ys p
partial_fixpoint

With this function definition, Lean automatically proves the following partial correctness theorem:

List.findIndex.partial_correctness {α : Type _}
    (motive : List α → (α → Bool) → Nat → Prop)
    (h : ∀ (findIndex : List α → (α → Bool) → Option Nat),
      (∀ (xs : List α) (p : α → Bool) (r : Nat),
        findIndex xs p = some r → motive xs p r) →
      ∀ (xs : List α) (p : α → Bool) (r : Nat),
        (match xs with
          | [] => none
          | x :: ys => if p x = true then some 0
                       else (fun x => x + 1) <$> findIndex ys p) = some r →
        motive xs p r)
    (xs : List α) (p : α → Bool) (r : Nat) :
    xs.findIndex p = some r → motive xs p r

Here, the motive is a relation between the parameter and return types of List.findIndex, with the Option removed from the return type. If, when given an arbitrary partial function with a signature that's compatible with List.findIndex, the following hold:

  • the motive is satisfied for all inputs for which the arbitrary function returns a value (rather than none),

  • taking one rewriting step with the defining equation, in which the recursive calls are replaced by the arbitrary function, also implies the satisfaction of the motive

then the motive is satisfied for all inputs for which List.findIndex returns some.

The partial correctness theorem is a reasoning principle. It can be used to prove that the resulting number is a valid index in the list and that the predicate holds for that index:

theorem List.findIndex_implies_pred (xs : List α) (p : α → Bool) :
    xs.findIndex p = some i → ∃ x, xs[i]? = some x ∧ p x := by
  apply List.findIndex.partial_correctness
    (motive := fun xs p i => ∃ x, xs[i]? = some x ∧ p x)
  intro findIndex ih xs p r hsome
  split at hsome
  next => contradiction
  next x ys =>
    split at hsome
    next =>
      have : r = 0 := by simp_all
      simp_all
    next =>
      simp at hsome
      obtain ⟨r', hr, rfl⟩ := hsome
      specialize ih ys p r' hr
      simpa using ih

5.4.4. Mutual Recursion with Partial Fixpoints🔗

Lean supports the definition of mutually recursive functions using partial fixpoints. Mutual recursion may be introduced using a mutual block, but it also results from let rec expressions and where blocks. The rules for mutual recursion via partial fixpoints are applied to a group of actually mutually recursive, lifted definitions that results from the elaboration steps for mutual groups.

If all functions in the mutual group have the partial_fixpoint clause, then this strategy is used.
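
For instance, the following sketch (with illustrative names evenSearch and oddSearch) is a mutual group in which every definition carries the clause. evenSearch reports whether the first index at or after i where p holds lies an even number of steps away, and oddSearch reports whether it lies an odd number of steps away; neither requires a termination proof.

mutual
  def evenSearch (p : Nat → Bool) (i : Nat) : Option Bool :=
    if p i then some true else oddSearch p (i + 1)
  partial_fixpoint

  def oddSearch (p : Nat → Bool) (i : Nat) : Option Bool :=
    if p i then some false else evenSearch p (i + 1)
  partial_fixpoint
end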

5.4.5. Theory and Construction🔗

The construction builds on a variant of the Knaster–Tarski theorem: In a chain-complete partial order, every monotone function has a least fixed point.

The necessary theory is found in the Lean.Order namespace. This is not meant to be a general purpose library of order theoretic results. Instead, the definitions and theorems in Lean.Order are only intended as implementation details of the Lean.Parser.Command.declaration : commandpartial_fixpoint feature, and they should be considered a private API that may change without notice.

The notion of a partial order, and that of a chain-complete partial order, are represented by the type classes Lean.Order.PartialOrder and Lean.Order.CCPO, respectively.

🔗type class
Lean.Order.PartialOrder.{u} (α : Sort u)
    : Sort (max 1 u)

A partial order is a reflexive, transitive and antisymmetric relation.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

Instance Constructor

Lean.Order.PartialOrder.mk.{u}

Methods

rel : α → α → Prop

A “less-or-equal-to” or “approximates” relation.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

rel_refl : ∀ {x : α}, x ⊑ x
rel_trans : ∀ {x y z : α}, x ⊑ y → y ⊑ z → x ⊑ z
rel_antisymm : ∀ {x y : α}, x ⊑ y → y ⊑ x → x = y
🔗type class
Lean.Order.CCPO.{u} (α : Sort u)
    : Sort (max 1 u)

A chain-complete partial order (CCPO) is a partial order where every chain has a least upper bound.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

Instance Constructor

Lean.Order.CCPO.mk.{u}

Extends

  1. PartialOrder α

Methods

rel : α → α → Prop
Inherited from
  1. PartialOrder α
rel_refl : ∀ {x : α}, x ⊑ x
Inherited from
  1. PartialOrder α
rel_trans : ∀ {x y z : α}, x ⊑ y → y ⊑ z → x ⊑ z
Inherited from
  1. PartialOrder α
rel_antisymm : ∀ {x y : α}, x ⊑ y → y ⊑ x → x = y
Inherited from
  1. PartialOrder α
csup : (α → Prop) → α

The least upper bound of a chain.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

csup_spec : ∀ {x : α} {c : α → Prop}, chain c → (CCPO.csup c ⊑ x ↔ ∀ (y : α), c y → y ⊑ x)

A function is monotone if it preserves partial orders. That is, if x ⊑ y then f x ⊑ f y. The ⊑ operator represents Lean.Order.PartialOrder.rel.

🔗def
Lean.Order.monotone.{u, v} {α : Sort u}
    [PartialOrder α] {β : Sort v}
    [PartialOrder β] (f : α → β) : Prop

A function is monotone if it maps related elements to related elements.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

The fixpoint of a monotone function can be taken using fix, which indeed constructs a fixpoint, as shown by fix_eq:

🔗def
Lean.Order.fix.{u} {α : Sort u} [CCPO α]
    (f : α → α) (hmono : monotone f) : α

The least fixpoint of a monotone function is the least upper bound of its transfinite iteration.

The monotone f assumption is not strictly necessary for the definition, but without it the definition is not very meaningful, and it simplifies applying theorems like fix_eq if every use of fix already comes with the monotonicity requirement.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

🔗theorem
Lean.Order.fix_eq.{u} {α : Sort u} [CCPO α]
    {f : α → α} (hf : monotone f)
    : fix f hf = f (fix f hf)

The main fixpoint theorem for fixpoints of monotone functions in chain-complete partial orders.

This is intended to be used in the construction of partial_fixpoint, and not meant to be used otherwise.

To construct the partial fixpoint, Lean first synthesizes a suitable CCPO instance.

  • If the function's result type has a dedicated instance, like Option has with instCCPOOption, this is used together with the instance for the function type, instCCPOPi, to construct an instance for the whole function's type.

  • Otherwise, if the function's type can be shown to be inhabited by a witness w, then the instance FlatOrder.instCCPO for the wrapper type FlatOrder w is used. In this order, w is a least element and all other elements are incomparable.

Next, the recursive calls in the right-hand side of the function definitions are abstracted; the resulting functional becomes the argument f of fix. The monotonicity requirement is solved by the monotonicity tactic, which applies compositional monotonicity lemmas in a syntax-driven way.

The tactic solves goals of the form monotone (fun x => … x …) using the following steps:

  • Applying monotone_const when there is no dependency on x left.

  • Splitting on match expressions.

  • Splitting on if expressions.

  • Moving a let expression to the context, if the value and type do not depend on x.

  • Zeta-reducing a let expression when the value or type does depend on x.

  • Applying lemmas annotated with partial_fixpoint_monotone

The following monotonicity lemmas are registered, and should allow recursive calls under the given higher-order functions in the arguments indicated by · (but not the other arguments, shown as _).

Theorem                      Pattern

monotone_allM                Array.allM · _ _ _
monotone_anyM                Array.anyM · _ _ _
monotone_anyM_loop           Array.anyM.loop · _ _ ⋯ _
monotone_array_filterMapM    Array.filterMapM · _
monotone_array_forM          Array.forM · _ _ _
monotone_array_forRevM       Array.forRevM · _ _ _
monotone_findIdxM?           Array.findIdxM? · _
monotone_findM?              Array.findM? · _
monotone_findRevM?           Array.findRevM? · _
monotone_findSomeM?          Array.findSomeM? · _
monotone_findSomeRevM?       Array.findSomeRevM? · _
monotone_flatMapM            Array.flatMapM · _
monotone_foldlM              Array.foldlM · _ _ _ _
monotone_foldlM_loop         Array.foldlM.loop · _ _ ⋯ _ _ _
monotone_foldrM              Array.foldrM · _ _ _ _
monotone_foldrM_fold         Array.foldrM.fold · _ _ _ ⋯ _
monotone_forIn               forIn _ _ ·
monotone_forIn'              forIn' _ _ ·
monotone_forIn'_loop         Array.forIn'.loop _ · _ ⋯ _
monotone_mapFinIdxM          _.mapFinIdxM ·
monotone_mapM                Array.mapM · _
monotone_modifyM             _.modifyM _ ·
monotone_map                 _ <$> ·
monotone_allM                List.allM · _
monotone_anyM                List.anyM · _
monotone_filterAuxM          List.filterAuxM · _ _
monotone_filterM             List.filterM · _
monotone_filterRevM          List.filterRevM · _
monotone_findM?              List.findM? · _
monotone_findSomeM?          List.findSomeM? · _
monotone_foldlM              List.foldlM · _ _
monotone_foldrM              List.foldrM · _ _
monotone_forIn               forIn _ _ ·
monotone_forIn'              forIn' _ _ ·
monotone_forIn'_loop         List.forIn'.loop _ · _ _ ⋯
monotone_forM                _.forM ·
monotone_mapM                List.mapM · _
monotone_bindM               Option.bindM · _
monotone_elimM               Option.elimM · · ·
monotone_getDM               _.getDM ·
monotone_mapM                Option.mapM · _
monotone_fst                 ·.fst
monotone_mk                  ⟨·, ·⟩
monotone_snd                 ·.snd
monotone_seq                 · <*> ·
monotone_seqLeft             · <* ·
monotone_seqRight            · *> ·
monotone_bind                · >>= ·
monotone_dite                dite _ · ·
monotone_ite                 if _ then · else ·