Lean 4.6.0
We're pleased to announce the release of Lean version 4.6.0! This is a big release, with significant improvements made to termination arguments, the simplifier, and the language server. On top of these headlines, there is the usual collection of improvements. The release notes contain more details.
Termination
Lean allows arbitrary recursive functions with the partial keyword, but otherwise functions must pass Lean's termination checker (and only functions passing the termination checker may be unfolded in proofs). In Lean 4.5.0, Lean gained support for automatically proving termination for more recursive definitions. But improved automation can never capture all termination arguments, so it's sometimes necessary to spell out to Lean why a function call does not lead to infinite loops by specifying a decreasing measure.
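For instance, a function whose termination Lean cannot establish can still be accepted with partial, at the cost of not being able to unfold it in proofs. This sketch (not from the release notes; the name and function are illustrative) uses the Collatz iteration, whose termination is a famous open problem:

```lean
-- Accepted only because of `partial`: Lean does not check that this
-- recursion terminates, so `collatzSteps` cannot be unfolded in proofs.
partial def collatzSteps (n : Nat) : Nat :=
  if n ≤ 1 then 0
  else if n % 2 == 0 then 1 + collatzSteps (n / 2)
  else 1 + collatzSteps (3 * n + 1)
```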
An explicit termination argument has two components:

- The measure itself: an expression that is smaller at each recursive call, provided with the termination_by clause
- A proof that the measure is in fact smaller at each recursive call, provided with the decreasing_by clause
Both aspects have been improved in Lean 4.6.0.
Termination Measures
Prior to Lean 4.6.0, in a mutual block, termination_by clauses were collected at the bottom of the block. Now they are provided separately at each function definition. This has allowed the syntax to become much simpler. At the same time, some confusing aspects were made consistent with the rest of Lean.
Instead of repeating the function name and providing parameter names, the parameter names from the signature are automatically in scope:
def findFrom? [BEq α] (needle : α) (haystack : Array α) (start : Nat) : Option Nat :=
  if h : start < haystack.size then
    if haystack[start] == needle then some start
    else findFrom? needle haystack (start + 1)
  else none
-- old syntax:
-- termination_by findFrom? needle haystack start => haystack.size - start
termination_by haystack.size - start
When a termination measure uses unnamed arguments, they can still be bound before a =>
and used in the measure.
The names are taken from left to right, and there's no need to write the function's name (or a placeholder for it):
def ackermann : Nat → Nat → Nat
  | 0, m => m + 1
  | n + 1, 0 => ackermann n 1
  | n + 1, m + 1 => ackermann n (ackermann (n + 1) m)
-- old syntax:
-- termination_by _ n m => (n, m)
termination_by n m => (n, m)
Termination measures for functions defined using let rec
are now located along with the function, rather than together at the bottom of the overall definition:
def find? [BEq α] (needle : α) (haystack : Array α) : Option Nat :=
  let rec go (i : Nat) :=
    if h : i < haystack.size then
      if haystack[i] == needle then some i
      else go (i + 1)
    else none
  termination_by haystack.size - i
  go 0
-- old syntax (previously placed at the bottom of the whole definition):
-- termination_by go haystack i => haystack.size - i
Likewise, termination measures for functions defined in where
clauses are now placed together with the function:
def find? [BEq α] (needle : α) (haystack : Array α) : Option Nat :=
  go 0
where
  go (i : Nat) :=
    if h : i < haystack.size then
      if haystack[i] == needle then some i
      else go (i + 1)
    else none
  termination_by haystack.size - i
-- old syntax:
-- termination_by go haystack i => haystack.size - i
This new syntax is more predictable, less verbose, and places termination measures closer to the functions they belong to.
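The same per-function placement applies in mutual blocks. As an illustration (a sketch with hypothetical functions, not an example from the release notes), each definition now carries its own termination_by clause instead of all clauses being collected at the bottom of the block:

```lean
mutual
  def isEven : Nat → Bool
    | 0 => true
    | n + 1 => isOdd n
  termination_by n => n

  def isOdd : Nat → Bool
    | 0 => false
    | n + 1 => isEven n
  termination_by n => n
end
```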
Termination Arguments
Once a termination measure has been specified, Lean requires a proof that each recursive call decreases the measure.
There is a default tactic that works for many common termination proofs, but it is limited to fairly simple patterns of reasoning.
The decreasing_by
clause is used to specify a proof script that demonstrates that the measure decreases.
Each recursive call in the function corresponds to a proof goal.
In prior versions of Lean, the proof script in the decreasing_by clause was run on all of the goals, as if copy-pasted, but this behavior could be inconvenient when each goal requires separate reasoning. This could be worked around by using the error-recovery combinator first, but the results were inelegant.
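Under the old behavior, per-goal reasoning had to be written as alternatives inside first, so that each goal would pick up whichever branch happened to apply. A hypothetical fragment (attached to some function's definition; the specific tactics are illustrative) might have looked like this:

```lean
-- Old semantics: this entire script ran on every termination goal,
-- so `first` was needed to select the right argument per goal.
decreasing_by
  simp_wf
  first
  | omega
  | (apply Nat.sub_lt <;> assumption)
```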
Starting in Lean 4.6.0, the behavior of decreasing_by
is consistent with that of by
, with tactics applying to the first goal.
The all_goals
combinator can be used to apply a tactic to every goal, such as simp_wf
, which simplifies termination goals to remove Lean internals.
As usual, bullets can be used to structure a proof by focusing on individual goals.
For example, this (very pedantic and verbose) termination proof carries out separate reasoning in each goal:
def gcd (n k : Nat) : Nat :=
  if n = 0 then k
  else if k = 0 then n
  else if n > k then gcd (n - k) k
  else gcd n (k - n)
termination_by n + k
decreasing_by
  all_goals simp_wf
  · rw [Nat.sub_add_cancel]
    · apply Nat.lt_add_right_iff_pos.mpr
      apply Nat.zero_lt_of_ne_zero
      assumption
    · apply Nat.le_of_lt
      assumption
  · rw [Nat.add_sub_cancel']
    · apply Nat.lt_add_of_pos_left
      apply Nat.zero_lt_of_ne_zero
      assumption
    · apply Nat.le_of_not_gt
      assumption
Often, however, all the termination goals can be proven by a more powerful tactic, such as omega, a decision procedure for linear arithmetic over integers and natural numbers. In this case, all_goals is usually the right tool for the job:
def gcd (n k : Nat) : Nat :=
  if n = 0 then k
  else if k = 0 then n
  else if n > k then gcd (n - k) k
  else gcd n (k - n)
termination_by n + k
decreasing_by all_goals (simp_wf; omega)
the definition of the function uses `fun _ => t` and `fun _ => e` so this recovers
the expected "lazy" behavior of `if`: the `t` and `e` arguments delay evaluation
until `c` is known.
nNat
> kNat
then`if c then t else e` is notation for `ite c t e`, "ifthenelse", which decides to
return `t` or `e` depending on whether `c` is true or false. The explicit argument
`c : Prop` does not have any actual computational content, but there is an additional
`[Decidable c]` argument synthesized by typeclass inference which actually
determines how to evaluate `c` to true or false. Write `if h : c then t else e`
instead for a "dependent ifthenelse" `dite`, which allows `t`/`e` to use the fact
that `c` is true/false.
Because Lean uses a strict (callbyvalue) evaluation strategy, the signature of this
function is problematic in that it would require `t` and `e` to be evaluated before
calling the `ite` function, which would cause both sides of the `if` to be evaluated.
Even if the result is discarded, this would be a big performance problem,
and is undesirable for users in any case. To resolve this, `ite` is marked as
`@[macro_inline]`, which means that it is unfolded during code generation, and
the definition of the function uses `fun _ => t` and `fun _ => e` so this recovers
the expected "lazy" behavior of `if`: the `t` and `e` arguments delay evaluation
until `c` is known.
gcd_root_.gcd (n k : Nat) : Nat
(nNat
 kNat
) kNat
else`if c then t else e` is notation for `ite c t e`, "ifthenelse", which decides to
return `t` or `e` depending on whether `c` is true or false. The explicit argument
`c : Prop` does not have any actual computational content, but there is an additional
`[Decidable c]` argument synthesized by typeclass inference which actually
determines how to evaluate `c` to true or false. Write `if h : c then t else e`
instead for a "dependent ifthenelse" `dite`, which allows `t`/`e` to use the fact
that `c` is true/false.
Because Lean uses a strict (callbyvalue) evaluation strategy, the signature of this
function is problematic in that it would require `t` and `e` to be evaluated before
calling the `ite` function, which would cause both sides of the `if` to be evaluated.
Even if the result is discarded, this would be a big performance problem,
and is undesirable for users in any case. To resolve this, `ite` is marked as
`@[macro_inline]`, which means that it is unfolded during code generation, and
the definition of the function uses `fun _ => t` and `fun _ => e` so this recovers
the expected "lazy" behavior of `if`: the `t` and `e` arguments delay evaluation
until `c` is known.
gcd_root_.gcd (n k : Nat) : Nat
nNat
(kNat
 nNat
)
termination_bySpecify a termination argument for wellfounded termination:
```
termination_by a  b
```
indicates that termination of the currently defined recursive function follows
because the difference between the the arguments `a` and `b`.
If the fuction takes further argument after the colon, you can name them as follows:
```
def example (a : Nat) : Nat → Nat → Nat :=
termination_by b c => a  b
```
If omitted, a termination argument will be inferred.
nNat
+ kNat
decreasing_byManually prove that the termination argument (as specified with `termination_by` or inferred)
decreases at each recursive call.
By default, the tactic `decreasing_tactic` is used.
all_goals`all_goals tac` runs `tac` on each goal, concatenating the resulting goals, if any.
simp_wfUnfold definitions commonly used in well founded relation definitions.
This is primarily intended for internal use in `decreasing_tactic`.
; omegaThe `omega` tactic, for resolving integer and natural linear arithmetic problems.
It is not yet a full decision procedure (no "dark" or "grey" shadows),
but should be effective on many problems.
We handle hypotheses of the form `x = y`, `x < y`, `x ≤ y`, and `k ∣ x` for `x y` in `Nat` or `Int`
(and `k` a literal), along with negations of these statements.
We decompose the sides of the inequalities as linear combinations of atoms.
If we encounter `x / k` or `x % k` for literal integers `k` we introduce new auxiliary variables
and the relevant inequalities.
On the first pass, we do not perform case splits on natural subtraction.
If `omega` fails, we recursively perform a case split on
a natural subtraction appearing in a hypothesis, and try again.
The options
```
omega (config :=
{ splitDisjunctions := true, splitNatSub := true, splitNatAbs := true, splitMinMax := true })
```
can be used to:
* `splitDisjunctions`: split any disjunctions found in the context,
if the problem is not otherwise solvable.
* `splitNatSub`: for each appearance of `((a  b : Nat) : Int)`, split on `a ≤ b` if necessary.
* `splitNatAbs`: for each appearance of `Int.natAbs a`, split on `0 ≤ a` if necessary.
* `splitMinMax`: for each occurrence of `min a b`, split on `min a b = a ∨ min a b = b`
Currently, all of these are on by default.
Just like termination_by, the decreasing_by clause is now attached to each function definition.
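To illustrate the new per-function placement, here is a minimal sketch of a mutual block (the even/odd pair is a hypothetical example, not from the release notes). Each definition carries its own termination_by clause, with the unnamed Nat argument bound before the =>:

```lean
mutual
  def isEven : Nat → Bool
    | 0 => true
    | n + 1 => isOdd n
  termination_by n => n

  def isOdd : Nat → Bool
    | 0 => false
    | n + 1 => isEven n
  termination_by n => n
end
```

Here the default decreasing_tactic discharges the proof obligations, since n < n + 1 at each mutual call.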
Custom Simplification Procedures
Lean's all-purpose simplification tactic simp gained a powerful new feature: custom simplification procedures, called "simprocs".
Typically, simp rewrites a goal according to a collection of simplification lemmas, which are equality proofs.
Roughly speaking, if the universally-quantified variables in a simplification lemma can be instantiated such that the left-hand side occurs in the term to be simplified, then it is rewritten to the right-hand side, as long as simp can also prove the side conditions of the lemma.
But this approach is not always sufficiently flexible.
Simprocs allow arbitrary Lean code to be used to rewrite terms via their internal representation as a Lean datatype.
While they are trickier to define than ordinary simp lemmas, they enable library authors to provide simplification rules that were impossible to implement as ordinary lemmas, including:

- Support for reducing numeric literals, which are handled specially by Lean for performance reasons, so 1000 + 2000 simplifies directly to 3000
- Simplification procedures for custom theories, such as using ring axioms or performing polynomial factorization
- Simplification using custom simplification strategies. One example in Lean is a simproc for if-then-else expressions that first tries to simplify the condition. If it simplifies to true, only the then branch needs to be simplified, and if it simplifies to false, only the else branch needs to be simplified.
- Simprocs that interrupt simplification when a specific pattern is found

For examples, please see the release notes.
Even users who do not directly implement their own simprocs will benefit from the improved automation in simp.
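As a small illustration of the first point above (our example, not from the release notes), the following goal is closed by simp's built-in literal-arithmetic simproc rather than by rewriting with individual lemmas:

```lean
-- The numeric literals are reduced directly by a built-in simproc:
example : 1000 + 2000 = 3000 := by simp
```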
Language Server
Lean's language server now supports call hierarchy queries. This makes it easy to find out who calls a given function, and recursively explore their callers and their callers' callers.
In VS Code, it looks like this:
Pretty Printer
Lean's pretty printer is the component of the system that's responsible for rendering terms in proof goals, error messages, and other communication to users. Because it is used in so many different contexts, the pretty printer has a number of configurable options that can be used to make it fit a particular use case. Based on user request, new options were introduced in this release.
Sometimes, a term may be very large, but the part that's relevant to the task at hand is not deeply buried. In these cases, the following pretty-printer options can be used:

- If pp.deepTerms is set to false, the pretty printer stops printing at a given depth in the tree, replacing the term with an ellipsis.
- When deep terms are to be elided, the threshold is specified in pp.deepTerms.threshold.
For example, take the following definition of a tree datatype and a function that generates a large tree:
inductive Tree (α) where
  | leaf
  | branch (left : Tree α) (val : α) (right : Tree α)

def Tree.count : Tree α → Nat
  | .leaf => 0
  | .branch l _ r => l.count + r.count + 1
def unbalanced (n : Nat) : Tree Nat :=
  if n = 0 then .leaf
  else
    let left := unbalanced (n / 2)
    let right := unbalanced (n - 1)
    .branch left n right
termination_by n
decreasing_by all_goals simp_wf; omega
A proof state that contains a large tree is difficult to read and occupies a great deal of space:
example : (unbalanced 10).count = 59 := by
  simp [unbalanced]
unsolved goals ⊢ Tree.count (Tree.branch (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 5 (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 4 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf))))) 10 (Tree.branch (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 4 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)))) 9 (Tree.branch (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 4 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)))) 8 (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf))) 7 (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf))) 6 (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 5 (Tree.branch (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)) 4 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 3 (Tree.branch (Tree.branch Tree.leaf 1 Tree.leaf) 2 (Tree.branch Tree.leaf 1 Tree.leaf)))))))))) = 59
Asking Lean to elide deep terms and setting a suitable threshold greatly reduces the size of the message:
set_option pp.deepTerms false
set_option pp.deepTerms.threshold 5

example : (unbalanced 10).count = 59 := by
  simp [unbalanced]
unsolved goals ⊢ Tree.count (Tree.branch (Tree.branch (Tree.branch (Tree.branch ⋯ ⋯ ⋯) 2 (Tree.branch ⋯ ⋯ ⋯)) 5 (Tree.branch (Tree.branch ⋯ ⋯ ⋯) 4 (Tree.branch ⋯ ⋯ ⋯))) 10 (Tree.branch (Tree.branch (Tree.branch ⋯ ⋯ ⋯) 4 (Tree.branch ⋯ ⋯ ⋯)) 9 (Tree.branch (Tree.branch ⋯ ⋯ ⋯) 8 (Tree.branch ⋯ ⋯ ⋯)))) = 59
Hovering the mouse over an ellipsis shows the elided term:
Additionally, Lean's pretty printer now supports adding type annotations to numeric literals. In this example, there's no way to see the types by glancing at the output:
structure Numbers where
  one : Nat
  another : Int
  yetMore : Float

#check {one := 3, another := 3, yetMore := 3 : Numbers}
{ one := 3, another := 3, yetMore := 3 } : Numbers
Setting pp.numericTypes to true results in type ascriptions:
set_option pp.numericTypes true

#check {one := 3, another := 3, yetMore := 3 : Numbers}
{ one := (3 : Nat), another := (3 : Int), yetMore := (3 : Float) } : Numbers
Further Improvements
Further improvements were made to the pretty printer and to Lake's handling of multi-platform projects, and the behavior of Array.swap! was made more consistent with the other ! variants. The implementation of structure updating was made both more predictable and faster, which led to a large improvement in Mathlib's build times. Many small bugs were fixed, and many more improvements were made. Please see the release notes for a complete list.