Lean 4.8.0
Lean 4.8.0 has been released!
Version 4.8.0 is a very big release, containing more than 370 separate improvements. There is no way for a single post to do justice to all of the hard work that has been put into this release. Many of the improvements serve to make Lean faster and more robust, and will be visible only indirectly, because the whole system feels snappier and more reliable. On top of that, there were also a large number of directly visible enhancements: read on for some highlights!
For a complete accounting of the updates in Lean 4.8.0, please refer to the release notes.
Standard Library
In the past, Lean had a community-developed standard library, maintained separately from the compiler. This library provided not only datatypes and functions, but also tactics, language features like `#guard_msgs`, linters, and even interactive features such as code actions.
However, an externally-developed standard library comes with a number of drawbacks:

- It is difficult to fully integrate features from the library with the rest of the language. For example, when Lean 4.7 moved the `omega` tactic into Lean itself, it became possible to automatically check that array accesses are in bounds and to use it for automatic termination arguments.
- Features that require tight coordination between compiler improvements and library improvements are much more difficult to implement, because they need to be coordinated across multiple repositories and maintainer teams.
- Users are burdened with explicitly tracking their dependency on the standard library in their build system, and some users may not know to use it.
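The array-indexing point can be illustrated with a small sketch (hypothetical examples, not from the release notes): `omega` decides linear arithmetic over `Nat`, which is exactly the shape of in-bounds and termination side goals.

```lean
-- `omega` closes linear-arithmetic goals over Nat and Int.
example (i n : Nat) (h : i + 1 < n) : i < n := by omega

-- In-bounds obligations from indexing can be discharged automatically:
-- the default side-goal tactic can use omega-style reasoning to derive
-- `i < xs.size` from `h` without a manual proof.
example (xs : Array Nat) (i : Nat) (h : i + 1 < xs.size) : Nat :=
  xs[i]
```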
We made the decision to include the standard library as part of Lean itself. Lean 4.7 upstreamed many of the most important, mature parts of the community standard library into the compiler, making them available out of the box for everyone. The community library, now intended for extra contributions that are not part of Lean itself, has been renamed and moved to leanprover-community/batteries, so please update your Lakefiles to point at the new location.
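For example, a Lake dependency on the old repository can be redirected as follows (a sketch; the package name `batteries` and the `main` revision are placeholders for whatever your project actually pins):

```lean
-- In lakefile.lean: point the dependency at the renamed repository.
require batteries from git
  "https://github.com/leanprover-community/batteries" @ "main"
```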
Termination Checking
The termination checker now recognizes more complex patterns of recursive calls, so fewer `termination_by` clauses are needed. In particular, the common idiom of counting up to a bound rather than down towards zero is now recognized automatically, as in `Array.count`, which counts the elements of an array that satisfy some Boolean predicate `p`:
```lean
def Array.count (p : α → Bool) (array : Array α) : Nat :=
  let rec go (acc : Nat) (i : Nat) :=
    if h : i < array.size then
      if p array[i] then
        go (acc + 1) (i + 1)
      else
        go acc (i + 1)
    else
      acc
  go 0 0
```
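For another instance of the same idiom, here is a hypothetical definition (not from the release notes) that counts upward to a bound; on Lean 4.8 it should be accepted without a `termination_by` clause, because the checker can infer a decreasing measure from the branch condition `i < n`:

```lean
-- Counting up: recursion increases `i` toward the bound `n`; the
-- hypothesis `h : i < n` lets Lean infer that `n - i` decreases.
def countUp (n : Nat) (i : Nat := 0) : List Nat :=
  if h : i < n then
    i :: countUp n (i + 1)
  else
    []
```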
Functional Induction
Functional induction makes it much easier to write proofs about recursive functions that are not structurally recursive. Typically, proofs about recursive functions must follow the structure of the function's recursive calls. Structural recursion matches induction over the arguments: base cases of the recursion correspond to base cases for the argument, and recursive calls correspond to appeals to an induction hypothesis.
However, functions defined using other patterns of recursion require proofs that are structured correspondingly. In earlier versions of Lean, these proofs could be written as recursive functions themselves, but Lean 4.8 includes support for functional induction, where each recursive function provides an induction principle that the `induction` tactic can use to follow the structure of its recursion.
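As a minimal self-contained sketch (the function `halve` and the proof below are illustrative examples, not taken from the release), the `induction … using` tactic can follow a recursion that descends in steps of two:

```lean
-- `halve` recurses in steps of two, so plain `induction n` does not match
-- its call structure, but the generated principle `halve.induct` does.
def halve : Nat → Nat
  | 0 => 0
  | 1 => 0
  | n + 2 => halve n + 1

-- Each case mirrors one equation of `halve`; `case3` receives the
-- induction hypothesis `ih : halve n ≤ n` for the recursive call.
theorem halve_le (n : Nat) : halve n ≤ n := by
  induction n using halve.induct with
  | case1 => simp only [halve]; omega
  | case2 => simp only [halve]; omega
  | case3 n ih => simp only [halve]; omega
```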
In Lean 4.8, a proof that the count never exceeds the size of the array can be written with the `induction` tactic directly, using the functional induction principle for the inner loop `Array.count.go`:
```lean
theorem Array.count.go_le_size {array : Array α} :
    acc ≤ i → i ≤ array.size →
    Array.count.go p array acc i ≤ array.size := by
  intro hle1 hle2
  induction acc, i using count.go.induct (p := p) (array := array) with
  | case1 acc i inBounds pTrue ih
  | case2 acc i inBounds pFalse ih
```

Here `count.go.induct` is the functional induction principle that Lean generates for `Array.count.go`. Each case corresponds to one branch of the function, and the induction hypothesis matches the recursive call in that branch. In `case2`, for instance, where the predicate rejects the current element, the proof state is:

```
case2
α : Type u_1
p : α → Bool
array : Array α
acc i : Nat
inBounds : i < array.size
pFalse : ¬p array[i] = true
ih : acc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
hle1 : acc ≤ i
hle2 : i ≤ array.size
⊢ go p array acc i ≤ array.size
```
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
iNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
inBoundsi < array.size
: iNat
< LT.lt.{u} {α : Type u} [self : LT α] : α → α → Prop
The lessthan relation: `x < y`
arrayArray α
.size pFalse¬p array[i] = true
: ¬Not (a : Prop) : Prop
`Not p`, or `¬p`, is the negation of `p`. It is defined to be `p → False`,
so if your goal is `¬p` you can use `intro h` to turn the goal into
`h : p ⊢ False`, and if you have `hn : ¬p` and `h : p` then `hn h : False`
and `(hn h).elim` will prove anything.
For more information: [Propositional Logic](https://leanlang.org/theorem_proving_in_lean4/propositions_and_proofs.html#propositionallogic)
pα → Bool
arrayArray α
[iNat
] = Eq.{u_1} {α : Sort u_1} : α → α → Prop
The equality relation. It has one introduction rule, `Eq.refl`.
We use `a = b` as notation for `Eq a b`.
A fundamental property of equality is that it is an equivalence relation.
```
variable (α : Type) (a b c d : α)
variable (hab : a = b) (hcb : c = b) (hcd : c = d)
example : a = d :=
Eq.trans (Eq.trans hab (Eq.symm hcb)) hcd
```
Equality is much more than an equivalence relation, however. It has the important property that every assertion
respects the equivalence, in the sense that we can substitute equal expressions without changing the truth value.
That is, given `h1 : a = b` and `h2 : p a`, we can construct a proof for `p b` using substitution: `Eq.subst h1 h2`.
Example:
```
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
Eq.subst h1 h2
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
h1 ▸ h2
```
The triangle in the second presentation is a macro built on top of `Eq.subst` and `Eq.symm`, and you can enter it by typing `\t`.
For more information: [Equality](https://leanlang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
trueBool.true : Bool
The boolean value `true`, not to be confused with the proposition `True`.
ihacc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 → iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size → goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
accNat
(iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size hle1acc ≤ i
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
hle2i ≤ array.size
: iNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size
⊢ (if h : iNat
< LT.lt.{u} {α : Type u} [self : LT α] : α → α → Prop
The lessthan relation: `x < y`
arrayArray α
.size then if pα → Bool
arrayArray α
[iNat
] = Eq.{u_1} {α : Sort u_1} : α → α → Prop
The equality relation. It has one introduction rule, `Eq.refl`.
We use `a = b` as notation for `Eq a b`.
A fundamental property of equality is that it is an equivalence relation.
```
variable (α : Type) (a b c d : α)
variable (hab : a = b) (hcb : c = b) (hcd : c = d)
example : a = d :=
Eq.trans (Eq.trans hab (Eq.symm hcb)) hcd
```
Equality is much more than an equivalence relation, however. It has the important property that every assertion
respects the equivalence, in the sense that we can substitute equal expressions without changing the truth value.
That is, given `h1 : a = b` and `h2 : p a`, we can construct a proof for `p b` using substitution: `Eq.subst h1 h2`.
Example:
```
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
Eq.subst h1 h2
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
h1 ▸ h2
```
The triangle in the second presentation is a macro built on top of `Eq.subst` and `Eq.symm`, and you can enter it by typing `\t`.
For more information: [Equality](https://leanlang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
trueBool.true : Bool
The boolean value `true`, not to be confused with the proposition `True`.
then goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
(accNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) (iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) else goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
accNat
(iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1)
else accNat
) ≤
LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size
case2
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
accNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
iNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
inBoundsi < array.size
: iNat
< LT.lt.{u} {α : Type u} [self : LT α] : α → α → Prop
The lessthan relation: `x < y`
arrayArray α
.size pFalse¬p array[i] = true
: ¬Not (a : Prop) : Prop
`Not p`, or `¬p`, is the negation of `p`. It is defined to be `p → False`,
so if your goal is `¬p` you can use `intro h` to turn the goal into
`h : p ⊢ False`, and if you have `hn : ¬p` and `h : p` then `hn h : False`
and `(hn h).elim` will prove anything.
For more information: [Propositional Logic](https://leanlang.org/theorem_proving_in_lean4/propositions_and_proofs.html#propositionallogic)
pα → Bool
arrayArray α
[iNat
] = Eq.{u_1} {α : Sort u_1} : α → α → Prop
The equality relation. It has one introduction rule, `Eq.refl`.
We use `a = b` as notation for `Eq a b`.
A fundamental property of equality is that it is an equivalence relation.
```
variable (α : Type) (a b c d : α)
variable (hab : a = b) (hcb : c = b) (hcd : c = d)
example : a = d :=
Eq.trans (Eq.trans hab (Eq.symm hcb)) hcd
```
Equality is much more than an equivalence relation, however. It has the important property that every assertion
respects the equivalence, in the sense that we can substitute equal expressions without changing the truth value.
That is, given `h1 : a = b` and `h2 : p a`, we can construct a proof for `p b` using substitution: `Eq.subst h1 h2`.
Example:
```
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
Eq.subst h1 h2
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
h1 ▸ h2
```
The triangle in the second presentation is a macro built on top of `Eq.subst` and `Eq.symm`, and you can enter it by typing `\t`.
For more information: [Equality](https://leanlang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
trueBool.true : Bool
The boolean value `true`, not to be confused with the proposition `True`.
ihacc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 → iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size → goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
accNat
(iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size hle1acc ≤ i
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
hle2i ≤ array.size
: iNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size
⊢ goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
accNat
(iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size
case2.hle1
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
accNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
iNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
inBoundsi < array.size
: iNat
< LT.lt.{u} {α : Type u} [self : LT α] : α → α → Prop
The lessthan relation: `x < y`
arrayArray α
.size pFalse¬p array[i] = true
: ¬Not (a : Prop) : Prop
`Not p`, or `¬p`, is the negation of `p`. It is defined to be `p → False`,
so if your goal is `¬p` you can use `intro h` to turn the goal into
`h : p ⊢ False`, and if you have `hn : ¬p` and `h : p` then `hn h : False`
and `(hn h).elim` will prove anything.
For more information: [Propositional Logic](https://leanlang.org/theorem_proving_in_lean4/propositions_and_proofs.html#propositionallogic)
pα → Bool
arrayArray α
[iNat
] = Eq.{u_1} {α : Sort u_1} : α → α → Prop
The equality relation. It has one introduction rule, `Eq.refl`.
We use `a = b` as notation for `Eq a b`.
A fundamental property of equality is that it is an equivalence relation.
```
variable (α : Type) (a b c d : α)
variable (hab : a = b) (hcb : c = b) (hcd : c = d)
example : a = d :=
Eq.trans (Eq.trans hab (Eq.symm hcb)) hcd
```
Equality is much more than an equivalence relation, however. It has the important property that every assertion
respects the equivalence, in the sense that we can substitute equal expressions without changing the truth value.
That is, given `h1 : a = b` and `h2 : p a`, we can construct a proof for `p b` using substitution: `Eq.subst h1 h2`.
Example:
```
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
Eq.subst h1 h2
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
h1 ▸ h2
```
The triangle in the second presentation is a macro built on top of `Eq.subst` and `Eq.symm`, and you can enter it by typing `\t`.
For more information: [Equality](https://leanlang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
trueBool.true : Bool
The boolean value `true`, not to be confused with the proposition `True`.
ihacc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 → iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size → goArray.count.go.{u_1} {α : Type u_1} (p : α → Bool) (array : Array α) (acc i : Nat) : Nat
pα → Bool
arrayArray α
accNat
(iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1) ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size hle1acc ≤ i
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
hle2i ≤ array.size
: iNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size
⊢ accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
+ HAdd.hAdd.{u, v, w} {α : Type u} {β : Type v} {γ : outParam (Type w)} [self : HAdd α β γ] : α → β → γ
`a + b` computes the sum of `a` and `b`.
The meaning of this notation is typedependent.
1case2.hle2
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
accNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
iNat
: NatNat : Type
The type of natural numbers, starting at zero. It is defined as an
inductive type freely generated by "zero is a natural number" and
"the successor of a natural number is a natural number".
You can prove a theorem `P n` about `n : Nat` by `induction n`, which will
expect a proof of the theorem for `P 0`, and a proof of `P (succ i)` assuming
a proof of `P i`. The same method also works to define functions by recursion
on natural numbers: induction and recursion are two expressions of the same
operation from Lean's point of view.
```
open Nat
example (n : Nat) : n < succ n := by
induction n with
 zero =>
show 0 < 1
decide
 succ i ih =>  ih : i < succ i
show succ i < succ (succ i)
exact Nat.succ_lt_succ ih
```
This type is specialcased by both the kernel and the compiler:
* The type of expressions contains "`Nat` literals" as a primitive constructor,
and the kernel knows how to reduce zero/succ expressions to nat literals.
* If implemented naively, this type would represent a numeral `n` in unary as a
linked list with `n` links, which is horribly inefficient. Instead, the
runtime itself has a special representation for `Nat` which stores numbers up
to 2^63 directly and larger numbers use an arbitrary precision "bignum"
library (usually [GMP](https://gmplib.org/)).
inBoundsi < array.size
: iNat
< LT.lt.{u} {α : Type u} [self : LT α] : α → α → Prop
The lessthan relation: `x < y`
arrayArray α
.size pFalse¬p array[i] = true
: ¬Not (a : Prop) : Prop
`Not p`, or `¬p`, is the negation of `p`. It is defined to be `p → False`,
so if your goal is `¬p` you can use `intro h` to turn the goal into
`h : p ⊢ False`, and if you have `hn : ¬p` and `h : p` then `hn h : False`
and `(hn h).elim` will prove anything.
For more information: [Propositional Logic](https://leanlang.org/theorem_proving_in_lean4/propositions_and_proofs.html#propositionallogic)
pα → Bool
arrayArray α
[iNat
] = Eq.{u_1} {α : Sort u_1} : α → α → Prop
The equality relation. It has one introduction rule, `Eq.refl`.
We use `a = b` as notation for `Eq a b`.
A fundamental property of equality is that it is an equivalence relation.
```
variable (α : Type) (a b c d : α)
variable (hab : a = b) (hcb : c = b) (hcd : c = d)
example : a = d :=
Eq.trans (Eq.trans hab (Eq.symm hcb)) hcd
```
Equality is much more than an equivalence relation, however. It has the important property that every assertion
respects the equivalence, in the sense that we can substitute equal expressions without changing the truth value.
That is, given `h1 : a = b` and `h2 : p a`, we can construct a proof for `p b` using substitution: `Eq.subst h1 h2`.
Example:
```
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
Eq.subst h1 h2
example (α : Type) (a b : α) (p : α → Prop)
(h1 : a = b) (h2 : p a) : p b :=
h1 ▸ h2
```
The triangle in the second presentation is a macro built on top of `Eq.subst` and `Eq.symm`, and you can enter it by typing `\t`.
For more information: [Equality](https://leanlang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
trueBool.true : Bool
The boolean value `true`, not to be confused with the proposition `True`.
ihacc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
: accNat
≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
iNat
+ 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
hle1 : acc ≤ i
hle2 : i ≤ array.size
⊢ i + 1 ≤ array.size

case2.hle1
α : Type u_1
p : α → Bool
array : Array α
acc : Nat
i : Nat
inBounds : i < array.size
pFalse : ¬p array[i] = true
ih : acc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
hle1 : acc ≤ i
hle2 : i ≤ array.size
⊢ acc ≤ i + 1

case2.hle2
α : Type u_1
p : α → Bool
array : Array α
acc : Nat
i : Nat
inBounds : i < array.size
pFalse : ¬p array[i] = true
ih : acc ≤ i + 1 → i + 1 ≤ array.size → go p array acc (i + 1) ≤ array.size
hle1 : acc ≤ i
hle2 : i ≤ array.size
⊢ i + 1 ≤ array.size

All goals completed! 🐙
case3
α : Type u_1
p : α → Bool
array : Array α
acc✝ : Nat
i✝ : Nat
h✝ : ¬i✝ < array.size
hle1 : acc✝ ≤ i✝
hle2 : i✝ ≤ array.size
⊢ go p array acc✝ i✝ ≤ array.size

case3
α : Type u_1
p : α → Bool
array : Array α
acc✝ : Nat
i✝ : Nat
h✝ : ¬i✝ < array.size
hle1 : acc✝ ≤ i✝
hle2 : i✝ ≤ array.size
⊢ (if h : i✝ < array.size then
      if p array[i✝] = true then go p array (acc✝ + 1) (i✝ + 1)
      else go p array acc✝ (i✝ + 1)
    else acc✝) ≤ array.size

case3
α : Type u_1
p : α → Bool
array : Array α
acc✝ : Nat
i✝ : Nat
h✝ : ¬i✝ < array.size
hle1 : acc✝ ≤ i✝
hle2 : i✝ ≤ array.size
⊢ acc✝ ≤ array.size

All goals completed! 🐙
theorem Array.count_le_size {array : Array α} : array.count p ≤ array.size := ...

α : Type u_1
p : α → Bool
array : Array α
⊢ count p array ≤ array.size

α : Type u_1
p : α → Bool
array : Array α
⊢ count.go p array 0 0 ≤ array.size
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
⊢ 0 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
0a._@.Post480._hyg.1817
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
⊢ 0 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size a._@.Post480._hyg.1811
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
⊢ 0 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
0a._@.Post480._hyg.1817
αType u_1
: Type u_1 pα → Bool
: αType u_1
→ BoolBool : Type
`Bool` is the type of boolean values, `true` and `false`. Classically,
this is equivalent to `Prop` (the type of propositions), but the distinction
is important for programming, because values of type `Prop` are erased in the
code generator, while `Bool` corresponds to the type called `bool` or `boolean`
in most programming languages.
arrayArray α
: ArrayArray.{u} (α : Type u) : Type u
`Array α` is the type of [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array)
with elements from `α`. This type has special support in the runtime.
An array has a size and a capacity; the size is `Array.size` but the capacity
is not observable from Lean code. Arrays perform best when unshared; as long
as they are used "linearly" all updates will be performed destructively on the
array, so it has comparable performance to mutable arrays in imperative
programming languages.
From the point of view of proofs `Array α` is just a wrapper around `List α`.
αType u_1
⊢ 0 ≤ LE.le.{u} {α : Type u} [self : LE α] : α → α → Prop
The lessequal relation: `x ≤ y`
arrayArray α
.size All goals completed! 🐙
Functional induction is described in more detail in Joachim's recent blog post.
Language Server and IDE Improvements
Import Completion
Lean now supports completing import declarations when a prefix of a module name has already been typed out. For example, attempting to complete import Init.By
will now suggest Init.ByCases
, whereas previous Lean versions offered no completions there. This makes import completion feel more consistent.
Semantic Token Synchronization
In prior versions of Lean, VS Code would sometimes fail to update its semantic tokens, which led to misleading highlighting in the source file. Now, the Lean language server explicitly invalidates the cache more often, keeping highlights up to date.
“Restart File” Diagnostic
The language server now detects when the dependencies of the current file have been modified, informing the user that they may wish to refresh the language server's view of the dependencies with the “Restart File” command, initiating a rebuild. In the past, users had to remember to do this on their own, which could lead to wasted work writing proofs about previous versions of definitions.
Foundations of Incrementality
Lean 4.8.0 contains the foundations of incremental processing within commands. Prior versions of Lean save snapshots after each top-level command, allowing Lean to roll back to a state immediately prior to a user's edit to a file before responding to the edit. This incrementality is useful, but it has limitations: command-level granularity means that tactic proofs must be replayed from scratch, and many DSLs are most naturally embedded as a single Lean command.
In Lean 4.8.0, implementors of Lean elaborators can opt in to saving snapshots of the state of their custom elaboration process, and then resume elaboration with one of these snapshots. This can be used by embedded DSLs to achieve incremental reuse of prior elaboration work, making them more responsive in practice.
Support for incrementality in Lean 4.8 is intended for DSL authors to experiment with and provide feedback on the API primitives. We are in the process of adding fine-grained incrementality to Lean's own commands and tactics, and we expect to have initial support for fine-grained incremental processing in Lean 4.9, which will provide visible benefits to all Lean users.
Lake
TOML Configurations
Lean's build tool, lake
, provides the full power of Lean for configuring builds. Sometimes this is extremely useful, but many projects don't need it, and power has a cost: the need to use Lean to interpret build instructions made it more difficult to write tools in other languages that process Lean build configurations. Now, the most commonly used project configuration settings are available in a TOML format, so library authors can choose the level of power and control that's appropriate to their needs.
```toml
name = "aesop"
defaultTargets = ["Aesop"]
testRunner = "test"

[[require]]
name = "batteries"
git = "https://github.com/leanprover-community/batteries"
rev = "main"

[[lean_lib]]
name = "Aesop"

[[lean_lib]]
name = "AesopTest"
globs = ["AesopTest.+"]
leanOptions = {linter.unusedVariables = false}

[[lean_exe]]
name = "test"
srcDir = "scripts"
```
Lake configuration files can be translated from Lean to TOML automatically: running lake translate-config toml
produces a TOML version of the current configuration. This translation does not preserve comments or non-declarative configuration features, but it can save significant time nonetheless.
Today, not all features are available in TOML files. Thus, if both TOML and Lean configuration files exist, Lake uses the Lean version and issues a warning. We encourage you to use the TOML format if it supports everything you need, and we're working on reducing the need for Lean-based configurations by adding declarative support for more build features.
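For comparison, a Lean-based configuration corresponding to part of the TOML above might look like the following sketch (illustrative only, not Aesop's actual lakefile):

```lean
import Lake
open Lake DSL

package aesop

-- Dependency pinned to a git revision, as in the `[[require]]` table above.
require batteries from
  git "https://github.com/leanprover-community/batteries" @ "main"

@[default_target]
lean_lib Aesop

lean_exe test where
  srcDir := "scripts"
```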
lake test
Lake now supports designating scripts or executables to be test runners, using the @[test_runner]
attribute. The new lake test
command invokes the test runner. Additionally, the new command lake check-test
will exit with code 0
if the package has a test runner, or 1
otherwise.
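In a Lean-based lakefile, a test runner can be designated with the attribute directly. This is a sketch; the package and script names here are made up:

```lean
import Lake
open Lake DSL

package demo

-- `lake test` builds and runs the executable tagged below;
-- `lake check-test` exits with code 0 because a runner is designated.
@[test_runner]
lean_exe test where
  srcDir := "scripts"
```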
Progress Reporting
In previous versions of Lean, lake
emitted a line of plain text for each task it carried out. When running in a terminal with the right features, lake
now produces a progress tracker that updates in place, only emitting persistent lines of text when there is output such as errors or warnings. Additionally, Lake's tracking of errors in build jobs has been made more robust, and it can continue building in more situations even after encountering errors.
The new --wfail and --iofail options to lake build cause a build to fail if any of the jobs log a warning (--wfail) or produce output or log information messages (--iofail). Unlike some other build systems, these options do not convert those logs into errors, and Lake does not abort jobs on such a log: the build continues unimpeded, and an error is reported once it has completed.
Tactic Improvements
Simplifier
The simplifier tactic simp
has seen a number of improvements in Lean 4.8.0. Its documentation has been improved, its internal caching behavior is more robust, its handling of literals has been made more consistent, and it copes better with looping rewrites. Additionally, the new diagnostics infrastructure makes it much easier to find and fix looping simp
lemmas or lemmas that match too often but aren't used, among other performance issues.
omega
The omega
tactic is a solver for a wide range of linear arithmetic problems, first included by default in Lean 4.7. In Lean 4.8, omega
's error messages have been greatly improved and a new canonicalizer makes it work more predictably.
Error Messages
Internally, omega
supports reasoning about equalities and inequalities of expressions built up using certain operators. Any subexpressions that fall outside of the theory that omega
natively supports are considered atomically, as if they were free variables. In Lean 4.8, omega
's error messages list the atoms as well as constraints over these atoms that a counterexample may satisfy—because the atoms are uninterpreted, omega
may not be able to see that the constraints may in fact not be satisfiable.
Given `f` with type `f : Nat → Int`, the example:

```lean
example (x : Int) (y : Nat) : 2 + x + f y < 2 - y := by
  omega
```
yields the following message:
```
omega could not prove the goal:
a possible counterexample may satisfy the constraints
  a + b + c ≥ 0
  c ≥ 0
where
 a := x
 b := f y
 c := ↑y
```
The where
clause at the bottom lists the atoms that omega
identified. Seeing the atoms and the constraints can make it easier to diagnose why omega
failed to discharge a goal.
Canonicalizer
When omega
is deciding whether two subterms of a goal are the same atom, it uses syntactic equality, because checking full definitional equality proved to be too slow in practice. However, this could lead to minor differences in things like terms found by type class synthesis causing omega
to treat seemingly identical terms as different. In Lean 4.8, a fast canonicalization pass is applied prior to atom selection, leading to omega succeeding more often, without slowing down the tactic.
#guard_msgs improvements
The #guard_msgs
command, used for writing quick inline tests in a file, has gained a number of new features. Whitespace normalization and message sorting make it easier to write tests that are robust and don't fail spuriously, while diff output makes it easier to diagnose the reason why a test is failing.
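A minimal use looks like this: the docstring records the expected message, and #guard_msgs checks the command's actual output against it.

```lean
-- The docstring holds the expected output of the command below.
/-- info: 4 -/
#guard_msgs in
#eval 2 + 2
```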
Whitespace Normalization
Most developers configure their editors to remove trailing whitespace from lines of code automatically. But this can cause problems if the program being tested ends a line of output with a space, because there is no way to include that space in the expected output that does not risk getting deleted automatically. Additionally, long lines of output could interfere with code style warnings about maximum line lengths. Lean 4.8.0 contains new solutions to these problems: #guard_msgs
now has new whitespace normalization features as well as a special character (⏎
) to protect trailing whitespace.
Normally, #guard_msgs
normalizes whitespace by converting all newline characters to spaces when comparing to the expected message, to accommodate code style warnings. Now, (whitespace := exact)
turns off whitespace normalization completely, and (whitespace := lax)
collapses all runs of whitespace into a single space, so changes to formatting like indentation will not cause test failures.
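For instance, with lax matching, extra spaces in the expected string are harmless. A small illustrative test:

```lean
-- The runs of spaces below collapse to single spaces before comparison,
-- so this matches the actual output `[1, 2, 3]`.
/-- info: [1,   2,   3] -/
#guard_msgs (whitespace := lax) in
#eval [1, 2, 3]
```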
Sorting
Because commands may also produce multiple messages in a nondeterministic order, #guard_msgs
can also now sort the messages prior to comparison. This feature is off by default, but can be enabled by passing the (ordering := sorted)
option.
Diffs
Finally, diagnosing test failures that result from small changes in a relatively large output can be difficult. Starting in Lean 4.8, setting the option guard_msgs.diff
to true
causes the test failure to contain a diff of the expected and actual messages.
Tree.build constructs a binary tree. Diagnosing test failures that include large trees can be difficult, and it can be tempting to have Lean simply update the expected output with the actual output. For example:
```lean
set_option guard_msgs.diff false in
/--
info: Tree.branch
  (Tree.branch
    (Tree.branch
      (Tree.branch (Tree.branch (Tree.leaf 4) 1 (Tree.leaf 2)) 2 (Tree.leaf 2))
      3
      (Tree.leaf 3))
    4
    (Tree.leaf 3))
  5
  (Tree.leaf 3)
-/
#guard_msgs in
#eval Tree.build 5 4
```
fails with:
```
❌ Docstring on `#guard_msgs` does not match generated message:

info: Tree.branch
  (Tree.branch
    (Tree.branch
      (Tree.branch (Tree.branch (Tree.leaf 4) 1 (Tree.leaf 2)) 2 (Tree.leaf 2))
      3
      (Tree.leaf 2))
    4
    (Tree.leaf 2))
  5
  (Tree.leaf 2)
```
This can be manually compared to the expected string. With guard_msgs.diff
enabled, the test still fails:
```lean
set_option guard_msgs.diff true in
/--
info: Tree.branch
  (Tree.branch
    (Tree.branch
      (Tree.branch (Tree.branch (Tree.leaf 4) 1 (Tree.leaf 2)) 2 (Tree.leaf 2))
      3
      (Tree.leaf 3))
    4
    (Tree.leaf 3))
  5
  (Tree.leaf 3)
-/
#guard_msgs in
#eval Tree.build 5 4
```
However, the error message includes the lines that differ:
```
❌ Docstring on `#guard_msgs` does not match generated message:

info: Tree.branch
  (Tree.branch
    (Tree.branch
      (Tree.branch (Tree.branch (Tree.leaf 4) 1 (Tree.leaf 2)) 2 (Tree.leaf 2))
      3
-     (Tree.leaf 3))
+     (Tree.leaf 2))
    4
-   (Tree.leaf 3))
+   (Tree.leaf 2))
  5
- (Tree.leaf 3)
+ (Tree.leaf 2)
```
This option is presently off by default while we experiment with it and refine the user experience, but we do plan to enable it by default in a future release. Please test it out and let us know what works well and what needs improvement.
More Performance Diagnostic Tools
For advanced users who would like to understand how to improve the elaboration time of their Lean programs and proofs, Lean 4.8.0 provides two new tools: a central option that controls the display of a number of internal counters, and support for exporting profiler data to the Firefox Profiler.
Diagnostics Counters
If a program or proof is taking too long to elaborate, use `set_option diagnostics true` to enable the tracking and printing of a number of counters across various Lean subsystems. Values that exceed the threshold set in `diagnostics.threshold` are reported. For type class synthesis, this option currently tracks unfoldings per declaration and uses per instance; for `simp`, it tracks the number of times each `simp` theorem is used and the number of times it is tried. This information is especially helpful for finding type class instances or `simp` theorems that slow down instance synthesis or simplification. More metrics may be added in the future.
In this example, two `simp` lemmas induce a loop:
```lean
@[simp] theorem one_plus_eq_plus_one {n : Nat} : 1 + n = n + 1 := by omega
@[simp] theorem plus_one_eq_one_plus {n : Nat} : n + 1 = 1 + n := by omega

theorem looping {k : Nat} : k + 1 > 0 := by simp
```
This leads to `simp` looping until it reaches its maximum recursion depth:

```
tactic 'simp' failed, nested error:
maximum recursion depth has been reached
use `set_option maxRecDepth <num>` to increase limit
use `set_option diagnostics true` to get diagnostic information
```
Enabling diagnostics:

```lean
set_option diagnostics true

@[simp] theorem one_plus_eq_plus_one {n : Nat} : 1 + n = n + 1 := by omega
@[simp] theorem plus_one_eq_one_plus {n : Nat} : n + 1 = 1 + n := by omega

theorem looping {k : Nat} : k + 1 > 0 := by simp
```
This reports the following counters, making the looping `simp` theorems apparent:

```
[simp] used theorems (max: 250, num: 2):
  one_plus_eq_plus_one ↦ 250
  plus_one_eq_one_plus ↦ 250
[simp] tried theorems (max: 251, num: 2):
  plus_one_eq_one_plus ↦ 251, succeeded: 250
  one_plus_eq_plus_one ↦ 250, succeeded: 250
```
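Once the counters identify the culprits, one way out is to keep only one rewrite direction as a `simp` lemma. A minimal sketch (the primed name is hypothetical):

```lean
-- Only one direction is tagged @[simp], so rewriting terminates:
@[simp] theorem one_plus_eq_plus_one' {n : Nat} : 1 + n = n + 1 := by omega

example {k : Nat} : 1 + k = k + 1 := by simp
```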
Firefox Profiler
The second tool is an extension of the existing `trace.profiler` option, which annotates Lean's existing subsystem traces with timing information and automatically unfolds them according to `trace.profiler.threshold`. The trace profiler can be used to understand time usage by elaboration steps (subterms, tactics, ...) and by Lean subsystems. However, while the produced trace tree can be explored interactively in the Lean Infoview, a textual timeline representation is still bothersome to navigate and comprehend. The new `trace.profiler.output` suboption can be used on the command line to instead generate a profile that can be opened in the Firefox Profiler UI:
```
lake env lean -Dtrace.profiler=true -Dtrace.profiler.output=out.json YourFile.lean
```
On top of a more powerful tree view that now distinguishes between "running" (inclusive) and "self" (exclusive) time spent at each node, Firefox Profiler provides a linear timeline view that gives an immediate visual impression of bottlenecks in a file. Hovering over the timeline inspects a specific point in time, and clicking focuses the tree view on it. A "flame graph" view that recursively merges sibling nodes with identical labels is also provided.
Apart from better visualization, Firefox Profiler also supports some transformations of the trace tree that can be helpful for further analyses such as focusing on, merging, or collapsing certain trace classes (labelled "functions") or categories, or inverting the trace tree ("call stack") in order to inspect the most expensive leaf processes.
Finally, two further suboptions can be helpful for some performance investigations:

- `trace.profiler.useHeartbeats` can be used for debugging timeouts by changing the report from wall-clock time to Lean's "heartbeat" metric.
- `trace.profiler.output.pp` exports the full trace node labels for a more detailed profile instead of aggregating per trace class and syntax kind.
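Putting these together, a hypothetical invocation that profiles in heartbeats and exports full labels might look like the following (the flag spellings assume the boolean options accept `true`, as `trace.profiler` does above):

```
lake env lean -Dtrace.profiler=true -Dtrace.profiler.useHeartbeats=true \
  -Dtrace.profiler.output=out.json -Dtrace.profiler.output.pp=true YourFile.lean
```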
Other
Shorter Instance Names
When an instance of a type class is defined without an explicit name, Lean generates a name internally, based on the class and the types involved. The algorithm used in prior versions of Lean, however, led to some very long names. These names showed up from time to time in error messages, generated API documentation, and other places, where their length could cause user interface problems. A new algorithm has been adopted, shortening instance names drastically.
Across Batteries and Mathlib, the median ratio between the lengths of new names and old names is about 72%. With the old algorithm, the longest name was 1660 characters; the longest name is now 202 characters. The new algorithm's 95th-percentile name length is 67 characters, versus 278 for the old algorithm. While the new algorithm produces names that are 1.2% less unique, it avoids cross-project collisions by adding a module-based suffix when a name does not refer to declarations from the same "project" (modules that share the same root). This plot shows the name lengths in Lean 4.7 (x-axis) versus Lean 4.8 (y-axis):
Aside from shorter names in proof goals, error messages, and generated API references, most users shouldn't notice any changes. However, code that relies on the particular names chosen by the algorithm will need to be updated.
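For illustration, here is a sketch of an instance defined without a name; Lean derives one from the class and the type (under the new scheme something like `instAddPoint`, though the exact spelling is an implementation detail that code should not rely on):

```lean
structure Point where
  x : Nat
  y : Nat

-- No name is given, so Lean generates one from `Add` and `Point`.
instance : Add Point where
  add p q := ⟨p.x + q.x, p.y + q.y⟩
```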
Pretty-Printer Improvements
Printing with Anonymous Constructors
Tagging a structure type with the pp_using_anonymous_constructor attribute causes Lean to use the angle-bracket syntax for anonymous constructors when displaying instances of the structure. For example, this structure uses Lean's default pretty-printing strategy:
structure Bike where
  wheelCount : Nat := 2
  hasBox : Bool := false
def commuter : Bike where

def family : Bike where
  wheelCount := 3
  hasBox := true

#print commuter
#print family
It displays the two definitions with their field names:
def commuter : Bike := { wheelCount := 2, hasBox := false }
def family : Bike := { wheelCount := 3, hasBox := true }
Adding the attribute to Bike, as in:
@[pp_using_anonymous_constructor]
structure Bike where
  wheelCount : Nat := 2
  hasBox : Bool := false
results in the #print
commands instead displaying
def commuter : Bike := ⟨2, false⟩
and
def family : Bike := ⟨3, true⟩
Field Notation
In previous versions of Lean, structure projections could be printed using field notation. In Lean 4.8, this feature has been generalized so that any function in a type's namespace is eligible to be printed using field notation, which makes Lean's output more closely resemble idiomatic Lean code. Field notation is now controlled by the options pp.fieldNotation (which replaces pp.structureProjections) and pp.fieldNotation.generalized, which enables field notation for functions that are not structure field projections. Both default to true.
In the output of #check below, three.val is used rather than Bounded.val three:
structure Bounded (lo hi : Nat) where
  val : Nat
  lo_le_val : lo ≤ val := by omega
  val_lt_hi : val < hi := by omega

def three : Bounded 2 6 where
  val := 3

#check three.val
three.val : Nat
In Lean 4.8, field notation is also used by default for Bounded.add1
, even though it isn't a field projection:
def Bounded.add1 : Bounded lo hi → Bounded (lo + 1) (hi + 1)
  | {val, ..} => {val := val + 1}

#check three.add1
three.add1 : Bounded (2 + 1) (6 + 1)
This notation is not used for theorems, and it can be disabled on a case-by-case basis by applying the @[pp_nodot] attribute to the function that should not be displayed with field notation.
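As a sketch (the helper function is made up), @[pp_nodot] opts a single function out of generalized field notation:

```lean
-- With pp_nodot, this is displayed as `Bounded.describe three`
-- rather than `three.describe`.
@[pp_nodot]
def Bounded.describe (b : Bounded lo hi) : String :=
  toString b.val

#check Bounded.describe three
```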
Suppressing Metavariables
During elaboration, Lean represents parts of a term that have not yet been determined using metavariables. In Lean's output, these metavariables are denoted ?m.XYZ, where XYZ is a unique number. While these numbers make it possible to connect multiple occurrences of the same unknown value, they are also ephemeral: very small changes to the program or to Lean's internals can cause different numbers to be assigned, which can make it difficult to maintain tests whose expected output contains metavariables. Lean 4.8 includes the options pp.mvars and pp.mvars.withType, which can make messages stable in the face of metavariable renumbering. Setting pp.mvars to false causes Lean to display metavariables as ?_, and setting pp.mvars.withType to true causes Lean to insert a type annotation around each metavariable.
The command
#eval none
displays numbered metavariables for the unknown type and universe level of the optional value's type:
don't know how to synthesize implicit argument @none ?m.53180 context: ⊢ Type ?u.53179
Setting pp.mvars
to false
removes the internal numbers from the display:
set_option pp.mvars false in
#eval none
don't know how to synthesize implicit argument @none ?_ context: ⊢ Type _
Additionally setting pp.mvars.withType
to true
shows the type of each metavariable:
set_option pp.mvars false in
set_option pp.mvars.withType true in
#eval none
don't know how to synthesize implicit argument @none (?_ : Type _) context: ⊢ Type _
Induction and Case-Splitting Eliminators
In previous versions of Lean, the @[eliminator] attribute could be used to provide a custom induction principle that would be used by default in the induction and cases tactics. Lean 4.8.0 provides more fine-grained control, replacing this attribute with @[induction_eliminator] and @[cases_eliminator]. These attributes are used to provide eliminators for Nat that show 0 and n + 1 in proof states, rather than the Nat.zero and Nat.succ constructors. They can also be used to provide an "inductive view" of a type while using a more efficient representation behind the scenes.
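As a sketch of how such an eliminator might be registered (the theorem name is hypothetical, and Lean 4.8 already ships a default eliminator for Nat, so registering another may shadow it), an induction principle whose cases mention 0 and n + 1 looks like this:

```lean
-- Hypothetical eliminator: cases are stated with `0` and `n + 1`
-- instead of `Nat.zero` and `Nat.succ`.
theorem Nat.succInduction {motive : Nat → Prop} (zero : motive 0)
    (add_one : ∀ n, motive n → motive (n + 1)) : ∀ t, motive t
  | 0 => zero
  | n + 1 => add_one n (Nat.succInduction zero add_one n)

attribute [induction_eliminator] Nat.succInduction
```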
Breaking Changes
The following breaking changes may affect Lean users:

theorem statements must be propositions
Lean no longer accepts theorems whose types are not propositions. Please use def to define them.

Lake API updates
Lake's internals have been changed drastically to support the improved tracking of build jobs. Functions like logStep are no longer necessary, and the structure of the API has changed. Nevertheless, we expect that most user-defined facets and scripts will need only minor changes (if any) to work in the new version.

Lake's nativeFacets setting
The signature of the nativeFacets Lake configuration option has changed from a static Array to a function (shouldExport : Bool) → Array. See its docstring or Lake's README for further details on the changed option.

Windows build changes
Lean is now built as a shared library on Windows, so Lean executables that set supportInterpreter := true are dynamically linked to libleanshared.dll and libInit_shared.dll. Running such executables via lake exe will automatically ensure that these libraries are available, but if they are used in another context, the libraries must be co-located with the executable or available on $PATH.
Subarray fields renamed
The fields of the Subarray type have been renamed to bring them in line with Lean conventions: the field as has been renamed to array, the field formerly known as h₁ is now named start_le_stop, and h₂ has been renamed to stop_le_array_size.
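A sketch of the rename in use (the subarray value is arbitrary):

```lean
def sub : Subarray Nat := #[1, 2, 3].toSubarray 0 2

-- Old names:  sub.as     sub.h₁             sub.h₂
-- New names:  sub.array  sub.start_le_stop  sub.stop_le_array_size
#check sub.array
#check sub.start_le_stop
```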
Removed coercion from String to Name
Prior versions of Lean would automatically apply Name.mkSimple to coerce Strings into Names. However, experience showed that this coercion is not always what's desired. Metaprogram authors who relied on this coercion should insert explicit calls to Name.mkSimple.
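A sketch of the change required in metaprograms (the helper name is made up):

```lean
open Lean in
def mkFieldName (s : String) : Name :=
  -- Previously a String could be used where a Name was expected via an
  -- automatic coercion; now the conversion must be explicit. Note that
  -- Name.mkSimple does no parsing: mkSimple "a.b" is the single-component
  -- name «a.b», not the hierarchical name a.b.
  Name.mkSimple s
```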

Option.toMonad renamed to Option.getM
The function Option.toMonad, which lifts an optional value to any Alternative (sending none to failure), has been renamed to Option.getM. Additionally, the spurious [Monad m] instance argument was removed; Option.getM requires only [Alternative m].

Naming of equational theorems
Proving that a function terminates can involve significant transformations, sometimes leading to definitions in Lean's core theory that are quite different from their source code. That's why Lean generates equational theorems proving that the original defining equations of the function hold, allowing users to reason in terms of the original definition. In Lean 4.8, these equational theorems have been renamed from f._eq_N to f.eq_N (where f is the function in question and N ∈ {1, 2, ...}), and the theorem that relates a defined name to its unfolding has been renamed from f._unfold to f.eq_def.

Nested actions cannot be lifted over pure if
Nested actions are monadic actions written in a pure expression context within a do block, surrounded by (← ...). They are equivalent to introducing a fresh let x ← ..., where x is otherwise unused, just before the expression, and then referring to x. This could lead to confusing behavior when a non-monadic if expression had a nested action in only one branch: the action ended up being lifted over the if and run no matter which branch was selected. Lean no longer lifts nested actions out of pure if expressions.
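As a sketch of the change (the IO action is chosen arbitrarily), a nested action in a pure if was previously lifted out of its branch and run unconditionally; writing the branch monadically makes the intended behavior explicit:

```lean
def describeDir (verbose : Bool) : IO String := do
  -- Writing this as a pure `if` with a nested action, e.g.
  --   pure (if verbose then toString (← IO.Process.getCurrentDir) else "quiet"),
  -- used to lift the action over the `if`, running it even when `verbose`
  -- was false. With a monadic `if`, the action only runs in its own branch:
  if verbose then
    return toString (← IO.Process.getCurrentDir)
  else
    return "quiet"
```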