Inductive Types
We have seen that Lean's formal foundation includes basic types,
Prop, Type 0, Type 1, Type 2, ...
, and allows for the formation of
dependent function types, (x : α) → β
. In the examples, we have
also made use of additional types like Bool
, Nat
, and Int
,
and type constructors, like List
, and product, ×
. In fact, in
Lean's library, every concrete type other than the universes and every
type constructor other than dependent arrows is an instance of a general family of
type constructions known as inductive types. It is remarkable that
it is possible to construct a substantial edifice of mathematics based
on nothing more than the type universes, dependent arrow types, and inductive
types; everything else follows from those.
Intuitively, an inductive type is built up from a specified list of constructors. In Lean, the syntax for specifying such a type is as follows:
inductive Foo where
| constructor₁ : ... → Foo
| constructor₂ : ... → Foo
...
| constructorₙ : ... → Foo
The intuition is that each constructor specifies a way of building new
objects of Foo
, possibly from previously constructed values. The
type Foo
consists of nothing more than the objects that are
constructed in this way. The first character | in an inductive
declaration is optional.
We will see below that the arguments of the constructors can include
objects of type Foo
, subject to a certain "positivity" constraint,
which guarantees that elements of Foo
are built from the bottom
up. Roughly speaking, each ...
can be any arrow type constructed from
Foo
and previously defined types, in which Foo
appears, if at
all, only as the "target" of the dependent arrow type.
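For instance, the following sketch (InfTree is a name introduced here, anticipating the tree examples later in this chapter) is accepted, because the type being defined appears only as the target of the arrow type Nat → InfTree; a constructor argument of type InfTree → Nat, by contrast, would violate the constraint and be rejected.
inductive InfTree where
  | leaf : InfTree
  | node : (Nat → InfTree) → InfTree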
We will provide a number of examples of inductive types. We will also consider slight generalizations of the scheme above, to mutually defined inductive types, and so-called inductive families.
As with the logical connectives, every inductive type comes with introduction rules, which show how to construct an element of the type, and elimination rules, which show how to "use" an element of the type in another construction. The analogy to the logical connectives should not come as a surprise; as we will see below, they, too, are examples of inductive type constructions. You have already seen the introduction rules for an inductive type: they are just the constructors that are specified in the definition of the type. The elimination rules provide for a principle of recursion on the type, which includes, as a special case, a principle of induction as well.
In the next chapter, we will describe Lean's function definition package, which provides even more convenient ways to define functions on inductive types and carry out inductive proofs. But because the notion of an inductive type is so fundamental, we feel it is important to start with a low-level, hands-on understanding. We will start with some basic examples of inductive types, and work our way up to more elaborate and complex examples.
Enumerated Types
The simplest kind of inductive type is a type with a finite, enumerated list of elements.
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
The inductive
command creates a new type, Weekday
. The
constructors all live in the Weekday
namespace.
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
#check Weekday.sunday
#check Weekday.monday
open Weekday
#check sunday
#check monday
You can omit : Weekday
when declaring the Weekday
inductive type.
inductive Weekday where
| sunday
| monday
| tuesday
| wednesday
| thursday
| friday
| saturday
Think of sunday
, monday
, ... , saturday
as
being distinct elements of Weekday
, with no other distinguishing
properties. The elimination principle, Weekday.rec
, is defined
along with the type Weekday
and its constructors. It is also known
as a recursor, and it is what makes the type "inductive": it allows
us to define a function on Weekday
by assigning values
corresponding to each constructor. The intuition is that an inductive
type is exhaustively generated by the constructors, and has no
elements beyond those they construct.
We will use the match
expression to define a function from Weekday
to the natural numbers:
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
open Weekday
def numberOfDay (d : Weekday) : Nat :=
match d with
| sunday => 1
| monday => 2
| tuesday => 3
| wednesday => 4
| thursday => 5
| friday => 6
| saturday => 7
#eval numberOfDay Weekday.sunday -- 1
#eval numberOfDay Weekday.monday -- 2
#eval numberOfDay Weekday.tuesday -- 3
Note that the match
expression is compiled using the recursor Weekday.rec
generated when
you declare the inductive type.
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
open Weekday
def numberOfDay (d : Weekday) : Nat :=
match d with
| sunday => 1
| monday => 2
| tuesday => 3
| wednesday => 4
| thursday => 5
| friday => 6
| saturday => 7
set_option pp.all true
#print numberOfDay
-- ... numberOfDay.match_1
#print numberOfDay.match_1
-- ... Weekday.casesOn ...
#print Weekday.casesOn
-- ... Weekday.rec ...
#check @Weekday.rec
/-
@Weekday.rec.{u}
: {motive : Weekday → Sort u} →
motive Weekday.sunday →
motive Weekday.monday →
motive Weekday.tuesday →
motive Weekday.wednesday →
motive Weekday.thursday →
motive Weekday.friday →
motive Weekday.saturday →
(t : Weekday) → motive t
-/
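For illustration, here is a sketch that applies the case-analysis principle directly rather than going through match; numberOfDay' is a name introduced here, and Weekday.casesOn is the variant of Weekday.rec printed above.
inductive Weekday where
  | sunday : Weekday
  | monday : Weekday
  | tuesday : Weekday
  | wednesday : Weekday
  | thursday : Weekday
  | friday : Weekday
  | saturday : Weekday

-- Supply one value per constructor; the motive is the constant function returning Nat.
def numberOfDay' (d : Weekday) : Nat :=
  Weekday.casesOn (motive := fun _ => Nat) d 1 2 3 4 5 6 7

#eval numberOfDay' Weekday.monday -- 2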
When declaring an inductive datatype, you can use deriving Repr
to instruct
Lean to generate a function that converts Weekday
objects into text.
This function is used by the #eval
command to display Weekday
objects.
inductive Weekday where
| sunday
| monday
| tuesday
| wednesday
| thursday
| friday
| saturday
deriving Repr
open Weekday
#eval tuesday -- Weekday.tuesday
It is often useful to group definitions and theorems related to a
structure in a namespace with the same name. For example, we can put
the numberOfDay
function in the Weekday
namespace. We are
then allowed to use the shorter name when we open the namespace.
We can define functions from Weekday
to Weekday
:
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
deriving Repr
namespace Weekday
def next (d : Weekday) : Weekday :=
match d with
| sunday => monday
| monday => tuesday
| tuesday => wednesday
| wednesday => thursday
| thursday => friday
| friday => saturday
| saturday => sunday
def previous (d : Weekday) : Weekday :=
match d with
| sunday => saturday
| monday => sunday
| tuesday => monday
| wednesday => tuesday
| thursday => wednesday
| friday => thursday
| saturday => friday
#eval next (next tuesday) -- Weekday.thursday
#eval next (previous tuesday) -- Weekday.tuesday
example : next (previous tuesday) = tuesday :=
rfl
end Weekday
How can we prove the general theorem that next (previous d) = d
for any Weekday d
? You can use match
to provide a proof of the claim for each
constructor:
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
deriving Repr
namespace Weekday
def next (d : Weekday) : Weekday :=
match d with
| sunday => monday
| monday => tuesday
| tuesday => wednesday
| wednesday => thursday
| thursday => friday
| friday => saturday
| saturday => sunday
def previous (d : Weekday) : Weekday :=
match d with
| sunday => saturday
| monday => sunday
| tuesday => monday
| wednesday => tuesday
| thursday => wednesday
| friday => thursday
| saturday => friday
def next_previous (d : Weekday) : next (previous d) = d :=
match d with
| sunday => rfl
| monday => rfl
| tuesday => rfl
| wednesday => rfl
| thursday => rfl
| friday => rfl
| saturday => rfl
Using a tactic proof, we can be even more concise:
inductive Weekday where
| sunday : Weekday
| monday : Weekday
| tuesday : Weekday
| wednesday : Weekday
| thursday : Weekday
| friday : Weekday
| saturday : Weekday
deriving Repr
namespace Weekday
def next (d : Weekday) : Weekday :=
match d with
| sunday => monday
| monday => tuesday
| tuesday => wednesday
| wednesday => thursday
| thursday => friday
| friday => saturday
| saturday => sunday
def previous (d : Weekday) : Weekday :=
match d with
| sunday => saturday
| monday => sunday
| tuesday => monday
| wednesday => tuesday
| thursday => wednesday
| friday => thursday
| saturday => friday
def next_previous (d : Weekday) : next (previous d) = d := by
cases d <;> rfl
Tactics for Inductive Types below will introduce additional tactics that are specifically designed to make use of inductive types.
Notice that, under the propositions-as-types correspondence, we can
use match
to prove theorems as well as define functions. In other
words, under the propositions-as-types correspondence, the proof by
cases is a kind of definition by cases, where what is being "defined"
is a proof instead of a piece of data.
The Bool type in the Lean library is an example of an enumerated type.
namespace Hidden
inductive Bool where
| false : Bool
| true : Bool
end Hidden
(To run these examples, we put them in a namespace called Hidden
,
so that a name like Bool
does not conflict with the Bool
in
the standard library. This is necessary because these types are part
of the Lean "prelude" that is automatically imported when the system
is started.)
As an exercise, you should think about what the introduction and
elimination rules for these types do. As a further exercise, we
suggest defining boolean operations and
, or
, not
on the
Bool
type, and verifying common identities. Note that you can define a
binary operation like and
using match
:
namespace Hidden
def and (a b : Bool) : Bool :=
match a with
| true => b
| false => false
end Hidden
Similarly, most identities can be proved by introducing a suitable match, and then using rfl.
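For example, here is one such identity, proved with a match and rfl (and_true_right is a name introduced here for illustration):
namespace Hidden
def and (a b : Bool) : Bool :=
  match a with
  | true => b
  | false => false

-- A sample identity: split on the boolean argument; each case holds by rfl.
theorem and_true_right (a : Bool) : and a true = a :=
  match a with
  | true => rfl
  | false => rfl
end Hidden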
Constructors with Arguments
Enumerated types are a very special case of inductive types, in which the constructors take no arguments at all. In general, a "construction" can depend on data, which is then represented in the arguments of the constructor. Consider the definitions of the product type and sum type in the library:
namespace Hidden
inductive Prod (α : Type u) (β : Type v) where
| mk : α → β → Prod α β
inductive Sum (α : Type u) (β : Type v) where
| inl : α → Sum α β
| inr : β → Sum α β
end Hidden
Consider what is going on in these examples.
The product type has one constructor, Prod.mk
,
which takes two arguments. To define a function on Prod α β
, we
can assume the input is of the form Prod.mk a b
, and we have to
specify the output, in terms of a
and b
. We can use this to
define the two projections for Prod
. Remember that the standard
library defines notation α × β
for Prod α β
and (a, b)
for
Prod.mk a b
.
namespace Hidden
inductive Prod (α : Type u) (β : Type v) where
| mk : α → β → Prod α β
def fst {α : Type u} {β : Type v} (p : Prod α β) : α :=
match p with
| Prod.mk a b => a
def snd {α : Type u} {β : Type v} (p : Prod α β) : β :=
match p with
| Prod.mk a b => b
end Hidden
The function fst
takes a pair, p
. The match
interprets
p
as a pair, Prod.mk a b
. Recall also from Dependent Type Theory
that to give these definitions the greatest generality possible, we allow
the types α
and β
to belong to any universe.
Here is another example where we use the recursor Prod.casesOn
instead
of match
.
def prod_example (p : Bool × Nat) : Nat :=
Prod.casesOn (motive := fun _ => Nat) p (fun b n => cond b (2 * n) (2 * n + 1))
#eval prod_example (true, 3)
#eval prod_example (false, 3)
The argument motive
is used to specify the type of the object you want to
construct, and it is a function because it may depend on the pair.
The cond
function is a boolean conditional: cond b t1 t2
returns t1
if b
is true, and t2
otherwise.
The function prod_example
takes a pair consisting of a boolean,
b
, and a number, n
, and returns either 2 * n
or 2 * n + 1
according to whether b
is true or false.
In contrast, the sum type has two constructors, inl
and inr
(for "insert left" and "insert right"), each of which takes one
(explicit) argument. To define a function on Sum α β
, we have to
handle two cases: either the input is of the form inl a
, in which
case we have to specify an output value in terms of a
, or the
input is of the form inr b
, in which case we have to specify an
output value in terms of b
.
def sum_example (s : Sum Nat Nat) : Nat :=
Sum.casesOn (motive := fun _ => Nat) s
(fun n => 2 * n)
(fun n => 2 * n + 1)
#eval sum_example (Sum.inl 3)
#eval sum_example (Sum.inr 3)
This example is similar to the previous one, but now an input to
sum_example
is implicitly either of the form inl n
or inr n
.
In the first case, the function returns 2 * n
, and the second
case, it returns 2 * n + 1
.
Notice that the product type depends on parameters α β : Type, which are arguments to the constructors as well as to Prod itself. Lean detects when these arguments can be inferred from later arguments to a constructor or from the return type, and makes them implicit in that case.
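You can check this with the #check command; the parameters appear as implicit arguments to the constructor (the universe names in the output may be displayed differently):
#check @Prod.mk
-- @Prod.mk : {α : Type u_1} → {β : Type u_2} → α → β → α × β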
In Section Defining the Natural Numbers we will see what happens when the constructor of an inductive type takes arguments from the inductive type itself. What characterizes the examples we consider in this section is that each constructor relies only on previously specified types.
Notice that a type with multiple constructors is disjunctive: an
element of Sum α β
is either of the form inl a
or of the
form inl b
. A constructor with multiple arguments introduces
conjunctive information: from an element Prod.mk a b
of
Prod α β
we can extract a
and b
. An arbitrary inductive type can
include both features, by having any number of constructors, each of
which takes any number of arguments.
As with function definitions, Lean's inductive definition syntax will let you put named arguments to the constructors before the colon:
namespace Hidden
inductive Prod (α : Type u) (β : Type v) where
| mk (fst : α) (snd : β) : Prod α β
inductive Sum (α : Type u) (β : Type v) where
| inl (a : α) : Sum α β
| inr (b : β) : Sum α β
end Hidden
The results of these definitions are essentially the same as the ones given earlier in this section.
A type, like Prod
, that has only one constructor is purely
conjunctive: the constructor simply packs the list of arguments into a
single piece of data, essentially a tuple where the type of subsequent
arguments can depend on the type of the initial argument. We can also
think of such a type as a "record" or a "structure". In Lean, the
keyword structure
can be used to define such an inductive type as
well as its projections, at the same time.
namespace Hidden
structure Prod (α : Type u) (β : Type v) where
mk :: (fst : α) (snd : β)
end Hidden
This example simultaneously introduces the inductive type, Prod
,
its constructor, mk
, the usual eliminators (rec
and
recOn
), as well as the projections, fst
and snd
, as
defined above.
If you do not name the constructor, Lean uses mk
as a default. For
example, the following defines a record to store a color as a triple
of RGB values:
structure Color where
(red : Nat) (green : Nat) (blue : Nat)
deriving Repr
def yellow := Color.mk 255 255 0
#eval Color.red yellow
The definition of yellow
forms the record with the three values
shown, and the projection Color.red
returns the red component.
You can avoid the parentheses if you add a line break between each field.
structure Color where
red : Nat
green : Nat
blue : Nat
deriving Repr
The structure
command is especially useful for defining algebraic
structures, and Lean provides substantial infrastructure to support
working with them. Here, for example, is the definition of a
semigroup:
structure Semigroup where
carrier : Type u
mul : carrier → carrier → carrier
mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
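As a small sketch of how this structure might be used, the natural numbers with addition form a Semigroup in this sense; natAddSemigroup is a name introduced here for illustration.
-- Package Nat with addition and its associativity proof.
def natAddSemigroup : Semigroup where
  carrier := Nat
  mul := (· + ·)
  mul_assoc := Nat.add_assoc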
We will see more examples in Chapter Structures and Records.
We have already discussed the dependent product type Sigma
:
namespace Hidden
inductive Sigma {α : Type u} (β : α → Type v) where
| mk : (a : α) → β a → Sigma β
end Hidden
Two more examples of inductive types in the library are the following:
namespace Hidden
inductive Option (α : Type u) where
| none : Option α
| some : α → Option α
inductive Inhabited (α : Type u) where
| mk : α → Inhabited α
end Hidden
In the semantics of dependent type theory, there is no built-in notion
of a partial function. Every element of a function type α → β
or a
dependent function type (a : α) → β
is assumed to have a value
at every input. The Option
type provides a way of representing partial functions. An
element of Option β
is either none
or of the form some b
,
for some value b : β
. Thus we can think of an element f
of the
type α → Option β
as being a partial function from α
to β
:
for every a : α
, f a
either returns none
, indicating
f a
is "undefined", or some b
.
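For example, here is a sketch of a partial predecessor function on the natural numbers, undefined at zero (predPartial is a name introduced here for illustration):
def predPartial (n : Nat) : Option Nat :=
  match n with
  | 0 => none
  | n + 1 => some n

#eval predPartial 0 -- none
#eval predPartial 5 -- some 4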
An element of Inhabited α
is simply a witness to the fact that
there is an element of α
. Later, we will see that Inhabited
is
an example of a type class in Lean: Lean can be instructed that
suitable base types are inhabited, and can automatically infer that
other constructed types are inhabited on that basis.
As exercises, we encourage you to develop a notion of composition for
partial functions from α
to β
and β
to γ
, and show
that it behaves as expected. We also encourage you to show that
Bool
and Nat
are inhabited, that the product of two inhabited
types is inhabited, and that the type of functions to an inhabited
type is inhabited.
Inductively Defined Propositions
Inductively defined types can live in any type universe, including the
bottom-most one, Prop
. In fact, this is exactly how the logical
connectives are defined.
namespace Hidden
inductive False : Prop
inductive True : Prop where
| intro : True
inductive And (a b : Prop) : Prop where
| intro : a → b → And a b
inductive Or (a b : Prop) : Prop where
| inl : a → Or a b
| inr : b → Or a b
end Hidden
You should think about how these give rise to the introduction and
elimination rules that you have already seen. There are rules that
govern what the eliminator of an inductive type can eliminate to,
that is, what kinds of types can be the target of a recursor. Roughly
speaking, what characterizes inductive types in Prop
is that one
can only eliminate to other types in Prop
. This is consistent with
the understanding that if p : Prop
, an element hp : p
carries
no data. There is a small exception to this rule, however, which we
will discuss below, in Section Inductive Families.
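For example, using the standard And, the left projection can be recovered by pattern matching on the single constructor; the target here is itself a proposition, as the restriction requires.
example (a b : Prop) (h : And a b) : a :=
  match h with
  | And.intro ha _ => ha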
Even the existential quantifier is inductively defined:
namespace Hidden
inductive Exists {α : Sort u} (p : α → Prop) : Prop where
| intro (w : α) (h : p w) : Exists p
end Hidden
Keep in mind that the notation ∃ x : α, p
is syntactic sugar for Exists (fun x : α => p)
.
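For example, the constructor can be applied directly to a witness and a proof; here rfl checks 3 + 2 = 5 by computation.
example : ∃ x : Nat, x + 2 = 5 :=
  Exists.intro 3 rfl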
The definitions of False
, True
, And
, and Or
are
perfectly analogous to the definitions of Empty
, Unit
,
Prod
, and Sum
. The difference is that the first group yields
elements of Prop
, and the second yields elements of Type u
for
some u
. In a similar way, ∃ x : α, p
is a Prop
-valued
variant of Σ x : α, p
.
This is a good place to mention another inductive type, denoted {x : α // p x}, which is sort of a hybrid between ∃ x : α, p x and Σ x : α, p x.
namespace Hidden
inductive Subtype {α : Type u} (p : α → Prop) where
| mk : (x : α) → p x → Subtype p
end Hidden
In fact, in Lean, Subtype
is defined using the structure command:
namespace Hidden
structure Subtype {α : Sort u} (p : α → Prop) where
val : α
property : p val
end Hidden
The notation {x : α // p x}
is syntactic sugar for Subtype (fun x : α => p x)
.
It is modeled after subset notation in set theory: the idea is that {x : α // p x}
denotes the collection of elements of α
that have property p
.
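For example, an element of such a subtype is a value packaged together with a proof that it satisfies the predicate; three is a name introduced here for illustration.
def three : { n : Nat // n > 2 } :=
  ⟨3, by decide⟩

#eval three.val -- 3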
Defining the Natural Numbers
The inductively defined types we have seen so far are "flat":
constructors wrap data and insert it into a type, and the
corresponding recursor unpacks the data and acts on it. Things get
much more interesting when the constructors act on elements of the
very type being defined. A canonical example is the type Nat
of
natural numbers:
namespace Hidden
inductive Nat where
| zero : Nat
| succ : Nat → Nat
end Hidden
There are two constructors. We start with zero : Nat
; it takes
no arguments, so we have it from the start. In contrast, the
constructor succ
can only be applied to a previously constructed
Nat
. Applying it to zero
yields succ zero : Nat
. Applying
it again yields succ (succ zero) : Nat
, and so on. Intuitively,
Nat
is the "smallest" type with these constructors, meaning that
it is exhaustively (and freely) generated by starting with zero
and applying succ
repeatedly.
As before, the recursor for Nat
is designed to define a dependent
function f
from Nat
to any domain, that is, an element f
of (n : Nat) → motive n
for some motive : Nat → Sort u
.
It has to handle two cases: the case where the input is zero
, and the case where
the input is of the form succ n
for some n : Nat
. In the first
case, we simply specify a target value with the appropriate type, as
before. In the second case, however, the recursor can assume that a
value of f
at n
has already been computed. As a result, the
next argument to the recursor specifies a value for f (succ n)
in
terms of n
and f n
. If we check the type of the recursor,
namespace Hidden
inductive Nat where
| zero : Nat
| succ : Nat → Nat
#check @Nat.rec
end Hidden
you find the following:
{motive : Nat → Sort u}
→ motive Nat.zero
→ ((n : Nat) → motive n → motive (Nat.succ n))
→ (t : Nat) → motive t
The implicit argument, motive
, is the codomain of the function being defined.
In type theory it is common to say motive
is the motive for the elimination/recursion,
since it describes the kind of object we wish to construct.
The next two arguments specify how to compute the zero and successor cases, as described above.
They are also known as the minor premises.
Finally, the argument t : Nat is the input to the function. It is also known as the major premise.
The recursor Nat.recOn is similar to Nat.rec, but the major premise occurs before the minor premises:
@Nat.recOn :
{motive : Nat → Sort u}
→ (t : Nat)
→ motive Nat.zero
→ ((n : Nat) → motive n → motive (Nat.succ n))
→ motive t
Consider, for example, the addition function add m n
on the
natural numbers. Fixing m
, we can define addition by recursion on
n
. In the base case, we set add m zero
to m
. In the
successor step, assuming the value add m n
is already determined,
we define add m (succ n)
to be succ (add m n)
.
namespace Hidden
inductive Nat where
| zero : Nat
| succ : Nat → Nat
deriving Repr
def add (m n : Nat) : Nat :=
match n with
| Nat.zero => m
| Nat.succ n => Nat.succ (add m n)
open Nat
#eval add (succ (succ zero)) (succ zero)
end Hidden
It is useful to put such definitions into a namespace, Nat
. We can
then go on to define familiar notation in that namespace. The two
defining equations for addition now hold definitionally:
namespace Hidden
inductive Nat where
| zero : Nat
| succ : Nat → Nat
deriving Repr
namespace Nat
def add (m n : Nat) : Nat :=
match n with
| Nat.zero => m
| Nat.succ n => Nat.succ (add m n)
instance : Add Nat where
add := add
theorem add_zero (m : Nat) : m + zero = m := rfl
theorem add_succ (m n : Nat) : m + succ n = succ (m + n) := rfl
end Nat
end Hidden
We will explain how the instance
command works in
Chapter Type Classes. In the examples below, we will use
Lean's version of the natural numbers.
Proving a fact like zero + m = m
, however, requires a proof by induction.
As observed above, the induction principle is just a special case of the recursion principle,
when the codomain motive n
is an element of Prop
. It represents the familiar
pattern of an inductive proof: to prove ∀ n, motive n
, first prove motive 0
,
and then, for arbitrary n
, assume ih : motive n
and prove motive (succ n)
.
namespace Hidden
open Nat
theorem zero_add (n : Nat) : 0 + n = n :=
Nat.recOn (motive := fun x => 0 + x = x)
n
(show 0 + 0 = 0 from rfl)
(fun (n : Nat) (ih : 0 + n = n) =>
show 0 + succ n = succ n from
calc 0 + succ n
_ = succ (0 + n) := rfl
_ = succ n := by rw [ih])
end Hidden
Notice that, once again, when Nat.recOn
is used in the context of
a proof, it is really the induction principle in disguise. The
rewrite
and simp
tactics tend to be very effective in proofs
like these. In this case, each can be used to reduce the proof to:
namespace Hidden
open Nat
theorem zero_add (n : Nat) : 0 + n = n :=
Nat.recOn (motive := fun x => 0 + x = x) n
rfl
(fun n ih => by simp [add_succ, ih])
end Hidden
As another example, let us prove the associativity of addition,
∀ m n k, m + n + k = m + (n + k)
.
(The notation +
, as we have defined it, associates to the left, so m + n + k
is really (m + n) + k
.)
The hardest part is figuring out which variable to do the induction on. Since addition is defined by recursion on the second argument,
k
is a good guess, and once we make that choice the proof almost writes itself:
namespace Hidden
open Nat
theorem add_assoc (m n k : Nat) : m + n + k = m + (n + k) :=
Nat.recOn (motive := fun k => m + n + k = m + (n + k)) k
(show m + n + 0 = m + (n + 0) from rfl)
(fun k (ih : m + n + k = m + (n + k)) =>
show m + n + succ k = m + (n + succ k) from
calc m + n + succ k
_ = succ (m + n + k) := rfl
_ = succ (m + (n + k)) := by rw [ih]
_ = m + succ (n + k) := rfl
_ = m + (n + succ k) := rfl)
end Hidden
Once again, you can reduce the proof to:
open Nat
theorem add_assoc (m n k : Nat) : m + n + k = m + (n + k) :=
Nat.recOn (motive := fun k => m + n + k = m + (n + k)) k
rfl
(fun k ih => by simp [Nat.add_succ, ih])
Suppose we try to prove the commutativity of addition. Choosing induction on the second argument, we might begin as follows:
open Nat
theorem add_comm (m n : Nat) : m + n = n + m :=
Nat.recOn (motive := fun x => m + x = x + m) n
(show m + 0 = 0 + m by rw [Nat.zero_add, Nat.add_zero])
(fun (n : Nat) (ih : m + n = n + m) =>
show m + succ n = succ n + m from
calc m + succ n
_ = succ (m + n) := rfl
_ = succ (n + m) := by rw [ih]
_ = succ n + m := sorry)
At this point, we see that we need another supporting fact, namely, that succ (n + m) = succ n + m
.
You can prove this by induction on m
:
open Nat
theorem succ_add (n m : Nat) : succ n + m = succ (n + m) :=
Nat.recOn (motive := fun x => succ n + x = succ (n + x)) m
(show succ n + 0 = succ (n + 0) from rfl)
(fun (m : Nat) (ih : succ n + m = succ (n + m)) =>
show succ n + succ m = succ (n + succ m) from
calc succ n + succ m
_ = succ (succ n + m) := rfl
_ = succ (succ (n + m)) := by rw [ih]
_ = succ (n + succ m) := rfl)
You can then replace the sorry
in the previous proof with succ_add
. Yet again, the proofs can be compressed:
namespace Hidden
open Nat
theorem succ_add (n m : Nat) : succ n + m = succ (n + m) :=
Nat.recOn (motive := fun x => succ n + x = succ (n + x)) m
rfl
(fun m ih => by simp only [add_succ, ih])
theorem add_comm (m n : Nat) : m + n = n + m :=
Nat.recOn (motive := fun x => m + x = x + m) n
(by simp)
(fun m ih => by simp [add_succ, succ_add, ih])
end Hidden
Other Recursive Data Types
Let us consider some more examples of inductively defined types. For
any type, α
, the type List α
of lists of elements of α
is
defined in the library.
namespace Hidden
inductive List (α : Type u) where
| nil : List α
| cons : α → List α → List α
namespace List
def append (as bs : List α) : List α :=
match as with
| nil => bs
| cons a as => cons a (append as bs)
theorem nil_append (as : List α) : append nil as = as :=
rfl
theorem cons_append (a : α) (as bs : List α)
: append (cons a as) bs = cons a (append as bs) :=
rfl
end List
end Hidden
A list of elements of type α
is either the empty list, nil
, or
an element h : α
followed by a list t : List α
.
The first element, h
, is commonly known as the "head" of the list,
and the remainder, t
, is known as the "tail."
As an exercise, prove the following:
namespace Hidden
inductive List (α : Type u) where
| nil : List α
| cons : α → List α → List α
namespace List
def append (as bs : List α) : List α :=
match as with
| nil => bs
| cons a as => cons a (append as bs)
theorem nil_append (as : List α) : append nil as = as :=
rfl
theorem cons_append (a : α) (as bs : List α)
: append (cons a as) bs = cons a (append as bs) :=
rfl
theorem append_nil (as : List α) : append as nil = as :=
sorry
theorem append_assoc (as bs cs : List α)
: append (append as bs) cs = append as (append bs cs) :=
sorry
end List
end Hidden
Try also defining the function length : {α : Type u} → List α → Nat
that returns the length of a list,
and prove that it behaves as expected (for example, length (append as bs) = length as + length bs
).
For another example, we can define the type of binary trees:
inductive BinaryTree where
| leaf : BinaryTree
| node : BinaryTree → BinaryTree → BinaryTree
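For example, here is a sketch of a function that counts all the leaves and nodes of such a tree by structural recursion, using the BinaryTree type just defined (size is a name introduced here for illustration):
def size (t : BinaryTree) : Nat :=
  match t with
  | BinaryTree.leaf => 1
  | BinaryTree.node l r => size l + size r + 1

#eval size (BinaryTree.node BinaryTree.leaf BinaryTree.leaf) -- 3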
In fact, we can even define the type of countably branching trees:
inductive CBTree where
| leaf : CBTree
| sup : (Nat → CBTree) → CBTree
namespace CBTree
def succ (t : CBTree) : CBTree :=
sup (fun _ => t)
def toCBTree : Nat → CBTree
| 0 => leaf
| n+1 => succ (toCBTree n)
def omega : CBTree :=
sup toCBTree
end CBTree
Tactics for Inductive Types
Given the fundamental importance of inductive types in Lean, it should not be surprising that there are a number of tactics designed to work with them effectively. We describe some of them here.
The cases
tactic works on elements of an inductively defined type,
and does what the name suggests: it decomposes the element according
to each of the possible constructors. In its most basic form, it is
applied to an element x
in the local context. It then reduces the
goal to cases in which x
is replaced by each of the constructions.
example (p : Nat → Prop) (hz : p 0) (hs : ∀ n, p (Nat.succ n)) : ∀ n, p n := by
intro n
cases n
. exact hz -- goal is p 0
. apply hs -- goal is a : Nat ⊢ p (succ a)
There are extra bells and whistles. For one thing, cases
allows
you to choose the names for each alternative using a
with
clause. In the next example, we choose the name
m
for the argument to succ
, so that the second case refers to
succ m
. More importantly, the cases tactic will detect any items
in the local context that depend on the target variable. It reverts
these elements, does the split, and reintroduces them. In the example
below, notice that the hypothesis h : n ≠ 0
becomes h : 0 ≠ 0
in the first branch, and h : succ m ≠ 0
in the second.
open Nat
example (n : Nat) (h : n ≠ 0) : succ (pred n) = n := by
cases n with
| zero =>
-- goal: h : 0 ≠ 0 ⊢ succ (pred 0) = 0
apply absurd rfl h
| succ m =>
-- second goal: h : succ m ≠ 0 ⊢ succ (pred (succ m)) = succ m
rfl
Notice that cases
can be used to produce data as well as prove propositions.
def f (n : Nat) : Nat := by
cases n; exact 3; exact 7
example : f 0 = 3 := rfl
example : f 5 = 7 := rfl
Once again, cases will revert, split, and then reintroduce dependencies in the context.
def Tuple (α : Type) (n : Nat) :=
{ as : List α // as.length = n }
def f {n : Nat} (t : Tuple α n) : Nat := by
cases n; exact 3; exact 7
def myTuple : Tuple Nat 3 :=
⟨[0, 1, 2], rfl⟩
example : f myTuple = 7 :=
rfl
Here is an example of multiple constructors with arguments.
inductive Foo where
| bar1 : Nat → Nat → Foo
| bar2 : Nat → Nat → Nat → Foo
def silly (x : Foo) : Nat := by
cases x with
| bar1 a b => exact b
| bar2 c d e => exact e
The alternatives for each constructor don't need to be solved in the order the constructors were declared.
inductive Foo where
| bar1 : Nat → Nat → Foo
| bar2 : Nat → Nat → Nat → Foo
def silly (x : Foo) : Nat := by
cases x with
| bar2 c d e => exact e
| bar1 a b => exact b
The syntax of the with
is convenient for writing structured proofs.
Lean also provides a complementary case tactic, which allows you to focus on a goal and assign variable names.
inductive Foo where
| bar1 : Nat → Nat → Foo
| bar2 : Nat → Nat → Nat → Foo
def silly (x : Foo) : Nat := by
cases x
case bar1 a b => exact b
case bar2 c d e => exact e
The case
tactic is clever, in that it will match the constructor to the appropriate goal. For example, we can fill the goals above in the opposite order:
inductive Foo where
| bar1 : Nat → Nat → Foo
| bar2 : Nat → Nat → Nat → Foo
def silly (x : Foo) : Nat := by
cases x
case bar2 c d e => exact e
case bar1 a b => exact b
You can also use cases
with an arbitrary expression. Assuming that
expression occurs in the goal, the cases tactic will generalize over
the expression, introduce the resulting universally quantified
variable, and case on that.
open Nat
example (p : Nat → Prop) (hz : p 0) (hs : ∀ n, p (succ n)) (m k : Nat)
: p (m + 3 * k) := by
cases m + 3 * k
exact hz -- goal is p 0
apply hs -- goal is a : Nat ⊢ p (succ a)
Think of this as saying "split on cases as to whether m + 3 * k
is
zero or the successor of some number." The result is functionally
equivalent to the following:
open Nat
example (p : Nat → Prop) (hz : p 0) (hs : ∀ n, p (succ n)) (m k : Nat)
: p (m + 3 * k) := by
generalize m + 3 * k = n
cases n
exact hz -- goal is p 0
apply hs -- goal is a : Nat ⊢ p (succ a)
Notice that the expression m + 3 * k
is erased by generalize
; all
that matters is whether it is of the form 0
or succ a
. This
form of cases
will not revert any hypotheses that also mention
the expression in the equation (in this case, m + 3 * k
). If such a
term appears in a hypothesis and you want to generalize over that as
well, you need to revert
it explicitly.
If the expression you case on does not appear in the goal, the
cases
tactic uses have
to put the type of the expression into
the context. Here is an example:
example (p : Prop) (m n : Nat)
(h₁ : m < n → p) (h₂ : m ≥ n → p) : p := by
cases Nat.lt_or_ge m n
case inl hlt => exact h₁ hlt
case inr hge => exact h₂ hge
The theorem Nat.lt_or_ge m n
says m < n ∨ m ≥ n
, and it is
natural to think of the proof above as splitting on these two
cases. In the first branch, we have the hypothesis hlt : m < n
, and
in the second we have the hypothesis hge : m ≥ n
. The proof above
is functionally equivalent to the following:
example (p : Prop) (m n : Nat)
(h₁ : m < n → p) (h₂ : m ≥ n → p) : p := by
have h : m < n ∨ m ≥ n := Nat.lt_or_ge m n
cases h
case inl hlt => exact h₁ hlt
case inr hge => exact h₂ hge
After the first two lines, we have h : m < n ∨ m ≥ n
as a
hypothesis, and we simply do cases on that.
Here is another example, where we use the decidability of equality on
the natural numbers to split on the cases m = n
and m ≠ n
.
#check Nat.sub_self
example (m n : Nat) : m - n = 0 ∨ m ≠ n := by
cases Decidable.em (m = n) with
| inl heq => rw [heq]; apply Or.inl; exact Nat.sub_self n
| inr hne => apply Or.inr; exact hne
Remember that if you open Classical
, you can use the law of the
excluded middle for any proposition at all. But using type class
inference (see Chapter Type Classes), Lean can actually
find the relevant decision procedure, which means that you can use the
case split in a computable function.
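For instance, the following sketch branches on a decidable equality test inside an ordinary computable function (describeEq is a name introduced here for illustration); type class inference supplies the decision procedure behind the if.
def describeEq (m n : Nat) : String :=
  if m = n then "equal" else "not equal"

#eval describeEq 2 2 -- "equal"
#eval describeEq 2 3 -- "not equal"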
Just as the cases
tactic can be used to carry out proof by cases,
the induction
tactic can be used to carry out proofs by
induction. The syntax is similar to that of cases
, except that the
argument can only be a term in the local context. Here is an example:
namespace Hidden
theorem zero_add (n : Nat) : 0 + n = n := by
induction n with
| zero => rfl
| succ n ih => rw [Nat.add_succ, ih]
end Hidden
As with cases
, we can use the case
tactic instead of with
.
namespace Hidden
theorem zero_add (n : Nat) : 0 + n = n := by
induction n
case zero => rfl
case succ n ih => rw [Nat.add_succ, ih]
end Hidden
Here are some additional examples:
namespace Hidden
theorem add_zero (n : Nat) : n + 0 = n := Nat.add_zero n
open Nat
theorem zero_add (n : Nat) : 0 + n = n := by
induction n <;> simp [*, add_zero, add_succ]
theorem succ_add (m n : Nat) : succ m + n = succ (m + n) := by
induction n <;> simp [*, add_zero, add_succ]
theorem add_comm (m n : Nat) : m + n = n + m := by
induction n <;> simp [*, add_zero, add_succ, succ_add, zero_add]
theorem add_assoc (m n k : Nat) : m + n + k = m + (n + k) := by
induction k <;> simp [*, add_zero, add_succ]
end Hidden
The induction
tactic also supports user-defined induction principles with
multiple targets (aka major premises).
/-
theorem Nat.mod.inductionOn
{motive : Nat → Nat → Sort u}
(x y : Nat)
(ind : ∀ x y, 0 < y ∧ y ≤ x → motive (x - y) y → motive x y)
(base : ∀ x y, ¬(0 < y ∧ y ≤ x) → motive x y)
: motive x y :=
-/
example (x : Nat) {y : Nat} (h : y > 0) : x % y < y := by
induction x, y using Nat.mod.inductionOn with
| ind x y h₁ ih =>
rw [Nat.mod_eq_sub_mod h₁.2]
exact ih h
| base x y h₁ =>
have : ¬ 0 < y ∨ ¬ y ≤ x := Iff.mp (Decidable.not_and_iff_or_not ..) h₁
match this with
| Or.inl h₁ => exact absurd h h₁
| Or.inr h₁ =>
have hgt : y > x := Nat.gt_of_not_le h₁
rw [← Nat.mod_eq_of_lt hgt] at hgt
assumption
You can use the match
notation in tactics too:
example : p ∨ q → q ∨ p := by
intro h
match h with
| Or.inl _ => apply Or.inr; assumption
| Or.inr h2 => apply Or.inl; exact h2
As a convenience, pattern-matching has been integrated into tactics such as intro
and funext
.
example : s ∧ q ∧ r → p ∧ r → q ∧ p := by
intro ⟨_, ⟨hq, _⟩⟩ ⟨hp, _⟩
exact ⟨hq, hp⟩
example :
(fun (x : Nat × Nat) (y : Nat × Nat) => x.1 + y.2)
=
(fun (x : Nat × Nat) (z : Nat × Nat) => z.2 + x.1) := by
funext (a, b) (c, d)
show a + d = d + a
rw [Nat.add_comm]
We close this section with one last tactic that is designed to
facilitate working with inductive types, namely, the injection
tactic. By design, the elements of an inductive type are freely
generated, which is to say, the constructors are injective and have
disjoint ranges. The injection
tactic is designed to make use of
this fact:
open Nat
example (m n k : Nat) (h : succ (succ m) = succ (succ n))
: n + k = m + k := by
injection h with h'
injection h' with h''
rw [h'']
The first instance of the tactic adds h' : succ m = succ n
to the
context, and the second adds h'' : m = n
.
The injection
tactic also detects contradictions that arise when different constructors
are set equal to one another, and uses them to close the goal.
open Nat
example (m n : Nat) (h : succ m = 0) : n = n + 7 := by
injection h
example (m n : Nat) (h : succ m = 0) : n = n + 7 := by
contradiction
example (h : 7 = 4) : False := by
contradiction
As the second example shows, the contradiction
tactic also detects contradictions of this form.
Inductive Families
We are almost done describing the full range of inductive definitions accepted by Lean. So far, you have seen that Lean allows you to introduce inductive types with any number of recursive constructors. In fact, a single inductive definition can introduce an indexed family of inductive types, in a manner we now describe.
An inductive family is an indexed family of types defined by a simultaneous induction of the following form:
inductive foo : ... → Sort u where
| constructor₁ : ... → foo ...
| constructor₂ : ... → foo ...
...
| constructorₙ : ... → foo ...
In contrast to an ordinary inductive definition, which constructs an
element of some Sort u
, the more general version constructs a
function ... → Sort u
, where "...
" denotes a sequence of
argument types, also known as indices. Each constructor then
constructs an element of some member of the family. One example is the
definition of Vector α n
, the type of vectors of elements of α
of length n
:
namespace Hidden
inductive Vector (α : Type u) : Nat → Type u where
| nil : Vector α 0
| cons : α → {n : Nat} → Vector α n → Vector α (n+1)
end Hidden
Notice that the cons
constructor takes an element of
Vector α n
and returns an element of Vector α (n+1)
, thereby using an
element of one member of the family to build an element of another.
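For example, here is a small sketch (with the type renamed to Vec, to avoid clashing with any Vector in the library): the length of a vector is tracked by the index in its type.
inductive Vec (α : Type u) : Nat → Type u where
  | nil : Vec α 0
  | cons : α → {n : Nat} → Vec α n → Vec α (n + 1)

#check Vec.cons 1 (Vec.cons 2 Vec.nil) -- Vec Nat 2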
A more exotic example is given by the definition of the equality type in Lean:
namespace Hidden
inductive Eq {α : Sort u} (a : α) : α → Prop where
| refl : Eq a a
end Hidden
For each fixed α : Sort u
and a : α
, this definition
constructs a family of types Eq a x
, indexed by x : α
.
Notably, however, there is only one constructor, refl
, which
is an element of Eq a a
.
Intuitively, the only way to construct a proof of Eq a x
is to use reflexivity, in the case where x
is a
.
Note that Eq a a
is the only inhabited type in the family of types
Eq a x
. The elimination principle generated by Lean is as follows:
universe u v
#check (@Eq.rec : {α : Sort u} → {a : α} → {motive : (x : α) → a = x → Sort v}
→ motive a rfl → {b : α} → (h : a = b) → motive b h)
It is a remarkable fact that all the basic axioms for equality follow
from the constructor, refl
, and the eliminator, Eq.rec
. The
definition of equality is atypical, however; see the discussion in Section Axiomatic Details.
The recursor Eq.rec
is also used to define substitution:
namespace Hidden
theorem subst {α : Type u} {a b : α} {p : α → Prop} (h₁ : Eq a b) (h₂ : p a) : p b :=
Eq.rec (motive := fun x _ => p x) h₂ h₁
end Hidden
You can also define subst
using match
.
namespace Hidden
theorem subst {α : Type u} {a b : α} {p : α → Prop} (h₁ : Eq a b) (h₂ : p a) : p b :=
match h₁ with
| rfl => h₂
end Hidden
Actually, Lean compiles the match
expressions using a definition based on
Eq.rec
.
namespace Hidden
theorem subst {α : Type u} {a b : α} {p : α → Prop} (h₁ : Eq a b) (h₂ : p a) : p b :=
match h₁ with
| rfl => h₂
set_option pp.all true
#print subst
-- ... subst.match_1 ...
#print subst.match_1
-- ... Eq.casesOn ...
#print Eq.casesOn
-- ... Eq.rec ...
end Hidden
Using the recursor or match
with h₁ : a = b
, we may assume a
and b
are the same,
in which case, p b
and p a
are the same.
It is not hard to prove that Eq
is symmetric and transitive.
In the following example, we prove symm
and leave as exercises the theorems trans
and congr
(congruence).
namespace Hidden
theorem symm {α : Type u} {a b : α} (h : Eq a b) : Eq b a :=
match h with
| rfl => rfl
theorem trans {α : Type u} {a b c : α} (h₁ : Eq a b) (h₂ : Eq b c) : Eq a c :=
sorry
theorem congr {α β : Type u} {a b : α} (f : α → β) (h : Eq a b) : Eq (f a) (f b) :=
sorry
end Hidden
In the type theory literature, there are further generalizations of inductive definitions, for example, the principles of induction-recursion and induction-induction. These are not supported by Lean.
Axiomatic Details
We have described inductive types and their syntax through examples. This section provides additional information for those interested in the axiomatic foundations.
We have seen that the constructor to an inductive type takes parameters --- intuitively, the arguments that remain fixed throughout the inductive construction --- and indices, the arguments parameterizing the family of types that is simultaneously under construction. Each constructor should have a type, where the argument types are built up from previously defined types, the parameter and index types, and the inductive family currently being defined. The requirement is that if the latter is present at all, it occurs only strictly positively. This means simply that any argument to the constructor in which it occurs is a dependent arrow type in which the inductive type under definition occurs only as the resulting type, where the indices are given in terms of constants and previous arguments.
Since an inductive type lives in Sort u
for some u
, it is
reasonable to ask which universe levels u
can be instantiated
to. Each constructor c
in the definition of a family C
of
inductive types is of the form
c : (a : α) → (b : β[a]) → C a p[a,b]
where a
is a sequence of data type parameters, b
is the
sequence of arguments to the constructors, and p[a, b]
are the
indices, which determine which element of the inductive family the
construction inhabits. (Note that this description is somewhat
misleading, in that the arguments to the constructor can appear in any
order as long as the dependencies make sense.) The constraints on the
universe level of C
fall into two cases, depending on whether or
not the inductive type is specified to land in Prop
(that is,
Sort 0
).
Let us first consider the case where the inductive type is not
specified to land in Prop
. Then the universe level u
is
constrained to satisfy the following:
For each constructor c as above, and each βk[a] in the sequence β[a], if βk[a] : Sort v, we have u ≥ v.
In other words, the universe level u
is required to be at least as
large as the universe level of each type that represents an argument
to a constructor.
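As a sketch of the constraint (TypeBox is a hypothetical name): a constructor argument whose type is Type u lives in Sort (u + 2), so the inductive type must be declared in Type (u + 1) or above; declaring it in Type u instead would be rejected.
universe u

inductive TypeBox : Type (u + 1) where
  | mk : Type u → TypeBox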
When the inductive type is specified to land in Prop
, there are no
constraints on the universe levels of the constructor arguments. But
these universe levels do have a bearing on the elimination
rule. Generally speaking, for an inductive type in Prop
, the
motive of the elimination rule is required to be in Prop
.
There is an exception to this last rule: we are allowed to eliminate
from an inductively defined Prop
to an arbitrary Sort
when
there is only one constructor and each constructor argument is either
in Prop
or an index. The intuition is that in this case the
elimination does not make use of any information that is not already
given by the mere fact that the type of argument is inhabited. This
special case is known as singleton elimination.
We have already seen singleton elimination at play in applications of
Eq.rec
, the eliminator for the inductively defined equality
type. We can use an element h : Eq a b
to cast an element
t' : p a
to p b
even when p a
and p b
are arbitrary types,
because the cast does not produce new data; it only reinterprets the
data we already have. Singleton elimination is also used with
heterogeneous equality and well-founded recursion, which will be
discussed in Chapter Induction and Recursion.
Mutual and Nested Inductive Types
We now consider two generalizations of inductive types that are often useful, which Lean supports by "compiling" them down to the more primitive kinds of inductive types described above. In other words, Lean parses the more general definitions, defines auxiliary inductive types based on them, and then uses the auxiliary types to define the ones we really want. Lean's equation compiler, described in the next chapter, is needed to make use of these types effectively. Nonetheless, it makes sense to describe the declarations here, because they are straightforward variations on ordinary inductive definitions.
First, Lean supports mutually defined inductive types. The idea is that we can define two (or more) inductive types at the same time, where each one refers to the other(s).
mutual
inductive Even : Nat → Prop where
| even_zero : Even 0
| even_succ : (n : Nat) → Odd n → Even (n + 1)
inductive Odd : Nat → Prop where
| odd_succ : (n : Nat) → Even n → Odd (n + 1)
end
In this example, two types are defined simultaneously: a natural
number n
is Even
if it is 0
or one more than an Odd
number, and Odd
if it is one more than an Even
number.
In the exercises below, you are asked to spell out the details.
A mutual inductive definition can also be used to define the notion of a finite tree with nodes labelled by elements of α:
mutual
inductive Tree (α : Type u) where
| node : α → TreeList α → Tree α
inductive TreeList (α : Type u) where
| nil : TreeList α
| cons : Tree α → TreeList α → TreeList α
end
With this definition, one can construct an element of Tree α
by
giving an element of α
together with a list of subtrees, possibly
empty. The list of subtrees is represented by the type TreeList α
,
which is defined to be either the empty list, nil
, or the
cons
of a tree and an element of TreeList α
.
This definition is inconvenient to work with, however. It would be
much nicer if the list of subtrees were given by the type
List (Tree α)
, especially since Lean's library contains a number of functions
and theorems for working with lists. One can show that the type
TreeList α
is isomorphic to List (Tree α)
, but translating
results back and forth along this isomorphism is tedious.
In fact, Lean allows us to define the inductive type we really want:
inductive Tree (α : Type u) where
| mk : α → List (Tree α) → Tree α
This is known as a nested inductive type. It falls outside the
strict specification of an inductive type given in the last section
because Tree
does not occur strictly positively among the
arguments to mk
, but, rather, nested inside the List
type
constructor. Lean then automatically builds the
isomorphism between TreeList α
and List (Tree α)
in its kernel,
and defines the constructors for Tree
in terms of the isomorphism.
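For example, using the nested Tree just defined, a node carries a label together with an ordinary list of subtrees (t is a name introduced here for illustration):
def t : Tree Nat :=
  Tree.mk 1 [Tree.mk 2 [], Tree.mk 3 []]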
Exercises
- Try defining other operations on the natural numbers, such as multiplication, the predecessor function (with pred 0 = 0), truncated subtraction (with n - m = 0 when m is greater than or equal to n), and exponentiation. Then try proving some of their basic properties, building on the theorems we have already proved.
Since many of these are already defined in Lean's core library, you should work within a namespace named Hidden, or something like that, in order to avoid name clashes.
- Define some operations on lists, like a length function or the reverse function. Prove some properties, such as the following:
a. length (s ++ t) = length s + length t
b. length (reverse t) = length t
c. reverse (reverse t) = t
- Define an inductive data type consisting of terms built up from the following constructors:
const n, a constant denoting the natural number n
var n, a variable, numbered n
plus s t, denoting the sum of s and t
times s t, denoting the product of s and t
Recursively define a function that evaluates any such term with respect to an assignment of values to the variables.
- Similarly, define the type of propositional formulas, as well as functions on the type of such formulas: an evaluation function, functions that measure the complexity of a formula, and a function that substitutes another formula for a given variable.