Lean 4.7.0

Lean 4.7.0 has been released! While this release includes the usual collection of quality-of-life improvements and bug fixes, the most noticeable change is that a large part of Std, the external standard library, is now included with Lean itself. Not only does this help with discoverability, it also enabled us to integrate these features more tightly into the rest of the language. In the next few releases, we'll continue to upstream data structures, tactics, proofs, and language features in order to make the out-of-the-box experience of using Lean as pleasant as possible.

As usual, this announcement post describes selected highlights of the release. For a more complete list of changes and improvements, please refer to the release notes.

Omega

The omega tactic is a decision procedure for a large class of equalities and inequalities over natural numbers and integers, built from constants, variables, addition, subtraction, and multiplication by constants. Subexpressions that don't fit this grammar are treated as if they were variables, so omega can also solve goals like f x - 1 < f x + 5 (where f x is treated as if it were a variable).
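
For instance, here is a small self-contained example in the spirit of that goal (the names f and x are arbitrary, chosen only for illustration); because subtraction on Nat truncates at zero, the claim holds for every value of f x, and omega proves it directly:

example (f : Nat → Nat) (x : Nat) : f x - 1 < f x + 5 := by
  omega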

My colleague Kim Morrison has been working on omega for a few months, and it has been available in Std for some time. There was a prior implementation for Lean 3 by Seul Baek, but the two tactics don't share any code. Because omega is now available out of the box, we've been able to use it in many more contexts, automatically discharging many more proof obligations. The improvement is particularly noticeable when working with compile-time bounds checking for array accesses and with recursive functions over numbers.

Array Bounds Checking

In Lean, array lookups are statically bounds-checked by default—writing xs[3] causes Lean to try to prove that 3 < xs.size. The default tactic used for these goals in Lean 4.6 and earlier would work if precisely this fact were available in the local context, but not if e.g. xs.size > 18 were available. In Lean 4.7, omega is used for these situations, making static bounds checking much more ergonomic.
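
As a small illustration (the definition and its name are hypothetical, and the bound 18 is just the one from the example above), a stronger hypothesis in the local context is now enough for the lookup to elaborate:

def fourth (xs : Array Nat) (h : xs.size > 18) : Nat :=
  xs[3]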

Termination Proofs

Similarly, Lean 4.7 uses omega in the process of automatically searching for termination proofs for recursive functions. This means that fewer manually-provided proofs are necessary. For example, given this definition of binary numbers:

inductive Bin : Type where
  | zero
  | twice : Bin → Bin
  | succTwice : Bin → Bin

a function to convert Nat to Bin is most naturally written with explicit use of division by two:

def Bin.fromNat (n : Nat) : Bin :=
  if n = 0 then .zero
  else if n % 2 = 0 then .twice (fromNat (n / 2))
  else .succTwice (fromNat (n / 2))

This function terminates because n / 2 is less than n. Because omega is now used to check for decreasing arguments, this definition does not require any additional annotations or proofs.

Library Search

Lean 4.7 includes the new tactics apply? and exact? for searching for lemmas that apply to the present proof goal. Originally developed as part of Mathlib in Lean 3, these tactics migrated first to Lean 4, then to Std, and have now been upstreamed. In recent months, a new index structure has removed the need to generate a separate on-disk cache, which means that they can be used in any project without any additional complication in the build system.

Both library search tactics look for a lemma that can be applied to the current proof goal. exact? additionally requires that the lemma completely solve the goal.

When working with a very large library, this search process can take some time. That's why both tactics report their result back, and a single click can replace the call to the library search tactic with an invocation of apply or exact using the found lemma, making it fast to re-check the proof later.

This proof succeeds:

theorem map_contains {xs : List α} (f : α → β) (h : x ∈ xs) : f x ∈ xs.map f := by
  exact?

In the process of succeeding, it suggests:

Try this: exact List.mem_map_of_mem f h

Clicking the message in the infoview or running the code action rewrites the proof to:

theorem map_contains {xs : List α} (f : α → β) (h : x ∈ xs) : f x ∈ xs.map f := by
  exact List.mem_map_of_mem f h

The library search tactics are also useful for discovering missing assumptions. Using apply? in this example yields many suggestions:

theorem mul_lt (x y z : Nat) : x * y < x * z := by
  apply?

The first suggestion is:

Try this: refine Nat.mul_lt_mul_of_pos_left ?h ?hk

Applying this suggestion yields two further subgoals, which suggest premises to add to the theorem statement.

theorem mul_lt (x y z : Nat) : x * y < x * z := by
  refine Nat.mul_lt_mul_of_pos_left ?h ?hk

unsolved goals
case h
x y z : Nat
⊢ y < z

case hk
x y z : Nat
⊢ x > 0

Inspired by the type-checking speedups that result from inserting the found proof into the file, the new termination_by? keyword causes Lean to offer to insert an automatically discovered termination measure into the file. If the search for a measure takes a long time, accepting the suggestion can speed up rebuilding the file. For example,

def ack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ack m 1
  | m + 1, n + 1 => ack m (ack (m + 1) n)
termination_by?

is accepted by Lean, which can automatically discover the necessary lexicographic order for termination checking, but it also suggests an explicit measure:

Try this: termination_by x1 x2 => (sizeOf x1, sizeOf x2)

Clicking the suggestion updates the program with the measure:

def ack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ack m 1
  | m + 1, n + 1 => ack m (ack (m + 1) n)
termination_by x1 x2 => (sizeOf x1, sizeOf x2)

Further Tactics

In addition to omega and the library search tactics, a number of generally-useful tactics from Std and Mathlib are now a core part of Lean, available without imports to all users.

Recursive Case Analysis

The rcases tactic recursively case-splits a hypothesis, driven by a pattern written in a concise DSL that's inspired by Coq's intro patterns. This tactic was written by Mario Carneiro. Individual hypotheses to extract from one case are provided in angle brackets (⟨ and ⟩), and patterns that describe alternative cases are separated by vertical bars (|). For example, angle brackets are useful for splitting a nested conjunction:

theorem test (h : p1 ∧ p2 ∧ p3) : p3 ∧ p2 ∧ p1 := by
  rcases h with ⟨h1, h2, h3⟩

unsolved goals
case intro.intro
p1 p2 p3 : Prop
h1 : p1
h2 : p2
h3 : p3
⊢ p3 ∧ p2 ∧ p1

Vertical bars are useful for naming nested disjuncts:

theorem test (h : p1 ∨ p2 ∨ p3) : p3 ∨ p2 ∨ p1 := by
  rcases h with h1 | h2 | h3

unsolved goals
case inl
p1 p2 p3 : Prop
h1 : p1
⊢ p3 ∨ p2 ∨ p1

case inr.inl
p1 p2 p3 : Prop
h2 : p2
⊢ p3 ∨ p2 ∨ p1

case inr.inr
p1 p2 p3 : Prop
h3 : p3
⊢ p3 ∨ p2 ∨ p1

The patterns compose as well:

theorem test (h : p1 ∨ (p2 ∧ p3) ∨ p4) : p4 ∨ p2 ∧ p1 := by
  rcases h with h1 | ⟨h2, h3⟩ | h4

unsolved goals
case inl
p1 p2 p3 p4 : Prop
h1 : p1
⊢ p4 ∨ p2 ∧ p1

case inr.inl.intro
p1 p2 p3 p4 : Prop
h2 : p2
h3 : p3
⊢ p4 ∨ p2 ∧ p1

case inr.inr
p1 p2 p3 p4 : Prop
h4 : p4
⊢ p4 ∨ p2 ∧ p1

One common use of angle brackets in rcases is to destructure existentials, bringing both the witness and the proof into scope. In this proof that every number is either even or odd, rcases both splits the disjunction in the induction hypothesis and introduces the existential witnesses and proofs:

theorem even_or_odd (n : Nat) : (∃ half, n = 2 * half) ∨ (∃ half, n = 2 * half + 1) := by
  induction n with
  | zero =>
    apply Or.inl
    exact ⟨0, rfl⟩
  | succ n' ih =>
    rcases ih with ⟨half, prf⟩ | ⟨half, prf⟩
    · apply Or.inr
      refine ⟨half, ?_⟩
      show Nat.succ n' = Nat.succ (2 * half)
      exact congrArg Nat.succ prf
    · apply Or.inl
      rw [prf]
      exact ⟨half + 1, rfl⟩

The pattern language for rcases is described in more detail in its docstring.

Extensionality

The ext tactic, by Simon Hudon, applies an extensionality lemma to the current goal. Extensionality lemmas characterize the equality of two objects by the observations that can be made of them: two functions are equal if they return equal outputs for every input, and two structures are equal if their corresponding fields are equal.

Lemmas marked with the @[ext] attribute are added to the database that the ext tactic searches. Furthermore, applying @[ext] to a structure definition arranges for an appropriate extensionality lemma for the structure type to be stated, proved, and added to the database. In the following example, the attribute generates the lemma for Bounds:

@[ext]
structure Bounds where
  lower : Nat
  upper : Nat
  ok : lower < upper

While the following instances of Bounds are already definitionally equal, ext can be used to prove their equality:

def bOne : Bounds := ⟨1, 4, by omega⟩

def bTwo : Bounds where
  lower := Nat.succ Nat.zero
  upper := 2 + 2
  ok := Nat.lt_of_sub_eq_succ rfl

Applying the ext tactic to this goal gives two new goals, one for each non-proof field:

theorem bOne_eq_bTwo : bOne = bTwo := by
  ext

unsolved goals
case lower
⊢ bOne.lower = bTwo.lower

case upper
⊢ bOne.upper = bTwo.upper

Simplifier Suggestions

The simp? tactic runs simp as usual, but it tracks which lemmas were used and then suggests replacing the call with an invocation of simp only that lists exactly those lemmas. Using a minimal set of simp lemmas can make proofs faster to check, and it also makes them more robust to changes in the set of default simp lemmas. Generally, it's a good idea to ensure that all uses of simp that don't close the current goal take a restricted set of lemmas.
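
As a small, hypothetical example, the following proof succeeds, and simp? reports a suggestion along the lines of Try this: simp only [List.length_map]:

example (xs : List Nat) : (xs.map (· + 1)).length = xs.length := by
  simp?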

Proof by Cases

With classical logic, showing both P → Q and ¬P → Q is sufficient to show Q. The by_cases tactic captures this pattern of reasoning: by_cases P results in two subgoals, one with an assumption of P and one with an assumption of ¬P.
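
Here is a small illustrative example (the statement itself is arbitrary); in the second case, the negated assumption is handed to omega to establish the right-hand disjunct:

example (n : Nat) : n ≤ 5 ∨ 5 < n := by
  by_cases h : n ≤ 5
  · exact Or.inl h
  · exact Or.inr (by omega)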

Cast Normalization

Lean's coercion mechanism makes it easy to combine terms with different types, but the resulting term will include calls to the functions that perform the conversion. When writing proofs, these essentially bureaucratic details can get in the way, and working to manually remove them is tedious.

The norm_cast tactic performs rewrites to put these casts in a normal form, making more terms definitionally equal. It was originally developed for Lean 3 by Robert Y. Lewis and Paul-Nicolas Madelaine.
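
For instance, in this contrived sketch, the hypothesis mixes Nat and Int via a coercion; norm_cast turns it back into a statement about Nat, after which omega can finish:

example (m n : Nat) (h : (m : Int) + 1 < n) : m < n := by
  norm_cast at h
  omega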

Changing Types

The change tactic replaces the goal or a hypothesis with one that is definitionally equal, accepting as its argument a pattern that the resulting goal or hypothesis type should match. Underscores in the pattern represent terms to be filled out by unification.

For instance, given this function that splits a list into two sublists:

def split : List α → List α × List α
  | [] => ([], [])
  | [x] => ([x], [])
  | x :: x' :: xs =>
    let (pre, post) := split xs
    (x :: pre, x' :: post)

it may be useful to prove that the resulting pair of lists has as many entries as the original list:

theorem split_length (xs : List α) : (split xs).fst.length + (split xs).snd.length = xs.length := by
  match xs with
  | [] => rfl
  | [y] => rfl
  | y :: y' :: ys =>
    have := split_length ys

At this point in the proof, the unsolved goal is not yet in a form suitable for omega:

unsolved goals
α : Type u_1
xs : List α
y y' : α
ys : List α
this : List.length (split ys).fst + List.length (split ys).snd = List.length ys
⊢ List.length (split (y :: y' :: ys)).fst + List.length (split (y :: y' :: ys)).snd = List.length (y :: y' :: ys)

Invoking the change tactic with a pattern that exposes the additions that result from the calls to List.length results in a goal that is suitable for omega:

theorem split_length (xs : List α) : (split xs).fst.length + (split xs).snd.length = xs.length := by
  match xs with
  | [] => rfl
  | [y] => rfl
  | y :: y' :: ys =>
    have := split_length ys
    change List.length (split ys).fst + 1 + List.length (split ys).snd + 1 = List.length ys + 2

unsolved goals
α : Type u_1
xs : List α
y y' : α
ys : List α
this : List.length (split ys).fst + List.length (split ys).snd = List.length ys
⊢ List.length (split ys).fst + 1 + List.length (split ys).snd + 1 = List.length ys + 2

The change tactic also optionally accepts a hypothesis name using the same at syntax as simp and rw, causing it to rewrite the hypothesis.
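
For example, in this small, arbitrary illustration, a hypothesis is restated in a definitionally equal form before being used:

example (n : Nat) (h : 1 + 1 = n) : n = 2 := by
  change 2 = n at h
  exact h.symm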

Closing Tactics

solve_by_elim

The solve_by_elim tactic performs a proof search, repeatedly using the apply tactic with the assumptions in scope, along with an optional list of provided lemmas, to close the current goal. Unlike apply, solve_by_elim will also use the assumptions or lemmas in nested positions, automatically combining them with congrArg and congrFun as needed.

In this updated version of the proof that every natural number is even or odd, solve_by_elim automatically uses the proofs in the local context to find the existential witness and prove the desired property:

theorem even_or_odd (n : Nat) : (∃ half, n = 2 * half) ∨ (∃ half, n = 2 * half + 1) := by
  induction n with
  | zero =>
    apply Or.inl
    exact ⟨0, rfl⟩
  | succ n' ih =>
    rcases ih with ⟨half, prf⟩ | ⟨half, prf⟩
    · apply Or.inr
      solve_by_elim [Exists.intro]
    · apply Or.inl
      rw [prf]
      exact ⟨half + 1, rfl⟩

If you have a background in dependent type theory, don't let the name of solve_by_elim throw you off; it's not a reference to the use of eliminators of inductive types. This is a very old tactic that has been ported from prior versions of Lean. Historically, elim-lemmas were those used by various parts of proof automation, and the name of the tactic stuck even when the concept of elim-lemmas no longer existed. The tactic was originally written by Simon Hudon, and later rewritten by Kim Morrison.

simpa

simpa is a specialized version of simp, designed exclusively for closing goals. Like rcases, it was written by Mario Carneiro. It can be provided a term argument with using, in which case it simplifies both the goal and the term's type before using the term to close the goal. Without a using argument, it will use an assumption called this as the term, simplifying both the goal and the assumption's type. Simplifying both types the same way makes the proof more robust to changes in the simp set.
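
Here is a small, contrived illustration of the using form (List.length_map is the simp lemma doing the work on the hypothesis):

example (xs : List Nat) (h : (xs.map (· * 2)).length = 3) : xs.length = 3 := by
  simpa using h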

As an example, the proof split_length that splitting a list results in two lists whose lengths sum to the original length can use simpa instead of omega:

theorem split_length (xs : List α) : (split xs).fst.length + (split xs).snd.length = xs.length := by
  match xs with
  | [] => rfl
  | [y] => rfl
  | y :: y' :: ys =>
    have := split_length ys
    change List.length (split ys).fst + 1 + List.length (split ys).snd + 1 = List.length ys + 2
    simpa (config := { arith := true })

Simplifying the goal in arithmetic mode makes it match the assumption this that was introduced by the have tactic.

Tools for Testing

Lean now includes the testing commands #guard_msgs, #check_tactic, and #check_simp from Std. These can be used for lightweight testing of Lean tactics, programs, and metaprograms.

The #guard_msgs command ensures that the messages emitted by the following Lean command are as expected. The expected message is in a documentation comment prior to #guard_msgs.

For example,

#guard_msgs in
#eval Lean.versionString

is an error, because there is no expected message provided:

❌ Docstring on `#guard_msgs` does not match generated message:

info: "4.7.0"

Lean provides a quick fix to add or update the message, which results in:

/-- info: "4.7.0" -/
#guard_msgs in
#eval Lean.versionString

Tactics can be tested using #check_tactic, which takes a starting proof goal, an end goal, and a tactic script, ensuring that the tactics result in the desired end goal when applied to the starting goal. For example:

variable (x : Nat)

#check_tactic 1 + x ~> x + 1 by rw [Nat.add_comm]

#check_simp is just like #check_tactic, except it always uses simp as the tactic script.
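
For instance, this check (a hypothetical example that reuses the variable x declared above, and relies on Nat.zero_add being a simp lemma) passes because simp rewrites the left-hand side to the right-hand side:

#check_simp 0 + x ~> x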

Proofs about Basic Types

Many, many proofs about basic types have been upstreamed into Lean. This has already made it easier to use Lean without installing extra libraries, lowering the barrier to getting started.

We anticipate even larger benefits going forward. An important aspect of a Lean library is the definition of its simp normal forms. Most libraries include simp lemmas that specify the interactions between the operations provided in the library, guiding the simplifier towards some notion of simpler expressions. A simp normal form is the specification of what simplicity means for a given library. Splitting the development of lemmas about core datatypes between the Lean repository and the Std repository made it much more difficult to coordinate consistent normal forms for simp. While upstreaming these lemmas, we've taken care to ensure that simp is as predictable as possible, with clear normal forms.

Compiler Quality of Life Improvements

Not all improvements in Lean 4.7.0 are upstreamed features from Std.

Improved Completion

Completing symbols is now significantly faster and more robust, especially in contexts where there are a large number of names in scope. In a file with import Mathlib, the performance of various kinds of completions has improved as follows:

  • Completing C: 49000ms -> 1400ms

  • Completing Cat: 14300ms -> 1000ms

  • Completing x. for x : Nat: 3700ms -> 220ms

  • Completing . for an expected type of Nat: 11000ms -> 180ms

Along the way, a number of bugs were fixed as well.

Server Startup

Cross-reference information is now loaded asynchronously, rather than synchronously, when the language server starts. This reduces startup time significantly, at the cost of commands like "find references" returning incomplete results in the first few seconds after opening a file.

Error Messages for decide

The decide tactic uses a Decidable instance to discharge a goal. For example, inequality of natural numbers is decidable, so this proof goes through:

theorem easy : 2 + 2 < 5 := by decide

It may not be able to discharge the goal for a variety of reasons:

  • the proposition may be false

  • the goal may contain free variables or metavariables, in which case the code in the Decidable instance can get stuck, so the tactic fails

  • the Decidable instance itself may not be computable, in which case it cannot be evaluated to produce an answer

In prior versions of Lean, the error message made it difficult to know which of these situations had occurred. In Lean 4.7.0, decide now gives a much more useful error message when it is unable to discharge a goal, explicitly stating why it could not complete the proof. If the proposition is false, it says so explicitly:

theorem hard : 2 + 2 < 4 := by decide

tactic 'decide' proved that the proposition
  2 + 2 < 4
is false

If it fails because there are variables, the error message is explicit about this:

theorem var : n < n + 2 := by decide

expected type must not contain free or meta variables
  n < n + 2

And finally, the use of noncomputable Decidable instances that lead to stuck programs is also indicated explicitly. Classical reasoning allows all propositions to be treated as decidable, but it does not give an algorithm for checking whether they're true or false:

opaque unknownProp : Prop

open scoped Classical in
example : unknownProp := by decide

tactic 'decide' failed for proposition
  unknownProp
since its 'Decidable' instance reduced to
  Classical.choice ⋯
rather than to the 'isTrue' constructor.

Breaking Changes

The rounding behavior of the integer division and modulus infix operators has changed to the Euclidean rounding convention. Users who already use Std will not be affected, as Std was already overriding the rounding behavior to this convention.
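
As a small sketch of the new behavior (values chosen arbitrarily), Euclidean division rounds so that the remainder is never negative; under the previous truncating convention these would have evaluated to -3 and -1:

#eval (-7 : Int) / 2   -- -4
#eval (-7 : Int) % 2   -- 1, since -7 = 2 * -4 + 1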

The function Lean.withTraceNode, which is useful for tactic authors and metaprogrammers, now requires that it is used in a monad with a MonadAlwaysExcept ε m instance. This instance should be synthesized automatically in most contexts, which are typically built on EIO Lean.Exception.

The nomatch notation now supports multiple comma-separated terms, and the new nofun notation introduces assumptions before checking that they are impossible. These upstream the functionality of the match ... with. and fun. notations from Std, but with changed syntax.
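
For instance, both notations can dispatch impossible hypotheses directly (small illustrative examples):

example (n : Nat) (h : n + 1 = 0) : False := nomatch h

example : 1 = 2 → False := nofun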