9.5. Tactic Reference
9.5.1. Assumptions
assumption
assumption tries to solve the main goal using a hypothesis of compatible type, or else fails.
Note also the ‹t› term notation, which is a shorthand for show t by assumption.
apply_assumption
apply_assumption
looks for an assumption of the form ... → ∀ _, ... → head
where head
matches the current goal.
You can specify additional rules to apply using apply_assumption [...]
.
By default apply_assumption
will also try rfl
, trivial
, congrFun
, and congrArg
.
If you don't want these, or don't want to use all hypotheses, use apply_assumption only [...]
.
You can use apply_assumption [-h]
to omit a local hypothesis.
You can use apply_assumption using [a₁, ...]
to use all lemmas which have been labelled
with the attributes aᵢ
(these attributes must be created using register_label_attr
).
apply_assumption
will use consequences of local hypotheses obtained via symm
.
If apply_assumption
fails, it will call exfalso
and try again.
Thus if there is an assumption of the form P → ¬ Q
, the new tactic state
will have two goals, P
and Q
.
You can pass a further configuration via the syntax apply_assumption (config := {...}) lemmas.
The options supported are the same as for solve_by_elim
(and include all the options for apply
).
9.5.2. Quantifiers
exists
exists e₁, e₂, ...
is shorthand for refine ⟨e₁, e₂, ...⟩; try trivial
.
It is useful for existential goals.
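For instance:
example : ∃ n : Nat, n + 3 = 8 := by
  exists 5 -- leaves ⊢ 5 + 3 = 8, which the trailing try trivial closes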
intro
Introduces one or more hypotheses, optionally naming and/or pattern-matching them.
For each hypothesis to be introduced, the remaining main goal's target type must
be a let
or function type.
- intro by itself introduces one anonymous hypothesis, which can be accessed by e.g. assumption.
- intro x y introduces two hypotheses and names them. Individual hypotheses can be anonymized via _, or matched against a pattern:
  -- ... ⊢ α × β → ...
  intro (a, b)
  -- ..., a : α, b : β ⊢ ...
- Alternatively, intro can be combined with pattern matching much like fun:
  intro
  | n + 1, 0 => tac
  | ...
intros
Introduces zero or more hypotheses, optionally naming them.
- intros is equivalent to repeatedly applying intro until the goal is not an obvious candidate for intro, which is to say that so long as the goal is a let or a pi type (e.g. an implication, function, or universal quantifier), the intros tactic will introduce an anonymous hypothesis. This tactic does not unfold definitions.
- intros x y ... is equivalent to intro x y ..., introducing hypotheses for each supplied argument and unfolding definitions as necessary. Each argument can be either an identifier or a _. An identifier indicates a name to use for the corresponding introduced hypothesis, and a _ indicates that the hypothesis should be introduced anonymously.
Examples
Basic properties:
def AllEven (f : Nat → Nat) := ∀ n, f n % 2 = 0

-- Introduces the two obvious hypotheses automatically
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := by
  intros
  /- Tactic state
     f✝ : Nat → Nat
     a✝ : AllEven f✝
     ⊢ AllEven fun k => f✝ (k + 1) -/
  sorry

-- Introduces exactly two hypotheses, naming only the first
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := by
  intros g _
  /- Tactic state
     g : Nat → Nat
     a✝ : AllEven g
     ⊢ AllEven fun k => g (k + 1) -/
  sorry

-- Introduces exactly three hypotheses, which requires unfolding `AllEven`
example : ∀ (f : Nat → Nat), AllEven f → AllEven (fun k => f (k + 1)) := by
  intros f h n
  /- Tactic state
     f : Nat → Nat
     h : AllEven f
     n : Nat
     ⊢ (fun k => f (k + 1)) n % 2 = 0 -/
  apply h
Implications:
example (p q : Prop) : p → q → p := by
  intros
  /- Tactic state
     a✝¹ : p
     a✝ : q
     ⊢ p -/
  assumption
Let bindings:
example : let n := 1; let k := 2; n + k = 3 := by
  intros
  /- n✝ : Nat := 1
     k✝ : Nat := 2
     ⊢ n✝ + k✝ = 3 -/
  rfl
intro | ... => ... | ... => ...
The tactic
intro | pat1 => tac1 | pat2 => tac2
is the same as:
intro x
match x with
| pat1 => tac1
| pat2 => tac2
That is, intro
can be followed by match arms and it introduces the values while
doing a pattern match. This is equivalent to fun
with match arms in term mode.
rintro
The rintro
tactic is a combination of the intros
tactic with rcases
to
allow for destructuring patterns while introducing variables. See rcases
for
a description of supported patterns. For example, rintro (a | ⟨b, c⟩) ⟨d, e⟩
will introduce two variables, and then do case splits on both of them producing
two subgoals, one with variables a d e
and the other with b c d e
.
rintro
, unlike rcases
, also supports the form (x y : ty)
for introducing
and type-ascripting multiple variables at once, similar to binders.
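A minimal sketch of destructuring while introducing:
example (p q r : Prop) : (p ∧ q) ∨ r → r ∨ p := by
  rintro (⟨hp, _⟩ | hr)
  · exact Or.inr hp
  · exact Or.inl hr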
9.5.3. Relations
rfl
This tactic applies to a goal whose target has the form x ~ x
,
where ~
is equality, heterogeneous equality or any relation that
has a reflexivity lemma tagged with the attribute @[refl].
rfl'
rfl'
is similar to rfl
, but disables smart unfolding and unfolds all kinds of definitions,
theorems included (relevant for declarations defined by well-founded recursion).
symm
- symm applies to a goal whose target has the form t ~ u where ~ is a symmetric relation, that is, a relation which has a symmetry lemma tagged with the attribute [symm]. It replaces the target with u ~ t.
- symm at h will rewrite a hypothesis h : t ~ u to h : u ~ t.
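For example:
example (a b : Nat) (h : a = b) : b = a := by
  symm
  exact h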
symm_saturate
For every hypothesis h : a ~ b
where a @[symm]
lemma is available,
add a hypothesis h_symm : b ~ a
.
calc
Step-wise reasoning over transitive relations.
calc a = b := pab
     b = c := pbc
     ...
     y = z := pyz
proves a = z
from the given step-wise proofs. =
can be replaced with any
relation implementing the typeclass Trans
. Instead of repeating the right-
hand sides, subsequent left-hand sides can be replaced with _
.
calc a = b := pab
     _ = c := pbc
     ...
     _ = z := pyz
It is also possible to write the first relation as <lhs>
  _ = <rhs> := <proof>
This is useful for aligning relation symbols, especially on longer identifiers:
calc abc
  _ = bce := pabce
  _ = cef := pbcef
  ...
  _ = xyz := pwxyz
calc
works as a term, as a tactic or as a conv
tactic.
See Theorem Proving in Lean 4 for more information.
9.5.3.1. Equality
subst
subst x...
substitutes each x
with e
in the goal if there is a hypothesis
of type x = e
or e = x
.
If x
is itself a hypothesis of type y = e
or e = y
, y
is substituted instead.
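A small example:
example (a b : Nat) (h : a = b) (h2 : a + 1 = 5) : b + 1 = 5 := by
  subst h
  exact h2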
subst_eqs
subst_eqs
repeatedly substitutes according to the equality proof hypotheses in the context,
replacing the left side of the equality with the right, until no more progress can be made.
congr
Apply congruence (recursively) to goals of the form ⊢ f as = f bs
and ⊢ HEq (f as) (f bs)
.
The optional parameter is the depth of the recursive applications.
This is useful when congr
is too aggressive in breaking down the goal.
For example, given ⊢ f (g (x + y)) = f (g (y + x))
,
congr
produces the goals ⊢ x = y
and ⊢ y = x
,
while congr 2
produces the intended ⊢ x + y = y + x
.
ac_rfl
ac_rfl
proves equalities up to application of an associative and commutative operator.
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩

example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by
  ac_rfl
9.5.4. Lemmas
apply
apply e
tries to match the current goal against the conclusion of e
's type.
If it succeeds, then the tactic returns as many subgoals as the number of premises that
have not been fixed by type inference or type class resolution.
Non-dependent premises are added before dependent ones.
The apply
tactic uses higher-order pattern matching, type class resolution,
and first-order unification with dependent types.
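For example:
example (p q : Prop) (h : p → q) (hp : p) : q := by
  apply h
  exact hp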
refine
refine e
behaves like exact e
, except that named (?x
) or unnamed (?_
)
holes in e
that are not solved by unification with the main goal's target type
are converted into new goals, using the hole's name, if any, as the goal case name.
refine'
refine' e
behaves like refine e
, except that unsolved placeholders (_
)
and implicit parameters are also converted into new goals.
solve_by_elim
solve_by_elim
calls apply
on the main goal to find an assumption whose head matches
and then repeatedly calls apply
on the generated subgoals until no subgoals remain,
performing at most maxDepth
(defaults to 6) recursive steps.
solve_by_elim
discharges the current goal or fails.
solve_by_elim
performs backtracking if subgoals can not be solved.
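A minimal sketch of the kind of goal it discharges (the hypothesis names are arbitrary):
example (p q r : Prop) (hpq : p → q) (hqr : q → r) (hp : p) : r := by
  solve_by_elim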
By default, the assumptions passed to apply
are the local context, rfl
, trivial
,
congrFun
and congrArg
.
The assumptions can be modified with similar syntax as for simp
:
- solve_by_elim [h₁, h₂, ..., hᵣ] also applies the given expressions.
- solve_by_elim only [h₁, h₂, ..., hᵣ] does not include the local context, rfl, trivial, congrFun, or congrArg unless they are explicitly included.
- solve_by_elim [-h₁, ... -hₙ] removes the given local hypotheses.
- solve_by_elim using [a₁, ...] uses all lemmas which have been labelled with the attributes aᵢ (these attributes must be created using register_label_attr).
solve_by_elim*
tries to solve all goals together, using backtracking if a solution for one goal
makes other goals impossible.
(Adding or removing local hypotheses may not be well-behaved when starting with multiple goals.)
Optional arguments passed via a configuration argument as solve_by_elim (config := { ... })
- maxDepth: number of attempts at discharging generated subgoals.
- symm: adds all hypotheses derived by symm (defaults to true).
- exfalso: allow calling exfalso and trying again if solve_by_elim fails (defaults to true).
- transparency: change the transparency mode when calling apply. Defaults to .default, but it is often useful to change to .reducible, so semireducible definitions will not be unfolded when trying to apply a lemma.
See also the doc-comment for Lean.Meta.Tactic.Backtrack.BacktrackConfig
for the options
proc
, suspend
, and discharge
which allow further customization of solve_by_elim
.
Both apply_assumption
and apply_rules
are implemented via these hooks.
apply_rules
apply_rules [l₁, l₂, ...]
tries to solve the main goal by iteratively
applying the list of lemmas [l₁, l₂, ...]
or by applying a local hypothesis.
If apply
generates new goals, apply_rules
iteratively tries to solve those goals.
You can use apply_rules [-h]
to omit a local hypothesis.
apply_rules
will also use rfl
, trivial
, congrFun
and congrArg
.
These can be disabled, as can local hypotheses, by using apply_rules only [...]
.
You can use apply_rules using [a₁, ...]
to use all lemmas which have been labelled
with the attributes aᵢ
(these attributes must be created using register_label_attr
).
You can pass a further configuration via the syntax apply_rules (config := {...})
.
The options supported are the same as for solve_by_elim
(and include all the options for apply
).
apply_rules
will try calling symm
on hypotheses and exfalso
on the goal as needed.
This can be disabled with apply_rules (config := {symm := false, exfalso := false})
.
You can bound the iteration depth using the syntax apply_rules (config := {maxDepth := n})
.
Unlike solve_by_elim
, apply_rules
does not perform backtracking, and greedily applies
a lemma from the list until it gets stuck.
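A minimal sketch, assuming the core lemma Nat.add_le_add (a ≤ b → c ≤ d → a + c ≤ b + d):
example (a b c d : Nat) (h₁ : a ≤ b) (h₂ : c ≤ d) : a + c ≤ b + d := by
  apply_rules [Nat.add_le_add]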
9.5.5. Falsehood
contradiction
contradiction
closes the main goal if its hypotheses are "trivially contradictory".
-
Inductive type/family with no applicable constructors
example (h : False) : p := by contradiction
-
Injectivity of constructors
example (h : none = some true) : p := by contradiction
-
Decidable false proposition
example (h : 2 + 2 = 3) : p := by contradiction
-
Contradictory hypotheses
example (h : p) (h' : ¬ p) : q := by contradiction
-
Other simple contradictions such as
example (x : Nat) (h : x ≠ x) : p := by contradiction
false_or_by_contra
Changes the goal to False
, retaining as much information as possible:
- If the goal is False, do nothing.
- If the goal is an implication or a function type, introduce the argument and restart. (In particular, if the goal is x ≠ y, introduce x = y.)
- Otherwise, for a propositional goal P, replace it with ¬ ¬ P (attempting to find a Decidable instance, but otherwise falling back to working classically) and introduce ¬ P.
- For a non-propositional goal use False.elim.
9.5.6. Goal Management
suffices
Given a main goal ctx ⊢ t, suffices h : t' from e replaces the main goal with ctx ⊢ t',
where e must have type t in the context ctx, h : t'.
The variant suffices h : t' by tac
is a shorthand for suffices h : t' from by tac
.
If h :
is omitted, the name this
is used.
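For instance:
example (p q : Prop) (hq : q) (h : q → p) : p := by
  suffices hq' : q from h hq'
  exact hq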
change
- change tgt' will change the goal from tgt to tgt', assuming these are definitionally equal.
- change t' at h will change hypothesis h : t to have type t', assuming t and t' are definitionally equal.
change ... with ...
- change a with b will change occurrences of a to b in the goal, assuming a and b are definitionally equal.
- change a with b at h similarly changes a to b in the type of hypothesis h.
generalize
- generalize ([h :] e = x),+ replaces all occurrences of e in the main goal with a fresh variable x. If h is given, h : e = x is introduced as well.
- generalize e = x at h₁ ... hₙ also generalizes occurrences of e inside h₁, ..., hₙ.
- generalize e = x at * will generalize occurrences of e everywhere.
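A small example (Nat.mul_zero is the core lemma n * 0 = 0):
example (n : Nat) : (n + 1) * 0 = 0 := by
  generalize n + 1 = m
  -- ⊢ m * 0 = 0
  exact Nat.mul_zero m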
specialize
The tactic specialize h a₁ ... aₙ
works on local hypothesis h
.
The premises of this hypothesis, either universal quantifications or
non-dependent implications, are instantiated by concrete terms coming
from arguments a₁
... aₙ
.
The tactic adds a new hypothesis with the same name h := h a₁ ... aₙ
and tries to clear the previous one.
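For example:
example (h : ∀ n : Nat, n + 0 = n) : 5 + 0 = 5 := by
  specialize h 5
  exact h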
obtain
The obtain
tactic is a combination of have
and rcases
. See rcases
for
a description of supported patterns.
obtain ⟨patt⟩ : type := proof
is equivalent to
have h : type := proof
rcases h with ⟨patt⟩
If ⟨patt⟩
is omitted, rcases
will try to infer the pattern.
If type
is omitted, := proof
is required.
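For instance:
example (p q : Prop) (h : p ∧ q) : q := by
  obtain ⟨_, hq⟩ := h
  exact hq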
show
show t
finds the first goal whose target unifies with t
. It makes that the main goal,
performs the unification, and replaces the target with the unified version of t
.
show_term
show_term tac
runs tac
, then prints the generated term in the form
"exact X Y Z" or "refine X ?_ Z" if there are remaining subgoals.
(For some tactics, the printed term will not be human readable.)
9.5.7. Cast Management
The tactics in this section make it easier to avoid getting stuck on casts, which are functions that coerce data from one type to another, such as converting a natural number to the corresponding integer. They are described in more detail by Lewis and Madelaine (2020) (Robert Y. Lewis and Paul-Nicolas Madelaine, 2020. “Simplifying Casts and Coercions”. arXiv:2001.10594).
norm_cast
The norm_cast
family of tactics is used to normalize certain coercions (casts) in expressions.
- norm_cast normalizes casts in the target.
- norm_cast at h normalizes casts in hypothesis h.
The tactic is basically a version of simp
with a specific set of lemmas to move casts
upwards in the expression.
Therefore even in situations where non-terminal simp
calls are discouraged (because of fragility),
norm_cast
is considered to be safe.
It also has special handling of numerals.
For instance, given an assumption
a b : ℤ h : ↑a + ↑b < (10 : ℚ)
writing norm_cast at h
will turn h
into
h : a + b < 10
There are also variants of basic tactics that use norm_cast
to normalize expressions during
their operation, to make them more flexible about the expressions they accept
(we say that it is a tactic modulo the effects of norm_cast
):
- exact_mod_cast for exact and apply_mod_cast for apply. Writing exact_mod_cast h and apply_mod_cast h will normalize casts in the goal and h before using exact h or apply h.
- rw_mod_cast for rw. It applies norm_cast between rewrites.
- assumption_mod_cast for assumption. This is effectively norm_cast at *; assumption, but more efficient. It normalizes casts in the goal and, for every hypothesis h in the context, it will try to normalize casts in h and use exact h.
See also push_cast
, which moves casts inwards rather than lifting them outwards.
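A minimal sketch of the _mod_cast variants in action, using the Nat-to-Int coercion:
example (a b : Nat) (h : (a : Int) + b = 10) : a + b = 10 := by
  exact_mod_cast h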
push_cast
push_cast
rewrites the goal to move certain coercions (casts) inward, toward the leaf nodes.
This uses norm_cast
lemmas in the forward direction.
For example, ↑(a + b)
will be written to ↑a + ↑b
.
- push_cast moves casts inward in the goal.
- push_cast at h moves casts inward in the hypothesis h. It can be used with extra simp lemmas with, for example, push_cast [Int.add_zero].
Example:
example (a b : Nat) (h1 : ((a + b : Nat) : Int) = 10)
    (h2 : ((a + b + 0 : Nat) : Int) = 10) :
    ((a + b : Nat) : Int) = 10 := by
  /- h1 : ↑(a + b) = 10
     h2 : ↑(a + b + 0) = 10
     ⊢ ↑(a + b) = 10 -/
  push_cast
  /- Now ⊢ ↑a + ↑b = 10 -/
  push_cast at h1
  push_cast [Int.add_zero] at h2
  /- Now h1 h2 : ↑a + ↑b = 10 -/
  exact h1
See also norm_cast
.
exact_mod_cast
Normalize casts in the goal and the given expression, then close the goal with exact
.
apply_mod_cast
Normalize casts in the goal and the given expression, then apply
the expression to the goal.
assumption_mod_cast
assumption_mod_cast
is a variant of assumption
that solves the goal
using a hypothesis. Unlike assumption
, it first pre-processes the goal and
each hypothesis to move casts as far outwards as possible, so it can be used
in more situations.
Concretely, it runs norm_cast
on the goal. For each local hypothesis h
, it also
normalizes h
with norm_cast
and tries to use that to close the goal.
9.5.8. Extensionality
ext
Applies extensionality lemmas that are registered with the @[ext]
attribute.
- ext pat* applies extensionality theorems as much as possible, using the patterns pat* to introduce the variables in extensionality theorems using rintro. For example, the patterns are used to name the variables introduced by lemmas such as funext.
- Without patterns, ext applies extensionality lemmas as much as possible but introduces anonymous hypotheses whenever needed.
- ext pat* : n applies ext theorems only up to depth n.
The ext1 pat*
tactic is like ext pat*
except that it only applies a single extensionality theorem.
Unused patterns will generate a warning. Patterns that don't match the variables will typically result in the introduction of anonymous hypotheses.
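For example, function extensionality via ext:
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := by
  ext x
  exact h x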
ext1
ext1 pat*
is like ext pat*
except that it only applies a single extensionality theorem rather
than recursively applying as many extensionality theorems as possible.
The pat*
patterns are processed using the rintro
tactic.
If no patterns are supplied, then variables are introduced anonymously using the intros
tactic.
funext
Apply function extensionality and introduce new hypotheses.
The tactic funext
will keep applying the funext
lemma until the goal target is not reducible to
|- ((fun x => ...) = (fun x => ...))
The variant funext h₁ ... hₙ
applies funext
n
times, and uses the given identifiers to name the new hypotheses.
Patterns can be used like in the intro
tactic. For example, given a goal
|- ((fun x : Nat × Bool => ...) = (fun x => ...))
funext (a, b)
applies funext
once and performs pattern matching on the newly introduced pair.
9.5.9. Simplification
The simplifier is described in greater detail in its dedicated chapter.
simp
The simp
tactic uses lemmas and hypotheses to simplify the main goal target or
non-dependent hypotheses. It has many variants:
- simp simplifies the main goal target using lemmas tagged with the attribute [simp].
- simp [h₁, h₂, ..., hₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp] and the given hᵢ's, where the hᵢ's are expressions.
  - If an hᵢ is a defined constant f, then f is unfolded. If f has equational lemmas associated with it (and is not a projection or a reducible definition), these are used to rewrite with f.
- simp [*] simplifies the main goal target using the lemmas tagged with the attribute [simp] and all hypotheses.
- simp only [h₁, h₂, ..., hₙ] is like simp [h₁, h₂, ..., hₙ] but does not use [simp] lemmas.
- simp [-id₁, ..., -idₙ] simplifies the main goal target using the lemmas tagged with the attribute [simp], but removes the ones named idᵢ.
- simp at h₁ h₂ ... hₙ simplifies the hypotheses h₁ : T₁ ... hₙ : Tₙ. If the target or another hypothesis depends on hᵢ, a new simplified hypothesis hᵢ is introduced, but the old one remains in the local context.
- simp at * simplifies all the hypotheses and the target.
- simp [*] at * simplifies the target and all (propositional) hypotheses using the other hypotheses.
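A small example mixing a hypothesis with the default simp set:
example (xs : List Nat) (h : xs = []) : xs.length + 0 = 0 := by
  simp [h]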
simp!
simp!
is shorthand for simp
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_arith
simp_arith
is shorthand for simp
with arith := true
and decide := true
.
This enables the use of normalization by linear arithmetic.
simp_arith!
simp_arith!
is shorthand for simp_arith
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
dsimp
The dsimp
tactic is the definitional simplifier. It is similar to simp
but only
applies theorems that hold by reflexivity. Thus, the result is guaranteed to be
definitionally equal to the input.
dsimp!
dsimp!
is shorthand for dsimp
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
dsimp?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
dsimp?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_all
simp_all
is a stronger version of simp [*] at *
where the hypotheses and target
are simplified multiple times until no simplification is applicable.
Only non-dependent propositional hypotheses are considered.
simp_all!
simp_all!
is shorthand for simp_all
with autoUnfold := true
.
This will rewrite with all equation lemmas, which can be used to
partially evaluate many definitions.
simp_all?
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simp_all?!
simp?
takes the same arguments as simp
, but reports an equivalent call to simp only
that would be sufficient to close the goal. This is useful for reducing the size of the simp
set in a local invocation to speed up processing.
example (x : Nat) : (if True then x + 2 else 3) = x + 2 := by simp? -- prints "Try this: simp only [ite_true]"
This command can also be used in simp_all
and dsimp
.
simpa
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa!
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simpa?!
This is a "finishing" tactic modification of simp
. It has two forms.
- simpa [rules, ⋯] using e will simplify the goal and the type of e using rules, then try to close the goal using e.
  Simplifying the type of e makes it more likely to match the goal (which has also been simplified). This construction also tends to be more robust under changes to the simp lemma set.
- simpa [rules, ⋯] will simplify the goal and the type of a hypothesis this if present in the context, then try to close the goal using the assumption tactic.
simp_wf
Unfold definitions commonly used in well founded relation definitions.
Since Lean 4.12, Lean unfolds these definitions automatically before presenting the goal to the
user, and this tactic should no longer be necessary. Calls to simp_wf
can be removed or replaced
by plain calls to simp
.
9.5.10. Rewriting
rw
rw
is like rewrite
, but also tries to close the goal by "cheap" (reducible) rfl
afterwards.
rewrite
rewrite [e]
applies identity e
as a rewrite rule to the target of the main goal.
If e
is preceded by left arrow (←
or <-
), the rewrite is applied in the reverse direction.
If e
is a defined constant, then the equational theorems associated with e
are used.
This provides a convenient way to unfold e
.
- rewrite [e₁, ..., eₙ] applies the given rules sequentially.
- rewrite [e] at l rewrites e at location(s) l, where l is either * or a list of hypotheses in the local context. In the latter case, a turnstile ⊢ or |- can also be used, to signify the target of the goal.
Using rw (occs := .pos L) [e]
,
where L : List Nat
, you can control which "occurrences" are rewritten.
(This option applies to each rule, so usually this will only be used with a single rule.)
Occurrences count from 1
.
At each allowed occurrence, arguments of the rewrite rule e
may be instantiated,
restricting which later rewrites can be found.
(Disallowed occurrences do not result in instantiation.)
(occs := .neg L)
allows skipping specified occurrences.
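A minimal sketch of restricting occurrences (here only the first a on the left-hand side is rewritten at first):
example (a b : Nat) (h : a = b) : a + a = b + b := by
  rw (occs := .pos [1]) [h]
  -- ⊢ b + a = b + b
  rw [h]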
erw
erw [rules]
is a shorthand for rw (transparency := .default) [rules]
.
This does rewriting up to unfolding of regular definitions (by comparison to regular rw
which only unfolds @[reducible]
definitions).
Lean.Meta.Rewrite.Config : Type
Constructor
  Lean.Meta.Rewrite.Config.mk
Fields
  transparency : Lean.Meta.TransparencyMode
  offsetCnstrs : Bool
  occs : Lean.Meta.Occurrences
  newGoals : Lean.Meta.Rewrite.NewGoals
Lean.Meta.Occurrences : Type
Configuration for which occurrences that match an expression should be rewritten.
Constructors
-
all : Lean.Meta.Occurrences
All occurrences should be rewritten.
-
pos (idxs : List Nat) : Lean.Meta.Occurrences
A list of indices for which occurrences should be rewritten.
-
neg (idxs : List Nat) : Lean.Meta.Occurrences
A list of indices for which occurrences should not be rewritten.
Lean.Meta.TransparencyMode : Type
Constructors
-
all : Lean.Meta.TransparencyMode
unfold all constants, even those tagged as
@[irreducible]
. -
default : Lean.Meta.TransparencyMode
unfold all constants except those tagged as
@[irreducible]
. -
reducible : Lean.Meta.TransparencyMode
unfold only constants tagged with the
@[reducible]
attribute. -
instances : Lean.Meta.TransparencyMode
unfold reducible constants and constants tagged with the
@[instance]
attribute.
Lean.Meta.Rewrite.NewGoals : Type
unfold
- unfold id unfolds all occurrences of definition id in the target.
- unfold id1 id2 ... is equivalent to unfold id1; unfold id2; ....
- unfold id at h unfolds at the hypothesis h.
Definitions can be either global or local definitions.
For non-recursive global definitions, this tactic is identical to delta
.
For recursive global definitions, it uses the "unfolding lemma" id.eq_def
,
which is generated for each recursive definition, to unfold according to the recursive definition given by the user.
Only one level of unfolding is performed, in contrast to simp only [id]
, which unfolds definition id
recursively.
Implemented by Lean.Elab.Tactic.evalUnfold
.
replace
Acts like have
, but removes a hypothesis with the same name as
this one if possible. For example, if the state is:
f : α → β
h : α
⊢ goal
Then after replace h := f h
the state will be:
f : α → β
h : β
⊢ goal
whereas have h := f h
would result in:
f : α → β
h† : α
h : β
⊢ goal
This can be used to simulate the specialize
and apply at
tactics of Coq.
delta
delta id1 id2 ...
delta-expands the definitions id1
, id2
, ....
This is a low-level tactic, it will expose how recursive definitions have been
compiled by Lean.
9.5.11. Inductive Types
9.5.11.1. Introduction
constructor
If the main goal's target type is an inductive type, constructor
solves it with
the first matching constructor, or else fails.
injection
The injection
tactic is based on the fact that constructors of inductive data
types are injections.
That means that if c
is a constructor of an inductive datatype, and if (c t₁)
and (c t₂)
are two terms that are equal then t₁
and t₂
are equal too.
If q
is a proof of a statement of conclusion t₁ = t₂
, then injection applies
injectivity to derive the equality of all arguments of t₁
and t₂
placed in
the same positions. For example, from (a::b) = (c::d)
we derive a=c
and b=d
.
To use this tactic t₁
and t₂
should be constructor applications of the same constructor.
Given h : a::b = c::d
, the tactic injection h
adds two new hypotheses with types
a = c
and b = d
to the main goal.
The tactic injection h with h₁ h₂
uses the names h₁
and h₂
to name the new hypotheses.
injections
injections
applies injection
to all hypotheses recursively
(since injection
can produce new hypotheses). Useful for destructing nested
constructor equalities like (a::b::c) = (d::e::f)
.
left
Applies the first constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example : True ∨ False := by
  left
  trivial
right
Applies the second constructor when the goal is an inductive type with exactly two constructors, or fails otherwise.
example {p q : Prop} (h : q) : p ∨ q := by
  right
  exact h
9.5.11.2. Elimination
Description of the @[induction_eliminator]
and @[cases_eliminator]
attributes
Tracked at issue #48
cases
Assuming x
is a variable in the local context with an inductive type,
cases x
splits the main goal, producing one goal for each constructor of the
inductive type, in which the target is replaced by a general instance of that constructor.
If the type of an element in the local context depends on x
,
that element is reverted and reintroduced afterward,
so that the case split affects that hypothesis as well.
cases
detects unreachable cases and closes them automatically.
For example, given n : Nat
and a goal with a hypothesis h : P n
and target Q n
,
cases n
produces one goal with hypothesis h : P 0
and target Q 0
,
and one goal with hypothesis h : P (Nat.succ a)
and target Q (Nat.succ a)
.
Here the name a
is chosen automatically and is not accessible.
You can use with
to provide the variables names for each constructor.
- cases e, where e is an expression instead of a variable, generalizes e in the goal, and then cases on the resulting variable.
- Given as : List α, cases as with | nil => tac₁ | cons a as' => tac₂, uses tactic tac₁ for the nil case, and tac₂ for the cons case, and a and as' are used as names for the new variables introduced.
- cases h : e, where e is a variable or an expression, performs cases on e as above, but also adds a hypothesis h : e = ... to each goal, where ... is the constructor instance for that particular case.
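For example, using the structured with syntax:
example (n : Nat) : n = 0 ∨ ∃ m, n = m + 1 := by
  cases n with
  | zero => exact Or.inl rfl
  | succ m => exact Or.inr ⟨m, rfl⟩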
rcases
rcases
is a tactic that will perform cases
recursively, according to a pattern. It is used to
destructure hypotheses or expressions composed of inductive types like h1 : a ∧ b ∧ c ∨ d
or
h2 : ∃ x y, trans_rel R x y
. Usual usage might be rcases h1 with ⟨ha, hb, hc⟩ | hd
or
rcases h2 with ⟨x, y, _ | ⟨z, hxz, hzy⟩⟩
for these examples.
Each element of an rcases
pattern is matched against a particular local hypothesis (most of which
are generated during the execution of rcases
and represent individual elements destructured from
the input expression). An rcases
pattern has the following grammar:
- A name like x, which names the active hypothesis as x.
- A blank _, which does nothing (letting the automatic naming system used by cases name the hypothesis).
- A hyphen -, which clears the active hypothesis and any dependents.
- The keyword rfl, which expects the hypothesis to be h : a = b, and calls subst on the hypothesis (which has the effect of replacing b with a everywhere or vice versa).
- A type ascription p : ty, which sets the type of the hypothesis to ty and then matches it against p. (Of course, ty must unify with the actual type of h for this to work.)
- A tuple pattern ⟨p1, p2, p3⟩, which matches a constructor with many arguments, or a series of nested conjunctions or existentials. For example if the active hypothesis is a ∧ b ∧ c, then the conjunction will be destructured, and p1 will be matched against a, p2 against b and so on.
- A @ before a tuple pattern as in @⟨p1, p2, p3⟩ will bind all arguments in the constructor, while leaving the @ off will only use the patterns on the explicit arguments.
- An alternation pattern p1 | p2 | p3, which matches an inductive type with multiple constructors, or a nested disjunction like a ∨ b ∨ c.
A pattern like ⟨a, b, c⟩ | ⟨d, e⟩
will do a split over the inductive datatype,
naming the first three parameters of the first constructor as a,b,c
and the
first two of the second constructor d,e
. If the list is not as long as the
number of arguments to the constructor or the number of constructors, the
remaining variables will be automatically named. If there are nested brackets
such as ⟨⟨a⟩, b | c⟩ | d
then these will cause more case splits as necessary.
If there are too many arguments, such as ⟨a, b, c⟩
for splitting on
∃ x, ∃ y, p x
, then it will be treated as ⟨a, ⟨b, c⟩⟩
, splitting the last
parameter as necessary.
rcases
also has special support for quotient types: quotient induction into Prop works like
matching on the constructor quot.mk
.
rcases h : e with PAT
will do the same as rcases e with PAT
with the exception that an
assumption h : e = PAT
will be added to the context.
induction
Assuming x
is a variable in the local context with an inductive type,
induction x
applies induction on x
to the main goal,
producing one goal for each constructor of the inductive type,
in which the target is replaced by a general instance of that constructor
and an inductive hypothesis is added for each recursive argument to the constructor.
If the type of an element in the local context depends on x
,
that element is reverted and reintroduced afterward,
so that the inductive hypothesis incorporates that hypothesis as well.
For example, given n : Nat
and a goal with a hypothesis h : P n
and target Q n
,
induction n
produces one goal with hypothesis h : P 0
and target Q 0
,
and one goal with hypotheses h : P (Nat.succ a)
and ih₁ : P a → Q a
and target Q (Nat.succ a)
.
Here the names a
and ih₁
are chosen automatically and are not accessible.
You can use with
to provide the variables names for each constructor.
- induction e, where e is an expression instead of a variable, generalizes e in the goal, and then performs induction on the resulting variable.
- induction e using r allows the user to specify the principle of induction that should be used. Here r should be a term whose result type must be of the form C t, where C is a bound variable and t is a (possibly empty) sequence of bound variables.
- induction e generalizing z₁ ... zₙ, where z₁ ... zₙ are variables in the local context, generalizes over z₁ ... zₙ before applying the induction but then introduces them in each goal. In other words, the net effect is that each inductive hypothesis is generalized.
- Given x : Nat, induction x with | zero => tac₁ | succ x' ih => tac₂ uses tactic tac₁ for the zero case, and tac₂ for the succ case.
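A small example using the structured with syntax (List.cons_append is the core lemma (a :: l₁) ++ l₂ = a :: (l₁ ++ l₂)):
example (xs : List Nat) : xs ++ [] = xs := by
  induction xs with
  | nil => rfl
  | cons x xs ih => rw [List.cons_append, ih]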
nofun
The tactic nofun
is shorthand for exact nofun
: it introduces the assumptions, then performs an
empty pattern match, closing the goal if the introduced pattern is impossible.
9.5.12. Library Search
The library search tactics are intended for interactive use. When run, they search the Lean library for lemmas or rewrite rules that could be applicable in the current situation, and suggest a new tactic. These tactics should not be left in a proof; rather, their suggestions should be incorporated.
exact?
Searches environment for definitions or theorems that can solve the goal using exact
with conditions resolved by solve_by_elim
.
The optional using
clause provides identifiers in the local context that must be
used by exact?
when closing the goal. This is most useful if there are multiple
ways to resolve the goal, and one wants to guide which lemma is used.
apply?
Searches environment for definitions or theorems that can refine the goal using apply
with conditions resolved when possible with solve_by_elim
.
The optional using
clause provides identifiers in the local context that must be
used when closing the goal.
In this proof state:
invoking apply?
suggests:
Try this: exact Nat.lt_trans h1 h2
rw?
rw?
tries to find a lemma which can rewrite the goal.
rw?
should not be left in proofs; it is a search tool, like apply?
.
Suggestions are printed as rw [h]
or rw [← h]
.
You can use rw? [-my_lemma, -my_theorem]
to prevent rw?
from using the named lemmas.
9.5.13. Case Analysis
split
The split
tactic is useful for breaking nested if-then-else and match
expressions into separate cases.
For a match
expression with n
cases, the split
tactic generates at most n
subgoals.
For example, given n : Nat
, and a target if n = 0 then Q else R
, split
will generate
one goal with hypothesis n = 0
and target Q
, and a second goal with hypothesis
¬n = 0
and target R
. Note that the introduced hypothesis is unnamed, and is commonly
renamed using the case
or next
tactics.
- split will split the goal (target).
- split at h will split the hypothesis h.
by_cases
by_cases (h :)? p
splits the main goal into two cases, assuming h : p
in the first branch, and h : ¬ p
in the second branch.
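For example:
example (p : Prop) : p ∨ ¬ p := by
  by_cases h : p
  · exact Or.inl h
  · exact Or.inr h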
9.5.14. Decision Procedures
decide
decide
attempts to prove the main goal (with target type p
) by synthesizing an instance of Decidable p
and then reducing that instance to evaluate the truth value of p
.
If it reduces to isTrue h
, then h
is a proof of p
that closes the goal.
The target is not allowed to contain local variables or metavariables.
If there are local variables, you can first try using the revert
tactic with these local variables to move them into the target,
or you can use the +revert
option, described below.
Options:
- decide +revert begins by reverting local variables that the target depends on, after cleaning up the local context of irrelevant variables. A variable is relevant if it appears in the target, if it appears in a relevant variable, or if it is a proposition that refers to a relevant variable.
- decide +kernel uses the kernel for reduction instead of the elaborator. It has two key properties: (1) since it uses the kernel, it ignores transparency and can unfold everything, and (2) it reduces the Decidable instance only once instead of twice.
- decide +native uses the native code compiler (#eval) to evaluate the Decidable instance, admitting the result via the Lean.ofReduceBool axiom. This can be significantly more efficient than using reduction, but it is at the cost of increasing the size of the trusted code base. Namely, it depends on the correctness of the Lean compiler and all definitions with an @[implemented_by] attribute. Like with +kernel, the Decidable instance is evaluated only once.
Limitation: In the default mode or +kernel
mode, since decide
uses reduction to evaluate the term,
Decidable
instances defined by well-founded recursion might not work because evaluating them requires reducing proofs.
Reduction can also get stuck on Decidable
instances with Eq.rec
terms.
These can appear in instances defined using tactics (such as rw
and simp
).
To avoid this, create such instances using definitions such as decidable_of_iff
instead.
Examples
Proving inequalities:
example : 2 + 2 ≠ 5 := by decide
Trying to prove a false proposition:
example : 1 ≠ 1 := by decide /- tactic 'decide' proved that the proposition 1 ≠ 1 is false -/
Trying to prove a proposition whose Decidable
instance fails to reduce
opaque unknownProp : Prop

open scoped Classical in
example : unknownProp := by decide
/- tactic 'decide' failed for proposition
     unknownProp
   since its 'Decidable' instance reduced to
     Classical.choice ⋯
   rather than to the 'isTrue' constructor. -/
Properties and relations
For equality goals for types with decidable equality, usually rfl
can be used in place of decide
.
example : 1 + 1 = 2 := by decide
example : 1 + 1 = 2 := by rfl
native_decide
native_decide
is a synonym for decide +native
.
It will attempt to prove a goal of type p
by synthesizing an instance
of Decidable p
and then evaluating it to isTrue ..
. Unlike decide
, this
uses #eval
to evaluate the decidability instance.
This should be used with care because it adds the entire lean compiler to the trusted
part, and the axiom Lean.ofReduceBool
will show up in #print axioms
for theorems using
this method or anything that transitively depends on them. Nevertheless, because it is
compiled, this can be significantly more efficient than using decide
, and for very
large computations this is one way to run external programs and trust the result.
example : (List.range 1000).length = 1000 := by native_decide
omega
The omega
tactic, for resolving integer and natural linear arithmetic problems.
It is not yet a full decision procedure (no "dark" or "grey" shadows), but should be effective on many problems.
We handle hypotheses of the form x = y
, x < y
, x ≤ y
, and k ∣ x
for x y
in Nat
or Int
(and k
a literal), along with negations of these statements.
We decompose the sides of the inequalities as linear combinations of atoms.
If we encounter x / k
or x % k
for literal integers k
we introduce new auxiliary variables
and the relevant inequalities.
On the first pass, we do not perform case splits on natural subtraction.
If omega
fails, we recursively perform a case split on
a natural subtraction appearing in a hypothesis, and try again.
The options
omega +splitDisjunctions +splitNatSub +splitNatAbs +splitMinMax
can be used to:
- splitDisjunctions: split any disjunctions found in the context, if the problem is not otherwise solvable.
- splitNatSub: for each appearance of ((a - b : Nat) : Int), split on a ≤ b if necessary.
- splitNatAbs: for each appearance of Int.natAbs a, split on 0 ≤ a if necessary.
- splitMinMax: for each occurrence of min a b, split on min a b = a ∨ min a b = b.
Currently, all of these are on by default.
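For instance:
example (x y : Nat) (h : x < y) : x + 1 ≤ y := by
  omega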
bv_omega
bv_omega
is omega
with an additional preprocessor that turns statements about BitVec
into statements about Nat
.
Currently the preprocessor is implemented as try simp only [bv_toNat] at *
.
bv_toNat
is a @[simp]
attribute that you can (cautiously) add to more theorems.
9.5.14.1. SAT Solver Integration
bv_decide
Close fixed-width BitVec
and Bool
goals by obtaining a proof from an external SAT solver and
verifying it inside Lean. The solvable goals are currently limited to the Lean equivalent of
QF_BV
:
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
  intros
  bv_decide
If bv_decide
encounters an unknown definition it will be treated like an unconstrained BitVec
variable. Sometimes this enables solving goals despite not understanding the definition because
the precise properties of the definition do not matter in the specific proof.
If bv_decide
fails to close a goal it provides a counter-example, containing assignments for all
terms that were considered as variables.
In order to avoid calling a SAT solver every time, the proof can be cached with bv_decide?
.
If solving your problem relies inherently on using associativity or commutativity, consider enabling
the bv.ac_nf
option.
Note: bv_decide
uses ofReduceBool
and thus trusts the correctness of the code generator.
bv_normalize
Run the normalization procedure of bv_decide
only. Sometimes this is enough to solve basic
BitVec
goals already.
bv_check
This tactic works just like bv_decide
but skips calling a SAT solver by using a proof that is
already stored on disk. It is called with the name of an LRAT file in the same directory as the
current Lean file:
bv_check "proof.lrat"
bv_decide?
Suggest a proof script for a bv_decide
tactic call. Useful for caching LRAT proofs.
9.5.15. Control Flow
guard_hyp
Tactic to check that a named hypothesis has a given type and/or value.
-
guard_hyp h : t
checks the type up to reducible defeq, -
guard_hyp h :~ t
checks the type up to default defeq, -
guard_hyp h :ₛ t
checks the type up to syntactic equality, -
guard_hyp h :ₐ t
checks the type up to alpha equality. -
guard_hyp h := v
checks value up to reducible defeq, -
guard_hyp h :=~ v
checks value up to default defeq, -
guard_hyp h :=ₛ v
checks value up to syntactic equality, -
guard_hyp h :=ₐ v
checks the value up to alpha equality.
The value v
is elaborated using the type of h
as the expected type.
guard_target
Tactic to check that the target agrees with a given expression.
- guard_target = e checks that the target is defeq at reducible transparency to e.
- guard_target =~ e checks that the target is defeq at default transparency to e.
- guard_target =ₛ e checks that the target is syntactically equal to e.
- guard_target =ₐ e checks that the target is alpha-equivalent to e.
The term e
is elaborated with the type of the goal as the expected type, which is mostly
useful within conv
mode.
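A minimal sketch of these checks inside a proof script (the hypothesis name is arbitrary):
example (n : Nat) (h : n = 1) : n + 1 = 2 := by
  guard_hyp h : n = 1
  guard_target = n + 1 = 2
  omega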
guard_expr
Tactic to check equality of two expressions.
- guard_expr e = e' checks that e and e' are defeq at reducible transparency.
- guard_expr e =~ e' checks that e and e' are defeq at default transparency.
- guard_expr e =ₛ e' checks that e and e' are syntactically equal.
- guard_expr e =ₐ e' checks that e and e' are alpha-equivalent.
Both e
and e'
are elaborated then have their metavariables instantiated before the equality
check. Their types are unified (using isDefEqGuarded
) before synthetic metavariables are
processed, which helps with default instance handling.
sleep
The tactic sleep ms
sleeps for ms
milliseconds and does nothing.
It is used for debugging purposes only.
checkpoint
checkpoint tac
acts the same as tac
, but it caches the input and output of tac
,
and if the file is re-elaborated and the input matches, the tactic is not re-run and
its effects are reapplied to the state. This is useful for improving responsiveness
when working on a long tactic proof, by wrapping expensive tactics with checkpoint
.
See the save
tactic, which may be more convenient to use.
(TODO: do this automatically and transparently so that users don't have to use this combinator explicitly.)
save
save
is defined to be the same as skip
, but the elaborator has
special handling for occurrences of save
in tactic scripts and will transform
by tac1; save; tac2
to by (checkpoint tac1); tac2
, meaning that the effect of tac1
will be cached and replayed. This is useful for improving responsiveness
when working on a long tactic proof, by using save
after expensive tactics.
(TODO: do this automatically and transparently so that users don't have to use this combinator explicitly.)
stop
stop
is a helper tactic for "discarding" the rest of a proof:
it is defined as repeat sorry
.
It is useful when working in the middle of a complex proof,
and less messy than commenting the remainder of the proof.
9.5.16. Term Elaboration Backends
These tactics are used during elaboration of terms to satisfy obligations that arise.
decreasing_tactic
decreasing_tactic
is called by default on well-founded recursions in order
to synthesize a proof that recursive calls decrease along the selected
well founded relation. It can be locally overridden by using decreasing_by tac
on the recursive definition, and it can also be globally extended by adding
more definitions for decreasing_tactic
(or decreasing_trivial
,
which this tactic calls).
decreasing_trivial
Extensible helper tactic for decreasing_tactic
. This handles the "base case"
reasoning after applying lexicographic order lemmas.
It can be extended by adding more macro definitions, e.g.
macro_rules | `(tactic| decreasing_trivial) => `(tactic| linarith)
array_get_dec
This tactic, added to the decreasing_trivial
toolbox, proves that
sizeOf arr[i] < sizeOf arr
, which is useful for well founded recursions
over a nested inductive like inductive T | mk : Array T → T
.
decreasing_with
Constructs a proof of decreasing along a well founded relation, by simplifying, then applying
lexicographic order lemmas and finally using ts
to solve the base case. If it fails,
it prints a message to help the user diagnose an ill-founded recursive definition.
get_elem_tactic
get_elem_tactic
is the tactic automatically called by the notation arr[i]
to prove any side conditions that arise when constructing the term
(e.g. the index is in bounds of the array). It just delegates to
get_elem_tactic_trivial
and gives a diagnostic error message otherwise;
users are encouraged to extend get_elem_tactic_trivial
instead of this tactic.
get_elem_tactic_trivial
get_elem_tactic_trivial
is an extensible tactic automatically called
by the notation arr[i]
to prove any side conditions that arise when
constructing the term (e.g. the index is in bounds of the array).
The default behavior is to just try trivial
(which handles the case
where i < arr.size
is in the context) and simp_arith
and omega
(for doing linear arithmetic in the index).
9.5.17. Debugging Utilities
sorry
The sorry
tactic closes the goal using sorryAx
. This is intended for stubbing out incomplete
parts of a proof while still having a syntactically correct proof skeleton. Lean will give
a warning whenever a proof uses sorry
, so you aren't likely to miss it, but
you can double check if a theorem depends on sorry
by using
#print axioms my_thm
and looking for sorryAx
in the axiom list.
dbg_trace
dbg_trace "foo"
prints foo
when elaborated.
Useful for debugging tactic control flow:
example : False ∨ True := by
  first
  | apply Or.inl; trivial; dbg_trace "left"
  | apply Or.inr; trivial; dbg_trace "right"
9.5.18. Other
trivial
trivial
tries different simple tactics (e.g., rfl
, contradiction
, ...)
to close the current goal.
You can use the command macro_rules
to extend the set of tactics used. Example:
macro_rules | `(tactic| trivial) => `(tactic| simp)
infer_instance
infer_instance
is an abbreviation for exact inferInstance
.
It synthesizes a value of any target type by typeclass inference.
unhygienic
unhygienic tacs
runs tacs
with name hygiene disabled.
This means that tactics that would normally create inaccessible names will instead
make regular variables. Warning: Tactics may change their variable naming
strategies at any time, so code that depends on autogenerated names is brittle.
Users should try not to use unhygienic
if possible.
example : ∀ x : Nat, x = x := by
  unhygienic
    intro           -- x would normally be intro'd as inaccessible
    exact Eq.refl x -- refer to x