Mathematical logic
Mathematical
logic (also known as symbolic logic) is a subfield of mathematics with close connections
to computer science and philosophical logic. The field includes both the mathematical study
of logic and the applications of formal logic to other areas of
mathematics. The unifying themes in mathematical logic include the study of the
expressive power of formal systems and the deductive power of formal proof systems.
Mathematical
logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic,
particularly first-order logic, and definability. In computer science (particularly in the ACM Classification) mathematical logic encompasses additional
topics not detailed in this article; see logic in computer science for those.
Since
its inception, mathematical logic has contributed to, and has been motivated
by, the study of foundations of mathematics. This study began in the late 19th century with the development
of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency
of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and
clarified the issues involved in proving consistency. Work in set theory showed
that almost all ordinary mathematics can be formalized in terms of sets,
although there are some theorems that cannot be proven in common axiom systems
for set theory. Contemporary work in the foundations of mathematics often
focuses on establishing which parts of mathematics can be formalized in
particular formal systems, rather than trying to find theories in which all of
mathematics can be developed.
Mathematical
logic emerged in the mid-19th century as a subfield of mathematics independent
of the traditional study of logic (Ferreirós 2001, p. 443). Before this emergence, logic was
studied with rhetoric,
through the syllogism,
and with philosophy.
The first half of the 20th century saw an explosion of fundamental results,
accompanied by vigorous debate over the foundations of mathematics.
Sophisticated
theories of logic were developed in many cultures, including China, India, Greece and the Islamic world.
In the 18th century, attempts to treat the operations of formal logic in a
symbolic or algebraic way had been made by philosophical mathematicians
including Leibniz and Lambert, but their labors
remained isolated and little known.
19th century
In
the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work,
building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine
of logic into a sufficient framework for the study of foundations of mathematics (Katz 1998, p. 686).
Charles Sanders Peirce built upon the work of Boole to develop a logical system for
relations and quantifiers, which he published in several papers from 1870 to
1885. Gottlob Frege presented an independent development of logic
with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a
turning point in the history of logic. Frege's work remained obscure, however,
until Bertrand Russell began to promote it near the turn of the
century. The two-dimensional notation Frege developed was never widely adopted
and is unused in contemporary texts.
From
1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and
extended the work of Boole, De Morgan, and Peirce, and was a comprehensive
reference to symbolic logic as it was understood at the end of the 19th
century.
Foundational theories
Some
concerns that mathematics had not been built on a proper foundation led to the
development of axiomatic systems for fundamental areas of mathematics such as
arithmetic, analysis, and geometry.
In
logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano (1889) published a set of axioms for arithmetic that
came to bear his name (Peano axioms), using a variation of the logical system of
Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at
the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely
characterized by their induction properties. Dedekind (1888) proposed a different characterization, which
lacked the formal logical character of Peano's axioms. Dedekind's work,
however, proved theorems inaccessible in Peano's system, including the
uniqueness of the set of natural numbers (up to isomorphism) and the recursive
definitions of addition and multiplication from the successor function and mathematical induction.
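In modern notation (a standard presentation rather than Dedekind's original), these recursive definitions from the successor function S take the form

$$a + 0 = a, \qquad a + S(b) = S(a + b),$$
$$a \cdot 0 = 0, \qquad a \cdot S(b) = (a \cdot b) + a.$$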
In
the mid-19th century, flaws in Euclid's axioms for geometry became known (Katz 1998, p. 774). In addition to the independence
of the parallel postulate, established by Nikolai Lobachevsky in 1826 (Lobachevsky 1840), mathematicians discovered that certain
theorems taken for granted by Euclid were not in fact provable from his axioms.
Among these are the theorem that a line contains at least two points, and the theorem that circles of the same radius whose centers are separated by that radius must
intersect. Hilbert (1899) developed a complete set of axioms for geometry, building on previous work by Pasch (1882). The success in axiomatizing geometry
motivated Hilbert to seek complete axiomatizations of other areas of
mathematics, such as the natural numbers and the real line. This would prove to be a major area of
research in the first half of the 20th century.
The
19th century saw great advances in the theory of real analysis, including theories of convergence of functions
and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched
intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule
for computation, or a smooth graph, were no longer adequate. Weierstrass began
to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the
natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817 (Felscher 2000), but remained relatively unknown.Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858,
Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers (Dedekind 1872), a definition still employed in contemporary
texts.
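In modern notation (a standard formulation, not Bolzano's original wording), the (ε, δ)-definition says that a function f is continuous at a point c when

$$\forall \varepsilon > 0\ \exists \delta > 0\ \forall x\ \big( |x - c| < \delta \implies |f(x) - f(c)| < \varepsilon \big),$$

a condition stated entirely in terms of inequalities between numbers, with no appeal to infinitesimals.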
Georg Cantor developed the fundamental concepts of infinite
set theory. His early results developed the theory of cardinality and proved that the reals and the natural
numbers have different cardinalities (Cantor 1874). Over the next twenty years,
Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of
the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be
well-ordered, but was unable to produce a proof for this result, leaving it as
an open problem in 1895 (Katz 1998, p. 807).
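In modern terms, Cantor's theorem states that for every set A,

$$|A| < |\mathcal{P}(A)|,$$

and the diagonal argument proves it: no function f : A → 𝒫(A) is surjective, because the set D = { x ∈ A : x ∉ f(x) } differs from f(x) at the element x for every x in A.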
20th century
In
the early decades of the 20th century, the main areas of study were set theory
and formal logic. The discovery of paradoxes in informal set
theory caused some to wonder whether mathematics itself is inconsistent, and to
look for proofs of consistency.
In
1900, Hilbert posed a famous list of 23 problems for the next century.
The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively;
the tenth was to produce a method that could decide whether a multivariate
polynomial equation over the integers has a solution.
Subsequent work to resolve these problems shaped the direction of mathematical
logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would
decide, given a formalized mathematical statement, whether the statement is
true or false.
Set theory and paradoxes
Ernst Zermelo (1904) gave a proof that every set could be
well-ordered, a result Georg Cantor had been unable to
obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among
mathematicians and the pioneers of set theory. The immediate criticism of the
method led Zermelo to publish a second exposition of his result, directly
addressing criticisms of his proof (Zermelo 1908a). This paper led to the general acceptance of
the axiom of choice in the mathematics community.
Skepticism
about the axiom of choice was reinforced by recently discovered paradoxes in
naive set theory. Cesare Burali-Forti (1897) was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard (1905) discovered Richard's paradox.
Zermelo
(1908b) provided the first set of axioms for set
theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox.
In
1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions
and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an
effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works
of the 20th century, although the framework of type theory did not prove
popular as a foundational theory for mathematics (Ferreirós 2001, p. 445).
Fraenkel
(1922) proved that the axiom of choice cannot be
proved from the remaining axioms of Zermelo's set theory with urelements. Later work by Paul Cohen (1966) showed that the addition of urelements is not
needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed
the method of forcing, which is now an
important tool for establishing independence results in set theory.
Leopold Löwenheim (1915) and Thoralf Skolem (1920) obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the
cardinalities of infinite structures. Skolem realized that this theorem would
apply to first-order formalizations of set theory, and that it implies any such
formalization has a countable model. This counterintuitive
fact became known as Skolem's paradox.
In
his doctoral thesis, Kurt Gödel (1929) proved the completeness theorem, which establishes a correspondence between syntax and semantics
in first-order logic. Gödel used the completeness theorem to prove
the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant
logic used by mathematicians.
In
1931, Gödel published On Formally
Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a
different meaning of the word) of all sufficiently strong, effective
first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for
mathematics, striking a strong blow to Hilbert's program. It showed the impossibility
of providing a consistency proof of arithmetic within any formal theory of
arithmetic. Hilbert, however, did not acknowledge the importance of the
incompleteness theorem for some time.
Gödel's
theorem shows that a consistency proof of any sufficiently strong, effective
axiom system cannot be obtained in the system itself, if the system is
consistent, nor in any weaker system. This leaves open the possibility of
consistency proofs that cannot be formalized within the system they consider.
Gentzen (1936) proved the consistency of arithmetic using a
finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel (1958) gave a different consistency proof, which
reduces the consistency of classical arithmetic to that of intuitionistic
arithmetic in higher types.
Alfred Tarski developed the basics of model theory.
Beginning
in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish a series of encyclopedic mathematics
texts. These texts, written in an austere and axiomatic style, emphasized
rigorous presentation and set-theoretic foundations. Terminology coined by
these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were
widely adopted throughout mathematics.
The
study of computability came to be known as recursion theory, because early
formalizations by Gödel and Kleene relied on recursive definitions of
functions. When these definitions were shown equivalent to
Turing's formalization involving Turing machines, it became clear that a new
concept – the computable function – had been discovered, and that this definition was robust enough
to admit numerous independent characterizations. In his work on the
incompleteness theorems in 1931, Gödel lacked a rigorous concept of an
effective formal system; he immediately realized that the new definitions of
computability could be used for this purpose, allowing him to state the
incompleteness theorems in generality that could only be implied in the
original paper.
Numerous
results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene (1943) introduced the concepts of relative computability,
foreshadowed by Turing (1939), and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order
functionals. Kleene and Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory.
The Handbook of Mathematical Logic makes a rough division of contemporary mathematical logic into four areas:
1. set theory,
2. model theory,
3. recursion theory, and
4. proof theory and constructive mathematics (considered as parts of a single area).
Each
area has a distinct focus, although many techniques and results are shared
between multiple areas. The border lines between these fields, and the lines
between mathematical logic and other fields of mathematics, are not always
sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory,
but has also led to Löb's theorem in modal logic. The method of forcing is employed in set
theory, model theory, and recursion theory, as well as in the study of
intuitionistic mathematics.
The
mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily
considered a subfield of mathematical logic. Because of its applicability in
diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational
system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ
classical or nonclassical logic.
At
its core, mathematical logic deals with mathematical concepts expressed using
formal logical systems. These systems, though they differ in many
details, share the common property of considering only expressions in a fixed
formal language, or signature. The system of first-order logic is the most widely studied today, because of its
applicability to foundations of mathematics and because of its desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with nonclassical logics
such as intuitionistic logic.
First-order
logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while
its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse.
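For example (an illustrative sentence, not one drawn from the sources above), the first-order sentence

$$\forall x\, \exists y\ (x < y)$$

asserts that every element has a strict upper bound; it is true in the structure (ℕ, <) but false in every finite linear order, and its quantifiers range only over the fixed domain of the structure in question.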
Early
results about formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem showed that if a set of sentences in a countable
first-order language has an infinite model then it has at least one model of
each infinite cardinality. This shows that it is impossible for a set of
first-order axioms to characterize the natural numbers, the real numbers, or
any other infinite structure up to isomorphism. As the goal of early foundational studies was
to produce axiomatic theories for all parts of mathematics, this limitation was
particularly stark.
Gödel's completeness theorem (Gödel 1929) established the equivalence between semantic
and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is
true in every model that satisfies a particular set of axioms, then there must
be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness
theorem, and it took many years before logicians grasped its significance and
began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words
that an inconsistent set of formulas must have a finite inconsistent subset.
The completeness and compactness theorems allow for sophisticated analysis of
logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence
of first-order logic in mathematics.
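Stated schematically (standard formulations): for a set of first-order sentences Γ and a sentence φ, the completeness theorem gives

$$\Gamma \vdash \varphi \quad \Longleftrightarrow \quad \Gamma \models \varphi,$$

and the compactness theorem says that Γ has a model if and only if every finite subset of Γ has a model.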
Gödel's incompleteness theorems (Gödel 1931) establish additional limits on first-order
axiomatizations. The first incompleteness
theorem states that for any
sufficiently strong, effectively given logical system there exists a statement which is true but not
provable within that system. Here a logical system is effectively given if it
is possible to decide, given any formula in the language of the system, whether
the formula is an axiom. A logical system is sufficiently strong if it can
express the Peano axioms.
When applied to first-order logic, the first incompleteness theorem implies
that any sufficiently strong, consistent, effective first-order theory has
models that are not elementarily equivalent, a stronger limitation than the one established by the
Löwenheim–Skolem theorem. The second incompleteness
theorem states that no
sufficiently strong, consistent, effective axiom system for arithmetic can
prove its own consistency, which has been interpreted to show that Hilbert's
program cannot be completed.
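Schematically (a standard modern formulation): for any consistent, effective, sufficiently strong theory T, the first theorem provides a sentence G_T that is true but not provable in T, and the second theorem gives

$$T \nvdash \mathrm{Con}(T),$$

where Con(T) is an arithmetical sentence expressing the consistency of T.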
Many
logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an
infinite amount of information, and higher-order logics, which include a portion of set theory directly in their
semantics.
The most well studied infinitary logic is $L_{\omega_1,\omega}$. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of $L_{\omega_1,\omega}$ such as
$$(x = 0) \lor (x = 1) \lor (x = 2) \lor \cdots.$$
Higher-order logics allow for quantification not only over elements of the domain of discourse, but also over subsets of the domain of discourse, sets of such subsets, and
other objects of higher type. The semantics are defined so that, rather than
having a separate domain for each higher-type quantifier to range over, the
quantifiers instead range over all objects of the appropriate type. The logics
studied before the development of first-order logic, for example Frege's logic,
had similar set-theoretic aspects. Although higher-order logics are more
expressive, allowing complete axiomatizations of structures such as the natural
numbers, they do not satisfy analogues of the completeness and compactness
theorems from first-order logic, and are thus less amenable to proof-theoretic
analysis.
Another
type of logic is the family of fixed-point logics, which allow inductive definitions of the kind one writes for primitive recursive functions.
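The paradigmatic inductive definition alluded to here is the primitive recursion scheme (standard form), which defines a function f from given functions g and h by

$$f(0, \bar{y}) = g(\bar{y}), \qquad f(n + 1, \bar{y}) = h\big(n, f(n, \bar{y}), \bar{y}\big).$$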
One can formally define the notion of an extension of first-order logic, a notion that encompasses all the logics discussed in this section because they behave like first-order logic in certain fundamental ways, but that does not encompass all logics in general; for example, it does not cover intuitionistic, modal, or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic itself.
Modal logics include additional modal operators, such as an
operator which states that a particular formula is not only true, but
necessarily true. Although modal logic is not often used to axiomatize
mathematics, it has been used to study the properties of first-order
provability (Solovay 1976) and set-theoretic forcing (Hamkins and Löwe 2007).
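The modal system arising in this work on provability is the logic GL, whose characteristic axiom is Löb's axiom (standard formulation)

$$\Box(\Box \varphi \rightarrow \varphi) \rightarrow \Box \varphi,$$

where □φ is read "φ is provable"; Solovay's theorem identifies GL as exactly the modal logic of provability in Peano arithmetic.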
Intuitionistic logic was developed by Heyting to study Brouwer's program of
intuitionism, in which Brouwer himself avoided formalization. Intuitionistic
logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation
is true. Kleene's work with the proof theory of intuitionistic logic showed
that constructive information can be recovered from intuitionistic proofs. For
example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of
arithmetic such as Peano arithmetic.
Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental
example is the use of Boolean algebras to represent truth values in classical
propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic
propositional logic. Stronger logics, such as first-order logic and
higher-order logic, are studied using more complicated algebraic structures
such as cylindric algebras.
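A minimal sketch of this contrast (the three-element Heyting algebra is a standard example; the code and its names are illustrative, not taken from any particular library):

```python
from fractions import Fraction

# Truth values of the three-element Heyting algebra 0 < 1/2 < 1.
# On a linear order, meet is min, join is max, implication a -> b
# is 1 when a <= b and b otherwise, and negation is a -> 0.
ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)

def meet(a, b):  # conjunction
    return min(a, b)

def join(a, b):  # disjunction
    return max(a, b)

def imp(a, b):   # implication
    return ONE if a <= b else b

def neg(a):      # negation, defined as a -> 0
    return imp(a, ZERO)

# The law of the excluded middle, a or not-a, need not evaluate to 1:
for a in (ZERO, HALF, ONE):
    print(a, join(a, neg(a)))  # second column: 1, 1/2, 1 -- it fails at a = 1/2
```

Restricting the values to {0, 1} recovers the two-element Boolean algebra, in which a ∨ ¬a always evaluates to 1; this mirrors the fact that the excluded middle is valid classically but not intuitionistically.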
Set theory is the study of sets, which are abstract
collections of objects. Many of the basic notions, such as ordinal and cardinal
numbers, were developed informally by Cantor before formal axiomatizations of
set theory were developed. The first such axiomatization, due to Zermelo (1908b), was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for
mathematics.
Other
formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows
objects such as the set of all sets at the cost of restrictions on its
set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory.
Two
famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo (1904), was proved independent of ZF by Fraenkel (1922), but has come to be widely accepted by
mathematicians. It states that given a collection of nonempty sets there is a
single set C that contains exactly one element from each set
in the collection. The set C is said to "choose" one element from
each set in the collection. While the ability to make such a choice is
considered obvious by some, since each set in the collection is nonempty, the lack
of a general, concrete rule by which the choice can be made renders the axiom
nonconstructive. Stefan Banach and Alfred Tarski (1924) showed that the axiom of choice can be
used to decompose a solid ball into a finite number of pieces which can then be
rearranged, with no scaling, to make two solid balls of the original size. This
theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice.
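In symbols (one standard formulation via choice functions):

$$\forall X\, \Big[\, \varnothing \notin X \;\Longrightarrow\; \exists f\colon X \to \textstyle\bigcup X\ \ \forall A \in X\ \big( f(A) \in A \big) \Big].$$

The set C described above corresponds to the range of such a choice function f when the sets in the collection are pairwise disjoint.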
The
continuum hypothesis, first proposed as a conjecture by Cantor, was listed by
David Hilbert as one of his 23 problems in 1900. Gödel showed that the
continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel
set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In
1963, Paul Cohen showed that the
continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set
theory (Cohen 1966). This independence result did not completely
settle Hilbert's question, however, as it is possible that new axioms for set
theory could resolve the hypothesis. Recent work along these lines has been
conducted by W. Hugh Woodin, although its importance is not yet clear (Woodin 2001).
Contemporary
research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the
existence of such cardinals cannot be proved in ZFC. The existence of the
smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that
large cardinals have extremely high cardinality, their existence has many ramifications for the
structure of the real line. Determinacy refers to the possible existence of winning
strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties
of the real line and other Polish spaces.
Model theory studies the models of various formal theories.
Here a theory is a set of formulas in
a particular formal logic and signature,
while a model is a structure that
gives a concrete interpretation of the theory. Model theory is closely related
to universal algebra and algebraic geometry, although the methods of model theory focus more on logical
considerations than those fields.
The
set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the
properties of models in a particular elementary class, or determine whether
certain classes of structures form elementary classes.
The
method of quantifier elimination can be used to show that definable sets in particular theories
cannot be too complicated. Tarski (1948) established quantifier elimination for real-closed fields, a result which also shows the theory of the
field of real numbers is decidable.
(He also noted that his methods were equally applicable to algebraically closed
fields of arbitrary characteristic.) A modern subfield developing from this is
concerned with o-minimal structures.
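A worked example over the real field (a standard illustration of quantifier elimination): the quantified formula on the left is equivalent, in the theory of real-closed fields, to the quantifier-free condition on the right,

$$\exists x\ (x^2 + bx + c = 0) \quad \Longleftrightarrow \quad b^2 - 4c \ge 0,$$

and iterating such eliminations reduces any sentence to a decidable quantifier-free statement.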
Morley's categoricity theorem, proved by Michael D. Morley (1965), states that if a first-order theory in a
countable language is categorical in some uncountable cardinality, i.e. all
models of this cardinality are isomorphic, then it is categorical in all
uncountable cardinalities.
A
trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many
nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum
hypothesis. Many special cases of this conjecture have been established.
Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets which have the
same level of uncomputability. Recursion theory also includes the study of
generalized computability and definability. Recursion theory grew from the
work of Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.
Classical
recursion theory focuses on the computability of functions from the natural
numbers to the natural numbers. The fundamental results establish a robust,
canonical class of computable functions with numerous independent, equivalent
characterizations using Turing machines, λ calculus,
and other systems. More advanced results concern the structure of the Turing
degrees and the lattice of recursively enumerable sets.
Generalized
recursion theory extends the ideas of recursion theory to computations that are
no longer necessarily finite. It includes the study of computability in higher
types as well as areas such as hyperarithmetical theory and α-recursion theory.
Contemporary
research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory.
An
important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm which returns the
correct answer for all legal inputs to the problem. The first results about
unsolvability, obtained independently by Church and Turing in 1936, showed that
the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing
the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory
and computer science.
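A minimal sketch of Turing's diagonal argument, phrased in Python (the names here are illustrative; `halts` is the hypothetical decider assumed only in order to derive a contradiction):

```python
def halts(program_source: str, argument: str) -> bool:
    """Hypothetical total decider: True exactly when running the program
    given by `program_source` on `argument` would eventually halt."""
    raise NotImplementedError  # assumed to exist for the sake of argument

def diagonal(program_source: str) -> None:
    # Loop forever precisely when the given program would halt on itself.
    if halts(program_source, program_source):
        while True:
            pass

# Feeding diagonal its own source yields a contradiction: if it halts on
# itself, then halts(...) returned True, so it loops forever; if it loops,
# then halts(...) returned False, so it halts. Hence no total computable
# `halts` can exist.
```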
There
are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959.
The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example.
Hilbert's tenth problem asked for an algorithm to determine whether a multivariate
polynomial equation with integer coefficients has a solution in the integers.
Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem
was proved by Yuri Matiyasevich in 1970 (Davis 1973).
Proof theory is the study of formal proofs in various logical
deduction systems. These proofs are represented as formal mathematical objects,
facilitating their analysis by mathematical techniques. Several deduction
systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen.
The
study of constructive mathematics, in the context of mathematical logic, includes
the study of systems in non-classical logic such as intuitionistic logic, as
well as the study of predicative systems. An early
proponent of predicativism was Hermann Weyl,
who showed it is possible to develop a large part of real analysis using only
predicative methods (Weyl 1918).
Because
proofs are entirely finitary, whereas truth in a structure is not, it is common
for work in constructive mathematics to emphasize provability. The relationship
between provability in classical (or nonconstructive) systems and provability
in intuitionistic (or constructive, respectively) systems is of particular
interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate)
classical logic into intuitionistic logic, allowing some properties about
intuitionistic proofs to be transferred back to classical proofs.
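One standard presentation of the translation defines φ^N by recursion on the structure of φ:

$$P^N = \neg\neg P \ \text{ for atomic } P, \qquad (\varphi \wedge \psi)^N = \varphi^N \wedge \psi^N, \qquad (\varphi \rightarrow \psi)^N = \varphi^N \rightarrow \psi^N,$$
$$(\varphi \vee \psi)^N = \neg(\neg\varphi^N \wedge \neg\psi^N), \qquad (\forall x\, \varphi)^N = \forall x\, \varphi^N, \qquad (\exists x\, \varphi)^N = \neg\forall x\, \neg\varphi^N,$$

and classical predicate logic proves φ exactly when intuitionistic predicate logic proves φ^N.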
Recent
developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen.
Connections with computer science
The
study of computability theory in computer science is closely related to the study of computability
in mathematical logic. There is a difference of emphasis, however. Computer
scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on
computability as a theoretical concept and on noncomputability.
The
theory of semantics of
programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard isomorphism between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages.
Computer
science also contributes to mathematics by developing techniques for the
automatic checking or even finding of proofs, such as automated theorem proving and logic programming.
Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974), established that NP is precisely the set of languages expressible by
sentences of existential second-order logic.
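For example (a standard illustration), 3-colorability of a graph with edge relation E, a problem in NP, is defined by the existential second-order sentence

$$\exists R\, \exists G\, \exists B\ \Big[ \forall x\, \big( R(x) \vee G(x) \vee B(x) \big) \ \wedge\ \forall x\, \forall y\, \big( E(x,y) \rightarrow \neg\big( (R(x) \wedge R(y)) \vee (G(x) \wedge G(y)) \vee (B(x) \wedge B(y)) \big) \big) \Big],$$

which quantifies existentially over the three color predicates and then imposes only first-order conditions.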
In
the 19th century, mathematicians became aware of logical gaps and
inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an
example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function,
came into question in analysis, as pathological examples such as Weierstrass'
nowhere-differentiable continuous function were discovered.
Cantor's
study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all
else is the work of man," endorsing a return to the study of finite,
concrete objects in mathematics. Although Kronecker's argument was carried
forward by constructivists in the 20th century, the mathematical community as a
whole rejected them. David Hilbert argued in favor of the
study of the infinite, saying "No one shall expel us from the Paradise
that Cantor has created."
Mathematicians
began to search for axiom systems that could be used to formalize large parts
of mathematics. In addition to removing ambiguity from previously-naive terms
such as function, it was hoped that this axiomatization would allow for
consistency proofs. In the 19th century, the main method of proving the
consistency of a set of axioms was to provide a model for it. Thus, for
example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a
fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model
of elliptic geometry, satisfies the axioms of plane geometry except
the parallel postulate.
With
the development of formal logic, Hilbert asked whether it would be possible to
prove that an axiom system is consistent by analyzing the structure of possible
proofs in the system, and showing through this analysis that it is impossible
to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis
should be entirely concrete, using the term finitary to refer to the methods
he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which
show that the consistency of formal theories of arithmetic cannot be
established using methods formalizable in those theories. Gentzen showed that
it is possible to produce a proof of the consistency of arithmetic in a
finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof
theory.
A
second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different
programs with various definitions of constructive.
At the most accommodating end, proofs in ZF set theory that do not use the
axiom of choice are called constructive by many mathematicians. More limited
versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real
numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values
of the function must be known before the function itself can be said to exist.
In
the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a philosophy of
mathematics. This philosophy, poorly understood at first, stated that in order
for a mathematical statement to be true to a mathematician, that person must be
able to intuit the statement, to not only believe its truth but
understand the reason for its truth. A consequence of this definition of truth
was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not
be claimed to be true while their negations also could not be claimed true.
Brouwer's philosophy was influential, and the cause of bitter disputes among
prominent mathematicians. Later, Kleene and Kreisel would study formalized
versions of intuitionistic logic (Brouwer rejected formalization, and presented
his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models,
intuitionism became easier to reconcile with classical mathematics.