The stable character of mathematical models and theories is not always evident - because of our Platonist habits (we are used to treating mathematical objects as a specific "world"). Few people will dispute the stable character of a fully axiomatic theory. All principles of reasoning allowed in such a theory are presented explicitly in its axioms. Thus the basis of principles is fixed, and any change in it yields explicit changes in the axioms.
Can we also regard as fixed those theories that are not yet fully axiomatic? How is this possible? For example, all mathematicians are unanimous about the ways of reasoning that allow us to prove theorems about natural numbers (other ways yield only hypotheses or errors). Still, most mathematicians do not know the axioms of arithmetic! And even in theories that seem to be axiomatic (as, for example, the geometry of Euclid's "Elements") we can find aspects of reasoning that are commonly acknowledged as correct, yet are not presented in the axioms. For example, the properties of the relation "the point A is located on a straight line between the points B and C" are used by Euclid without any justification. Only in the XIX century did M. Pasch introduce the "axioms of order" describing this relation. Still, even before that time all mathematicians treated it in a uniform way.
Trying to explain this phenomenon, we are led to the concept of intuition. Intuition is usually treated as "creative thinking", "direct obtaining of truth", etc. I am interested in much more prosaic aspects of intuition.
The human brain is a very complicated system of processes. Only a small part of these electrochemical fireworks can be controlled consciously. Therefore, alongside the processes going on at the conscious level, there must be a much greater number of thinking processes going on at the unconscious level. Experience shows that when the result of some unconscious thinking process is very important for the person, it (the result) can sometimes be recognized at the conscious level. The process itself remains hidden; for this reason the effect appears as a "direct obtaining of truth", etc. (see Poincare [1908], Hadamard [1945]).
Since unconscious processes yield not only arbitrary dreams, but also (sometimes) reasonable solutions to real problems, there must be some "reasonable principles" ruling them. In real mathematical theories we have such unconscious "reasonable principles" ruling our reasoning (together with the axioms, or without any axioms). Relatively closed sets of such unconscious ruling "principles" are the most elementary type of intuition used in mathematics.
We can say, therefore, that a theory (or model) can be stable not only due to some system of axioms, but also due to a specific intuition. So, we can speak about the intuition of natural numbers that determines our reasoning about these numbers, and about the "Euclidean intuition" that makes geometry completely definite, even though Euclid's axioms do not contain many essential principles of geometric reasoning.
How can we explain the emergence of intuitions that rule the reasoning of so many people uniformly? It seems that they can arise because all human beings are approximately alike, because they deal with approximately the same external world, and because in the process of education and of practical and scientific work they tend to bring their views into accordance with each other.
As investigations go on, they can reach a level of complexity at which the degree of definiteness of intuitive models is no longer sufficient. Then various conflicts between specialists can arise about which ways of reasoning should be accepted. It even happens that a commonly acknowledged way of reasoning leads to absurd conclusions.
Such situations have appeared many times in the history of mathematics: the crash of discrete geometric intuition after the discovery of incommensurable magnitudes (end of the VI century BC), problems with negative and complex numbers (up to the end of the XVIII century), the dispute between L. Euler and J. d'Alembert on the concept of function (XVIII century), groundless operations with divergent series (up to the beginning of the XIX century), problems with the acceptance of Cantor's set theory, paradoxes in set theory (end of the XIX century), scandals around the axiom of choice (beginning of the XX century). All this was caused by the inevitably uncontrollable nature of unconscious processes. It seems that the ruling "principles" of these processes are picked up and fixed by something like "natural selection", which is not capable of far-reaching co-ordination without making errors. Therefore, the appearance of (real or imagined) paradoxes in intuitive theories is not surprising.
The defining intuition of a theory does not always remain constant. Frequent changes happen during the initial period, when the intuition (like the theory itself) is not yet stabilized. During this most delicate period of evolution, the sharpest conflicts appear. The only reliable way out of such situations is the following: we must convert (at least partly) the unconscious ruling "principles" into conscious ones and then investigate their accordance with each other. If this conversion were meant literally, it would be impossible, since we cannot know the internal structure of a specific intuition. We can speak here only about a reconstruction of a "black box" in some other - explicit - terms. Two different approaches are usually applied for such a reconstruction: the so-called genetic method and the axiomatic method.
The genetic method tries to reconstruct intuition by means of some other theory (which can itself be intuitive). Thus, a "suspicious" intuition is modeled using a "more reliable" one. For example, in this way the objections to the use of complex numbers were removed: complex numbers were presented as points of a plane. In this way even their strangest properties (as, for example, the infinite set of values of log x for a negative x) were converted into simple theorems of geometry. After this, all disputes stopped. In a similar way the problems with the basic concepts of the Calculus (limit, convergence, continuity, etc.) were cleared up - through their definition in terms of epsilon-delta.
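For example (in modern notation, quoted here only as an illustration), for a negative number -r (with r > 0) the picture of complex numbers as points of the plane immediately yields the familiar infinite set of values of the logarithm, one value for each full turn around the origin:

    \log(-r) = \ln r + (2k+1)\pi i, \qquad k = 0, \pm 1, \pm 2, \ldots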
It turned out, however, that some of these concepts, after the reconstruction in terms of epsilon-delta, acquired unexpected properties missing in the original intuitive concepts. Thus, for example, it was believed that every continuous function of a real variable is differentiable almost everywhere (except at some isolated "break-points"). After the concept of continuous function was redefined in terms of epsilon-delta, it turned out that a continuous function can be constructed that is nowhere differentiable (the famous construction of C. Weierstrass).
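Weierstrass's construction is usually stated as the series below (the conditions are those of his original construction, quoted here for reference): the series converges uniformly, so its sum is continuous in the epsilon-delta sense, yet it is differentiable at no point.

    W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}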
The appearance of unexpected properties in reconstructed concepts means that here we indeed have a reconstruction - not a direct "copying" of intuitive concepts - and that we must take the problem seriously: is our reconstruction adequate?
The genetic method clears up one intuition in terms of another one, i.e. it works relatively. The axiomatic method, conversely, works "absolutely": among the commonly acknowledged assertions about the objects of a theory some subset is selected, and the assertions from this subset are called axioms, i.e. they are acknowledged as true without any proof. All other assertions of the theory must be proved using the axioms. These proofs may still contain intuitive steps, yet these must be "more evident" than the ideas presented in the axioms. The most famous applications of the axiomatic method are the following: Euclid's axioms, Hilbert's axioms for Euclidean geometry, G. Peano's axioms for the arithmetic of natural numbers, and E. Zermelo's and A. Fraenkel's axioms for set theory.
The axiomatic method (as well as the genetic method) yields only a reconstruction of intuitive concepts. The problem of adequacy reduces here to the question whether all essential properties of the intuitive concepts are represented in the axioms. From this point of view the most complicated situation arises when axioms are used to rescue a theory that has "lost its way" in paradoxes. The Zermelo-Fraenkel axioms were developed in exactly such a situation - paradoxes having appeared in intuitive set theory. Here the problem of adequacy is very complicated: is all the positive content of the theory saved?
What criteria can be set for the adequacy of a reconstruction? Let us recall the various definitions of the real number concept in terms of rational numbers, presented in the 1870s almost simultaneously by R. Dedekind, G. Cantor and some others. Why do we regard these reconstructions as satisfactory? And how can the adequacy of a reconstruction be justified when the original concept remains hidden in intuition, and every attempt to get it out is itself a reconstruction with the same problem of adequacy? The only realistic answer is to take into account only those aspects of intuitive concepts that can be recognized in the practice of mathematical reasoning. This means, first, that all properties of real numbers acknowledged before as "evident" must be proved on the basis of the reconstructed concept. Secondly, all intuitively proved theorems of the Calculus must be proved by means of the reconstructed concept. If this is done, it means that those aspects of the intuitive concept of real number that have managed to appear in mathematical practice are all explicitly represented in the reconstructed concept. Still, maybe some "hidden" aspects of the intuitive real number concept have not yet appeared in practice? And they will appear in the future? At first glance, it seems hard to dispute such a proposition.
However, let us suppose that this is the case, and that in 2099 somebody proves a new theorem of the Calculus using a property of real numbers never used before in mathematical reasoning. Will all the other mathematicians then agree immediately that this property was already "intended" in 1999? At the very least, it will be impossible to verify this claim: none of the mathematicians living today will survive 100 years.
Presuming that intuitive mathematical concepts can possess "hidden" properties that do not appear in practice for a long time, we fall into the usual mathematical Platonism (i.e. we assume that the "world" of mathematical objects exists independently of mathematical reasoning).
Still, let us consider Freiling's Axiom of Symmetry (see http://www.jmas.co.jp/FAQs/sci-math-faq/continuum). Let A be the set of functions mapping real numbers into countable sets of real numbers. Given a function f in A and arbitrary real numbers x and y, we see that x is in f(y) with probability 0, i.e. x is not in f(y) with probability 1. Similarly, y is not in f(x) with probability 1. Freiling's axiom AX states: "for every f in A, there exist x and y such that x is not in f(y) and y is not in f(x)". The intuitive justification: we can find such x and y by choosing them at random. In ZFC, AX is equivalent to "not CH"; hence (CH being undecidable in ZFC) neither AX nor "not AX" can be derived from the axioms of ZFC. Do you think AX is a counter-example to my previous thesis?
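Written out in symbols (this is just a restatement of the quoted sentence), the axiom says:

    AX: \quad \forall f \in A \;\; \exists x, y \in \mathbb{R} \;\; \big( x \notin f(y) \;\wedge\; y \notin f(x) \big).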
Some intuitive concepts admit several different, yet equivalent, explicit reconstructions. In this way additional, very important evidence of adequacy can be obtained. Let us recall, again, the various definitions of real numbers in terms of rational numbers. Cantor's definition was based upon convergent sequences of rational numbers. Dedekind defined real numbers as "cuts" in the set of rational numbers. One more definition was obtained using (infinite) decimal fractions. The equivalence of all these definitions was proved (we cannot strictly prove the equivalence of an intuitive concept and its reconstruction, yet we can prove - or disprove - the equivalence of two explicit reconstructions).
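For reference, the two best-known reconstructions can be stated as follows (slightly simplified): in Dedekind's version, a real number is a cut (A, B), i.e. a partition \mathbb{Q} = A \cup B into non-empty sets such that every element of A is smaller than every element of B (and A has no greatest element); in Cantor's version, a real number is an equivalence class of Cauchy sequences (q_n) of rationals, where (q_n) \sim (r_n) iff q_n - r_n \to 0. The equivalence mentioned above means that there is a one-to-one correspondence between cuts and equivalence classes preserving the arithmetical operations and the ordering.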
Another striking example is the reconstruction of the intuitive notion of computability (the concept of algorithm). Since the 1930s several very different explicit reconstructions of this notion have been proposed: recursive functions, the "Turing machines" of A. M. Turing, the lambda calculus of A. Church, the canonical systems of E. Post, the normal algorithms of A. A. Markov, etc. And here, too, the equivalence of all the reconstructions was proved. The equivalence of different reconstructions of the same intuitive concept means that the extension of the reconstructed explicit concepts is not accidental. This is a very important additional argument for replacing the intuitive concept by an explicit reconstruction.
The trend to replace intuitive concepts by their more or less explicit reconstructions appears very definitely in the history of mathematics. Intuitive theories cannot develop normally without such reconstructions: the definiteness of intuitive basic principles becomes insufficient as the complexity of concepts and methods grows. In most situations the reconstruction can be performed by the genetic method, yet to reconstruct fundamental mathematical concepts the axiomatic method must be used (fundamental concepts are called fundamental precisely because they cannot be reduced to other concepts).
Goedel's incompleteness theorem has provoked much talk about the insufficiency of the axiomatic method for a true reconstruction of "living, informal" mathematical thinking. Some people say that axioms are not able to cover "all the treasures of informal mathematics". This is once again the usual mathematical Platonism, converted into a methodological one (for a detailed analysis see Podnieks [1981, 1992], or Section 6.1).
Does the "axiomatic reasoning" differ in principle from the informal mathematical reasoning? Do there exist proofs in mathematics obtained not following the pattern "premises - conclusion"? If not and every mathematical reasoning process can be reduced to a chain of conclusions, we may ask: are these conclusions going on by some definite rules that do not change from one situation to another? And, if these rules are definite, can they being a function of the human brain be such that a complete explicit formulation is impossible? Today, if we cannot formulate some "rules" explicitly, then how can we demonstrate that they are definite?
Therefore, it is nonsense to speak about the limited applicability of axiomatization: the limits of axiomatization coincide with the limits of mathematics itself! Goedel's incompleteness theorem is an argument against Platonism, not against formalism! Goedel's theorem demonstrates that no stable, self-contained fantastic "world of ideas" can be perfect. Any stable (self-contained) "world of ideas" leads us either to contradictions or to undecidable problems.
In the process of evolution of mathematical theories, axioms and intuition interact with each other. Axioms "clear up" the intuition when it has lost its way. Still, axiomatization also has some unpleasant consequences: many steps of intuitive reasoning, expressed by a specialist very compactly, become very long and tedious in an axiomatic theory. Therefore, after an intuitive theory is replaced by an axiomatic one (this replacement may be non-equivalent because of defects in the intuitive theory), specialists learn a new intuition, and thus they restore the creative potential of their theory. Let us recall the history of the axiomatization of set theory. In the 1890s contradictions were discovered in Cantor's intuitive set theory, and they were removed by means of axiomatization. Of course, the axiomatic Zermelo-Fraenkel set theory differs from Cantor's intuitive theory not only in form, but also in some aspects of content. For this reason specialists have developed new, modified intuitions (for example, the intuition of sets and proper classes) that allow them to work efficiently in the new theory. Today, again, all serious theorems of set theory are proved intuitively.
What are the main benefits of axiomatization? First, as we have seen, axioms allow us to "correct" intuition: to remove the inaccuracies, ambiguities and paradoxes that sometimes arise due to the insufficient controllability of intuitive processes.
Secondly, axiomatization allows a detailed analysis of the relations between the basic principles of a theory (to establish their dependence or independence, etc.), and between the principles and the theorems (to prove some theorems, only a part of the axioms may be necessary). Such investigations may lead to general theories that can be applied to several more specific theories (recall the theory of groups).
Thirdly, sometimes after axiomatization we can establish that the theory under consideration is not able to solve some of the naturally arising problems (recall the continuum problem in set theory). In such situations we may try to improve the theory, even by developing several alternative theories.
How far can we proceed in the axiomatization of a theory? Is complete elimination of intuition, i.e. full reduction to a list of axioms and rules of inference, possible? The work of Gottlob Frege, Bertrand Russell, David Hilbert and their colleagues showed how this can be achieved even for the most complicated mathematical theories. All these theories were reduced to axioms and rules of inference without any admixture of intuition. The logical techniques developed by these men allow us today to axiomatize completely any theory based on a stable system of principles (i.e. any mathematical theory).
What do such 100% axiomatic theories look like? They are called formal theories (also formal systems, or deductive systems), the term underlining that no step of reasoning can be made without a reference to an exactly formulated list of axioms and rules of inference. Even the most "self-evident" logical principles (like "if A implies B, and B implies C, then A implies C") must be either formulated explicitly in the list of axioms and rules, or derived from it.
The exact definition of "formal" can be given in terms of the theory of algorithms (i.e. of recursive functions): a theory T is called a formal theory iff an algorithm (i.e. a mechanically applicable computation procedure) is presented for checking the correctness of reasoning via the principles of T. This means that whenever somebody is going to publish a "mathematical text", calling it "a proof of a theorem in T", we must be able to check mechanically whether the text in question really is a proof according to the standards of reasoning accepted in T. Thus, the standards of reasoning in T must be defined precisely enough to enable the checking of proofs by means of a computer program. Note that we discuss here the checking of ready proofs, not the problem of provability!
As an unpractical example of a formal theory, let us consider the game of chess; let us call this "theory" CHESS. All possible positions on the chessboard (plus a flag: "white to move" or "black to move") we shall call propositions of CHESS. The only axiom of CHESS will be the initial position, and the rules of inference - the rules of the game. The rules allow us to pass from one proposition of CHESS to certain other ones. Starting with the axiom, we obtain in this way the theorems of CHESS. Thus, the theorems of CHESS are all the positions that can be obtained from the initial position by moving chessmen according to the rules of the game.
Exercise 4.1. Can you provide an unprovable proposition of CHESS?
Why is CHESS called a formal theory? When somebody offers a "mathematical text" P as a proof of a theorem A in CHESS, this means that P is the record of some chess game stopped in the position A. And, of course, there is no problem in checking the correctness of such a "proof": the rules of the game are formulated precisely enough, and we can write a computer program that will carry out the task.
Exercise 4.2. Estimate the size of this program in some programming language.
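For what it is worth, most of the work can today be delegated to an existing chess library. The following sketch assumes the third-party python-chess package (so it says nothing about the size of a self-contained checker asked about in Exercise 4.2): a "proof" is a list of moves in standard algebraic notation, and the "theorem" proved is the final position.

    import chess  # third-party package: python-chess

    def check_chess_proof(moves):
        """Return the proved position (as a FEN string) if the move list is legal, else None."""
        board = chess.Board()          # the only axiom: the initial position
        for move in moves:
            try:
                board.push_san(move)   # a rule of inference: one legal move
            except ValueError:         # illegal or unparsable move - not a proof
                return None
        return board.fen()             # the proposition (position) that has been proved

    print(check_chess_proof(["e4", "e5", "Nf3"]))  # a correct three-move "proof"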
Our second example of a formal theory is only slightly more serious. It was proposed by P. Lorenzen; let us call it the theory L. The propositions of L are all possible words made of the letters a and b, for example: a, aa, aba, abaab. The only axiom of L is the word a, and L has two rules of inference:
   X                     X
-------- (rule 1)     -------- (rule 2)
   Xb                    aXa
This means that (in L) from a proposition X we can infer immediately the propositions Xb (rule 1) and aXa (rule 2). For example, the proposition aababb is a theorem of L:
a |- ab |- aaba |- aabab |- aababb
  rule 1   rule 2   rule 1    rule 1
This fact is usually expressed as L |- aababb ("aababb is provable in L").
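As an illustration of what mechanical proof checking means here, a minimal proof checker for L fits in a few lines of Python (a sketch; the function name is mine). A "proof" is taken to be a list of words in which every word is either the axiom a or is obtained from some earlier word in the list by rule 1 or rule 2.

    def check_L_proof(words):
        """Return True iff the list of words is a correct proof in the theory L."""
        proved = set()
        for w in words:
            derivable = (w == "a") or any(w == x + "b" or w == "a" + x + "a"
                                          for x in proved)
            if not derivable:
                return False           # an unjustified step - not a proof
            proved.add(w)
        return True

    print(check_L_proof(["a", "ab", "aaba", "aabab", "aababb"]))  # True
    print(check_L_proof(["a", "ba"]))                             # False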
Exercise 4.3. a) Describe an algorithm determining whether a proposition of L is a theorem or not.
b) Can you imagine such an algorithm for the theory CHESS? Of course, you can, yet... Thus you see that even when a relatively simple algorithm for checking proof correctness is available, the problem of provability can turn out to be a very complicated one.
A very important property of formal theories is given in the following
Exercise 4.4. Show that the set of all theorems of a formal theory is effectively (i.e. recursively) enumerable.
This means that for any formal theory a computer program can be written that will print on an (endless) paper tape all the theorems of this theory (and nothing else). Unfortunately, such a program cannot solve the problem that mathematicians are really interested in: is a given proposition provable or not? If, sitting by the computer, we see our proposition printed, this means that it is provable. Still, until that moment we cannot know whether the proposition will be printed some time later or will not be printed at all.
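For the theory L such a printing program is easy to write down explicitly. The following sketch (the function name is mine) performs a breadth-first search from the axiom, applying both rules; it runs forever and eventually reaches every theorem of L exactly once.

    from collections import deque

    def generate_L_theorems():
        """Yield every theorem of L exactly once (this generator never terminates)."""
        seen = {"a"}                              # the axiom
        queue = deque(["a"])
        while queue:                              # the queue never becomes empty
            x = queue.popleft()
            yield x
            for y in (x + "b", "a" + x + "a"):    # rule 1 and rule 2
                if y not in seen:
                    seen.add(y)
                    queue.append(y)

    # "Printing on an endless paper tape":
    # for theorem in generate_L_theorems():
    #     print(theorem)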
T is called a solvable theory (or, effectively solvable) iff an algorithm (a mechanically applicable computation procedure) is presented for checking whether a proposition is provable by means of the principles of T or not. In Exercise 4.3a you proved that L is a solvable theory. Still, in Exercise 4.3b you established that it is hard to say whether CHESS is a "normally solvable" theory or not. Checking the correctness of proofs is always much simpler than checking provability. It can be proved that most mathematical theories are unsolvable, the elementary (i.e. first order) arithmetic of natural numbers and set theory included (see, for example, Mendelson [1997]).
Normally, mathematical theories contain the negation symbol not. In such theories, solving the problem stated in a proposition A means proving either A or notA. We can try to solve the problem using the enumeration program of Exercise 4.4: let us sit by the computer and wait until A or notA is printed. If both A and notA are printed, this will mean that T is an inconsistent theory (i.e. using the principles of T one can prove some proposition and its negation). In total, we have here 4 possibilities:
a) A will be printed, but notA will not (then the problem A has a positive solution);
b) notA will be printed, but A will not (then the problem A has a negative solution);
c) both A and notA will be printed (then T is an inconsistent theory);
d) neither A nor notA will be printed.
In case d) we will sit by the computer forever, yet nothing interesting will ever happen: using the principles of T one can neither prove nor disprove the proposition A, and for this reason T is called an incomplete theory. Goedel's incompleteness theorem says that most mathematical theories are incomplete (see Mendelson [1997]).
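The waiting procedure itself is trivial to sketch (a schematic fragment; the helper name and parameters are mine, and the negation is passed in as a separate proposition, since the toy theory L has no negation symbol). The point is that in case d) the loop never returns - which is exactly why enumerability of theorems does not by itself give solvability.

    def wait_for_answer(A, notA, theorems):
        """Watch an enumeration of theorems until A or notA appears (may loop forever)."""
        for t in theorems:   # theorems: any generator of theorems, e.g. generate_L_theorems()
            if t == A:
                return "provable: " + A
            if t == notA:
                return "provable: " + notA
        # for an infinite theory this point is never reached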
Exercise 4.5. Show that any complete formal theory is solvable.
At the beginning of the XX century the honor of mathematics was seriously questioned: contradictions had been found in set theory. Until that time set theory had been widely acknowledged as the natural foundation of, and a very important tool for, mathematics. In order to save the honor of mathematics, David Hilbert proposed in 1904 his famous program of "perestroika" in the foundations of mathematics:
D. Hilbert. Über die Grundlagen der Logik und Arithmetik. "Verhandlungen des III. Internationalen Mathematiker-Kongresses, Heidelberg 1904", 1905, pp. 174-185.
a) Convert all existing (mainly intuitive) mathematics into a formal theory (including a new variant of set theory, cleared of paradoxes).
b) Prove the consistency of this formal theory (i.e. prove that no proposition can be both proved and disproved in it).
Solving task (a) meant simply completing the axiomatization of mathematics. This process had already proceeded successfully in the XIX century: formal definitions of the notions of function, continuity and real number, the axiomatization of arithmetic, geometry, etc.
Task (b) - contrary to (a) - was a great novelty: an attempt to obtain an absolute consistency proof of mathematics. Hilbert was the first to realize that a complete solution of task (a) makes it possible to pose task (b). Indeed, if we do not have a complete solution of (a), i.e. if we are staying partly within intuitive mathematics, then we cannot discuss absolute proofs of consistency. We may hope to establish a contradiction in an intuitive theory, i.e. to prove some proposition and its negation simultaneously. Still, we cannot hope to prove the consistency of such a theory: consistency is an assertion about the set of all theorems of the theory, i.e. about a set for which, in the case of an intuitive theory, we have no explicit definition.
Still, if a formal theory replaces the intuitive one, the situation changes. The set of all theorems of a formal theory is an explicitly defined object. Let us recall our examples of formal theories. The set of all theorems of CHESS is (theoretically) finite, yet from a practical point of view it might as well be infinite. Nevertheless, one can easily prove the following assertion about all theorems of CHESS:
In a theorem of CHESS one cannot have 10 white queens simultaneously.
Indeed, in the axiom of CHESS we have 1 white queen and 8 white pawns, and by the rules of the game only white pawns can be converted into white queens. The rest of the proof is arithmetical: 1+8 < 10. Thus we have selected certain specific properties of the axiom and the inference rules of CHESS that imply our general assertion about all theorems of CHESS.
With the theory L we have similar opportunities. One can prove, for example, the following assertion about all theorems of L: if X is a theorem, then aaX is also a theorem.
Indeed, if X is the axiom (X = a), then L |- aaX by rule 2. Further, if for some X we have L |- aaX, then the same holds for X' = Xb and X'' = aXa:
aaX |- aa(Xb)            aaX |- aa(aXa)
    rule 1                   rule 2
Thus, by induction, our assertion is proved for any theorem of L.
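General assertions of this kind can also be tested mechanically on an initial segment of L (a small sketch; the length bounds 12 and 10 are arbitrary). Since each rule increases the length of a word by 1 or 2, the set of theorems of bounded length is finite and can be generated completely. Such a test only supports the assertion, of course; the actual proof is the induction above.

    def L_theorems_up_to(n):
        """Return the (finite) set of all theorems of L of length <= n."""
        theorems, frontier = {"a"}, {"a"}
        while frontier:
            new_words = set()
            for x in frontier:
                for y in (x + "b", "a" + x + "a"):    # rule 1, rule 2
                    if len(y) <= n and y not in theorems:
                        new_words.add(y)
            theorems |= new_words
            frontier = new_words
        return theorems

    big = L_theorems_up_to(12)
    print(all("aa" + x in big for x in big if len(x) <= 10))   # prints True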
Hence, if the set of theorems is defined precisely enough, one can prove general assertions about all theorems. Hilbert's opinion was that consistency assertions would be no exception. Roughly, he hoped to identify those specific properties of the axiom system of the whole of mathematics that make the deduction of contradictions impossible.
Let us remember, however, that the set of all theorems is here infinite, and therefore consistency cannot be verified empirically. We can only hope to establish it theoretically. For example, our assertion:
L |- X -> L |- aaX
was proved by using the induction principle. What kind of theory must be used to prove the consistency of the whole of mathematics? Clearly, the means of reasoning used to prove the consistency of some theory T must be more reliable than the means used in T itself. How could we rely on a consistency proof in which suspicious means were used? Still, if a theory T contains the whole of mathematics, then we (mathematicians) cannot know any means of reasoning outside of T. Hence, in proving the consistency of such a universal theory T, we must use means from T itself - from the most reliable part of them.
There are two different levels of "reliability" in mathematics:
1) Arithmetical ("discrete") reasoning - only natural numbers and similar discrete objects are used;
2) Set-theoretic reasoning - Cantor's concept of arbitrary infinite sets is used.
The first level is regarded as reliable (few people will question it), and the second one as suspicious (Cantor's set theory was cleared of contradictions, still...). Hilbert's intention was to prove the consistency of mathematics by means of the first level.
As soon as Hilbert announced his project in 1904, Henri Poincare expressed serious doubts about its feasibility. He pointed out that, in proving the consistency of mathematics by means of the induction principle (the main tool of the first level), Hilbert would be using a circular argument: the consistency of mathematics includes the consistency of the induction principle ... proved by means of the induction principle! At that time few people could realize the real significance of this remark. Still, 25 years later Kurt Goedel proved that Poincare was right: an absolute consistency proof of the essential parts of mathematics is impossible! (For details see Section 5.4.)
1. I do not believe that the natural number system is an inborn property of the human mind. I think that it was developed from human practice with collections of discrete objects. Therefore, the particular form of our present natural number system is influenced by both the properties of the discrete collections found in human practice and the structure of the human mind. If so, how long did the development of this system take, and when did it end? I think that the process ended in the VI century BC, when the first results were obtained about the natural number system as a whole (the theorem about the infinity of primes was one such result). In human practice only relatively small sets can appear (and, following modern cosmology, we believe that only a finite number of particles can be found in the Universe). Hence, results about "natural number infinity" can be obtained only in a theoretical model. If we believe that general results about natural numbers can be obtained by means of pure reasoning, without any additional experimental practice, this means that we are convinced of the stability and (sufficient) completeness of our theoretical model.
2. (See Sections 5.4, 6.5 and Appendix 2 for details.) The development of mathematical concepts does not yield a continuous spectrum of concepts, but a relatively small number of different concepts (models, theories). Thus, considering the history of the natural number concept, we see only two different stages. Both stages can be described by corresponding formal theories:
- Stage 1 (the VI century BC - the 1870s) can be described by first order arithmetic;
- Stage 2 (the 1870s - today) can be described by the arithmetic of ZFC.
I think that the natural number concept of the Greeks corresponds to first order arithmetic, and that this concept remained unchanged up to the 1870s. I believe that the Greeks would accept any proof from the so-called elementary number theory of today. G. Cantor's invention of "arbitrary infinite sets" (in particular, of "the set of all sets of natural numbers", i.e. P(w)) added new features to the old ("elementary") concept. For example, the well-known Extended Ramsey's Theorem became provable. Thus a new model (Stage 2) replaced the model of Stage 1, and it has remained essentially unchanged to this day.
Finally, let us consider the history of geometry. The invention of non-Euclidean geometry cannot be treated as a "further development" of the old Euclidean geometry. Still, Euclidean geometry remains unchanged to this day, and we can still prove new theorems using Euclid's axioms. Non-Euclidean geometry appeared as a new theory, different from the Euclidean one, and it, too, remains unchanged to this day.
Therefore, I think I can retain my definition of mathematics as the investigation of stable (self-contained) models that can be treated, just because they are stable (self-contained), independently of any experimental data.
3. I do not criticize Platonism as the philosophy (and psychology) of working mathematicians. On the contrary, Platonism as a creative method is extremely effective in this field. The Platonist approach to the "objects" of investigation is a necessary aspect of the mathematical method. Indeed, how can one effectively investigate a stable (self-contained) model if not by thinking about it in a Platonist way (as the "ultimate reality", without any experimental "world" behind it)?
4. By which means do we judge theories? My criterion is pragmatic (in the worst sense of the word). If contradictions are found in a theory, then any new theory will be good enough in which the main theorems of the old theory (yet not its contradictions) can be proved. In this sense, for example, ZFC is "better" than Cantor's original set theory.
On the other hand, if undecidable problems have appeared in a theory (as the continuum problem appeared in ZFC), then any extension of the theory will be good enough in which some of these problems can be solved in a positive or a negative way. Of course, simply postulating the needed positive or negative solutions leads, as a rule, to uninteresting theories (such as ZFC+GCH). We must search for more powerful hypotheses, such as, for example, "V=L" or AD (the axiom of determinateness). The theories ZF+"V=L" and ZF+AD contradict each other, yet both appear very interesting, and many people carry out beautiful investigations in each of them.
If some people are satisfied neither with "V=L" nor with AD, they can suggest any other powerful hypothesis having rich and interesting consequences. I do not believe that any convergence to some unique (the "only right") system of set theory can be expected here.
5. Mathematicians are not in complete agreement about the ways of proving theorems, yet their opinions do not form a continuous spectrum. The few existing variations of these views can be classified, and each of them can be described by means of a suitable formal theory. Thus they can all be recognized as "right", and we can peacefully investigate their consequences.
6. I think that the genetic and axiomatic methods are used in mathematics not as heuristics, and not to prove theorems. These methods are used to clarify intuitive concepts that have turned out to be insufficiently precise, so that, for this reason, investigations could not be continued normally.
The most striking application of the genetic method is, I think, the definition of continuous functions in terms of epsilon-delta. The old concept of continuous function (that of the XVIII century) was purely intuitive and extremely vague, so that one could not really prove theorems about it. For example, the well-known theorem about the zeros of a function f continuous on [a, b] with f(a)<0 and f(b)>0 was believed to be "obvious". It was also believed that every continuous function is differentiable almost everywhere (except at some isolated "break points"). The latter assertion could not even be stated precisely. To enable further development of the theory, a reconstruction of the intuitive concept in more explicit terms was necessary. Cauchy did this in terms of epsilon-delta. With such a precise definition, the "obvious" theorem about the zeros of f already needs a serious proof. And it was proved. Weierstrass's construction of a continuous function (in the sense of the new definition) that is nowhere differentiable showed unexpectedly that the extensions of the old (intuitive) and the new (more explicit) concepts are somewhat different. Nevertheless, it was decided that the new concept is "better", and for this reason it replaced the old intuitive concept of continuous function.
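For reference, the reconstructed concept is the familiar one: a function f is continuous at a point x_0 iff

    \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \;\; \big( |x - x_0| < \delta \;\rightarrow\; |f(x) - f(x_0)| < \varepsilon \big),

and f is continuous on [a, b] iff it is continuous at every point of [a, b].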
The genetic method was used many times in the past in a similar way. The so-called "arithmetization of the Calculus" (the definition of real numbers in terms of natural numbers) is also an application of the genetic method.
7. Our usual metatheory for the investigation of formal theories (used to prove Goedel's theorem, etc.) is the theory of algorithms (i.e. of recursive functions). It is, of course, only a theoretical model, giving us a somewhat distorted picture of how real mathematical theories function. Perhaps "subrecursive mathematics" will provide a more adequate picture of the real processes (see, for example, Parikh [1971]).