The physical foundations of mathematics in the theory of emergent space-time-matter are considered. It is shown that mathematics, including logic, is a consequence of the equation that describes the fundamental field. If the most fundamental level were described not by mathematics but by something else, then instead of mathematics there would be consequences of that something else.
Published in 1903, this book was the first comprehensive treatise on the logical foundations of mathematics written in English. It sets forth, as far as possible without mathematical and logical symbolism, the grounds in favour of the view that mathematics and logic are identical. It proposes simply that what is commonly called mathematics consists merely of later deductions from logical premises. It provided the thesis for which _Principia Mathematica_ provided the detailed proof, and introduced the work of Frege to a wider audience. In addition to the new introduction by John Slater, this edition contains Russell's introduction to the 1937 edition, in which he defends his position against his formalist and intuitionist critics.
This paper aims to provide modal foundations for mathematical platonism. I examine Hale and Wright's (2009) objections to the merits and need, in the defense of mathematical platonism and its epistemology, of the thesis of Necessitism. In response to Hale and Wright's objections to the role of epistemic and metaphysical modalities in providing justification for both the truth of abstraction principles and the success of mathematical predicate reference, I examine the Necessitist commitments of the abundant conception of properties endorsed by Hale and Wright and examined in Hale (2013); examine cardinality issues which arise depending on whether Necessitism is accepted at first- and higher-order; and demonstrate how a two-dimensional intensional approach to the epistemology of mathematics, augmented with Necessitism, is consistent with Hale and Wright's notion of there being epistemic entitlement rationally to trust that abstraction principles are true. A choice point that I flag is that between availing of intensional or hyperintensional semantics. The hyperintensional semantic approach that I advance is an epistemic two-dimensional truthmaker semantics. Epistemic and metaphysical states and possibilities may thus be shown to play a constitutive role in vindicating the reality of mathematical objects and truth, and in explaining our possible knowledge thereof.
The human attempts to access, measure and organize physical phenomena have led to a manifold construction of mathematical and physical spaces. We will survey the evolution of geometries from Euclid to the Algebraic Geometry of the 20th century. The role of Persian/Arabic Algebra in this transition, and its Western symbolic development, is emphasized. In this connection, we will also discuss changes in the ontological attitudes toward mathematics and its applications. Historically, the encounter of geometric and algebraic perspectives enriched mathematical practices and their foundations. Yet the collapse of Euclidean certitudes, of over 2300 years, and the crisis in mathematical analysis in the 19th century led to the exclusion of “geometric judgments” from the foundations of Mathematics. After the success, and the limits, of the logico-formal analysis, it is necessary to broaden our foundational tools and re-examine the interactions with the natural sciences. In particular, the way the geometric and algebraic approaches organize knowledge is analyzed as a cross-disciplinary and cross-cultural issue and is examined in Mathematical Physics and Biology. We finally discuss how the current notions of mathematical (phase) “space” should be revisited for the purposes of the life sciences.
Gentzen’s approach by transfinite induction, and that of intuitionist Heyting arithmetic, to completeness and the self-foundation of mathematics are compared and opposed to Gödel’s incompleteness results for Peano arithmetic. Quantum mechanics involves infinity through Hilbert space, but it is finitist, like any experimental science. The absence of hidden variables in it, interpretable as its completeness, should resurrect Hilbert’s finitism at the cost of a relevant modification of the latter, already hinted at by intuitionism and by Gentzen’s approach to completeness. This paper investigates both the conditions and the philosophical background necessary for that modification. The main conclusion is that the concept of infinity underlying contemporary mathematics cannot be reduced to a single Peano arithmetic, but only to at least two such arithmetics independent of each other. Intuitionism, quantum mechanics, Gentzen’s approaches to completeness, and even Hilbert’s finitism can be unified from that viewpoint. Mathematics may found itself by way of finitism complemented by choice. The concept of information as the quantity of choices underlies that viewpoint. Quantum mechanics, interpretable in terms of information and quantum information, is inseparable from mathematics and its foundation.
The foundational ideas of David Hilbert have been generally misunderstood. In this dissertation prospectus, different aims of Hilbert are summarized and a new interpretation of Hilbert's work in the foundations of mathematics is roughly sketched out. Hilbert's view of the axiomatic method, his response to criticisms of set theory and intuitionist criticisms of the classical foundations of mathematics, and his view of the role of logical inference in mathematical reasoning are briefly outlined.
Methodology: A new hypothesis on the basic features characterizing the Foundations of Mathematics (FoM) is suggested. Application of the method: By means of it, the several proposals, launched around the year 1900, for discovering the FoM are characterized. It is well known that the historical evolution of these proposals was marked by some notorious failures and conflicts. Particular attention is given to Cantor's programme and its improvements; its merits and insufficiencies are characterized in the light of the new conception of the FoM. After the failures of Frege's and Cantor's programmes, owing to the discoveries of an antinomy and internal contradictions respectively, the two remaining, more radical programmes, i.e. Hilbert's and Brouwer's, generated a great debate; the explanation given here is their mutual incommensurability, defined by means of the differences in their foundational features. Results: Ignorance of this phenomenon explains the inconclusiveness of a century-long debate between the advocates of these two proposals, which have nevertheless been so greatly improved as to closely approach, or even recognize, some basic features of the FoM. Discussion of the results: Yet no proposal has recognized the basic feature alternative to Hilbert's main one, the deductive organization of a theory, although already half a century before the births of all the programmes this alternative was substantially instantiated by Lobachevsky's theory of parallel lines. Some concluding considerations of a historical and philosophical nature are offered. In particular, the conclusive birth of a pluralism in the FoM is stressed.
Epstein and Carnielli's fine textbook on logic and computability is now in its second edition. The readers of this journal might be particularly interested in the timeline `Computability and Undecidability' added in this edition, and the included wall-poster of the same title. The text itself, however, has some aspects which are worth commenting on.
We study a new formal logic LD introduced by Prof. Grzegorczyk. The logic is based on so-called descriptive equivalence, corresponding to the idea of shared meaning rather than shared truth value. We construct a semantics for LD based on a new type of algebras and prove its soundness and completeness. We further show several examples of classical laws that hold for LD as well as laws that fail. Finally, we list a number of open problems.
The attempts at theoretically solving the famous puzzle-dictum of the physicist Eugene Wigner regarding the “unreasonable” effectiveness of mathematics, treated as a problem of analytical philosophy since the end of the 19th century, have yet to produce an acceptable theoretical solution. The theories developed to explain the empirical “miracle” of applied mathematics vary in nature, foundation and solution, from denying the existence of a genuine problem to structural theories with an advanced level of mathematical formalism. Despite this variation, methodologically fundamental questions such as “Which is the adequate theoretical framework for solving Wigner’s conjecture?” and “Can the logico-mathematical formalism solve it, and is it entitled to do so?” have not yet received answers. The problem of the applicability of mathematics to physical reality has been treated unitarily in some sense, with respect to the semantic-conceptual use of the constitutive terms, within both the structural and non-structural theories. This unity (of consistency) applies both to the referred objects and concepts per se and to the aims of the investigations. To make an objective study of the possible alternatives to the existing theories, a foundational approach to them is needed, including semantic-conceptual distinctions that weaken this unity of consistency.
The current definition of Constructive mathematics as “mathematics within intuitionist logic” ignores two fundamental issues. First, the kind of organization of the theory at issue. I show that intuitionist logic governs a problem-based organization, whose model is alternative to that of the deductive-axiomatic organization governed by classical logic. Moreover, this dichotomy is independent of the dichotomy between the two kinds of infinity, potential and actual, to which constructive mathematical and classical mathematical tools respectively correspond. According to this view, a mathematical theory is based on choices regarding these two dichotomies. As an example of this kind of foundation, arithmetic is rationally re-founded on constructive mathematical tools and the model of the problem-based organization. In conclusion, constructive mathematics is not only mathematics making use of constructive tools in intuitionist logic, but mathematics also organized around a basic problem, solved by a method discovered using intuitionist logic.
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants, thus allowing artificial agents not only to perpetrate evil (and for that matter good) but conversely to `receive' or `suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner.
The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism, and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
In “Mathematics is megethology” (Philosophia Mathematica, 1, 3–23), David K. Lewis proposes a structuralist reconstruction of classical set theory based on mereology. In order to formulate suitable hypotheses about the size of the universe of individuals without the help of set-theoretical notions, he uses the device of Boolos’ plural quantification for treating second-order logic without commitment to set-theoretical entities. In this paper we show how, assuming the existence of a pairing function on atoms as the unique assumption not expressed in a mereological language, a mereological foundation of set theory is achievable within first-order logic. Furthermore, we show how a mereological codification of ordered pairs is achievable with a very restricted use of the notion of plurality, without plural quantification.
FOURTH EUROPEAN CONGRESS OF MATHEMATICS, STOCKHOLM, SWEDEN, JUNE 27 – JULY 2, 2004. Contributed papers. L. Carleson’s celebrated theorem of 1965 [1] asserts the pointwise convergence of the partial Fourier sums of square integrable functions. The Fourier transform has a formulation on each of the Euclidean groups ℝ, ℤ and 𝕋. Carleson’s original proof worked on 𝕋. Fefferman’s proof translates very easily to ℝ. Máté [2] extended Carleson’s proof to ℤ. Each of the statements of the theorem can be stated in terms of a maximal Fourier multiplier theorem [5]. Inequalities for such operators can be transferred between these three Euclidean groups, as was done by P. Auscher and M. J. Carro [3]. But L. Carleson’s original proof, and the other known proofs, are very long and very complicated. We give a very short and very “simple” proof of this fact. Our proof uses only the PNSA technique developed in Part I, and does not use the complicated technical constructions that are unavoidable in the purely standard approach to these problems. In contrast to Carleson’s method, which is based on profound properties of trigonometric series, the proposed approach is quite general and allows one to investigate a wide class of analogous problems for general orthogonal series.
This article seeks the origin, in the theories of Ibn al-Haytham (Alhazen), Descartes, and Berkeley, of two-stage theories of spatial perception, which hold that visual perception involves both an immediate representation of the proximal stimulus in a two-dimensional ‘‘sensory core’’ and also a subsequent perception of the three-dimensional world. The works of Ibn al-Haytham, Descartes, and Berkeley already frame the major theoretical options that guided visual theory into the twentieth century. The field of visual perception was the first area of what we now call psychology to apply mathematics, through geometrical models as used by Euclid, Ptolemy, Ibn al-Haytham, and Descartes (among others). The article shows that Kepler’s discovery of the retinal image, which revolutionized visual anatomy and entailed fundamental changes in visual physiology, did not alter the basic structure of theories of spatial vision. These changes in visual physiology are advanced especially in Descartes' Dioptrics and his L'Homme. Berkeley develops a radically empiricist theory of vision, according to which visual perception of depth is learned through associative processes that rely on the sense of touch. But Descartes and Berkeley share the assertion that there is a two-dimensional sensory core that is in principle available to consciousness. They also share the observation that we don't usually perceive this core, but find depth and distance to be phenomenally immediate, a point they struggle to accommodate theoretically. If our interpretation is correct, it was not a change in the theory of the psychology of vision that engendered the idea of a sensory core, but rather the introduction of the theory into a new metaphysical context.
I prove that invoking the univalence axiom is equivalent to arguing 'without loss of generality' (WLOG) within Propositional Univalent Foundations (PropUF), the fragment of Univalent Foundations (UF) in which all homotopy types are mere propositions. As a consequence, I argue that practicing mathematicians, in accepting WLOG as a valid form of argument, implicitly accept the univalence axiom and that UF rightly serves as a Foundation for Mathematical Practice. By contrast, ZFC is inconsistent with WLOG as it is applied, and therefore cannot serve as a foundation for practice.
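For orientation, the univalence axiom at issue can be stated in its standard form (this is the usual textbook formulation, not necessarily the notation of the paper itself): for types A and B in a universe 𝒰, the canonical map from identities to equivalences is itself an equivalence.

```latex
% Univalence (standard formulation). The canonical map idtoeqv,
% sending a proof of A = B to an equivalence A \simeq B, is itself
% an equivalence; informally, identity of types coincides with
% equivalence of types:
\mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B),
\qquad
\text{univalence: } (A =_{\mathcal{U}} B) \simeq (A \simeq B).
```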
Since the publication of the Remarks on the Foundations of Mathematics, Wittgenstein’s interpreters have endeavored to reconcile his general constructivist/anti-realist attitude towards mathematics with his confessed anti-revisionary philosophy. In this article, the author revisits the issue and presents a solution. The basic idea consists in exploring the fact that the so-called “non-constructive results” could be interpreted so that they do not appear non-constructive at all. The author substantiates this solution by showing how the translation of mathematical results, given by the possibility of translation between logics, can be seen as a tool for partially implementing the solution Wittgenstein had in mind.
It is argued that the origins of modern science can be revealed through a joint account of external and internal factors. The author keeps this in mind in applying his scientific-revolution model, according to which the growth of knowledge consists in the interaction, interpenetration and even unification of different scientific research programmes. Hence the Copernican Revolution as a matter of fact consisted in the realization and elimination of the gap between mathematical astronomy and Aristotelian qualitative physics in Ptolemaic cosmology. Yet the very realization of the contradictions became possible because, at its first stages, European science was a result of the evolution of the Christian Weltanschauung, with its gradual elimination of pagan components. Key words: modern European science, Christian Weltanschauung.
The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is "by rote algorithmic means", since it is not known if a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry) and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses).
Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics. This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case of continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. There is not anything substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are incipient. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” which span across physics, chemistry, biology, societies and societal dynamics. The solutions require integrated, measurable results at multiple levels, known as “grand challenges” to existing methods. Finally, Section 10 makes an appeal for action, advocating the necessity of further long-term support for the INBIOSA program. The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
This paper introduces a novel object that has less structure than, and is ontologically prior to, the natural numbers. As such it is a candidate model of the foundation that lies beneath the natural numbers. The implications for the construction of mathematical objects built upon that foundation are discussed.
K. Marx’s 200th jubilee coincides with the celebration of 85 years from the first publication of his “Mathematical Manuscripts” in 1933. Its editor, Sofia Alexandrovna Yanovskaya (1896–1966), was a renowned Soviet mathematician, whose significant studies on the foundations of mathematics and mathematical logic, as well as on the history and philosophy of mathematics, are unduly neglected nowadays. Yanovskaya, as a militant Marxist, was actively engaged in the ideological confrontation with idealism and its influence on modern mathematics and its interpretation. Concomitantly, she was one of the pioneers of mathematical logic in the Soviet Union, in an era of fierce disputes on its compatibility with Marxist philosophy. Yanovskaya managed to embrace in an originally Marxist spirit the contemporary level of logico-philosophical research of her time. Due to her highly esteemed status within Soviet academia, she became one of the most significant pillars for the culmination of modern mathematics in the Soviet Union. In this paper, I attempt to trace the influence of the complex socio-cultural context of the first decades of the Soviet Union on Yanovskaya’s work. Among the several issues I discuss, her encounter with L. Wittgenstein is striking.
In this paper I challenge Paolo Palmieri’s reading of the Mach-Vailati debate on Archimedes’s proof of the law of the lever. I argue that the actual import of the debate concerns the possible epistemic (as opposed to merely pragmatic) role of mathematical arguments in empirical physics, and that, construed in this light, Vailati has the upper hand. This claim is defended by showing that Archimedes’s proof of the law of the lever is not a way of appealing to a non-empirical source of information, but a way of explicating the mathematical structure that can represent the empirical information at our disposal in the most general way.
This paper is about Poincaré’s view of the foundations of geometry. According to the established view, which has been inherited from the logical positivists, Poincaré, like Hilbert, held that axioms in geometry are schemata that provide implicit definitions of geometric terms, a view he expresses by stating that the axioms of geometry are “definitions in disguise.” I argue that this view does not accord well with Poincaré’s core commitment in the philosophy of geometry: the view that geometry is the study of groups of operations. In place of the established view I offer a revised view, according to which Poincaré held that axioms in geometry are in fact assertions about invariants of groups. Groups, as forms of the understanding, are prior in conception to the objects of geometry and afford the proper definition of those objects, according to Poincaré. Poincaré’s view therefore contrasts sharply with Kant’s foundation of geometry in a unique form of sensibility. According to my interpretation, axioms are not definitions in disguise because they themselves implicitly define their terms, but rather because they disguise the definitions which imply them.
A principle, according to which any scientific theory can be mathematized, is investigated. That theory is presupposed to be a consistent text, which can be exhaustively represented by a certain mathematical structure constructively. As used here, the term “theory” includes all hypotheses, whether as yet unconfirmed or already rejected. The investigation of the sketch of a possible proof of the principle demonstrates that it should rather be accepted as a metamathematical axiom about the relation of mathematics and reality. Its investigation needs philosophical means. Husserl’s phenomenology is what is used, and then the conception of “bracketing reality” is modelled in order to generalize Peano arithmetic in its relation to set theory in the foundations of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic by means of replacing the axiom of induction with that of transfinite induction. A comparison to Mach’s doctrine is used to reveal the fundamental philosophical reductionism of Husserl’s phenomenology, leading to a kind of Pythagoreanism in the final analysis. Accepting or rejecting the principle, two kinds of mathematics appear, differing from each other in their relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in the paper, correspondingly, Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic as above can be interpreted as a model of Hilbert mathematics within Gödel mathematics, thereby showing that the former is no less consistent than the latter, and that the principle is an independent axiom. An information interpretation of Hilbert mathematics is involved; it is a kind of ontology of information. Thus the problem of which of the two mathematics is more relevant to our being is discussed.
An information interpretation of the Schrödinger equation is involved to illustrate the above problem. (shrink)
Propositions are the material of our reasoning and the basic building blocks of the world and of thought. Propositions stand in an intimate relation to the world. The world is a series of atomic facts, and these facts are evaluated by propositions: although sentences describe the world of reality, they do not bear truth values; only propositions have truth values by which to describe the world in terms of assertions. Propositions are truth-value bearers; their only quality is truth or falsity, that is, they are either true or false. A proposition mirrors the world and explains how the world is arranged in an orderly manner. It scans the world of objects, is composed of atomic facts as experienced, and can be analyzed into propositions. Propositions are the basic units of logic. Truth (affirmation) and falsity (negation) are the qualities of propositions, while universality (generality) and particularity are their quantities. There are propositions which are neither true nor false; they are called pseudo-propositions, and they are ipso facto meaningless. Propositions are used in computers, with the modifications introduced by modern logicians, in the form of statements or logical sentences. The truth tables of the logic gates and the binary operations (1s and 0s) are due to the revolution of modern, mathematical logic. It is a fact that a proposition cannot change the world, but it shows the relation between the object and the word. Objectives: The objective of this research is to explore the importance of and need for propositions in logic. It also presents the analysis of propositions and shows how a philosopher thinks in terms of propositions or concepts. It is shown that propositions have been described in many ways by philosophers and logicians from Aristotle to contemporary thinkers. The research also analyzes these philosophers' contributions to the theory of the proposition and its relation to the world of reality, and describes the definition and nature of the proposition.
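The truth tables of the logic gates mentioned in this abstract can be generated mechanically. A minimal Python sketch (the connective names and helper function are my own illustration, not part of the paper):

```python
from itertools import product

# Basic truth-functional connectives, as realized by logic gates.
CONNECTIVES = {
    "AND": lambda p, q: p and q,
    "OR": lambda p, q: p or q,
    "IMPLIES": lambda p, q: (not p) or q,
}

def truth_table(name):
    """Return rows (p, q, value) for the named connective."""
    f = CONNECTIVES[name]
    return [(p, q, f(p, q)) for p, q in product([True, False], repeat=2)]

for name in CONNECTIVES:
    print(name, truth_table(name))
```

Each row pairs an assignment of truth values to the component propositions with the resulting truth value of the compound, which is exactly the binary (1s and 0s) behavior of the corresponding gate.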
The principle of maximum entropy (abbreviated “MaxEnt”) can be founded on the formal mechanism in which the future transforms into the past by the mediation of the present. This allows MaxEnt to be investigated by the theory of quantum information. MaxEnt can be considered an inductive analog or generalization of Occam’s razor. It depends crucially on choice, and thus on information, just as all inductive methods of reasoning do. The essence shared by Occam’s razor and MaxEnt is that the relevant data known so far are postulated as a sufficient foundation for the conclusion. That axiom is the kind of choice grounding both principles. Popper’s falsifiability (1935) can be discussed as a complement to them: the axiom (or axiom scheme) is always a sufficient but never a necessary condition of the conclusion, thereby postulating the choice at the base of MaxEnt. Furthermore, the abstraction axiom (or axiom scheme) of set theory (e.g. the axiom scheme of specification in ZFC) involves choice analogously.
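As an illustration of the bare principle itself (not of the paper's quantum-informational treatment), a minimal Python sketch shows that, absent any constraint beyond normalization, the uniform distribution maximizes Shannon entropy — the "least committed" choice given only the known data:

```python
import math

def entropy(p):
    """Shannon entropy in bits; zero-probability outcomes contribute nothing."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# With no constraints beyond normalization, MaxEnt selects the uniform
# distribution over the four outcomes.
uniform = [0.25, 0.25, 0.25, 0.25]
biased = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))  # 2.0 bits, the maximum for four outcomes
print(entropy(biased))   # strictly less than 2.0
```

Adding constraints (e.g. a fixed mean) narrows the candidate set, and MaxEnt then selects the highest-entropy distribution satisfying them; the sketch covers only the unconstrained case.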
A case study of quantum mechanics is investigated in the framework of the philosophical opposition “mathematical model versus reality”. All classical science obeys the postulate of the fundamental difference between model and reality, and thus distinguishes epistemology from ontology fundamentally. The theorems on the absence of hidden variables in quantum mechanics imply that it is “complete” (contrary to Einstein’s opinion). That consistent completeness (unlike that of arithmetic relative to set theory in the foundations of mathematics, in Gödel’s view) can furthermore be interpreted as the coincidence of model and reality. The paper discusses the option and fact of that coincidence at its base: the fundamental postulate formulated by Niels Bohr about what quantum mechanics studies (unlike all classical science). Quantum mechanics involves and develops further both the identification and the disjunctive distinction of the global space of the apparatus and the local space of the investigated quantum entity as complementary to each other. This results in the analogous complementarity of model and reality in quantum mechanics. The apparatus turns out to be both absolutely “transparent” and simultaneously identically coincident with the reflected quantum reality. Thus, the coincidence of model and reality is postulated by Bohr’s postulate as a necessary condition for cognition in quantum mechanics, and is further embodied in its formalism of the separable complex Hilbert space, which in turn implies the theorems of the absence of hidden variables (or the equivalent “conservation of energy conservation” in quantum mechanics). What the apparatus and the measured entity exchange cannot be energy (because of their different exponents of energy), but quantum information (as a certain, unambiguously determined wave function); hence a generalized conservation law, of which the conservation of energy conservation is a corollary. In particular, the local and the global space (rigorously justified in the Standard Model) share a complementarity isomorphic to that of model and reality in the foundations of quantum mechanics. Against that background, one can think of the troubles of “quantum gravity” as fundamental, direct corollaries of the postulates of quantum mechanics. Gravity can be defined only as a relation, or by a pair of non-orthogonal separable complex Hilbert spaces attachable either to two “parts” or to a whole and its parts. By contrast, all three fundamental interactions in the Standard Model are “flat” and only “properties”: they need only a single separable complex Hilbert space to be defined.
The paper justifies the following theses: the totality can found time if the latter is axiomatically represented by its “arrow” as a well-ordering; time can in turn found choice and thus information; and quantum information and its units, the quantum bits, can be interpreted as their generalization to infinity, underlying the physical world as the ultimate substance of the world, both subjective and objective. Thus a pathway of interpretation from the totality, via time, order, choice, and information, to the substance of the world is constructed. The article is based only on well-known facts and definitions and in this sense has no premises. Nevertheless, it is naturally situated among the works and ideas of Husserl and Heidegger, linked to the foundation of mathematics by the axiom of choice, and to the philosophy of quantum mechanics and information.
A condensed summary of the adventures of ideas (1990-2020). Methodology of the evolutionary-phenomenological constitution of Consciousness. Vector (BeVector) of Consciousness. Consciousness is a qualitative vector quantity. Vector of Consciousness as a synthesizing category, eidos-prototecton, intentional meta-observer. The development of the ideas of Pierre Teilhard de Chardin, Brentano, Husserl, Bergson, Florensky, Losev, Mamardashvili, Nalimov. Dialectic of Eidos and Logos. "Curve line" of the Consciousness Vector from space and time. The lower and upper sides of the "abyss of being". The existential tension of being. Five reference "points" (existential extrema): world events in the evolution of Consciousness from "homo habilis" to "homo sapiens sapiens". "Prometheus Effect". Inversion and reversion of Consciousness. "Open" and "closed" Consciousness. Consciousness and Self-Consciousness. Protogeometer: fall into the future, into the "EgoLand". The problem of the justification/substantiation of mathematics (knowledge) is an ontological problem. The crisis of ontology and the ontological limits of cognition. Ontological structure of space. The project of a constructive dialectical ontology. The methodology of dialectical-ontological construction (modeling): conceptual-figural synthesis, total unification of matter, coincidence of ontological opposites, primordial Meta-Axiom and Superprinciple, vector (bivector) of the absolute state (states) of matter (absolute forms of existence). The ontological "celestial triangle" (Plato). Ontological invariants of the Universum and their ontological paths. Ontological and gnoseological dimensions of space. Triune (ontological) space of nine gnoseological dimensions: absolute rest (linear state, "Continuum") + absolute motion (absolute vortex, "Discretuum") + absolute becoming (absolute wave, "Dis-Continuum"). Primordial generating structure: ontological framework, ontological carcass and ontological foundation. Ontological (structural, cosmic) memory: the semantic core of the conceptual construction of the being of the Universum as an eternal process of generation of new meanings and structures. Consciousness is an absolute (unconditional) attractor of meanings, a univalent phenomenon of ontological memory, which manifests itself at a certain level of the being of the Universum. Ontological space-MatterMemory-Ontological time (S-MM-T). Ontological Rhythm and Cycle. The nature and structure of ontological time. Natural (absolute) determinism.
Gödel argued that intuition has an important role to play in mathematical epistemology, and despite the infamy of his own position, this opinion still has much to recommend it. Intuitions and folk platitudes play a central role in philosophical enquiry too, and have recently been elevated to a central position in one project for understanding philosophical methodology: the so-called ‘Canberra Plan’. This philosophical role for intuitions suggests an analogous epistemology for some fundamental parts of mathematics, which casts a number of themes in recent philosophy of mathematics (concerning apriority and fictionalism, for example) in revealing new light.
There is a wide range of realist but non-Platonist philosophies of mathematics—naturalist or Aristotelian realisms. Held by Aristotle and Mill, they played little part in twentieth-century philosophy of mathematics but have been revived recently. They assimilate mathematics to the rest of science. They hold that mathematics is the science of X, where X is some observable feature of the (physical or other non-abstract) world. Choices for X include quantity, structure, pattern, complexity, and relations. The article lays out and compares these options, including their accounts of what X is, the examples supporting each theory, and the reasons for identifying the science of X with (most or all of) mathematics. Some comparison of the options is undertaken, but the main aim is to display the spectrum of viable alternatives to Platonism and nominalism. It is explained how these views answer Frege’s widely accepted argument that arithmetic cannot be about real features of the physical world, and arguments that such mathematical objects as large infinities and perfect geometrical figures cannot be physically realized.
Kant's reasoning in his special metaphysics of nature is often opaque, and the character of his a priori foundation for Newtonian science is the subject of some controversy. Recent literature on the Metaphysical Foundations of Natural Science has fallen well short of consensus on the aims and reasoning of the work. Various of the doctrines, and even the character of the reasoning, in the Metaphysical Foundations have been taken to present insuperable obstacles to accepting Kant's claim to ground Newtonian science. Gordon Brittan and Gerd Buchdahl, amongst others, have argued that Kant's stated aims in this case are not to be taken at face value, and that prior ontological commitments play a hidden but central role in Kant's special metaphysics. Michael Friedman has shown how Kant's stated aims can be taken seriously with his ingenious reconstruction of the Metaphysical Foundations as a demonstration of the a priori basis for our thinking of bodies as being in true motion and in absolute space. However, Friedman does not address the issue of matter theory, despite the importance of the issue to Kant. I argue that a strict reading of both the stated aims and the doctrines of the Metaphysical Foundations is possible, since much of Kant's reasoning about the empirical concept of matter can be explained by his views on how the construction of empirical concepts is possible. Kant's quasi-mathematical constructions are pivotal in Friedman's interpretation. Constructibility is Kant's criterion of acceptability for the concepts of natural science. Yet Kant notoriously fails to construct the dynamical concept of matter, and accepts this failure with an equally notorious complacency. I argue that Kant's criteria of empirical concept construction, apart from any prior ontological commitments, are enough to generate his views on matter. Kant's failure to construct the requisite concept of matter can be ascribed to a missing law of nature, a law of the relation of forces whose discovery Kant thought imminent. I conclude that matter theory is central to the Metaphysical Foundations, but that this does not undermine Kant's stated aim of giving the a priori ground of Newtonian science.
The propensity interpretation of fitness (PIF) is commonly taken to be subject to a set of simple counterexamples. We argue that three of the most important of these are not counterexamples to the PIF itself, but only to the traditional mathematical model of this propensity: fitness as expected number of offspring. They fail to demonstrate that a new mathematical model of the PIF could not succeed where this older model fails. We then propose a new formalization of the PIF that avoids these (and other) counterexamples. By producing a counterexample-free model of the PIF, we call into question one of the primary motivations for adopting the statisticalist interpretation of fitness. In addition, this new model has the benefit of being more closely allied with contemporary mathematical biology than the traditional model of the PIF.
Physical dimensions like “mass”, “length”, and “charge”, represented by the symbols [M], [L], [Q], are not numbers, yet they are used as numbers by the physicist to perform dimensional analysis in particular, and to write the equations of physics in general. The law of excluded middle falls short of explaining the contradictory meanings of the same symbols. Statements like “m tends to 0”, “r tends to 0”, “q tends to 0”, as used by the physicist, are inconsistent on dimensional grounds, because “m”, “r”, and “q” represent quantities with the physical dimensions [M], [L], [Q] respectively, while “0” represents just a number, devoid of physical dimension. Consequently, the involvement of the statement “q tends to 0”, where q is the test charge, in the definition of the electric field leads either to circular reasoning or to a contradiction regarding the experimental verification of the smallest charge in the Millikan–Fletcher oil drop experiment. Taking such issues as problematic, by choice, I inquire into the basic language in which physics is written, with the aim of exploring how truthfully verbal statements can be converted into the corresponding physico-mathematical expressions, where “physico-mathematical” signifies the involvement of physical dimensions. Such an investigation necessitates an explanation, by demonstration, of “self-inquiry”, “middle way”, “dependent origination”, and “emptiness/relational existence”, terms that signify the basic tenets of Buddhism. In light of such demonstration I explain my view of “definition”; the relations among quantity, physical dimension, and number; the meaninglessness of “zero quantity” and the associated logico-linguistic fallacy; and the difference between unit and unity. Considering the importance of the notion of the electric field in physics, I present a critical analysis of the definitions of the electric field due to Maxwell and Jackson, along with the physico-mathematical conversions of the verbal statements. The analysis of Jackson’s definition points towards an expression of the electric field as an infinite series, owing to the associated “limiting process” of the test charge. However, it brings out the necessity of a postulate regarding the existence of charges, which nevertheless follows from the definition of quantity. Consequently, I explain the notion of undecidable charges, which act as the middle way to resolve the contradiction regarding the Millikan–Fletcher oil drop experiment. In passing, I provide a logico-linguistic analysis, in physico-mathematical terms, of two verbal statements of Maxwell in relation to his definition of the electric field, which suggests Maxwell’s conception of the dependent origination of distance and charge, and that of emptiness in the context of relative vacuum. This work is an appeal for the dissociation of the categorical disciplines of logic and physics and, at large, for a fruitful merger of Eastern philosophy and Western science. Nevertheless, it remains open how the reader relates to this work, which is the essence of emptiness.
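The dimensional point at the heart of this abstract — that a dimensioned quantity and the bare number 0 are not interchangeable — can be made mechanical. A minimal Python sketch, where the `Quantity` class and the exponent tuples over (M, L, T, Q) are my own illustrative assumptions, not the paper's formalism:

```python
# Minimal sketch of dimension-checked quantities. The Quantity class and
# the (mass, length, time, charge) exponent tuples are illustrative
# assumptions, not the paper's own formalism.
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims  # exponents of (mass, length, time, charge)

    def __add__(self, other):
        # Addition is defined only between quantities of identical dimension;
        # adding a bare number (like 0) is rejected on dimensional grounds.
        if not isinstance(other, Quantity) or self.dims != other.dims:
            raise TypeError("incompatible dimensions")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication adds dimension exponents: [M] * [L] -> [M L].
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

charge = Quantity(1.6e-19, (0, 0, 0, 1))  # a [Q] quantity
length = Quantity(0.5, (0, 1, 0, 0))      # an [L] quantity
print((charge * length).dims)             # combined dimension exponents
# charge + 0 raises TypeError: "0" is a bare number, not a [Q] quantity,
# which is precisely the inconsistency the abstract attributes to "q tends to 0".
```

The type error plays the role of the dimensional objection: the formal system simply has no well-formed reading of a dimensioned quantity "tending to" the dimensionless number 0.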
Introduction to mathematical logic, part 2. Textbook for students in mathematical logic and foundations of mathematics. Platonism, Intuition, Formalism. Axiomatic set theory. Around the Continuum Problem. Axiom of Determinacy. Large Cardinal Axioms. Ackermann's Set Theory. First order arithmetic. Hilbert's 10th problem. Incompleteness theorems. Consequences. Connected results: double incompleteness theorem, unsolvability of reasoning, theorem on the size of proofs, diophantine incompleteness, Loeb's theorem, consistent universal statements are provable, Berry's paradox, incompleteness and Chaitin's theorem. Around Ramsey's theorem.
The formalist philosophy of mathematics (in its purest, most extreme version) is widely regarded as a “discredited position”. This pure and extreme version of formalism is called by some authors “game formalism”, because it is alleged to represent mathematics as a meaningless game with strings of symbols. Nevertheless, I would like to draw attention to some arguments in favour of game formalism as an appropriate philosophy of real mathematics. For the most part, these arguments have not yet been used or were neglected in past discussions.
Recently Feferman has outlined a program for the development of a foundation for naive category theory. While Ernst has shown that the resulting axiomatic system is still inconsistent, the purpose of this note is to show that nevertheless some foundation has to be developed before naive category theory can replace axiomatic set theory as a foundational theory for mathematics. It is argued that in naive category theory currently a ‘cookbook recipe’ is used for constructing categories, and it is explicitly shown with a formalized argument that this “foundationless” naive category theory therefore contains a paradox similar to the Russell paradox of naive set theory.
Reverse mathematics studies which subsystems of second order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert’s program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte.

Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only omega-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a Pi-1-1 complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than Pi-1-1-CA0.
I argue for the Wittgensteinian thesis that mathematical statements are expressions of norms, rather than descriptions of the world. An expression of a norm is a statement, like a promise or a New Year's resolution, which says that someone is committed or entitled to a certain line of action. An expression of a norm is not a mere description of a regularity of human behavior, nor is it merely a descriptive statement which happens to entail a norm. The view can be thought of as a sort of logicism for the logical expressivist: a person who believes that the purpose of logical language is to make explicit the commitments and entitlements that are implicit in ordinary practice. The thesis that mathematical statements are expressions of norms is a kind of logicism, not because it says that mathematics can be reduced to logic, but because it says that mathematical statements play the same role as logical statements. I contrast my position with two sets of views: an empiricist view, which says that mathematical knowledge is acquired and justified through experience, and a cluster of nativist and apriorist views, which say that mathematical knowledge is either hardwired into the human brain, or justified a priori, or both. To develop the empiricist view, I look at the work of Kitcher and Mill, arguing that although their ideas can withstand the criticisms brought against empiricism by Frege and others, they cannot reply to a version of the critique brought by Wittgenstein in the Remarks on the Foundations of Mathematics. To develop the nativist and apriorist views, I look at the work of contemporary developmental psychologists, such as Gelman and Gallistel and Karen Wynn, as well as the work of philosophers who advocate the existence of a mathematical intuition, such as Kant, Husserl, and Parsons. After clarifying the definitions of "innate" and "a priori," I argue that the mechanisms proposed by the nativists cannot bring knowledge, and that the existence of the mechanisms proposed by the apriorists is not supported by the arguments they give.
DEFINING OUR TERMS: A “paradox” is an argumentation that appears to deduce a conclusion believed to be false from premises believed to be true. An “inconsistency proof for a theory” is an argumentation that actually deduces a negation of a theorem of the theory from premises that are all theorems of the theory. An “indirect proof of the negation of a hypothesis” is an argumentation that actually deduces a conclusion known to be false from the hypothesis alone or, more commonly, from the hypothesis augmented by a set of premises known to be true. A “direct proof of a hypothesis” is an argumentation that actually deduces the hypothesis itself from premises known to be true. Since 'appears', 'believes' and 'knows' all make elliptical reference to a participant, it is clear that 'paradox', 'indirect proof' and 'direct proof' are all participant-relative.

PARTICIPANT RELATIVITY: In normal mathematical writing the participant is presumed to be “the community of mathematicians” or some more or less well-defined subcommunity, and therefore omission of explicit reference to the participant is often warranted. However, in historical, critical, or philosophical writing focused on emerging branches of mathematics such omission often invites confusion. One and the same argumentation has been a paradox for one mathematician, an inconsistency proof for another, and an indirect proof to a third. One and the same argumentation-text can appear to one mathematician to express an indirect proof while appearing to another mathematician to express a direct proof.

WHAT IS A PARADOX'S SOLUTION? Of the above four sorts of argumentation only the paradox invites “solution” or “resolution”, and ordinarily this is to be accomplished either by discovering a logical fallacy in the “reasoning” of the argumentation, or by discovering that the conclusion is not really false, or by discovering that one of the premises is not really true.
Resolution of a paradox by a participant amounts to reclassifying a formerly paradoxical argumentation either as a “fallacy”, as a direct proof of its conclusion, as an indirect proof of the negation of one of its premises, as an inconsistency proof, or as something else, depending on the participant's state of knowledge or belief. This illustrates why an argumentation which is a paradox to a given mathematician at a given time may well not be a paradox to the same mathematician at a later time.

The present article considers several set-theoretic argumentations that appeared in the period 1903-1908. The year 1903 saw the publication of B. Russell's Principles of mathematics [Cambridge Univ. Press, Cambridge, 1903; Jbuch 34, 62]. The year 1908 saw the publication of Russell's article on type theory as well as Ernst Zermelo's two watershed articles on the axiom of choice and the foundations of set theory. The argumentations discussed concern “the largest cardinal”, “the largest ordinal”, the well-ordering principle, “the well-ordering of the continuum”, the denumerability of ordinals and the denumerability of reals. The article shows that these argumentations were variously classified by various mathematicians and that the surrounding atmosphere was one of confusion and misunderstanding, partly as a result of failure to make or to heed distinctions similar to those made above. The article implies that historians have made the situation worse by not observing or not analysing the nature of the confusion.

RECOMMENDATION: This well-written and well-documented article exemplifies the fact that clarification of history can be achieved through articulation of distinctions that had not been articulated (or were not being heeded) at the time. The article presupposes extensive knowledge of the history of mathematics, of mathematics itself (especially set theory) and of philosophy. It is therefore not to be recommended for casual reading.
AFTERWORD: This review was written at the same time Corcoran was writing his signature “Argumentations and logic” [249], which covers much of the same ground in much more detail. https://www.academia.edu/14089432/Argumentations_and_Logic
The presumptions underlying quantum mechanics make it relevant to a limited range of situations only; furthermore, its statistical character means that it provides no answers to the question ‘what is really going on?’. Following Barad, I hypothesise that the underlying mechanics has parallels with human activities, as used by Barad to account for the way quantum measurements introduce definiteness into previously indefinite situations. We are led to consider a subtle type of order, different from those commonly encountered in the discipline of physics, and yet comprehensible in terms of concepts considered by Barad and Yardley such as oppositional dynamics or ‘intra-actions’. The emergent organisation implies that nature is no longer fundamentally meaningless. Agencies can be viewed as dynamical systems, so we are dealing with models involving interacting dynamical systems. The ‘congealing of agencies’ to which Barad refers can be equated to the presence of regulatory mechanisms restricting the range of possibilities open to the agencies concerned.
By the end of his life Plato had rearranged the theory of ideas into his teaching about ideal numbers, but no written records have been left. The Ideal mathematics of Plato is present in all his dialogues. It can be clearly grasped in relation to the effective use of mathematical modelling. Many problems of mathematical modelling were built into the foundation of the method when Plato's three-level idealism was cut down to the single-level “ideism” of Aristotle. The real, ideal numbers of Plato's Ideal mathematics eliminate many long-standing mathematical problems, extend the capabilities of modelling, and improve mathematics.