Saturday, November 29, 2014

Knowledge as Recognition

This is an assignment for a grade 12 philosophy course.


Most theories of knowledge depict knowledge as a type of belief. The idea, for example, of knowledge as 'justified true belief' dates back to Plato, who in Theaetetus argued that having a 'true opinion' about something is insufficient to count as knowledge of it.

In my view, knowledge isn't a type of belief or opinion at all, and knowledge isn't the sort of thing that needs to be justified at all. Instead, knowledge is a type of perception, which we call 'recognition', and knowledge serves as the justification for other things, including opinions and beliefs.

Philosophical Enquiry

One of the long-standing problems of philosophy concerns the justification of knowledge. Noam Chomsky called this problem Plato's Problem. It results from what he calls the "poverty of the stimulus." The evidence and information we receive from the senses, he argues, is insufficient to justify the knowledge we have.

A child, for example, can learn a language even though not explicitly instructed. The knowledge of a language is a way of knowing about universals, because we can generate an infinite number of different sentences in a language. But our experiences are always finite and limited. No matter how much we experience, we can always imagine, and express in language, something that goes beyond our experiences.

In the field of 'epistemology' - that is, the philosophy of knowledge - this is known as the problem of the justification of induction. How can we know about general properties, such as colours or shapes, when we only have limited experiences of them? How can we know universal truths, such as "2+2=4", when we have only finite experiences? We can't! This was the conclusion Descartes reached in Meditations, and he argued that experience is insufficient and unreliable. We must rely on our rationality, our innate knowledge pre-written in our mind like a "mark of God", to make sense of the world.


Cartesian scepticism, as it came to be called, marked the beginning of a tradition in philosophy called rationalism. The rationalists believed that knowledge comes first from the mind, and that through the application of the principles of rationality we can come to know about the world. That is not to discount the role of experience and perception. But these, argue the rationalists, are unreliable.

Perhaps the most important of the rationalists was Immanuel Kant. In the Critique of Pure Reason he described the "transcendental deduction" in which he established the "necessary conditions of the possibility of experience". Though direct experience of the external world is impossible, he argues, we can understand its fundamental structure because our experiences of it would be impossible without it. These fundamental structures are the principles of space and time, and these are governed by the principles of pure logic.

In the 20th century, the logical positivists attempted to realize Kant's vision by constructing a universal theory of knowledge based on fundamental data from experience - sense data - and logical inferences from that data. As described by A.J. Ayer in Language, Truth and Logic, knowledge begins with the inference of general principles from observation language, and then proceeds by means of verification of these principles through the process of making and testing predictions. The meaning of a sentence was equivalent to the conditions of its verification; a sentence that could not be verified by experience was, literally, meaningless.

Logic is, in essence, pure abstraction, produced by thought alone. Without the material of observation statements, it has no meaning on its own. Logic can be used to derive knowledge from experience, but not to produce knowledge by itself. Logical and mathematical truths are true only within the language of logic itself; they are then applied to statements about experience and used to infer new statements about experience. So the theory goes, at least.

Scientific Philosophy
In the 20th century the sciences flourished, greatly bolstered by the application of logic and mathematics to physical phenomena. Our understanding of language and meaning led to the development of computer science, which in turn led to the information revolution.

The scientific model described by Ayer was described in much more detail by philosophers such as Carl Hempel, who formalized the method of hypothesis formation and prediction into what he called the Deductive-Nomological Model. Another model was created by Karl Popper, who emphasized falsification rather than verification. Instead of proving that a scientific theory is true, argued Popper, we need to try to prove that theories are false.

Even our study of the mind was impacted; based on logical positivist principles the psychologist B.F. Skinner and the philosopher Gilbert Ryle developed and popularized the science of behaviourism, which reduced all statements about mental phenomena (such as beliefs, desires and hopes) to statements about physical behaviours.

However, the science of logical positivism was based on a critical flaw, which was first described by W.V.O. Quine in his important paper 'Two Dogmas of Empiricism'. Logical positivism depends, he writes, on two principles that turn out to be false:
  • the analytic-synthetic distinction, which distinguishes between observation statements and pure logic
  • the principle of reduction, which argues that all knowledge can be reduced to observation and perception
Our observation of the world, our perceptions, our experiences - these are all theory-laden. Scientists work in what Thomas Kuhn called paradigms, and these define not only the problems that need to be solved and the principles we use to solve them, but also the meanings of the words we use and what counts (and doesn't count) as observation and data.


Today, we don't know what exists, and what is just an artifact of our mind or of our scientific theories. We are immersed in our world. The meanings of our words are not fixed and determined by observations and reality, but vary and change, as Ludwig Wittgenstein argues, by the way we use them. Our languages are not constructions we create from experience and reason, but games we play with each other in the day-to-day fact of existence.

The theoretical stance we adopt determines what we know (or at least, what we think we know) about the world. One major stance is called 'realism' - this is the idea that we can know that there is a real world, and that science is the process of studying that world. The best evidence of the reality of the world, according to this approach, is that it exists. "Here is a hand," says G.E. Moore, holding out a hand. What more proof could you have? What more proof could you need?

But realism has its sceptics. Not everything that we perceive is 'real'. Take, for example, the colour red. Is the colour red real? Plato thought it was, and that it existed on a plane of ideal forms (along with goodness and virtue, justice and beauty). But even as early as 700 years ago, philosophers like William of Ockham were questioning this doctrine. "Do not multiply entities beyond necessity," argued Ockham in the first formulation of what we now call Ockham's Razor.

Contrasting realism is the philosophy called phenomenology. Most completely described by Edmund Husserl, it is the study of the structure of human experience. This experience typically involves what Husserl calls intentionality, or the property of being directed outward toward the world. The idea is that experience represents or 'intends' external objects or properties. Experience, therefore, is something that is interpreted through a process of reason and reflection. This approach to phenomena can be illuminating; Jacques Derrida, for example, finds through the interpretation of language the essence of hidden meanings and what he calls différance: the meaning of a word according to the alternatives to that word imagined by the speaker.

In contemporary philosophy this has evolved into the idea that knowledge and reality are contained in representations, which are essentially mental models constructed as the result of experience. Thus, for example, when we say that a proposition P is 'true', what we mean is that 'P is true in M', where M is a model or representation of the world. Most science today is conducted through the creation and testing of models or representations as a whole. One example of this approach is described in Bas C. van Fraassen's The Scientific Image, which describes what he calls 'constructive empiricism'.

Representationalism has also been advanced as a theory of mind. In his book Representations Jerry Fodor outlines the thesis that our mental states are composed of mental representations, which in turn are created out of what he calls the language of thought. Like Chomsky, Fodor believes that the capacity to build these representations is innate, and that we are born with a fully formed language of thought already in place. Knowledge, therefore, is a true and justified statement in this language, and a collection of such statements combine to form a representational state.

Toward a Theory of Knowledge as Recognition

The history of philosophy is the history of the attempt to justify knowledge through some mechanism of justifying statements describing states of affairs in the world. But this attempt has been thwarted by the fact that we do not have direct experience of the world, and hence are forced in one way or another to study ourselves in an attempt to study the world.

Ultimately, this is unsatisfying. Logic and language require that statements be true or false, or that we have what are called 'attitudes' toward propositions. If knowledge is formed of propositions, therefore, there will always be the question of what comes before knowledge to justify or otherwise lead us to form these attitudes - that a proposition is believed, that it is probable, that it is true, that it is necessary, that it is intentional, and the like. But knowledge should be the foundation of these attitudes, not the result of them. The idea, therefore, that knowledge is composed of statements in a language, or propositions in a representation, is inherently self-contradictory.

What if knowledge were something else? What if it were something that is subsymbolic? What if language was useful as a way to express knowledge, but not what knowledge actually is?

In the 1700s the Scottish philosopher David Hume conducted a sceptical enquiry of human reason and reached much the same conclusion. Among other aspects of knowledge, he examined the principle of causation. Without causation, we do not have any coherent concept of science, or of explanation, or of human action and morality, at all. So if anything is an element of knowledge, cause and effect is.

But cause and effect cannot be derived from experience, and it cannot be derived from pure reason. The idea that, because one event happens, another necessarily follows, cannot be derived from any form of inference at all. But, he observes, it is universally believed, and not only by lecturers and scientists, but by the common man, small children, and even animals! So we have knowledge, even if we don't have the language to express it.

In his Treatise of Human Nature, Hume argued that we arrive at principles like causation through the process of custom and habit. "Men will scarce ever be persuaded, that effects of such consequence can flow from principles, which are seemingly so inconsiderable, and that the far greatest part of our reasonings with all our actions and passions, can be derived from nothing but custom and habit." And "Thus it appears, that the belief or assent, which always attends the memory and senses, is nothing but the vivacity of those perceptions they present."

Today we call this form of learning 'associationism' and it forms the basis for theories of neural connectivity. Hume's basic principle of contiguity, where one idea or impression is commonly followed by another, is an instance of the principle of association described by Donald O. Hebb in what we today call Hebbian learning, the basic learning rule for neural networks.
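
Hume's associative picture maps neatly onto Hebb's rule, under which a connection strengthens in proportion to how often the units it joins are active together. Here is a minimal sketch in Python; the toy data, learning rate, and function name are my own illustration, not anything from Hebb or Hume:

```python
# A minimal sketch of Hebb's rule: connections between units that are
# repeatedly active together grow stronger ("fire together, wire together").

def hebbian_update(weights, pre, post, rate=0.1):
    """Strengthen each connection in proportion to the co-activity of its
    pre- and post-synaptic units: delta_w = rate * pre * post."""
    return [w + rate * x * post for w, x in zip(weights, pre)]

# Two input features; only the first ever co-occurs with an active output.
weights = [0.0, 0.0]
for _ in range(10):
    pre = [1.0, 0.0]   # the first feature is always present...
    post = 1.0         # ...whenever the output unit fires
    weights = hebbian_update(weights, pre, post)

print(weights)  # the repeatedly paired connection grows; the other stays at 0.0
```

No inference or symbolic representation is involved: the association simply accumulates with repeated exposure, which is the point of Hume's "custom and habit."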

When we associate experiences in our mind, we aren't performing any sort of inference on them, and we don't even typically represent them in a language. We see our child's face every day, and we don't describe it to ourselves, we simply come to recognize this particular collection of features as it is presented to us every day. To 'know' that one sort of thing causes another is simply to recognize this circumstance each time we see it. To be able to read, to infer, and even to reason, is to recognize common word forms, syllogisms, or commonalities. The recognition, and the fact of recognition, is the knowledge and the justification for knowledge all rolled into one - a direct, non-inferential form of knowledge.


Ayer, A.J. Language, Truth and Logic. 1936, London: Victor Gollancz Ltd.

Chomsky, Noam. Modular Approaches to the Study of the Mind. San Diego: San Diego State University Press, 1984.

Descartes, René.  Meditations. Translated by John Veitch, 1901. 

Fodor, Jerry. The Language of Thought, Harvard University Press, 1975

Fodor, Jerry. Representations: Philosophical Essays on the Foundations of Cognitive Science, Harvard Press (UK) and MIT Press (US), 1979

Hempel, C. and P. Oppenheim., 1948, ‘Studies in the Logic of Explanation.’, Philosophy of Science, 15: 135–175.

Hume, David. A Treatise of Human Nature. Project Gutenberg, 2010.

Husserl, E., 1963, Ideas: A General Introduction to Pure Phenomenology. Trans. W. R. Boyce Gibson. New York: Collier Books.

Kant, Immanuel. Critique of Pure Reason (1929: Norman Kemp Smith translation).

Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press (1970, 2nd edition, with postscript).

Ockham's Razor. Encyclopedia Britannica. 

Plato (427 B.C. – 347 B.C.). Theaetetus. Farlex, Inc., 2004. Accessed 15 Mar. 2006.

Popper, Karl, 1959, The Logic of Scientific Discovery, London: Hutchinson.

Quine, W.V.O. Two Dogmas of Empiricism. The Philosophical Review 60 (1951): 20-43.

Wittgenstein, Ludwig. Philosophical Investigations. 1958: Basil Blackwell.

van Fraassen, Bas C. The Scientific Image. Oxford: Clarendon Press, 1980.

