There's a standard trope in the government policy arena around the idea of 'picking winners'. It is based on the argument that government ought not to subsidize specific companies, because this amounts to selecting which companies will be more successful in the marketplace. The idea is that the marketplace should determine which companies succeed or fail, and that government should not interfere in the process.
A typical example of this argument can be found in a Financial Post article from last May. As the article reports, “Nobody wants to see an industry go bye-bye,” Mark Milke, a senior fellow at the Fraser Institute and the author of Tuesday’s report, said in an interview. “But the fact is, it doesn’t make sense to subsidize one business over another. You pick winners or losers, and you often end up with …”
There's a certain merit to this argument. Selecting which company to support requires an evaluation process, and this usually involves weighing one business case against another. Often, the result is decided in favour of whichever company can write the best business case, without proper reference to market conditions. And often, market conditions change, unexpected variables come into play, and what seemed like a good idea at the time turns out to be nonviable; indeed, it would have died much sooner, and at much lower cost, had it not been publicly funded.
A similar case exists in linguistics and language theory. Consider how the question of what to name things is answered. Very often, an academic group or a government agency will seek to define what something is named. The word 'hopefully' is like that: purists constantly complain that the word means one thing, but the popular will prevails, and hopefully we'll soon see the end of this debate. Or the word 'bae', which purists say should not be used at all.
On the web we're seeing a similar phenomenon with respect to categorization. The term 'mule' might have a specific range of meanings, and these may be itemized in the dictionary. Run a search on the hashtag '#mule', however, and you get a much more fine-grained set of associations, running the gamut from the animal to the knife to the footwear to, well, whatever... The public's sense of the meaning of the word, it seems, is far more nuanced than the dictionary meaning. And indeed, the dictionary meaning can over time be completely eclipsed by the public use of the term - as for example with the term 'hash' itself.
This is interesting because we can now begin to assess these public uses of a term computationally. In today's OLDaily, for example, I write about the Toronto Deep Learning demonstrations, which show how an artificial neural network can classify images based on public usage of the terms associated with the image - and this with no prior instruction on what the image represents or means. Now these early experiments, as convincing as they are in one dimension, show the need for much more development in others. They at best begin to approach the dictionary-definition level of classification, and need to be refined considerably to reflect the public-usage level of classification.
But it raises the question of whether it's a good idea to define words formally at all. By this I don't refer to the usual practice of coming to some agreement on the usage of a term. This is an ad hoc process typically used for specific domains and purposes. Hence, for example, a software development team needs to define the term 'class' formally as a type of software structure, while a school board will define 'class' completely differently. There's nothing wrong with that; it's the logical equivalent of everyone agreeing to use telephones to communicate with each other.
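To illustrate how the same word can carry two perfectly serviceable formal definitions in two different domains, here is a minimal sketch in Python. All the names here (`Account`, `SchoolClass`, the sample data) are my own hypothetical illustration, not drawn from any real system:

```python
from dataclasses import dataclass, field

# A software team's formal definition of 'class': a type of software
# structure bundling data with behaviour (Python's built-in sense).
class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: float) -> float:
        self.balance += amount
        return self.balance

# A school board's entirely different 'class': a scheduled group of
# students with a teacher -- here just a record, not a type of structure.
@dataclass
class SchoolClass:
    subject: str
    teacher: str
    students: list = field(default_factory=list)

acct = Account("pat")
acct.deposit(25.0)
homeroom = SchoolClass("History", "Ms. Nguyen", ["Ana", "Raj"])
print(acct.balance)            # 25.0
print(len(homeroom.students))  # 2
```

Neither definition is 'wrong'; each community has simply agreed on a usage that serves its own purposes.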
But imagine the case were these words defined centrally, by the government. We actually have such a case, in French, with the Académie française, which defines the standard meanings of terms in the language. One might argue that it's fighting a losing battle. On the one hand, the French language itself drifts with the development of new technology and nomenclature, resulting in the 'Anglicization' of the language (hence the academy's recommendation to use 'courriel' instead of 'email'). On the other, even within France it faces resistance from regional languages such as Corsican, Breton and Catalan. Here in Canada, the French language spoken in Quebec also varies from the standard, and in New Brunswick the local Acadian population speaks a version of French known as Chiac.
Where the argument seems to be turned on its head, however, is with the notion of educational standards. The very same organizations that say you should not 'pick winners' when it comes to subsidizing businesses turn around and enthusiastically support 'fundamentals' in learning, 'getting back to basics', curricular standards, standardized testing, and more. But they are, I would argue, making the same mistake organizations make when they 'pick winners' in these other domains.
Let's take everyone's favourite example, mathematics. This subject is perennially recommended, along with reading and writing, as "foundational". The international OECD PISA (Programme for International Student Assessment) tests, for example, assess mathematics skills for 15-year-olds along with proficiency in language and science. There is without question an element of 'picking winners' in these tests.
For example, consider the statements released recently from the organization: "In 2012, some economies also participated in the optional assessments of Problem Solving and Financial Literacy." What makes these, one wonders, foundational (and what justifies PISA's decision to evaluate "economies", as opposed to countries)? And within the mathematics assessment itself, PISA organizes the mathematical domain into processes and underlying mathematical capabilities, content knowledge, and contexts.
The processes and capabilities include such things as "identifying the mathematical aspects of a problem situated in a real-world context and identifying the significant variables" and "recognising aspects of a problem that correspond with known problems or mathematical concepts, facts, or procedures." In 'content' there is a list of items including functions, algebraic expressions, co-ordinate systems, mathematical operations, and more (the full list is on page 36). The 'contexts' are personal, occupational, societal and scientific.
From my perspective there is a lot of 'picking winners' here. I would certainly raise the question of whether this set of skills represents what is actually needed in society, or whether the body of experts, functioning much like an educational version of les immortels, is seeking to define 'knowledge' in its own image. Certainly, as someone who has studied in some depth the philosophy of mathematics I could at least suggest alternatives that might have thrived on their own, had it not been for the formalization process 'distorting the marketplace'.
For example, the basic operations in mathematics are depicted as quantificational; that is, the core of mathematics is focused on the quantities, or amounts, of things. These concepts are then applied to other domains. So the thinking process is something like this: you approach a problem, 'decode' it into mathematical terms (the document actually uses the term "decode"), and then apply these (sometimes in other contexts, such as geometry) to identify a solution.
But as Russell and Whitehead showed in Principia Mathematica, mathematics is co-extensive with set theory. They are one and the same thing. What that means is that if we start talking about categories and groups of things, instead of numbers of things, we can perform exactly the same inferences. Indeed, pursuing the same line of thinking further, what is really foundational in mathematics is what we find in the proposed axiom systems (such as Peano arithmetic): the properties of symmetry, transitivity, and reflexivity.
These are much more foundational, and much more powerful. Symmetry, for example, is the basis for equivalence ("if x = y, then y = x") but also the basis for concepts in aesthetics and design, physiology, and more. What would a thinking system based on symmetries look like? We'll never know, because everybody in the world is taught, first, to think in numbers, and then second, to encode everything in numbers. Perhaps it would make more sense to think in terms of operations, or similarities. But we've picked a winner: everyone must reason in terms of numbers.
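To make the point concrete, here is a minimal sketch in Python, with invented example data, of reasoning in categories rather than quantities: a relation that is reflexive, symmetric and transitive partitions a set into equivalence classes, and membership inferences then proceed without any arithmetic at all. The `equivalence_classes` helper and the '#mule' word senses are my own illustration:

```python
def equivalence_classes(items, pairs):
    """Partition `items` under the smallest equivalence relation
    containing `pairs`. Reflexivity, symmetry and transitivity are
    supplied by the union-find closure, not by the input pairs."""
    parent = {x: x for x in items}

    def find(x):
        # Follow links to the representative of x's class.
        while parent[x] != x:
            x = parent[x]
        return x

    for a, b in pairs:
        # Merging representatives gives symmetry and transitivity.
        parent[find(a)] = find(b)

    classes = {}
    for x in items:
        # Reflexivity: every item lands in some class, if only its own.
        classes.setdefault(find(x), []).append(x)
    return sorted(sorted(c) for c in classes.values())

# The 'same sense' pairs never mention a quantity, yet we can still
# infer which senses of 'mule' belong together.
items = ["mule", "pack animal", "sandal", "mule shoe"]
pairs = [("mule", "pack animal"), ("sandal", "mule shoe")]
print(equivalence_classes(items, pairs))
# [['mule', 'pack animal'], ['mule shoe', 'sandal']]
```

The inference "mule goes with pack animal" here is exactly parallel to equational reasoning with numbers, but no number appears anywhere.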
This is of particular concern to me because a large domain of reasoning that is also coextensive with set theory is not taught and not valued by OECD: logic. This covers everything from critical thinking to formal principles such as quantification theory, propositional inferences, modal logic, and more. The state of logical literacy (if I dare use the term) in the world, and even as exhibited among our experts and leaders, is appalling. But there's no OECD test for modus tollens, no league tables (with numerical rankings) of skills in cross-classification.
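For a sense of what such a test might even ask, here is a small hypothetical sketch in Python that checks an argument form for validity by brute force over truth assignments. Modus tollens passes, while the superficially similar fallacy of denying the antecedent fails:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values
    makes every premise true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Material implication: (a -> b) is false only when a is true and b false.
implies = lambda a, b: (not a) or b

# Modus tollens: from (p -> q) and (not q), infer (not p).
print(valid([lambda p, q: implies(p, q),
             lambda p, q: not q],
            lambda p, q: not p))   # True: valid

# Denying the antecedent: from (p -> q) and (not p), (not q) does NOT follow.
print(valid([lambda p, q: implies(p, q),
             lambda p, q: not p],
            lambda p, q: not q))   # False: invalid
```

Nothing in this exercise requires a single number, yet it is as rigorous as any arithmetic drill.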
So where does this leave us? I think it is a substantial prima facie argument (for those of you who understand the term) for questioning the application of standardized curricula - in other words, we should question our process of 'picking winners' among the various domains of knowledge. We should instead be willing to countenance the idea that different people can become expert in different types of skills, and let these interact in (if you will) a knowledge marketplace.
What would this look like? How would it be structured? Well, this is the basis for a lot of the stuff I've written. But it's important to understand at this juncture that what I seek to achieve with my own thinking and my own approach is something very different from what the supporters of standardized learning outcomes seek. I am looking for a mechanism that will allow new types of learning - perhaps types that are fundamentally ineffable (that is, cannot be expressed in language).
The domain of learning available to our people, and especially our young, should be at least as rich and as nuanced as the language they learn and the marketplace they inhabit. Shouldn't it?