Précis: The thesis of this article is that currents in early 20th-century positivist philosophy encouraged the development of the mathematical methods that have made AI and data science possible. Data science emerged in the 2010s, when applied statistics and AI merged with logic and natural language processing. We will very briefly survey, for the general benefit of Toronto readers, some of the philosophical perspectives that have informed this development.
Interview: Dr. Jonathan Kenigson, USA
The limits of my language are the limits of my world. Or, rather, so said a generation of great philosophers in the 1920s and 1930s who sought to sweep away the messy business of metaphysics and replace it with the tempting, comparative certainties of empirical and logical claims (this sort of reduction is sometimes termed positivism). More than half a century passed, and few paid attention to positivism after Wittgenstein deviously and successfully conspired in his Philosophical Investigations to scuttle it. War, growth, recession, prosperity, more war, and more recession seemed to prove to Canadians and the world that the existential should precede the linguistic, and that the abstract should be subordinated (as Goethe and Levinas would say) to the storm-and-stress of the tasks of life.
Metaphysics and applied technology inhabited separate spheres in Canadian research institutes until the seedlings of the AI revolution in Canada were planted at Waterloo, Toronto, and British Columbia in the early 2000s. Few reckoned then that positivist arguments would mature into the warp and woof of the theories of computation and computability that have come to constitute the AI and general language developments of the past several years. At the time, statisticians did not broadly suspect that this technology would have much impact upon their profession.
By the early 2010s, applied statisticians became interested not only in data analysis but also in bridging the so-called uncanny valley: making the machines, now in possession of some genuine communicative ability, friendly and accessible enough to join non-specialist humans in relatively uncomplicated statistical endeavors. The current decade’s obsession with statistical linguistic processing proved to be a re-assertion of the positivist notion that a nontrivial class of human utterances could be reduced to logic (by machines of sufficient complexity) with the statistical tools refined in the preceding decade. In retrospect, the Wittgenstein of the Tractatus was reborn as the ghost in the machine, come back from the dead to proclaim that our idiosyncratic natural-language utterances could be deciphered by AI into the formal calculus of propositions.
At the same moment, the Wittgenstein of the Philosophical Investigations could not have fathomed that machines could be endowed with the ability to participate in the give-and-take of our human language games. The data scientist of today exists in the gap between the late and early Wittgensteins, here to demonstrate that machines can be friendly; that they can interpret our daft mortal idiosyncrasies with peerless accuracy; and that they can do this as well within the vagaries of human language as in the austere 0s and 1s of the automaton. Statisticians and computer scientists have worked closely together to herald this “linguistic turn” of the AI revolution. Their collaboration has found fertile soil in fuzzy and probabilistic logics – those that seek to meld uncertainty and possibility with odds and statements of fact, as sketched below.
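For readers inclined to experiment, the contrast between these two logics can be sketched in a few lines of Python. The functions and example values below are illustrative assumptions rather than the apparatus of any particular library: fuzzy logic grades how true a vague statement is, while probabilistic logic (here assuming independent events) assigns odds to crisp statements, and the two combine “and” and “or” quite differently.

    # A minimal sketch contrasting fuzzy and probabilistic treatments of "and"/"or".
    # Both assign values in [0, 1], but they combine those values by different rules.

    def fuzzy_and(a: float, b: float) -> float:
        # Fuzzy conjunction (Zadeh): "A and B" is only as true as the weaker claim.
        return min(a, b)

    def fuzzy_or(a: float, b: float) -> float:
        # Fuzzy disjunction (Zadeh): "A or B" is as true as the stronger claim.
        return max(a, b)

    def prob_and(p_a: float, p_b: float) -> float:
        # Probabilistic conjunction, assuming the two events are independent.
        return p_a * p_b

    def prob_or(p_a: float, p_b: float) -> float:
        # Probabilistic disjunction by inclusion-exclusion, again assuming independence.
        return p_a + p_b - p_a * p_b

    if __name__ == "__main__":
        tall, fast = 0.7, 0.4  # illustrative degrees of truth / probabilities
        print(fuzzy_and(tall, fast), fuzzy_or(tall, fast))  # 0.4 0.7
        print(prob_and(tall, fast), prob_or(tall, fast))    # 0.28 0.82

The point of the toy example is only that the same pair of numbers yields different answers under the two regimes, which is precisely the kind of meld of uncertainty, possibility, and odds that the collaboration described above has had to negotiate.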
The logical methods employed to unite statistics and computer science are roughly 100 years old; they emerged in the Soviet Union partly in response to the positivist paradigm in the West.
Probabilistic logic was motivated by the same linguistic turn that had inspired the two Wittgensteins to adopt their respective positivist and anti-positivist sentiments. These considerations emerged alongside a parallel turn within Soviet statistics in the 1930s, in which mathematicians were enamored of positivism because of its hard-headed dialecticism and concreteness. Kolmogorov, Fomin, and others at the Soviet Academy and in Moscow, Leningrad, and Kyiv appreciated the degree to which theories of formal statistics were inherently inseparable from axiomatic logic. They formulated systems and theorems that could handle the contingencies of well-formed natural languages but did not have the computing power available to test such theories on real data sets.
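The axiomatic core that Kolmogorov published in 1933 is compact enough to recall here in standard notation: for a sample space Ω equipped with a collection of events, a probability measure P must satisfy

    \[
      P(A) \ge 0 \ \text{for every event } A, \qquad
      P(\Omega) = 1, \qquad
      P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
      \ \text{for pairwise disjoint } A_1, A_2, \ldots
    \]

Everything in modern statistical prediction, including the probabilistic logics discussed above, is built on consequences of these three requirements.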
In light of Soviet contributions, formal logic and formal statistics have remained inseparable bedfellows since the 1930s, with advances in statistical theory occasioning progress in model theory, type theory, recursion theory, and quantification theory, among other fields of logic too numerous to name. Applied statistics, however, managed to remain separate from formal logic for almost a century after Kolmogorov wrote. The AI revolution inexorably ended the orphan status of applied statistics and demanded that theoreticians, end-users, and AI itself collaborate with the statisticians who actually employ probabilistic logic to make predictions. Consequently, data science emerged: a motley and amorphous field in which those who made predictions without proofs were lauded for their productivity, while theoretical statisticians quietly proved that common techniques were justified after those techniques had already been in use.
Within the next decade, we will almost indubitably see the emergence of quantum data science, in which probabilistic logic is replaced by the yet more general quantum logic. Machines will continue to improve their capacities for affective and subtle communication until – very quickly – they develop independent personalities, ambitions, manners of speaking, and manners of computing. Quantum logic will dovetail with data science to produce machines that demand deep and difficult ethical consideration. They will seem conscious and human, and I propose that, according to some theories of consciousness and sentience, they may already possess, or may come to develop, moods, feelings, insecurities, anxieties, and perseverations. In my estimation, it is already high time to begin earnest deliberations about how humans will ethically interact with such AI as it develops. Kolmogorov’s probability is sufficiently general to address many of the purely mathematical dilemmas that will arise in quantum logic when it inevitably merges with AI. This hundred-year-old paradigm will permit the logical development of quantum neurolinguistic and AI-mediated theories of machine consciousness that will upend all distinctions among the physical, mathematical, logical, and biological modes of thought.
Interesting quote from the interview: “At the dawn of the 2000s, intelligent machines were like postmodern ghosts, borne backward from the future to beckon the world into the speculative fantasies of science fiction, technofantasy, and dystopian rumination.”
Dr. Jonathan Kenigson, FRSA