Nano-Intentionality: Why We Have Little to Fear From "Thinking Machines"
28 November 2016 | In his commentary, first published on Edge.org, cognitive biologist Tecumseh Fitch explains, in response to the current Semester Question, why he fears not an uprising of artificial intelligence but rather a catastrophic system failure.
Despite vast increases in computing power – the raw number of bits processed per second – current computers do not think in the way that we do (or a chimpanzee or a dog does). Silicon-based computers lack a crucial capacity of organic minds: the ability to change their detailed material form – and thus their future computations – in response to events in the world. Without this specific capacity (which elsewhere I have dubbed "nano-intentionality"), information processing alone does not amount to meaningful thought, because the symbols and values being computed lack any intrinsic causal connection to the real world. Silicon-based information processing requires interpretation by humans to become meaningful, and will for the foreseeable future. I'll explain this below (or google "nano-intentionality" for the full story), but the bottom line is that at present we have little to fear from thinking machines, and more to fear from the increasingly unthinking humans who use them.
About the author:
Tecumseh Fitch is an evolutionary biologist and cognitive scientist. He is Professor of Cognitive Biology at the University of Vienna. Fitch's interests include bioacoustics and biolinguistics, specifically the evolution of speech, language and music.
What exactly is this property present in biological, but not silicon, computers? Fear not that I am invoking some mystical élan vital: this is an observable, mechanistic property of living cells that evolved via normal Darwinian processes. No mysticism or "invisible spirit" lurks in my argument. At its heart, nano-intentionality is the capacity of cells to respond to events and changes in their environment by rearranging their molecules and changing their form. It is present in an amoeba engulfing a bacterium, a muscle cell boosting myosin levels in response to jogging, or (most relevantly) a neuron extending its dendrites in response to its local neuro-computational environment. Nano-intentionality is a basic, irreducible, and undeniable feature of life on Earth that is not present in the rigid, etched silicon chips that form the heart of modern computers. Because this physical difference between brains and computers is a simple brute fact, the issue open to debate is what significance this fact has for more abstract philosophical issues concerning "thought" and "meaning". This is where the argument gets a bit more complicated.
The philosophical debate starts with Kant's observation that our minds are irrevocably separated from the typical objects of our thoughts: physical entities in the world. We gather evidence about these objects (via photons or air vibrations or molecules they release) but our minds/brains never make direct contact with them. Thus, the question of how our mental entities (thoughts, beliefs, desires…) can be said to be "about" things in the real world is surprisingly problematic. Indeed, this problem of "aboutness" is a central problem in the philosophy of mind, at the heart of decades-long debates between philosophers like Dennett, Fodor, and Searle. Philosophers have rather unhelpfully dubbed this putative mental "aboutness" intentionality (not to be confused with the everyday English meaning of "doing something on purpose"). Issues of intentionality (in the philosopher's sense) are closely tied to deep issues about phenomenal consciousness, often framed in terms of "qualia" and the "hard problem" of consciousness, but they address a more basic and fundamental question: how can a mental entity (a thought – a pattern of neural firing) be in any sense "connected" to its object (a thing you see or the person you are thinking about)?
EVENT TIP: Panel discussion on the Semester Question on 16 January
Following a keynote by Wolfgang Wahlster (Director of the German Research Center for Artificial Intelligence, DFKI/Saarland University) on "Artificial Intelligence in Everyday Life: Better Than Humans?", he will be joined on the panel by civil law expert Christiane Wendehorst (University of Vienna), neuroscientist Claus Lamm (University of Vienna), robot psychologist Martina Mara (Ars Electronica Futurelab), and Lukas Kinigadner (founder and CEO of the start-up Anyline). Moderator: Rainer Schüller, daily newspaper "Der Standard".
Time: Monday, 16 January 2017, 6 p.m.
Place: Großer Festsaal of the University of Vienna, Universitätsring 1, 1010 Vienna
The skeptical, solipsistic answer is: there is no such connection; intentionality is an illusion. This conclusion is false in at least one crucial domain (already highlighted by Schopenhauer 200 years ago): the one place where mental events (desires and intentions, as instantiated in neural firing) make contact with the "real world" is within our own bodies (e.g., at the neuromuscular junction). In general, the plasticity of living matter, and of neurons in particular, means that a feedback loop directly connects our thoughts to our actions, percolating back through our perceptions to influence the structure of neurons themselves. This loop is closed every day in our brains (indeed, if you remember anything about this essay tomorrow, it is because some neurons in your brain changed their form, weakening or strengthening synapses, extending or withdrawing connections…). Precisely this feedback loop cannot in principle be closed in a rigid silicon chip. This biological quality endows our mental activities (or a chimpanzee's or a dog's) with a causal, intrinsic intentionality lacking in contemporary silicon computing systems.
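To make the contrast concrete: even when software models synaptic plasticity, all that changes is stored data, not the machine's physical form. The sketch below (illustrative Python; the names, values, and learning rate are my own assumptions, not anything from Fitch's text) shows a Hebbian-style weight update of the kind used in neural-network programs. The program's "synapses" are just numbers in memory; the chip's etched structure is untouched no matter what is computed.

```python
# A minimal sketch of a Hebbian-style synaptic weight update, the kind of
# "plasticity" a neural-network program simulates. All names and constants
# here are illustrative assumptions. Note what actually changes when this
# runs: numbers stored in a list. The silicon executing the code keeps its
# rigid, etched form regardless -- the asymmetry the essay describes.

LEARNING_RATE = 0.1  # illustrative constant

def hebbian_update(weights, pre, post):
    """Strengthen each weight in proportion to correlated pre- and
    post-synaptic activity ("cells that fire together wire together")."""
    return [w + LEARNING_RATE * x * post for w, x in zip(weights, pre)]

weights = [0.2, 0.5, 0.1]       # simulated synaptic strengths
pre_activity = [1.0, 0.0, 1.0]  # simulated presynaptic firing
post_activity = 1.0             # simulated postsynaptic firing

weights = hebbian_update(weights, pre_activity, post_activity)
print(weights)  # the stored values changed; the hardware did not
```

In a neuron, by contrast, the update and the substrate are one and the same: the synapse that "stores" the value is the physical structure that changes.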
To the extent that this argument is correct – and both logic and intuition support it – machines "think", "know", or "understand" only insofar as their makers and programmers do: meaning is added by an intentional, interpreting agent with a brain. Any "intelligence" of AIs is derived solely from their creators.
Every semester, the University of Vienna asks its scientists one question. The Semester Question 2016/17 is: "How are we living in the digital future?"
I thus have no fear of an AI uprising or an AI rights movement (except perhaps one led by deluded humans). Does this mean we're in the clear (until someone eventually designs a computer with nano-intentionality)? Unfortunately not: a different danger arises from our strong anthropomorphic tendency to misattribute intentions and understanding to inanimate objects ("my car dislikes low-octane fuel"). When we apply this tendency to computational artifacts (computers, smartphones, control systems…), we gradually cede our own responsibility – informed, competent understanding – to computers (and to those who control them). Danger begins when we willingly and lazily hand this uniquely human competence to myriad silicon systems (car navigators, smartphones, electronic voting systems, the global financial system…) that neither know nor care what they are computing about. The global financial crisis gave a taste of what is possible in a computer-interconnected world where responsibility and competence have unwisely been offloaded to machines trading millions of shares in microseconds.
In conclusion, I don't fear a triumphal uprising of AIs, but rather a catastrophic system failure caused by multiple minor bugs in over-empowered, interconnected silicon systems. We remain very far from any "Singularity" in which computers outsmart us, but this provides no insurance against a network collapse of catastrophic proportions. The first step in avoiding such catastrophes is to stop granting computers responsibility for meaningful thought or understanding, and to accept a simple truth: machines don't think. And thinking that they do becomes riskier every day.
This essay by Tecumseh Fitch, Professor of Cognitive Biology at the University of Vienna, was first published at Edge.org as an answer to the annual Edge Question of 2015: "What do you think about machines that think?"