Artificial Intelligence and Human Experience

1. Anticipation and Fulfillment in Visual Perception

Reading an interview with philosopher Michael Madary, I found myself wondering, as so many do, about how far artificial intelligence (AI) can go in mimicking the human experience. Madary’s bailiwick is philosophy of mind and the ethics of emerging technologies, especially virtual reality. The interview focuses mainly on Madary’s anticipation and fulfillment model of visual perception. The basic model, it seems to me, is equally applicable to human or AI behavior; i.e., visual perception as proliferating perspectives across time. You first see the object from one limited point of view. To truly grasp it, though, you need to anticipate what it looks like from additional perspectives. You move around, double-check, and your anticipation is more or less “fulfilled” or verified. Then the point of fulfillment becomes a starting point for the next stage of anticipation, and so on. Visual perception is this process of constantly accumulating perspectives, ever absorbing regions of indeterminacy into determinacy, ever approximating an unreachable objectivity of perspective.
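
Stripped of everything phenomenological, the model reads like a simple loop: anticipate the hidden sides, move, look, fold the result back in, repeat. The toy Python sketch below is only my own illustration of that bare procedure; the four-sided “sculpture,” the majority-guess rule, and the indeterminacy measure are all invented for the example and are not anything Madary formalizes.

```python
# Toy sketch of the anticipation-fulfillment cycle (illustration only).
from collections import Counter

# The object as it "really" is: each side has a property we can observe.
sculpture = {"front": "smooth", "left": "smooth", "back": "jagged", "right": "smooth"}

def anticipate(seen):
    """Guess what a hidden side will look like from what has been seen so far."""
    if not seen:
        return None                      # initial, fully indeterminate state
    return Counter(seen.values()).most_common(1)[0][0]

seen = {}
for viewpoint in ["front", "left", "back", "right"]:
    expected = anticipate(seen)          # anticipation of the not-yet-seen side
    actual = sculpture[viewpoint]        # moving around to look
    fulfilled = (expected == actual)     # fulfillment, or surprise
    seen[viewpoint] = actual             # a region of indeterminacy becomes determinate
    remaining = 1 - len(seen) / len(sculpture)
    print(viewpoint, "expected:", expected, "actual:", actual,
          "fulfilled:", fulfilled, "indeterminacy left:", round(remaining, 2))
```

The mismatch at the jagged back side is the toy analogue of the “surprise” that section 4 returns to.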

It seems AI should be very good at all of this. So what, if anything, about human reality does AI miss? Is it that there is no psychological reality corresponding to the process as AI executes it? Can the distinction of the human’s psychological reality be evidenced? Is it manifest in the idea of motivation, in that AI can execute the same process but with a different (or absent) motivation vis-à-vis its human counterpart? The human viewing a modern sculpture from different perspectives may be seeking aesthetic beauty or pleasure, which would seem beyond the scope of AI. Although local human actions may mimic AI processes, it may be that the ultimate motivation behind human action in general falls outside the scope of AI – let’s call that ultimate motivation happiness (following Aristotle) or the satisfaction that comes with living a good life (following Plato); is there any comparable motivation for AI?

2. Transcendental Fulfillment

Or how about this take on the AI/human difference. What humans are always seeking, the grand dream under it all, is fulfillment liberated from the anticipation-fulfillment cycle, a sense of contentment that gets us out of the rat race of endless desires and partial fulfillments. For a correlative to the visual perception model, picture yourself gazing at a painting, motionless, without any interest in accumulating further perspectives, just having the static fulfillment of the beauty in front of you for a certain duration of time. What elapses in that duration may be an expression of the thing that is inaccessible to AI. What humans really want is not a sense of fulfillment that is indefinitely deferred by endlessly proliferating perspectives – the never-ending drive for more data that might occupy the AI entity. We want THE sense of fulfillment that comes when you opt out of the cycle of proliferating perspectives, a sense of fulfillment that transcends process. So whereas the anticipation-fulfillment cycle is an end in itself for AI, for humans all such processes are instrumental; the end in itself, the end that motivates the whole process, is what falls outside of the process, a kind of static contentment that is inaccessible to the AI.

3. Singularity, or Singularities

The concept of singularity might help to clarify the distinction between fulfillment embedded in the anticipation-fulfillment process and transcendental fulfillment, fulfillment liberated from the cycle. Transcendental fulfillment is, let’s say, a metaphysical singularity – the space of infinite oneness alluded to by many philosophers and mystics, indicating escape from the rat race. Compare that to technological singularity, the critical mass at which artificial superintelligence would eclipse all human intelligence, rendering homo sapiens superfluous. Perhaps fears about the technological singularity, about an external AI dislodging us, are misplaced. I will tentatively side with those who say that computers do not really have their own motivation, their own autonomy. The risk then is not of some external AI overtaking us; the risk lies rather in the porosity between human reality and AI. The risk is not that AI will defeat us in some battlefield Armageddon but rather that it will bleed the life out of us, slowly, imperceptibly, until we go limp. We have already ceded quite a bit of brain activity to the machine world. We used to have dozens of phone numbers in our heads – now all stored in our “external brains.” We used to do a lot more math in our heads. (Try getting change today from a teen worker without the machine telling them how much to give.) World capitals? In the external brain. City map layouts in one’s head? Replaced by GPS real-time instructions.

“So what?” students today might say. “Memorizing all that stuff is a waste of time now.” And they may be right. But as the porosity increases between human reality and the computerized ether we live in, we cede more and more of our basic survival skills to the ether. I don’t expect malice on the part of AI (although the HAL 9000 was a cool concept), but there may come a tipping point at which we have ceded the basic means of species survival to the machine world. And in ceding more control of our inner lives to the external brain, we become more embedded in the anticipation-fulfillment cycle. Even basic human activities take on a query-and-report format. It becomes increasingly difficult to “opt out” of the processing apparatus and find that space of reflection that transcends the endless proliferation of future-directed perspectives.

4. The Historical Side: Dystopic or Utopic

All this talk about homo sapiens being bled out sounds quite dystopic, and perhaps dystopia is the endgame. But not all possible futures are grim. First of all, in structural terms, porosity is two-directional. Ever since the invention of writing, we have transferred information into the external media of books, giving subsequent generations the capacity to “upload” that information and store it in their brains when the books are removed. This prompted Clark and Chalmers, as far back as 1998, to theorize about the “extended mind,” in which the space of the mind is shared by internal processes and environmental objects that work in tandem with those processes. Another parallel is in Wittgenstein’s Blue Book example wherein we use a color chart until we “learn” our colors, and then throw the chart away. In these cases, the external device provides intermediate information storage. We use the Internet in this fashion all the time. Nothing dystopic here. But is it different when the device becomes capable of evolving its own algorithms, generating its own information, and using it to implement tasks that go far beyond mere storage? Perhaps so, but it is not yet clear that the dystopic end is inevitable.

Second of all, in terms of social implication, technology could free us up to spend less of our lives on drudgery and more of our lives in that reflective space of self-fulfillment, working out our own electives of self-realization. Indeed, this is the signature promise of technology in the age of capitalism. Ever since the early 19th-century Luddite rebellion, technology has repeatedly made this promise and repeatedly failed to deliver. Why would it be any different now? It could only be different if there were a fundamental shift in our perspective of what it is to be human.

When Madary exemplifies his visual perception theory with the example of a modern sculpture, he introduces what for me is the wild card in the anticipation-fulfillment cycle: the element of surprise.

“Recall a situation in which you moved to gain a better perspective on a novel object and were surprised by how it appeared from the hidden side … Modern sculpture can be helpful for illustrating visual anticipations because the precise shape of the sculpture is often unclear from one’s initial perspective. Our anticipations regarding the hidden side of a modern sculpture tend to be more indeterminate than our anticipations about the hidden sides of more familiar objects.” (Madary)

Whereas I started out by saying that Madary’s anticipation-fulfillment model of visual perception applies equally to AI and humans, I suspect we might handle the element of “surprise” differently. In the case of humans, “surprise” is a trigger for imagination, a less tractable faculty than might be intelligible to our future friends in the AI phylum. Sure, our AI compatriots might predict possible futures as well or better than we do (and thus they might best us at chess), but is that really “imagination”? Humans imagine not only possible futures, but also alternative presents and alternative pasts, using self-generated imagery to feed nostalgic visions of times gone by. There is something about the creative process of imagination that might separate us from AI and might make us less predictable in the event of “surprises” or disruptions in the normal anticipation-fulfillment process. Since technology has typically failed to deliver on its promise to free up time for self-fulfillment and personal development, we might anticipate another failure when we are told that AI will truly free us from drudgery. But the result could be different this time if a rupture in conditions is great enough. Imperatives of income inequality and ecological destruction might be rupture enough. As we survey our predicament the way Madary’s viewer surveys the modern sculpture, we might on the other side glimpse the end of capitalism (which may sound dramatic, and yet all ages do end). Perhaps this might jolt the imagination to a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success” and “self-actualization” – to wit, a new definition of what it means to be fully human.

How rapidly and wholly we make that turn to a new definition of what it means to be fully human will lock in the dystopic or utopic endgame. In the dystopic version, homo sapiens is bled out by some combination of AI and economic and ecological calamities. In the utopic version, consciousness about what it is to be human evolves quickly enough to allay those calamities and to recapture AI as the servant of human ends and not vice versa.

Footnote on Kierkegaard’s 3 modes of lived experience: aesthetic, ethical, religious

The anticipation-fulfillment model of visual perception can be seen as the basic process of Kierkegaard’s aesthetic mode, sensory-based life. A whole life lived on the aesthetic level is lived as a continuous accumulation of equally ephemeral sensory perspectives on one’s own life.

The ethical life turns out to follow the same model, but on the ethical level. The ethical life is a continual accumulation of equally provisional ethical perspectives.

The religious life, though, breaks the model. It concerns the absolute. It does not consist of accumulating perspectives but of a singularity; it eschews all the accumulations of visual or ethical perception for the singular relation between the religious subject and the absolute, a singularity which obliterates all mediate or quantitative concerns.

Michael Madary’s Visual Phenomenology

Richard Marshall’s interview of Michael Madary in 3:AM Magazine

Aristotle, Wittgenstein, and Identity Politics

My blog entry on Two Kinds of Liberals raised for me a philosophical knot to be untied, implicating such formidable dead men as Aristotle and Wittgenstein.

Aristotle’s interest in natural philosophy and classification leads him to distinguish essential traits from accidental traits. Having four legs and a tail are “essential” traits of a cat; having a calico coloring is an “accidental” trait, a trait that applies to the individual but doesn’t define the category.

Wittgenstein makes a point in the Blue Book that at first sounds similar to Aristotle’s but turns out to be different in implication. Wittgenstein is interested in how we use language. E.g., when we read, do we process the meaning of each word and then put the meanings together? That may seem intuitive, but thinkers as far back as Edmund Burke (in his great 18th-century treatise on the sublime) suspected that this is not how the psychological process works. Wittgenstein asks us to picture someone who hasn’t learned the names for colors. Send him out to pick red flowers today, blue flowers tomorrow. At first you give him a color chart and he compares the flowers in the field to the chart, picking the correct ones. But soon he doesn’t need the chart because he “knows” his colors. The color chart is no longer relevant to his completion of the task. Just as the color chart is no longer needed to pick the flowers, the “image” associated with each word is not required for the process of reading and understanding a novel. We don’t stop and picture the meaning or image associated with each word before going on to the next word. Were this so, we would never in a lifetime finish our first Russian novel. Thus, Wittgenstein distinguishes between “a process being in accordance with a rule” and “a process involving a rule.” As when the color chart is no longer needed, we understand the novel “in accordance with” the meanings of words, but the meanings are not “involved” in the process. Wittgenstein concludes: “The rule which has been taught and is subsequently applied interests us only so far as it is involved in the application. A rule, so far as it interests us, does not act at a distance.” Or, to put it mathematically, if we want to understand a calculation, we are only interested in a rule if “the symbol of the rule forms part of the calculation.”
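
A rough programming analogy of my own (nothing of the sort appears in the Blue Book) may make the contrast concrete: in the first function below the chart is consulted at every step, so the rule is “involved” in the process; in the second, a picker is trained from the chart once and the chart no longer forms part of the calculation, so the picking is merely “in accordance with” the rule.

```python
# Toy contrast between a rule involved in a process and a process
# merely in accordance with the rule (illustration only).
COLOR_CHART = {"poppy": "red", "cornflower": "blue", "buttercup": "yellow"}

def pick_with_chart(flowers, target_color):
    """The chart is looked up at every step: the rule is involved in the process."""
    return [f for f in flowers if COLOR_CHART[f] == target_color]

def make_learned_picker(chart, target_color):
    """'Learning the colors': derive a habit from the chart once, then set the chart aside."""
    matching = frozenset(f for f, c in chart.items() if c == target_color)
    def pick(flowers):
        # No chart lookup happens here; the picking merely accords with the old rule.
        return [f for f in flowers if f in matching]
    return pick

field = ["poppy", "cornflower", "poppy", "buttercup"]
print(pick_with_chart(field, "red"))        # ['poppy', 'poppy']
pick_red = make_learned_picker(COLOR_CHART, "red")
print(pick_red(field))                      # ['poppy', 'poppy']
```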

At first it looks like “a rule involved in a process” corresponds to an “essential” rule in Aristotle’s terms and “a rule in accordance with which” a process takes place would be an “accidental” rule, and there may indeed be contexts wherein the analogy holds true. But Wittgenstein’s point is more radical. Whereas Aristotle is clarifying aspects of the objective world, Wittgenstein is saying that language, once learned, functions without reference to a world outside of itself. The objective world to which the language might refer is irrelevant to (uninvolved in) our processing and understanding the language. “The sign (the sentence) gets its significance … [not from] an object co-existing with the sign … but from the system of signs, from the language to which it belongs. Roughly, understanding a sentence means understanding a language.”

Unlike Aristotle, Wittgenstein points the way to postmodernism, where the ground of meaning is infinitely displaced by a series of signifiers, where there is no ultimate reference point, and where relativism – metaphysical and cultural – becomes hard to shake off.

This theoretical dissonance may seem pointless, but I think it exposes the layering that undergirds the way we think about real world problems. Take the issue of cultural difference. The wing of liberalism I associate with Enlightenment rationalism, as well as with 1960-70s Civil Rights and feminism, is folded on top of an Aristotelian base. The “essential” aspect of human identity is our shared humanness, and we can best resolve such problems as racism through appeal to our universal human capacities for reason and compassion. Race, gender, and cultural identities are, after all, “accidental” traits superimposed upon that shared humanness.

“Identity politics,” together with “multiculturalism,” took hold in academia in the 1980s and proposed that objectivity is impossible because everyone is a priori “politically situated” by their race, gender, class, etc. This theory is rooted in the ideas of Wittgenstein rather than those of Aristotle. In addressing problems of cultural difference, identity politics does not expressly deny “shared humanness,” but shared humanness is no longer “involved” in the process – it doesn’t form part of the active calculation. The political determinants of race, gender, etc., on the other hand, are “involved” in the process and need to be respected as such. For example, when the white William Styron wrote The Confessions of Nat Turner from a black man’s perspective, the liberals who attacked him for the arrogance of crossing that line would fit my category of multiculturalist liberals. For them, in today’s racial milieu, the black experience and the white experience are “involved” in social relations, whereas shared humanness is remote; thus, it is presumptuous for a white man to think he can comprehend what a black man such as Nat Turner might have felt. The other branch of liberals – Enlightenment rationalists, 1960s liberals – who bank on the Aristotelian notion of shared humanness, would, quite the contrary, praise Styron for struggling to get beyond the “accidental” features of race and grasp experience from the point of view of our shared humanness.

When I said in my Two Kinds of Liberals blog that I was “with multiculturalism when it’s building bridges but not when it’s guarding walls,” I can now say that “identity politics” is an example of multiculturalism “guarding walls.” I see efforts such as Styron’s not as some kind of insidious “cultural appropriation” (an impossible term if one believes in the primacy of shared humanness) but as a heroic attempt to illuminate how our shared humanness is the key to dismantling the prejudice and ill will that can absorb us when we remain trapped within such “accidental” layers of identity as race or gender or cultural groupings. (And remember that “accidental” in Aristotle doesn’t mean trivial or unworthy of celebration, but simply means that it is a feature that does not define the essence.)

One other (unhappy in my opinion) consequence of the rise of “identity politics” within liberalism is the way in which it ceded the high ground that liberals held in the 1960s and 70s. Take the issue of double standards. My Aristotelian liberals (if you’ll permit the conceit) were the outspoken enemies of double standards on race and gender. This includes Wollstonecraft and Equiano in the Enlightenment period as well as the Civil Rights and feminist movements of the 1960s/70s. But with the theoretical turn to identity politics in the 1980s – where racial and gender identity displace shared humanness as the operative factor in race and gender struggles – a subset of liberals flip-flopped from being the enemies of double standards to being the champions of double standards. Thus began a liberal regimen of race-specific rules for what language is acceptable and for which practices are “reserved” against cultural appropriation, not to mention the idea, novel at the time but now widely accepted among a new generation of liberals, that prejudice against someone on purely racial grounds is only “racism” if you are white (i.e., if your race has the upper hand in a power differential). Thus the legitimate effort to address gender inequities can take the illegitimate form of banning the word “bossy” for girls but presumably not for boys. The endgame of “identity politics” liberals is understandable, even noble, but the means shifted from the brazenly integrationist platform of the 60s to a kind of trench warfare defending this or that demographic turf, and from a confident rejection of all double standards to an embrace of, or at least an equivocation toward, double standards. To the extent that these means have been deployed, liberals have ceded the moral high ground – not to conservatives, who from my vantage seem even farther removed from the moral high ground, but to a vacuum waiting to be filled.

OK, I can’t really blame this all on Wittgenstein (from whom I learn more with every reading), although he is implicated in the trajectory towards postmodernism, which I do believe is at least partly responsible for the moral vacuum that developed within liberalism. But writing this has restored my faith in the extraordinary resilience of ancient Greek thought. Thus in this recycling of one of the great questions that absorbed European wits from Boileau to Swift in the 100 years or so leading into the Enlightenment – whether the ancients or the moderns were the greater masters of learning – the laurel wreath goes to … Aristotle and the ancients!