Artificial intelligence and human experience

1. Anticipation and Fulfillment in Visual Perception

Reading an interview with philosopher Michael Madary, I was thinking, as so many do, about how far artificial intelligence (AI) can go in mimicking the human experience. Madary’s bailiwick is philosophy of mind and the ethics of emerging technologies, especially virtual reality. The interview focuses mainly on Madary’s anticipation and fulfillment model of visual perception. The basic model, it seems to me, is equally applicable to human or AI behavior; i.e., visual perception as proliferating perspectives across time. You first see the object from one limited point of view. To truly grasp it, though, you need to anticipate what it looks like from additional perspectives. You move around, double-check, and your anticipation is more or less “fulfilled” or verified. Then the point of fulfillment becomes a starting point for the next stage of anticipation, and so on. Visual perception is this process of constantly accumulating perspectives, ever absorbing regions of indeterminacy into determinacy, ever approximating an unreachable objectivity of perspective.

It seems AI should be very good at all of this. So what, if anything, about human reality does AI miss? Is it that there is no psychological reality corresponding to the process as AI executes it? Can the distinction of the human’s psychological reality be evidenced? Is it manifest in the idea of motivation? Perhaps AI can execute the same process but with a different (or absent) motivation vis-à-vis its human counterpart. The human viewing a modern sculpture from different perspectives may be seeking aesthetic beauty or pleasure, which would seem beyond the scope of AI. Although local human actions may mimic AI processes, it may be that the ultimate motivation behind human action in general falls outside the scope of AI – let’s call that ultimate motivation happiness (following Aristotle) or the satisfaction that comes with living a good life (following Plato); is there any comparable motivation for AI?

2. Transcendental Fulfillment

Or how about this take on the AI/human difference. What humans are always seeking, the grand dream under it all, is fulfillment liberated from the anticipation-fulfillment cycle, a sense of contentment that gets us out of the rat race of endless desires and partial fulfillments. For a correlative to the visual perception model, picture yourself gazing at a painting, motionless, without any interest in increasing perspectives, just having the static fulfillment of the beauty in front of you for a certain duration of time. What elapses in that duration may be an expression of the thing that is inaccessible to AI. What humans really want is not a sense of fulfillment that is indefinitely deferred by endlessly proliferating perspectives – the never-ending drive for more data that might occupy the AI entity. We want THE sense of fulfillment that comes when you opt out of the cycle of proliferating perspectives, a sense of fulfillment that transcends process. So whereas the anticipation-fulfillment cycle is an end in itself for AI, for humans all such processes are instrumental; the end in itself, that end which motivates the whole process, is that which falls outside of the process, a kind of static contentment that is inaccessible to the AI.

3. Singularity, or Singularities

The concept of singularity might help to clarify the distinction between fulfillment embedded in the anticipation-fulfillment process and transcendental fulfillment, fulfillment liberated from the cycle. Transcendental fulfillment is, let’s say, a metaphysical singularity – the space of infinite oneness alluded to by many philosophers and mystics, indicating escape from the rat race. Compare that to technological singularity, the critical mass at which artificial superintelligence would eclipse all human intelligence, rendering homo sapiens superfluous. Perhaps fears about the technological singularity, about an external AI dislodging us, are misplaced. I will tentatively side with those who say that computers do not really have their own motivation, their own autonomy. The risk then is not of some external AI overtaking us; the risk lies rather in the porosity between human reality and AI. The risk is not that AI will defeat us in some battlefield Armageddon but rather that it will bleed the life out of us, slowly, imperceptibly, until we go limp. We have already ceded quite a bit of brain activity to the machine world. We used to have dozens of phone numbers in our heads – now they are all stored in our “external brains.” We used to do a lot more math in our heads. (Try getting change today from a teen worker without the machine telling them how much to give.) World capitals? In the external brain. City map layouts in one’s head? Replaced by GPS real-time instructions.

“So what?” students today might say. “Memorizing all that stuff is a waste of time now.” And they may be right. But as the porosity increases between human reality and the computerized ether we live in, we cede more and more of our basic survival skills to the ether. I don’t expect malice on the part of AI (although the HAL 9000 was a cool concept), but there may come a tipping point at which we have ceded the basic means of species survival to the machine world. And in ceding more control of our inner lives to the external brain, we become more embedded in the anticipation-fulfillment cycle. Even basic human activities take on a query-and-report format. It becomes increasingly difficult to “opt out” of the processing apparatus and find that space of reflection that transcends the endless proliferation of future-directed perspectives.

4. The Historical Side: Dystopic or Utopic

All this talk about homo sapiens being bled out sounds quite dystopic, and perhaps dystopia is the endgame. But not all possible futures are grim. First of all, in structural terms, porosity is two-directional. Ever since the invention of writing, we have transferred information into the external media of books, giving subsequent generations the capacity to “upload” that information and store it in their brains when the books are removed. This prompted Clark and Chalmers, as far back as 1998, to theorize about the “extended mind,” in which the space of the mind is shared by internal processes and environmental objects that work in tandem with those processes. Another parallel is in Wittgenstein’s Blue Book example wherein we use a color chart until we “learn” our colors, and then throw the chart away. In these cases, the external device provides intermediate information storage. We use the Internet in this fashion all the time. Nothing dystopic here. But is it different when the device becomes capable of evolving its own algorithms, generating its own information, and using it to implement tasks that go far beyond mere storage? Perhaps so, but it is not yet clear that the dystopic end is inevitable.

Second of all, in terms of social implications, technology could free us up to spend less of our lives on drudgery and more of our lives in that reflective space of self-fulfillment, working out our own electives of self-realization. Indeed, this is the signature promise of technology in the age of capitalism. Ever since the early 19th-century Luddite rebellion, technology has repeatedly made this promise and repeatedly failed to deliver. Why would it be any different now? It could only be different if there were a fundamental shift in our perspective on what it is to be human.

When Madary exemplifies his visual perception theory with the example of a modern sculpture, he introduces what for me is the wild card in the anticipation-fulfillment cycle: the element of surprise.

“Recall a situation in which you moved to gain a better perspective on a novel object and were surprised by how it appeared from the hidden side … Modern sculpture can be helpful for illustrating visual anticipations because the precise shape of the sculpture is often unclear from one’s initial perspective. Our anticipations regarding the hidden side of a modern sculpture tend to be more indeterminate than our anticipations about the hidden sides of more familiar objects.” (Madary)

Whereas I started out by saying that Madary’s anticipation-fulfillment model of visual perception applies equally to AI and humans, I suspect we might handle the element of “surprise” differently. In the case of humans, “surprise” is a trigger for imagination, a less tractable faculty than might be intelligible to our future friends in the AI phylum. Sure, our AI compatriots might predict possible futures as well or better than we do (and thus they might best us at chess), but is that really “imagination”? Humans imagine not only possible futures, but also alternative presents and alternative pasts, using self-generated imagery to feed nostalgic visions of times gone by. There is something about the creative process of imagination that might separate us from AI and might make us less predictable in the event of “surprises” or disruptions in the normal anticipation-fulfillment process. Since technology has typically failed to deliver on its promise to free up time for self-fulfillment and personal development, we might anticipate another failure when we are told that AI will truly free us up from drudgery. But the result could be different this time if a rupture in conditions is great enough. Imperatives of income inequality and ecological destruction might be rupture enough. As we survey our predicament the way Madary’s viewer surveys the modern sculpture, we might on the other side glimpse the end of capitalism (which may sound dramatic, and yet all ages do end). Perhaps this might jolt the imagination to a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success” and “self-actualization” – to wit, a new definition of what it means to be fully human.

How rapidly and wholly we make that turn to a new definition of what it means to be fully human will lock in the dystopic or utopic endgame. In the dystopic version, homo sapiens is bled out by some combination of AI and economic and ecological calamities. In the utopic version, consciousness about what it is to be human evolves quickly enough to allay those calamities and to recapture AI as the servant of human ends and not vice versa.

Footnote on Kierkegaard’s 3 modes of lived experience: aesthetic, ethical, religious

The anticipation-fulfillment model of visual perception can be seen as the basic process of Kierkegaard’s aesthetic mode, sensory-based life. A whole life lived on the aesthetic level is lived as a continuous accumulation of equally ephemeral sensory perspectives on one’s own life.

The ethical life turns out to follow the same model, but on the ethical level: a continual accumulation of equally provisional ethical perspectives.

The religious life, though, breaks the model. It concerns the absolute. It does not consist of accumulating perspectives but of a singularity; it eschews all the accumulations of visual or ethical perception for the singular relation between the religious subject and the absolute, a singularity which obliterates all mediate or quantitative concerns.

Michael Madary’s Visual Phenomenology

Richard Marshall’s interview of Michael Madary in 3:AM Magazine



ISIS/Suspension of Ethics

The recent beheadings and crucifixions in Syria and Iraq in the name of religion are atrocious in their own right, but they raise a larger philosophical comparison between secular ethics and religion-based ethics, to the advantage of the secular. Of course, most religious people are horrified by ISIS’s actions and consider them to have no basis in religion whatsoever. I will grant the justice of that position, but it leaves open the question of whether a religion-based ethics is more risky in principle than a secular ethics.

To judge the risk requires pinpointing the essential difference between a religion-based and a secular ethics. The Christian theologian and proto-existentialist Kierkegaard is most helpful here. In Fear and Trembling, Kierkegaard sees ethics as fundamentally a secular issue, a derivative of universal rational principles. Religious persons can follow those principles, but that is not essentially a function of their religious nature. It simply means that they are following a set of rational principles in addition to being religious persons. The key difference is centered on Kierkegaard’s pointed question: “Can there be a teleological suspension of the ethical?” I.e., can the inscrutable commandments of God overrule “normal” ethical principles?

The paradigmatic case for Kierkegaard is when God commands Abraham to sacrifice his son, Isaac. “The ethical expression for what Abraham did is that he meant to murder Isaac; the religious expression is that he meant to sacrifice Isaac.” So Abraham is forced to choose between the universal principles of ethics (against murdering your son) or accepting the “teleological suspension of ethics,” in which he suspends the rules of ethics to satisfy a higher end.

This to me is the fundamental difference between a secular ethics and an ethics based on religion (at least on the Abrahamic faiths of Judaism, Christianity, and Islam). Religion allows for the possibility that we might suspend normal ethics in light of a higher commandment from an inscrutable God. Otherwise, it is no different from a secular ethics based on rational principles alone (holding God himself subordinate to the laws of ethics).

Although the acts of ISIS are condemned by people of all faiths, the dangers of a “teleological suspension of ethics” can be generalized to some extent, as a risk inherent in religion-based models. In pre-modern Europe, under the hegemonic rule of the Church, we saw the widespread development of those implements that today fill the torture museums of Europe, implements ingeniously designed to create more and more exquisite pain for the ill-fated heretic. Then we had the brutality of the Spanish Inquisition, brazenly carried on in the name of the Church and the states under its authority.

With the 18th century Enlightenment, that largely changed. From the explicitly anti-Church philosophes to Kant, the hegemonic control of the Church gave way to a more humanist ethics grounded in rational principles. The ethics of Western culture today is primarily secular, a product of the Enlightenment. And although far from perfect, it has shaken off the worst abuses of the pre-Enlightenment theocratic ethic. At least now, one cannot break out the torture devices and flaunt them publicly as a general strategy of subjection. At least now, one cannot publicly suspend the normal rules of ethics because an inscrutable God has commanded it.

Now back to Kierkegaard, and to Abraham and Isaac. Although Kierkegaard is a Christian and I am unambiguous in my preference for a secular ethics, Kierkegaard may agree with me up to a point. He himself is almost Kantian in his emphasis that ethics is based on rational principles (unrelated to faith) and is therefore universal. The “ethical” and the “religious” are simply incommensurate categories for Kierkegaard. The ethical has to do with social relations and universal principles. The religious concerns only the individual in relation to the absolute. For Kierkegaard, the “religious moment” occurs when an individual, perhaps like Abraham, lives out his or her life among others, bound by the universal principles of ethics, and then one day something ruptures the plane of that living, and the individual’s identity shoots out in a perpendicular line to the absolute. His relation to the absolute (religious) and his relation to others (ethical) “cannot be mediated,” says Kierkegaard, in a jibe at Hegel and his understudies. Abraham cannot be justified on the ethical plane. He is up against an either/or crisis of the sort that most interested Kierkegaard. There is no gray area. Either you do something completely unethical in honor of God, or you reject God.

Kierkegaard may also agree with me that any social order would do best with a secular ethics based on rational principles. He certainly had no patience for state religion, and often disparaged the Christian state of Denmark and “Christendom” in general for their deployments of Christianity into the political or social arena. But he leaves room for Abraham, the “knight of faith” – not as a model of good citizenship or social order, but as a model of the individual wrenched away from his social identity by a connection to the absolute.

In the end, I disagree with Kierkegaard and reject the “teleological suspension of ethics” in all of its forms; however, I find Kierkegaard well worth reading, and I myself have only scratched the surface of his thought. Moreover, no sound reading of Kierkegaard can ever use the “teleological suspension of ethics” to justify the behavior of ISIS or the Spanish Inquisition. In Kierkegaard, that suspension can never be applied as a public practice, but can only occur as a relation between the individual and the absolute. The problem is that so many groups at so many times and places have used a variant of the idea (God’s commandment allows me to overrule ethics) to vicious ends. In the case of the Middle East, this is further complicated by a historical trajectory quite different from Europe’s. Whereas in Europe the Enlightenment – the rise of secular ethics and secular democracies – can be seen as a liberation from the hegemonic oppression of the Church, in the Middle East of the past half-century, religion (in the form of a resurgent Islam) is often seen as the liberating force that can throw off the shackles of oppressive Western democracies. This inversion of the role of religion is historically explicable, but the ethical dangers are apparent when we see how easily ethical norms can be discarded when religious zeal is in full cry. Better to have a secular ethics based on rational principles. If you want to layer a religious faith on top of that ethics, fine, but don’t start believing that your faith trumps ethics or you become a danger to yourself and others.

A Defense of Plato

Dear MT,

Per your comparisons, I don’t think Plato is as eager as Nietzsche or Kierkegaard (or perhaps MT) to separate men into two groups and condemn the ignorant masses. Plato’s myth of the cave is more about PROCESS than about passing judgment on the ignorant. It’s sort of like a rational correlative to the Buddhist process of enlightenment. We ALL resist the truth when it first dazzles us and we’re used to shadows. Plato’s myth is about the process we ALL have to go through if we want to achieve enlightenment. And yes, some are not strong enough, some have to turn back. But for Plato I think all rational beings have the capacity if they can find the fortitude. And he quite explicitly says that the enlightened ones should go back and help those who are still in the cave. In this sense he’s more Buddhist and less condescending than Nietzsche and Kierkegaard (especially Nietzsche in my estimation). In this process-orientation, Plato is actually not far from Aristotle’s notion of entelechy, where all things strive unconsciously toward their ideal destination, like the acorn strives toward becoming the oak. In fact, the wedge between Plato and Aristotle is somewhat forced. They have different emphases, yes, but they share a lot of fundamentals. Aristotle learned his Plato well.

In metaphysics, I think your resistance to Plato is a resistance to a straw man version of Plato – as if his formal world is like the Christian God with the beard who sits somewhere in physical space. I find it hard to believe Plato would be so naïve. He is just saying, in the cave and elsewhere, that there is an intellectual reality, a kind of Jungian collective unconscious, which is a hidden prerequisite to all the contingent truths we find in our everyday (transitory) reality. Whether we realize it or not (and most of us don’t), the contingent truths we structure our daily lives by would not be intelligible were they not undergirded by that collective unconscious, that conceptual substrate of deeper truths. And the deeper we dig, the closer we get to eternal truths and the more deeply we understand the prerequisites of our surface knowledge.

So you’re right that your idea of a perfect car may not match my idea of a perfect car, but were it not for some abstract concept of perfection implicitly acknowledged by both of us, neither of us could have ANY idea of a perfect car. The concept of perfection is a presupposed premise of your idea and my idea. So now we can talk about a concept of perfection that, albeit abstract, is a necessary prerequisite to our contingent and various concrete ideas. Now we can ponder things at a deeper level, and delve dialectically deeper into the roots of our own consciousness. That’s what Plato is all about.

Re politics, of course Plato’s politics does sort men, but the sorting is not as damning as in Nietzsche. He just says that few men will find their way out of the cave and stay out, and those should be our leaders. And he is undemocratic in the sense that he seems to believe that order requires hierarchy – a practical consideration more than an existential judgment about master and slave races a la Nietzsche. We moderns tend to dismiss hierarchy as a prerequisite to political order, but go back just to the late 18th-century Enlightenment and you will still find strong and intelligent voices (e.g., Edmund Burke, Samuel Johnson) arguing that without hierarchy is chaos. So I don’t agree with Plato here, but I’ll give him a pass on politics. (From what I hear, Rebecca Newberger Goldstein’s new book, Plato at the Googleplex, presses Plato harder on the human implications of his politics, but I haven’t had a chance to read it yet.) Anyway, as I’ve said, I don’t think politics is the most compelling branch of his philosophy, but I still agree with Bertrand Russell’s mentor, Alfred North Whitehead, that “Western philosophy is a series of footnotes to Plato.”

And with due respect to Nietzsche’s wit, I think Plato would be the more amiable drinking companion.


O’Connor’s Misfit and Christian Existentialism

In a scene from Flannery O’Connor’s “A Good Man is Hard to Find,” the grandma and Red Sam (“the fat boy with the happy laugh,” as he proudly posts on the signs for his barbecue joint) lament how hard it is to find a good man nowadays. But from these two master manipulators of the older generation to the self-centered brats (John Wesley and June Star) of the younger generation, it’s safe to say that O’Connor’s point is that it always has been and always will be virtually impossible to find a good man in this world. (SPOILERS ahead, so you may want to click the link and read the roughly 10-page story first.)

But the dearth of good men does not prevent O’Connor from giving us a truly interesting man in the villain of the piece, the Misfit, who gets all the best lines as he pours out his bio and hodgepodge philosophy to the grandma’s family when he stumbles upon them after their car wreck on a rural Georgia back road.

When the Misfit’s father says of the Misfit, “It’s some that can live their whole life out without asking about it and it’s others has to know why it is, and this boy is one of the latters,” he divides people into two groups – those who live out their whole lives without ever breaking the surface and those whose penetrating intelligence constantly pushes them toward a deeper understanding. The former would include the family, whose comically superficial attitude toward life and death and violence occupies the first half of the story. The Misfit is in the latter group, which leaves us with a knot: the Misfit is in the “good” group, but is clearly not a viable hero.

This brings us to the Misfit’s shrewd response to the grandmother’s comically self-serving claim that he’s a good man: “‘Nome, I ain’t a good man,’ the Misfit said after a second as if he had considered her statement carefully, ‘but I ain’t the worst in the world neither.’” This clearly divides people into three groups: the “good,” the “worst,” and some third group to which the Misfit must belong. Understanding the three groups requires unpacking the epigraph (which, unfortunately, is omitted from some editions of the story):

 The dragon is by the side of the road, watching those who pass. Beware lest he devour you. We go to the Father of Souls, but it is necessary to pass by the dragon.

— St. Cyril of Jerusalem

The dragon represents the existential crisis – the recognition that the world is irrational, morally absurd, and that the lives we live are utterly meaningless. There are three tracks of human existence relative to the dragon. The “worst” off would be those people who coast along from one superficial event to the next and die without ever realizing that their whole lives were lived out on a glassy shallow surface (witness Bailey’s famous last words: “I’ll be back in a minute, Momma”). These never even reach the dragon. Then there are those who do face the dragon/crisis. This requires a deeper intelligence and the Misfit has certainly made it this far. But this by no means gets you home free. At this point one is faced with the only real dilemma that will ever count: irrational faith or despair.

The Misfit has obviously reached the dragon/crisis (thus is “not one of the worst”), but how does he respond? He responds by committing himself to reason and balance. He is driven insane by the fact that “I can’t make what all I done wrong fit what all I gone through in punishment.” Regarding Jesus’ claim about raising the dead, we are told, “’I wisht I had of been there,’ [the Misfit] said, hitting the ground with his fist. . . . ‘If I had of been there I would of known and I wouldn’t be like I am now.’” He wants moral balance, rational certainty. And he is shrewd enough to recognize that this is the path to despair. Indeed, I suppose the unacknowledged ghost in the Misfit’s world view, Kierkegaard, would define “despair” as precisely an insistence upon rational, moral balance in the world. “Beware lest he [the dragon] devour you.”

O’Connor’s point of view is existentialist because it insists that the world is irrational and morally absurd, no matter how many little tricks we use to impose a rational order upon it. And it is decisively Christian existentialism. Jesus, as the Misfit says, “thown everything off balance. If He did what he said, then it’s nothing for you to do but thow away everything and follow Him, and if He didn’t, then it’s nothing for you to do but enjoy the few minutes you got left the best you can – by killing somebody or burning down his house or . . . .” Jesus throws the whole rational game off balance. We have absolutely no reason to believe anything he said. Indeed, seeking a reason to believe leads us to the Misfit’s path of despair. From the Christian existentialist position, we must conclude that any Christian who believes he or she has good reason to believe must be in group one, among the “worst” who have never truly broken the surface and faced the dragon/existential crisis. Any Christian who seeks a reason to believe, but is smart enough to know that s/he can’t really find one (group 2), has faced the dragon but is continually being devoured by it (as the Misfit). The true Christian (group 3) must choose faith with the full knowledge that such a choice is, in the face of the dragon, absurd.

The story is a bit shaky, despite O’Connor’s overt Christian intentions, in demonstrating the final option – those who have faced the dragon and chosen the irrational leap of faith. The grandmother is supposedly a last-minute exemplar.

“His voice seemed about to crack and the grandmother’s head cleared for an instant. She saw the man’s face twisted close to her own as if he were going to cry and she murmured, ‘Why you’re one of my babies. You’re one of my own children!’ She reached out and touched him.”

Presumably, the grandmother finally breaks through her petty self-interest and chooses a redemptive act. The fact that it’s the grandmother, the heretofore exemplar of manipulative self-interest, reinforces absurdity, unpredictability. The fact that she’s shot in the chest three times in the next sentence reinforces the idea that the point of faith is not to achieve balance in this world (such an objective would be a variant of despair).

Finally, the famous penultimate line of the Misfit: “She would have been a good woman . . . if it had been somebody there to shoot her every minute of her life.” Again, the Misfit very shrewdly sees that the only thing that ever made grandma crack the surface was a gun in her face. This is typical Flannery O’Connor. We need some violent shock to thrust us into crisis – lest we live out our lives in that dreamy, surface complacency. Granted, it’s not pleasant, but it’s the only way: “We go to the Father of Souls, but it is necessary to pass by the dragon.” Thus O’Connor crafts her own recipe for Christian existentialism, like a Waffle House version of Dostoevsky’s Brothers Karamazov, cut, reshaped, and chicken-fried to the order of the Southern redneck.