Artificial intelligence and human experience

1. Anticipation and Fulfillment in Visual Perception

Reading an interview with philosopher Michael Madary, I was thinking, as so many do, about how far artificial intelligence (AI) can go in mimicking the human experience. Madary’s bailiwick is philosophy of mind and the ethics of emerging technologies, especially virtual reality. The interview focuses mainly on Madary’s anticipation and fulfillment model of visual perception. The basic model, it seems to me, is equally applicable to human or AI behavior; i.e., visual perception as proliferating perspectives across time. You first see the object from one limited point of view. To truly grasp it, though, you need to anticipate what it looks like from additional perspectives. You move around, double-check, and your anticipation is more or less “fulfilled” or verified. Then the point of fulfillment becomes a starting point for the next stage of anticipation, and so on. Visual perception is this process of constantly accumulating perspectives, ever absorbing regions of indeterminacy into determinacy, ever approximating an unreachable objectivity of perspective.
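Indeed, part of why the model seems so AI-friendly is how naturally it reduces to an iterative loop. Here is a toy sketch of my own (not Madary’s formalism, and every object and shape in it is invented): a viewer circles a hidden object, and at each new viewpoint an anticipation formed from the sides seen so far is checked against what is actually observed.

```python
# Toy sketch of the anticipation-fulfillment cycle (my own invention,
# not Madary's formalism). A viewer circles a hidden object; at each
# new viewpoint an anticipation is formed from the sides seen so far,
# then checked ("fulfilled") against what is actually observed.

HIDDEN_OBJECT = {"front": "flat", "left": "curved", "back": "spiked", "right": "flat"}

def anticipate(known):
    """Naive prior: guess that an unseen side resembles the most common shape seen so far."""
    if not known:
        return "flat"  # arbitrary default prior
    shapes = list(known.values())
    # sorted() makes tie-breaking deterministic
    return max(sorted(set(shapes)), key=shapes.count)

def perceive(viewpoints):
    known = {}        # regions absorbed into determinacy
    fulfilled = []    # was each anticipation verified?
    for side in viewpoints:
        guess = anticipate(known)      # anticipation
        actual = HIDDEN_OBJECT[side]   # move, and look
        fulfilled.append(guess == actual)
        known[side] = actual           # indeterminacy -> determinacy
    return known, fulfilled

known, fulfilled = perceive(["front", "left", "back", "right"])
print(known)
print(fulfilled)
```

Each pass through the loop converts a region of indeterminacy into determinacy, and a `False` in `fulfilled` is the element of “surprise” that Madary’s sculpture example, discussed below, turns on.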

It seems AI should be very good at all of this. So what, if anything, about human reality does AI miss? Is it that there is no psychological reality corresponding to the process as AI executes it? Can the distinctness of the human’s psychological reality be evidenced? Is it manifest in the idea of motivation? Does AI execute the same process but with a different (or absent) motivation vis-à-vis its human counterpart? The human viewing a modern sculpture from different perspectives may be seeking aesthetic beauty or pleasure, which would seem beyond the scope of AI. Although local human actions may mimic AI processes, it may be that the ultimate motivation behind human action in general falls outside the scope of AI – let’s call that ultimate motivation happiness (following Aristotle) or the satisfaction that comes with living a good life (following Plato). Is there any comparable motivation for AI?

2. Transcendental Fulfillment

Or how about this take on the AI/human difference. What humans are always seeking, the grand dream under it all, is fulfillment liberated from the anticipation-fulfillment cycle, a sense of contentment that gets us out of the rat race of endless desires and partial fulfillments. For a correlative to the visual perception model, picture yourself gazing at a painting, motionless, without any interest in multiplying perspectives, just having the static fulfillment of the beauty in front of you for a certain duration. What elapses in that duration may be an expression of the thing that is inaccessible to AI. What humans really want is not a sense of fulfillment indefinitely deferred by endlessly proliferating perspectives – the never-ending drive for more data that might occupy the AI entity. We want THE sense of fulfillment that comes when you opt out of the cycle of proliferating perspectives, a sense of fulfillment that transcends process. So whereas the anticipation-fulfillment cycle is an end in itself for AI, for humans all such processes are instrumental; the end in itself, that end which motivates the whole process, is that which falls outside of the process, a kind of static contentment that is inaccessible to the AI.

3. Singularity, or Singularities

The concept of singularity might help to clarify the distinction between fulfillment embedded in the anticipation-fulfillment process and transcendental fulfillment, fulfillment liberated from the cycle. Transcendental fulfillment is, let’s say, a metaphysical singularity – the space of infinite oneness alluded to by many philosophers and mystics, indicating escape from the rat race. Compare that to the technological singularity, the critical mass at which artificial superintelligence would eclipse all human intelligence, rendering homo sapiens superfluous. Perhaps fears about the technological singularity, about an external AI dislodging us, are misplaced. I will tentatively side with those who say that computers do not really have their own motivation, their own autonomy. The risk then is not of some external AI overtaking us; the risk lies rather in the porosity between human reality and AI. The risk is not that AI will defeat us in some battlefield Armageddon but rather that it will bleed the life out of us, slowly, imperceptibly, until we go limp. We have already ceded quite a bit of brain activity to the machine world. We used to have dozens of phone numbers in our heads – now all stored in our “external” brains. We used to do a lot more math in our heads. (Try getting change today from a teen worker without the machine telling them how much to give.) World capitals? In the external brain. City map layouts in one’s head? Replaced by GPS real-time instructions.

“So what?” students today might say. “Memorizing all that stuff is now a waste of time.” And they may be right. But as the porosity increases between human reality and the computerized ether we live in, we cede more and more of our basic survival skills to the ether. I don’t expect malice on the part of AI (although the HAL 9000 was a cool concept), but there may come a tipping point at which we have ceded the basic means of species survival to the machine world. And in ceding more control of our inner lives to the external brain, we become more embedded in the anticipation-fulfillment cycle. Even basic human activities take on a query-and-report format. It becomes increasingly difficult to “opt out” of the processing apparatus and find that space of reflection that transcends the endless proliferation of future-directed perspectives.

4. The Historical Side: Dystopic or Utopic

All this talk about homo sapiens being bled out sounds quite dystopic, and perhaps dystopia is the endgame. But not all possible futures are grim. First of all, in structural terms, porosity is two-directional. Ever since the invention of writing, we have transferred information into the external media of books, giving subsequent generations the capacity to “upload” that information and store it in their brains when the books are removed. This prompted Clark and Chalmers, as far back as 1998, to theorize about the “extended mind,” in which the space of the mind is shared by internal processes and environmental objects that work in tandem with those processes. Another parallel is in Wittgenstein’s Blue Book example wherein we use a color chart until we “learn” our colors, and then throw the chart away. In these cases, the external device provides intermediate information storage. We use the Internet in this fashion all the time. Nothing dystopic here. But is it different when the device becomes capable of evolving its own algorithms, generating its own information, and using it to implement tasks that go far beyond mere storage? Perhaps so, but it is not yet clear that the dystopic end is inevitable.

Second of all, in terms of social implication, technology could free us up to spend less of our lives on drudgery and more of our lives in that reflective space of self-fulfillment, working out our own electives of self-realization. Indeed, this is the signature promise of technology in the age of capitalism. Ever since the early 19th-century Luddite rebellion, technology has repeatedly made this promise and repeatedly failed to deliver. Why would it be any different now? It could only be different if there were a fundamental shift in our perspective of what it is to be human.

When Madary exemplifies his visual perception theory with the example of a modern sculpture, he introduces what for me is the wild card in the anticipation-fulfillment cycle: the element of surprise.

“Recall a situation in which you moved to gain a better perspective on a novel object and were surprised by how it appeared from the hidden side … Modern sculpture can be helpful for illustrating visual anticipations because the precise shape of the sculpture is often unclear from one’s initial perspective. Our anticipations regarding the hidden side of a modern sculpture tend to be more indeterminate than our anticipations about the hidden sides of more familiar objects.” (Madary)

Whereas I started out by saying that Madary’s anticipation-fulfillment model of visual perception applies equally to AI and humans, I suspect we might handle the element of “surprise” differently. In the case of humans, “surprise” is a trigger for imagination, a less tractable faculty than might be intelligible to our future friends in the AI phylum. Sure, our AI compatriots might predict possible futures as well or better than we do (and thus they might best us at chess), but is that really “imagination”? Humans imagine not only possible futures, but also alternative presents and alternative pasts, using self-generated imagery to feed nostalgic visions of times gone by. There is something about the creative process of imagination that might separate us from AI and might make us less predictable in the event of “surprises” or disruptions in the normal anticipation-fulfillment process. Since technology has typically failed to deliver on its promise to enhance self-fulfillment time for personal development, we might anticipate another failure when technology says that AI will truly free us up from drudgery. But the result could be different this time if a rupture in conditions is great enough. Imperatives of income inequality and ecological destruction might be rupture enough. As we survey our predicament the way Madary’s viewer surveys the modern sculpture, we might on the other side glimpse the end of capitalism (which may sound dramatic, and yet all ages do end). Perhaps this might jolt the imagination to a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success” and “self-actualization” – to wit, a new definition of what it means to be fully human.

How rapidly and wholly we make that turn to a new definition of what it means to be fully human will lock in the dystopic or utopic endgame. In the dystopic version, homo sapiens is bled out by some combination of AI and economic and ecological calamities. In the utopic version, consciousness about what it is to be human evolves quickly enough to allay those calamities and to recapture AI as the servant of human ends and not vice versa.

Footnote on Kierkegaard’s 3 modes of lived experience: aesthetic, ethical, religious

The anticipation-fulfillment model of visual perception can be seen as the basic process of Kierkegaard’s aesthetic mode, the sensory-based life. A life lived wholly on the aesthetic level is a continuous accumulation of equally ephemeral sensory perspectives on one’s own life.

The ethical life turns out to follow the same model but on the ethical level. The ethical life is a continual accumulation of equally provisional ethical perspectives.

The religious life, though, breaks the model. It concerns the absolute. It does not consist of accumulating perspectives but of a singularity; it eschews all the accumulations of visual or ethical perception for the singular relation between the religious subject and the absolute, a singularity which obliterates all mediate or quantitative concerns.

Michael Madary’s Visual Phenomenology

Richard Marshall’s interview of Michael Madary in 3:AM Magazine


Transhumanism

For Thomas Z., to whom I owe a philosophical entry

First thing in Mainz was to join my philosopher friend, Michael, over a bottle of Spätburgunder, the delicious red wine you can only find in southwestern Germany, and hear about his recent forays into transhumanism. The concept echoed some recurring themes of my blog, so let’s have another go at it.

Here’s a quote from the mover and shaker of transhumanism, Max More.

“Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die – just as we’re beginning to attain wisdom. And, you forgot to give us the operating manual for ourselves! … What you have made is glorious, yet deeply flawed … We have decided that it is time to amend the human constitution … We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence … Over the coming decades we will pursue a series of changes to our own constitution … We will no longer tolerate the tyranny of aging and death … We will expand our perceptual range … improve on our neural organization and capacity … reshape our motivational patterns and emotional responses … take charge over our genetic programming and achieve mastery over our biological and neurological processes.”

An enticing mission statement, no doubt, but which side carries more weight — the passionate techno-idealism or the Faustian arrogance? What if we expand and magnify all the quantifiable aspects of human identity only to discover that the things of true value in the human experience are precisely the non-quantifiable ones? To paraphrase a fine blog entry by your present correspondent, what if we increase our knowledge a hundredfold, a millionfold, about neurological indicators of “being in love,” place all our bets for a better future there, and then discover, like J. Alfred Prufrock, that “that is not it at all,” that an infinite and complete set of data about the neurological (objective) facts of being in love turns out to be a mere child’s game, an insignificant correlative to the real thing, the subjective experience of love, love in its non-quantifiable aspect? What if we place all our bets on the objectively measurable and manipulable, and then find that the objective abstraction of reality is just the husk, the crust, the empty shell of lived experience? As Sri Sri Ravi Shankar says, we cling tightly to the banana skin and throw away the banana. The objective aspect of reality may be nothing more than a map whose coordinates correspond to the subjective conditions that make up the real meat and matter of life. Knowing every infinitely granular datum on a map of New York is not the same thing as being alive and in New York.

And the transhumanist’s desire for improvement may seem intuitively good and true, but is it really that intuitive? I would say that the obsession with continual improvement is a modern, or at least post-Renaissance, obsession. As late as the eighteenth century (at least in England, whose cultural history I’m most familiar with), there was widespread and vocal resistance to the apostles of “improvement.” If the ancient Greeks were right that meaning and value for us is to be located in “happiness” (Aristotle) or in living “the good life” (Plato), is the frenetic quest for continual improvement really conducive to those ends? Couldn’t the Greeks be right that a life of tranquility and acceptance and reflection is more apropos?

Or, to take the most persuasive case for the transhumanist, the ethical case, why not modify human beings to be more altruistic? Surely there’s no harm there. Maybe. But what if moral variation turns out to have the same crucial value in our spiritual journey, our collective quest for the good life, as genetic variation has in the biological furtherance of the species? Absent moral variation, is there then no way forward, no dynamic built into the system, no adaptability without a spread of traits across individuals?

Finally, there’s the sense that you can’t beat Mother Nature. In the 1950s, the “improvement” team was telling us that factory-made formula was better than mother’s milk. The most conventional of modern medical practice holds that a lifelong battery of pharmaceuticals and surgeries is better than the body’s natural healing processes. DDT to kill pests sounds great until you realize there’s a reason Mother Nature did not carpet-bomb her own fields and rivers with DDT. Science is enormously instructive within its scope, but when it goes beyond scope with easy claims of how it can outsmart nature’s millions of years of accumulated intelligence, I would like to keep at least one foot on the brakes.

And even if you could beat Mother Nature, at least temporarily, postponing death, is that really so great? If we don’t grow old and die, children’s voices will no longer fill playgrounds, as the cycle of death and replenishment of the species will have been broken. Is the trade-off really worth it? Extend your old age further and further in a world with fewer and fewer kids at play. This specific point is negotiable, but in general, the “obvious” good might sometimes have a collateral damage that our scientist, or a particular community of scientists, limited by their historical vantage and their own egocentrism, may not see.

Despite all this, I remain intrigued by transhumanism and hope to read up on it. (Full disclosure: I have not studied the actual literature on transhumanism at all; I am merely using my discussion in Mainz as the occasion to develop these thoughts.) I am not against all efforts to improve the human condition. I myself have a hippie idealism about where to go from here that my more faithful readers will know. But when we’re going to improve the moral and social condition of humans, and rewrite our collective idealism, based on the mechanical technologies of the day, I would at least like to know that the transhumanist has fully considered all the counterpoints.

Frankenstein is a tired comparison but apt. The good doctor was motivated by pure idealism, with a passion to use technology to better the human condition. In our narrative, the narrative of living humanity, can we be sure that the transhumanist will really be able to rewrite the ending this time?

P.S. Thanks, Dr. M., for pointing out that the confederacy of dunces has my back (New York Times, 07/26/16).

Fallacies of Science

To the scientists in my circle: I’m more with you than you think. I don’t doubt for a minute the value of science. I find it absurd, e.g., that some people think religious texts can compete with science as a source of information about how the physical world works. But I like to amuse myself by playing watchdog for my scientific friends.

Even in my watchdog role, I can raise no objections to the scientific method, or to the analytical power that science has to unpack the facts and processes of the physical world. But as self-appointed guardian at the gates, I propose the following fallacies often committed by the scientifically-minded – all, again, fallacies of application or of scope, not intended to impeach the core value of the scientific method but to snap at the heels of scientists — and even our most admirable scientists like Neil deGrasse Tyson and Stephen Hawking — when they make claims that go beyond the scope of their expertise.

The fallacy of metaphysical (external) scope

As I’ve argued elsewhere in this fine blog, science studies the “objective world” and has great analytical power within that scope. But science oversteps its scope when it claims that the “objective world” is the “real world period” and anything else is nonsense, thus implying that science is the one and only path to truth.

I propose that it’s misleading to call the “objective world” (which is the full scope of scientific inquiry) real or unreal; it is more accurately an abstraction from reality. There is no purely objective world just as there is no purely subjective world. Each is an abstraction from lived reality.

(Don’t the abstractions called “objects” in computer science suggest as much? A computer program at Tulane may, and probably does, have an “object” called Wayne xxx. This object is an abstraction that consists of a character string (name), numeric string (birthdate), etc. A different database—say that of the IRS—may also have an object called Wayne xxx but with different characteristics abstracted. The physical scientist, like the computer scientist, studies only those details relevant to his or her level of abstraction. But scientists sometimes forget this and make claims that go “beyond scope.”)
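To make the computer-science side of the analogy concrete, here is a minimal sketch; the class names, fields, and values are invented for illustration and are not drawn from any actual university or IRS schema:

```python
# Two "objects" abstracting the same person, each keeping only the
# details relevant to its own level of abstraction. (All class names,
# fields, and values here are hypothetical.)
from dataclasses import dataclass, fields

@dataclass
class UniversityRecord:          # what a university abstracts
    name: str
    birthdate: str
    student_id: str

@dataclass
class TaxRecord:                 # what a tax agency abstracts
    name: str
    taxpayer_id: str
    reported_income: float

u = UniversityRecord("Wayne", "1970-01-01", "T-1234")
t = TaxRecord("Wayne", "000-00-0000", 50000.0)

# Same person "out there"; two non-overlapping abstractions of him:
print({f.name for f in fields(u)})
print({f.name for f in fields(t)})
```

Neither record is the person; each is an abstraction built for a purpose, which is precisely the status I am suggesting for the scientist’s “objective world.”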

Just as the scientist elucidates valuable truths from her abstraction from reality (called the “objective world”), so might poets, philosophers, and Zen masters elucidate valuable truths from their abstractions from reality. It’s not at all clear to me that the subjective aspects of lived reality – art, justice, ethics, the felt joy of love and friendship, and the felt pain of loss and betrayal, are really reducible to (although they may be correlated to) scientific data about neurons. It’s not at all clear to me that the rich unconscious landscapes of Greek mythology or Blake’s visionary poetry, or the subjective-centered critique of empiricism in Kant’s philosophy, teach us less about lived reality than Darwin. To call the scientist’s abstraction of the world “the real world period” is to falsely assign it a metaphysical status, confusing one abstract way of looking at lived reality with the presumed metaphysical ground of lived reality itself.

The fallacy of substantive (internal) scope

Let’s look more narrowly at the role science plays within the scope of the objective world it studies. It mines and generates much knowledge about the physical world, and for that we are grateful. But how much of its substantive area does it really grasp? Even at its present power, it only nibbles the tip of the iceberg. Take the human body. Medical science knows much more about the body’s processes than it knew 350 years ago, when the Age of Science really started coming online. We look back at the 17th century as a kind of dark ages of leeches and blood-letters. Isn’t it obvious that science will expand its knowledge base just as rapidly, if not more rapidly, in the centuries to come? Won’t they look back at us with the same amusement, as a people nobly gathering knowledge but remarkably primitive in what we had gathered?

This telescopic view from the future should give us pause before we leap. Just a few decades ago, “science” was telling us that it could produce a baby formula more nutritious than mother’s milk. For every “well-tested” drug on the market, there’s a class action lawsuit addressing unintended consequences of that drug. One doesn’t have to be religious to believe that there is a vast (evolved) intelligence at work in the human body and in nature, and that science has only mapped a few percentage points of what is really going on in these systems. Don’t get me wrong – a few percentage points is better than no percentage points, and I’m all for science expanding its knowledge base. But when it comes to applying that knowledge, I take a humbler approach than some more eager proponents of science. The pro-implementation argument I most hear is that the things to be deployed have been tested exhaustively in study after study. Although this may be true, it is limited by context. If scientific understanding of its subject area (in this case the human body and the natural world) has leaped from 1% to 5% in the past few hundred years, it has still mapped just the tip of the iceberg, and still leaves enormous territory unexplored. So when you test exhaustively for results and side-effects, you are only really testing within the zone you understand. There are so many collateral aspects of human and natural ecological systems that are undiscovered that it is sheer arrogance to say that we’ve tested by 2015 standards and thus pronounce such-and-such safer and more effective than Mother Nature.

How does this translate to policy? If you have a serious illness, by all means draw upon that scientific knowledge base and try a scientific cure. If you have a less serious illness, you may be better off trusting to the body’s natural healing mechanisms, insofar as science has only scratched the surface on how these mechanisms work, and tampering with biochemical processes may do more harm than good. I and everyone will have to judge this case by case, but by no means am I willing to conclude that science understands every aspect of how the body works and has therefore tested and measured every collateral effect for a particular drug or procedure.

On a tricky subject such as GMO foods, I am not as rabidly anti- as some of my hippie-ish brethren, but not as naively optimistic as some of my scientist friends. I like the idea of scientists building a knowledge base on this topic. But when it comes to implementation, I tend to keep one foot on the brakes, especially since radical changes can now be implemented globally and with much greater speed than in centuries past. I’m not at all convinced that science in its current state understands all the collateral processes of nature well enough to make the “exhaustively tested” claim. Or, to go back to our telescope of time, isn’t it possible that scientists 200 years from now will look back and shake their heads in amusement at our “exhaustively tested” claims?

And I haven’t even gotten to the corruptive influence of money and big corporations when it comes to what substantive areas of scientific inquiry will be funded and how results will be implemented. There may be something like a “fallacy of scientific purity” embedded here.

The fallacy of epistemological scope

Here, I use epistemology broadly as the quest for knowledge – almost, one could say, the quest for self-actualization that drives human reality, if not every aspect of reality. British Romantic poets will be my outside reference point here. The Romantics saw the development of self-knowledge, or self-actualization, in three stages. In Blake, these correspond to an Age of Innocence, Age of Experience, and an Age of Redeemed Imagination. In the Age of Innocence, we access knowledge through the fantastic mechanism of imagination, which keeps us in a state of wonder but leaves us naïve about the world and easily exploited. In the Age of Experience, we begin to access knowledge through reason and science, gaining factual knowledge that makes us less naïve and more worldly, but with that worldliness comes a cynicism, a sense of world-weariness, a sense of loss, of fallenness. Indeed, the Romantic world view at times seems to equate the world of Experience, the world of objective facts, with the world in its deadened aspect. The trick in Blake is to find the turn into a third stage, wherein the power of imagination re-engages at a mature level, re-animates the dry world of abstract facts, and saves us from the cynicism of Experience. In a word, we can put the scientific-type knowledge of Experience into perspective. We can still see its value but without being constrained by it in our quest for self-actualization. In Wordsworth’s “Tintern Abbey,” this plays out as the innocence of “boyish days” (73), experience “‘mid the din / Of towns and cities” (25-26), and the “tranquil restoration” of the mature poet (30). In the third stage, the sensory raptures of youth and the worldly knowledge of experience have both lost their traction. Specifically, the poet has lost the pleasure of immediacy but has gained the power of inward reflection. The “sense sublime / Of something far more deeply interfused” (95-96) is reserved for the third stage, and indeed is specifically used as a counterpoint to the sensory appreciation and worldly knowledge of earlier phases.

These 3 stages can easily be projected beyond the individual onto the cultural or even the cosmic screen. Blake, with his Jungian vision of the archetypal sources of consciousness, readily applies it to the cosmic level. I’ll apply it to the level of cultural history by saying that the Age of Science fits the second stage very well. Science emerged as the dominant epistemology around the late 17th century, putting to bed some childish theories and introducing us to a more worldly-wise engagement with the physical world. Who knows when this Age of Science will end, but when it does, perhaps then we will enter the Age of Aquarius I’ve promoted only half tongue-in-cheek. And perhaps then we will look back at the Age of Science as Blake or Wordsworth look back at their middle stage – as an epistemological period that starts out liberating but eventually binds our imaginations, makes us a little cynical about the possibilities of self-actualization, chains us to what Plato calls “the prison-house” of materialism. So the fallacy of epistemological scope is the fallacy of myopically seeing only that force of knowledge that is present in the middle period, whereas true wisdom may be broader than that. It may be that the innocent child and the mature poet can grasp things about reality that are inaccessible to the purely scientific mind.

The watchdog sleeps

So those are my fallacy sketches for my scientific friends. Now pause and ponder.

[Image: “Bad Day” drawing by Rachael Gautier]

 And if in your pondering, you find yourself viewing me with the gaze of the character above (provided by the talented Rachael Gautier), remember: When my watchdog shift ends, I’m more on your side than you think. At least you can take comfort that in the next U.S. election I will be voting for the party that takes science seriously and not the party that seems perpetually at war with science. Meanwhile, I’m happy to revise, especially if a particular Ukrainian physicist I know will home-brew another batch of Russian Imperial Stout to facilitate the review process.

Psychosis/Enlightenment 2

MT, we started by talking about Plato, and you pondered what would happen if we stripped away our illusions. Would we end up as the Dalai Lama or as Meursault in Camus’s The Stranger? Would we spiral towards madness or find serenity?

So I pondered Plato. Reality is a manifold, with some layers more illusory than others. Plato found the sensory layer most illusory (as do the Buddhists I presume), but he didn’t see it in black and white terms (illusion bad, reality good). Even the sensory layer is an important first step, a pointer to the next layer, which then seems “real” to us until we get one step deeper, etc. MT, you’re becoming a Platonist despite your own resistance.

Note that Plato’s assignment of sensory data to the lowest level (most illusory) of reality/truth seems to pit him against the empiricist epistemology that dominates our current Age of Science (late 17th century to present). However, one of the foremost thinkers of the emerging Age of Science, David Hume, who carried empiricism as far as it could logically go (much to the consternation and inspiration of Kant), concluded much the same – that following the truth of sensory data (empiricism) leads us to conclude that sensory data tells us nothing about the objective world “out there,” but only about the imprints some presumed world out there makes on our personal sensory registers. The only difference between Hume the empiricist and Plato the rationalist is that, after each has deconstructed the idea of gaining knowledge about the world-as-it-really-is via sensory data, Plato seeks a deeper layer through rational inquiry while Hume says that’s the end of it and goes out for a pint and a game of backgammon (and my Scottish friends can take that as an insult or a compliment, as you will).

I like your Dalai Lama or Meursault reverie, but I’d go a step further and say that these are the utopic and dystopic outcomes, respectively, of stripping away our illusions.

Although at first glance it seems cute but false to say that madness equates to being “stripped of illusions,” it becomes believable when I think of illusions as filters. To lose all of your filters would seem a form of psychosis. Someone — was it Aldous Huxley in The Doors of Perception? — suggested that consciousness itself evolved not as a way to increase access to the world but as a filter for limiting access to the world, for blocking all the “ambient noise” as it were, so we could focus on a smaller zone of input more efficiently. And if the Huxley reference is right, I think he went on to say that hallucinogenics remove filters, quite literally expanding the scope of consciousness (and he struggles with whether the output is more akin to psychosis or enlightenment).

For the psychosis side of the equation, see psychoanalyst Jacques Lacan and his sometime follower, Julia Kristeva. In my primitive understanding of Lacan, we pass through three “orders” in the formation of the psyche (or rather we build up three layers, like rings in a tree). The “real” order is the hidden kernel to which we have no analytical access. Like the noumenal world in Kant’s metaphysics, it is merely a logical assumption that we must make in order for later stages to make sense. We enter the “imaginary” order when we one day see ourselves in the mirror, so to speak (maybe around a year old), see an entity with clear boundaries, and come to imagine ourselves as separate individuals surrounded by external people and environments. Later, we enter the “symbolic” order with the formation of language skills. We begin to process the world through a symbolic overlay (e.g., the sound “tree” symbolically represents the concept “tree,” which isolates and defines a whole range of sensory inputs, the sound “me” represents…, etc.). We now define our personhood relative to that symbolic overlay. We have entered the symbolic order.

In trying to access the “real,” we can only “imagine” it as an undifferentiated flux, or conceptualize it via the symbolic order (as a logical presupposition, an object of psychoanalysis, etc.). Either way, our view is mediated through imaginary or symbolic orders – we have no direct, unmediated access.

Kristeva followed Lacan in theory and focused in practice on “borderline” patients, patients whom I think she found permanently stuck between imaginary and symbolic orders, with perhaps some tantalizing glimpses of the “real” (alas, I’ve lost my original notes on Kristeva and Lacan to Hurricane Katrina).

Back to Huxley’s inference about hallucinogens: he might say that they strip away the layering of the symbolic order, the webs and webs we have thrown over the flux of original experience, dividing it up into regions we can name and render intelligible. If you strip away all that layering, all those illusions, and get back in some fashion to the lived experience of the “imaginary” order or even the “real,” is the result more akin to psychosis or enlightenment? I think Huxley tentatively concludes that it can give you isolated moments of personal enlightenment but that it is inconsistent with everyday life; it inhibits your ability to function successfully in the workaday, social world (which seems consonant with my personal LSD experiences). In other words, you can strip away the illusions and dip into those pre-symbolic levels of experience, but you have to come back up to sustain your everyday life, since the very enlightenment you feel on the personal level renders you psychotic relative to the social order within which you must live.

Then again, there’s always the Dalai Lama.

Prequel: Psychosis and Enlightenment

Science and Philosophy

For some reason, science and philosophy have recently been pitted against each other in the blogosphere and public discourse. Perhaps it stems from something Neil deGrasse Tyson said in Cosmos, though I didn’t have a chance to watch it. The antagonism between the two disciplines, in any case, seems unwarranted.

Science was a subset of philosophy (“natural philosophy”) until the late 17th century. The subset was defined as a basically empirical quest for knowledge about the sensory world, or the objective world. Science has now grown into a separate discipline, and I think all acknowledge that physicists are far more precise than philosophers at elucidating knowledge of the objective world. But the objective world is only one abstraction from lived reality. When it comes to the subjective aspect of lived reality and related values – art, ethics, love, justice – philosophy has the edge. If you’re grappling with “how to live a good life” (a favorite question of the ancient Greek philosophers), a perusal of Epicurus or Gandhi might serve at least as well as Newton’s Principia or Einstein’s General Theory of Relativity. And every physicist should be able to appreciate, at a minimum, Plato and Hume and Kant, who consider the logical presuppositions of empiricism as well as the conditions within which physics and the study of the objective world have a value for those of us living concrete human lives. “Why should we care about science?” is almost by definition the purview not of physics but of meta-physics, as it requires someone to step outside of science and view science as a whole against the larger screen of human values and what makes life worth living.

I think all will also acknowledge that science isn’t “the world” but a secondary mechanism that observes and analyzes the world at an objective distance. There will always be a difference between the immediate experience of the world (e.g., the feeling of being in love) and the mediated analysis of the world (e.g., identifying the chemical process that corresponds to the feeling of being in love). Science is de facto a mediated view of the world; it gains its power by limiting its scope to what can be gleaned at an objective distance from lived reality. Plato’s myth of the cave, Boethius’s metaphor of the circle, Blake’s visionary poetry, Buddhist yoga practices, and Shakespeare’s plays, on the other hand, give us access points to lived reality that might fall outside the scope of science (i.e., vantage points that do not stand at the same objective distance as science).

So I am as fascinated as most with the yields of science, but I say let’s celebrate the scientist, artist, and philosopher alike for advancing our range of fulfillment. And let’s keep some historical perspective. Pre-17th-century periods, in which empiricism was not the dominant epistemology, didn’t value science quite as much because they considered the sensory world less important in the scheme of human values. Science and empiricism constitute the dominant epistemology of our age (a comparatively short 300 years so far). But who knows what priorities, what epistemologies, what new paradigms lie past the horizon line of the next age?

M. Gandhi and Ayn Rand

“How does one live a good life?” was the core question for Plato and other classical Greek philosophers. Here are two mutually exclusive answers from the 20th century:

Gandhi: Through service to others and simplicity of lifestyle.

Ayn Rand: Through rational self-interest and the advancement of capitalism.

Pick your path to happiness and to our best possible future. I know which one I lean toward.

A Defense of Plato

Dear MT,

Per your comparisons, I don’t think Plato is as eager as Nietzsche or Kierkegaard (or perhaps MT) to separate men into two groups and condemn the ignorant masses. Plato’s myth of the cave is more about PROCESS than about passing judgment on the ignorant. It’s sort of like a rational correlative to the Buddhist process of enlightenment. We ALL resist the truth when it first dazzles us and we’re used to shadows. Plato’s myth is about the process we ALL have to go through if we want to achieve enlightenment. And yes, some are not strong enough, some have to turn back. But for Plato I think all rational beings have the capacity if they can find the fortitude. And he quite explicitly says that the enlightened ones should go back and help those who are still in the cave. In this sense he’s more Buddhist and less condescending than Nietzsche and Kierkegaard (especially Nietzsche in my estimation). In this process-orientation, Plato is actually not far from Aristotle’s notion of entelechy, where all things strive unconsciously toward their ideal destination, like the acorn strives toward becoming the oak. In fact, the wedge between Plato and Aristotle is somewhat forced. They have different emphases, yes, but they share a lot of fundamentals. Aristotle learned his Plato well.

In metaphysics, I think your resistance to Plato is a resistance to a straw man version of Plato – as if his formal world were like the Christian God with the beard who sits somewhere in physical space. I find it hard to believe Plato would be so naïve. He is just saying, in the cave and elsewhere, that there is an intellectual reality, a kind of Jungian collective unconscious, which is a hidden prerequisite to all the contingent truths we find in our everyday (transitory) reality. Whether we realize it or not (and most of us don’t), the contingent truths we structure our daily lives by would not be intelligible were they not undergirded by that collective unconscious, that conceptual substrate of deeper truths. And the deeper we dig, the closer we get to eternal truths and the more deeply we understand the prerequisites of our surface knowledge.

So you’re right that your idea of a perfect car may not match my idea of a perfect car, but were it not for some abstract concept of perfection implicitly acknowledged by both of us, neither of us could have ANY idea of a perfect car. The concept of perfection is a presupposed premise of your idea and my idea. So now we can talk about a concept of perfection that, albeit abstract, is a necessary prerequisite to our contingent and various concrete ideas. Now we can ponder things at a deeper level, and delve dialectically deeper into the roots of our own consciousness. That’s what Plato is all about.

Re politics, of course Plato’s politics does sort men, but the sorting is not as damning as in Nietzsche. He just says that few men will find their way out of the cave and stay out, and those should be our leaders. And he is undemocratic in the sense that he seems to believe that order requires hierarchy – a practical consideration more than an existential judgment about master and slave races a la Nietzsche. We moderns tend to dismiss hierarchy as a prerequisite to political order, but go back just to the late 18th-century Enlightenment and you will still find strong and intelligent voices (e.g., Edmund Burke, Samuel Johnson) arguing that without hierarchy is chaos. So I don’t agree with Plato here, but I’ll give him a pass on politics. (From what I hear, Rebecca Newberger Goldstein’s new book, Plato at the Googleplex, presses Plato harder on the human implications of his politics, but I haven’t had a chance to read it yet.) Anyway, as I’ve said, I don’t think politics is the most compelling branch of his philosophy, but I still agree with Bertrand Russell’s mentor, Alfred North Whitehead, that “Western philosophy is a series of footnotes to Plato.”

And with due respect to Nietzsche’s wit, I think Plato would be the more amiable drinking companion.


Two Critiques of Materialism

1. What are you most certain of and how do you know it? Let’s say you’re pretty sure about many things but most certain about mathematical truths (e.g., an equilateral triangle has three 60-degree angles). Let’s say to prove this you draw a triangle on the chalkboard. But the triangle on the chalkboard is imperfect: the lines are grainy and not quite straight, etc. In fact, every physically produced triangle will fall short of the perfect triangle. But all of your mathematical knowledge about triangles is based on the perfect triangle—the one in which the lines are perfectly straight and the angles exactly what they should be. That ideal triangle, the one that doesn’t exist in the material world, is the only one you really know anything about, but luckily that knowledge carries over, albeit imperfectly, to the material world. And mathematical knowledge, being the most certain, is a model for other kinds of knowledge. So according to this theory, the material world does exist, but for us to have any real knowledge about it requires the assumption of an ideal world that transcends this or that material object.
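For what it’s worth, the geometric certainty in question really is a two-step derivation about the ideal triangle, not about any chalk one – a small sketch, assuming Euclidean geometry (where a triangle’s interior angles sum to 180°):

```latex
% Euclidean angle sum for any triangle:
%   \alpha + \beta + \gamma = 180^{\circ}
% Equilateral means the three angles are equal:
%   \alpha = \beta = \gamma
% Substituting gives each angle exactly:
%   3\alpha = 180^{\circ} \implies \alpha = 60^{\circ}
\alpha + \beta + \gamma = 180^{\circ},
\qquad \alpha = \beta = \gamma
\;\Longrightarrow\;
\alpha = \frac{180^{\circ}}{3} = 60^{\circ}
```

Note that no measurement of a drawn figure enters the derivation anywhere, which is exactly the critique’s point: the certainty attaches to the ideal object, and only approximately to its material copies.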

2. The second critique disagrees with the first, and assumes that knowledge is sensory-based. All knowledge begins with the five senses. But what knowledge do you gain from the five senses? Knowledge about the material world? Not at all. “The pillow is red” is not a proposition about the pillow itself, but about how the pillow registers in the retina. “The pillow is soft” says nothing about the pillow in itself but only addresses how the pillow registers tactile sensations in neurons under the skin. In fact, we may agree that the pillow is largely empty space with tiny atoms bombarding each other. But this knowledge too has been derived from empirical studies that rely in the first instance on sensory data. So all we know about are the imprints made upon our own body’s sensory registers. If there are material things out in space which exist independently of our sensory judgments, we can know nothing about them in themselves—we can only know about them post-processed, as it were. So according to this theory, the material world may exist, but we’ll never know, because all we know about are the products of our own subjective processing plant.

———————————

The first critique was favored in the 4th century BC by Plato, who deliberately rejected empiricism. (The Greeks had already theorized that the world was nothing more than billions of tiny particles called atoms bombarding one another and creating the appearance of solid shapes—a conclusion they reached through philosophical inquiry, without any need for scientific instruments. Plato had studied this materialist conclusion and rejected it on the grounds that it could only give us low-level knowledge accessible to the senses: knowledge of material reality, which is only a shadow cast by a more ideal reality accessible to reason.)

The second critique was favored by the empiricist David Hume, who was presumably following the empiricism of 18th-century scientists to its logical conclusion—that the material world was either non-existent or utterly unknowable. (There are counterarguments and rebuttals; Kant, e.g., would pick up exactly where Hume left off. But it’s interesting to see how the ultimate empiricist, Hume, and the ultimate anti-empiricist, the rationalist Plato, both conclude that a purely materialist worldview is untenable.)


Regifting and Post-Tech Ethics

Roiled in the recent holiday spirit, my friend, Brit, asked if I could do a regifting manifesto in the vein of my fashion anarchy manifesto. I thought I’d over-comply and build an entire ethical system around regifting. Thus the following.

I think of ethics as having a constant layer and a layer of culturally specific variables. The constant layer – the golden rule – is fairly simple, and remains constant even as it is expressed differently by Kant, Jesus, Plato, Confucius, et al. As the Dalai Lama puts it: “If you want others to be happy, practice compassion. If you want to be happy, practice compassion.”

On the variable layer, ethical conundrums arise with each age and within each culture. As the Mayan calendar ends and we move into the post-technological age, I see a few practical strategies for ethical behavior that might navigate us from late capitalism to the Age of Aquarius.

First, we have to restructure our ethical vision to meet changes in the natural environment. Technology has reached a point where it can (a) rapidly strip-mine all remaining resources off the face of the earth in pursuit of quick profits, or (b) distribute resources as needed to all parts of the world. The Corporate State wants to bind people to the consumerist ethic that keeps technology on track (a). One person alone can’t stop that consumerist mentality, with its concomitant greed and political structures, all designed to maximize how much stuff can be hoarded. But there are things individuals can do. And through the old-fashioned ripple-effect of friends of friends of friends, and the newfangled speed of social media, we can change the cultural sensibility more rapidly now than in the past.

Thrift store shopping (kudos to Macklemore). Simple. Why burn through Mother Nature’s resources more quickly than you need just to satisfy the “new stuff” fetish that has been cynically implanted into our brains by the Corporate State?

Regifting. If you have something you know a friend would like, why not give them something that has a little bit of your own life imprinted on it, something with real traces of sentiment, something that shows you’ve sacrificed a little bit of yourself for them to keep forever or until such time as they regift it and pass along the chain of accumulated sentiment? Things made with your own hands would fall into this category too, at least so long as those things are given in the spirit that the receiver is welcome to pass along the object, which is now a locus of emotional history and not just an anonymous commodity, to someone else that he or she would like to bring into the chain.

Regifting will not get traction as quickly as thrift store shopping, because the Corporate State has instilled this taboo in its subjects more deeply. After all, since regifting completely detaches the idea of “the purchase” from the idea of “meaningful gift,” the Corporate State rightly sees it as an even bigger threat. All the more reason for us to get a movement going to make regifting cool. And here we must rely on a new generation of teens and twenty-somethings, as the stigma will be too much for most older people to overcome on their own.

So practice regifting, and practice thrift store shopping. And practice fashion anarchy, too, as it will maximize creative leeway for every individual and at the same time liberate our most basic self-presentation from the commodified versions of self being sold to us for cold cash at retail outlets and big box stores every day. It will also dispel, and perhaps transform, the motivation of some of consumer culture’s most dogged enforcers (those who act as fashion police). If individuals do these things and promote these ideas mindfully, we will already be moving toward a culture where self-actualization and human achievement are no longer measured in terms of purchasing power.

But don’t underestimate the resistance we will encounter. On the economic level, these apparently small lifestyle choices shift the priority from ever-growing economies to sustainable economies, which is a very dangerous idea to the status quo of profiteering giants who are currently managing the global economy. On the other hand, don’t overestimate the power of those giants. As the earth’s resources are depleted, the age of consumerism will die. The writing is on the wall. The ice sheets are melting. What little rainforest remains (now about 6% of the land surface) could be consumed in about 40 years at present rates. The Age of Aquarius is coming. The only question is whether it will happen via a utopian or dystopian pathway. In the utopian model, human ideals are transformed and we come to find fulfillment in creatively sustaining the resources around us. In the dystopian model, our appetite continues to grow until there are not enough resources left to sustain growth, and the species begins to implode as resources dry up while humans still define themselves by how many resources they can personally control. Now make your choice.

From Boethius to Blake

“The relationship between the ever-changing course of Fate and the stable simplicity of Providence is like that between reasoning and understanding … or between the moving circle and the still point in the middle.”

Embedded in this quote from The Consolation of Philosophy, beautiful to contemplate in its own right, is a code that solves all of Boethius’s philosophical problems. Writing just as the classical period gave way to the medieval (late 5th/early 6th century), and while he personally awaited execution, Boethius struggled with many common questions: (1) how do we explain fortune’s wheel, which turns up and down quite irrespective of what one deserves; (2) how do we deal with the problem of evil; (3) can we justify our belief in free will when everything seems logically predetermined by external and pre-existing forces?

Boethius views Fate and Providence as descriptions of the same reality but from different orientation points. From the point of view of one who exists in time, events often seem to follow each other by chance, with no rhyme or reason to rewards and punishments. But the point of view of the eternal sees the full history of the universe simultaneously. The question of how one thing leads to another is irrelevant, as time has evaporated and the whole of eternity lies before one like a unified tapestry with all threads woven as they should be.

If we accept the premise of these two orientation points, this solves problem #1 directly. Problem #2 he solves with the supplemental argument that all men strive for happiness, that true happiness is consonant with goodness, and that evil is never actually rewarded, as evil people mistake their goal and must always fall short of happiness by virtue of their own evil. Problem #3 is a bit more indirect. From the point of view of Providence, from the still point in the middle, all things are simultaneous. In Boethius’s sometimes theological diction, all things are “foreseen.” But from the point of view of people moving along the circle, they need to make decisions every day with practical and ethical implications. To Boethius, foreseen is not the same thing as fore-ordained. The omniscience at the center of the circle in no way mitigates the urgency of making the right decisions for those of us in motion.

Although one can detect concerns here that would occupy the Christian age, Boethius remains classical in a couple of key ways. His intellectual guide is always reason, his moral compass moderation and tranquility. Combine these with the sense of Providence and Plato’s metaphysics, and you have the basic framework of Christian Platonism that looms over the next millennium.

One could argue that John Milton’s Paradise Lost takes this medieval Christian worldview into the Renaissance. Milton’s Satan is the great Renaissance humanist, the high achiever who thinks it “better to reign in Hell than serve in Heaven.” Satan’s villainy, and his undoing, is his all-too-human pride, his tragic belief in his own self-sufficiency.

Whereas in this (necessarily simplified) line of reasoning Milton smoothly transitions to the Renaissance, just as Boethius had smoothly bridged from classical to medieval, there is nothing smooth about William Blake’s emergence at the beginning of what would be called the Romantic period. Here we get a real rupture. Blake praised Milton for his concrete vision of divine reality, a panorama that rang true to Blake’s own visionary experience. Milton’s only flaw, to Blake, is that he misnamed the characters. The character Milton calls “Satan” is actually the Messiah, and the character Milton calls “the Messiah” is actually Satan.

Shock value aside, there is a method to Blake’s madness. Milton’s Messiah represents reason and restraint, the chains that bind the human spirit in Blake’s cosmology. Milton’s Satan represents passion and excess and unrestrained will, all the redemptive forces that enable maximum human achievement and self-actualization.

All great writers, each with something to offer the questing spirit, but after Blake it’s suddenly a long way back to Boethius.