Singularity: good or bad?

I recently read Zizek’s entries on singularity in The Philosophical Salon and have a few thoughts.

Singularity, as far as I can tell, refers to the networking of all our psyches so that we are all sharing one database of ideas, all our brains plugged (virtually) into the external brain, continually uploading and downloading our thoughts. It really just takes today’s thought and emotion recognition technologies a bit further and adds in the networking aspect. I think of it as a digital objectification of subjectivity, the cognitive correlative of the bio-mechanical hybrids proposed by transhumanism. (Disclaimer: I ponder these things strictly as an amateur, but even the experts on such a topic might want to track what we amateurs are thinking 🙂 ).

Sounds scary, but in one sense it’s just the natural evolution of consciousness. Think about it. Every new communication technology is a kind of brain extension, enabling us to take some of the knowledge stored in our head and store it outside in the community or in external spaces, where it can be retrieved later as needed by us or others.

Spoken language
Writing
Books
Printing press
Personal computers
Cloud-based networks
Singularity

If we wanted to follow the Marxist-leaning Zizek, we could correlate this progression with economic developments, as rapid changes in how information is stored and shared are no doubt interwoven with rapid economic changes. Language allows us to coordinate agricultural activities, writing allows us to organize city bureaucracies, etc.

As for the effect on subjectivity, we could see each of these stages as a kind of alienation of the subject, as the knowledge relevant to the subject’s existence becomes increasingly relocated outside of the subject’s own body. But all that “alienation” doesn’t seem so bad to us now. Language and libraries and personal computers — they seem to move us toward greater freedom, greater control over our personal lives, physically and intellectually.

So will the next horizon line – Singularity – play out the same? Will it appear in the form of alienation and dread but liberate us as did those previous technologies? Or will this one be different? Will the moment of singularity be the moment of collapse in the individual’s trajectory of liberation? One could certainly argue for the dystopic turn. What if singularity results in the elimination of privacy, so that our thoughts are exposed to the general consciousness? What if our thinking process elapses in the collective space, our thoughts visible to those around us, all of us wearing Google smart glasses on steroids? Would we allow such a thing? Indeed, we would probably beg for it, the same way insurance companies get customers to beg for more and more onboard monitoring devices to track their every habit, on the grounds that it “helps” the customer.

At the very least, it seems that the mind-sharing aspect of singularity would result in a degree of self-censorship that is alarming by today’s standards, perhaps alarming enough to break the trajectory of liberation associated with prior communication advances. Would each self be censored into a Stepford Wife knock-off? Or would there not even be a self to censor, if our thoughts form and grow in shared space, our physical bodies and brains merely energy sources for that shared space? Maybe The Matrix is a more apt metaphor than The Stepford Wives. 

Thus spake the amateur, in reference to technological/AI singularity, not so much to singularity in the Eastern/akashic record sense, although that might be an interesting tangent. But per that technological singularity, I suspect there are many in the world with similar amateurish thoughts. Maybe one of you techie readers can chime in and bring the hammer down on our collectively imagined dystopia before it’s too late.


Artificial intelligence and human experience

1. Anticipation and Fulfillment in Visual Perception

Reading an interview with philosopher Michael Madary, I was thinking, as so many do, about how far artificial intelligence (AI) can go in mimicking the human experience. Madary’s bailiwick is philosophy of mind and the ethics of emerging technologies, especially virtual reality. The interview focuses mainly on Madary’s anticipation and fulfillment model of visual perception. The basic model, it seems to me, is equally applicable to human or AI behavior; i.e., visual perception as proliferating perspectives across time. You first see the object from one limited point of view. To truly grasp it, though, you need to anticipate what it looks like from additional perspectives. You move around, double-check, and your anticipation is more or less “fulfilled” or verified. Then the point of fulfillment becomes a starting point for the next stage of anticipation, etc. Visual perception is this process of constantly accumulating perspectives, ever absorbing regions of indeterminacy into determinacy, ever approximating an unreachable objectivity of perspective.
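To see why this loop looks so machine-friendly, here is a minimal sketch of the anticipation-fulfillment cycle as a toy Python program. The object, the viewpoints, and the prediction rule are all my own hypothetical inventions for illustration, not Madary’s:

```python
# A toy "anticipation-fulfillment" loop: an agent views an unknown
# object from successive perspectives, predicts the not-yet-seen side
# from what it has seen so far, checks the prediction, and folds the
# new observation into its growing model.

TRUE_OBJECT = {"front": "flat", "side": "curved", "back": "jagged"}

def anticipate(model):
    """Guess an unseen side: the most common shape seen so far."""
    if not model:
        return "unknown"
    shapes = list(model.values())
    return max(set(shapes), key=shapes.count)

model = {}
for viewpoint, actual in TRUE_OBJECT.items():
    prediction = anticipate(model)        # anticipation
    fulfilled = prediction == actual      # fulfillment (or surprise)
    model[viewpoint] = actual             # indeterminacy -> determinacy
    print(f"{viewpoint}: expected {prediction!r}, saw {actual!r}, fulfilled={fulfilled}")
```

Each pass through the loop checks one anticipation and absorbs the result into the model, which is all the mechanical side of the cycle amounts to.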

It seems AI should be very good at all of this. So what, if anything, about human reality does AI miss? Is it that there is no psychological reality corresponding to the process as AI executes it? Can the distinctness of the human’s psychological reality be evidenced? Is it manifest in the idea of motivation? Does AI execute the same process but with a different (or absent) motivation vis-à-vis its human counterpart? The human viewing a modern sculpture from different perspectives may be seeking aesthetic beauty or pleasure, which would seem beyond the scope of AI. Although local human actions may mimic AI processes, it may be that the ultimate motivation behind human action in general falls outside the scope of AI – let’s call that ultimate motivation happiness (following Aristotle) or the satisfaction that comes with living a good life (following Plato); is there any comparable motivation for AI?

2. Transcendental Fulfillment

Or how about this take on the AI/human difference? What humans are always seeking, the grand dream under it all, is fulfillment liberated from the anticipation-fulfillment cycle, a sense of contentment that gets us out of the rat race of endless desires and partial fulfillments. For a correlative to the visual perception model, picture yourself gazing at a painting, motionless, without any interest in increasing perspectives, just having the static fulfillment of the beauty in front of you for a certain duration of time. What elapses in that duration of time may be an expression of the thing that is inaccessible to AI. What humans really want is not a sense of fulfillment that is indefinitely deferred by endlessly proliferating perspectives – the never-ending drive for more data that might occupy the AI entity. We want THE sense of fulfillment that comes when you opt out of the cycle of proliferating perspectives, a sense of fulfillment that transcends process. So whereas the anticipation-fulfillment cycle is an end in itself for AI, for humans all such processes are instrumental; the end in itself, that end which motivates the whole process, is that which falls outside of the process, a kind of static contentment that is inaccessible to the AI.

3. Singularity, or Singularities

The concept of singularity might help to clarify the distinction between fulfillment embedded in the anticipation-fulfillment process and transcendental fulfillment, fulfillment liberated from the cycle. Transcendental fulfillment is, let’s say, a metaphysical singularity – the space of infinite oneness alluded to by many philosophers and mystics, indicating escape from the rat race. Compare that to technological singularity, the critical mass at which artificial superintelligence would eclipse all human intelligence, rendering homo sapiens superfluous. Perhaps fears about the technological singularity, about an external AI dislodging us, are misplaced. I will tentatively side with those who say that computers do not really have their own motivation, their own autonomy. The risk then is not of some external AI overtaking us; the risk lies rather in the porosity between human reality and AI. The risk is not that AI will defeat us in some battlefield Armageddon but rather that it will bleed the life out of us, slowly, imperceptibly, until we go limp. We have already ceded quite a bit of brain activity to the machine world. We used to have dozens of phone numbers in our heads – now all stored in our “external” brains. We used to do a lot more math in our heads. (Try getting change today from a teen worker without the machine telling them how much to give.) World capitals? In the external brain. City map layouts in one’s head? Replaced by GPS real-time instructions.

“So what?” students today might say. “Memorizing all that stuff is a waste of time.” And they may be right. But as the porosity increases between human reality and the computerized ether we live in, we cede more and more of our basic survival skills to the ether. I don’t expect malice on the part of AI (although the HAL 9000 was a cool concept), but there may come a tipping point at which we have ceded the basic means of species survival to the machine world. And in ceding more control of our inner lives to the external brain, we become more embedded in the anticipation-fulfillment cycle. Even basic human activities take on a query-and-report format. It becomes increasingly difficult to “opt out” of the processing apparatus and find that space of reflection that transcends the endless proliferation of future-directed perspectives.

4. The Historical Side: Dystopic or Utopic

All this talk about homo sapiens being bled out sounds quite dystopic, and perhaps dystopia is the endgame. But not all possible futures are grim. First of all, in structural terms, porosity is two-directional. Ever since the invention of writing, we have transferred information into the external media of books, giving subsequent generations the capacity to “upload” that information and store it in their brains when the books are removed. This prompted Clark and Chalmers, as far back as 1998, to theorize about the “extended mind,” in which the space of the mind is shared by internal processes and environmental objects that work in tandem with those processes. Another parallel is in Wittgenstein’s Blue Book example wherein we use a color chart until we “learn” our colors, and then throw the chart away. In these cases, the external device provides intermediate information storage. We use the Internet in this fashion all the time. Nothing dystopic here. But is it different when the device becomes capable of evolving its own algorithms, generating its own information, and using it to implement tasks that go far beyond mere storage? Perhaps so, but it is not yet clear that the dystopic end is inevitable.
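The color chart example is the kind of thing one can actually sketch in code. Here is a minimal toy version of the pattern in Python (the chart, the lookups, and the “memory” are my own hypothetical illustrations, not Wittgenstein’s text): consult the external store until the answers are internalized, then throw the store away.

```python
# Wittgenstein's color chart as intermediate information storage:
# look answers up externally until they are memorized, then discard
# the external aid and rely on internal memory alone.

external_chart = {"rose": "red", "sky": "blue", "grass": "green"}
memory = {}

def color_of(thing):
    if thing in memory:               # internalized: no chart needed
        return memory[thing]
    color = external_chart[thing]     # consult the external device
    memory[thing] = color             # "learn" the color
    return color

for thing in ["rose", "sky", "rose"]:
    print(thing, "->", color_of(thing))

external_chart.clear()                # throw the chart away...
print("rose ->", color_of("rose"))    # ...and we still know our colors
```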

Second of all, in terms of social implication, technology could free us up to spend less of our lives on drudgery and more of our lives in that reflective space of self-fulfillment, working out our own electives of self-realization. Indeed, this is the signature promise of technology in the age of capitalism. Ever since the early 19th-century Luddite rebellion, technology has repeatedly made this promise and repeatedly failed to deliver. Why would it be any different now? It could only be different if there were a fundamental shift in our perspective of what it is to be human.

When Madary exemplifies his visual perception theory with the example of a modern sculpture, he introduces what for me is the wild card in the anticipation-fulfillment cycle: the element of surprise.

“Recall a situation in which you moved to gain a better perspective on a novel object and were surprised by how it appeared from the hidden side … Modern sculpture can be helpful for illustrating visual anticipations because the precise shape of the sculpture is often unclear from one’s initial perspective. Our anticipations regarding the hidden side of a modern sculpture tend to be more indeterminate than our anticipations about the hidden sides of more familiar objects.” (Madary)

Whereas I started out by saying that Madary’s anticipation-fulfillment model of visual perception applies equally to AI and humans, I suspect we might handle the element of “surprise” differently. In the case of humans, “surprise” is a trigger for imagination, a less tractable faculty than might be intelligible to our future friends in the AI phylum. Sure, our AI compatriots might predict possible futures as well or better than we do (and thus they might best us at chess), but is that really “imagination”? Humans imagine not only possible futures, but also alternative presents and alternative pasts, using self-generated imagery to feed nostalgic visions of times gone by. There is something about the creative process of imagination that might separate us from AI and might make us less predictable in the event of “surprises” or disruptions in the normal anticipation-fulfillment process. Since technology has typically failed to deliver on its promise to enhance self-fulfillment time for personal development, we might anticipate another failure when technology says that AI will truly free us up from drudgery. But the result could be different this time if a rupture in conditions is great enough. Imperatives of income inequality and ecological destruction might be rupture enough. As we survey our predicament the way Madary’s viewer surveys the modern sculpture, we might on the other side glimpse the end of capitalism (which may sound dramatic, and yet all ages do end). Perhaps this might jolt the imagination to a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success” and “self-actualization” – to wit, a new definition of what it means to be fully human.

How rapidly and wholly we make that turn to a new definition of what it means to be fully human will lock in the dystopic or utopic endgame. In the dystopic version, homo sapiens is bled out by some combination of AI and economic and ecological calamities. In the utopic version, consciousness about what it is to be human evolves quickly enough to allay those calamities and to recapture AI as the servant of human ends and not vice versa.

Footnote on Kierkegaard’s 3 modes of lived experience: aesthetic, ethical, religious

The anticipation-fulfillment model of visual perception can be seen as the basic process of Kierkegaard’s aesthetic mode, sensory-based life. A whole life lived on the aesthetic level is lived as a continuous accumulation of equally ephemeral sensory perspectives on one’s own life.

The ethical life turns out to follow the same model but on the ethical level. The ethical life is a continual accumulation of equally provisional ethical perspectives.

The religious life, though, breaks the model. It concerns the absolute. It does not consist of accumulating perspectives but of a singularity; it eschews all the accumulations of visual or ethical perception for the singular relation between the religious subject and the absolute, a singularity which obliterates all mediate or quantitative concerns.

Michael Madary’s Visual Phenomenology

Richard Marshall’s interview of Michael Madary in 3:AM Magazine


Imagination’s role in all of this

The question: Is “reality” limited to “all things that are actual” or does it include “all things that are possible”?

As in a previous fine entry on the topic, full of pith and wit, I choose the more inclusive definition.

If all possible futures were not part of the fabric of reality, imagination itself would not be possible.

It is, so they are.

An analogy: Microorganisms in the human body outnumber human cells, by some estimates, 10-to-1; without those microorganisms, the human body’s ecosystem would collapse, or more accurately, would never have existed. It may be the same with possibilities folded into the world of actuality.

Here’s an article that looks at the topic from the point of view of quantum physics:

Quantum mysteries dissolve if possibilities are realities

Thanks to Wayne.

The fascinating brain of teen girls

I don’t really know much about the brain of teen girls. For me as a man, the female psyche must on some level remain, as it was for Freud, “a dark continent” (The Question of Lay Analysis, 1926). Freud was prescient enough to know that the mechanisms he studied were the objective mechanisms of identity formation — not the subjective experience itself (the dark continent). He was also progressive enough to warn his fellow analysts against “underestimating the influence of social customs” in discussions of gender and to emphasize that “the proportion in which masculine and feminine are mixed in an individual is subject to quite considerable fluctuations” (lecture on “Femininity,” 1933).

But enough about Freud. After all the psychology and philosophy and literature I’ve read, I think my daughter (I believe 14 at the time) most succinctly expressed, by accident one day, exactly what it feels like to be a teenage girl. We were wandering a city in Spain — Barcelona, Madrid, I forget which city — and were in a green space filled with monuments. I had momentarily lost her, and then I heard her voice near a monument and walked back up to her.

“Hey there. What ya doing?” I asked her.

“Singing. And thinking about how weird I look.”

She tossed the line off casually, but I thought that was it: the rich and contradictory inner life of the teenage girl in a nutshell.

Now I welcome feedback from those of you who actually were teenage girls (and from those of you who weren’t — unlike some of my younger liberal friends, I reject all restrictions on what you are allowed to say, think, or do, based on your demographic identity).

Fallacies of Science

To the scientists in my circle: I’m more with you than you think. I don’t doubt for a minute the value of science. I find it absurd, e.g., that some people think religious texts can compete with science as a source of information about how the physical world works. But I like to amuse myself by playing watchdog for my scientific friends.

Even in my watchdog role, I can raise no objections to the scientific method, or to the analytical power that science has to unpack the facts and processes of the physical world. But as self-appointed guardian at the gates, I propose the following fallacies often committed by the scientifically-minded – all of them fallacies of application or of scope, intended not to impeach the core value of the scientific method but to snap at the heels of scientists – even our most admirable scientists, like Neil deGrasse Tyson and Stephen Hawking – when they make claims that go beyond the scope of their expertise.

The fallacy of metaphysical (external) scope

As I’ve argued elsewhere in this fine blog, science studies the “objective world” and has great analytical power within that scope. But science oversteps its scope when it claims that the “objective world” is the “real world period” and anything else is nonsense, thus implying that science is the one and only path to truth.

I propose that it’s misleading to call the “objective world” (which is the full scope of scientific inquiry) real or unreal; it is more accurately an abstraction from reality. There is no purely objective world just as there is no purely subjective world. Each is an abstraction from lived reality.

(Don’t the abstractions called “objects” in computer science suggest as much? A computer program at Tulane may, and probably does, have an “object” called Wayne xxx. This object is an abstraction that consists of a character string (name), numeric string (birthdate), etc. A different database—say that of the IRS—may also have an object called Wayne xxx but with different characteristics abstracted. The physical scientist, like the computer scientist, studies only those details relevant to his or her level of abstraction. But scientists sometimes forget this and make claims that go “beyond scope.”)
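For the curious, here is roughly what that kind of abstraction looks like in code: a minimal Python sketch, with every name and field hypothetical (I am not describing any actual Tulane or IRS system).

```python
# Two systems abstract the same person into different "objects," each
# keeping only the details relevant to its own level of abstraction.
# All names and fields below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class UniversityPerson:      # what a university database cares about
    name: str
    birthdate: str           # e.g. "1980-01-31"
    student_id: str

@dataclass
class TaxpayerPerson:        # what a tax agency database cares about
    name: str
    tax_id: str
    annual_income: float

# One lived person, two non-overlapping abstractions:
wayne_at_university = UniversityPerson("Wayne X.", "1980-01-31", "T-12345")
wayne_at_irs = TaxpayerPerson("Wayne X.", "999-99-9999", 52000.0)
print(wayne_at_university)
print(wayne_at_irs)
```

Neither object is the person; each is a slice of him taken for a purpose, which is the point of the analogy.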

Just as the scientist elucidates valuable truths from her abstraction from reality (called the “objective world”), so might poets, philosophers, and Zen masters elucidate valuable truths from their abstractions from reality. It’s not at all clear to me that the subjective aspects of lived reality – art, justice, ethics, the felt joy of love and friendship, and the felt pain of loss and betrayal, are really reducible to (although they may be correlated to) scientific data about neurons. It’s not at all clear to me that the rich unconscious landscapes of Greek mythology or Blake’s visionary poetry, or the subjective-centered critique of empiricism in Kant’s philosophy, teach us less about lived reality than Darwin. To call the scientist’s abstraction of the world “the real world period” is to falsely assign it a metaphysical status, confusing one abstract way of looking at lived reality with the presumed metaphysical ground of lived reality itself.

The fallacy of substantive (internal) scope

Let’s look more narrowly at the role science plays within the scope of the objective world it studies. It mines and generates much knowledge about the physical world, and for that we are grateful. But how much of its substantive area does it really grasp? Even at its present power, it only nibbles at the tip of the iceberg. Take the human body. Medical science knows much more about the body’s processes than it knew 350 years ago, when the Age of Science really started coming online. We look back at the 17th century as a kind of dark ages of leeches and blood-letters. Isn’t it obvious that science will expand its knowledge base just as rapidly, if not more rapidly, in the centuries to come? Won’t future scientists look back at us with the same amusement, as a people nobly gathering knowledge but remarkably primitive in what we have gathered?

This telescopic view from the future should give us pause before we leap. Just a few decades ago, “science” was telling us that it could produce a baby formula more nutritious than mother’s milk. For every “well-tested” drug on the market, there’s a class action lawsuit addressing unintended consequences of that drug. One doesn’t have to be religious to believe that there is a vast (evolved) intelligence at work in the human body and in nature, and that science has only mapped a few percentage points of what is really going on in these systems. Don’t get me wrong – a few percentage points is better than no percentage points, and I’m all for science expanding its knowledge base. But when it comes to applying that knowledge, I take a humbler approach than some more eager proponents of science. The pro-implementation argument I most hear is that the things to be deployed have been tested exhaustively in study after study. Although this may be true, it is limited by context. If scientific understanding of its subject area (in this case the human body and the natural world) has leaped from 1% to 5% in the past few hundred years, it has still mapped just the tip of the iceberg, and still leaves enormous territory unexplored. So when you test exhaustively for results and side-effects, you are only really testing within the zone you understand. There are so many collateral aspects of human and natural ecological systems that are undiscovered that it is sheer arrogance to say that we’ve tested by 2015 standards and can thus pronounce such-and-such safer and more effective than Mother Nature.

How does this translate to policy? If you have a serious illness, by all means draw upon that scientific knowledge base and try a scientific cure. If you have a less serious illness, you may be better off trusting to the body’s natural healing mechanisms, insofar as science has only scratched the surface on how these mechanisms work, and tampering with biochemical processes may do more harm than good. Each of us will have to judge this case by case, but by no means am I willing to conclude that science understands every aspect of how the body works and has therefore tested and measured every collateral effect for a particular drug or procedure.

On a tricky subject such as GMO foods, I am not as rabidly anti- as some of my hippie-ish brethren, but not as naively optimistic as some of my scientist friends. I like the idea of scientists building a knowledge base on this topic. But when it comes to implementation, I tend to keep one foot on the brakes, especially since radical changes can now be implemented globally and with much greater speed than in centuries past. I’m not at all convinced that science in its current state understands all the collateral processes of nature well enough to make the “exhaustively tested” claim. Or, to go back to our telescope of time, isn’t it possible that scientists 200 years from now will look back and shake their heads in amusement at our “exhaustively tested” claims?

And I haven’t even gotten to the corruptive influence of money and big corporations when it comes to what substantive areas of scientific inquiry will be funded and how results will be implemented. There may be something like a “fallacy of scientific purity” embedded here.

The fallacy of epistemological scope

Here, I use epistemology broadly as the quest for knowledge – almost, one could say, the quest for self-actualization that drives human reality, if not every aspect of reality. British Romantic poets will be my outside reference point here. The Romantics saw the development of self-knowledge, or self-actualization, in three stages. In Blake, these correspond to an Age of Innocence, Age of Experience, and an Age of Redeemed Imagination. In the Age of Innocence, we access knowledge through the fantastic mechanism of imagination, which keeps us in a state of wonder but leaves us naïve about the world and easily exploited. In the Age of Experience, we begin to access knowledge through reason and science, gaining factual knowledge that makes us less naïve and more worldly, but with that worldliness comes a cynicism, a sense of world-weariness, a sense of loss, of fallenness. Indeed, the Romantic world view at times seems to equate the world of Experience, the world of objective facts, with the world in its deadened aspect. The trick in Blake is to find the turn into a third stage, wherein the power of imagination re-engages at a mature level, re-animates the dry world of abstract facts, and saves us from the cynicism of Experience. In a word, we can put the scientific-type knowledge of Experience into perspective. We can still see its value but without being constrained by it in our quest for self-actualization. In Wordsworth’s “Tintern Abbey,” this plays out as the innocence of “boyish days” (73), experience “‘mid the din / Of towns and cities” (25-26), and the “tranquil restoration” of the mature poet (30). In the third stage, the sensory raptures of youth and the worldly knowledge of experience have both lost their traction. Specifically, the poet has lost the pleasure of immediacy but has gained the power of inward reflection. The “sense sublime / Of something far more deeply interfused” (95-96) is reserved for the third stage, and indeed is specifically used as a counterpoint to the sensory appreciation and worldly knowledge of earlier phases.

These three stages can easily be projected beyond the individual onto the cultural or even the cosmic screen. Blake, with his Jungian vision of the archetypal sources of consciousness, readily applies it to the cosmic level. I’ll apply it to the level of cultural history by saying that the Age of Science fits the second stage very well. Science emerged as the dominant epistemology around the late 17th century, putting to bed some childish theories and introducing us to a more worldly-wise engagement with the physical world. Who knows when this Age of Science will end, but when it does, perhaps then we will enter the Age of Aquarius I’ve promoted only half tongue-in-cheek. And perhaps then we will look back at the Age of Science as Blake or Wordsworth look back at their middle stage – as an epistemological period that starts out liberating but eventually binds our imaginations, makes us a little cynical about the possibilities of self-actualization, chains us to what Plato calls “the prison-house” of materialism. So the fallacy of epistemological scope is the fallacy of myopically seeing only that force of knowledge that is present in the middle period, whereas true wisdom may be broader than that. It may be that the innocent child and the mature poet can grasp things about reality that are inaccessible to the purely scientific mind.

The watchdog sleeps

So those are my fallacy sketches for my scientific friends. Now pause and ponder.

[Image: “Bad Day,” art by Rachael Gautier]

 And if in your pondering, you find yourself viewing me with the gaze of the character above (provided by the talented Rachael Gautier), remember: When my watchdog shift ends, I’m more on your side than you think. At least you can take comfort that in the next U.S. election I will be voting for the party that takes science seriously and not the party that seems perpetually at war with science. Meanwhile, I’m happy to revise, especially if a particular Ukrainian physicist I know will home-brew another batch of Russian Imperial Stout to facilitate the review process.

From Depth Psychology to the Akashic Record

It’s commonplace now to hear how modern physics increasingly dovetails with the ancient world view of the Eastern mystics. If this is true of our evolving conception of the objective universe and how it works, it is also true in the vast space of the subjective universe, the space of the psyche.

Before Freud, you had “faculty psychology,” which seemed well seated upon the Western classical world view – a symmetrical row of nice, neat boxes, each representing a “faculty” (appetite, emotion, desire, reason, etc.). Freud’s theories signaled a paradigm shift to “depth psychology,” with layers of unconscious drives and desires and memories folded beneath our conscious awareness, influencing our everyday behavior from invisible, forgotten spaces in the depths of the psyche.

“Depth psychology” is still the dominant paradigm for the psyche, and even Freud’s attackers draw upon Freud for their weapons, but his breakaway student, Jung, expanded the “depth” of depth psychology. Freud’s locus of interest is the individual psyche, and his case histories typically trace back antecedents of adult behaviors to the formative infantile development of the individual. Jung traces the roots of the psyche deeper still, to a place that transcends the individual altogether; hence we get the universal archetypes of the collective unconscious, a deep space of psychic phenomena shared by us all. You can think of it as our common grazing land, or if you prefer a high-tech metaphor, it’s the “cloud” wherein our fundamental data are stored and from which we all download to configure our own machinery. Either way it is here, in this transcendentally deep “subjective inner world,” that Jung finds “the instinctive data of the dark primitive psyche, the real but invisible roots of consciousness.”

It’s a short stretch from Jung to the akashic record of the mystics. The akashic record in the Eastern mythos is the record of everything normally considered past, present, and future (in our clumsy linear sense of time). Every thought, every movement of every leaf, is contained in this vast database, as it were. But the akashic record is more than a database. It is the ultimate reality. All our daily actions are reflections of, or abstractions from, the akashic record. We are right now living the akashic record, experiencing it from one orientation point. Through yoga, meditation, or other spiritual practices, you can almost picture your self-reflection carrying you down to the Freudian depth of childhood and then infancy, then breaking through to the Jungian depth of the collective unconscious, and finally arriving at the level we metaphorically call the akashic record. At this point, we’ve not only carried depth psychology to a point where Western psychology merges with Eastern mysticism, but we’ve inadvertently married the “objective” and “subjective” universes that provided the point of departure in the opening paragraph of this fine blog entry. Cosmic consciousness, as the very compound of the phrase suggests, simultaneously expresses ultimate reality in both its objective and subjective aspects. When you hit that ultimate depth, the inside becomes the outside, the innermost psyche finds itself expressed as the objective cosmos. So om mani padme hum, and I’ll see my physicist friends on the other side.

Kant’s supposed relativism

To my friend who argued that Kant denied that we have any direct knowledge of the objective world and is therefore a relativist, I’ll give my take on Kant, and maybe one of my professional philosopher friends (at least one of whom I know is listening) can add his or her two cents.

My friend’s premise that Kant denied us any direct knowledge of the objective world is true. The conclusion, that Kant is a relativist, might then seem a no-brainer, but a close look shows that this conclusion does not follow from the premise.

Kant is indeed famous for subjectifying everything at the end of an eighteenth century which sought, through empiricism, to objectify everything. What is the most basic thing about the world as we know it? Space and time. Kant meticulously argues that space and time are not “out there,” not things in the world but ways of organizing the world. They are the subjective categories through which we make sense of the otherwise inaccessible flux of reality. But crucial is the idea that they are subjective categories and not objective facts. And if space and time are subjective categories, then it follows that everything we know about the world is subjectively constructed. Or, in Kantian terms, the world we know is the phenomenal world. It turns out that our knowledge presupposes a noumenal world anterior to the phenomenal world, but we have no access to such a world – it exists for us merely as an abstract, logical prerequisite.

This radical subjectification of the human experience would seem to throw us into a dizzying relativism, but not so in Kant. Indeed, Kant tells us in his early notebooks (before the Big Three critiques – of pure reason, of practical reason, of judgment) that his whole goal is to find universals in a world that seems to have spun off into relativism. It was David Hume who had carried empiricism to its logical conclusion, using the five senses to show that we have no evidence that external reality exists. Kant felt the justice of Hume’s argument, but like many was uncomfortable with the loss of all universal reference points. He felt that there is something universal about reality, that there is some shared universe we occupy. Kant’s epiphany came when he saw that if we were to have universals, we would have to locate them subjectively, not objectively. The objective world cannot give us universals because it is, insofar as we have access to it, always already shaped by subjective categories of understanding.

So how does Kant find a universal ground for ethics? I’m not sure because the Critique of Practical Reason is the one I’m least familiar with. But I can say how he does it in regard to aesthetics (the subject of the Critique of Judgment).

A true (valid) aesthetic judgment is (1) disinterested, (2) subjective, and (3) universal. Disinterested: “The satisfaction which we combine with the representation of the existence of the object is called ‘interest.’” I.e., if the satisfaction involves a vested interest in the existence of the object, it is an interested judgment. “You’re beautiful because our sex is great” is NOT a disinterested judgment: “A judgment about beauty in which the least interest mingles, is very partial and not a pure judgment of taste.” To be freed from such interest, a judgment must be subjective: “When the question is if a thing is beautiful, we do not … depend on the existence of the thing … but … judge it by mere observation … We wish only to know if this mere representation of the object is accompanied in me with satisfaction, however indifferent I may be as regards the existence of the object of this representation.” Only a subjective judgment is truly disinterested, and thus only a subjective judgment can be universal: “For the fact of which everyone is conscious, that the satisfaction is for him quite disinterested implies in his judgment a ground of satisfaction in all men.”

So to achieve an unbiased view, you must strip away all vested interest in the existence of objects at hand. Only then can your judgment be disinterested and therefore universally valid (and by definition, then, you are viewing it subjectively, as mere “representation” without regard to its objective existence).

I assume the analogy holds for ethics. An ethical judgment, to be valid, must be universal, and it can only be universal if disinterested, and only disinterested if subjective (stripped of all self-interest in the objective reality of the representation at hand).

What my friend who started this discussion wants, Kant would say, is not an objective ground of ethics per se; he wants a universal ground of ethics. And he would do best to find it subjectively, not objectively.

Nuggets from Freud and Jung

I chose these little quotes with no thought to the differences or similarities between Freud and Jung. Just thought-provoking nuggets about how the unconscious fits into the big scheme of things from two sage wits who between them laid the foundations of depth psychology.

. . .

“I then made some short observations upon … the fact that everything conscious was subject to a process of wearing-away, while what was unconscious was relatively unchangeable; and I illustrated my remarks by pointing to the antiques standing about in my room. They were, in fact, I said, only objects found in a tomb, and their burial had been their preservation.” (Freud, Case History of the Rat Man)

. . .

“Just as our free will clashes with necessity in the outside world, so also it finds its limits … in the subjective inner world … in the instinctive data of the dark primitive psyche, the real but invisible roots of consciousness.” (Jung, Psyche and Symbol)