Artificial intelligence and human experience

1. Anticipation and Fulfillment in Visual Perception

Reading an interview with philosopher Michael Madary, I was thinking, as so many do, about how far artificial intelligence (AI) can go in mimicking the human experience. Madary’s bailiwick is philosophy of mind and the ethics of emerging technologies, especially virtual reality. The interview focuses mainly on Madary’s anticipation and fulfillment model of visual perception. The basic model, it seems to me, is equally applicable to human or AI behavior; i.e., visual perception as proliferating perspectives across time. You first see the object from one limited point of view. To truly grasp it, though, you need to anticipate what it looks like from additional perspectives. You move around, double-check, and your anticipation is more or less “fulfilled” or verified. Then the point of fulfillment becomes the starting point for the next stage of anticipation, and so on. Visual perception is this process of constantly accumulating perspectives, ever absorbing regions of indeterminacy into determinacy, ever approximating an unreachable objectivity of perspective.
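As an aside for the mechanically minded: the loop itself is simple enough to sketch in a few lines of code. The sketch below is my own toy illustration, not anything drawn from Madary or from a real vision system; the “sculpture,” the observe function, and the naive rule that a hidden side will resemble the last side seen are all invented for the example.

```python
# Toy sketch (invented for illustration) of the anticipation-fulfillment cycle:
# anticipate the hidden side, move, check the anticipation, fold the result
# into the accumulated model, and let that fulfillment seed the next anticipation.

def observe(obj, viewpoint):
    """Stand-in for perception: the feature visible from this viewpoint."""
    return obj[viewpoint % len(obj)]

def perceive(obj):
    model = {}            # determinate views accumulated so far
    anticipated = None    # no anticipation before the first look
    for viewpoint in range(len(obj)):
        actual = observe(obj, viewpoint)          # move and look
        fulfilled = (anticipated == actual)       # was the anticipation verified?
        model[viewpoint] = actual                 # indeterminacy absorbed into determinacy
        print(f"view {viewpoint}: anticipated={anticipated!r}, "
              f"actual={actual!r}, fulfilled={fulfilled}")
        anticipated = actual                      # naive prior: the next side resembles this one
    return model

# A "modern sculpture" whose sides stay indeterminate until seen from each angle
sculpture = ["smooth", "smooth", "jagged", "hollow"]
perceive(sculpture)
```

A loop like this accumulates perspectives mechanically; the question pursued below is whether anything essential to the human case is left over once it has.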

It seems AI should be very good at all of this. So what, if anything, about human reality does AI miss? Is it that there is no psychological reality corresponding to the process as AI executes it? Can the distinction of the human’s psychological reality be evidenced? Is it manifest in the idea of motivation – that AI can execute the same process but with a different (or absent) motivation vis-à-vis its human counterpart? The human viewing a modern sculpture from different perspectives may be seeking aesthetic beauty or pleasure, which would seem beyond the scope of AI. Although local human actions may mimic AI processes, it may be that the ultimate motivation behind human action in general falls outside the scope of AI – let’s call that ultimate motivation happiness (following Aristotle) or the satisfaction that comes with living a good life (following Plato). Is there any comparable motivation for AI?

2. Transcendental Fulfillment

Or how about this take on the AI/human difference. What humans are always seeking, the grand dream under it all, is fulfillment liberated from the anticipation-fulfillment cycle, a sense of contentment that gets us out of the rat race of endless desires and partial fulfillments. For a correlative to the visual perception model, picture yourself gazing at a painting, motionless, without any interest in increasing perspectives, just having the static fulfillment of the beauty in front of you for a certain duration of time. What elapses in that duration may be an expression of the thing that is inaccessible to AI. What humans really want is not a sense of fulfillment that is indefinitely deferred by endlessly proliferating perspectives – the never-ending drive for more data that might occupy the AI entity. We want THE sense of fulfillment that comes when you opt out of the cycle of proliferating perspectives, a sense of fulfillment that transcends process. So whereas the anticipation-fulfillment cycle is an end in itself for AI, for humans all such processes are instrumental; the end in itself, the end that motivates the whole process, is that which falls outside of the process, a kind of static contentment that is inaccessible to the AI.

3. Singularity, or Singularities

The concept of singularity might help to clarify the distinction between fulfillment embedded in the anticipation-fulfillment process and transcendental fulfillment, fulfillment liberated from the cycle. Transcendental fulfillment is, let’s say, a metaphysical singularity – the space of infinite oneness alluded to by many philosophers and mystics, indicating escape from the rat race. Compare that to the technological singularity, the critical mass at which artificial superintelligence would eclipse all human intelligence, rendering homo sapiens superfluous. Perhaps fears about the technological singularity, about an external AI dislodging us, are misplaced. I will tentatively side with those who say that computers do not really have their own motivation, their own autonomy. The risk then is not of some external AI overtaking us; the risk lies rather in the porosity between human reality and AI. The risk is not that AI will defeat us in some battlefield Armageddon but rather that it will bleed the life out of us, slowly, imperceptibly, until we go limp. We have already ceded quite a bit of brain activity to the machine world. We used to have dozens of phone numbers in our heads – now they are all stored in our “external” brains. We used to do a lot more math in our heads. (Try getting change today from a teen worker without the machine telling them how much to give.) World capitals? In the external brain. City map layouts in one’s head? Replaced by GPS real-time instructions.

“So what?” students today might say. “Memorizing all that stuff is now a waste of time.” And they may be right. But as the porosity increases between human reality and the computerized ether we live in, we cede more and more of our basic survival skills to the ether. I don’t expect malice on the part of AI (although the HAL 9000 was a cool concept), but there may come a tipping point at which we have ceded the basic means of species survival to the machine world. And in ceding more control of our inner lives to the external brain, we become more embedded in the anticipation-fulfillment cycle. Even basic human activities take on a query-and-report format. It becomes increasingly difficult to “opt out” of the processing apparatus and find that space of reflection that transcends the endless proliferation of future-directed perspectives.

4. The Historical Side: Dystopic or Utopic

All this talk about homo sapiens being bled out sounds quite dystopic, and perhaps dystopia is the endgame. But not all possible futures are grim. First of all, in structural terms, porosity is two-directional. Ever since the invention of writing, we have transferred information into the external media of books, giving subsequent generations the capacity to “upload” that information and store it in their brains when the books are removed. This prompted Clark and Chalmers, as far back as 1998, to theorize about the “extended mind,” in which the space of the mind is shared by internal processes and environmental objects that work in tandem with those processes. Another parallel is in Wittgenstein’s Blue Book example wherein we use a color chart until we “learn” our colors, and then throw the chart away. In these cases, the external device provides intermediate information storage. We use the Internet in this fashion all the time. Nothing dystopic here. But is it different when the device becomes capable of evolving its own algorithms, generating its own information, and using it to implement tasks that go far beyond mere storage? Perhaps so, but it is not yet clear that the dystopic end is inevitable.

Second of all, in terms of social implication, technology could free us up to spend less of our lives on drudgery and more of our lives in that reflective space of self-fulfillment, working out our own electives of self-realization. Indeed, this is the signature promise of technology in the age of capitalism. Ever since the early 19th-century Luddite rebellion, technology has repeatedly made this promise and repeatedly failed to deliver. Why would it be any different now? It could only be different if there were a fundamental shift in our perspective of what it is to be human.

When Madary exemplifies his visual perception theory with the example of a modern sculpture, he introduces what for me is the wild card in the anticipation-fulfillment cycle: the element of surprise.

“Recall a situation in which you moved to gain a better perspective on a novel object and were surprised by how it appeared from the hidden side … Modern sculpture can be helpful for illustrating visual anticipations because the precise shape of the sculpture is often unclear from one’s initial perspective. Our anticipations regarding the hidden side of a modern sculpture tend to be more indeterminate than our anticipations about the hidden sides of more familiar objects.” (Madary)

Whereas I started out by saying that Madary’s anticipation-fulfillment model of visual perception applies equally to AI and humans, I suspect we might handle the element of “surprise” differently. In the case of humans, “surprise” is a trigger for imagination, a less tractable faculty than might be intelligible to our future friends in the AI phylum. Sure, our AI compatriots might predict possible futures as well as or better than we do (and thus they might best us at chess), but is that really “imagination”? Humans imagine not only possible futures, but also alternative presents and alternative pasts, using self-generated imagery to feed nostalgic visions of times gone by. There is something about the creative process of imagination that might separate us from AI and might make us less predictable in the event of “surprises” or disruptions in the normal anticipation-fulfillment process. Since technology has typically failed to deliver on its promise to free up time for self-fulfillment and personal development, we might anticipate another failure when we are told that AI will truly free us from drudgery. But the result could be different this time if a rupture in conditions is great enough. Imperatives of income inequality and ecological destruction might be rupture enough. As we survey our predicament the way Madary’s viewer surveys the modern sculpture, we might on the other side glimpse the end of capitalism (which may sound dramatic, and yet all ages do end). Perhaps this might jolt the imagination to a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success” and “self-actualization” – to wit, a new definition of what it means to be fully human.

How rapidly and wholly we make that turn to a new definition of what it means to be fully human will lock in the dystopic or utopic endgame. In the dystopic version, homo sapiens is bled out by some combination of AI and economic and ecological calamities. In the utopic version, consciousness about what it is to be human evolves quickly enough to allay those calamities and to recapture AI as the servant of human ends and not vice versa.

Footnote on Kierkegaard’s 3 modes of lived experience: aesthetic, ethical, religious

The anticipation-fulfillment model of visual perception can be seen as the basic process of Kierkegaard’s aesthetic mode, sensory-based life. A whole life lived on the aesthetic level is lived as a continuous accumulation of equally ephemeral sensory perspectives on one’s own life.

The ethical life turns out to follow the same model but on the ethical level. The ethical life is a continual accumulation of equally provisional ethical perspectives.

The religious life, though, breaks the model. It concerns the absolute. It does not consist of accumulating perspectives but of a singularity; it eschews all the accumulations of visual or ethical perception for the singular relation between the religious subject and the absolute, a singularity which obliterates all mediate or quantitative concerns.

Michael Madary’s Visual Phenomenology

Richard Marshall’s interview of Michael Madary in 3:AM Magazine


Professionalism and Alienation

I recently heard (or perhaps instigated) someone at work talking about how proper attire promotes professionalism. My faithful readers will recall that I, as a fashion anarchist, have commented on Jeffrey Tucker’s suggestion that people should dress properly at work (Bourbon for Breakfast, Chapter 37).

Now to tackle the tangential idea that a dress code promotes professionalism. First, if professionalism is meant in the narrow sense of an individual’s competence to complete the tasks at hand with rigor, efficiency, and integrity, the fashion anarchist wins this one easily. Obviously, my engineering or accounting or design skills are not affected every time I change clothes.

If professionalism is meant in the general sense – the sense that it is generally easier to maintain professional relations where people are dressed professionally – this is a little trickier. On this level, I say good riddance to professionalism, which has been a scourge on human contact for some 300 years.

The Age of Bourgeois Capitalism, which began in roughly the 18th century, could also be called the Age of Professionalism. In the previous age, the frame of reference for human relations was the landed hierarchy of commoners, gentry, aristocracy, and various subsets. Doctors and lawyers and such were generally commoners, subject to much mirth and ridicule in the literatures of the day. Even where respected, their professions (or one might call them “occupations” in that pre-professional age) conferred no class status. As bourgeois capitalism replaced landed hierarchies as the defining scaffold of power, the “professions” came to confer the kind of class status we see today, with grandmas encouraging grandkids to grow up to be doctors or lawyers (and not, on the good authority of Waylon and Willie, cowboys, those residual personae of the land). The old frame of reference for human relations in the landed order – things like de facto respect for those above you in the hierarchy and generosity toward those below you – was replaced by the public sphere paradigm of “behaving professionally.”

“Professional behavior” presupposes human connections that are less vertical and more horizontal/democratic, and that may well be a step toward the ideal of a human community of mutual fulfillment, but it comes at a cost. The cost is alienation. Human relations become the “business of human relations.” When Karl Marx says that under capitalism “human relations take on the fantastic form of relations between things” (Capital, Vol. 1), this can be applied on the social as well as the economic level. Human relations become a little bit icier. The other person is objectified, which enables us to treat him or her as an object in some market-driven game and not as a concrete human being. One scene in The Godfather (dir. Francis Ford Coppola, screenplay Coppola and Mario Puzo) nicely encapsulates human relations in the Age of Professionalism. Tessio has betrayed Michael and now realizes that Michael has discovered the deed and set him up to be killed. Tessio, knowing the end is near, tells Tom: “Tell Mike it was only business. I always liked him.” Tom replies with some pathos, “He understands that,” and then goes forward with the hit. Lift the veil on professionalism’s polite exterior, and this is the model of human relations you have underneath. It brings everyone one step closer to the version of human identity manifested in the “officials” of Kafka’s novels, who epitomize ad absurdum the sloughing off of all human responsibility in the execution of the office.

The alienation that takes place in the Age of Professionalism indeed gives us another reason to look to the Luddite/technophobe point of view. In particular, the technophobe distrust of mechanization may raise valid points about the impact of technology not just on labor markets but on human relations generally. If professionalism takes a subjective toll on the fullness of human relations, new technologies, without moral steerage, can give a kind of exoskeleton to the process of alienation, abstracting us from the human warmth and human consequences of our actions. The person who pushes a button in Nevada to launch a drone strike on a Pakistani village and then stops by Walmart on the way home probably does not see his actions the same way as one who had to stand toe to toe and push the steel blade into his opponent’s belly.

Now for the optimistic conclusion: In our collective reach for higher ideals, professionalism has served its purpose, weaning us away from hierarchies that were antithetical to the fullest form of human relations and giving us a basis for something more democratic and fully reciprocal. But we have paid a cost in terms of the objectification of, and alienation from, our fellows. It’s time to take the next turn, put professionalism to bed, and reinvest full humanness into our relationships, even our relationships in the workplace and with remote clients and customers. And one way to start that slow tectonic shift is to gently undermine the professionalism paradigm by bringing, so far as we can manage it, a little fashion anarchy into the workplace. It might look funny, but it beats becoming characters in a Kafka novel.

Luddites & Technophobes

“Luddite”: The very word conjures up images of knuckle-dragging curmudgeons. When the wheels of the Industrial Revolution started turning in late eighteenth-century England, the cult of “improvement” was already long entrenched (indeed it had been satirized by Jonathan Swift and his motley “projectors” nearly a century earlier). Resisting the “improvements” of industrialization at the turn into the nineteenth century were the Luddites. As weavers and artisans lost their jobs to new labor-saving machinery that required fewer and less skilled workers, the Luddites of 1811-1817 fought back by smashing new factory machines in the dark of night. The dominant ideology has ever since scoffed at the Luddites’ economic naivete and lumped the Luddites themselves in with the flat earth society.

I beg to differ. I propose that the reason the Luddites were and continue to be subject to such ridicule in the dominant ideology is that they are dangerously correct, that they lift the veil on an unhappy truth about how labor markets work under capitalism. The captains of industry have always drawn upon the “improvement” philosophy to argue that increased automation would be good for everyone, enabling workers to generate the same productivity in much shorter time, leading to a utopia in which people would work a couple of hours a day and have expanded time for personal growth in whatever physical, intellectual, and cultural arenas interested them. Luddites argued that they would lose their jobs and worsen their lot while the factory owners amassed greater and greater profits. The Luddite argument shows a better grasp of the structural incentives of capitalism. The owners’ argument rests upon the hidden premise that workers themselves will profit from their increased productivity. But capitalist incentives work the other way: the company incentive is to lay off superfluous workers while remaining workers make twice as many widgets per day at the same old wages. After all, the remaining workers are now “lucky” to have a job and it is a “buyer’s market” for the employer.
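To make the incentive argument concrete, here is a toy calculation. The numbers are invented purely for illustration and come from no historical record: doubled productivity can mean the same output with half the hours at the same pay (the utopian promise), or the same output with half the workers and half the wage bill (the actual incentive).

```python
# Toy illustration with invented numbers; not historical data.
workers, widgets_each, daily_wage = 10, 10, 100.0   # before automation
price = 20.0                                         # hypothetical revenue per widget

widgets_each_after = widgets_each * 2                # automation doubles per-worker output

# Utopian promise: keep everyone, halve the hours, keep output and pay the same.
utopian_output = workers * widgets_each
utopian_wage_bill = workers * daily_wage

# Capitalist incentive: lay off "superfluous" workers, keep output, halve the wage bill.
retained = workers // 2
incentive_output = retained * widgets_each_after
incentive_wage_bill = retained * daily_wage

for label, output, wage_bill in [("utopian promise", utopian_output, utopian_wage_bill),
                                 ("actual incentive", incentive_output, incentive_wage_bill)]:
    profit = output * price - wage_bill
    print(f"{label:16s}: output={output} widgets, "
          f"wage bill=${wage_bill:.0f}, profit=${profit:.0f}")
```

Same output either way; only the second option improves the bottom line, which is why the promised leisure never materializes.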

Of course it is not a zero-sum game. The Luddites were right in that working class conditions in Victorian England were famously appalling. (Engels’s The Condition of the Working Class in England is perhaps the best contemporary account.) But the government would intervene with labor laws, and the economy itself would adjust to fill the vacuum with new veins of employment. No one would argue that workers today, at least in the West, are not better off than they were in the nineteenth century. But the point is that the increase in productivity due to mechanization did not proportionately increase leisure opportunities for personal fulfillment. Workers were still expected to work full time. The curve change was not in the amount of labor time input but in the amount of productivity output. More aggregate wealth was generated with no increase in aggregate leisure (except perhaps for the investing classes).

Today’s tech revolution is subject to the same utopian mythmaking by the “improvement” industry and to the same grim truths of the labor market. We are told that computerized automation will exponentially increase per capita productivity, freeing people up for personal fulfillment. But the truth is that more often it results in layoffs, fewer jobs for humans, at least in the short run, and more productivity expected per salary. And think about Facebook’s recent (February 2014) acquisition of WhatsApp, a company with 55 employees, for $19 billion. Where so much wealth is funneled through 55 employees, what does that mean for workers in the aggregate? Does this make it easier for them to find employment or empower them to increase their leisure time? Not likely.

Moreover, those who are “freed up for personal fulfillment” by virtue of being unemployed or underemployed are charged with laziness. No matter how much productivity increases per capita, working and middle class people are expected to work their 40 hours or be damned as parasites. (I lump together working and middle classes, because the investing elite class is not subject to the same labor dynamics as those who live paycheck to paycheck.) Witness the recent CBO (Congressional Budget Office) report on Obamacare (February 2014), which said that once health care in the U.S. is universal and affordable, some people may be freed up to work fewer hours or to have one parent stay at home. This inspired much gnashing of teeth within one of our two national parties. So what if technological advances enable the same GDP with fewer hours worked, or enable affordable health care for all? How dare working class and middle class people take any extra hours for personal fulfillment!

This doesn’t mean all is lost. Although I believe the Luddites and their technophobe protégés still need a fair hearing for what they reveal about technological impacts on labor within a capitalist system, I don’t believe that technological innovation is intrinsically antagonistic to workers’ long-term interests. I am not ready to completely dismiss the utopian dream of the apostles of improvement. Technology can be a force for good. But it needs to be framed by a different economy of values. The capitalist world view of infinite market expansion incentivizes the full exploitation of a labor force, not with any eye on the human fulfillment of the workers (which is outside the scope of capitalism and its forces), but with an eye only on increased productivity and profit. This presupposition, this capitalist sensibility, is inconsistent with the utopian possibilities of technology. We need a new sensibility, a new subjective frame of reference for values like “work” and “technology” and “success.” And there are some signs that a new paradigm for self-actualization is emerging on the horizon line of capitalism. There is an increased consciousness that good stewardship of limited world resources is inconsistent with a world view whose metric of fulfillment is magnitudes of consumption. The young techie entrepreneurs of today often seem motivated by an idealism that is beyond the scope of classical capitalism and its industrial giants. Or at least it is an idealism that is fluid or heterogeneous enough to accommodate post-capitalist ideals commingled with the residual values of productivity and profit.

The change in sensibility we need, in this case a change in the moral attitude about work, was captured as well as by anyone by Buckminster Fuller, at a time when the hippie revolution was coming to a head, its fate not yet decided (New York Magazine, 30 March 1970).

“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”

Related internal blog entry: Taxes, Private Property, and the Age of Aquarius
Recommended external blog entry: Global Therapy (Paul Adkin)