Singularity good or bad

I recently read Zizek’s entries on singularity in The Philosophical Salon and have a few thoughts.

Singularity, as far as I can tell, refers to the networking of all our psyches so that we are all sharing one database of ideas, all our brains plugged (virtually) into the external brain, continually uploading and downloading our thoughts. It really just takes today’s thought- and emotion-recognition technologies a bit further and adds in the networking aspect. I think of it as a digital objectification of subjectivity, the cognitive correlative of the bio-mechanical hybrids proposed by transhumanism. (Disclaimer: I ponder these things strictly as an amateur, but even the experts on such a topic might want to track what we amateurs are thinking 🙂 ).

Sounds scary, but in one sense it’s just the natural evolution of consciousness. Think about it. Every new communication technology is a kind of brain extension, enabling us to take some of the knowledge stored in our head and store it outside in the community or in external spaces, where it can be retrieved later as needed by us or others.

Spoken language
Writing
Books
Printing press
Personal computers
Cloud-based networks
Singularity

If we wanted to follow the Marxist-leaning Zizek, we could correlate this progression with economic developments, as rapid changes in how information is stored and shared are no doubt interwoven with rapid economic changes. Language allows us to coordinate agricultural activity, writing allows us to organize city bureaucracies, etc.

As for the effect on subjectivity, we could see each of these stages as a kind of alienation of the subject, as the knowledge relevant to the subject’s existence becomes increasingly relocated outside of the subject’s own body. But all that “alienation” doesn’t seem so bad to us now. Language and libraries and personal computers — they seem to move us toward greater freedom, greater control over our personal lives, physically and intellectually.

So will the next horizon line – Singularity – play out the same? Will it appear in the form of alienation and dread but liberate us as did those previous technologies? Or will this one be different? Will the moment of singularity be the moment of collapse in the individual’s trajectory of liberation? One could certainly argue for the dystopic turn. What if singularity results in the elimination of privacy, so that our thoughts are exposed to the general consciousness? What if our thinking process unfolds in the collective space, our thoughts visible to those around us, all of us wearing Google smart glasses on steroids? Would we allow such a thing? Indeed, we would probably beg for it, the same way insurance companies get customers to beg for more and more onboard monitoring devices to track their every habit, on the grounds that it “helps” the customer.

At the very least, it seems that the mind-sharing aspect of singularity would result in a degree of self-censorship that is alarming by today’s standards, perhaps alarming enough to break the trajectory of liberation associated with prior communication advances. Would each self be censored into a Stepford Wife knock-off? Or would there not even be a self to censor, if our thoughts form and grow in shared space, our physical bodies and brains merely energy sources for that shared space? Maybe The Matrix is a more apt metaphor than The Stepford Wives. 

Thus spake the amateur, in reference to technological/AI singularity, not so much to singularity in the Eastern/akashic record sense, although that might be an interesting tangent. But per that technological singularity, I suspect there are many in the world with similar amateurish thoughts. Maybe one of you techie readers can chime in and bring the hammer down on our collectively imagined dystopia before it’s too late.

15 thoughts on “Singularity good or bad”

  1. Zizek’s description of the singularity as the networking of all our psyches is not what people normally mean when they discuss the singularity. It seems to imply the negation of the individual, to be replaced by a kind of collective being. Perhaps that’s what Zizek hopes for.

    The more usual definition of the singularity is the point in the future at which technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization (Wikipedia). Of course, this will not be a single point. It is a question of a sliding scale. We already see change coming ever faster. Our grandparents saw that too, but at a slower pace.

    To predict specific developments that characterise the singularity is to entirely miss the point. The singularity is the time when we will no longer be able to make predictions.

    Your pedantic fan,
    Steve.

    • Thanks, Steve, I was hoping you would come in and clarify. Is it possible that we can problematize what happens to “predictions”? E.g., once the algorithms come to control their own evolution, training their own next-generation algorithms, a loss of control is clear, and predictions become more difficult. But could we not still make predictions (albeit at a lower level of probability) based on the core algorithms we had created before singularity? Or based on historical patterns of what had happened in systems where the creators lost control of the system’s development? Unless those algorithms abandon logic altogether, some measure of prediction seems possible. I can lose control of my Frankenstein monster but still make predictions about his future actions based on past actions, no?

  2. Personally, I don’t subscribe to this model of the future. I don’t think there will be a future point in time when this happens. I see it as a process that has already begun. I don’t believe that AI is a necessary component of the mix either. We are already doing a very good job at accelerating technology ourselves. Besides, why build an AI to potentially enslave ourselves, when we could augment our own intelligence?

    • I thought the point was that building the AI and augmenting our intelligence were two vantage points on the same process, as we would augment our intelligence by sharing that mental space, as things we heretofore needed to process in our own brains become “idiot-proofed,” processed externally by the AI. Could we become slaves to our augmented intelligence if that intelligence is stored externally in the cloud? Sort of like I used to know 50 phone numbers in my head. Now I don’t even know my mom’s phone number; it’s all saved externally. So in a small way I am enslaved to the devices into which I have relocated the info that was formerly carried in my brain. Anyway, at least, in honor of your country, I got to use “heretofore” (a word that no one would use here unless they were trying to raise eyebrows or get kicked out of a bar).

  3. The first question that comes to my mind is, Who owns, and therefore controls, this technology? With books–to take an example I actually understand–you run into the problem of ownership to some extent. As someone or other said, freedom of the press belongs to the person who owns one. But once a book’s in print, or a magazine or newspaper, it has a way of getting loose from whoever published it and, at its best, has a life of its own. With the internet–and here we’re getting into territory I don’t claim to understand with any depth–we seem to be in a territory where we’ve opened our selves and our cultures to deep manipulation. Go a step beyond that and, yeah, I’ll keep my thoughts in my own head, thanks.

  4. We now experience those ever-increasing offerings of cyber programs that correct our spelling, help us write more efficiently, and give us advice on how to formulate our sentences. All that apparent support material only bears witness to an ever-rising control of language and the way we express ourselves.
    Those programs, now so readily available and offering very little variety, will have a long-term effect on the way we think.
    As long as we had to rely on dictionaries published by numerous universities, researched by linguists of differing thought and open to interpretation, the language was kept alive, and its meaning often controversial.
    Now, and even more in the future, writers and thinkers will rely on fewer programs providing stricter guidance and interpretation. Where will that lead us?
    Alas, we cannot ignore that technologies have always had a strong influence on the development of our minds and bodies. With increasing reliance on technologies, our bodies grew weaker, and our minds gained complexity.
    Technology has always forced us to discard unsustainable conditions, including cultural habits, processes, and outdated technologies. To lament the passing of those “ancient” traditions is somewhat pathetic and ignorant of the human condition.
    We might as well say let us stop using our minds!
