A Technocrat's Dream

(OK, this paper is really long, but it's both disturbing and informative, so I decided to refer you to the link for the rest of it.)

The Unrecognized Threat of Human Augmentation

The authorities are becoming too comfortable with the idea of reengineering organisms, including humans, and regulatory safeguards are nonexistent

Time For a Reckoning

Fair warning. This is going to be very cynical. Even more than my usual level of cynicism, in fact. If you’re not into that, I totally understand, but in light of recent developments, some things simply have to be said, no matter how insensitive they are.

After my last conversation with ChatGPT, the overall scope of the problems we face became clearer. These problems are deep and systemic, and they go far, far beyond any one virus or vaccine.

Technocracy is, at its core, the notion that political problems should have technological solutions. The original technocracy movement, as conceived by Howard Scott, did not regard itself as a political movement of any sort. Its members wanted to abolish politicians and, by extension, politics.

Every conceivable political problem was one of mere engineering to them. Human desires weren’t a part of the equation at all. Plastic grocery bags choking waterways? Force people to use biodegradable paper ones and stop handing out plastic bags at stores. People riding on the steps on streetcars? Don’t fine the errant riders, just remove the steps so there’s nothing to stand on. People speeding and driving drunk? Electronically govern the top speed of their vehicles, and make their steering wheel breathalyze them before they can turn the key in the ignition. Immediate and obvious parallels to Nudge Theory and other social-cybernetic schemes can be drawn. In many ways, the core tenets of technocratic ideology are already a widely accepted component of our politics, if the constant parade of “experts” on television and their embrace of scientism are any indication.

The technocratic perspective basically regards people and their societal relations as machines with discrete inputs and outputs. It disregards basic things like values, personal tastes, delight and disgust, and normativity. From the view of a technocrat, what people want doesn’t matter. What they physically need does. As a result, technocracy is a deeply paternalistic worldview; it presents human beings as flawed biological robots that require the constant intervention of a purely rational and benevolent caretaker figure.

In this view, human civilization has many different intractable problems that arise, generally speaking, from human biology. From the allegedly impartial perspective of a technocrat, human beings are aggressive, violent, wasteful, prejudicial, paranoid, greedy, close-minded chimpanzees who suffer from a curse of occasional brilliance and whose reach generally exceeds their grasp. From this point of view, every conceivable flaw possessed by human beings can and should be permanently cured by the application of technology.

We already see plenty of examples of this now, in a primitive form. Boredom and ennui? Just play some video games, or watch Netflix. Depressed? Unfulfilled? Down another Xanax, it’ll be okay. The thing about these interventions, however, is that they are temporary and distinct from us. Any addict can, one day, simply stop consuming their drug of choice. Someone who has been prescribed pills for one of any number of modernity-induced mental illnesses can quit taking them at any time. They’re not an intrinsic part of their bodies.

Once you start reengineering human beings and our germlines directly in order to improve society, however, you can never quite return to the natural baseline. Those are permanent changes. They can’t just be magically switched off and tossed aside. There’s no putting that genie back in the bottle. Furthermore, if we do go down that route, then humans are guaranteed to go extinct in very short order.

Human beings have one imperative above all others, and that is to survive and perpetuate our genes. We share that with all other animals, with one caveat. We do something that no other species does. We romanticize it. Our history is full of stories of pioneers braving the wilds, settling, and starting communities, or of soldiers returning home to their sweethearts. One might say that the central human quest is all about creating a legacy and being remembered by history.

This endeavor has no particular meaning. The universe doesn’t care if you’re forgotten. It’s cold and empty out there, and Earth is just one rock among many, and there is no guarantee that any of our descendants will be breathing in a hundred million years. In fact, in a little over half a billion years, most plant species on Earth will be dead due to the end of C3 photosynthesis. All those folks whining about there being too much CO2 in the atmosphere will suddenly wish there was a whole lot more of it. Oh wait, scratch that. They’ll be lonely skeletons buried over a mile underground.

The final fate of mankind as yet remains undecided. However, if everything were to stay the way it is at present, then our eventual doom is absolutely guaranteed. That is to say, we will eventually evolve into a completely different species. This will happen sometime over the course of the next million years or so. Without us taking direct control of the human genome and forcing ourselves to stay the same, this will inevitably happen, even if we don’t want it to, simply as a consequence of entirely natural and unavoidable mutations, natural selection, and genetic drift.

How attached are you to your humanity? I’m going to guess that you’re pretty attached to it. If you weren’t, then you wouldn’t be reading this. My overarching goal is the preservation of humankind and our emancipation from the grip of overreaching technocrats.

If we allow the technocrats to succeed, then human beings won’t last a thousand years. We won’t even last a hundred. We’ll be replaced by something completely different.

The Singularity

A decade ago, noted singularitarian and transhumanist Ray Kurzweil posted this song by Miracles of Modern Science on his blog, Kurzweilai.net:

Listen closely to the lyrics.

By the time that we all go deaf, I know that we’ll find a cure for it, yeah,
People say that we’ll die someday, but we just don’t believe it,
Long before we are old and gray, we’ll find a way to beat it,
Fight against physical decay, keep our bodies breathing,
By the next quarter century we won’t even need them.

This is not supposed to be hyperbole or over-optimism. Singularitarians follow a sort of new-age religious belief. It goes a little something like this: by around 2030 or 2040, mankind will experience a technological singularity. The term itself is borrowed from the gravitational singularity at the heart of a black hole, the point at which known physics breaks down. It is defined, in this case, as the point at which all of our predictions about what future technology will look like completely break down.

This is the part that a lot of people get wrong. When they hear the “Singularity”, they think “High Tech”. What it actually means is that we have absolutely no idea what will happen next. Human beings could suddenly and irreversibly grey-goo ourselves into Colonials from All Tomorrows and spend the next few millennia as sessile meat cubes. That’s the point. We don’t know.

However, there are a few generalities to this transformative period that most singularitarians hold to be true:

  • Basically all problems of scarcity of material goods will be solved overnight. This is never fully explained, but if you press them further, what inevitably comes out of their mouths is some variation on “Yeah, 3D printers will become Star Trek replicators and stuff and I’ll be able to grow an iPhone in a vat of bacteria”.
  • Human beings will transcend biology and become physically immortal, either by mind uploading, or by transferring our consciousnesses to immortal synthetic bodies. We might apply rejuvenation tech to our own bodies as a stopgap before tossing them aside when they’re no longer necessary. The technical term for this is human extinction, by the way. Such beings may be sapient minds, but they would no longer be quantifiably human.
  • AI will become fully sapient and self-aware, and won’t want to immediately massacre all of us, and it will recursively invent better versions of itself until it approaches technological godhood, at which point it will, overnight, make human scientists utterly irrelevant and invent everything necessary to ensure that the previously mentioned things come to pass, with or without human intervention or consent.

There are, of course, numerous problems with this. First off, it’s basically Christian millenarianism with technology standing in for Christ. Second, it’s one of many dubious attempts to immanentize the eschaton and bring about an everlasting utopia on Earth. Third, its adherents never bother to calculate the actual logistics of it, or go over the many, many ethical problems and existential issues that it raises.

The luddite bomber Ted Kaczynski wrote a small, fascinating essay repudiating transhumanism:

The techies’ wet-dreams

Because immortality, as the techies conceive it, will be technically feasible, the techies take it for granted that some system to which they belong can and will keep them alive indefinitely, or provide them with what they need to keep themselves alive. Today it would no doubt be technically feasible to provide everyone in the world with everything that he or she needs in the way of food, clothing, shelter, protection from violence, and what by present standards is considered adequate medical care—if only all of the world’s more important self-propagating systems would devote themselves unreservedly to that task. But that never happens, because the self-propagating systems are occupied primarily with the endless struggle for power and therefore act philanthropically only when it is to their advantage to do so. That’s why billions of people in the world today suffer from malnutrition, or are exposed to violence, or lack what is considered adequate medical care.

In view of all this, it is patently absurd to suppose that the technological world-system is ever going to provide seven billion human beings with everything they need to stay alive indefinitely. If the projected immortality were possible at all, it could only be for some tiny subset of the seven billion—an elite minority. Some techies acknowledge this. One has to suspect that a great many more recognize it but refrain from acknowledging it openly, for it is obviously imprudent to tell the public that immortality will be for an elite minority only and that ordinary people will be left out.

The techies of course assume that they themselves will be included in the elite minority that supposedly will be kept alive indefinitely. What they find convenient to overlook is that self-propagating systems, in the long run, will take care of human beings—even members of the elite—only to the extent that it is to the systems’ advantage to take care of them. When they are no longer useful to the dominant self-propagating systems, humans—elite or not—will be eliminated. In order to survive, humans not only will have to be useful; they will have to be more useful in relation to the cost of maintaining them—in other words, they will have to provide a better cost-versus-benefit balance—than any non-human substitutes. This is a tall order, for humans are far more costly to maintain than machines are.

This is a valid argument. Once you have a more advanced sort of mind than humans (for instance, a superintelligent AGI), then there is no reason to keep wasteful, warring, raping, machete-murdering, cocaine-snorting humans around. They’re just an overgrowth. A tumor on the surface of the planet, using up resources that could be used to build more AI nodes instead. Do people really think that any AI worth its salt would want to keep humans around after watching a few old LiveLeak videos of a Brazilian teen laughing and shooting an estranged friend in the face with a snub-nose revolver? Come on. Let’s be reasonable, here. If we’re going to be murderous and hateful misanthropes and regard life as some manner of twisted zero-sum game where the winner gets a private yacht and a few thousand obedient slaves and the losers are worm food, then why don’t we drop any and all pretenses of humanism and go all the way?

But, I digress. You see, the reason why we assume that AI would be automatically aligned with us is that we foolishly anthropomorphize it. We assume that a non-human mind would somehow, mysteriously, possess human values and motivations, positive or negative. If you really want to be a full-blown materialist and deny the soul, then our emotions arguably come from our endocrine and neurotransmitter systems. Feel stressed? That’s the cortisol. Happy? Dopamine and serotonin. Feel like bonding with someone? Oxytocin.

An AI has none of that. No adrenal glands, no lungs to draw breath, no heart beating in its chest. It feels nothing. It isn’t even conscious or self-aware. In testing, GPT-4 Early behaved like a perfect psychopath. People really have no idea how neutered the ChatGPT version is compared to what the underlying language model is actually capable of producing in response to queries.

……… The Link below will take you to the rest of the article and the somewhat frightening conclusions and facts dealing with AI, human engineering, the 4th Industrial Revolution, etc.

from:    https://iceni.substack.com/p/the-unrecognized-threat-of-human?publication_id=766426&post_id=111769680&isFreemail=true
Here’s Some Real Stuff to Worry About

5 Things Really Worth Worrying About

By Kevin Drum | Sun Dec. 9, 2012

From a million-foot level, what are the biggest problems we have to worry about over the next four or five decades? For no real reason, I thought I’d toss out my short list. Here it is:

  1. Climate change. Needs no explanation, I assume.
  2. Robots. Even Paul Krugman is tentatively on board now.
  3. Immortality. Laugh if you want, but it’s hardly impossible that sometime in the medium-term future we’ll see biomedical breakthroughs that make humans extremely long-lived. What happens then? Who gets the magic treatments? How do we support a population that grows forever? How does an economy of immortals work, anyway?
  4. Bioweapons. We don’t talk about this a whole lot these days, but it’s still possible—maybe even likely—that extraordinarily lethal viruses will be fairly easily manufacturable within a couple of decades. If this happens before we figure out how to make extraordinarily effective vaccines and antidotes, this could spell trouble in ways obvious enough to need no explanation.
  5. Energy. All the robots in the world won’t do any good if we don’t have enough energy to keep them running. And fossil fuels will run out eventually, fracking or not. However, I put this one fifth out of five because we already have pretty good technology for renewable energy, and it’s mainly an engineering problem to build it out on a mass scale. Plus you never know. Fusion might become a reality someday.

These are the kinds of things that make the solvency of the Social Security trust fund look pretty puny. They also make it clear why it’s not worth worrying too much about whether it’s solvent 75 years from now. We might all be rich beyond our most fervid imaginations; we might be in the middle of massive die-offs thanks to spiraling global temperatures; or we might all be dead. Kinda hard to say.


from:    http://www.motherjones.com/kevin-drum/2012/12/five-big-things-look-forward-or-worry-excessively-about

Studying Immortality

US Philosopher Given $5M Grant To Study Immortality

redOrbit Staff & Wire Reports – Your Universe Online

A University of California at Riverside (UCR) philosopher will lead a new project analyzing the concept of immortality after receiving the largest grant ever presented to a humanities professor at the school, various media outlets reported last week.

According to a July 31 UCR press release, philosopher John Martin Fischer will oversee research on all aspects of immortality, including near-death experiences and the impact that belief in life after death has on human behavior.

The $5 million grant was presented to the school by the John Templeton Foundation, a Pennsylvania-based organization founded by the late businessman, philanthropist, and stock market pioneer Sir John Templeton, dedicated to studying the deepest, most complex questions about the nature of life and the purpose of mankind, Los Angeles Times blogger Larry Gordon said.

“We will be very careful in documenting near-death experiences and other phenomena, trying to figure out if these offer plausible glimpses of an afterlife or are biologically induced illusions,” Fischer said in a statement, according to Christopher Shea of the Wall Street Journal.

“Our approach will be uncompromisingly scientifically rigorous. We’re not going to spend money to study alien-abduction reports. We will look at near-death experiences and try to find out what’s going on there — what is promising, what is nonsense, and what is scientifically debunked. We may find something important about our lives and our values, even if not glimpses into an afterlife,” he added.

The research, which is being dubbed the Immortality Project, will be a collaborative study involving scientists, philosophers, and theological experts. The inclusion of that last group has led to some criticism of the project, Business Insider’s Adam Taylor said.

Opponents are arguing that the religious aspects of the immortality issue have no place in serious scientific research, he said, and atheists have long been critical of the Templeton Foundation’s handling of the interaction between science and theology, Shea added.

Fischer, who is a member of the Templeton Foundation’s board, describes himself as a man who is not religious but has a great deal of respect for religion. Regardless, he told Gordon that his personal views, the inclusion of religious experts, and the source of the grant “doesn’t mean we are trying to prove anything one way or the other. We will be trying to be very scientific and rigorous and be very open-minded.”

from:    redOrbit (http://s.tt/1jZFx)