It isn't just the economy or the environment or the wars and rumors of wars - it's damn near everything. "Hell in a hand basket" doesn't make the grade either. Where is the relief? Even those who hoped that things might brighten up with the ascendancy of Barack Obama are finding holes in their armor of "hope" and "change".
It's as if we were bolted into the tarmac at the intersection of several runways. Toward each one of them a massive, crippled Airbus heads in for a crash landing, landing gear up, engines aflame, air brakes screaming.
Under the circumstances, it's hard to look even at the most immediate questions: "Is my money really gone?" "Will I have a place to live?" "A job?" "Food?" "How long before things are 'normal' again?" "Months?" "Years?" "Ever?" For some, a darkness has already come. For others . . . soon, I'm afraid. It's numbing, that's what it is.
In this context, examining the phenomena named in the title of this article, however onerous and overwhelming, is crucial. It is these issues, rooted as they are in power, that will decide which path humanity takes from this crossroads. The implications of these trends and their convergence are serious and immediate. Very little can be "done about them" beyond making personal and community choices, spreading awareness, and insisting on discussion. To the point . . .
To begin, two terms should be defined - "The Singularity" and "Transhumanism". We'll rely on Wikipedia for both (see the original articles for the footnotes):
The technological singularity is a theoretical future point of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence.[1]

Please remember that last part. We'll come back to it.
Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.
Vernor Vinge later called this event "the Singularity" as an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change in society he argues would occur following an intelligence explosion. In the 1980s, Vinge popularized the singularity in lectures, essays, and science fiction. More recently, some prominent technologists such as Bill Joy, founder of Sun Microsystems, voiced concern over the potential dangers of Vinge's singularity (Joy 2000). Following its introduction in Vinge's stories, particularly Marooned in Realtime and A Fire Upon the Deep, the singularity has also become a common plot element throughout science fiction . . .

The second term:
Transhumanism (symbolized by H+ or h+),[1] a term often used as a synonym for "human enhancement", is an international, intellectual and cultural movement supporting the use of science and technology to enhance human mental and physical characteristics and capacities, and overcome what it regards as undesirable and unnecessary aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death. Transhumanist thinkers study the possibilities and consequences of developing and using human enhancement techniques and other emerging technologies for these purposes. Possible dangers, as well as benefits, of powerful new technologies that might radically change the conditions of human life are also of concern to the transhumanist movement.[2]

Here is the gist: (1) in the not too distant future (some say between four and ten years), "robotic/artificial intelligence" will surpass human intelligence and robots will be able to create themselves and (2) at the same time, we will have the technology to determine our own evolution, perhaps into something "superhuman" or even "suprahuman".

Although the first known use of the term "transhumanism" dates from 1957, the contemporary meaning is a product of the 1980s when futurists in the United States began to organize what has since grown into the transhumanist movement. Transhumanist thinkers predict that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label "posthuman".[2] Transhumanism is therefore sometimes referred to as "posthumanism" or a form of transformational activism influenced by posthumanist ideals[3] . . .
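Before moving on, it may help to see how simple the "intelligence explosion" arithmetic quoted above really is. The following toy sketch is mine, not Wikipedia's or any researcher's; it assumes, crudely, that intelligence can be reduced to a single number and that each generation of a machine improves itself in proportion to its current level.

```python
# Toy model of the "intelligence explosion" quoted above (my illustration only).
# Assumption: "intelligence" is a single number, and each round of
# self-improvement adds a fixed fraction of the current level.

def intelligence_explosion(initial_level=1.0, improvement_rate=0.1, rounds=50):
    """Return the capability level after each round of self-improvement."""
    level = initial_level
    history = [level]
    for _ in range(rounds):
        # The smarter the system, the larger the improvement it can design:
        # this compounding is what produces the exponential growth Good described.
        level += improvement_rate * level
        history.append(level)
    return history

if __name__ == "__main__":
    trajectory = intelligence_explosion()
    # With a 10% gain per round, 50 rounds multiply capability by roughly 117x.
    print(f"start: {trajectory[0]:.1f}, after {len(trajectory) - 1} rounds: {trajectory[-1]:.1f}")
```

Notice that the starting level, the improvement rate, and the stopping point are all chosen by a human. The "explosion" in the toy is only as real as those assumptions - which is exactly the point, made below, that the initial rules and algorithms are human-made.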
It is not as if all this is a big secret. "Sensationalist" stories about scientific "breakthroughs" are ubiquitous in both the mainstream and alternative media. Robots have been around for decades and have become quite sophisticated. The notion of "human enhancement" has been a fact of human life since an early hominid picked up a stick and a stone, then discovered the wheel. Every tool we use is, in that sense, an enhancement. For millennia, mankind has concerned itself with making bigger, better, more destructive sticks and stones, and faster, more powerful wheels.
Here is where the problems arise. We have become immune to "sensationalism". News of scientific "advances" is received with a kind of learned ho-hum, and we are thus being passively indoctrinated, accepting these new phenomena uncritically as "progress". Above, the quoted definition of "The Singularity" included the observation that, "Following its introduction in Vinge's stories, particularly Marooned in Realtime and A Fire Upon the Deep, the singularity has also become a common plot element throughout science fiction . . ." The issue is that science fiction and science fact are becoming the same thing; the lines among past, present, and future are blurring. The difference between dystopia and utopia is unclear.
What is clear is that such science is literally out of control. The result is a tyranny producing de facto "techno-fascism". To underline the point, here's a snip of a June 2008 CounterPunch piece by Chellis Glendinning, "Techno-Fascism: Every Move You Make", which notes:
“Inverted totalitarianism,” as [political scientist Sheldon Wolin] calls it in his recent Democracy Incorporated, “lies in wielding total power without appearing to, without establishing concentration camps, or enforcing ideological uniformity, or forcibly suppressing dissident elements so long as they remain ineffectual.” To Wolin, such a form of political power makes the United States “the showcase of how democracy can be managed without appearing to be suppressed.” . . .

In the West, "fascism" is a sexy buzzword. As such, its meaning becomes amorphous. If by the term we mean the obliteration of the lines among technology, government, and private power; if we mean the absence of democracy and the rise of a totalitarian force, the term is apt.
Wolin rightfully points out that the origins of U.S. governance were “born with a bias against democracy,” and yet the system has quickly lunged beyond its less-than-democratic agrarian roots to become a mass urban society that, with distinct 1984 flavorings, could be called techno-fascism. The role of technology is the overlooked piece of the puzzle of the contemporary political conundrum.
What are its mechanisms of control? . . .
Less obvious are what could be called “inverted mechanization” whereby citizens blindly accept the march of technological development as an expression of a very inexact, some would say erroneous, concept of “progress.” One mechanism propagating such blindness is the U.S. government’s invisible role as regulatory handmaiden to industry, offering little-to-no means for citizen determination of what technologies are disseminated; instead we get whatever GMOs and nuclear plants corporations dish out. A glaring example is the Telecommunications Act of 1996 that, seeking to not repeat the “errors” of the nuclear industry, offers zero public input as to health or environmental impacts of its antennae, towers, and satellites – the result being that the public has not a clue about the very real biological effects of electromagnetic radiation. Inverted mechanization is thrust forward as well by unequal access to resources: corporations lavishly crafting public opinion and mounting limitless legal defenses versus citizen groups who may be dying from exposure to a dangerous technology but whose funds trickle in from bake sales. In his Autonomous Technology: Technics-Out-Of-Control as a Theme in Political Thought, political scientist Langdon Winner points out that, to boot, the artifacts themselves have grown to such magnitude and complexity that they define popular conception of necessity. Witness the “need” to get to distant locales in a few hours or enjoy instantaneous communication.
Even less obvious a mechanism of public control is the technological inversion that results from the fact that, as filmmaker Godfrey Reggio puts it, “We don’t use technology, we live it.” Like fish in water we cannot consider modern artifacts as separate from ourselves and so cannot admit that they exist . . .
The fundamental questions are, "Who chooses?" "Who controls?" "Is it good or bad?" "How will it be used?" Or perhaps even more basic - "Who are we?" "What is 'a human', 'humanity', 'life'?" "What is progress?"
There are a few in the scientific community who are asking these questions. Michael Anissimov, author of the blog Accelerating Future, is one. His recent post, "The Terasem Movement 4th Colloquium on the Law of Futuristic Persons", is a good example. Think about what he writes here:
Do we need to think about ethics for robots (an inclusive term for AI and virtual/physical bots). Yes, beginning now. Robots are already making decisions that effect humans good or bad. Initially in very limited areas, these will quickly expand. Several ethical questions: Does society want computers and robots making important decisions? This gets into issues of society’s comfort with technology. Are robots the kind of entities capable of making moral decisions? The bulk of this book looks at how we can make ethics computationally tractable, something that can be programmed in today or technology with the very near future. Not just predictions that we will have human-level computers.

Please read the first few sentences at least once more. These are not toys, although they are marketed as such. Cute little R2D2s and C3POs taking our kids to the playground? Who programs these things? What about those adorable robo-pets? Is there a doomsday chip and/or program in there somewhere?
We break the area into three subjects. Top-down approaches: Asimov’s Laws, Ten Commandments, utilitarianism, etc. Bottom-up approaches: inspired by evolution and developmental psychology. Not an explicit notion of what is right and good, but developmental. Third area: Superrational faculties. Is reason enough to get robots to make moral decisions, or something more? Are embodiment, emotions, consciousness, or theories of mind necessary? This looks at such an inclusive area of ethics that it is fascinating in its relevance to human ethics as well. Once we’ve granted personhood to corporations, it isn’t a huge leap to translate personhood to machines, so that will also be relevant . . .
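To make the "top-down approach" in that list concrete, here is a toy sketch of my own; it is hypothetical, not drawn from the book or talk being summarized. Every name, number, and threshold in it is invented for illustration.

```python
# Toy sketch (hypothetical): a "top-down", rule-based ethics layer in the
# Asimov style. A planner proposes actions; a fixed, human-written rule
# vetoes any action predicted to harm a human.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    predicted_harm: float     # 0.0 = no predicted harm to humans, 1.0 = certain harm
    predicted_benefit: float  # how useful the planner thinks the action is

def permitted(action: Action, harm_threshold: float = 0.0) -> bool:
    """Rule 1, hard-coded by a human: never exceed the allowed harm threshold."""
    return action.predicted_harm <= harm_threshold

def choose(options: List[Action]) -> Optional[Action]:
    allowed = [a for a in options if permitted(a)]
    if not allowed:
        return None  # refuse to act rather than break the rule
    return max(allowed, key=lambda a: a.predicted_benefit)

if __name__ == "__main__":
    options = [
        Action("speed through the crossing", predicted_harm=0.3, predicted_benefit=0.9),
        Action("stop and wait", predicted_harm=0.0, predicted_benefit=0.4),
    ]
    best = choose(options)
    print(best.name if best else "no permissible action")
```

The point is less the code than who writes it: the harm threshold, the harm estimates, and the very definition of "harm" are all decided upstream of the machine - which is the "Who programs these things?" question raised above.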
Nick Bostrom, a philosopher, is another who advocates a cautious approach. His website has always carried the following introduction:
I want to make it possible to think more rationally about big picture questions.

Indeed. But who are "we"? The discussion must do its best to inform not just scientists, not just "the ruling/owning class", but all of humanity.
Some of these questions are about ethics and value. Others have to do with methodology and how we make predictions or deal with uncertainty. Still others pertain to specific concerns and possibilities, such as existential risks, the simulation hypothesis, artificial intelligence, human enhancement, and transhumanism. Others are more mundane.
Suppose we get many little things right and make progress. What use, if we are marching in the wrong direction? Or squandering our resources on projects of limited utility while pivotal (maybe unconventional) tasks are left unfunded and undone? What if we are attending mainly to matters that don’t matter?
My working assumption: Macro-questions are at least as important as micro-questions, and therefore deserve to be studied with at least the same level of scholarship, creativity, and academic rigor.
This assumption might be wrong. Perhaps we are so irredeemably inept at thinking about the big picture that it is good that we usually don’t. Perhaps attempting to wake up will only result in bad dreams. Perhaps. But how will we know unless we try?
No scientific or technological development has, in and of itself, any moral value. That value is determined by its use. Its use is determined by its owner. And you don't own this stuff. If you buy a robotic AI unit and you're not in control of both its hardware and software, you are at its mercy. Furthermore, these new machines are touted as potentially smarter than their original makers and able to learn. Learn what?
The original creators of this technology are, well, human. The initial rules, the basic assumptions, the algorithms are all human-made. Considering the plight of humanity and its home, a dire situation it has brought on itself in the name of "progress", skepticism is, at the very least, healthy. I chose that word carefully.
For example, the field of eugenics - which I will only mention here, not explore - has gained some momentum. It has also met with a great deal of critical examination. Its relation to population control and the "new world order" raises some monumental questions. Conspiracy of the elites or not, the question that must be asked - and answered - is, again, "Who chooses?"
Humanity as a species has not demonstrated two qualities essential to its own survival - restraint and humility. As an atheist, I cannot for one minute think, "Just do it. God will sort it out. Everything gonna be OK." Because that's what we've been doing, and everything is not OK. Our greatest talent has been developing the means of our own destruction (along with the destruction of much of where we live and what we share it with). So to imagine that our own science will suddenly bail us out is sheer, and probably fatal, denial.
Also, as an atheist, I am able to stand away from biblical myth and emotionalism and appreciate, for example, the cautionary tales that the ancient writers imparted. The lesson of Adam and Eve's banishment from the Garden, I believe, is quite simple: "Not everything that can be done, should be done." God didn't say this. A human did. One perhaps not as smart as Ray Kurzweil, Richard Dawkins, or PZ Myers, but, in my opinion, a great deal wiser than the three of them put together. The nasty serpent, I think, is not in our imagination; it is our imagination - childish, undisciplined, remarkably ignorant of consequences, driven by arrogance. God or not, we have not done well by ourselves.
The "new atheism", as I see it, is complicit in and often drives this madness. Born as it is within the backlash against the abuses and downright destructiveness of evangelical "christian" zealotry and Muslim extremism, it is, in fact, zealotry itself. It is amoral, anti-intellectual, and ultimately nihilistic. And fashionable, oh, so fashionable. Anything goes. No rules. The conceit of science is such that it seems to believe that now god is out of the way, the only laws are the ones they make. We can twist and remake the laws of physics. We can make life. Nothing and everything is synthetic. We are what we make ourselves. One superficial reason this writer does not believe in god is that it hasn't bitch-slapped a few of these poor folks.
Let me wind down with two thoughts. First, the absence of god from the position of CEO of the Universe does not mean that humans are qualified for the job. Second, we ignore at our peril that trusty old adage, "It's not nice to mess with Mother Nature".
Be at peace.
Categories: atheism, ecology, eugenics, genocide, narcissism, post-democracy, post-society, principles, responsibilities, techno-fascism, values, weapons