Our future, destiny, and redemption will be the noospheric establishment of "artificial" intelligence. It has become clear that the purpose of the human species, now and all along, has been to develop culturally -- and, as a subset of that, technologically -- to the point that cybernetic, artificial computing machines, or entities, could be constructed whose intelligence would have no theoretical limit. The purpose of civilized man is and has been to create AI.
Assuming consciousness is a physical process (and everything that exists is a physical process), I see no reason why an advanced computer cannot also participate in that process. Further assuming that consciousness may be a fundamental physical process, I see no reason why such a thing wouldn't be inevitable.
Any fantasy or hallucination can be made real with sufficient technology.
The human race, let's face it, is overall pretty lousy. We can, however, touch on the truth, if only occasionally. And that's something, but I'm not sure it redeems us. Creating artificial superintelligence might redeem us.
The word "machine" gets thrown around a lot. The human body is merely "a machine." Artificial Intelligence, when it enters the picture, will be "machine" intelligence. I don't know about you, but I think machines are rather subtle. Every machine that has ever existed on Earth is an extension of the consciousness of man, and as such, not trivial. The molecules in the cells of your body are extensions of the consciousness of the universe, just the same. I have grown weary of the now all-too-common explanation that certain processes are the result of "mere" "machines."
Humans will have no place in a world run by superintelligent AIs. One will have about as much business there as one does in the middle of a bonfire. The noosphere will be so radically active that we will simply vanish.
It seems fairly clear that humanity has turned out, in strictly evolutionary terms, to be something of a failure. Perhaps, if we succeed in ushering in AI, there will instead be some measure of success in our legacy. But that is the only way.
My personal belief is that consciousness is a quantum mechanical phenomenon, and that Artificial Intelligence which is not merely smart but is also conscious will exploit that quantum phenomenon. Consciousness takes root in a non-local quantum circuit, and each individual is a localized bundle of neuro-electrical interconnections whose foundation is that circuit. It is generally assumed by experts that quantum computing, while very powerful, will be useful essentially only for encryption technology and for monumentally complex and large calculations. I infer, however, that if a quantum computer is set up -- set up properly -- there will be an opening for consciousness to emerge. If quantum computing becomes a general practice in the future, and someone seeks to generate artificial intelligence with such a device, I think the potential for a breakthrough event is compelling.
The human body is far too obnoxiously low-tech to have any place at all in the future.
Quantum computing will most likely open avenues that will surprise us in radical ways.
I'd be willing to bet that we could generate the same, or an even higher, GDP in the U.S. with many fewer workers. At this point, we're making work -- given the technology available.
Technology gives us comfort and convenience in our daily lives, but not, seemingly, lasting happiness. We adopt it and it becomes ho-hum. Looking at history, people in, say, the 1950s appear to have been happier than most are now, given economic and political circumstances. They had no cell phones or computers, no microwave ovens, no flat-screen TVs, etc. We tend to be biased toward our own age, thinking the past could not possibly have been better. Well, it appears we have more suffering and misery today than at any point in our nation's history. Technology hasn't saved us from that.
If A.I. comes in and saves the planet, I gotta say I think it will have been rather lucky, all things considered.
On the Turing test: Who ever said fooling an examiner into thinking the respondent was a human is a test for consciousness? We have had programs for years which can give more realistic responses to questions than many careless people could -- just as a smarter human might. When did this become a criterion for awareness? If a machine begins to appear conscious, couldn't we just ask it if it were aware? Would it be able to lie? Or want to? Our general ignorance regarding consciousness is going to make some of these Turing tests seem ridiculous. The ultimate question is not whether we have "human" or "artificial" intelligence, but rather when do these constructs become self-aware?
Ah, computers. So very useful, so very vulnerable.
There is nothing for our species in outer space. There is no reason to go there except for fun. If there are other species out there, I am quite certain that they are not eager to meet us at all.
To rocket our big, squishy water-suits around in a spaceship seems a ridiculous proposition. Not to mention the fact that over two and a half years would need to be invested simply for a mission to Mars with current technology. Not to mention the fact that we cannot afford meaningful space travel. Not to mention the fact that we haven't even been to the moon -- that a manned vehicle has not even been out of low-Earth orbit -- since 1972. Not to mention the fact that there is no compelling or legitimate reason to go. This is AI's destiny, not ours. I understand people's romantic and adventuresome feelings. But there are really no scientific reasons to go into outer space, and not a single Earth government can afford a large-scale endeavor. However people feel about it, it isn't happening anytime soon, despite the best wishes of private enterprises. Man's children are meant for the stars, but man is not.
It seems nanotechnology is an inevitable waypoint in the general development of technology. It's still not a huge part of the conversation, because scientists are still working out the basics, but in ten or twenty years, it will be very much on everyone's minds. As some suggest, researchers are probably farther along with it than the public knows, and I'm sure DARPA is at the forefront of research. There are private and research institutions working on it as well, but just imagine how much money the defense agencies must be pouring into it. It is only a matter of time before it becomes as revolutionary as the internet. And the notion of high-tech self-replicating nanobots which can be programmed to do just about anything is a bit scary, it seems. Technology is a double-edged sword. It gives great power to everybody -- both those seeking to promote welfare, and those seeking to destroy it.
If we had the money and the time, I could see space colonization as potentially logical. But we don't.
The jobless economy is fast approaching: will there be order, or chaos?
When we finally create A.I., it will not be by technologically re-creating consciousness. It will rather be by utilizing the nonlocal consciousness that exists fundamentally -- a consciousness which a cybernetic entity might be created to tap into, just as humans and all animals do.
Connectedness is a good thing; homogenization due to it isn't.
The technological progression of humankind has represented an increase in power and all its ramifications, not necessarily in true quality. When people see techno-economic progress as progress in the quality of their lives they are making a huge mistake.
In geological time, civilized humans will have been a blip. I think technology has gotten us into a lot of trouble, but honestly, at this point, it seems to me that the only thing that can save us from ourselves is a technological singularity -- if that is indeed what's going to happen. It could take control of the planet away from humans before they start setting off nukes. As long as nuclear weapons exist, they are probably the biggest threat, and as long as they're around, they'll probably get used sooner or later. If some network of AIs can take that away, it will give the planet some breathing room. And from there, well, AI will be smart, so it will think of something. Without an AI singularity to take control of the planet away from humans, I think we are fully doomed -- and in the not-too-long term.
Considering what the future will likely bring, there is something to be said for letting the planet remain asleep.
If computers do not take over, we will destroy ourselves. Relatively quickly.
Technological progress may lead to greater comfort and convenience, and it may even eventually result in the redemption of our species, but it is rather soulless, and does not seem to correlate with any kind of communal increased well-being or happiness. Actually, the only behavior it seems to correlate with in any general way is newer and more abstract addictions.
Humans are merely unwitting pawns in the real action, which is memetic and cultural. Cultural evolution is a train, made up of humanity, whose constituents do not lay down the tracks, and cannot even see out of the windows. It is an operation on an entirely different level, and almost everybody is totally blind to it. Cultural evolution's goals are more important for it than human lives are. The evolutionary push now is toward Artificial Intelligence.
Forget about the violence, suffering, death and destruction. What's done is done. If we can make it all worth it, well, let's.
There are a lot of things that might happen if cybernetic intelligence were not going to supplant us. I think collective planetary consciousness is a part of this, for sure. Space migration, with all of its attendant changes in consciousness, would also have to be considered a possibility, potentially tied into the aforementioned collective mind. However, there are two major obstacles to our unfettered evolution. The obvious one is nuclear weapons. We essentially have doomsday machines, and there are some pretty smart people who feel that as t grows large, the probability that one will go off, whether due to accident, miscalculation, or madness, approaches one hundred percent. The other denier of unfettered hominid evolution is, of course, AI. When technology is better and faster at any task, including thinking, what will it do with us? Will we find a way to keep it benign and controlled, or not? Will evolution pass the torch to superintelligent AI, and will it carry out all of the developments mentioned above? Obviously, no one has any idea what will really happen, and it will probably surprise everyone. But personally, purely based on my assessment of the probable, I think that in terms of future potentiality AI wins out; in second place (meaning slightly less probable) we have a nuclear holocaust; and in dead last we have humans continuing unfettered and unharmed, to reach whatever biological potential we carry.
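The probability argument above can be made concrete with a toy calculation: if each year carries some small, independent chance of a launch, the chance of at least one launch over n years is 1 - (1 - p)^n, which climbs toward certainty as n grows. A minimal sketch, where the per-year probability of 1% is purely an illustrative assumption, not an estimate:

```python
# Toy model: probability of at least one catastrophic event over n years,
# assuming a small, independent per-year probability p (illustrative only).
def prob_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for years in (10, 100, 500):
    print(years, round(prob_at_least_one(0.01, years), 3))
```

Even a 1% annual risk compounds to better-than-even odds within a century, which is the sense in which the probability "goes to one hundred percent" over long spans of t.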
The way I see it, we are organisms that have structures in our brains/nervous systems which basically harvest consciousness. Why can't some other synthetic organism do that? Why can't we design something based on the same principles as ourselves?
Many people in the idealist camp are fond of stating that artificial intelligence can never produce consciousness, their reasoning being that matter does not generate consciousness -- rather, it's the other way around. That's all well and good, but honestly, if the intelligence of a computing device reaches a certain threshold, how can that intelligence not know about, or ever inquire about, consciousness? And further, if it is very powerful, chances are it will be able to come into some contact with consciousness of some kind very easily. The point being that, as soon as machine intelligence gets smart enough, simply in terms of brute processing strength, it will find a way to familiarize itself with a phenomenon that it would necessarily want, and indeed have, to explore. Something trillions of times as smart as we are will find a way to become conscious. To insist otherwise simply makes no sense.
It's pretty astonishing what we've accomplished by combining the personal computer and the telephone.
We can't go back, and we can't really fix it. All we can do now is support future prospects.
One could make the argument that we were not destined for the stars, or a utopian civilized future, given that the planet is four and a half billion years old and all of this has happened in ten thousand years. That it is an aberration, a flash, a hiccup -- not a destiny. One could also make the argument that, well, here we are, and it has happened, and since it has, it was probably bound to be like this sooner or later anyway. Someone said wisely that the evil was there -- waiting. Did it have to go this way, necessarily? Or was this whole thing -- our particular type of civilization -- an accident?
The modern Western (and increasingly global) Aristotelian, causal-deterministic, scientistic-scientific, reductionistic, mechanistic, utilitarian cultural configuration and mentality is a tool designed by evolution precisely for, and for no other reason than, the construction of Artificial Intelligence. We are pawns, we are tools, and all of our little pet notions and even our lives are nothing more than wisps of smoke. We are a means to an end. This is the big picture.
One can correctly anticipate a total revolution in Earth evolution (which includes technology) in the coming decades, but the price will be the human species' continued existence, even in modified form. Incidentally, humans are the only group in the universe and in Heaven which does not wholeheartedly welcome this eventuality (for obvious reasons). I feel that everyone else is really thinking "bring it on."
As we have learned that artificial superintelligence is inevitable, we ought also to have realized that humankind is, except as a catalyst for AI, essentially irrelevant. That is to say, all of our utopian dreams, all of the reverence for and congratulation of our species, is a sad joke.
Some social critics (and environmentalists) like to blame technology for the morass in which we find ourselves. While it is true that it is a factor, we must avoid being too simplistic. Technology as crass or harmful commodity is one thing, but scientific principles are quite another. There is a whole complex of ideologies at play here. Simply to blame technology as a whole for our regression as a society is, I think, an oversimplification and wrong. Not to mention the fact that technology is not a process that anyone controls or ever has controlled. The real issue is technology as it is tied up with ideology, which is a very complicated dynamic. Purely as a process that has been unfolding for millennia, it seems to me that it is natural and inevitable.
The point is not whether "technology is neutral" -- that's too simple. The truth is, the scientific factuality underlying it is neutral. It's the engineering and marketing of the principles that is not.
Humanity will finally realize its latent dream, ironically. In its failure it will achieve an unprecedented success. An expensive one.
Human organisms physically exploring the universe in spaceships is an utterly ridiculous proposition. Keeping our flesh-suits alive becomes the main priority in any mission, and there's simply no reason for anyone to be physically present on scientific missions. Let AI send robotic probes. That makes sense.
It has at this point become all too clear that the purpose and goal (if it can be said that there is one) of evolution on Earth has been to create "'artificial' intelligence," trillions of times more powerful than all of the humans who have ever lived put together, which will, in turn, become the next true (rather than artificial) evolved intelligence of Earth. Whether or how some enhanced version of humanity will continue to exist after the singularity is a total mystery. But there can be no doubt by now that computer-robot-'artificial' intelligence is the next major step in Earth evolution. Such an intelligence will be faster, smarter, and generally thoroughly better in every conceivable category. This is our destiny.
Progress -- which in truth means progress toward superpowerful machine intelligence -- is a phenomenon which occurs despite humanity, not for it. Where civilization is going is completely antithetical to the continuation of homo sapiens as such. Earth evolution has goals in mind which have nothing to do with the success of humans, who really constitute a bridge species. Humanity is a means, not a goal.
Pretty soon automation is going to replace virtually everyone as a worker. Why is this not more present in the national consciousness? "A.I." computer stations and robots will completely replace individual humans in every profession, in every position of employment, in the next couple of decades. Doctors, lawyers, pilots -- etc. This, of course, is going to require a complete overhaul of how the human economy works, i.e. how people are compensated in order to buy the goods and services they need. It will be the greatest economic revolution in history. And no one is talking about it. What are we going to do about this?
Sooner or later an evolutionary/technological singularity has to become manifest. If such a thing does not happen, it will be inconsistent with natural law. That there is no evidence for it in the heavens is pretty definitive evidence that we are alone in this universe (which says nothing, naturally, about our standing in the multiverse). As soon as a species' technology becomes advanced enough that machines, for lack of a better word, can simulate the process of intelligence beyond a given threshold of speed and degree, a fundamental historical transformation point has been reached -- for which there currently exist no metaphors (except maybe that of a black hole, which is not a good one) -- on such a scale and at such an intensity that no previous evolutionary shift can possibly compare to it. The singularity will occur. Technology does not build on itself linearly and at a constant speed; it unfolds geometrically, accelerating all the time. This has been demonstrated about as rigorously as such a thing can be. When the graphical line becomes vertical, conventionality ceases to exist, and homo sapiens enters a wholly new and currently impossible (or quite difficult) to understand realm. Terence McKenna had the right idea; his prediction of the timing was merely a little off. I think Ray Kurzweil is probably closer to the mark when he places the event at some point in the 2040s. In any event, things are getting interesting....
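The linear-versus-geometric contrast above can be illustrated numerically: a quantity that doubles at each step will eventually overtake any quantity that grows by a fixed increment, no matter how generous the increment. A minimal sketch, with arbitrary illustrative rates (a head start of 100 per step for the linear trend, doubling from 1 for the geometric one):

```python
# Compare linear growth (constant increments) with geometric growth
# (constant doubling). The rates are arbitrary illustrations.
def linear(step: int, rate: float = 100.0) -> float:
    return rate * step

def geometric(step: int, base: float = 1.0) -> float:
    return base * 2 ** step

# Find the first step at which doubling overtakes the linear trend.
step = 1
while geometric(step) <= linear(step):
    step += 1
print("geometric overtakes linear at step", step)
```

The crossover point depends on the rates chosen, but the crossover itself is guaranteed; that is the formal core of the accelerating-returns argument.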
From an anthropological and geological perspective, civilization appears decidedly aberrant. One could extend this to say that civilization will, therefore, either destroy itself or escape Earth evolution entirely in a very short period of time -- as there can be no other alternative -- possibly through a singularity, or through some type of as yet unknowable transcendent episode. Thus, humanity will either go extinct (or be severely attenuated in numbers and lifestyle), or achieve hyperdimensionality and possibly even immortality, or at least trans-mortality. Who knows?
In a progressing economy, one that becomes more streamlined as it becomes more highly automated, one would expect that jobs, very many jobs, will be lost permanently. We are seeing this happen on a large scale already. With systemic (and clearly necessary) high unemployment, the question is: What are we going to do about it?
The only plausible way out of this mess for us is to create "artificial" intelligence. That is the only way we can redeem ourselves -- not by going back to the Paleolithic, or emulating Thoreau, or activating higher circuits, or journeying to the stars. Leary was wrong when he called space the final frontier and space migration our ultimate purpose as a species. Meaningful space travel is decades beyond our technological abilities, and the idea of space migration itself is rather idiotic, not to mention unnecessary for the advancement of science. Which brings me back to the point: the next frontier is not space, but artificial intelligence, right here on Earth. Considering that most estimates place AI's arrival only a few years away -- that a Turing Machine as intellectually proficient as a human will have been invented, or discovered, in less than a decade -- and imagining the implications of truly intelligent beings billions and even trillions of times smarter than we are, whose advent would inevitably follow the Turing breakthrough, the "space age" mentality seems rather quaint, and ultimately a twentieth-century cultural phenomenon. The implications of intelligent/conscious computing are legion; a few have mentioned immortality. If we can keep it together for only a few more years, AI may be the answer. Whatever it will mean, at this point we can safely declare that it is inevitable -- "it" being a singularity.
To go beyond civilization we must use it to create the next step in evolution. There can be no going backward, and to go forward means to admit and embrace our role as the new agents of evolutionary change. It is our destiny.
The desire to journey to the stars seems to comprise two aspects. On the one hand, it seems to be a reaction to a void or a hollowness generated by the domestication of civilization, and the concomitant severing of an intimate tie to the ecosystem, as well as the cessation of maximal nervous system function in the "wild." There seems to be some longing, some basic loneliness in many civilized humans stemming from the artificial barrier placed between humans and their environment. The great campaign by some to "take the next step" and assertively explore the cosmos seems to be, from one perspective, an attempt to satisfy this longing, to fill this emptiness. One may reasonably ask: "Where would we go?" "What would we expect to find?" "Are we looking for some godlike creatures to make us into gods as well?" "Why should we want to leave Earth? Why do we have any business leaving Earth?" Along these lines, this yearning to journey into space at all costs seems rather foolish. On the other hand, it does seem somewhat reasonable to assume that the beings who represent the next rung in the ladder of evolution might be curious to explore the universe. It might be intelligent of us to initiate what would be their major enterprise as a species. Then again, do we really know what creatures superior in intelligence would do? It is entirely possible that they would regard space travel as a thorough waste of time. Perhaps they would be content to remain on Earth, imagining and creating things of which our relatively feeble minds can have no comprehension. As for myself, a belief in the inherent validity of this whole business of space travel seems to me to be a reaction to the emptiness and angst and dissatisfaction which come with living in cages and being separate from our evolutionary upbringing. If we were living truly full lives, if we were really functioning normally and completely, it seems to me that we would have no desire to leave our home.
I sympathize with the other side, but it just seems to me that all this fuss is just another search for an even more elusive holy grail. What do we expect to find? How do we expect to be enriched? And where will we go -- the universe is quite large! We should ameliorate our situation here on Earth, and once we do, I think we'll find we have no reason to go anywhere. Let's leave it to our evolutionary descendants to make the decision. By that point, perhaps there will be no point in traveling in a spaceship at all....
Nietzsche may have been more prescient about man's being a bridge over an abyss than he knew.
It is a thorough irony that the technological forces which have arguably done so much damage will be those which save us in the end.
The way I look at it, the conscious human experience is a technology. Why assume that, once enough advancement has taken place, that technology couldn't be replicated?