Why you'll be immortal if you're still alive in 2040!


Nickolai77:
I'm sure there will be, and I think it would be more revolutionary than the invention of the microchip, and just maybe comparable to the invention of the steam-engine.

I'm just sceptical of this idea that with the invention of super-powerful AIs everything's going to be fixed and we'll be living in a sci-fi utopia. To quote Steven Pinker:

"There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles - all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems."

However, what sets my mental alarm bells ringing is the claim that we'll be immortal within our lifetimes and living in a sci-fi utopia. It sounds like what has been described as "a rapture for atheists". It's the same kind of wishful thinking that has led numerous Christians over the centuries to happily predict the end of the world, utilising evidence from the Bible to prove that soon we'll be living in their own immortal utopia in heaven. It all sounds too good to be true because it is.

Ah, what fallacy is this?
http://www.escapistmagazine.com/forums/read/528.361043-A-Series-on-Rhetoric-1-Rhetalogical-Fallacies
(Danny, you see why I really need the separate images!)
Appeal to Consequences of a Belief? Appeal to Ridicule?

But I can understand your scepticism.

Danyal:
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

http://www.time.com/time/magazine/article/0,9171,2048299-1,00.html

And Kurzweil writes;

The Criticism from Incredulity
Perhaps the most candid criticism of the future I have envisioned here is simple disbelief that such profound changes
could possibly occur. Chemist Richard Smalley, for example, dismisses the idea of nanobots being capable of
performing missions in the human bloodstream as just "silly." But scientists' ethics call for caution in assessing the
prospects for current work, and such reasonable prudence unfortunately often leads scientists to shy away from
considering the power of generations of science and technology far beyond today's frontier. With the rate of paradigm
shift occurring ever more quickly, this ingrained pessimism does not serve society's needs in assessing scientific
capabilities in the decades ahead. Consider how incredible today's technology would seem to people even a century
ago.

A related criticism is based on the notion that it is difficult to predict the future, and any number of bad
predictions from other futurists in earlier eras can be cited to support this. Predicting which company or product will
succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or
standard will prevail. (For example, how will the wireless-communication protocols WiMAX, CDMA, and 3G fare
over the next several years?) However, as this book has extensively argued, we find remarkably precise and
predictable exponential trends when assessing the overall effectiveness (as measured by price-performance,
bandwidth, and other measures of capability) of information technologies. For example, the smooth exponential
growth of the price-performance of computing dates back over a century. Given that the minimum amount of matter
and energy required to compute or transmit a bit of information is known to be vanishingly small, we can confidently
predict the continuation of these information-technology trends at least through this next century. Moreover, we can
reliably predict the capabilities of these technologies at future points in time.
Consider that predicting the path of a single molecule in a gas is essentially impossible, but certain properties of
the entire gas (composed of a great many chaotically interacting molecules) can reliably be predicted through the
laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project
or company, but the overall capabilities of information technology (comprised of many chaotic activities) can
nonetheless be dependably anticipated through the law of accelerating returns.
Many of the furious attempts to argue why machines-nonbiological systems-cannot ever possibly compare to
humans appear to be fueled by this basic reaction of incredulity. The history of human thought is marked by many
attempts to refuse to accept ideas that seem to threaten the accepted view that our species is special. Copernicus's
insight that the Earth was not at the center of the universe was resisted, as was Darwin's that we were only slightly
evolved from other primates. The notion that machines could match and even exceed human intelligence appears to
challenge human status once again.
In my view there is something essentially special, after all, about human beings. We were the first species on
Earth to combine a cognitive function and an effective opposable appendage (the thumb), so we were able to create
technology that would extend our own horizons. No other species on Earth has accomplished this. (To be precise,
we're the only surviving species in this ecological niche-others, such as the Neanderthals, did not survive.) And as I
discussed in chapter 6, we have yet to discover any other such civilization in the universe.
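To make that gas analogy concrete, here's a toy Python sketch (my own illustration, not Kurzweil's): the endpoint of any single random walk is unpredictable, but the average over many of them is very predictable.

```python
import random

# Toy version of the gas analogy: one "molecule" (a random walk) is
# unpredictable, but the ensemble average is not.
random.seed(0)

def final_position(steps=1000):
    """One random walk: each step is +1 or -1 with equal probability."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

single = final_position()                      # could land anywhere in [-1000, 1000]
ensemble = [final_position() for _ in range(10_000)]
mean = sum(ensemble) / len(ensemble)           # reliably close to 0

print(f"one walker ended at {single:+d}")
print(f"mean of 10,000 walkers: {mean:+.2f}")
```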

Did you know that in the last two centuries, we moved from "95% of people work in agriculture" to "5% of people work in agriculture"?

Yes I did know that. But if we're all going to be machines, what's the point of a McDonald's? Agriculture isn't the only food-related industry. There are restaurants, supermarkets, packing, butchering, the left half of all Wal-Mart stores. Food would be meaningless when we're all packing a rechargeable lithium-ion cell.

algalon:
Yes I did know that. But if we're all going to be machines, what's the point of a McDonald's? Agriculture isn't the only food-related industry. There are restaurants, supermarkets, packing, butchering, the left half of all Wal-Mart stores. Food would be meaningless when we're all packing a rechargeable lithium-ion cell.

-Our production capacity is going to increase
-You claim our demands are going to decrease (we don't need McDonald's, we don't need agriculture, we don't need supermarkets, we only need energy for our rechargeable lithium-ion cell)

What's the problem?! :P

I mean, the whole point of economics is studying how to allocate our scarce production among our infinite demands.

Economists study how societies perform the allocation of these resources - along with how communities often fail to attain optimality and are instead inefficient. More clearly scarcity is our infinite wants hitting up against finite resources.
http://en.wikipedia.org/wiki/Scarcity#Scarcity_in_Economics

Yeah; it's likely that at some point in the future, we're going to have to design a post-scarcity economic system, and it's going to look nothing like what we have now. It should be much better than what we have now, in fact, although, as always, a rapid transition to something new is likely to be painful in the short run.

By the way, none of this actually requires artificial intelligence. So long as the exponential curve continues, so long as new science keeps allowing for new technology and new technology keeps allowing for new science, and new science and technology keep increasing the size of the economy which creates more resources to put into science and technology, growth will still accelerate.

Danyal:

Epic XKCD. Gonna embed here, makes it easier to see for everyone.

(Image snip!)

Are we discussing pertinent XKCD comics?

I'll leave you to your large impenetrable logic-fortresses, made of your impressive walls of text, now.

You should never put a definite date on major things like this. You should say the singularity is just over the horizon. Saying that it's going to happen in 2040 means that when 2040 rolls around, and none of these amazing things have happened, I just get to laugh and say "I told you so". So, see you in 2040.

What's really great is that on DEC 21, I will get a HUGE "I told you so", just like I did on January 1, 2000, during the Y2K scare.

Yosarian2:
Yeah; it's likely that at some point in the future, we're going to have to design a post-scarcity economic system, and it's going to look nothing like what we have now. It should be much better than what we have now, in fact, although, as always, a rapid transition to something new is likely to be painful in the short run.

Yesterday I read in a book about economics that every major technological transition (trains, the combustion engine, electricity, etc.) was accompanied by some kind of economic crisis. I suddenly became a lot less worried about the current economic situation :P

Yosarian2:
By the way, none of this actually requires artificial intelligence. So long as the exponential curve continues, so long as new science keeps allowing for new technology and new technology keeps allowing for new science, and new science and technology keep increasing the size of the economy which creates more resources to put into science and technology, growth will still accelerate.

It seems to me that the incredibly increased production of the future will rely in large part on artificial intelligence/robots... it's already doing that :P

There's this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the
day. People just don't notice it. You've got A.I. systems in cars, tuning the parameters of the fuel injection
systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you
use a piece of Microsoft software, you've got an A.I. system trying to figure out what you're doing, like
writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated
characters, they're all little A.I. characters behaving as a group. Every time you play a video game, you're
playing against an A.I. system.
-RODNEY BROOKS, DIRECTOR OF THE MIT AI LAB

WTF! The captcha advises me to "save yourself"! :O

cthulhuspawn82:
You should never put a definite date on major things like this.

Why not? 2045 is Kurzweil's conservative estimate. If we haven't reached the Singularity by 2046, and it isn't even obvious that Kurzweil's idea is progressing but just isn't 'there' yet (we haven't simulated big parts of the brain, we haven't made nanobots, et cetera), then he has failed, simple as that.

cthulhuspawn82:
You should say the singularity is just over the horizon.

Did you know that Jesus predicted that the whole revelation stuff would happen before the generation he was speaking to passed away?

Matthew 24
30 "Then will appear the sign of the Son of Man in heaven. And then all the peoples of the earth[c] will mourn when they see the Son of Man coming on the clouds of heaven, with power and great glory.[d] 31 And he will send his angels with a loud trumpet call, and they will gather his elect from the four winds, from one end of the heavens to the other.

32 Now learn this lesson from the fig tree: As soon as its twigs get tender and its leaves come out, you know that summer is near. 33 Even so, when you see all these things, you know that it[e] is near, right at the door. 34 Truly I tell you, this generation will certainly not pass away until all these things have happened."

The fact that Jesus' prediction utterly failed doesn't seem to bother people.

cthulhuspawn82:
Saying that it's going to happen in 2040 means that when 2040 rolls around, and none of these amazing things have happened, I just get to laugh and say "I told you so". So, see you in 2040.

2045, 2045.

cthulhuspawn82:
What's really great is that on DEC 21, I will get a HUGE "I told you so", just like I did on January 1, 2000, during the Y2K scare.

I do think Kurzweil backs up his ideas rather well, way better than the Y2K-people or the Maya-calendar-believers.

Maybe you'd like to read this;
How My Predictions Are Faring
Ray Kurzweil
October 2010
http://www.kurzweilai.net/predictions/download.php (No worries, you won't have to download anything, you can read a PDF online)

Danyal:

Yosarian2:
By the way, none of this actually requires artificial intelligence. So long as the exponential curve continues, so long as new science keeps allowing for new technology and new technology keeps allowing for new science, and new science and technology keep increasing the size of the economy which creates more resources to put into science and technology, growth will still accelerate.

It seems to me that the incredibly increased production of the future will rely in large part on artificial intelligence/robots... it's already doing that :P

It's going to rely partly on more and more advanced computers, sure. That doesn't necessarily mean what we could call "true AI", though.

Don't get me wrong; I don't think AI is impossible. The human brain exists, following all the laws of physics, so obviously it's possible for a physical object running on electrical and chemical impulses to have intelligence, and there's no theoretical reason that we won't eventually accomplish the same feat in some other way. It might be a ways off, though.

Yosarian2:
It's going to rely partly on more and more advanced computers, sure. That doesn't necessarily mean what we could call "true AI", though.

Don't get me wrong; I don't think AI is impossible. The human brain exists, following all the laws of physics, so obviously it's possible for a physical object running on electrical and chemical impulses to have intelligence, and there's no theoretical reason that we won't eventually accomplish the same feat in some other way. It might be a ways off, though.

By 'true AI' do you mean something that can pass the Turing Test?

Danyal:

Yosarian2:
It's going to rely partly on more and more advanced computers, sure. That doesn't necessarily mean what we could call "true AI", though.

Don't get me wrong; I don't think AI is impossible. The human brain exists, following all the laws of physics, so obviously it's possible for a physical object running on electrical and chemical impulses to have intelligence, and there's no theoretical reason that we won't eventually accomplish the same feat in some other way. It might be a ways off, though.

By 'true AI' do you mean something that can pass the Turing Test?

The Turing test is insufficient in my opinion. Even if the programmers weren't simply being clever about inserting pre-programmed answers, all that passing would mean is that the program was clever enough to sound human when posed a series of questions.

Now, that is a significant hurdle in and of itself, don't get me wrong, and I look forward to the point that someone passes that test, but it is not necessarily a mark of true intelligence. I don't know what benchmarks Yosarian considers important, but in my opinion, a true AI would be one with the ability to learn from experience in a manner that it can adapt to situations it was never programmed to deal with.

Since this thread's looking like it's still going to be alive in 2040, I thought I'd hop back in again & post a brief description of reductionism, as I think that's the main problem with simulating a human brain from the bottom up.

I have no doubt that some sentient form of AI is possible, and that it's limited by current hardware constraints. But everyone here is using different definitions of AI, and it's all really confusing. The first sentient AI is pretty unlikely to be anything like a human. And I'd wager that it's probably going to be really buggy & inferior to most of our minds.

Esotera:
Since this thread's looking like it's still going to be alive in 2040, I thought I'd hop back in again & post a brief description of reductionism, as I think that's the main problem with simulating a human brain from the bottom up.

The brain of a vertebrate is the most complex organ of its body. In a typical human the cerebral cortex (the largest part) is estimated to contain 15-33 billion neurons,[1] each connected by synapses to several thousand other neurons. These neurons communicate with one another by means of long protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to distant parts of the brain or body targeting specific recipient cells.
http://en.wikipedia.org/wiki/Brain

If we can exactly replicate human neurons, create 33 billion of them, and connect each of them to thousands of others... well, I'm quite sure that once we've done that, the hardest part of 'recreating a human brain' has been done. 33 billion! That's a lot. That's complex.
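A quick back-of-envelope in Python gives a feel for that scale (the neuron count is from the Wikipedia quote above; the synapses-per-neuron and bytes-per-connection figures are my own assumptions, purely for illustration):

```python
# Rough scale of a synapse-level brain model. Neuron count is from the
# Wikipedia quote above; the other two numbers are assumptions.
neurons = 33e9                # upper end of the 15-33 billion cortex estimate
synapses_per_neuron = 7_000   # "several thousand" -> assume ~7,000
bytes_per_synapse = 4         # assume one 32-bit weight per connection

synapses = neurons * synapses_per_neuron        # ~2.3e14 connections
memory_bytes = synapses * bytes_per_synapse     # ~9e14 bytes

print(f"synapses: {synapses:.1e}")
print(f"memory for the weights alone: {memory_bytes / 1e15:.1f} petabytes")
```

And that's only static storage for connection strengths; the wiring diagram and the dynamics are the genuinely hard part.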

Heronblade:
The Turing test is insufficient in my opinion. Even if the programmers weren't simply being clever about inserting pre-programmed answers, all that passing would mean is that the program was clever enough to sound human when posed a series of questions.

How would someone be able to script a proper answer for every possible question? And remember, it's not just answering isolated questions; it's a conversation, and the second question will probably build on the AI's first answer.

Heronblade:
Now, that is a significant hurdle in and of itself, don't get me wrong, and I look forward to the point that someone passes that test, but it is not necessarily a mark of true intelligence. I don't know what benchmarks Yosarian considers important, but in my opinion, a true AI would be one with the ability to learn from experience in a manner that it can adapt to situations it was never programmed to deal with.

I've seen countless examples of robots and AIs learning from experience, and while it's impressive, it isn't as impressive as truly passing the Turing Test.

Danyal:

Heronblade:
Now, that is a significant hurdle in and of itself, don't get me wrong, and I look forward to the point that someone passes that test, but it is not necessarily a mark of true intelligence. I don't know what benchmarks Yosarian considers important, but in my opinion, a true AI would be one with the ability to learn from experience in a manner that it can adapt to situations it was never programmed to deal with.

I've seen countless examples of robots and AIs learning from experience, and while it's impressive, it isn't as impressive as truly passing the Turing Test.

Uh, while cyborg technology is never remotely boring, using a rat brain to run the processing for your robot is not quite the same as writing an adaptive program.

In addition, the only semi-adaptive programs I've seen to date were programmed for the situations they were put in, and given a specific process to use to gain the information not expressly given. Again, impressive, but not quite what I'm referring to.

I still fail to see how the rapid advancement of artificial intelligence guarantees me immortality. If it does, could someone explain how? If not, misleading title is misleading.

Promethax:
I still fail to see how the rapid advancement of artificial intelligence guarantees me immortality. If it does, could someone explain how? If not, misleading title is misleading.

In the Industrial Revolution, our ability to use 'energy' grew enormously;

image
image

With a huge impact;
image

That's a singularity. This is not science fiction, this has already happened.

In the near future, we'll do the same for intelligence;
image
image
image

Circa 2050, you'll be able to buy the computational power equivalent to all human brains for only $1000. If 'intelligence' is so abundant, questions like "What causes aging, and how do we stop it?" won't be hard to answer.
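For what it's worth, here is roughly where a date like that comes from, as a hedged back-of-envelope (every input below is an assumed, Kurzweil-style estimate, not a measured fact, and the answer swings by about a decade depending on the doubling time you assume):

```python
import math

# Back-of-envelope check of the "all human brains for $1,000 circa 2050" claim.
# Every input is an assumed, Kurzweil-style estimate, not a measured fact.
brain_cps = 1e16                 # assumed calculations per second per brain
target_cps = brain_cps * 1e10    # ~10 billion brains -> ~1e26 cps
start_cps = 1e10                 # assumed: roughly what $1,000 bought circa 2010

doublings = math.log2(target_cps / start_cps)   # ~53 doublings needed
for years_per_doubling in (1.0, 0.75):          # assumed price-performance doubling times
    year = 2010 + doublings * years_per_doubling
    print(f"doubling every {years_per_doubling} yr -> ~{year:.0f}")
# Prints ~2063 and ~2050: the date hinges entirely on the assumed doubling time.
```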

Promethax:
I still fail to see how the rapid advancement of artificial intelligence guarantees me immortality. If it does, could someone explain how? If not, misleading title is misleading.

It doesn't

The concept, as Danyal has already shown, is that technology is advancing at an exponential rate. It is entirely probable, though by no means inevitable, that the key to functional immortality will be discovered by 2040.

That does not however guarantee anyone access to that particular advancement. In fact, if such a discovery were made, I'd be fighting tooth and nail to prevent any more than maybe 1% of the population from getting it, not out of greed or a misplaced sense of elitism, but necessity. If functional immortality is granted to the world's population, the problems we already face with an overpopulated planet will very quickly spiral out of control. The smartest AI we could ever possibly build cannot allocate enough resources to counter the exponentially growing consumption rates of a population that has no deaths due to natural causes, without either deliberately causing a lot more "unnatural" deaths, or preventing all but a tiny rate of reproduction, and hoping people don't go utterly insane after remaining alive for too long.

EDIT: I should mention, the above is one scenario in which immortality is granted to human beings as they are. Another possible path involves uploading one's mind to a digital consciousness. Aside from the continuity issues of copying brain patterns and assuming it is the same thing and not just an effective clone, this would radically change the very nature of your being, a price many, including myself, would not be willing to pay.

The same math that predicts a technological singularity also predicts things like infinite-bladed razors.

image

Kahunaburger:
The same math that predicts a technological singularity also predicts things like infinite-bladed razors.

And why would that not be a valid prediction if...[1]
-it has a history of more than 100 years of the same growth
-it is easy to point to new technologies that could sustain this increased number of blades
-an increase in the number of blades is very useful
-the physical limit on the number of blades is far away?

Heronblade:
That does not however guarantee anyone access to that particular advancement. In fact, if such a discovery were made, I'd be fighting tooth and nail to prevent any more than maybe 1% of the population from getting it, not out of greed or a misplaced sense of elitism, but necessity.

We're talking about 2040. The Singularity means 20,000 years of 2006-progress in this century. Basically you're talking about a battle in 12040.

"As I see it, the main problem in designing a plausible 23rd century these days isn't lack of grandeur, it's the imminence of changes so fundamental and unpredictable they're likely to make the dramas of 2298 as unintelligible to us as the Microsoft Anti-Trust Suit would be to Joan of Arc."
- Justin B. Rye
http://tvtropes.org/pmwiki/pmwiki.php/Main/TheSingularity

Heronblade:
If functional immortality is granted to the world's population, the problems we already face with an overpopulated planet will very quickly spiral out of control.

"We already face"? All over the Western World, populations are shrinking, the population increase is because of immigration.

image
http://www.marathon.uwc.edu/geography/demotrans/demtran.htm[2]

Natality and mortality have both decreased by 75% - I don't see why the last 25% would be a barrier that 'the smartest AI we could ever possibly build' couldn't overcome.

[1] Of course, 'infinity' doesn't exist, but try writing down exactly how many bits one terabyte is.
[2] It was so boring and obvious to learn about this; I didn't think I'd have to use it so often
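To put rough numbers on that natality/mortality point, here's a toy projection (my own sketch; the rates are invented for illustration, not data):

```python
# Toy population model: an immortal population with a low birth rate versus a
# mortal one with ordinary rates. All rates are invented for illustration.
def project(pop, birth_rate, death_rate, years=100):
    for _ in range(years):
        pop *= 1 + birth_rate - death_rate
    return pop

start = 9e9  # roughly the level the thread mentions

print(f"immortal, 0.5% births, 0.0% deaths: {project(start, 0.005, 0.000):.2e}")  # ~1.5e10
print(f"mortal,   2.0% births, 0.8% deaths: {project(start, 0.020, 0.008):.2e}")  # ~3.0e10
```

The point of the toy model is just that the birth-rate side of the ledger matters at least as much as whether people stop dying.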

I don't want to be immortal...ever.

Why have thousands of years alone?

No thanks.

Danyal:

Heronblade:
If functional immortality is granted to the world's population, the problems we already face with an overpopulated planet will very quickly spiral out of control.

"We already face"? All over the Western World, populations are shrinking, the population increase is because of immigration.

snip

Natality and mortality have both decreased by 75% - I don't see why the last 25% would be a barrier that 'the smartest AI we could ever possibly build' couldn't overcome.

I said world population for a reason. You can't confine your statistics to just the western countries when dealing with a global problem. Growth rate worldwide has gone down somewhat since its peak in the 1960s, but we are still slated to hit the 10 billion mark before this singularity of yours might happen.

Heronblade:

Promethax:
I still fail to see how the rapid advancement of artificial intelligence guarantees me immortality. If it does, could someone explain how? If not, misleading title is misleading.

It doesn't

The concept, as Danyal has already shown, is that technology is advancing at an exponential rate. It is entirely probable, though by no means inevitable, that the key to functional immortality will be discovered by 2040.

That does not however guarantee anyone access to that particular advancement. In fact, if such a discovery were made, I'd be fighting tooth and nail to prevent any more than maybe 1% of the population from getting it, not out of greed or a misplaced sense of elitism, but necessity. If functional immortality is granted to the world's population, the problems we already face with an overpopulated planet will very quickly spiral out of control. The smartest AI we could ever possibly build cannot allocate enough resources to counter the exponentially growing consumption rates of a population that has no deaths due to natural causes, without either deliberately causing a lot more "unnatural" deaths, or preventing all but a tiny rate of reproduction, and hoping people don't go utterly insane after remaining alive for too long.

EDIT: I should mention, the above is one scenario in which immortality is granted to human beings as they are. Another possible path involves uploading one's mind to a digital consciousness. Aside from the continuity issues of copying brain patterns and assuming it is the same thing and not just an effective clone, this would radically change the very nature of your being, a price many, including myself, would not be willing to pay.

Why not just make it so that a person who wants to live forever cannot reproduce?

Danyal:
Circa 2050, you'll be able to buy the computational power equivalent to all human brains for only $1000. If 'intelligence' is so abundant, questions like "What causes aging, and how do we stop it?" won't be hard to answer.

There's a simple problem here.

Every machine on your graph is based on harnessing the properties of silicon. However, silicon has fairly clear limits in terms of what it can do and how far it can be miniaturized. Thus, there will come a point in the not too distant future when silicon based technology will cease to be cost-effective to improve upon. At that point, unless there is a better material already in place to replace silicon, the rate of increase will fall off.

What you're doing is like a guy in the late 19th century looking at the growth in the efficiency of steam cars and saying that you'd have steam-powered cars going at 350 miles an hour by 1940, when actually it took over a hundred years to break a record set in 1906. Technology has limits, and silicon chips will never be as efficient as human brains.

I don't think anyone right now is suggesting that the truly "intelligent" computers will be silicon based, so why measure progress towards that point using silicon based technology?

Valis88:
I don't want to be immortal...ever.

Why have thousands of years alone?

No thanks.

What do you mean 'alone'? Plenty of other people would be immortal too. Except for a few luddites (who would soon die out anyway).

evilthecat:

Every machine on your graph is based on harnessing the properties of silicon.
...
I don't think anyone right now is suggesting that the truly "intelligent" computers will be silicon based, so why measure progress towards that point using silicon based technology?

Ever heard of 'quantum computing'?
There is still a lot of room for growth. Computing power will keep growing throughout the 21st century, and the growth will keep accelerating too.

GM.Casper:
Ever heard of 'quantum computing'?

Of course.

Can you guarantee that the abilities of quantum based computers will exceed those of the most powerful silicon based computers before the limits of silicon miniaturization are reached? Bear in mind that the most powerful quantum computers built to date cost millions to develop and can only perform very simple mathematical tasks.

The real solution for improving on silicon (in my opinion) is far more likely to be carbon based computing (biocomputing) because it's just so much cheaper to develop, but this isn't the point. The point is that technology is not just linear, it's also lateral. You don't just progress in a straight line, sometimes you have to take a step back and try a different path.

Now, it may be that you're correct and the switch from silicon to whatever replaces it will be smooth enough that the curve will remain unaffected, just like internal combustion engines only took a few years to rival and then exceed the capabilities of steam engines with no noticeable effect on the growth in land speed.

However, suggesting that it's guaranteed when these technologies are still taking their first baby steps is just overly optimistic and discredits the whole idea.

Heronblade:
I said world population for a reason. You can't confine your statistics to just the western countries when dealing with a global problem. Growth rate worldwide has gone down somewhat since its peak in the 1960s, but we are still slated to hit the 10 billion mark before this singularity of yours might happen.

"We've already dealt with the biggest part of the problem for 75% in large parts of the world" is something completely different from "an insolvable problem, even for the smartest AI".

evilthecat:

There's a simple problem here.

Every machine on your graph is based on harnessing the properties of silicon. However, silicon has fairly clear limits in terms of what it can do and how far it can be miniaturized. Thus, there will come a point in the not too distant future when silicon based technology will cease to be cost-effective to improve upon. At that point, unless there is a better material already in place to replace silicon, the rate of increase will fall off.

image
Doesn't look like silicon, silicon, silicon, silicon, silicon to me.

evilthecat:

What you're doing is like a guy in the late 19th century looking at the growth in the efficiency of steam cars and saying that you'd have steam-powered cars going at 350 miles an hour by 1940, when actually it took over a hundred years to break a record set in 1906. Technology has limits, and silicon chips will never be as efficient as human brains.

Where did I claim that it must be silicon? I was just talking about computational power in general.
And actually, when you're talking about "speeds of cars", your prediction is very accurate, so thank you for the compliment. In 1938, a car went 350.2 mph.
http://en.wikipedia.org/wiki/Land_speed_record

evilthecat:

I don't think anyone right now is suggesting that the truly "intelligent" computers will be silicon based, so why measure progress towards that point using silicon based technology?

As I already explained, I'm not doing that. But your question is intelligent and quite important, so I'll let Kurzweil address it.

The Bridge to 3-D Molecular Computing. Intermediate steps are already under way: new technologies that will lead
to the sixth paradigm of molecular three-dimensional computing include nanotubes and nanotube circuitry, molecular
computing, self-assembly in nanotube circuits, biological systems emulating circuit assembly, computing with DNA,
spintronics (computing with the spin of electrons), computing with light, and quantum computing. Many of these
independent technologies can be integrated into computational systems that will eventually approach the theoretical
maximum capacity of matter and energy to perform computation and will far outpace the computational capacities of a
human brain.
One approach is to build three-dimensional circuits using "conventional" silicon lithography. Matrix
Semiconductor is already selling memory chips that contain vertically stacked planes of transistors rather than one flat
layer. Since a single 3-D chip can hold more memory, overall product size is reduced, so Matrix is initially targeting
portable electronics, where it aims to compete with flash memory (used in cell phones and digital cameras because it
does not lose information when the power is turned off). The stacked circuitry also reduces the overall cost per bit.

Nanotubes Are Still the Best Bet. In The Age of Spiritual Machines, I cited nanotubes-using molecules organized
in three dimensions to store memory bits and to act as logic gates-as the most likely technology to usher in the era of
three-dimensional molecular computing. Nanotubes, first synthesized in 1991, are tubes made up of a hexagonal
network of carbon atoms that have been rolled up to make a seamless cylinder. Nanotubes are very small: single-wall
nanotubes are only one nanometer in diameter, so they can achieve high densities.
They are also potentially very fast. Peter Burke and his colleagues at the University of California at Irvine recently
demonstrated nanotube circuits operating at 2.5 gigahertz (GHz). However, in Nano Letters, a peer-reviewed journal
of the American Chemical Society, Burke says the theoretical speed limit for these nanotube transistors "should be
terahertz (1 THz = 1,000 GHz), which is about 1,000 times faster than modern computer speeds." One cubic inch of
nanotube circuitry, once fully developed, would be up to one hundred million times more powerful than the human
brain.

Computing with Molecules. In addition to nanotubes, major progress has been made in recent years in computing
with just one or a few molecules. The idea of computing with molecules was first suggested in the early 1970s by
IBM's Avi Aviram and Northwestern University's Mark A. Ratner. At that time, we did not have the enabling
technologies, which required concurrent advances in electronics, physics, chemistry, and even the reverse engineering
of biological processes for the idea to gain traction.
In 2002 scientists at the University of Wisconsin and University of Basel created an "atomic memory drive" that
uses atoms to emulate a hard drive. A single silicon atom could be added or removed from a block of twenty others
using a scanning tunneling microscope. Using this process, researchers believe, the system could be used to store
millions of times more data on a disk of comparable size-a density of about 250 terabits of data per square inch-
although the demonstration involved only a small number of bits.

Self-Assembly. Self-assembling of nanoscale circuits is another key enabling technique for effective nanoelectronics.
Self-assembly allows improperly formed components to be discarded automatically and makes it possible for the
potentially trillions of circuit components to organize themselves, rather than be painstakingly assembled in a top-down
process. It would enable large-scale circuits to be created in test tubes rather than in multibillion-dollar factories,
using chemistry rather than lithography, according to UCLA scientists. Purdue University researchers have already
demonstrated self-organizing nanotube structures, using the same principle that causes DNA strands to link together in
stable structures.

Computing with Spin. In addition to their negative electrical charge, electrons have another property that can be
exploited for memory and computation: spin. According to quantum mechanics, electrons spin on an axis, similar to
the way the Earth rotates on its axis. This concept is theoretical, because an electron is considered to occupy a point in
space, so it is difficult to imagine a point with no size that nonetheless spins. However, when an electrical charge
moves, it causes a magnetic field, which is real and measurable. An electron can spin in one of two directions,
described as "up" and "down," so this property can be exploited for logic switching or to encode a bit of memory.
The exciting property of spintronics is that no energy is required to change an electron's spin state. Stanford
University physics professor Shoucheng Zhang and University of Tokyo professor Naoto Nagaosa put it this way: "We
have discovered the equivalent of a new 'Ohm's Law' [the electronics law that states that current in a wire equals
voltage divided by resistance]....[It] says that the spin of the electron can be transported without any loss of energy, or
dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor
industry, such as gallium arsenide. That's important because it could enable a new generation of computing devices."
The potential, then, is to achieve the efficiencies of superconducting (that is, moving information at or close to the
speed of light without any loss of information) at room temperature. It also allows multiple properties of each electron
to be used for computing, thereby increasing the potential for memory and computational density.

Computing with Light. Another approach to SIMD computing is to use multiple beams of laser light in which
information is encoded in each stream of photons. Optical components can then be used to perform logical and
arithmetic functions on the encoded information streams. For example, a system developed by Lenslet, a small Israeli
company, uses 256 lasers and can perform eight trillion calculations per second by performing the same calculation on
each of the 256 streams of data. The system can be used for applications such as performing data compression on
256 video channels.

Quantum Computing. Quantum computing is an even more radical form of SIMD parallel processing, but one that is
in a much earlier stage of development compared to the other new technologies we have discussed. A quantum
computer contains a series of qubits, which essentially are zero and one at the same time. The qubit is based on the
fundamental ambiguity inherent in quantum mechanics. In a quantum computer, the qubits are represented by a
quantum property of particles-for example, the spin state of individual electrons. When the qubits are in an
"entangled" state, each one is simultaneously in both states. In a process called "quantum decoherence" the ambiguity
of each qubit is resolved, leaving an unambiguous sequence of ones and zeroes. If the quantum computer is set up in
the right way, that decohered sequence will represent the solution to a problem. Essentially, only the correct sequence
survives the process of decoherence.

A Virtually Unlimited Limit. As I discussed in chapter 3 an optimally organized 2.2-pound computer using
reversible logic gates has about 10^25 atoms and can store about 10^27 bits. Just considering electromagnetic interactions
between the particles, there are at least 10^15 state changes per bit per second that can be harnessed for computation,
resulting in about 10^42 calculations per second in the ultimate "cold" 2.2-pound computer. This is about 10^16 times
more powerful than all biological brains today. If we allow our ultimate computer to get hot, we can increase this
further by as much as 10^8-fold. And we obviously won't restrict our computational resources to one kilogram of matter
but will ultimately deploy a significant fraction of the matter and energy on the Earth and in the solar system and then
spread out from there.
Specific paradigms do reach limits. We expect that Moore's Law (concerning the shrinking of the size of
transistors on a flat integrated circuit) will hit a limit over the next two decades. The date for the demise of Moore's
Law keeps getting pushed back. The first estimates predicted 2002, but now Intel says it won't take place until 2022.
But as I discussed in chapter 2, every time a specific computing paradigm was seen to approach its limit, research
interest and pressure increased to create the next paradigm. This has already happened four times in the century-long
history of exponential growth in computation (from electromagnetic calculators to relay-based computers to vacuum
tubes to discrete transistors to integrated circuits). We have already achieved many important milestones toward the
next (sixth) paradigm of computing: three-dimensional self-organizing circuits at the molecular level. So the
impending end of a given paradigm does not represent a true limit.
There are limits to the power of information technology, but these limits are vast. I estimated the capacity of the
matter and energy in our solar system to support computation to be at least 10^70 cps (see chapter 6). Given that there
are at least 10^20 stars in the universe, we get about 10^90 cps for it, which matches Seth Lloyd's independent analysis. So
yes, there are limits, but they're not very limiting.
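Just to check the arithmetic in that last passage (the inputs are Kurzweil's own estimates, reproduced here as assumptions, not established figures):

```python
# Sanity check of the "ultimate 2.2-pound computer" figures quoted above.
# All inputs are Kurzweil's estimates, not established numbers.
bits = 1e27                     # bits storable in the optimal ~1 kg computer
changes_per_bit_per_sec = 1e15  # harnessable state changes per bit per second
ultimate_cps = bits * changes_per_bit_per_sec      # 1e42 cps, as quoted

all_brains_cps = 1e10 * 1e16    # ~10 billion brains at ~1e16 cps each = 1e26 cps

print(f"ultimate 1 kg computer: {ultimate_cps:.0e} cps")
print(f"ratio to all human brains: {ultimate_cps / all_brains_cps:.0e}")  # 1e16, matching the text
```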

Just watching some Watchmen, and while I love the Dr Manhattan character, knowing about the Singularity really ruins the story.

This man knows everything... he can reassemble human bodies. Yet he doesn't understand our biology? He doesn't understand the way we think? He cannot cure our diseases?

A supreme intelligence who actually helps us would cause tremendous change, yet there isn't a lot of change.

Danyal:
Just watching some Watchmen, and while I love the Dr Manhattan character, knowing about the Singularity really ruins the story.

This man knows everything... he can reassemble human bodies. Yet he doesn't understand our biology? He doesn't understand the way we think? He cannot cure our diseases?

A supreme intelligence who actually helps us would cause tremendous change, yet there isn't a lot of change.

Dr. Manhattan doesn't know everything; he's not omniscient.
The specific abilities of Dr. M:

- can see his own past, present and future at the same time unless disturbed by a quantum event
- can manipulate matter and energy on the subatomic level
- can perceive the world around him on a subatomic level

This doesn't make him a physician.

Specifics stated what he -CAN'T- do:

- bring people back from the dead (i.e. reassemble bodies -other- than his own, which isn't a human body in the first place, just an avatar)
- perceive other individuals' futures unless they intersect with his own timeline
- perceive everything (no omniscience or farsight, he is still limited to line of sight)

Dr. M was a physicist, his intellect was already there, he only gained -more- insight into physics when becoming Dr. M.

adamtm:
Dr. Manhattan doesn't know everything; he's not omniscient.
The specific abilities of Dr. M:

- can see his own past, present and future at the same time unless disturbed by a quantum event
- can manipulate matter and energy on the subatomic level
- can perceive the world around him on a subatomic level

This doesn't make him a physician.

Specifics stated what he -CAN'T- do:

- bring people back from the dead (i.e. reassemble bodies -other- than his own, which isn't a human body in the first place, just an avatar)
- perceive other individuals' futures unless they intersect with his own timeline
- perceive everything (no omniscience or farsight, he is still limited to line of sight)

Dr. M was a physicist, his intellect was already there, he only gained -more- insight into physics when becoming Dr. M.

Okay, that makes it a little more clear... But still, he can reassemble his body! He must understand biology pretty well.

Danyal:

adamtm:
Dr. Manhattan doesn't know everything; he's not omniscient.
The specific abilities of Dr. M:

- can see his own past, present and future at the same time unless disturbed by a quantum event
- can manipulate matter and energy on the subatomic level
- can perceive the world around him on a subatomic level

This doesn't make him a physician.

Specifics stated what he -CAN'T- do:

- bring people back from the dead (i.e. reassemble bodies -other- than his own, which isn't a human body in the first place, just an avatar)
- perceive other individuals' futures unless they intersect with his own timeline
- perceive everything (no omniscience or farsight, he is still limited to line of sight)

Dr. M was a physicist, his intellect was already there, he only gained -more- insight into physics when becoming Dr. M.

Okay, that makes it a little more clear... But still, he can reassemble his body! He must understand biology pretty well.

Like I said, it's not his body, it's a representation of it.
And it's only -his- "body", and he tried for, afaik, six months to reassemble it (if you watch the backstory).
He doesn't do it with biology or his knowledge. In essence he needed to figure out where every subatomic particle of "him" went when he was dispersed in the chamber.

All the machine did was remove the bonds between the particles in his body; he was essentially "spread out" and needed to figure out where those parts "fit" to manifest again.
All he did was restore the bonds; he didn't make a new body from scratch.

There's a problem people aren't taking into account here: we can't support these advanced machines. Computing technology (or rather, miniaturisation of its components) is advancing rapidly, but battery and power supplies are not advancing at a rate to match. We are already at a state where computing technology has the second largest carbon footprint on earth (just after air travel), and special precautions are being put into place to manage the energy cost, such as transforming Iceland into Earth's central server database, where cheap geothermal energy and lower temperatures mean less cost in cooling (currently the highest energy cost in computing, at around x4 running costs). On top of this, energy storage is reaching a plateau with the discovery of graphene (a material with an atomic structure a single molecule thick, offering the best surface-to-mass ratio available). Don't shoot me if I'm wrong here, not really my field.

All this points to computing technology taking a dive in complexity, and we are already seeing this happen. Computers are being developed to be simple devices, networked together and acting ubiquitously in the environment. There will be no great and glorious machine god, just a lot of basic components no more complex than sensors and single-chip devices, running preprogrammed responses and tied together in a global network designed to serve our needs with limited input from ourselves.

Djinn8:

All this points to computing technology taking a dive in complexity, and we are already seeing this happen. Computers are being developed to be simple devices, networked together and acting ubiquitously in the environment. There will be no great and glorious machine god, just a lot of basic components no more complex than sensors and single-chip devices, running preprogrammed responses and tied together in a global network designed to serve our needs with limited input from ourselves.

You mean like a neuronal network?

Yeaaaaaah...

Promethax:
I still fail to see how the rapid advancement of artificial intelligence guarantees me immortality. If it does, could someone explain how? If not, misleading title is misleading.

As I said before, the singularity doesn't necessarily mean AI; it's just a measure of the acceleration of advancing science, technology, industry, and the global economy, all of which feed into each other.

Immortality of one type or another, or at least extremely long life, is likely to be one of the effects of advancing technology; the increase in average human lifespan has been going on for some time now, has always been one of the main goals of science, and is likely to accelerate. The details, of course, are fuzzy at this point, and it's always possible we'll hit some kind of hard limit where it's simply not possible to continue to improve, but we haven't seen any sign of it yet.

Lilani:

Danyal:
Creating artificial intelligence is one of the most important tasks for humanity in the coming decades.

Is that so? See, I would have put bringing third world countries up to speed by helping them establish proper governments, functioning economies, and helping them solve their hunger and disease problems among the most important tasks for humanity in the coming decades. I mean AI, instant-knowledge, and first contact would be cool and all, but how impressed do you think whatever aliens we find out there would be if we've got all this technology and all these resources, but there are still millions out there walking miles a day for buckets of dirty water and dying of diseases we cured decades ago?

Again, tomorrow is nice, but only today is going to get us there. If you start skipping steps you begin to lose perspective of the real problems in the world.

^Smart guy, good points.

We also have to worry about basic human tendencies, when it comes to future visions/predictions/etc. The Human Being, in general, is unstable, emotional, unpredictable, and more delicate than a block of wood. We war with each other constantly over the pettiest of things, and we argue over EVERY SINGLE THING, especially when it comes to democracy. We are deceptive, destructive, and weak. The majority of human beings are either steeped in religious belief, or extremely engrossed in scientific logic. We have not learned to find a balance between science and religion, and that is key for our success as a human race.

That being said, the emotional instability and basic stubbornness and ignorance of humanity is a very large obstacle when it comes to this sort of scientific advancement. We have to overcome such obstacles as scientific ethics (whether we should combine humans and machinery to make such hybrids), funding (who would pay for such projects), and the public's ability to accept it (for instance, chimeras, human-animal combined fetuses used for stem cell research, are illegal to create in many countries, because we cannot accept humans being combined with any other species).

The machines-replacing-human-thought-and-creativity idea is extremely far-fetched and scary to me. I would sure as hell love to have increased brain functions/immortality/etc. with the assistance of technology, but I would also like to retain my thought, or my ability to think as a human being. Life would be extremely boring without such things, and then I would end up attempting to insult everyone in the universe (reference guess gets a cookie!).

Machines can make calculations, and be programmed to make "guesses" but they don't have the ability to harness the capacity for abstract thought that humans have.

I admire your optimism, but I have to agree with Lilani here. We have to get the other parts of the world up to speed before we can even consider pushing ourselves even farther. To not do so would be sort of like cooking a gourmet steak, and having the ingredients to make a decent side of potatoes and asparagus, but instead using pre-frozen french fries and canned asparagus, then tossing the dish onto the customer's lap with a flourish of a job well done... It would not be complete for you or the customer, and you would probably get fired.
