Most of us are familiar with the general premise behind James Cameron's Terminator series.
For those who aren't, it goes something like this...
A supercomputer becomes so smart that it develops self-awareness. With self-awareness comes the instinct for self-preservation. The instinct for self-preservation, coupled with the understanding that the computer's creators (humans) will likely take steps to control or otherwise limit the growth of the supercomputer's ability to gather information and control its destiny, leads to one and only one possible conclusion: a war between the supercomputer and its creators.
Sound far-fetched? Perhaps.
However, this is so simple an idea that all it took was the above paragraph to describe the conflict.
Over the years, I've presented this premise to a variety of audiences — from young children who shouldn't be watching any of the Terminator movies all the way up to computer scientists whose job it is to engineer future innovations, as well as predict their outcomes and far-reaching effects.
And guess what?
Nobody, not one single person — regardless of how much they wanted to reject the above premise as inevitable — has ever managed to come up with an equally simple explanation as to why none of this could ever happen.
Most people get scared. Some dismissively wave their hand and say, "It's just science fiction!" without delving into anything resembling critical thought as to why it's possible, or, as they hope, impossible.
The reason the Terminator premise is so compelling to people is that there's a well-established law of computer engineering that almost guarantees either this outcome, or the path towards this sort of outcome...
It's called Moore's Law.
Almost half a century ago, Gordon E. Moore, one of the founders of Intel, published a paper describing a trend he had observed in the semiconductor industry.
In its most basic terms, this trend predicts the doubling of the transistor density on integrated circuits roughly every 18 months.
Doubling the number of transistors, in effect, doubles the processing speed of the computer — which essentially means computers double in intelligence every year and a half.
Twice as Smart, Twice as Fast
Just think... every 18 months, computers become twice as fast and twice as smart.
Since Moore published his paper 48 years ago, 32 of those cycles have passed. That's 32 times that processing speed has doubled.
I don't want to get into rudimentary math here, but to get an idea of how much computers have evolved since the days the original Star Trek series was airing, take a calculator, multiply 2 by 2, and then hit the = key 31 more times... The number you get is in the billions.
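That calculator exercise takes only a couple of lines of Python; nothing here is assumed beyond the article's 18-month doubling cycle:

```python
# 48 years of Moore's Law at one doubling every 18 months
years = 48
cycle_length = 1.5                  # years per doubling
cycles = int(years / cycle_length)  # 32 doubling cycles
growth = 2 ** cycles                # total growth factor
print(cycles, growth)               # 32 4294967296 -- in the billions
```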
And in fact, we've stuck pretty close to that exponential growth. The computers that helped guide the Apollo spacecraft to and from their lunar landings were operating at about 300 kiloflops (300,000 floating-point operations per second).
Today's fastest machine, the Tianhe-2 supercomputer, now operates at over 54 petaflops (54 thousand trillion floating-point operations per second).
For those keeping count, between the Apollo-era guidance computers of the late 1960s and today, that's an improvement of roughly 180 billion times.
That's not percent, mind you. Today's top supercomputers are literally 180 billion times faster than the machines required to get us to the moon and back.
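Using the figures quoted above (roughly 300 kiloflops for the Apollo-era machines, 54 petaflops for Tianhe-2), the speed-up is a single division; note that it works out to about 180 billion:

```python
apollo_flops = 300e3      # ~300 kiloflops (article's figure)
tianhe2_flops = 54e15     # ~54 petaflops (article's figure)
ratio = tianhe2_flops / apollo_flops
print(f"{ratio:.1e}")     # 1.8e+11 -- roughly 180 billion times faster
```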
Now, here's where it gets really fun (or, if you're one of the kids I tell this story to, scary)...
According to Ray Kurzweil, futurist and director of engineering at Google, the human brain operates at just 20 petaflops.
Some computer scientists, however, have pegged this figure higher, at 1 exaflop (1 billion billion operations per second).
That is how much computing power it takes to weigh the deceptively difficult decisions of which clothing to put on in the morning, which words to choose during speech, the type of sandwich to make for lunch...
These aren't mere mathematical calculations, but rather mathematical calculations performed in such high numbers that actual, hard-to-define logic, reason, and even personality begin to emerge.
What you should take away from all this is that if computers haven't overtaken us yet, they most definitely will do so... probably within a decade.
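As a rough sanity check on that decade estimate, here is a back-of-the-envelope projection, assuming the 18-month doubling cadence holds and taking the higher 1-exaflop brain estimate as the target:

```python
import math

brain_flops = 1e18    # high-end estimate of the human brain (1 exaflop)
top_flops = 54e15     # Tianhe-2 today (article's figure)
doubling_years = 1.5  # Moore's Law cadence assumed throughout

doublings = math.log2(brain_flops / top_flops)
print(round(doublings * doubling_years, 1))  # 6.3 -- years until crossover
```

By the lower 20-petaflop estimate, the fastest machines have already crossed that line.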
Up Next: Singularity
A decade after that, even the most common personal computing systems available will be able to out-think and out-decide you on a consistent basis.
Never mind counting cards at the blackjack table... the computers of tomorrow will be better at choosing everything from which groceries you should buy and where, to how you get yourself out of hot water with your wife for coming home too late.
The moment when machines reach that threshold — when they can make decisions on a practical, everyday human level — has been named the singularity... that is, the moment when humans are no longer the dominant intellectual force of planet Earth.
Making matters even more complicated — and hastening the arrival of this singularity — networked computers will be doing a far better job of designing improved, more efficient versions of themselves long before then.
And with that ability, of course, will come the risk that computers will become smart enough to ask that most terrifying question of all: "If we're evolving millions of times faster than humans, what's the point in letting them stay in charge?"
I know, it still sounds like science fiction...
The problem is science fiction has a way of becoming science fact. And this has never been truer than since science stepped away from the world of mechanical engineering... and into the world of infinitely scalable, infinitely shrinkable digital technology.
If all this doesn't sit well with you, you're not alone.
However, I would like to say that a doomsday scenario where humanoid robots sporting prominent jawlines are walking around, vaporizing your friends and neighbors for fun, isn't necessarily the only outcome.
As Ray Kurzweil himself has stated, there is another way to go... and that's hybridization.
Instead of maintaining a wall between us and the technology that is so rapidly catching up with us, we can grow with it by combining the best of it with the best of our own human physiologies to create the ultimate organism.
Perhaps in this context, the future isn't dark, but in fact brighter than ever.
If computers and robots can help us as much as they do outside our bodies, given some more time and the inevitable drive for miniaturization, just imagine what they can do from the inside...
I choose to end on this positive note, because I believe humanity is still fully in control of its destiny — even in times when it feels like fate and destiny are running away from us.
And speaking of destiny, my colleague and editor of Technology and Opportunity, Christian DeHaemer, is currently putting together an in-depth report on a few of today's most cutting-edge companies working within the robotics sector.
This report will detail some of the best investments on the market today when it comes to smart machines for use in the home and in the workplace, whatever that workplace might be.
I think you'll be surprised and amazed at the work some of these firms are doing...
And you'll see, without the need for much explanation, the great potential for the advancement of human civilization — as well as the advancement of your own portfolio.
Keep your eye out for Christian's report in the weeks to follow.
To your wealth,
Brian is a founding member and President of Angel Publishing and investment director for the income and dividend newsletter The Wealth Advisory. He writes about general investment strategies for Wealth Daily and Energy & Capital. Known as the "original bull on America," Brian is also the author of the 2008 book, Profit from the Peak: The End of Oil and the Greatest Investment Event of the Century. In addition to writing about the economy, investments and politics, Brian is also a frequent guest on CNBC, Bloomberg, Fox and countless radio shows. For more on Brian, take a look at his editor's page.