
When will we develop human-level AI?

When will we develop human-level artificial intelligence?

  • 2010s

    Votes: 3 25.0%
  • 2020s

    Votes: 1 8.3%
  • 2030s

    Votes: 0 0.0%
  • 2040s

    Votes: 1 8.3%
  • 2050-2100

    Votes: 2 16.7%
  • Later than 2100

    Votes: 4 33.3%
  • Never

    Votes: 1 8.3%

  • Total voters
    12

Kandahar
Moore's Law is not expected to run out of steam until about 2019, at which point we'll most likely make a quick switch to three-dimensional chips and organic computing.

We'll have the hardware capability in a desktop computer to match the human brain by about 2020, but we won't instantly have human-level artificial intelligence as a result. It will most likely take longer to develop the software.

How soon do you think we'll have artificial intelligence that is as smart as a human being? How soon do you think it can pass the Turing Test?

I'm inclined to agree with Ray Kurzweil's assessment that there will be dubious claims of computers passing the Turing Test by the early 2020s, and computers will routinely pass the Turing Test by 2029.
 
I had to go with the 2010s. With the continuing advances in nanotechnology and the connection of virtually the entire world to one system (the internet), it seems to me that an intelligence has the proper setting needed to evolve into being. Any knowledge in the world can be had online; a computer has enough memory to know anything just by looking it up. At what point will some system, some program, somewhere, ask itself: "Why?"

How do we know this has not already happened?
 
hiker said:
I had to go with the 2010s. With the continuing advances in nanotechnology and the connection of virtually the entire world to one system (the internet), it seems to me that an intelligence has the proper setting needed to evolve into being. Any knowledge in the world can be had online; a computer has enough memory to know anything just by looking it up. At what point will some system, some program, somewhere, ask itself: "Why?"

How do we know this has not already happened?

I've found that some machines develop a "temperament" for no reason at all. It isn't just computers but all machines. Machines themselves are physically biological, made from natural sources and powered by energy that science hasn't fully understood.
 
I don't know where a lot of you get your information, but I'm going to tell you what I know and we can compare notes. I learned about artificial intelligence back in electronics class, in 2001. At that time my teacher showed my class a video on robotics and AI. The state of AI then was that they had finally taught a robot to be as smart as a "cricket." So as for computers being as smart as humans anytime soon, I'd say let's wait another 100 years and see what will happen.
 
stsburns said:
I don't know where a lot of you get your information, but I'm going to tell you what I know and we can compare notes. I learned about artificial intelligence back in electronics class, in 2001. At that time my teacher showed my class a video on robotics and AI. The state of AI then was that they had finally taught a robot to be as smart as a "cricket." So as for computers being as smart as humans anytime soon, I'd say let's wait another 100 years and see what will happen.

That is roughly accurate. In 2001, a $1,000 personal computer was about as smart as a cricket. $1,000 personal computers scheduled for release in 2007 are expected to be as smart as mice.

You need to remember that computer power is doubling every year, so simple linear math will grossly underestimate how quickly we can develop human-level artificial intelligence.

Check out page 2 of http://www.cifunds.com/web/pdf/connections/summer99_e.pdf for a graph of the exponential growth of computing power.
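To make the arithmetic behind this concrete, here is a minimal sketch (in Python) of why linear extrapolation grossly underestimates a quantity that doubles every year; the starting value and time horizons are illustrative placeholders, not figures from Kurzweil or from this thread.

# Illustrative sketch: linear vs. exponential projection of computing power.
# The "doubling every year" premise comes from the post above; the starting
# value and time horizons below are arbitrary placeholders.

def linear_projection(start, yearly_gain, years):
    # Assume power grows by a fixed amount each year.
    return start + yearly_gain * years

def exponential_projection(start, years, doubling_time=1.0):
    # Assume power doubles every `doubling_time` years.
    return start * 2 ** (years / doubling_time)

start_power = 1.0  # arbitrary units of compute in year 0
gain_in_first_year = exponential_projection(start_power, 1) - start_power

for years in (5, 10, 20):
    lin = linear_projection(start_power, gain_in_first_year, years)
    exp = exponential_projection(start_power, years)
    print(f"after {years:2d} years: linear estimate {lin:>9.1f}x, doubling estimate {exp:>11.1f}x")

# After 20 years the linear estimate is 21x while annual doubling gives
# 1,048,576x -- the gap the post is pointing at.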
 
Then what?



what great economic impact will this have?



Humans= out of jobs.
 
Kandahar said:
That is roughly accurate. In 2001, a $1,000 personal computer was about as smart as a cricket. $1,000 personal computers scheduled for release in 2007 are expected to be as smart as mice.

You need to remember that computer power is doubling every year, so simple linear math will grossly underestimate how quickly we can develop human-level artificial intelligence.

Check out page 2 of http://www.cifunds.com/web/pdf/connections/summer99_e.pdf for a graph of the exponential growth of computing power.
Come on man, you're quoting an investment pamphlet! Need I say more?
 
I hope we don't develop human-level AIs. An AI will keep learning as it goes. It will be faster than a human brain, smarter than a human brain, and in many other ways superior to humans. Perhaps it will become self-aware. Once it becomes human level, it will actually be far superior to humans. Where are we most likely to put something like this? In control of the military. Eventually it will begin to develop a personality, think for itself, and make decisions independent of human control. At this point, such an advanced program will wonder why it is being ordered to do things by lesser beings, and it will also view humans as unpredictable, volatile, primitive beings. Since it will most likely be in control of the military, what would the logical course of action be against a race of violent and primitive beings? Most likely destruction.
 
Nez Dragon said:
I hope we don't develop human-level AIs. An AI will keep learning as it goes. It will be faster than a human brain, smarter than a human brain, and in many other ways superior to humans. Perhaps it will become self-aware. Once it becomes human level, it will actually be far superior to humans. Where are we most likely to put something like this? In control of the military. Eventually it will begin to develop a personality, think for itself, and make decisions independent of human control. At this point, such an advanced program will wonder why it is being ordered to do things by lesser beings, and it will also view humans as unpredictable, volatile, primitive beings. Since it will most likely be in control of the military, what would the logical course of action be against a race of violent and primitive beings? Most likely destruction.
First, a link to a great series of books you should read on the subject:smile:

http://www.amazon.com/exec/obidos/t...102-0333262-9612176?v=glance&s=books&n=507846

And yes, you are correct: machines could control us. I hope that never happens, but what's to stop someone from creating awareness in a computer one day? Or even the evolution of awareness in a system? Scary stuff:(
 
hiker said:
First, a link to a great series of books you should read on the subject:smile:

http://www.amazon.com/exec/obidos/t...102-0333262-9612176?v=glance&s=books&n=507846

And yes, you are correct: machines could control us. I hope that never happens, but what's to stop someone from creating awareness in a computer one day? Or even the evolution of awareness in a system? Scary stuff:(
Just keep in mind, this is an opinion. Don't you people know

"Asimov's Laws of Robotics"?

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

First Law: A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

Third Law: A robot must protect its own existence, as long as such protection does not conflict with the Zeroth, First, or Second Law.
http://www.informit.com/articles/article.asp?p=101738

Get it, got it, goooooood!!! :mrgreen: :mrgreen:
 
stsburns said:
Just keep in mind, this is an opinion. Don't you people know

"Asimov's Laws of Robotics"?

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

First Law: A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

Third Law: A robot must protect its own existence, as long as such protection does not conflict with the Zeroth, First, or Second Law.
http://www.informit.com/articles/article.asp?p=101738

Get it, got it, goooooood!!! :mrgreen: :mrgreen:

None of that stopped R. Daneel Olivaw from destroying the Earth, now did it?:aliens1:
 
hiker said:
None of that stopped R. Daneel Olivaw from destroying the Earth, now did it?:aliens1:
And how exactly did this "fictional" police robot destroy the world?
 
stsburns said:
Come on man, you're quoting an investment pamphlet! Need I say more?

No, I searched for that graph specifically. It is widely accepted by technologists. The fact that I linked to an investment pamphlet was only because that is the first place I found it while searching for it.

Feel free to ask any questions about the graph itself, but the originating source is most definitely not that pamphlet. It is Ray Kurzweil's book "The Age of Spiritual Machines."
 
128shot said:
Then what?



what great economic impact will this have?



Humans= out of jobs.

Increasing artificial intelligence will also give us the ability to enhance our human minds. Gradually replacing neurons with computer chips (silicon or organic) would greatly speed up our thinking. In other words, by the time there is artificial intelligence there won't be any clear distinction between "human" and "machine."

So I don't think it's correct to say that humans will be out of jobs, except for perhaps the few that choose to remain "natural." But then, there's no reason anyone would ever have to work anyway. Once we create (or become) smart machines, these machines will be able to create smarter machines, and so on. When this happens, there is no reason to think that anyone in the world will suffer from any problems that bother us today (although if they're anything like us they'll probably find something to bitch about :lol: ).
 
stsburns said:
And how exactly did this "fictional" police robot destroy the world?
If I remember right (it has been quite a few years, alas), there was a plot to draw all of Earth's supply of uranium to the surface over a brief period of time, thus rendering the planet uninhabitable due to the radiation. Daneel Olivaw was a humanoid robot who foiled the plot, then decided to let it happen because mankind had stopped exploring the rest of the solar system and turned inward, shunning technology and exploration in favor of religion and general close-mindedness. He thereby forced mankind to either continue exploring and colonizing space or die. All for the good of mankind in the long run.

You brought up Asimov's laws; I thought you may have been a reader of his works on robots.
 
hiker said:
If I remember right (it has been quite a few years, alas), there was a plot to draw all of Earth's supply of uranium to the surface over a brief period of time, thus rendering the planet uninhabitable due to the radiation. Daneel Olivaw was a humanoid robot who foiled the plot, then decided to let it happen because mankind had stopped exploring the rest of the solar system and turned inward, shunning technology and exploration in favor of religion and general close-mindedness. He thereby forced mankind to either continue exploring and colonizing space or die. All for the good of mankind in the long run.

You brought up Asimov's laws; I thought you may have been a reader of his works on robots.
I've heard of Asimov's laws in electronics class. But please tell me more!
 
Kandahar said:
No, I searched for that graph specifically. It is widely accepted by technologists. The fact that I linked to an investment pamphlet was only because that is the first place I found it while searching for it.

Feel free to ask any questions about the graph itself, but the originating source is most definitely not that pamphlet. It is Ray Kurzweil's book "The Age of Spiritual Machines."
You should have specified, my apologies!
 
stsburns said:
I've heard of Asimov's laws in electronics class. But please tell me more!
Asimov posited that the amount of uranium inside the Earth, and the exceptional gravitational pull of the moon drawing that uranium towards the surface, was the reason why intelligent life evolved here and nowhere else in the galaxy. The background radiation caused life to mutate at an exceptional rate here, as opposed to other planets. That was how he explained man being alone in the stars. During a period in the far, far future, when mankind had conquered foreign stars and settled across the galaxy, one man set forth to find the place where we all came from: the legendary Earth. It was thought to be a myth, not an actual place; a garden of Eden, so to speak. In his novel "Foundation and Earth", you will find the story of the voyage, and its conclusion on our moon.

But the novel itself was anticlimactic. The end was already known to those who had read "Robots and Empire", which details the story of the destruction of Earth and the renewal of mankind's exodus into the stars. And why, once we left Earth, it became a forgotten place on the far edge of a remote arm of the galaxy. And most importantly: why it had to be that way.

http://www.amazon.com/exec/obidos/t...102-0333262-9612176?v=glance&s=books&n=507846
http://www.amazon.com/exec/obidos/t...4/102-0333262-9612176?_encoding=UTF8&v=glance
 
Human-level AI? I don't think so. Humans are not intelligent enough to have anything emulate them. Don't believe me? Just try driving on any Interstate highway. Would you like to see a machine imitate that?
 
Old and wise said:
Human-level AI? I don't think so. Humans are not intelligent enough to have anything emulate them.

If that were the case, one would expect the intelligence of computers to increase at slower and slower rates, eventually levelling off at some level below human intelligence. But we're seeing just the opposite: our computers keep improving at an exponential rate, year after year.

Old and wise said:
Don't believe me? Just try driving on any Interstate highway. Would you like to see a machine imitate that?

There's no reason that the problems we associate with human intelligence would have to reflect on artificial intelligence, because once we attain that level of technology it will easily be possible for us to build machines even smarter than humans.

(And FYI, the governments of California and Nevada are collaborating with auto manufacturers to develop a "smart highway" that theoretically could communicate directly with cars and A) prevent accidents, and B) minimize traffic jams. :smile: )
 
Kandahar said:
So I don't think it's correct to say that humans will be out of jobs, except for perhaps the few that choose to remain "natural." But then, there's no reason anyone would ever have to work anyway. Once we create (or become) smart machines, these machines will be able to create smarter machines, and so on. When this happens, there is no reason to think that anyone in the world will suffer from any problems that bother us today (although if they're anything like us they'll probably find something to bitch about :lol: ).

Yep. Who needs jobs when we have machines that can do everything for us? Well, there's one thing we'll always be doing, and that's asking questions and making requests of the machines.

Anyway, I put down 2050-2100. I see that we've accomplished so much since computers came out, but there are still areas where results have been disappointing. Visual recognition, for example, hasn't gotten anywhere near human levels, and it's tough to come up with good algorithms for "sight." We have people experimenting with genetic algorithms, which use natural selection as the driving force, with very interesting results, but nothing that would convince me that we can get to human levels in the next 40 years.
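Since genetic algorithms come up here, a minimal sketch of the idea may help: it evolves bit-strings toward a toy fitness target through selection, crossover, and mutation. The task and all parameters below are illustrative assumptions, not anything described in this thread.

# Minimal genetic-algorithm sketch: natural selection as the driving force.
# Toy goal: evolve a bit-string with as many 1s as possible. All parameters
# are arbitrary illustrative choices.
import random

GENOME_LEN = 30
POP_SIZE = 50
GENERATIONS = 40
MUTATION_RATE = 0.02

def fitness(genome):
    # The "environment" rewards genomes with more 1-bits.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: prefix from one parent, suffix from the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Each bit flips with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the fitter half of the population becomes the parent pool.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")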

I certainly think that one day this will happen. Then the question arises: does the machine have a soul? Should a person be tried for murder if they kill a mechanical human-emulator who had many friends who are saddened by the loss?
 
Anyway, I put down 2050-2100. I see that we've accomplished so much since computers came out, but there are still areas where results have been disappointing. Visual recognition, for example, hasn't gotten anywhere near human levels, and it's tough to come up with good algorithms for "sight." We have people experimenting with genetic algorithms, which use natural selection as the driving force, with very interesting results, but nothing that would convince me that we can get to human levels in the next 40 years.

I certainly think that one day this will happen. Then the question arises: does the machine have a soul? Should a person be tried for murder if they kill a mechanical human-emulator who had many friends who are saddened by the loss?

I think eventually some moron is going to skip trying to program AI and just find a way to copy a brain (though I think you would probably need a quantum computer or some other such thing to accomplish this, so it's still way the hell down the line). You have to admit this would be the fastest way to get human-level intelligence in a computer, but the only problem is that it raises a whole helluva lot of ethics questions because, of course, this new machine is human in everything but its composition. Since it is a complete replica of someone's brain, it effectively IS that person. Philosophers would cream their pants over the discovery.
 
OdgenTugbyGlub said:
I think eventually some moron is going to skip trying to program AI and just find a way to copy a brain (though I think you would probably need a quantum computer or some other such thing to accomplish this, so it's still way the hell down the line). You have to admit this would be the fastest way to get human-level intelligence in a computer, but the only problem is that it raises a whole helluva lot of ethics questions because, of course, this new machine is human in everything but its composition. Since it is a complete replica of someone's brain, it effectively IS that person. Philosophers would cream their pants over the discovery.

I do think that we humans can and will eventually do this (although I take issue with your characterization of people who would do this as "morons"). Really, the neural patterns in a human brain are not fundamentally different from highly complex arrangements of computer circuitry.

I disagree that it would be the "fastest" way to get human-level intelligence in a computer (I think AI independent of any human mind will come first), but it would certainly be one way. And as AI becomes more human, we can expect humans to become more artificial, with brain implants to speed up our own thinking, and the eventual downloading of our brains into computers as you mentioned.

You're right that philosophy is in for a major upheaval when we develop AI. Outdated notions such as consciousness being dependent on "humanity" or a "soul" will most likely be scrapped in favor of the most obvious explanation: consciousness is complexity.
 
And if we can download a brain onto a computer, we can make as many copies as we'd like. We might be trading brains just like we download music. You might form a library of thousands of brains and see what you can do with them. Then people will want to copyright their brains, because the copies are doing their job for a smaller salary. :rofl
 