
Why Transhumanism?

Hoplite, I see you avoided my question and challenge?

Solve it and you have my attention; ignore it and you have no one's attention.

Tim-
 
When you say that human inequality is a human "flaw" you are giving your opinion, not stating an indisputable fact.
True.

this is why I disagree with your proposal, in a nutshell. Glad I'm forward-thinking enough to have pointed that out from the start... The only way to make everyone perfectly equal is to make everyone equally dead...not a good system of governance--although you are consistent in your transhuman theory here.
You seem to be of the opinion that this is an insurmountable problem or that it wouldn't be considered before being implemented. Even if it turns out that you are correct and the use of a synthetic intelligence is impractical or impossible, a global neural network will be a great leap towards the goal of world stability.

yes, but it's also just as powerful a tool for creating the opposite effect as well.
In theory, yes. But how do you fight or subdue a population where information can be passed instantly and statements verified against existing facts?

then how exactly would the system work at all? Some people would refuse to participate altogether, as in government today. Criminals will take advantage and corruption would still go unresolved.
As I have said, think of it like a high-tech radio system where people can turn their links on and off at will to extract what information they want before disconnecting. No one will be forced to have these implants but as society begins to evolve, they will need to seriously consider them or risk being marginalized by the advance of technology. We can't cater to people who refuse to advance based on archaic and emotional connections with an outmoded way of doing things; we don't keep telegraph wires and buggy lanes around for people who didn't want to use the telephone or a car.
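(As a rough illustration of the opt-in "radio" link described above, here is a minimal Python sketch. It is purely hypothetical; every class and name in it is invented for the example and is not part of anyone's actual design.)

class NetworkNode:
    # A shared pool of topic -> published information entries.
    def __init__(self):
        self.topics = {}

    def publish(self, topic, entry):
        self.topics.setdefault(topic, []).append(entry)

    def query(self, topic):
        return list(self.topics.get(topic, []))

class PersonalLink:
    # An opt-in connection; nothing is read or written while it is closed.
    def __init__(self, node):
        self.node = node
        self.connected = False

    def connect(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

    def pull(self, topic):
        if not self.connected:
            raise RuntimeError("link is closed; connect first")
        return self.node.query(topic)

# Tune in, extract what you want, tune out.
net = NetworkNode()
net.publish("astronomy", "The Sun is mostly hydrogen and helium.")
link = PersonalLink(net)
link.connect()
print(link.pull("astronomy"))
link.disconnect()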

that's not only pure speculation, but also completely unsupported by historical fact. The use of the internet has not eliminated disinformation/hate/etc. It is a tool for disseminating information and has no control whatsoever on what information is disseminated or how it is acted upon.
The internet contains very real and factual information that can easily be found if you have basic powers of discrimination, the ability to sort a good source from a bad or do basic research. That is an absolutely unprecedented concept, that the average person can, with very little cost or training, have access to almost any piece of information they want and at near instant speeds. You cannot deny that as the amount of information available to the general public has increased, the negative conditions of the human race have decreased. Technology, on the whole, is an avenue for improvement for humanity.

all speculation. Would not the system be completely unable to make any decision at all? It would be receiving contrary information to nearly every single thing, be it fact, hypothesis, theory, or outright lie.
The system would use what it gathered from the population to synthesize the best possible decision based on logic, reason, human understanding, and the input of the people a decision affected. Keep in mind, a neural net would go a long way towards decreasing the amount of misinformation promulgated, as anything someone inputs can instantly be checked with people who have the knowledge to spot a false statement from a factual one.
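(A toy Python sketch of the decision-synthesis step described above: gather input only from the people a decision actually affects and adopt the option with the most support among them. The data and names are invented purely for illustration, not part of any actual proposal.)

from collections import Counter

def synthesize_decision(population, decision_region):
    # population: list of dicts with 'region' and 'preference' keys.
    affected = [p for p in population if p["region"] == decision_region]
    votes = Counter(p["preference"] for p in affected)
    return votes.most_common(1)[0][0] if votes else None

people = [{"region": "valley", "preference": "build the reservoir"},
          {"region": "valley", "preference": "build the reservoir"},
          {"region": "coast",  "preference": "do nothing"}]
print(synthesize_decision(people, "valley"))   # prints: build the reservoir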

The concepts of law and justice are general concepts that are applied case by case to specific scenarios. Without an understanding of the general principles behind these concepts, they are completely arbitrary.
As these are not concrete concepts, they are arbitrary anyways in that they do not physically exist. This is why a system of logic needs to be tempered with humanity and the input of the people to understand and account for such concepts as justice and law.

I'm not wise enough to know.
I suppose it's something of an academic point, depending on how you actually define wisdom.

All our problems are not based on a lack of common understanding, only some of them.
The root of the vast majority of our problems can be traced back to this source.

I'd like to see an example of a problem that has a root cause elsewhere.

Not now, not ever. The total abandonment of the ideal of equality is unacceptable, yes, but so is the total acceptance of it. Not to mention that it is not even possible (it's an ideal, through and through).
It IS possible to strive for equality while tempering that search with an understanding that hampering people in the name of equality is akin to the idea of "fighting for peace".

pot-ai-toes, pot-ah-toes
More like po-tay-toes dump-trucks. A system whereby voluntary exchange of information and experience can take place is FAR different from an omnipresent "hive mind". Personally I think the idea of a hive mind for humans is absurd; I don't think our psyche could handle that much input. I'd compare it to hooking a car battery up to a cell phone.

Equality does not necessarily follow. Just because everyone has a voice does not mean that they all want/need the same things or that they will all be equally heard. The development of distinct majority/minority groups is endemic to the notion of every direct democracy.
But that's just it, that's the wonder of the idea of a global network, we all CAN be equally heard! Simply because we have groups of people does not necessitate that we allow them to influence the world in their favor at the expense of others. Such notions are no longer necessary.

In general, competition creates advances more readily than cooperation.
Again, in ages past, I agree completely. But now we have reached a point where competition is causing more damage and becoming harder to manage while consuming more resources than it generates. It creates tunnel vision that causes people to tune out secondary consequences and to only focus on success with no regard for cost.

We may be "better off" than folks in the dark ages; that doesn't make us better people. Really, it just makes us more complacent and, in many respects, more gullible. Oh, and also more obese.
I would argue that we have a better grasp of the concept of human rights than those in the Dark Ages. I never said it made us better people, I think judging generations past with your own standards is incredibly senseless. However it is fair to say that we generally treat each other far better than we did in the Dark Ages because we have governments that understand, respect, and protect the notion of human rights.
 
First off let me say that I'm not sure if this has been discussed directly yet.

The main problem I see with transhumanism is its socioeconomic implications. Chances are, if such technologies became available in the future they would come at a very high cost. So the wealthiest amongst us will be the first and foremost to take advantage of them. Sure this may not be a bad thing in itself, but in this particular case, transhumanism has important implications. Imagine if through their wealth, people could make themselves physically superior to everyone else, in ways so far unforeseen. They might be able to communicate telepathically over a wireless network. They might be able to perform calculations instantaneously using a biomechanical calculator in their heads. But with these or other advancements, in my opinion it is a short step to the 'tyranny of the modified', where the 'transhumans' exert a strong control over society by virtue of what others cannot afford.

In short, any notion of equality of opportunity goes bye-bye when physical traits are determined by the wealth of one's parents, as opposed to their genetics.
 
You seem to be of the opinion that this is an insurmountable problem or that it wouldn't be considered before being implemented. Even if it turns out that you are correct and the use of a synthetic intelligence is impractical or impossible, a global neural network will be a great leap towards the goal of world stability.

No it wouldn't. It's just the internet that people can log onto wherever they are. If our current internet teaches us anything, then this other one will just be used for porn and prostitution. There's nothing there that would make us "more stable". I can understand my neighbor as much as I want, have all knowledge needed. But if he wants something that's not in my best interest to hand over, all the understanding in the world ain't gonna prevent the conflict. And globally that gets even worse.

Nothing is forever, there is no infinite. Not us as individuals, not us as a species, not even this planet. So we are stuck working with what we got for as long as we can work with it. Humans are wonderful creatures. We've expanded incredibly fast for the short time we've been on this planet. We've discovered so many things, and we'll discover even more. Build even greater things, achieve things once thought impossible. But we're also destructive. It's part of our nature. For every marvel we've built, there's a terrible weapon of destruction. This is the fundamental nature of humanity. Creation and destruction are our tools. And we will ALWAYS choose to use them both. The only hope of "uniting" humanity is an external enemy; that's it. You can't collectively think of humans the same way you think of individual humans. Individually we can be very intelligent and understanding creatures. Collectively, we're a bunch of poo flinging monkeys. And you shouldn't forget that.
 
In theory, yes. But how do you fight or subdue a population where information can be passed instantly and statements verified against existing facts?

False information/reports, etc. In short, an infowar. The same technology that would be available to the "good guys" would be just as available to "bad guys," whatever their ends might be.

Technology, on the whole, is an avenue for improvement for humanity.

Here's where, IMO, you make your biggest mistake. Technology is not just an improver, it is also a destroyer, a falsifier, a concealer, etc. It is a tool, in effect, that does not necessarily improve or regress society in a moral sense. Technology, in solving problems, creates new ones as well. The net change may mean that populations can increase, but it doesn't necessarily make said populations any more enlightened or agreeable. Writing, for instance, allowed for information to be shared over whole regions-- some people became educated and studied medicine, others used writing to keep track of the logistics to wage war or keep track of slave sales.


As these are not concrete concepts, they are arbitrary anyways in that they do not physically exist.

My point exactly, they don't even exist to such a computer (who as you said cannot recognize general concepts) and so you cannot expect a fair, equitable, or just society to be created.

This is why a system of logic needs to be tempered with humanity and the input of the people to understand and account for such concepts as justice and law.

don't know what you mean here. If the computer cannot recognize broad concepts, I don't see how adding broad notions/ideals to the input would even be parsed out by the robot-mind, or whatever.


The root of the vast majority of our problems can be traced back to this source.

I'd like to see an example of a problem that has a root cause elsewhere.

Drought causing food shortages (natural disasters). Ambitious people, sociopaths, class envy etc. Sometimes, even when educated and sensitized to the suffering of others, people will still make justifications for violence and hate.


But that's just it, that's the wonder of the idea of a global network, we all CAN be equally heard! Simply because we have groups of people does not necessitate that we allow them to influence the world in their favor at the expense of others. Such notions are no longer necessary.

But being equally heard is not the same as equal status, income, lifestyle, whatever. It may help a little, or even a lot on a case by case basis, but it also means new forms of oppression, violence, and theft.

Again, in ages past, I agree completely. But now we have reached a point where competition is causing more damage and becoming harder to manage while consuming more resources than it generates. It creates tunnel vision that causes people to tune out secondary consequences and to only focus on success with no regard for cost.

I disagree; why exactly is our society fundamentally any different now than it was before? Technology has not made the world more ethical.


I would argue that we have a better grasp of the concept of human rights than those in the Dark Ages. I never said it made us better people, I think judging generations past with your own standards is incredibly senseless. However it is fair to say that we generally treat each other far better than we did in the Dark Ages because we have governments that understand, respect, and protect the notion of human rights.

You said:
There is ample proof that technology has made us a better people. Technology increases understanding, both of ourselves and our world. As technology has advanced, so too has the human race.

The thing is, we also have governments that don't, and have killed more people than any of theirs ever did knowing the full implications with regard to human rights. Which is worse? Realize, too, that our very notions of human rights all evolved out of seeds planted in the relatively low-tech medieval era... Point is, technology is not a factor in determining the moral or ethical worth of a society, it is a neutral factor that can be used to enlighten or conceal, create or destroy. The scale is the only thing that changes, not the actual substance or quality of the peoples' character.
 
First off let me say that I'm not sure if this has been discussed directly yet.

The main problem I see with transhumanism is its socioeconomic implications. Chances are, if such technologies became available in the future they would come at a very high cost. So the wealthiest amongst us will be the first and foremost to take advantage of them. Sure this may not be a bad thing in itself, but in this particular case, transhumanism has important implications. Imagine if through their wealth, people could make themselves physically superior to everyone else, in ways so far unforeseen. They might be able to communicate telepathically over a wireless network. They might be able to perform calculations instantaneously using a biomechanical calculator in their heads. But with these or other advancements, in my opinion it is a short step to the 'tyranny of the modified', where the 'transhumans' exert a strong control over society by virtue of what others cannot afford.

In short, any notion of equality of opportunity goes bye-bye when physical traits are determined by the wealth of one's parents, as opposed to their genetics.


Yes, and especially at the onset of this technological evolution, there is the danger that the rift between the haves and the have nots will become so great that the haves will orchestrate the genocide of the have-nots in order to create the 'ideal transhuman society'.
 
If we all knew the same things, I guess you could consider that we could read each other's minds simply by using our own cookie-cutter knowledge. There would no longer be any need to converse.

<flings a little poo>
 
I've been asked why I identify as a Transhumanist Socialist and, more specifically, what that means.

I'll try to lay it out as simply as I can.

It's clear to me that the old answers to tyranny are inadequate. What I ultimately want to see is the establishment of the first posthuman civilization. Up to now, all free societies have started with one premise: human nature is cruel, unjust -- a force to be controlled. The separations of power, in all their forms throughout time, are designed purely to stall the ambitions of individuals.

The solution is to address the flaws in human nature. Make all beings truly equal in both body and mind. If you start with minds that are lucid, knowledgeable, and emotionally sound, the needs of government change dramatically. If government can account for the nuances of human behavior, thought, emotion, and desire then maybe it can truly consider itself an extension of the will of the people.

Through the use of advanced technology, you make possible the establishment of a true, universal, and pure democracy, using a network of communication that links the mind of every individual to a central processing network. This network uses a synthetic intelligence to oversee the logistical and bureaucratic functions of government. The network that links enhanced minds together will be connected to the synthetic intelligence such that it has, at a moment's notice, the capability to poll the entire population about legislation or other issues.
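(Purely as an illustration of the instant-polling idea in the paragraph above, here is a bare-bones Python sketch; the Citizen class and its trivial opinion model are hypothetical stand-ins, not a real design.)

def instant_poll(citizens, question):
    # Push the question to every linked citizen and tally the responses.
    responses = [c.respond(question) for c in citizens]
    yes = sum(responses)
    return {"question": question, "yes": yes, "no": len(responses) - yes,
            "passed": yes > len(responses) / 2}

class Citizen:
    def __init__(self, leaning):
        self.leaning = leaning            # trivial stand-in for a real opinion
    def respond(self, question):
        return self.leaning

population = [Citizen(True), Citizen(False), Citizen(True)]
print(instant_poll(population, "Adopt the water-storage measure?"))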

The emerging technology also allows us to spread the influence of education to as many people as humanly possible, creating an educated and informed populace that is equipped to form sound opinions on the subjects of government. Such a network of communication will also enable the instantaneous transmission of information to anyone anywhere in the world, allowing the near instant sharing of any knowledge. Through these means, we can ensure an educated populace and an equality that has never before been achieved.

There will be understandable unease about giving power to a synthetic intelligence, but all governments have power. The benefit of giving this power to a synthetic intellect is that human affairs would no longer need to be ruled by generalities. The intelligence will have a deep understanding of every person's life and opinions and will have the processing power needed to generate solutions that will benefit as many people as possible.

This system will communicate, not assimilate, on a voluntary basis; no one will be spied on and communication can be severed at will. It must also be considered that such a "connected" populace will be extremely hard to dominate or control; there is strength in unity and in understanding.

"General ideas are no proof of the strength, but rather of the insufficiency of the human intellect." The words of Alexis de Tocqueville, an observer of the birth of modern democracy. Though general ideas allow human minds to make judgments quickly, they are necessarily incomplete. So de Tocqueville noted that an all-knowing mind -- the mind of God, as he conceived it -- would have no need for general ideas. It would understand every individual in detail and at a glance. Incomplete applications of law or justice would be impossible for such a mind.

I want human affairs to be driven by wisdom. But wisdom must first be human. You must start with what a human sees and feels. But wisdom must also be knowledgeable, logical, and fair to billions of other beings.

Some people see these emerging technologies as creating an inherent inequality or tampering with the human form in forbidden ways. To take that step is to go down the path of intolerance. Is human nature perfect? No. Therefore, improvements are to be welcomed, not annihilated with ancient taboos and fears. As enhanced beings, we can establish a pure democracy that runs on instantaneous input from the electorate. What system could be more equal than that? Under such conditions, we would have a clean slate to implement and experiment with any political system we wanted because we would have a population with the wisdom to not let emotions and selfish desires get in the way of the progress of the human race.

This may seem like science fiction to many people, but many things that we take for granted today were once considered science fiction and the exponential increase in our technology every day, even given our political burdens, shows that we CAN achieve these things. The biggest hurdle will be in accepting that this is the best of any possible reality we could ever reasonably expect.

Sounds like a great idea for a book, actually. But what makes you think a world without emotions and desires would be a good thing?
 
Sounds like a great idea for a book, actually. But what makes you think a world without emotions and desires would be a good thing?

You should read the book Brave New World... in it, there's a discussion about a person's right to feel. One side argues that there are good and bad feelings, but you strive for the good... to which the response is along the lines of: "So you're saying you wish to have the right to suffer?"

The other pop culture example of the transhumanist ideal, really, to me is much like 'the Borg' of the Star Trek universe. I'm pretty sure that once you become Borg you don't really have any further qualms about not having emotions once the hive mind incorporates your thought process.
 
There's no such thing as "transhumanism" anyway, except in the minds of fools and dreamers. It's just a collection of made-up words to define a non-existent, irrational belief. Same with anarcho-syndicalist and a bunch of other words and prefixes thrown together.

In that case, my political beliefs will now be known as Nintendoism, where mushrooms make you stronger and all princesses deserve to be rescued from reptiles.
 
I've been asked why I identify as a Transhumanist Socialist and, more specifically, what that means.

I'll try to lay it out as simply as I can.

It's clear to me that the old answers to tyranny are inadequate. What I ultimately want to see is the establishment of the first posthuman civilization. Up to now, all free societies have started with one premise: human nature is cruel, unjust -- a force to be controlled. The separations of power, in all their forms throughout time, are designed purely to stall the ambitions of individuals.

The solution is to address the flaws in human nature. Make all beings truly equal in both body and mind. If you start with minds that are lucid, knowledgeable, and emotionally sound, the needs of government change dramatically. If government can account for the nuances of human behavior, thought, emotion, and desire then maybe it can truly consider itself an extension of the will of the people.

Through the use of advanced technology, you make possible the establishment of a true, universal, and pure democracy, using a network of communication that links the mind of every individual to a central processing network. This network uses a synthetic intelligence to oversee the logistical and bureaucratic functions of government. The network that links enhanced minds together will be connected to the synthetic intelligence such that it has, at a moment's notice, the capability to poll the entire population about legislation or other issues.

The emerging technology also allows us to spread the influence of education to as many people as humanly possible, creating an educated and informed populace that is equipped to form sound opinions on the subjects of government. Such a network of communication will also enable the instantaneous transmission of information to anyone anywhere in the world, allowing the near instant sharing of any knowledge. Through these means, we can ensure an educated populace and an equality that has never before been achieved.

There will be understandable unease about giving power to a synthetic intelligence, but all governments have power. The benefit of giving this power to a synthetic intellect is that human affairs would no longer need to be ruled by generalities. The intelligence will have a deep understanding of every person's life and opinions and will have the processing power needed to generate solutions that will benefit as many people as possible.

This system will communicate, not assimilate, on a voluntary basis; no one will be spied on and communication can be severed at will. It must also be considered that such a "connected" populace will be extremely hard to dominate or control; there is strength in unity and in understanding.

"General ideas are no proof of the strength, but rather of the insufficiency of the human intellect." The words of Alexis de Tocqueville, an observer of the birth of modern democracy. Though general ideas allow human minds to make judgments quickly, they are necessarily incomplete. So de Tocqueville noted that an all-knowing mind -- the mind of God, as he conceived it -- would have no need for general ideas. It would understand every individual in detail and at a glance. Incomplete applications of law or justice would be impossible for such a mind.

I want human affairs to be driven by wisdom. But wisdom must first be human. You must start with what a human sees and feels. But wisdom must also be knowledgeable, logical, and fair to billions of other beings.

Some people see these emerging technologies as creating an inherent inequality or tampering with the human form in forbidden ways. To take that step is to go down the path of intolerance. Is human nature perfect? No. Therefore, improvements are to be welcomed, not annihilated with ancient taboos and fears. As enhanced beings, we can establish a pure democracy that runs on instantaneous input from the electorate. What system could be more equal than that? Under such conditions, we would have a clean slate to implement and experiment with any political system we wanted because we would have a population with the wisdom to not let emotions and selfish desires get in the way of the progress of the human race.

This may seem like science fiction to many people, but many things that we take for granted today were once considered science fiction and the exponential increase in our technology every day, even given our political burdens, shows that we CAN achieve these things. The biggest hurdle will be in accepting that this is the best of any possible reality we could ever reasonably expect.


I first encountered the concept of transhumanism (or post-humanism) in the 1980s, among the works of writers such as Bruce Sterling and William Gibson. It is a fascinating concept, and yes it has been taken up on a more serious level by some, including certain scientists and engineers in the field of human-machine interfacing.

Certainly, the future will hold a lot of opportunities for humans to enhance themselves using technology. I think one of the fundamental flaws of transhumanism, however, is the notion that we will shed our humanity like a caul through technology; another fundamental flaw is the notion that, if this happens, it will be a good thing.

The concept of a transhuman socialist democracy, as something like a utopian ideal, is even more deeply flawed.

A lot of the points I intended to address have already been made. In the interests of not writing a whole book, I'll try to limit myself and keep it brief.

A synthetic intelligence (AI) capable of godlike comprehension of all of humanity and all the world, all at once; capable of administering and refereeing a mind-interfacing democracy of billions with impartial justice. This one is a deus ex machina of the first order. At present we do not know if true AI is possible; there are theories both for and against. Whether such a thinking synthetic would be able to truly comprehend the inputs of billions of brain-wired people all at once is an enormous stretch... why something so superintelligent and godlike would actually care about what wired humans wanted is another question. Pardon me if I consider being wiped out by a super-AI for being inadequately transhuman as being no solution to the problem of human imperfection. :roll:

That "transhumanity" would somehow shed the human flaws of greed, asshatery, dishonesty and will-to-power through technology is another highly dubious concept. It is not as if these are traits that can be eliminated by cutting out a few neurons from the brain, or inserting a chip that broadcasts "don't be an asshat, don't be an asshat" in neural-binary into the cerebrum. Nor are these things that most would tolerate unless imposed by force, even if they were possible.

The concept that "to have full understanding is to behave perfectly". History does not support the notion that greater understanding always leads to improved behavior between humans. Granted it does some times: if both parties are truly intrested in an equitable, fair and peaceful solution. However, it is not always so: this is the same reason diplomacy often fails, because regardless of understanding and communication there are basic conflicts of intrest which are often insoluable by anything but force.

Another flaw in full-understanding is that not everything is subject to simple fact-checking. Even scientists often disagree with each other about various things; the more complex or abstract the subject, the greater the level of disagreement. There are also those who would deliberately cloud the system with viral destructive memes, either for gain or for meanness.

Perhaps AI, nanotech, biotech and other "Singularity technologies" will usher in an age of plenty, where no one will suffer any lack of anything they really need. However, there are ALWAYS those who want MORE than Joe Smith next door, and if they're smart enough they tend to find a way.

Like any system that seeks perfection, this one is doomed to either never get started, or to fail spectacularly. Like most utopias, the assumption that "people in MY world will act correctly because they've been taught correctly" is the inadequate defense against easily notable flaws.


(BTW, kudos to the poster "Other" for several very intelligent and spot-on posts in this thread.)


G.
 
There's no such thing as "transhumanism" anyway, except in the minds of fools and dreamers. It's just a collection of made-up words to define a non-existent, irrational belief. Same with anarcho-syndicalist and a bunch of other words and prefixes thrown together.

In that case, my political beliefs will now be known as Nintendoism, where mushrooms make you stronger and all princesses deserve to be rescued from reptiles.

Ultimately, dreamers have accomplished quite a bit throughout history... but more importantly, with the way technology is advancing, publicly available technologies double in power every year or two now... short of humankind killing itself off, it's not so much a question of IF technology will advance to where humans can 'enhance' themselves technologically, but of WHEN this will happen, and how such advancements become available.
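(A quick back-of-the-envelope Python check of the doubling claim above, assuming, purely hypothetically, an 18-month doubling period.)

years = 20
doubling_period_years = 1.5
growth_factor = 2 ** (years / doubling_period_years)
print(f"roughly {growth_factor:,.0f}x more capable after {years} years")
# prints roughly 10,321x, i.e. about four orders of magnitude in two decades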

This process would have to start with something (seemingly?) benign like a microchip implant, before moving on to a borg level combination of the organic and inorganic.

I mean, even something like 'Big Dog', as is being developed, would have been deemed science fiction even a decade ago. Search YouTube for 'Big Dog' if you're not aware of it.
 
The main problem I see with transhumanism is its socioeconomic implications. Chances are, if such technologies became available in the future they would come at a very high cost. So the wealthiest amongst us will be the first and foremost to take advantage of them. Sure this may not be a bad thing in itself, but in this particular case, transhumanism has important implications. Imagine if through their wealth, people could make themselves physically superior to everyone else, in ways so far unforeseen. They might be able to communicate telepathically over a wireless network. They might be able to perform calculations instantaneously using a biomechanical calculator in their heads. But with these or other advancements, in my opinion it is a short step to the 'tyranny of the modified', where the 'transhumans' exert a strong control over society by virtue of what others cannot afford.

In short, any notion of equality of opportunity goes bye-bye when physical traits are determined by the wealth of one's parents, as opposed to their genetics.
Such is already the case and such will always be the case until we can set Capitalism aside. The have-nots have the advantage of vastly superior numbers, which counts for a lot. The have-nots will also always be necessary to do the jobs the haves can't or won't do.

False information/reports, etc. In short, an infowar. The same technology that would be available to the "good guys" would be just as available to "bad guys," whatever their ends might be.
But if you have the ability to check what you find against what real professionals and experts know, then the war becomes incredibly one-sided. If someone tells me that the sun is made of cheese, I can ping an astronomer or astrophysicist, access the knowledge they have made available to anyone, and verify that the statement that the sun is made of cheese is wrong.
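(A hypothetical Python sketch of the "ping an astronomer" fact-check described above; the expert knowledge table and verdicts are invented purely for illustration.)

# Invented expert "knowledge base": topic -> claim -> verdict.
EXPERT_KNOWLEDGE = {
    "astronomy": {
        "the sun is made of cheese": False,
        "the sun is mostly hydrogen and helium": True,
    },
}

def verify(claim, topic):
    # Route the claim to the experts on its topic; "unknown" if nobody covers it.
    verdicts = EXPERT_KNOWLEDGE.get(topic, {})
    return verdicts.get(claim.lower(), "unknown")

print(verify("The sun is made of cheese", "astronomy"))   # prints: False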

Here's where, IMO, you make your biggest mistake. Technology is not just an improver, it is also a destroyer, a falsifier, a concealer, etc. It is a tool, in effect, that does not necessarily improve or regress society in a moral sense. Technology, in solving problems, creates new ones as well. The net change may mean that populations can increase, but it doesn't necessarily make said populations any more enlightened or agreeable. Writing, for instance, allowed for information to be shared over whole regions-- some people became educated and studied medicine, others used writing to keep track of the logistics to wage war or keep track of slave sales.
But ultimately, it allowed the dissemination of ideas that eventually led to the abolition of slavery virtually worldwide. I'm not saying that technology inherently makes us better people, but it opens roads to us becoming better people.

My point exactly, they don't even exist to such a computer (who as you said cannot recognize general concepts) and so you cannot expect a fair, equitable, or just society to be created.
That doesn't mean you can't teach a computer to recognize these concepts as being part of our psyches and to account for them.

don't know what you mean here. If the computer cannot recognize broad concepts, I don't see how adding broad notions/ideals to the input would even be parsed out by the robot-mind, or whatever.
See previous paragraph

Drought causing food shortages (natural disasters).
A failure to provide for enough safety margin in food storage on the faulty assumption that a population can handle a drought is also to be blamed. Droughts are not uncommon occurrences and the scale and scope of their effects are well known to us.

Ambitious people, sociopaths, class envy etc. Sometimes, even when educated and sensitized to the suffering of others, people will still make justifications for violence and hate.
Dig down to the motivation of these individuals, and you will almost always find a faulty assumption based on insufficient information.

But being equally heard is not the same as equal status, income, lifestyle, whatever. It may help a little, or even a lot on a case by case basis, but it also means new forms of oppression, violence, and theft.
Many of these problems come when a particular person or group feels powerless. If they have a voice and can advocate on their behalf with the same weight as others, they have less of a reason to act out because they are, and feel, self-directed.

I disagree; why exactly is our society fundamentally any different now than it was before? Technology has not made the world more ethical.
Technology has allowed us to spread information that has enabled us to step back from our less humane practices.

The thing is, we also have governments that don't, and have killed more people than any of theirs ever did knowing the full implications with regard to human rights. Which is worse? Realize, too, that our very notions of human rights all evolved out of seeds planted in the relatively low-tech medieval era... Point is, technology is not a factor in determining the moral or ethical worth of a society, it is a neutral factor that can be used to enlighten or conceal, create or destroy. The scale is the only thing that changes, not the actual substance or quality of the peoples' character.
On the whole, the kind of death and destruction that occurs today is dramatically less than in ages past. It's easy to look past that because we have more information available today, so we know of more that happens.

But consider that in our current war, we have a casualty list roughly 5,000 strong. The Battle of Crecy had a casualty list in the THOUSANDS, and that was a single battle.

Millions have been saved due to advances in medical technology, irrigation, water sanitation, etc etc. What advances our treatment of each other is a widespread dissemination of information and that is achieved through better delivery technology.

Sounds like a great idea for a book, actually. But what makes you think a world without emotions and desires would be a good thing?
I don't; I never said that it would.

You should read the book Brave New World... in it, there's a discussion about a person's right to feel. One side argues that there are good and bad feelings, but you strive for the good... to which the response is along the lines of: "So you're saying you wish to have the right to suffer?"

The other pop culture example of the transhumanist ideal, really, to me is much like 'the Borg' of the Star Trek universe. I'm pretty sure that once you become Borg you don't really have any further qualms about not having emotions once the hive mind incorporates your thought process.
I have little interest in pop culture references because nothing that I am currently aware of in the pop culture sphere accurately reflects the idea of a transhuman society that I have in mind.


Goshin said:
Certainly, the future will hold a lot of opportunities for humans to enhance themselves using technology. I think one of the fundamental flaws of transhumanism however, is the notion that we will shed our humanity like a caul through technology; another fundamental flaw is that if this happens, that it will be a good thing.
I think this is born out of a fear of technology rather than a true objection based on logical reasoning. The embracing of technology does not necessarily mean the shedding of humanity, simply a change in the way we look at the concept.

The concept of a transhuman socialist democracy, as something like a utopian ideal, is even more deeply flawed.
Can you expand on that?

A synthetic intelligence (AI) capable of godlike comprehension of all of humanity and all the world, all at once; capable of administering and refereeing a mind-interfacing democracy of billions with impartial justice. This one is a deus ex machina of the first order. At present we do not know if true AI is possible; there are theories both for and against. Whether such a thinking synthetic would be able to truly comprehend the inputs of billions of brain-wired people all at once is an enormous stretch... why something so superintelligent and godlike would actually care about what wired humans wanted is another question. Pardon me if I consider being wiped out by a super-AI for being inadequately transhuman as being no solution to the problem of human imperfection.
You do make some valid points. I agree that AI is of as yet unknown value and may prove to be too dangerous. But I don't think that in and of itself should outmode the value of the concept of a world-wide neural net.

As far as an AI wiping us out, if the concept of an AI is untested then such fears are entirely speculation and not a valid objection.

That "transhumanity" would somehow shed the human flaws of greed, asshatery, dishonesty and will-to-power through technology is another highly dubious concept. It is not as if these are traits that can be eliminated by cutting out a few neurons from the brain, or inserting a chip that broadcasts "don't be an asshat, don't be an asshat" in neural-binary into the cerebrum. Nor are these things that most would tolerate unless imposed by force, even if they were possible.
I am not suggesting we forcibly alter the human brain and as far as I know, almost no transhumanists do either.

The idea is to increase the wealth of human knowledge available to the average person and to give them the opportunity to digest that knowledge rather than treat it all as trivia. It becomes much easier to empathize with your fellow man if you understand more about their situation.

The concept that "to have full understanding is to behave perfectly". History does not support the notion that greater understanding always leads to improved behavior between humans. Granted it does some times: if both parties are truly intrested in an equitable, fair and peaceful solution. However, it is not always so: this is the same reason diplomacy often fails, because regardless of understanding and communication there are basic conflicts of intrest which are often insoluable by anything but force.
I believe very few situations are unsolvable by anything other than force. I do agree that some situations are, however I would argue that modern understanding and knowledge decreases the occurrence of situations where force is the only solution.

Another flaw in full-understanding is that not everything is subject to simple fact-checking. Even scientists often disagree with each other about various things; the more complex or abstract the subject, the greater the level of disagreement. There are also those who would deliberately cloud the system with viral destructive memes, either for gain or for meanness.
As I have said, such memes would be useless in a system where everyone has the ability to access knowledge and understanding from everyone else. You can't introduce something new with no basis or else it will be immediately spotted.

Perhaps AI, nanotech, biotech and other "Singularity technologies" will usher in an age of plenty, where no one will suffer any lack of anything they really need. However, there are ALWAYS those who want MORE than Joe Smith next door, and if they're smart enough they tend to find a way.
Great plenty must be coupled with great understanding. With this understanding comes the realization that the desire for more is unnecessary and ultimately harmful.

Like any system that seeks perfection, this one is doomed to either never get started, or to fail spectacularly. Like most utopias, the assumption that "people in MY world will act correctly because they've been taught correctly" is the inadequate defense against easily notable flaws.
I would argue that the situation I advance is unique in its scope and that you can't accurately compare it to other examples.
 
But if you have the ability to check what you find against what real professionals and experts know, then the war becomes incredibly one-sided. If someone tells me that the sun is made of cheese, I can ping an astronomer or astrophysicist, access the knowledge they have made available to anyone, and verify that the statement that the sun is made of cheese is wrong.

So you believe that all people who hold beliefs will automatically give up those beliefs when they are refuted by facts or "experts'" opinions? I'm sorry, but that is naive. Access to truth does not equal acceptance of the truth.

But ultimately, it allowed the dissemination of ideas that eventually led to the abolition of slavery virtually worldwide. I'm not saying that technology inherently makes us better people, but it opens roads to us becoming better people.

You are ignoring many, many other factors though. This is why I mentioned technological determinism earlier. You are placing too much importance on a single contributing factor (which can itself be disputed) and completely ignoring all others.

That doesn't mean you can't teach a computer to recognize these concepts as being part of our psyches and to account for them.

If the computer can make better decisions than people because it is not hindered by the supposed humanistic pitfalls of general concepts like law or justice, then how on earth does it "account" for human concepts of law or justice? It can either discount them and transcend human fallibility, or account for them and be susceptible to human fallibility--not both.

Dig down to the motivation of these individuals, and you will almost always find a faulty assumption based on insufficient information.

Almost always, but not always. And regardless, access to more information does not automatically mean that the information will be used to improve anything. See the first point I made about access to truth.

Many of these problems come when a particular person or group feels powerless. If they have a voice and can advocate on their behalf with the same weight as others, they have less of a reason to act out because they are, and feel, self-directed.

Where there is inequality, there is a discrepancy in power. You still have not shown how, by any stretch, everyone would be perfectly equalized by instant communications/computer tech.

Technology has allowed us to spread information that has enabled us to step back from our less humane practices.

And as I already said, it has also enabled us to do some of the most inhumane things imaginable. You said yourself that "I'm not saying that technology inherently makes us better people, but it opens roads to us becoming better people."

"Allowing us" to do something, or "opening the road" for us to do something, does not mean people will do that particular thing. Technology is just a double-edged sword, it is a tool that can be used for purposes good or evil.

On the whole, the kind of death and destruction that occurs today is dramatically less than in ages past. It's easy to look past that because we have more information available today, so we know of more that happens.
But consider that in our current war, we have a casualty list roughly 5,000 strong. The Battle of Crecy had a casualty list in the THOUSANDS, and that was a single battle.

The trench fighters at the Battle of the Somme had a better capacity to spread information and better technology than the men at Agincourt. Thousands died at Agincourt; millions died at the Somme. Both are considered single battles (although another battle was also fought there in WWI).

Millions have been saved due to advances in medical technology, irrigation, water sanitation, etc etc. What advances our treatment of each other is a widespread dissemination of information and that is achieved through better delivery technology.

Because of technological advances our living conditions have improved; our populations have increased; we are not necessarily any more fair, just, or equitable because of that technology alone--there were many, many other factors, and some might even argue that we actually are no more fair, equitable, or just as a whole when compared to previous generations.
 
Hoplite -

I have little interest in pop culture references because nothing that I am currently aware of in the pop culture sphere accurately reflects the idea of a transhuman society that I have in mind.

It's not about the transhumanist society that YOU have in mind... it's about the society that those with the clout and power will push the world towards: the transhumanist society that THEY have in mind. There are already so many different types of 'soma' in the real world.

The reality of the matter, if you take human nature into account, is that if someone can technologically play the chess game thinking 20 moves ahead, and you can only plan these things 5-10 moves ahead... you know that means you will lose EVERY game and fall into almost every trap and gambit set up against you.

Think about that.
 
I think this is born out of a fear of technology rather than a true objection based on logical reasoning. The embracing of technology does not necessarily mean the shedding of humanity, simply a change in the way we look at the concept.

Actually "shedding humanity like a caul" was literally a quote I lifted from a transhumanist text some while back. At least some view humanity as something to be discarded, or essentially redefined out of existence.

I'll grant you that humanity in 2300 might be a very different thing from humanity in 2010, but my point is that you're not going to leave greed, hate and will-to-power behind through technology, short of giving everyone a lobotomy. :mrgreen:



You do make some valid points. I agree that AI is of as yet unknown value and may prove to be too dangerous. But I don't think that in and of itself should outmode the value of the concept of a world-wide neural net.

As far as an AI wiping us out, if the concept of an AI is untested then such fears are entirely speculation and not a valid objection.

On the contrary, it is a valid objection to giving such an AI the kind of power you envision. You yourself said in an earlier post that if the synthetic intelligence that ran the neural network wiped us out for being too much of a problem, then we probably deserved it, or something like that. One sec and I will pull your exact quote...

other said:
what if the supercomputer decides that the only way to correct the flaws in humanity is to eradicate them? *gasp*
Then I would say it would be justified for us not being intelligent or forward thinking enough to deal with that possibility.

Ouch...



I am not suggesting we forcibly alter the human brain and as far as I know, almost no transhumanists do either.

The idea is to increase the wealth of human knowledge available to the average person and to give them the opportunity to digest that knowledge rather than treat it all as trivia. It becomes much easier to empathize with your fellow man if you understand more about their situation.

I believe very few situations are unsolvable by anything other than force. I do agree that some situations are, however I would argue that modern understanding and knowledge decreases the occurrence of situations where force is the only solution.

As I have said, such memes would be useless in a system where everyone has the ability to access knowledge and understanding from everyone else. You can't introduce something new with no basis or else it will be immediately spotted.

Great plenty must be coupled with great understanding. With this understanding comes the realization that the desire for more is unnecessary and ultimately harmful.

I would argue that the situation I advance is unique in its scope and that you can't accurately compare it to other examples.

Yeah, everybody thinks their version of Utopia is unique and not to be confused with previous regimes that went wrong.

Given advances in technology, and the trends towards cybernetics and brain-computer interfacing, it is certainly possible that a society of the rough characteristics you propose could be attempted some day. What I disagree with is that it would be the smoothly functioning machine you anticipate, or that it would work very well at all, let alone that it would be full of justice and absent violence and coercion.

You're overly optimistic IMO, but I can see that changing your mind isn't going to happen.


BUT, hey, I give you kudos and props for starting a thread that was interesting and different. At least it wasn't the SOS that 90% of the threads on DP seem like lately. :mrgreen:


Oh, and just out of curiosity... are you a believer in the Singularity? If so, do you anticipate it occurring within the 2030-2050 timeframe as some suggest is likely?
 
So you believe that all people who hold beliefs will automatically give up those beliefs when they are refuted by facts or "experts'" opinions? I'm sorry, but that is naive. Access to truth does not equal acceptance of the truth.
I believe that the majority of people will. If people have a good, empirical source of information they will usually correct their own beliefs and in the absence of generated sources to support false beliefs, people will accept a verifiable source.

You are ignoring many, many other factors though. This is why I mentioned technological determinism earlier. You are placing too much importance on a single contributing factor (which can itself be disputed) and completely ignoring all others.
Then what am I ignoring?

If the computer can make better decisions than people because it is not hindered by the supposed humanistic pitfalls of general concepts like law or justice, then how on earth does it "account" for human concepts of law or justice? It can either discount them and transcend human fallibility, or account for them and be susceptible to human fallibility--not both.
A computer can act within parameters set for it by those who program it and, in the case of an AI, teach it.

Almost always, but not always. And regardless, access to more information does not automatically mean that the information will be used to improve anything. See the first point I made about access to truth.
The more a person knows and understands, the more likely they are to make better choices.

Where there is inequality, there is a discrepancy in power. You still have not shown how, by any stretch, everyone would be perfectly equalized by instant communications/computer tech.
Information acts as a battering ram. It allows the barriers in society to be demolished. A network that allows instant communication of thoughts, feelings, visions, and ideas to anyone anywhere in the world breeds greater communication and with greater communication comes greater understanding and greater empathy.

And as I already said, it has also enabled us to do some of the most inhumane things imaginable. You said yourself that "I'm not saying that technology inherently makes us better people, but it opens roads to us becoming better people."

"Allowing us" to do something, or "opening the road" for us to do something, does not mean people will do that particular thing. Technology is just a double-edged sword, it is a tool that can be used for purposes good or evil.
Except for certain outliers, people do generally take the positive steps.

The trench fighters at the Battle of the Somme had a better capacity to spread information and better technology than the men at Agincourt. Thousands died at Agincourt; millions died at the Somme. Both are considered single battles (although another battle was also fought there in WWI).
On the whole, information and technology have led to less death and suffering, not more.

Because of technological advances our living conditions have improved; our populations have increased; we are not necessarily any more fair, just, or equitable because of that technology alone--there were many, many other factors, and some might even argue that we actually are no more fair, equitable, or just as a whole when compared to previous generations.
What indications do we have to say that we haven't improved?

It's not about the transhumanist society that YOU have in mind... it's about the society that those with the clout and power will push the world towards: the transhumanist society that THEY have in mind. There are already so many different types of 'soma' in the real world.
Who is pushing the style of transhumanist society that you seem to be afraid of?

The reality of the matter, if you take human nature into account, is that if someone can technologically play the chess game thinking 20 moves ahead, and you can only plan these things 5-10 moves ahead... you know that means you will lose EVERY game and fall into almost every trap and gambit set up against you.

Think about that.
Your analogy is flawed, this is not a competition.

Actually "shedding humanity like a caul" was literally a quote I lifted from a transhumanist text some while back. At least some view humanity as something to be discarded, or essentially redefined out of existence.
The part of humanity that feels it needs to tear down its brethren to advance. This part of us is no longer necessary and is detrimental to our survival as a species. We must outmode this thinking if we are to survive the modern age.

I'll grant you that humanity in 2300 might be a very different thing from humanity in 2010, but my point is that you're not going to leave greed, hate and will-to-power behind through technology, short of giving everyone a lobotomy. :mrgreen:
By outmoding the thinking that supports such feelings, we can eradicate these feelings and leave behind a humanity that understands that cooperation is key to survival at this point in time.

On the contrary, it is a valid concern about giving an AI such power as you envision. You yourself said in an earlier post that if the synthetic intelligence that ran the neural network wiped us out for being too much of a problem, we probably deserved it, or something like that. One sec and I will pull your exact quote...

Ouch...
That is my opinion, plain and simple. It is as unfounded as fears about an AI murdering us in our sleep for our "inefficiency".

Yeah, everybody thinks their version of Utopia is unique and not to be confused with previous regimes that went wrong.
Then I'd ask you to point out where something comparable happened and the possibilities were the same.

Given advances in technology, and the trends towards cybernetics and brain-computer interfacing, it is certainly possible that a society of the rough characteristics you propose could be attempted some day. What I disagree with is that it would be the smoothly functioning machine you anticipate, or that it would work very well at all, let alone that it would be full of justice and absent violence and coercion.
I do agree that the possibility for abuse and misuse is there, but such possibilities can be minimized by the dissemination of information, possible through such systems as a global neural network.

Oh, and just out of curiosity... are you a believer in the Singularity? If so, do you anticipate it occurring within the 2030-2050 timeframe as some suggest is likely?
I believe there is a "critical mass" for the acceleration of human understanding via technological means; once we have acquired and disseminated enough knowledge, we will reach a point where greater leaps of understanding are possible. If you choose that definition of a Singularity, then yes. I'm not sure of a timeframe; humans are notoriously bad at judging the timing of key events.
 
I believe that the majority of people will. If people have a good, empirical source of information they will usually correct their own beliefs and in the absence of generated sources to support false beliefs, people will accept a verifiable source.

What about religions... people do not need empirical verification of their beliefs to believe something, and some people continue to believe despite contrary empirical verification. That's a fact; why would this suddenly change under your system?

Then what am I ignoring?

You are saying that increased technology/communications leads to advancements in themselves. You are not taking into consideration that technology can be used to destroy, and other factors such as human beliefs, hard environmental factors such as resource availability and space available, food, etc etc. Many things are involved.

A computer can act within parameters set for it by those who program, and in the case of an AI, teach it.

So then the ability of the computer is limited by the programmers' skill?


The more a person knows and understands, the more likely they are to make better choices.

Better in what sense? Better for themselves? Better for others? Better for the earth? "Better" is an entirely subjective category.

Information acts as a battering ram. It allows the barriers in society to be demolished. A network that allows instant communication of thoughts, feelings, visions, and ideas to anyone anywhere in the world breeds greater communication and with greater communication comes greater understanding and greater empathy.

Again, we already have such a network. We're using it right now. Do you deny that the internet has been used for any purposes other than to enlighten and promote world peace and understanding?

Except for certain outliers, people do generally take the positive steps.

Well, how would your system account for those certain outliers?

On the whole, information and technology has led to less death and suffering, not more.

No, it hasn't. It has actually led to more. Information and technology has built mustard gas, tanks, napalm, white phosphorus, synthetic poisons, devices of torture, nukes, etc, etc.

Go back, hypothetically, to the earliest times of man as hunter/gatherers and foragers. Compare. It's like comparing Indians counting coup to Hiroshima.

What indications do we have to say that we haven't improved?

The burden of proof is on you to show that we are more fair, equitable, just, humane/whatever because of our technological improvements.
 
Information acts as a battering ram. It allows the barriers in society to be demolished. A network that allows instant communication of thoughts, feelings, visions, and ideas to anyone anywhere in the world breeds greater communication and with greater communication comes greater understanding and greater empathy.

That is what would happen if this was set up as a 'free' system... in a 'controlled' system, it would likely involve an AI interface to control the thought process of those that are hooked in. Just to say that the same technology that can empower humanity can also be used to enslave humanity.

Who is pushing the style of transhumanist society that you seem to be afraid of?

It's not that I'm 'afraid' of technologically evolving the human species; it's a question of under whose guidance this advancement is being created. What I mean is that if this is being created for the benefit of those in power and the super-wealthy, they would use this technology to fully enslave humanity.



Then there are some better-known quotes:
"There's class warfare, all right, but it's my class, the rich class, that's making war, and we're winning." - Warren Buffett

"Diet, injections, and injunctions will combine, from a very early age, to produce the sort of character and the sort of beliefs that the authorities consider desirable, and any serious criticism of the powers that be will become psychologically impossible. Even if all are miserable, all will believe themselves happy, because the government will tell them that they are so."

"Gradually, by selective breeding, the congenital differences between rulers and ruled will increase until they become almost different species. A revolt of the plebs would become as unthinkable as an organized insurrection of sheep against the practice of eating mutton." - Bertrand Russell

Hell, I can't even pull out one quote, but 'Ecoscience', written by John P. Holdren and Ehrlich, who have been the 'science czars' for Bush and Obama... the whole book is about techniques to sterilize and control populations through technological and chemical means.

Your analogy is flawed; this is not a competition.

Not to you, it's not... but to those in charge, it IS a competition... and a competition that we are helping them win.

Look, I don't doubt that we could create a technological advancement to humanity and create a utopian system through those means... but I'm sure 90+% of research funding for new technologies is military funding... and the military only funds things that are good for breaking stuff and killing people...

Then when you consider that transhumanism as a concept originates from the propagators of the eugenics philosophy... except given a technological overlay. I mean, IBM is a leading technology company that just so happens to have originated by creating a basic 'computerized' tracking system for those in concentration camps. So, yes... we have to be cautious that what is being created won't become an inescapable tyranny for humanity.

I do agree that the possibility for abuse and misuse is there, but such possibilities can be minimized by the dissemination of information, possible through such systems as a global neural network.

There is a fine line between an integrated neural network and the creation of a 'hive mind' where your individuality is lost through the voices of the millions creating a mass 'consensus'.

I believe there is a "critical mass" for the acceleration of human understanding via technological means; once we have acquired and disseminated enough knowledge, we will reach a point where greater leaps of understanding are possible. If you choose that definition of a Singularity, then yes. I'm not sure of a timeframe; humans are notoriously bad at judging the timing of key events.

Yes, it will essentially be the point where, as in mathematics, we are nearing the limits of what the human mind can comprehend: a computer can be used to provide the results of the equation, but the understanding remains lost within the computer's processing. So it will become a point where technological advancement is relegated either to computers with a general artificial intelligence and the capacity to learn... OR to technologically amplifying the capacity of the human brain.

That gets back to the issue: when it comes to power and control over people, if you're making plans 10-20 moves ahead while the world is thinking about where the next meal is coming from... then you're going to be at an advantage every time. In the history of human power, it's inevitable that this power is eventually abused.
 
If people have a good, empirical source of information they will usually correct their own beliefs and in the absence of generated sources to support false beliefs, people will accept a verifiable source.

I don’t believe that to be the case for one primary reason: human creativity. Creativity lends itself to the imagination and belief, religious or otherwise. We have our intellect and reasoning, but these do not cancel out the human need and desire for religious, artistic, and other forms of imagination in action. Even if it did, then a verifiable source is still inadequate to make people believe simple concepts such as what is right and what is wrong. You may believe that welfare programs are the right thing to do. I do not believe this. There is nothing you can tell me which is verifiable to support your belief. We both make value judgments, both based on what we believe is right and wrong.

The more a person knows and understands, the more likely they are to make better choices.

We can’t even get kids to complete high school and go to college, with the provable understanding that this will make their lives better. Humans don’t want to act based on understanding in many cases- they want to do what feels good.

Except for certain outliers, people do generally take the positive steps.

In my observation, the opposite is more likely to be the case.

On the whole, information and technology has led to less death and suffering, not more.

Where and how?

Your analogy is flawed; this is not a competition.

With anything in the natural world, it is always a competition.
 
What about religions... people do not need empirical verification of their beliefs to believe something, and some people continue to believe despite contrary empirical verification. That's a fact; why would this suddenly change under your system?
You can have religious beliefs and still be intellectually critical and honest with yourself. I am living proof of this :)

You are saying that increased technology/communications leads to advancements in themselves. You are not taking into consideration that technology can be used to destroy, and other factors such as human beliefs, hard environmental factors such as resource availability and space available, food, etc etc. Many things are involved.
This is true.

So then the ability of the computer is limited by the programmers' skill?
No, a program capable of assimilating and utilizing new information could, theoretically, develop beyond the skill of the programmers themselves. Think of it like a child: a child grows and develops based on its environment and stimuli.

Better in what sense? Better for themselves? Better for others? Better for the earth? "Better" is an entirely subjective category.
Choices that are less socially detrimental, either to themselves or to society as a whole.

Again, we already have such a network. We're using it right now. Do you deny that the internet has been used for any purposes other than to enlighten and promote world peace and understanding?
The internet has been put to many uses and I would contend that such a neural net surpasses the Internet.

The Internet is a passive medium, information is extracted visually and audibly by the person using the computer terminal. With a neural network you now have the ability as a user to download and upload information, images, feelings, memories, experiences, and knowledge directly to another person or group of people. If you can perceive it, you can send it. Such information is stored electrically and chemically in the brain, the only hurdle being to develop a device that can read such information, turn it into a transmittable form, then re-integrate transmitted data into the brain in a recognizable format.
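
To make that pipeline concrete, here is a toy sketch of the capture/encode/transmit/decode idea as I picture it. Everything in it is hypothetical; the names (Perception, encode, decode) are invented for illustration and describe no real hardware.

Code:
# A toy sketch of the capture -> encode -> transmit -> decode idea.
# The class and function names are hypothetical and describe no real device.
from dataclasses import dataclass, field
import json, time

@dataclass
class Perception:
    kind: str        # e.g. "image", "feeling", "memory"
    payload: dict    # whatever the implant managed to capture
    timestamp: float = field(default_factory=time.time)

def encode(p: Perception) -> bytes:
    """Turn a captured perception into a transmittable form."""
    return json.dumps({"kind": p.kind, "payload": p.payload,
                       "timestamp": p.timestamp}).encode("utf-8")

def decode(data: bytes) -> Perception:
    """Re-integrate transmitted data into a format the receiver recognizes."""
    d = json.loads(data.decode("utf-8"))
    return Perception(d["kind"], d["payload"], d["timestamp"])

# One person "sends" a perception, another "receives" it.
sent = Perception("feeling", {"label": "calm", "intensity": 0.7})
received = decode(encode(sent))
print(received.kind, received.payload)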

Well, how would your system account for those certain outliers?
Humans tend to explore every avenue available to them, I would need a specific example to properly answer this question.

No, it hasn't. It has actually led to more. Information and technology has built mustard gas, tanks, napalm, white phosphorus, synthetic poisons, devices of torture, nukes, etc, etc.
Information has also given us water purification, modern medicine, modern farming, and hygiene, which have saved exponentially more people than its products have destroyed.

The burden of proof is on you to show that we are more fair, equitable, just, humane/whatever because of our technological improvements.
And I have. More technology has enabled a longer lifespan, more free time to assimilate information, and greater information storage, transmission, accessing, and copying ability. With this we have made concepts like truth, justice, peace, suffrage, abolition, freedom, and cooperation widely known and understood as well as systems to ensure these things are available to as many people as possible.



That is what would happen if this was set up as a 'free' system... in a 'controlled' system, it would likely involve an AI interface to control the thought process of those that are hooked in. Just to say that the same technology that can empower humanity can also be used to enslave humanity.
It's a large step from a passive transmitter/receptor that can be switched on and off at will to a device capable of bypassing conscious decisions. I think it's overreacting to be worried about it.

It's not that I'm 'afraid' of technologically evolving the human species; it's a question of under whose guidance this advancement is being created. What I mean is that if this is being created for the benefit of those in power and the super-wealthy, they would use this technology to fully enslave humanity.
If we didn't have a system in place where the common people could be united, I would agree with you. The neural network instantly empowers the masses because any attack anywhere at any time on one can instantly be known to all. Propaganda ceases to be effective, spin is impotent, and the power structure that keeps the super wealthy in power collapses.

This is the Socialist aspect of Socialistic Transhumanism :)

Then there are some better-known quotes:
"There's class warfare, all right, but it's my class, the rich class, that's making war, and we're winning." - Warren Buffett

"Diet, injections, and injunctions will combine, from a very early age, to produce the sort of character and the sort of beliefs that the authorities consider desirable, and any serious criticism of the powers that be will become psychologically impossible. Even if all are miserable, all will believe themselves happy, because the government will tell them that they are so."

"Gradually, by selective breeding, the congenital differences between rulers and ruled will increase until they become almost different species. A revolt of the plebs would become as unthinkable as an organized insurrection of sheep against the practice of eating mutton." - Bertrand Russell
These are ideas based on a power structure whose foundation is falsehood and obfuscation. Such a system would not last through the implementation of a massive world-wide neural network.

Hell, I can't even pull out one quote, but 'Ecoscience', written by John P. Holdren and Ehrlich, who have been the 'science czars' for Bush and Obama... the whole book is about techniques to sterilize and control populations through technological and chemical means.
Yes I'm familiar with that work and it advances these as possible, though not preferred, options.

Not to you, it's not... but to those in charge, it IS a competition... and a competition that we are helping them win.
Again, it is a competition that can be ended with the use of advanced technology that empowers the masses.

Look, I don't doubt that we could create a technological advancement to humanity and create a utopian system through those means... but I'm sure 90+% of research funding for new technologies is military funding... and the military only funds things that are good for breaking stuff and killing people...
At the outset, yes. However much of what we use today started as military technology and then filtered down to the consumer sector. Simply because something was developed as military technology does not guarantee it will remain that forever.

In fact, the armed forces would probably take extreme interest in such a neural net. No more communication lags, instant access to every soldier on the battlefield, up-to-the-second intelligence, flawless surveillance.

Then when you consider that transhumanism as a concept originates from the propagators of the eugenics philosophy... except given a technological overlay.
Not true at all.

Nikolai Fyodorov (IIRC) was one of the first modern people that we could call a Transhumanist and he never advanced the idea of eugenics. JD Bernal was another early delver into Transhumanist ideas and again, to my knowledge, he never expressed an interest in eugenics.

I mean, IBM is a leading technology company that just so happens to have originated by creating a basic 'computerized' tracking system for those in concentration camps. So, yes... we have to be cautious that what is being created won't become an inescapable tyranny for humanity.
I would not compare IBM's system to the neural net only because I see the neural net as a transcendent idea. This is a completely new playing field that requires new ways of thinking, you CANNOT apply the old schema to it and do it justice.

There is a fine line between an integrated neural network and the creation of a 'hive mind' where your individuality is lost through the voices of the millions creating a mass 'consensus'.
I have addressed this ridiculous notion many times and I don't feel like constantly repeating myself.

Yes, it will essentially be the point where, as in mathematics, we are nearing the limits of what the human mind can comprehend: a computer can be used to provide the results of the equation, but the understanding remains lost within the computer's processing. So it will become a point where technological advancement is relegated either to computers with a general artificial intelligence and the capacity to learn... OR to technologically amplifying the capacity of the human brain.
I do support the exploration of technological modification and enhancement of the human body, but that is still a maybe for now. This is a question that I feel could be broached more completely once we see what the neural net can bring us.

That gets back to the issue: when it comes to power and control over people, if you're making plans 10-20 moves ahead while the world is thinking about where the next meal is coming from... then you're going to be at an advantage every time. In the history of human power, it's inevitable that this power is eventually abused.
Then tell me how, specifically, the neural net can be abused.

Keep in mind, you can download information from other people but your node is like a radio; it sends, receives, and interprets data only. If you lie, it will instantly be recognized as such once it's checked against existing pools of knowledge.
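
As a toy illustration of what I mean by checking a statement against existing pools of knowledge (the pool and the verify function below are invented purely for the example, not a claim about how truth would really be established):

Code:
# A toy illustration of checking a transmitted claim against a shared pool
# of knowledge. The pool and the verify() function are invented for the example.
KNOWLEDGE_POOL = {
    "water boils at 100C at sea level": True,
    "the moon is made of cheese": False,
}

def verify(claim: str) -> str:
    """Return the network's verdict on a claim someone transmits."""
    if claim not in KNOWLEDGE_POOL:
        return "unverified"   # nothing in the pool to check it against
    return "confirmed" if KNOWLEDGE_POOL[claim] else "contradicted"

for claim in ["water boils at 100C at sea level",
              "the moon is made of cheese",
              "my neighbor owes me money"]:
    print(claim, "->", verify(claim))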



I don’t believe that to be the case for one primary reason: human creativity. Creativity lends itself to the imagination and belief, religious or otherwise. We have our intellect and reasoning, but these do not cancel out the human need and desire for religious, artistic, and other forms of imagination in action. Even if it did, then a verifiable source is still inadequate to make people believe simple concepts such as what is right and what is wrong. You may believe that welfare programs are the right thing to do. I do not believe this. There is nothing you can tell me which is verifiable to support your belief. We both make value judgments, both based on what we believe is right and wrong.
This is true. However, we can, using empirical evidence and knowledge, discover whose opinion is based in fact and whose is not.

We can’t even get kids to complete high school and go to college, with the provable understanding that this will make their lives better. Humans don’t want to act based on understanding in many cases- they want to do what feels good.
Certainty feels very good to humans. If you have a way to essentially ping the sum total of human understanding to help you verify a decision you have made, it's very difficult to be more certain. You could still be wrong, but you have an immeasurably more powerful tool at hand to help you.

In my observation, the opposite is more likely to be the case.
I disagree. Aside from petty squabbles about politics and the occasional nutball, I'd argue we tend to treat each other far better today than we have in ages past.

Where and how?
As I said before; sanitation, farming, health, food production, safety etc etc.

Advances in these areas have saved probably billions of lives through the course of their use. Even something as simple as the development and use of soap went a long way towards saving lives.

With anything in the natural world, it is always a competition.
That is the point of Transhumanism: to use technology to lift yourself out of that need to compete.
 
It's a large step from a passive transmitter/receptor that can be switched on and off at will to a device capable of bypassing conscious decisions. I think it's overreacting to be worried about it.

Now, I'm in no way an expert on this, but if it's hooked directly into your brain then, ultimately, a brain signal is at its base an electrical current running through the neurons. So I'm not so sure that, once you have an implant in your brain that can interpret signals, the 'code' couldn't also be worked out to send whatever signals you want the brain to interpret. Also, since the brain doesn't have any real 'senses' in itself and is rather an information processing center, once the implants are there you might not have the capacity to differentiate your own thoughts from those you are 'receiving'.

There's an old anime movie I remember watching, 'Ghost in the Shell'... that movie had precisely that, where the 'internet' was a plug in the back of your head. There you could have telepathic communication, data transference, etc... and those were the well-accepted parts... but the problem was brought up in the film of people 'hacking in' to people's brains, implanting them with 'tasks', and completely remaking their lives. In one scene, a hacker had manipulated about 20 people into simultaneously and independently attempting to assassinate a political figure, each without being aware of the others.

Which I'm just using to illustrate my main point that technology can be used to both empower and enslave...

If we didn't have a system in place where the common people could be united, I would agree with you. The neural network instantly empowers the masses because any attack anywhere at any time on one can instantly be known to all. Propaganda ceases to be effective, spin is impotent, and the power structure that keeps the super wealthy in power collapses.

This is the Socialist aspect of Socialistic Transhumanism :)

Ultimately, it depends on the process through which the technology advances... I could see a 'neural net' taking shape as either a 'p2p' type system, where you would be networked between individuals in a cloud of connections that people could jump in and out of at will..., or a 'central hub' type system, where everyone hooks into the neural network and makes the connection with a central hub... In the latter case, it would be much easier to control and track the flow of information, and propaganda would no longer be 'necessary' because it would be programmed and added in such a way that it would be completely undetectable; people promoting 'undesirable' thoughts could be expelled from the system and dealt with.
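
Roughly, the difference I'm getting at looks like this; the classes below are invented just to show who sits in the message path, nothing more:

Code:
# A rough sketch of the two topologies described above. Purely illustrative.

class P2PNode:
    """Peers relay messages directly to whoever they are connected to."""
    def __init__(self, name):
        self.name, self.peers, self.inbox = name, [], []
    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)
    def broadcast(self, msg):
        for peer in self.peers:
            peer.inbox.append((self.name, msg))   # no middleman in the path

class CentralHub:
    """Every message passes through (and can be filtered by) the hub."""
    def __init__(self):
        self.nodes, self.blocked = {}, set()
    def register(self, name):
        self.nodes[name] = []
    def send(self, sender, msg):
        if sender in self.blocked:                # 'undesirable' senders expelled
            return
        for name, inbox in self.nodes.items():
            if name != sender:
                inbox.append((sender, msg))

# P2P: Alice reaches Bob directly.
a, b = P2PNode("alice"), P2PNode("bob")
a.connect(b)
a.broadcast("hello")
print(b.inbox)                # [('alice', 'hello')]

# Hub: the same message only arrives if the hub allows it.
hub = CentralHub()
hub.register("alice"); hub.register("bob")
hub.blocked.add("alice")
hub.send("alice", "hello")
print(hub.nodes["bob"])       # empty: the hub silently dropped it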

These are ideas based on a power structure whose foundation is falsehood and obfuscation. Such a system would not last through the implementation of a massive world-wide neural network.

I agree that such a system, as throughout history, would collapse under the weight of its own tyranny. The relevance of the quotes, though, deals more with how this process of technological evolution of the species takes place. If it happens through consent, then it might be possible to create this system... otherwise this would be implemented through conquest.

Yes I'm familiar with that work and it advances these as possible, though not preferred, options.

Ummm... just so you know, good portions of the 'suggestions' from this book are in different phases of implementation.

Dangerous Ideas | Big Think - Look at the bottom one, drugging the drinking water. Which, I should add http://www.sciencelab.com/xMSDS-Lithium-9927559 just so we're on the same page about what's being suggested.

Again, it is a competition that can be ended with the use of advanced technology that empowers the masses.

You're assuming that this technological advancement will be allowed for the billions of people on the planet.

At the outset, yes. However much of what we use today started as military technology and then filtered down to the consumer sector. Simply because something was developed as military technology does not guarantee it will remain that forever.

In fact, the armed forces would probably take extreme interest in such a neural net. No more communication lags, instant access to every soldier on the battlefield, up-to-the-second intelligence, flawless surveillance.

Oh, I don't doubt that at all... There's a videogame I once played called 'Metal Gear Solid 4: Guns of the Patriots' where ALL soldiers were implanted with processors that tracked everything down to the direction you're facing, your location, the amount of oxygen in your blood, etc... which was sent to a central server that, through its AI, would issue directives and orders to the appropriate people... on both sides of the conflict. Then, through another layer of AI, it was initiating further conflicts around the world. (Again, I bring up these types of references more for illustrative purposes.) This DID allow teamwork that is unprecedented, by having instant neural communication; however, it allowed a 'hacker' to come in and control the system so that he could turn off their guns at will.

Not true at all.

Nikolai Fyodorov (IIRC) was one of the first modern people that we could call a Transhumanist and he never advanced the idea of eugenics. JD Bernal was another early delver into Transhumanist ideas and again, to my knowledge, he never expressed an interest in eugenics.

Ok, directly, I suppose I misspoke... the progenitors of transhumanism are not implicitly eugenicist... however, the eugenicists believe that they will become transhuman at the expense of the majority.

I would not compare IBM's system to the neural net only because I see the neural net as a transcendent idea. This is a completely new playing field that requires new ways of thinking, you CANNOT apply the old schema to it and do it justice.

I can all but guarantee that IBM has got their hands in the development of such systems, or Microsoft... I mean, Bill Gates has talked about developing weather control technologies, genetically modified mosquitoes whose spit creates the malaria vaccine, etc... Both these companies can be argued to be eugenicist organizations at their core... though Bill Gates no longer controls Microsoft in any direct fashion, though I don't doubt that he's still got some clout there. Kinda like how Putin stepped down to work behind the scenes of... whoever Russia's "president" is.

I do support the exploration of technological modification and enhancement of the human body, but that is still a maybe for now. This is a question that I feel could be broached more completely once we see what the neural net can bring us.

Yes, modifications to bodily functions... eyes that can see infrared, better than 20/20 vision, muscles with double the strength and speed, greater lung capacity, etc... all these things are really hard to turn down, since they maintain your individuality... your soul.

If you're talking about direct communication through the brain and its signals... it's a double-edged sword.

Then tell me how, specifically, the neural net can be abused.

Keep in mind, you can download information from other people but your node is like a radio; it sends, receives, and interprets data only. If you lie, it will instantly be recognized as such once it's checked against existing pools of knowledge.

With virtually every computerized technological advancement, there have been hackers that could 'work the system'.... Ultimately, it depends on the infrastructure of such a system.
 
Now, I'm in no way an expert on this, but if it's hooked directly into your brain then, ultimately, a brain signal is at its base an electrical current running through the neurons. So I'm not so sure that, once you have an implant in your brain that can interpret signals, the 'code' couldn't also be worked out to send whatever signals you want the brain to interpret. Also, since the brain doesn't have any real 'senses' in itself and is rather an information processing center, once the implants are there you might not have the capacity to differentiate your own thoughts from those you are 'receiving'.
Tagging information to sort it is something your brain is used to doing. That is something that would have to be addressed with this technology.
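
Conceptually, something like this: every received signal arrives wrapped with its source, so it can never be mistaken for one of your own thoughts. The names below are invented purely for illustration.

Code:
# A toy sketch of tagging: every piece of incoming data carries provenance,
# so the implant never hands the brain an unlabeled thought. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class TaggedSignal:
    content: str
    source: str      # "self" vs. an external node id
    external: bool

def receive(content: str, source_id: str) -> TaggedSignal:
    """Wrap anything coming over the link so it is clearly marked as external."""
    return TaggedSignal(content, source_id, external=True)

incoming = receive("the meeting moved to 3pm", source_id="node-4412")
print(incoming)  # TaggedSignal(content='the meeting moved to 3pm', source='node-4412', external=True)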

There's an old anime movie I remember watching, 'Ghost in the Shell'... that movie had precisely that, where the 'internet' was a plug in the back of your head. There you could have telepathic communication, data transference, etc... and those were the well-accepted parts... but the problem was brought up in the film of people 'hacking in' to people's brains, implanting them with 'tasks', and completely remaking their lives. In one scene, a hacker had manipulated about 20 people into simultaneously and independently attempting to assassinate a political figure, each without being aware of the others.

Which I'm just using to illustrate my main point that technology can be used to both empower and enslave...
The connections in that movie were also directly into the entire brain. It was like having a computer augment your brain rather than the simple send/receive device that I have in mind. I'm familiar with the movie.

Ultimately, it depends on the process through which the technology advances... I could see a 'neural net' taking shape as either a 'p2p' type system, where you would be networked between individuals in a cloud of connections that people could jump in and out of at will..., or a 'central hub' type system, where everyone hooks into the neural network and makes the connection with a central hub... In the latter case, it would be much easier to control and track the flow of information, and propaganda would no longer be 'necessary' because it would be programmed and added in such a way that it would be completely undetectable; people promoting 'undesirable' thoughts could be expelled from the system and dealt with.
Except you'd need machines that could interpret, decrypt, then re-transmit information in a form recognizable to our brains. That is a far more complex process than simply converting information for transfer.

I agree that such a system, as throughout history, would collapse under the weight of its own tyranny. The relevance of the quotes, though, deals more with how this process of technological evolution of the species takes place. If it happens through consent, then it might be possible to create this system... otherwise this would be implemented through conquest.
That would not be my preferred method of implementing such a system. Ideally, everyone who wants to take part would do so and society would eventually simply exclude those who did not.

Ummm... just so you know, good portions of the 'suggestions' from this book are in different phases of implementation.
Dangerous Ideas | Big Think -

Look at the bottom one, drugging the drinking water. Which, I should add http://www.sciencelab.com/xMSDS-Lithium-9927559 just so we're on the same page about what's being suggested.
I disagree with it, but again I don't see it moving beyond the idea stage.

You're assuming that this technological advancement will be allowed for the billions of people on the planet.
Technology is not a herd of cattle that can be controlled and moved at will. ALL technology eventually reaches the hands of the common people.

Oh, I don't doubt that at all... There's a videogame I once played called 'Metal Gear Solid 4: Guns of the Patriots' where ALL soldiers were implanted with processors that tracked everything down to the direction you're facing, your location, the amount of oxygen in your blood, etc... which was sent to a central server that, through its AI, would issue directives and orders to the appropriate people... on both sides of the conflict. Then, through another layer of AI, it was initiating further conflicts around the world. (Again, I bring up these types of references more for illustrative purposes.) This DID allow teamwork that is unprecedented, by having instant neural communication; however, it allowed a 'hacker' to come in and control the system so that he could turn off their guns at will.
As entertaining a scenario as that makes for a terrible video game, I don't see that it has relevance to this concept.

Ok, directly, I suppose I misspoke... the progenitors of transhumanism are not implicitly eugenicist... however, the eugenicists believe that they will become transhuman at the expense of the majority.
Transhumanism is a specific philosophy that means a belief in advancing humanity via technological and scientific means, full stop. Eugenics is NOT part of Transhumanist philosophy, in fact the vast majority of Transhumanism stresses that this kind of progress can help EVERYONE, not just those society values most.

There are eugenicists who embrace the idea of using technology to assist in a eugenics program, but that is not Transhumanism.

I can all but guarantee that IBM has got their hands in the development of such systems, or Microsoft... I mean, Bill Gates has talked about developing weather control technologies, genetically modified mosquitoes whose spit creates the malaria vaccine, etc... Both these companies can be argued to be eugenicist organizations at their core...
Being greedy and profiteering is not the same as implementing or advocating for eugenics.

Yes, modifications to bodily functions... eyes that can see infrared, better than 20/20 vision, muscles with double the strength and speed, greater lung capacity, etc... all these things are really hard to turn down, since they maintain your individuality... your soul.
Not even anything so lofty. But the ability to cure, correct, or compensate for any physical ailment the human body could suffer. That, to me, is enough to welcome Transhuman technologies.

I don't believe in clinical immortality, though.

If you're talking about direct communication through the brain and its signals... it's a double-edged sword.
How so?

With virtually every computerized technological advancement, there have been hackers that could 'work the system'.... Ultimately, it depends on the infrastructure of such a system.
True, but again, as I said before, you need a machine that can essentially "fake" being a brain in a way that can't be detected by other people. Such technology is a huge step beyond a simple network, and by the time that technology develops, the world can keep an eye on its progress.
 
Tagging information to sort it is something your brain is used to doing. That is something that would have to be addressed with this technology.

Ya, well, this is still about 90% speculation at this point either way. I mean, 'The Matrix' concept was hooking your brain into a computer, and that illustrates a key point: when you're dealing with electrical signals, reality is simply what your brain perceives. Though, I could see the initial phases of this being rolled out as video games.

The connections in that movie were also directly into the entire brain. It was like having a computer augment your brain rather than the simple send/receive device that I have in mind. I'm familiar with the movie.

This is true... it also goes into the downside of such augmentations being the need to go through regular maintenance.

Except you'd need machines that could interpret, decrypt, then re-transmit information in a form recognizable to our brains. That is a far more complex process than simply converting information for transfer.

I can't imagine that the brain encrypts information... but I do agree that what I'm talking about is definitely a complex process... Though I don't doubt that, with sufficient technology, it would not be impossible.

That would not be my preferred method of implementing such a system. Ideally, everyone who wants to take part would do so and society would eventually simply exclude those who did not.

That's precisely the issue: it will become a situation of 'haves' and 'have-nots'... and this situation, throughout human history, has led to one group wiping out the other. That's the basis of my 'fears' about where this is going.

Technology is not a herd of cattle that can be controlled and moved at will. ALL technology eventually reaches the hands of the common people.

Yes, if one group can be kept from thinking of themselves as the 'master race' and of the rest as 'undeserving' of such technology... which would lead to an inevitable conflict.

As entertaining a scenario as that makes for a terrible video game, I don't see that it has relevance to this concept.

The relevance would be that, well... take the technology discussed in the video game, add in the 'neural net' connecting the minds of individuals, and you could very well end up with a 'Borg'-like society where a 'master AI' controls humanity in perpetuity.

Transhumanism is a specific philosophy that means a belief in advancing humanity via technological and scientific means, full stop. Eugenics is NOT part of Transhumanist philosophy, in fact the vast majority of Transhumanism stresses that this kind of progress can help EVERYONE, not just those society values most.

Now, as much as I agree, I also know human nature well enough to know that when there are two classes, once the rift between the two becomes too great, there's a conflict. 'Transhumans' who have been improved could very well hold an overwhelming advantage over the rest of humans, and we'd end up with a situation as earth-shattering as when the Neanderthals were wiped out by Homo sapiens (though I'm only assuming any ACTUAL conflict).

There are eugenicists who embrace the idea of using technology to assist in a eugenics program, but that is not Transhumanism.

Being greedy and profiteering is not the same as implementing or advocating for eugenics.

I disagree somewhat. While I agree that IBM and other tech companies (not all of which have such bloody hands) are greedy and will profiteer if presented with the opportunity to do so, those in charge of IBM believed in Hitler's ideas of a 'master race' and a 'slave race'. Now, I couldn't say with certainty that IBM is STILL a 'eugenics' organization, though it is part of the company's history.

So, while I'm willing to agree with you that they are two separate concepts, they are somewhat interconnected.

Not even anything so lofty. But the ability to cure, correct, or compensate for any physical ailment the human body could suffer. That, to me, is enough to welcome Transhuman technologies.

I don't believe in clinical immortality, though.

How so?

It's a double-edged sword, in that the technology can be used to enlighten mankind, while the same technology in the wrong hands can be used to enslave mankind.

True, but again, as I said before, you need a machine that can essentially "fake" being a brain in a way that can't be detected by other people. Such technology is a huge step beyond a simple network, and by the time that technology develops, the world can keep an eye on its progress.

Yes, I think it would be more about the quality of the signal than the content... but then again, I can only imagine the kinds of difficulties involved in even getting as far as a neural network of more than one mind. I imagine that the task becomes exponentially more complex when linking several minds simultaneously.
 
Chicken!

Hoplite, are you ignoring me? You could have saved this thread countless lines of text by simply answering my question. Name something, anything, in this universe that is equal? Anything... How can any AI be equal, or able to apply equality with no frame of reference?



Tim-
 