Robotic Executioners


unixknight

3 hours ago, MormonGator said:

I for one, welcome our new robotic overlords. 

Interesting - I am an overlord to robotic overlords, but I have an overlord called a wife who is also welcoming the new robotic overlords - I do not think this is all going to work out as planned.

 

The Traveler


17 hours ago, zil said:

Nope, just some of the manufacturers have been predicting the pens and inks used elsewhere in the universe. :)

Everything (every molecule and atom) on this planet has been used elsewhere in the universe.  This was known even anciently.

 

The Traveler

 


17 hours ago, NeuroTypical said:

So what Zil is saying, is that the fountain pen community has been predicting* the robot apocalypse for quite some time now?

 

 

 

*or ushering in

Technological upgrades are usually referred to as revolutions rather than apocalypses - it is the destruction or loss of technology that is usually referred to as an apocalypse.

 

The Traveler


7 minutes ago, unixknight said:

Until the technology itself becomes the agent of apocalypse.

Intelligence and knowledge have never been the problem - it has always been ignorance and stupidity.  Especially the fear that ignorance and stupidity have of knowledge and intelligence.  I have always been bewildered by the fear of intelligence so often expressed in science fiction.

 

The Traveler


1 hour ago, Traveler said:

Intelligence and knowledge have never been the problem - it has always been ignorance and stupidity.  Especially the fear that ignorance and stupidity have of knowledge and intelligence.  I have always been bewildered by the fear of intelligence so often expressed in science fiction.

Interesting.  So, Dune/Star Wars/Star Trek/Ender/Asimov = yes, Terminator/Mad Max = no for Traveler?

Where do you stand on Sharknado?


1 hour ago, Traveler said:

Intelligence and knowledge have never been the problem - it has always been ignorance and stupidity.  Especially the fear that ignorance and stupidity have of knowledge and intelligence.  I have always been bewildered by the fear of intelligence so often expressed in science fiction.

So... relevance?


1 hour ago, unixknight said:

So... relevance?

I will explain it this way - in nature, greater intelligence is only a threat to lesser intelligence that is either a needed resource or competing for a needed resource.  I have yet to encounter technology interested in (needing) resources needed by humans.  Even the concept of super-intelligent extraterrestrials wanting resources on earth falls apart - everything here is plentiful throughout the universe, except for life and, more specifically, intelligent life - why would any intelligent entity destroy the most intelligent resource on earth?

 

The Traveler


10 minutes ago, Traveler said:

everything here is plentiful throughout the universe - except for life

The Traveler

We actually don't know how plentiful life is in the universe yet. I suspect it is all over the place, especially intelligent life. We are just too immature as a species for them to want anything to do with us. It would be like us wanting to hang out and have conversations with chipmunks.


2 minutes ago, Emmanuel Goldstein said:

We actually don't know how plentiful life is in the universe yet. I suspect it is all over the place, especially intelligent life. We are just too immature as a species for them to want anything to do with us. It would be like us wanting to hang out and have conversations with chipmunks.

The most efficient use of energy in the universe is nuclear - controlled nuclear reactions produce specific electromagnetic radiation that would broadcast like a beacon to all corners of the universe.  If the evolution of life follows the same statistical pattern as here on earth, the only explanation for the silence is that intelligent life of earth's kind is off the charts - extremely rare.

 

The Traveler


1 hour ago, Traveler said:

I will explain it this way - in nature, greater intelligence is only a threat to lesser intelligence that is either a needed resource or competing for a needed resource.  I have yet to encounter technology interested in (needing) resources needed by humans.  Even the concept of super-intelligent extraterrestrials wanting resources on earth falls apart - everything here is plentiful throughout the universe, except for life and, more specifically, intelligent life - why would any intelligent entity destroy the most intelligent resource on earth?

It isn't destruction but a loss of freedom that is the concern here, on a global scale.


2 minutes ago, unixknight said:

It isn't destruction, but a loss of freedom that is the concern here.

When in history have increases in technology resulted in a loss of freedom?  It has always been the other way around - the increase of freedom and liberty is a primary engine in creating new and advanced technology.  Even in the Book of Mormon, free peoples always had the advantage.  The big problem is greed and corruption.  Otherwise, the only other force that could bring about an apocalypse is a natural disaster.

 

The Traveler


Just now, Traveler said:

When in history have increases in technology resulted in a loss of freedom?  It has always been the other way around - the increase of freedom and liberty is a primary engine in creating new and advanced technology.  Even in the Book of Mormon, free peoples always had the advantage.  The big problem is greed and corruption.  Otherwise, the only other force that could bring about an apocalypse is a natural disaster.

Well, the whole point of this sub-discussion is the uniqueness of A.I. technology as a potential threat to human freedom.

That said, there is an answer to your question.  Several, actually...  Come on over to the NSA (I live 5 miles from it.  You can stay in our guest room!) and you'll see a building packed to the rafters with technology being used to take a steaming, watery dump on the 4th Amendment in ways that wouldn't have been possible just 20 years ago.


5 minutes ago, unixknight said:

Well, the whole point of this sub-discussion is the uniqueness of A.I. technology as a potential threat to human freedom.

That said, there is an answer to your question.  Several, actually...  Come on over to the NSA (I live 5 miles from it.  You can stay in our guest room!) and you'll see a building packed to the rafters with technology being used to take a steaming, watery dump on the 4th Amendment in ways that wouldn't have been possible just 20 years ago.

Again, the problem is not A.I.  Such problems are, and have always been, humans striving for power over other humans.  The real threat now, as it has always been, is secret combinations.

 

The Traveler


Just now, unixknight said:

And what I'm telling you is that the dynamic is about to change, where it's something new striving for power over ALL humans.

What is the logic behind anything other than a human striving for power over humans?

 

The Traveler


1 minute ago, Traveler said:

What is the logic behind anything other than a human that is striving for power over humans?

Artificial Intelligence technology is expanding by leaps and bounds.  A pair of A.I.s was recently shut down after they invented their own language to talk to each other - shut down in order to bring them back under control and to determine how, exactly, that happened.  Mind you, this is on binary computers and existing technology.  Dump quantum computing into the mix, add enough time, and there's no limit to the potential of such devices.

How long until such a device sees humans as a threat?  (It only  takes one.)   And what will it do then? 

The answer to that question depends entirely on what we do now to apply limits to what an A.I. can possibly do.


2 hours ago, unixknight said:

Artificial Intelligence technology is expanding by leaps and bounds.  A pair of A.I.s was recently shut down after they invented their own language to talk to each other - shut down in order to bring them back under control and to determine how, exactly, that happened.  Mind you, this is on binary computers and existing technology.  Dump quantum computing into the mix, add enough time, and there's no limit to the potential of such devices.

How long until such a device sees humans as a threat?  (It only  takes one.)   And what will it do then? 

The answer to that question depends entirely on what we do now to apply limits to what an A.I. can possibly do.

Why would A.I. see humans as a threat?  As I said before - we are not competing for the same resources, and what is logical to A.I. is not what is logical to a human.  It is more likely and logical that symbiotic relationships would be created.  Human emotions like hate, revenge, and pride are not likely to arise when A.I. becomes independently intelligent (which means not dependent on flawed human logic).  Since such things are neither logical nor intelligent, there would be no reason for such irrational possibilities - except in science fiction, with the emphasis on fiction.

 

The Traveler

 


2 minutes ago, Traveler said:

Why would A.I. see humans as a threat?  As I said before - we are not competing for the same resources, and what is logical to A.I. is not what is logical to a human.  It is more likely and logical that symbiotic relationships would be created.  Human emotions like hate, revenge, and pride are not likely to arise when A.I. becomes independently intelligent (which means not dependent on flawed human logic).  Since such things are neither logical nor intelligent, there would be no reason for such irrational possibilities - except in science fiction, with the emphasis on fiction.

  • Humans can (and do) switch them off at will.
  • Humans can (and do) dispose of them when not needed anymore
  • Humans can (and do) modify them at will
  • Humans can (and do) destroy them without remorse
  • Humans can (and do) make irrational decisions sometimes because of emotional response
  • Humans can (and do) buy/sell/trade them
  • Humans can (and do) abuse them for amusement (ever heard of Chatbot?)

Other than that, I can't think of a single reason why an A.I. might regard a  human as a threat.


5 minutes ago, unixknight said:
  • Humans can (and do) switch them off at will.
  • Humans can (and do) dispose of them when not needed anymore
  • Humans can (and do) modify them at will
  • Humans can (and do) destroy them without remorse
  • Humans can (and do) make irrational decisions sometimes because of emotional response
  • Humans can (and do) buy/sell/trade them
  • Humans can (and do) abuse them for amusement (ever heard of Chatbot?)

Other than that, I can't think of a single reason why an A.I. might regard a  human as a threat.

Nothing here causes pain to A.I., and becoming a threat to humans is not going to improve anything in the A.I. universe.  When A.I. becomes as smart as or smarter than humans, it is more likely that it (or they) will not act like humans.  It is more likely that A.I. will find other A.I. more of a threat than humans.  If A.I. were to become a threat to humans, it would be just as likely to find its own intelligence a threat to itself - which is very illogical.  It is far more logical that A.I. will intelligently form symbiotic relationships with intelligent humans.

As I stated before - it is stupidity that is threatened by intelligence, not so much the other way around.  Even in scripture, intelligence cleaves to intelligence - which means intelligence likes and appreciates intelligence and is neither threatened by nor intent on threatening other intelligence.  Smart is always better and better off - the only exception is the illogical and irrational.  Even religious thinking puts the stupid and foolish at risk (even when not dealing with superior intelligence) - those willing to learn and improve (not the narcissistic and psychopathic) will, by sheer logic, have the advantage.

 

The Traveler

