Robotic Executioners


unixknight

7 minutes ago, Traveler said:

Nothing here causes pain to A.I.

You don't regard being treated as a slave as causing pain?  Let's hope the A.I.s agree with you.

7 minutes ago, Traveler said:

As I stated before - it is stupidity that is threatened by intelligence.  Not so much the other way around.

Never underestimate the danger of stupidity when it outnumbers you.  Intelligence alone is a poor defense against an angry mob.

Also, possessing intelligence is no guarantee of possession of power.

For my part, as a software engineer and a student of professional A.I. development, I'm not prepared to be complacent.


4 minutes ago, unixknight said:

You don't regard being treated as a slave as causing pain?  Let's hope the A.I.s agree with you.

Never underestimate the danger of stupidity when it outnumbers you.  Intelligence alone is a poor defense against an angry mob.

Also, possessing intelligence is no guarantee of possession of power.

For my part, as a software engineer and a student of professional A.I. development, I'm not prepared to be complacent.

As a software engineer who consults in industrial automation, robotics and artificial intelligence, I often encounter individuals concerned with A.I. automation.  I also realize that there are legends (like Paul Bunyan) created out of fear of technical advances, but there seems to be no historical reality to such concerns beyond certain professions being replaced and becoming obsolete.  I believe there is more reason for concern over corporate and political possession of power - not to leave out priestcrafts.  So to quote scripture - "there is nothing new under the sun".

 

The Traveler


As a kid I remember reading a book titled "Colossus," written in the 50's or 60's.  It was about a giant supercomputer built in Carlsbad Caverns for cooling reasons.  The supercomputer started to think for itself and took over the world.  Cell phones in use today are smarter, possess more computing power and are better connected.  Recent innovations in A.I. highlight the advantages of the hive (distributed) mind over the supercomputer, which means smartphones are more of an A.I. threat than even a super quantum processor.  If anyone is concerned, it is a simple matter of going off the grid and ditching your cell phone.

 

The Traveler


1 hour ago, Traveler said:

As a software engineer who consults in industrial automation, robotics and artificial intelligence, I often encounter individuals concerned with A.I. automation.  I also realize that there are legends (like Paul Bunyan) created out of fear of technical advances, but there seems to be no historical reality to such concerns beyond certain professions being replaced and becoming obsolete.  I believe there is more reason for concern over corporate and political possession of power - not to leave out priestcrafts.  So to quote scripture - "there is nothing new under the sun".

I'm glad you're industry savvy.

That's why I'm bewildered that you see A.I. as being just another thing with no unique properties to it that may make it more potentially dangerous.  To me, it's like saying "well the Chinese have had explosives for many centuries.  What makes an H-bomb so different? It's just a new version of an old technology.  Why are you scared of it?"

1 hour ago, Traveler said:

As a kid I remember reading a book titled "Colossus," written in the 50's or 60's.  It was about a giant supercomputer built in Carlsbad Caverns for cooling reasons.  The supercomputer started to think for itself and took over the world.  Cell phones in use today are smarter, possess more computing power and are better connected.  Recent innovations in A.I. highlight the advantages of the hive (distributed) mind over the supercomputer, which means smartphones are more of an A.I. threat than even a super quantum processor.  If anyone is concerned, it is a simple matter of going off the grid and ditching your cell phone.

There's a pretty strong case for doing exactly that.


In our attempts to one up each other...

I am NOT a software engineer and have NO education at all in A.I., and in fact am highly unqualified and unable to meaningfully contribute to a highly technical discussion of A.I.

BUT...

I know of someone who has a PhD in Computer Science and another in Computer Engineering, who has been deeply involved with the creation and development of A.I. for several years and is considered one of the leading experts in the field...

Gosh...don't we all feel better now...

:bornlate:

Jokes and egos aside now...hopefully...

TBH...I think there are many different ideas about A.I. currently.   Most of the time, robots do as they are programmed to.  If you have someone program it to be hostile, it will be hostile.  If you program it to be peaceful...it will be peaceful.

The far-lookers I've listened to seem to think that if A.I. ever gets to the point of sentience (not something I've heard of being accomplished, or even expected near term), it will be more peaceful and more beneficial, taking an outlook of trying to help the world rather than destroy it.

I think, though, that if someone hostile programmed it to be hostile and it gained such sentience...even if the final outcome is that it attains some sense of trying to create a better world, we may at first face an Armageddon of our own making.  A program that is built to kill and destroy, retains that function while growing beyond its original framework, and attains the right amount of power could be quite lethal, I think.

Seeing how the military likes to be at the forefront of technology in many areas pertaining to death, defense, and destruction, with the right focus and funding it may be the one that actually develops a working and lethal A.I. before other viable A.I.s have enough funding to be created.


From a religious perspective - neither ancient nor modern prophets include A.I. among their lists of deadly or evil concerns.  The challenges to humans have hardly changed throughout time.  What anciently was evil is still evil, and what was good and beneficial has remained amazingly constant as well.  Love, compassion, mercy, respect for life and care of our stewardship have always been good.  Knowledge has always been a noble pursuit.  It is interesting to me that as Lehi took his family to the promised land he was given an A.I. device of divine origin.  Joseph Smith was also given access to A.I. devices for translating the Book of Mormon.

At the same time there are hints that Satan also uses and fosters A.I. devices for nefarious purposes.  It is logical to me that all things are given and created for the use and benefit of man - yet there is a dark side.  The benefit comes from light, truth, understanding and intelligence.  The maledictions come from the opposition, or opposite, which is darkness, lies, superstitions and foolishness.

There is no intelligence - natural, living, spiritual, physical or artificial - that is greater than G-d.  In scripture we are told that real intelligence is closely related to light and should be pursued and desired.  Intelligence, I believe, is the effort to understand the order of all things.  I believe that intelligence is friendly to all things intelligent - that there is intelligent order to all things intelligently ordered.  I believe the highest orders of intelligence are taught by Jesus Christ and are the spirit of Christ - which is the greatest (and if you will, the most powerful) intelligence that exists or can exist.

As an engineer and scientist in the field of industrial automation, robotics and artificial intelligence, I believe in and have experienced spiritual guidance in utilizing or creating A.I. in my work for benefit.  I realize that such effort could be turned to evil intent, just as a sword can be used as an instrument of justice and liberation or as an instrument of oppression and enslavement.  But it is not the devices but the intent of the user of the devices that determines what is beneficial and what is malediction.  I have not yet figured out how to make or configure an A.I. device to function on the goodness and virtue of a user.  In such things I have learned to concentrate on my own goodness and virtue and leave (even encourage) the same endeavor of judgment to each "intelligent" individual, to evaluate and judge themselves.

 

The Traveler


4 hours ago, JohnsonJones said:

In our attempts to one up each other...

It's not about one-upmanship.  It's about establishing one's credentials as knowledgeable in the field of discussion.  Neither @Traveler nor I said "I'm more knowledgeable than you." (Though in fairness, if he said it I'd concede that particular point.)  We're just letting each other know what we know.   

4 hours ago, JohnsonJones said:

TBH...I think there are many different ideas about A.I. currently.   Most of the time, robots do as they are programmed to.  If you have someone program it to be hostile, it will be hostile.  If you program it to be peaceful...it will be peaceful.

While I agree with what you said about the military influence on the outlook of an A.I., I think you're a bit off in your understanding of what an A.I. is, in terms of software.  An A.I. isn't like other computer programs, in that yes... an ordinary program will instruct a computer to do EXACTLY what you tell it to do (not necessarily what you WANT it to do, but that's because of errors and bugs).  An A.I. is different because it has algorithms that let it "learn" (heuristics).  That means it has the ability to modify its own behavior.  The example I mentioned before, where two A.I.s developed their own language for communicating with each other... that isn't something they were programmed to do.  They just did it because it was a more efficient means of communication than the one they had previously.
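
To make that distinction concrete, here is a tiny illustrative sketch in Python - my own toy example, not code from any actual A.I. project, with invented action names and payoff numbers.  The fixed-rule function always does exactly what it was written to do; the simple bandit-style learner shifts its own behavior toward whatever its feedback rewards, without anyone programming that choice explicitly.

```python
import random

# A fixed-rule program always does exactly what it was told.
def fixed_controller(signal):
    # Hard-coded rule: respond "A" to even signals, "B" to odd ones. Forever.
    return "A" if signal % 2 == 0 else "B"

# A (very) simple learning agent adjusts its own behavior based on feedback.
class EpsilonGreedyLearner:
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon                      # how often it still explores
        self.value = {a: 0.0 for a in actions}      # running estimate of each action's payoff
        self.count = {a: 0 for a in actions}

    def choose(self):
        if random.random() < self.epsilon:          # occasionally try something new
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)  # otherwise exploit what has worked

    def learn(self, action, reward):
        # Incremental average: behavior shifts as experience accumulates.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

if __name__ == "__main__":
    # Hypothetical environment: action "B" quietly pays off better than "A".
    payoff = {"A": 0.3, "B": 0.7}
    agent = EpsilonGreedyLearner(["A", "B"])
    for _ in range(1000):
        a = agent.choose()
        agent.learn(a, payoff[a] + random.uniform(-0.1, 0.1))
    print(agent.value)  # the agent converges on "B" without being told to prefer it
```

Scale that kind of self-modification up from two actions to language, planning and perception, and you get the sort of emergent behavior described above.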

The concern I have is that A.I.s will get to the point where their intelligence is effectively equal to, or greater than, that of humans.  Combine that with their ability to operate orders of magnitude faster than humans, and it isn't hard to imagine how that genie could get out of the bottle.

17 minutes ago, Traveler said:

From a religious perspective - neither ancient nor modern prophets include A.I. among their lists of deadly or evil concerns.  The challenges to humans have hardly changed throughout time.  What anciently was evil is still evil, and what was good and beneficial has remained amazingly constant as well.  Love, compassion, mercy, respect for life and care of our stewardship have always been good.  Knowledge has always been a noble pursuit.  It is interesting to me that as Lehi took his family to the promised land he was given an A.I. device of divine origin.  Joseph Smith was also given access to A.I. devices for translating the Book of Mormon.

I get what you're saying here, but no prophet made the H-Bomb either, at least not until after that genie was out of the bottle.

17 minutes ago, Traveler said:

At the same time there are hints that Satan also uses and fosters A.I. devices for nefarious purposes.  It is logical to me that all things are given and created for the use and benefit of man - yet there is a dark side.  The benefit comes from light, truth, understanding and intelligence.  The maledictions come from the opposition, or opposite, which is darkness, lies, superstitions and foolishness.

Agreed.  Now combine that with what @JohnsonJones said.

17 minutes ago, Traveler said:

There is no intelligence - natural, living, spiritual, physical or artificial - that is greater than G-d.  In scripture we are told that real intelligence is closely related to light and should be pursued and desired.  Intelligence, I believe, is the effort to understand the order of all things.  I believe that intelligence is friendly to all things intelligent - that there is intelligent order to all things intelligently ordered.  I believe the highest orders of intelligence are taught by Jesus Christ and are the spirit of Christ - which is the greatest (and if you will, the most powerful) intelligence that exists or can exist.

While I agree that God is the ultimate intelligence, I don't agree that intelligence is somehow automatically friendly to all things intelligent.  You mentioned yourself in this very post that A.I.s can be used for nefarious purposes.  That's a contradiction.

17 minutes ago, Traveler said:

As an engineer and scientist in the field of industrial automation, robotics and artificial intelligence, I believe in and have experienced spiritual guidance in utilizing or creating A.I. in my work for benefit.  I realize that such effort could be turned to evil intent, just as a sword can be used as an instrument of justice and liberation or as an instrument of oppression and enslavement.  But it is not the devices but the intent of the user of the devices that determines what is beneficial and what is malediction.  I have not yet figured out how to make or configure an A.I. device to function on the goodness and virtue of a user.  In such things I have learned to concentrate on my own goodness and virtue and leave (even encourage) the same endeavor of judgment to each "intelligent" individual, to evaluate and judge themselves.

It's great that you approach it that way personally.  Keep up the good work.  That said, given the fact that evil can also create, use and influence A.I.s, do you not see any wisdom in creating rules and limits now, before the problem becomes too big?


23 hours ago, unixknight said:

 

It's great that you approach it that way personally.  Keep up the good work.  That said, given the fact that evil can also create, use and influence A.I.s, do you not see any wisdom in creating rules and limits now, before the problem becomes too big?

First some thoughts about A.I.  --  Have you ever been sitting at a red stop light while there are no cars to move with the green light, and wondered whether with all our technology such things could not be prevented, or at least improved?  Have you ever gone to the doctor and wondered whether a diagnosis could be something other than an opinion - couldn't it involve less guesswork?  If someone is taking multiple prescription drugs (lots of older folks do), it is likely that at least one of the prescriptions is interfering with another, and if they take something for a cold, there is a high probability that it will interfere with their prescriptions.  One of the leading causes of health problems in the USA is diet - plus every individual is an individual - wouldn't it be neat if your phone offered advice specific to you, to make the most of what you eat at each meal and better compensate for your lifestyle?  Wouldn't it be cool if in our education pursuits each individual could be recognized for their talents and learning types, and then educated to maximize their potential, abilities and interests?  (A side note about education - 50% of the geniuses in the USA are not recognized in our educational system until they reach the college level - and some suggest that a large number of geniuses never go to college.)
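
As a toy illustration of the prescription-interaction point above: with several medications the number of drug pairs grows quickly, so an assistant only has to scan each pair against an interaction table.  The drug names and the table below are entirely invented - this is a sketch of the idea, not clinical logic.

```python
from itertools import combinations

# Invented, illustrative interaction table (NOT real clinical data).
KNOWN_INTERACTIONS = {
    frozenset({"drug_a", "drug_b"}): "reduces effectiveness",
    frozenset({"drug_a", "cold_remedy_x"}): "raises blood pressure",
}

def check_interactions(prescriptions):
    """Return every known pairwise conflict in a patient's medication list."""
    warnings = []
    for first, second in combinations(prescriptions, 2):
        note = KNOWN_INTERACTIONS.get(frozenset({first, second}))
        if note:
            warnings.append((first, second, note))
    return warnings

# Four medications already mean six pairs to check; ten medications mean forty-five.
print(check_interactions(["drug_a", "drug_b", "drug_c", "cold_remedy_x"]))
```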

I am constantly caused to wonder why - especially in light of history - so many think that utilizing increased intelligence is going to make their life more difficult, less desirable or limit their freedom when such things usually result from just the opposite.  

As for rules and limits - it is a fear tactic of governments to try to set rules and limits before there are problems.  If we as a society were so clever, why can't we set rules and limits for problems we already know exist - like corruption in politics?  I am convinced that approaching anything with ignorance and fear (especially concerning just and beneficial law) is just as likely, or even more likely, to cause more problems than the initial perceived crisis.  We do have a theoretical approach to causing complex system changes, called chaos theory.  Chaos theory is a basis of a lot of A.I. models - and the circular logic of arguing that using such logic is problematic is itself problematic.  But even then, there are applications of chaos theory that support A.I., and in fact the cases that do not are, as I understand it, less well defined and understood.

Nevertheless, I am willing to discuss and consider possible limitations to A.I. - if someone can demonstrate something beyond the mere possibility that it can be used destructively, and that it cannot also, logically, just as easily be used constructively.

 

The Traveler


The average lifespan of a civilization is around 330-ish years.  The US has been here for 240-ish.  I figure that within a handful of generations, the civilization we currently recognize as the US will not be greater than it is now.

I'd say any AI-uprising-related factors in that happening are just part of an overall trend, and not the other way around.

In other news, I really like the film Interstellar, and hope that if there's a bleak future coming, it'll look like that.


52 minutes ago, Traveler said:

First some thoughts about A.I.  --  Have you ever been sitting at a red stop light while there are no cars to move with the green light and wondered if with all our technology such things could not be prevented or at least improved?

Brother, you don't have to sell me on the utility of A.I.  One of the projects I've worked on involved a system to give cardiac researchers access to heuristic algorithms for EKG analysis. 

52 minutes ago, Traveler said:

As for rules and limits - it is a fear tactic of governments to try to set rules and limits before there are problems.  If we as a society were so clever, why can't we set rules and limits for problems we already know exist - like corruption in politics?  I am convinced that approaching anything with ignorance and fear (especially concerning just and beneficial law) is just as likely, or even more likely, to cause more problems than the initial perceived crisis.

It's true that Governments do this, but warning of possible problems isn't the sole purview of Government.  It's also unclear to me why you'd assume that any effort to impose rules on A.I. must somehow be tied to Government corruption.  Further, this is not the first time you've hinted at the idea that these concerns can only be based in ignorance.  Is that your argument?

52 minutes ago, Traveler said:

Nevertheless, I am willing to discuss and consider possible limitations to A.I. - if someone can demonstrate something beyond the mere possibility that it can be used destructively, and that it cannot also, logically, just as easily be used constructively.

Nobody has said it couldn't be used constructively.  Have you thought that was my point all this time?


On 2/22/2019 at 4:39 PM, unixknight said:

Brother, you don't have to sell me on the utility of A.I.  One of the projects I've worked on involved a system to give cardiac researchers access to heuristic algorithms for EKG analysis. 

It's true that Governments do this, but warning of possible problems isn't the sole purview of Government.  It's also unclear to me why you'd assume that any effort to impose rules on A.I. must somehow be tied to Government corruption.  Further, this is not the first time you've hinted at the idea that these concerns can only be based in ignorance.  Is that your argument?

Nobody has said it couldn't be used constructively.  Have you thought that was my point all this time?

Governments are not the only ones overly concerned and stifling progress.  Without going into too much detail - the type of A.I. that most seem to be concerned about is what is identified as "Deep Learning".  Currently Deep Learning is image based, and the logic uses the image to determine how to respond.  For example, someone's heart rate could be converted to a color or a bar that changes with heart rate (becomes longer or shorter, or a different color).  The A.I. stores the data, begins to catalog changes, and comes to conclusions about any number of things - such as whether someone is lying, becoming angry and so on.  The A.I. is dependent on two aspects.  One is the data repository and the other is the sensed reaction from observed targets.
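
A rough, hypothetical sketch of that encoding - a heart-rate trace rendered as a bar image, plus a growing repository of labelled examples standing in for a real deep network - might look like this in Python (all labels and numbers are invented for illustration):

```python
import numpy as np

def heart_rate_to_image(rates, height=64, width=64, max_bpm=200):
    """Render a heart-rate trace as a simple bar-chart image (1 = lit pixel)."""
    img = np.zeros((height, width))
    cols = np.linspace(0, width - 1, num=len(rates)).astype(int)
    for col, bpm in zip(cols, rates):
        bar = int(np.clip(bpm / max_bpm, 0, 1) * (height - 1))
        img[height - 1 - bar:, col] = 1.0     # taller bar = higher heart rate
    return img

class NearestCentroidModel:
    """Toy stand-in for a deep model: stores labelled images and averages them."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def add_example(self, image, label):            # the "data repository"
        self.sums[label] = self.sums.get(label, 0) + image
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, image):                       # respond based on the image alone
        centroids = {lbl: s / self.counts[lbl] for lbl, s in self.sums.items()}
        return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - image))

model = NearestCentroidModel()
model.add_example(heart_rate_to_image([70, 72, 71, 69, 70]), "calm")
model.add_example(heart_rate_to_image([95, 110, 120, 130, 125]), "agitated")
print(model.predict(heart_rate_to_image([100, 115, 118, 128, 122])))  # likely "agitated"
```

A real deep-learning system would replace the nearest-centroid step with a trained neural network, but the two ingredients named above - the data repository and the sensed reactions of observed targets - play the same roles.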

I have provided a rather simple example, but - as anyone connected to logic understands - simple concepts can become very complex in real applications.  There is another kind of A.I. that we use almost exclusively in industrial applications: controlled and applied learning.  An example has to do with robotic transport vehicles (think automated, unmanned fork trucks).  Here the automated logic is presented with data and given preset intelligent responses.  The A.I. oversees the work to be done and assigns priorities - such as critical needs, and increasing priority based on how long the work has waited to be done.  The manufacturing area is subdivided into work-needed areas, and the A.I. maintains a database to keep track of current modifications and past trends to determine transport vehicle assignments.
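
A bare-bones sketch of that dispatch idea - preset priorities plus ageing by wait time, with invented job names and an arbitrary ageing rate - could look like this; a production system would of course also track zones, vehicle positions and historical trends:

```python
import time

AGEING_RATE = 0.5   # invented: extra priority points per minute a job has waited

class TransportDispatcher:
    """Toy dispatcher: preset priorities, plus ageing so nothing waits forever."""
    def __init__(self):
        self.jobs = []   # each job: (name, base_priority, submitted_at)

    def add_job(self, name, base_priority):
        self.jobs.append((name, base_priority, time.time()))

    def aged_priority(self, job, now):
        name, base, submitted = job
        return base + AGEING_RATE * (now - submitted) / 60.0

    def next_job_for(self, vehicle):
        """Hand the highest effective-priority job to an idle vehicle."""
        if not self.jobs:
            return None
        now = time.time()
        job = max(self.jobs, key=lambda j: self.aged_priority(j, now))
        self.jobs.remove(job)
        print(f"{vehicle}: picking up '{job[0]}'")
        return job

dispatcher = TransportDispatcher()
dispatcher.add_job("move finished rolls to shipping", base_priority=5)   # critical
dispatcher.add_job("restock raw pulp at line 3", base_priority=2)        # can wait, but ages
dispatcher.next_job_for("AGV-1")
```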

I did an A.I. project for a paper mill - a toilet paper and paper towel manufacturing facility - that was replacing its manned fork trucks with unmanned automated vehicles.  The unmanned vehicles were slower, but we were able to do more work with fewer vehicles.  The union workers were upset because they thought they were losing jobs.  The reality was that not only did the manufacturing facility increase the number of workers, the average wage of the workers greatly increased.

Perhaps it is possible that A.I. will become a problem - like putting the fork truck drivers out of work.  But as I have explained, A.I. lives in a very different world (universe) and does not care who gets paid for its work.  A.I. does not care about money (which is one reason I have learned not to care either).  It does care about and follow efficiency, and it will identify who is not needed in a factory - surprise, it is never those who are working - most often A.I. will identify management positions that do nothing.  Generally it is my observation that individuals who contribute little and expect or demand much are in the most danger of being replaced by A.I. - but that is exactly what karma and religion are about: logging the difference between good (that which contributes) and evil (that which requires resources but does not contribute anything useful or needed).

 

Anyway some of my thoughts.

 

The Traveler


11 minutes ago, unixknight said:

Very interesting and good points.  Thanks.

Now, would you mind responding to my question?

I have a hard time understanding what rules ought to apply.  Perhaps you can suggest an example.  If a rule is an intelligent rule, should not the A.I. be as capable as we are of determining any intelligent rules - and, since A.I. is completely logically and intelligently based, more capable of self-imposing "intelligent" rules for intelligence?

 

The Traveler


3 minutes ago, Traveler said:

I have a hard time understanding what rules ought to apply.  Perhaps you can suggest an example.  If a rule is an intelligent rule, should not the A.I. be as capable as we are of determining any intelligent rules - and, since A.I. is completely logically and intelligently based, more capable of self-imposing "intelligent" rules for intelligence?

I did so in my  third post of this thread.

https://thirdhour.org/forums/topic/67051-robotic-executioners/?do=findComment&comment=1015891

Still looking for a response to my question, because I think it's important to understand in the context of the discussion.  Did you think my point was that A.I. couldn't be used constructively?


4 minutes ago, unixknight said:

I did so in my  third post of this thread.

https://thirdhour.org/forums/topic/67051-robotic-executioners/?do=findComment&comment=1015891

Still looking for a response to my question, because I think it's important to understand in the context of the discussion.  Did you think my point was that A.I. couldn't be used constructively?

I think the laws you suggest are too restrictive and vague.  If you believe such laws should define your own actions, then I think we are on to something - generally I oppose laws where the maker or definer of the laws intends to exempt themselves.  If you think such laws are just and apply to humans interfacing with A.I., then the laws may have a possibility of being just.

 

The Traveler


8 minutes ago, Traveler said:

I think the laws you suggest are too restrictive and vague.  If you believe such laws should define your own actions, then I think we are on to something - generally I oppose laws where the maker or definer of the laws intends to exempt themselves.  If you think such laws are just and apply to humans interfacing with A.I., then the laws may have a possibility of being just.

Of course those ideas should apply to living humans as well.  The difference is we can control what A.I.s do, not so much humans.

So you won't be answering my question, then.  I'll take that as a "yes," you did misunderstand earlier and thought that I was pushing the idea that A.I. couldn't be used constructively.  Hopefully now we're on the same page.


  • 2 weeks later...
19 hours ago, mikbone said:

The most lethal killing machine - or organism, for that matter - when it comes to killing humans is humans.  Yes, we are the #1 cause of death for ourselves.  How sad it would be if we invented and built something better at it than we are.

 

The Traveler

