Robots and artificial intelligence


Traveler


I saw in the news that a person in South Korea was killed by an industrial robot. Link: https://www.foxbusiness.com/fox-news-world/man-crushed-death-industrial-robot-confused-box-police

Since this was my life's profession and consulting business, I have had discussions with plant managers and others about the "kill zones" of robots. Sadly, I am aware of several robot kills – some so foolish and stupid it is hard to imagine what people were thinking. One of the saddest was a worker who disappeared for a week before he was discovered crushed by an ASRS robot (ASRS stands for automated storage and retrieval system). He had observed the robot and thought it had a flaw because it never used a particular location in the ASRS rack. He placed a mattress in that location and took naps – until a 3-ton load was placed there by the robot while he was sleeping on the job.

One of the great problems with AI and robots arises when humans attempt to interface outside of the robotic design, or mix human intelligence with what is intended for AI.

I believe there will be a great advantage to AI-controlled transportation vehicles, but what is left out of the notion of cars that drive themselves is the mixing of humans with AI on the same roads. Because of what I call the kill zone of robots, I do not believe that human interference is a good idea – kind of like using human fingers to change gears inside the gearbox of a car. Any human doing that kind of thing will likely lose their fingers.

As advanced as we may think our society is – there are a lot of stupid humans that must be removed from the gene pool before we will be ready for what AI and automation are bringing.

 

The Traveler


1 hour ago, Traveler said:

there are a lot of stupid humans that must be removed from the gene pool

Do you have any idea how chilling this sounds? Who decides who is “stupid”? You?

Ironically one of the most amusing ways to show the world that you have limited brainpower is by telling them that you think everyone is stupid. 


3 hours ago, LDSGator said:

Do you have any idea how chilling this sounds? Who decides who is “stupid”? You?

Ironically one of the most amusing ways to show the world that you have limited brainpower is by telling them that you think everyone is stupid. 

I recall an effort to give what were called the Darwin Awards for foolish (stupid) things people do that end up killing or seriously harming themselves. Stupid is a word that has meaning – perhaps I can give some examples. Like playing Russian roulette with a fully loaded revolver with a hair trigger. Like jumping off a 3,000-foot cliff thinking that if enough panic sets in, your superpower will kick in so you can fly. Like taking an iron bar with you on a motorcycle, thinking that in an emergency at a high rate of speed you can use the bar as an emergency stopping device by sticking it in the front wheel spokes. Or thinking that when driving your car at night and approaching an open intersection there is no need to slow down – rather, turn off your headlights to see if a car is coming from the cross direction (assuming the other driver will never think the same way and turn off their headlights as well).

I recall one Darwin Award given to a guy whose headlights went out while he was driving his pickup truck. He was smart enough to realize that the headlights went out because of a burned-out fuse. He was also stupid enough to replace the fuse with a .22 cartridge that fit in the fuse holder. When the cartridge went off it shot him, causing him to panic and drive off a bridge.

Often in my business I would be asked about making something foolproof. My response was always that making something foolproof is impossible because fools (stupid people) are far too clever.

The basis of all this is the question of what to do when something goes wrong in an automated (AI) system. I was often asked what I recommended. My response was, "What do you want to happen? Do you want a human to intervene, or do you want your automated system to resolve the question?" I would put forth that the worst possible outcome comes from an attempt to mix human intervention (input) with intended automated operations.
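To illustrate the point (a toy sketch only, with invented fault names and structure – not any real controller's design): the decision is made explicitly, per class of fault, before anything goes wrong. Either the automation recovers on its own, or it safe-stops and waits for a person – never an improvised mixture of the two in the moment.

```python
# Toy sketch: a fixed fault-handling policy decided at design time.
# Fault names and policy values are invented for illustration.
from enum import Enum, auto

class FaultResponse(Enum):
    AUTO_RECOVER = auto()        # the automation resolves it and keeps running
    SAFE_STOP_AND_WAIT = auto()  # halt in a safe state; only a human restart clears it

FAULT_POLICY = {
    "sensor_disagreement": FaultResponse.SAFE_STOP_AND_WAIT,
    "load_misaligned": FaultResponse.AUTO_RECOVER,
    "unknown": FaultResponse.SAFE_STOP_AND_WAIT,  # default to the safe state
}

def handle_fault(fault: str) -> FaultResponse:
    """Look up the pre-decided response; never improvise at run time."""
    response = FAULT_POLICY.get(fault, FAULT_POLICY["unknown"])
    print(f"Fault '{fault}': {response.name}")
    return response

handle_fault("sensor_disagreement")   # -> SAFE_STOP_AND_WAIT
```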

One last point – I realize that some think that calling something stupid is name-calling directed at some specific individual. I have not called anyone stupid – rather, I leave such thinking up to individuals to judge themselves and their own thinking process. If they are willing to put themselves and others at deadly risk, I cannot think of a better word to express such an effort.

 

The Traveler


Self-driving cars need to make advanced Trolley Problem decisions.

(If you don't know, here's the trolley problem.  Do you pull the lever?)

 

[Image: trolley problem illustration from "The Trolley Problem — Origins" by Sara Bizarro on Medium]

 

(If you don't know why the car can't just hit the brakes, you need to think a bit more about why there are so many auto/pedestrian accidents.  Sometimes you have to make a decision like this.  Run someone over or crash into a barricade and possibly kill the passengers?  Stuff like that.)
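For what it's worth, here is a deliberately toy sketch of what "deciding" looks like in code: score each available maneuver by a weighted estimate of expected harm and pick the lowest. This is not how any real autonomous-driving stack works – the option names, numbers, and weights are all invented – but it makes the uncomfortable part explicit: somebody has to choose the weights.

```python
# Toy "least expected harm" chooser for a forced-choice situation.
# Purely illustrative; weights and estimates are the ethical question.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_fatalities: float  # hypothetical estimate for this maneuver
    expected_injuries: float    # hypothetical estimate for this maneuver

def least_harm(options: list[Option],
               fatality_weight: float = 10.0,
               injury_weight: float = 1.0) -> Option:
    """Return the option with the lowest weighted expected harm."""
    def harm(o: Option) -> float:
        return fatality_weight * o.expected_fatalities + injury_weight * o.expected_injuries
    return min(options, key=harm)

choice = least_harm([
    Option("stay on course", expected_fatalities=0.9, expected_injuries=0.0),
    Option("swerve into barricade", expected_fatalities=0.3, expected_injuries=1.5),
])
print(f"Chosen maneuver: {choice.name}")
```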


1 hour ago, NeuroTypical said:

Self-driving cars need to make advanced Trolley Problem decisions.

(If you don't know, here's the trolley problem.  Do you pull the lever?)

 

[Image: trolley problem illustration from "The Trolley Problem — Origins" by Sara Bizarro on Medium]

 

(If you don't know why the car can't just hit the brakes, you need to think a bit more about why there are so many auto/pedestrian accidents.  Sometimes you have to make a decision like this.  Run someone over or crash into a barricade and possibly kill the passengers?  Stuff like that.)

I can't tell from your commentary on this image, but do you know why this is satire?  The lever doesn't really make a difference.


6 hours ago, Traveler said:

I saw in the news that a person in South Korea was killed by an industrial robot. Link: https://www.foxbusiness.com/fox-news-world/man-crushed-death-industrial-robot-confused-box-police

Since this was my life's profession and consulting business, I have had discussions with plant managers and others about the "kill zones" of robots. Sadly, I am aware of several robot kills – some so foolish and stupid it is hard to imagine what people were thinking. One of the saddest was a worker who disappeared for a week before he was discovered crushed by an ASRS robot (ASRS stands for automated storage and retrieval system). He had observed the robot and thought it had a flaw because it never used a particular location in the ASRS rack. He placed a mattress in that location and took naps – until a 3-ton load was placed there by the robot while he was sleeping on the job.

One of the great problems with AI and robots arises when humans attempt to interface outside of the robotic design, or mix human intelligence with what is intended for AI.

I believe there will be a great advantage to AI-controlled transportation vehicles, but what is left out of the notion of cars that drive themselves is the mixing of humans with AI on the same roads. Because of what I call the kill zone of robots, I do not believe that human interference is a good idea – kind of like using human fingers to change gears inside the gearbox of a car. Any human doing that kind of thing will likely lose their fingers.

As advanced as we may think our society is – there are a lot of stupid humans that must be removed from the gene pool before we will be ready for what AI and automation are bringing.

I think what you're describing is not so much stupidity as ignorance. This is ignorance in the design of the AI as well as on the part of the humans who interface with the machine. The AI wasn't properly programmed for an inspection/diagnostic mode. The man with the box nearby was not aware of what the robot would do when it saw his actions.

This just reminds me that safety engineers are a lot more important than we tend to think.  And there were obviously insufficient safeguards put in place.  Of course, that's easy to say with hindsight.  And I certainly don't want to say that others would have avoided this.  But regardless, Korea doesn't have the safety record that the US does.

Safety design consists of three different methods.

  • Engineered Methods
  • Procedural Methods
  • PPE

For this incident I doubt PPE would have helped.  But obviously there are both engineered solutions and procedural solutions that could have been implemented to prevent this.

  • Engineered: Give the AI a "diagnostic" mode so it doesn't do anything physically without being told, yet it can give data on what inputs it is receiving and what "decisions" it would make because of those inputs (a rough sketch of this idea follows below).
  • Procedural: Clear the area/cordon off the sensory input areas during diagnostics/inspection.
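Here is a minimal sketch of the engineered control from the first bullet, assuming a hypothetical controller API (none of these names come from a real vendor): in diagnostic mode the controller keeps evaluating its sensor inputs and reports what it would do, but refuses to drive any actuator until it is deliberately switched back to run mode.

```python
# Illustrative "diagnostic mode" controller. Names and structure are
# invented for the example, not taken from any vendor's software.
from enum import Enum, auto

class Mode(Enum):
    RUN = auto()
    DIAGNOSTIC = auto()

class PalletizerController:
    def __init__(self) -> None:
        self.mode = Mode.DIAGNOSTIC  # safe default: report only, no motion

    def decide(self, sensor_inputs: dict) -> str:
        # Placeholder decision logic: act on whatever the vision system flags.
        return "pick_box" if sensor_inputs.get("box_detected") else "wait"

    def step(self, sensor_inputs: dict) -> None:
        action = self.decide(sensor_inputs)
        if self.mode is Mode.DIAGNOSTIC:
            print(f"[diagnostic] inputs={sensor_inputs} -> would do: {action}")
            return  # never touches the hardware in this mode
        self.actuate(action)

    def actuate(self, action: str) -> None:
        print(f"[run] executing: {action}")  # real I/O would go here

controller = PalletizerController()
controller.step({"box_detected": True})  # reports only, no motion
```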

Again, this is with the benefit of hindsight. Unfortunately, when new technology is introduced to a human population, we will always have these incidents as part of the growing pains as more and more powerful technologies are implemented. And as a society, we need to make decisions about what price we're willing to pay at the individual level for technologies that can improve our lives at the societal level.

Unfortunately, I don't know if anyone has the wisdom and foresight to make proper judgments about what price is fair for what benefit.  We just fumble through into the future and lick our wounds as we go.  There's not much more we can do.


3 hours ago, Traveler said:

Often in my business I would be asked about making something foolproof. My response was always that making something foolproof is impossible because fools (stupid people) are far too clever.

Ironically in my business I see people who think they are geniuses get fleeced by guys who they thought couldn’t tie their shoes. It’s sort of cute, really! 


2 hours ago, Carborendum said:

I can't tell from your commentary on this image, but do you know why this is satire?  The lever doesn't really make a difference.

https://en.m.wikipedia.org/wiki/Trolley_problem

 

You are the conductor of a runaway trolley. You have to kill either five people or one. 
 

It's all rhetorical of course, but you can have a fascinating discussion with friends about ethics. I've done so many times. I'll let @NeuroTypical take it from here.


9 hours ago, Traveler said:

Since this was my life's profession and consulting business, I have had discussions with plant managers and others about the "kill zones" of robots. Sadly, I am aware of several robot kills – some so foolish and stupid it is hard to imagine what people were thinking. One of the saddest was a worker who disappeared for a week before he was discovered crushed by an ASRS robot (ASRS stands for automated storage and retrieval system). He had observed the robot and thought it had a flaw because it never used a particular location in the ASRS rack. He placed a mattress in that location and took naps – until a 3-ton load was placed there by the robot while he was sleeping on the job.

 

And where were the protocols to stop the robot from loading cargo there if it was in fact occupied? Some basic sensors and a little bit of extra programming would have taken care of it. 
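Something along those lines could be as simple as the sketch below – purely illustrative, with invented names: refuse any placement unless both the inventory record and an independent physical occupancy sensor agree the rack location is empty, and raise an alarm on any disagreement.

```python
# Illustrative occupancy interlock for a load placement.
# Sensor names and the exception type are hypothetical.
class OccupiedLocationError(RuntimeError):
    pass

def place_load(location: str,
               inventory_says_empty: bool,
               occupancy_sensor_clear: bool) -> None:
    """Place a load only if the inventory record AND a physical
    occupancy sensor both agree the slot is empty."""
    if not (inventory_says_empty and occupancy_sensor_clear):
        raise OccupiedLocationError(
            f"Refusing to place load at {location}: slot may be occupied."
        )
    print(f"Placing load at {location}")

# A mismatch between the record and the sensor halts the move
# instead of crushing whatever is actually in the slot.
try:
    place_load("A-17-03", inventory_says_empty=True, occupancy_sensor_clear=False)
except OccupiedLocationError as err:
    print(f"ALARM: {err}")
```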


16 hours ago, Carborendum said:

I can't tell from your commentary on this image, but do you know why this is satire?  The lever doesn't really make a difference.

Fine.   Here:

[image attachment]

 

It's a legitimate notion to think about. You can't have self-driving cars that will ever amount to anything if you don't program them to deal with such situations. And that forces a conversation about ethics with no obvious answers and lots of disagreement.

I totally get that the gut reaction is to shoot the messenger, or poke holes in the message, or chuckle nervously and change the subject. But the people trying to build AI robots and self-driving cars have to deal with this stuff. The conversation has to be had.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6978432/
https://www.vox.com/recode/22700022/self-driving-autonomous-cars-trolley-problem-waymo-google-tesla
https://www.futurity.org/autonomous-vehicles-av-ethics-trolley-problem-2863992-2/
https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/
https://gizmodo.com/mit-self-driving-car-trolley-problem-robot-ethics-uber-1849925401
https://fee.org/articles/the-trolley-problem-and-self-driving-cars/
https://www.brookings.edu/articles/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/
 


17 hours ago, Carborendum said:

I think what you're describing is not so much stupidity as ignorance. This is ignorance in the design of the AI as well as on the part of the humans who interface with the machine. The AI wasn't properly programmed for an inspection/diagnostic mode. The man with the box nearby was not aware of what the robot would do when it saw his actions.

This just reminds me that safety engineers are a lot more important than we tend to think.  And there were obviously insufficient safeguards put in place.  Of course, that's easy to say with hindsight.  And I certainly don't want to say that others would have avoided this.  But regardless, Korea doesn't have the safety record that the US does.

Safety design consists of three different methods.

  • Engineered Methods
  • Procedural Methods
  • PPE

For this incident I doubt PPE would have helped.  But obviously there are both engineered solutions and procedural solutions that could have been implemented to prevent this.

  • Engineered: Give the AI a "diagnostic" mode so it doesn't do anything physically without being told, yet it can give data on what inputs it is receiving and what "decisions" it would make because of those inputs.
  • Procedural: Clear the area/cordon off the sensory input areas during diagnostics/inspection.

Again, this is with the benefit of hindsight. Unfortunately, when new technology is introduced to a human population, we will always have these incidents as part of the growing pains as more and more powerful technologies are implemented. And as a society, we need to make decisions about what price we're willing to pay at the individual level for technologies that can improve our lives at the societal level.

Unfortunately, I don't know if anyone has the wisdom and foresight to make proper judgments about what price is fair for what benefit.  We just fumble through into the future and lick our wounds as we go.  There's not much more we can do.

Obviously, you do not know much about industrial automation. An ASRS that carries a three-ton load runs on the same kind of track as a freight train. The bay in which the ASRS operates is fenced off and locked during operation, with signs everywhere saying do not enter without following power-off and equipment lockdown procedures. Operators spend weeks in training before being allowed anywhere near the equipment. An operator works from a control room and monitors a variety of automated equipment. Since the equipment is automated, their job is to be the redundant eyes checking for possible unforeseen anomalies that could result from various automated (including sensor) malfunctions.

The bay in which the ASRS operates is called the kill zone for the equipment, and for good reason. It is not a matter of a "mistake" but rather a knowledgeable and deliberate violation of rules and protocols. Various sensors (including your suggestion) must be disabled for the operator to get into the bay area. I will not pretend to understand certain human thinking and responses to what I see as obviously stupid. I do not know how else to phrase this – perhaps Darwin said it best with the simple phrase, "Survival of the fittest." Even G-d cannot make the universe foolproof – Satan is the example that disproves all the possible foolproof theories.
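For readers unfamiliar with how such rules show up in the controls, here is a rough, purely illustrative sketch of a motion "permissive" (all signal names are invented, not any real controller's API): automatic motion is allowed only while the bay gate is closed, no personal lock hangs on the isolator, and the area scanner reads the zone as clear. Hanging a single lock drops the permissive until every lock is removed.

```python
# Illustrative permissive check for automatic motion; signal names invented.
def motion_permitted(gate_closed: bool,
                     personnel_locks_applied: int,
                     area_scanner_clear: bool) -> bool:
    """Allow automatic motion only when the bay is sealed, no personal
    lock-out is in place, and the area scanner sees the zone as clear."""
    return gate_closed and personnel_locks_applied == 0 and area_scanner_clear

# One operator's lock is enough to hold the equipment off.
print(motion_permitted(gate_closed=True,
                       personnel_locks_applied=1,
                       area_scanner_clear=True))  # False
```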

 

The Traveler


20 hours ago, Traveler said:

Obviously, you do not know much about industrial automation.

Obviously, you do not know much about what I do or do not know.

20 hours ago, Traveler said:

An ASRS that carries a three-ton load runs on the same kind of track as a freight train. The bay in which the ASRS operates is fenced off and locked during operation, with signs everywhere saying do not enter without following power-off and equipment lockdown procedures. Operators spend weeks in training before being allowed anywhere near the equipment. An operator works from a control room and monitors a variety of automated equipment. Since the equipment is automated, their job is to be the redundant eyes checking for possible unforeseen anomalies that could result from various automated (including sensor) malfunctions.

Yes.

20 hours ago, Traveler said:

The bay in which the ASRS operates is called the kill zone for the equipment, and for good reason. It is not a matter of a "mistake" but rather a knowledgeable and deliberate violation of rules and protocols.

Yes.

20 hours ago, Traveler said:

Various sensors (including your suggestion) must be disabled for the operator to get into the bay area.

Yes.

So far you have not said anything that explains how the man in Korea died.  If all these protocols were in place, then how did the robot kill this man?

Do you even know what a safety engineer does?

20 hours ago, Traveler said:

I will not pretend to understand certain human thinking and responses to what I see as obviously stupid. 

If you want to call it that, I guess that is your right. But oftentimes people do "stupid things" out of ignorance (not knowing any better) rather than out of low intelligence.

20 hours ago, Traveler said:

I do not know how else to phrase this – perhaps Darwin said it best with the simple phrase, "Survival of the fittest." Even G-d cannot make the universe foolproof – Satan is the example that disproves all the possible foolproof theories.

No, but when there are very obvious methods that can greatly reduce injuries, and they obviously were not followed, then we can do better.


On 11/10/2023 at 2:21 PM, NeuroTypical said:

Self-driving cars need to make advanced Trolley Problem decisions.

(If you don't know, here's the trolley problem.  Do you pull the lever?)

 

[Image: trolley problem illustration from "The Trolley Problem — Origins" by Sara Bizarro on Medium]

 

(If you don't know why the car can't just hit the brakes, you need to think a bit more about why there are so many auto/pedestrian accidents.  Sometimes you have to make a decision like this.  Run someone over or crash into a barricade and possibly kill the passengers?  Stuff like that.)


Different solution... derail the trolley. There is a chance you kill everyone on the trolley, but at the speed a trolley is going, there is a good chance you'll get away with only injuries to those on the trolley (and they have a little more protection than those on the tracks) while not running over anyone who was tied to the track.


It's always a numbers game. Saving five, four, three, or two lives at the cost of killing one is a simple decision to make if the alternative is killing five, four, three, or two to save one. It's a no-brainer unless you're the man on the lever. Fortunately, not having a brain is one of the benefits of AI, because it doesn't have to deal with all the messy emotional aspects that come with having one. And if you are the man on the lever, well, I hope you are very well paid for those rare moments when you may have to make a life-and-death decision.

Just a random thought - wouldn't it be interesting if there were a direct correlation between the price you paid for an AI-controlled car and the quality of the software that drove it? The high-priced cars get software that always puts driver protection above every other consideration, and the cheaper cars get software that always follows the road rules no matter what. I guess we already have a version of that, with higher-priced cars like Volvo and Mercedes offering much more safety and protection than lower-priced cars such as Toyota or Chevrolet.

 

 


4 hours ago, askandanswer said:

It's always a numbers game. Saving five, four, three, or two lives at the cost of killing one is a simple decision to make if the alternative is killing five, four, three, or two to save one. It's a no-brainer unless you're the man on the lever. Fortunately, not having a brain is one of the benefits of AI, because it doesn't have to deal with all the messy emotional aspects that come with having one. And if you are the man on the lever, well, I hope you are very well paid for those rare moments when you may have to make a life-and-death decision.

 

 

 

It's a grim numbers game, but I sadly agree. Back in college we discussed the situation differently. We said, "If you could shoot down one of the planes before it hit the WTC, would you do it?" I was called a "monster" because I said you should.


4 hours ago, askandanswer said:

Just a random thought - wouldn't it be interesting if there were a direct correlation between the price you paid for an AI-controlled car and the quality of the software that drove it?

In theory, that is what the free market should get you. But when we have new technology, it is hard to achieve that when, by definition, the new technology has not yet been tried, tested, and judged by the consumer market. So I think it will take another 10 years to get a lot of the flaws in the tech worked out, then another 5 to 10 years for market pricing to reflect quality.

The other thing to consider is that different companies (nations) have different costs of manufacturing regardless of quality.  We hope that there are some factors in the market that eventually even that out as well.  But when governments are constantly changing rules, it is difficult for the consumer market's moderation function to keep up with the changes.


The concept of man versus machine has been around for a long time. Machines provide a great advantage – but with every advantage there is a caveat. Historically, the most difficult problems with technology are what we call user error.

Perhaps it is amusing, but one of the most famous early references to a computer bug in the history of computers was the discovery of a moth that had died trapped in a relay, causing the machine to malfunction. In our modern society, as in history, having access to accurate information is critical to solving any problem. Some of the worst disasters in history have come from engineering a solution without engineering for the differences the changes might produce. This problem of the solution becoming worse than the problem has introduced chaos theory into the equation.

There is another problem that has to do with the news media. Seldom do reporters understand advanced technology sufficiently to report on it accurately. Having read the article about the case in Korea, I doubt that the reporter has ever observed a palletizing robot. From the description it would seem that the palletizer (box stacker) picks boxes from a conveyor. If box sizes are known, there is little reason to justify the expense of size and weight sensors on the conveyor.

If anyone were to Google a robotic palletizer, they would see that there are obviously places for humans to avoid when the palletizer is in operation. My consulting was mostly directed toward silicon fabrication. Wafer boxes contain a number of silicon wafers (a box is often valued at around a million dollars). These boxes are stored in what is called a stocker (similar to an ASRS but much smaller). Highly trained operators will manually insert and retrieve boxes into and from a stocker at highly sensitive ports. Ninety percent of the problems at these ports occur when operators attempt to circumvent the sensors for various reasons. Perhaps there is a problem only once a month, but realize that a problem will cost a minimum of a million dollars, and you can understand why there is so much attention to this – yet this is perhaps one of the most critical points in silicon fabrication.

With AI and self-driving cars it would be possible to have intersections where vehicles pass through at 60 mph, missing other cars by inches. Obviously, this would not be a good place for manually driven cars – let alone pedestrians. It would also be a soft target for terrorism. It is my personal belief that as technology advances, social behaviors must, of necessity, also become more defined and refined.
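To make that idea a bit more concrete, here is a toy sketch of one way such an intersection could be coordinated – a simplified version of what researchers call reservation-based or autonomous intersection management. Every name and number below is invented for illustration: each approaching vehicle requests a time slot through the junction and is admitted only if its slot conflicts with no existing reservation; otherwise it slows down and asks again.

```python
# Toy reservation-based intersection manager; purely illustrative.
from typing import List, Tuple

class IntersectionManager:
    def __init__(self) -> None:
        # Each reservation is (start_time, end_time, vehicle_id).
        self.reservations: List[Tuple[float, float, str]] = []

    def request_slot(self, vehicle_id: str, arrival: float, clearance: float) -> bool:
        """Grant the slot [arrival, arrival + clearance) only if it
        overlaps no existing reservation."""
        start, end = arrival, arrival + clearance
        for (s, e, _) in self.reservations:
            if start < e and s < end:  # the two intervals overlap
                return False           # vehicle must slow down and re-request
        self.reservations.append((start, end, vehicle_id))
        return True

manager = IntersectionManager()
print(manager.request_slot("car-1", arrival=10.0, clearance=1.5))  # True
print(manager.request_slot("car-2", arrival=10.5, clearance=1.5))  # False: conflicts with car-1
```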

One of the theories for why we have not found any intelligent life (as we know it) anywhere else in the universe is that civilizations with advancing technology destroy themselves, especially if their technology advances without advancing the social behaviors directly relevant to continuing the species. It is my theory that monetary and personal pleasure-seeking gratification tendencies are the greatest threat to the human species of our modern age – more than nuclear war or global climate change. It is also my impression that our latter-day prophets' primary warning is directed toward the decline in personal interactions with one another of our own species. I do not believe we can engineer our tech as a solution to human interface problems.

 

The Traveler

