英语吧 (English Bar) · Followers: 1,544,852 · Posts: 11,405,726
  • 3 replies, page 1 of 1

Begging an expert for help!!!


Is there any expert who can work out the answers to this reading set?
In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water: Del Spooner or a child. Even though Spooner screams "Save her! Save her!", the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice, and which choice would we want our robotic counterparts to make?
  Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that: 1. robots cannot harm humans or allow humans to come to harm; 2. robots must obey humans, unless doing so conflicts with the first law; and 3. robots must protect their own existence, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots--they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.
  The robot who rescues Spooner's life in I, Robot follows Asimov's "zero" law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm--an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the "zero" law, a robot could kill the gunman to save others.
  Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's stories expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
  Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that--at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (替身) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both die. How can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
  46. What question does the example in the movie raise?
  A) Whether robots can reach better decisions.
  B) Whether robots follow Asimov's "zero" law.
  C) How robots may make bad judgments.
  D) How robots should be programmed.
  47. What does the author think of Asimov's three laws of robotics?
  A) They are apparently divorced from reality.
  B) They did not follow the coding system of robotics.
  C) They laid a solid foundation for robotics.
  D) They did not take moral issues into consideration.
  48. What does the author say about Asimov's robots?
  A) They know what is good or bad for human beings.
  B) They are programmed not to hurt human beings.
  C) They perform duties in their owners' best interest.
  D) They stop working when a moral issue is involved.
  49. What does the author want to say by mentioning the word "harm" in Asimov's laws?
  A) Abstract concepts are hard to program.
  B) It is hard for robots to make decisions.
  C) Robots may do harm in certain situations.
  D) Laws use too many vague terms.
  50. What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
  A) Robots can be made as intelligent as human beings some day.
  B) Robots can have moral issues encoded into their programs.
  C) Robots can have trouble making decisions in complex scenarios.
  D) Robots can be programmed to perceive potential perils.


#1 · 2018-01-12 08:47
    bbbcb


    #2 · 2018-01-12 08:51 · IP location: Chongqing
      No idea, I'll guess C. It's the taking part that counts.


      #3 · 2018-01-12 09:22 · from the iPhone client