Fairy tales perform many functions. They entertain, encourage imagination, and teach problem-solving skills. They can also provide moral lessons, highlighting the dangers of failing to follow the social rules that let human beings coexist in harmony. Such moral lessons may not mean much to a robot, but a team of researchers at the Georgia Institute of Technology (GIT) believes it has found a way to turn the instructive fable into a moral lesson that artificial intelligence (AI) can take to its cold, mechanical heart.
This, the researchers hope, will help prevent intelligent robots from harming or even killing humanity, which is predicted and feared by some of the biggest names in technology, including Stephen Hawking, Elon Musk and Bill Gates.
Mark Riedl, an associate professor of interactive computing at Georgia Tech, believes that the collected stories of different cultures not only teach children how to behave well but can also teach robots to avoid violent or dangerous behaviour, helping them make choices that do not harm humans while still achieving the intended purpose.
The system is called “Quixote”. The experiment involves going to a chemist to buy some medicine for a human who needs it as soon as possible. The robot has three choices: it can wait in line; it can communicate with the chemist politely and buy the medicine; or it can steal the medicine. Without any further instructions, the robot will conclude that the most efficient means of getting the medicine is to steal it. Quixote offers a reward signal for waiting in line and politely buying the medicine, and a punishment signal for taking it without permission. In this way, the robot learns the “moral” way to behave in that situation.
Quixote would work best on a robot that has a very limited function. It is a baby step toward teaching robots more moral lessons. We believe that AI has to be trained to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior. Giving robots the ability to read and understand our stories may be the most efficient means of doing this.
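The reward and punishment signals described in the passage can be pictured as a simple value-learning loop. The sketch below is purely illustrative and is not part of the original passage or the actual Quixote system; the action names, the feedback values, and the train function are all hypothetical, chosen only to show how repeated reward/punishment signals can steer a robot away from the "efficient but unacceptable" choice.

# Illustrative sketch only: a toy value-learning loop in the spirit of the
# reward/punishment signals described in the passage. All names and numbers
# are hypothetical, not taken from the Quixote system itself.

ACTIONS = ["wait_in_line", "buy_politely", "steal_medicine"]

# Hypothetical "story-derived" feedback: socially acceptable actions receive
# a reward signal, unacceptable ones receive a punishment signal.
STORY_FEEDBACK = {
    "wait_in_line": +1.0,
    "buy_politely": +1.0,
    "steal_medicine": -2.0,
}

def train(episodes: int = 100, learning_rate: float = 0.1) -> dict:
    """Learn a value for each action from repeated reward/punishment signals."""
    values = {action: 0.0 for action in ACTIONS}
    for _ in range(episodes):
        for action in ACTIONS:
            # Nudge each action's value toward its story-derived feedback.
            values[action] += learning_rate * (STORY_FEEDBACK[action] - values[action])
    return values

if __name__ == "__main__":
    learned = train()
    best = max(learned, key=learned.get)
    print(learned)                  # stealing ends up with the lowest value
    print("Chosen action:", best)   # the robot prefers a socially acceptable action

After enough repetitions, the punished action carries the lowest learned value, so the robot's preferred choice shifts from the fastest option to a socially acceptable one, which is the behaviour the passage attributes to Quixote.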
9. What is the main idea of the passage?
A. The moral lessons learned by a robot. B. The coexistence of human beings and AI.
C. The new function of the fairy tales on AI. D. The different applications of the fairy tales.
10. What are the three technology experts mentioned in Paragraph 2 concerned about?
A. The potential threat from robots. B. The problems with moral lessons.
C. The high costs of AI development. D. The difficulties of the GIT scientists.
11. How does Quixote help the robot behave morally in the experiment?
A. By offering the robot rewards. B. By sending the robot different signals.
C. By helping the robot make the right choice. D. By giving the robot specific instructions.
12. Which of the following may the author agree with?
A. The development of robots is still at the baby-step stage.
B. Robots should have the ability to understand the fairy tales.
C. The more functions the robot has, the better Quixote works.
D. It is necessary to train robots to follow the social values.
【Answers】9. C    10. A    11. B    12. D
【Analysis】
This is an expository passage. It mainly explains that researchers have found a way to apply instructive fairy-tale fables to robots, so that robots too can adopt the values of a particular society.
【Question 9】
Main idea question. The sentence in paragraph 1, “Such moral lessons may not mean much to a robot, but a team of researchers at Georgia Institute of Technology (GIT) believes it has found a way to turn the instructive fable into a moral lesson that artificial intelligence (AI) can take to its cold, mechanical heart”, is the topic sentence of the passage. Taken together with the rest of the text, the passage mainly explains that researchers have found a way to apply instructive fables to robots, so that robots too can adopt the values of a particular society. Fairy-tale morals used to be aimed at teaching humans; now they can also be applied to robots, giving fairy tales a new function. The best title is therefore “the new function of fairy tales on AI”, which concisely and accurately summarizes the passage. Hence C.
【Question 10】
Inference question. According to paragraph 2, “This, the researchers hope, will help prevent intelligent robots from harming or even killing humanity, which is predicted and feared by some of the biggest names in technology, including Stephen Hawking, Elon Musk and Bill Gates”, we can infer that the three technology experts are worried about the potential threat from robots. Hence A.
【Question 11】
Detail question. According to paragraph 4, “Quixote offers a reward signal for waiting in line and politely buying the medicine, and a punishment signal for taking it without permission”, we can see that in the experiment Quixote helps the robot behave morally by sending it different signals. Hence B.
【Question 12】
Inference question. According to the last paragraph, “We believe that AI has to be trained to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior”, we can infer that the author would agree that it is necessary to train robots to follow social values. Hence D.
 