Workshop on the Legal, Ethical and Social Issues Regarding Therapeutic and Educational Robotics

Workshop Proposal, Almere, The Netherlands (22-23 Oct 2015)


  1. Vanderborght, Ethics:
  • There have already been other projects involving robots for research with autistic children. What are the main new contributions of this project?
  • Does the care of autistic children pose different ethical challenges compared to other fragile groups, such as elderly people?
  • Could you please describe some of the interactive procedures?
  • Is it safe for both the child and the robot to interact without supervision?
  • “to what extent should the robot-child interaction be supervised and controlled by the therapist?” Do you mean that parts of the robot-child interaction are not supervised and controlled by the therapist?
  • “to what extent should the robot-child interaction be supervised and controlled by the therapist?” Does it mean that the therapist is not involved in and responsible for the design of the interaction? Can you give an example of a robot interacting with a child in a way that is not designed, and in that sense supervised, by the therapist?
  • Is the issue how much trust can be placed in the robot, or how much trust can be placed in the therapist who is supervising the use of the robot?
  • Does the robot raise any ethical question different from any other therapeutic tool?
  • What is your opinion regarding the latest Special Eurobarometer 427, Autonomous Systems Report, 2015? Europeans like robots more the more they know about them. Do we need to continue educating Europeans about robot culture, or will acceptance come naturally as time goes by?
  • Can you elaborate on the type(s) of data collected by the robot?
  • Do you have a data management plan for the access, sharing, re-use, and long-term storage of these data?
  1. Bouri, Exoskeletons Activities Daily Living:
  • Do some users attribute any degree of agency to exoskeletons? For example, do they name them in any cases?
  • In which case do you think a fragile user (e.g. an elderly person) would feel more stigmatized: with an exoskeleton or with an assistive home robot?
  • As I understand it, exoskeleton devices do not really provide independent living for the user. What factor needs to be addressed in order to provide this?
  • What is the cost of such a device?
  • “A presence of another person is necessary and no technological answer is available to this issue” Do you think these types of devices introduce special issues regarding the possibility of critical failures, responsibility, and the need for human supervision? Compare motor saws, rifles, cars, etc.
  • Can automobile regulation assist in thinking about the risks posed by failure – failures that pose a “safety risk” could be treated more rigorously than failures that just mean that the device cannot provide the assistance planned?
  • Exoskeletons bring about independence. However, as the nature of the exoskeleton is symbiosis with the user’s movement, could they cause some sort of dependence? What could this dependence mean in the long term? What could happen if the exoskeleton stops working and needs to be replaced by another one? What if the producer stops producing them?
  • How is the liability environment similar to or different from that of other assistive technologies, like motorized wheelchairs, etc.?
  • Are regulations on this type of technology highly country specific (EU vs US vs Japan)? If so, what are the major differences?
  1. Ienca, Intelligent Technology Dementia Care
  • What can we do to ensure a more egalitarian distribution of this technology in the future, so that it does not fall only into the hands of the wealthy?
  • What is the most state-of-the-art technology that is currently available for dementia care?
  • How could IT products for dementia care be made more accessible and acceptable to stakeholders (industry, institutions, and users)?
  • Are IT products for all or just for those who can afford them?
  • “Special standards for informed consent with mentally incompetent patients” What role does the mentally incompetent patient play in the informed consent?
  • “Special standards for informed consent with mentally incompetent patients” Does the patient still play a role, or is he or she completely in the hands of the caregivers?
  • Please say more about informed consent in this context.
  • Are “distributive justice, informed consent and personal autonomy” the only ethical issues involved in intelligent technology for dementia care? Could you tell us whether you think we need general ethical guidelines or some sort of specific framework?
  • Is there a precedent or existing policy model within targeted countries for reimbursement plans?
  • Can you elaborate on your plans for special standards for informed consent?
  1. Fosch, Principles Care Robotics
  • Could you please tell us more about the concept “Regulate-As-You-Go”?
  • Do you envision a universal set of care robotics laws, or do you think it should differ per country? (For example in Japan and Europe, where robots are accepted differently)
  • How close are we to an official HRI public policy?
  • The willingness and full consent of elderly people to interact with robots is still not addressed. Should this be incorporated into a forthcoming public policy?
  • “if they should be granted agenthood” Why is this a relevant question? Suppose we grant robots agenthood. What does it imply? Does it mean they are responsible for their actions? And what does that mean? (see autonomous cars).
  • If these devices enter the market before liability rules are clear, do post-facto legal devices, especially tort law, provide a good mechanism for developing such rules?
  • If we “regulate as we go”, do you think we will have to encounter serious misuse in society instead of anticipating likely harms? (E.g., regulations on the use of human subjects in research came only after many cases of mistreatment)
  • Do you think the law around care robots will have to evolve specifically to address new user-trust paradigms that are not necessarily seen in other computer/sensor interfaces?
  1. op den Akker, Care Robots Technology
  • You said “We cannot foresee how technical ideas will be applied and used and how they will impact human practices.” Don’t you think, however, that we should at least try to foresee this?
  • Are we actually able, as humans, to think of machines as something that can only be described in relation to us?
  • Is it possible for children and the elderly to understand the semiotic view?
  • How could this be explained to users with dementia?
  • We try to work on robot ethics from the robot’s perspective and according to the human code of ethics. Should robots have a different code of ethics? And, in relation to this, what is your opinion of the investment humans make in human ethics? Wouldn’t we do better to promote ethics and emotional intelligence among humans?
  • What about the ability of designers to embed values into the design of technical systems/or the technology itself? Even if the technology is not “intelligent”, couldn’t the design choices reinforce certain politics, morals, or beliefs?
  • Similarly, how does the robot as a socio-technical system play into your argument?
  1. Saiger, Accommodating Students with Disabilities Using Social Robots and Telepresence Platforms
  • What are the main advantages of using robots to support children with disabilities?
  • Does the age of the children play an important role in the use of robots for children with disabilities?
  • Could you please describe some cases of robots used in education for children with disabilities?
  • Could this practice assist mixed classrooms?
  • How are telepresence robots used in schools for children with physical disabilities? What are the experiences, challenges?
  • Who is responsible for deciding which technology best suits which needs?
  • How can technology counterbalance the isolation caused by some of this technology?
  • Should the use of such technology be time-limited (recovery from the disability, removal of the conditions that created the need for it)? If its purpose is to enhance sociability, could it be removed once the child’s condition improves?
  • How can roboticists make a technology that can cope with different disabilities? Should priority be given to the most frequent disabilities?
  • Are remote students evaluated equally with the other students? Could all this cause further discrimination?
  • The team asks: how are these “certainties” defined, and by whom? How are these autonomous systems going to be able to solve problems without objective answers? And, moreover, as the nature of ethics is very subjective, how will machines be able to deal with the variety of profiles, beliefs, and cultures?
  • Are there separate design features that would be necessary to make education-assisting robots compliant for students with disabilities?
  • Are there currently cases where students with disabilities are not being accommodated? Is this because of the use cases, the design of the robot, or something else?
  1. Sedenberg, Designing therapeutic robots for privacy preserving systems
  • Regarding access to data, what are examples of special considerations for choices of disabled or vulnerable persons?
  • What do you mean by “dynamic informed consent models”?
  • How feasible are privacy rules, ethical rules, and data protection in a “cloud world”?
  • How feasible is it to protect children, the elderly, and other vulnerable groups from a world that they can’t really understand?
  • Do children, the elderly, the handicapped, etc., fall under the concept of “autonomous agents”?
  • Are there human beings who can be considered autonomous agents? How is this concept related to independence?
  • How far does the value of transparency extend?
  • Can the code or algorithms remain proprietary if the system is to be considered “transparent”?
  • What is your opinion regarding data portability? Could that make systems more vulnerable?
  • From your point of view, is IBM Watson operating in accordance with all the reports you mention? How can these data monopolies comply with all the legislation and codes of ethics? Are they really working for the sake of the public interest? How can the education system and its practices change in consequence?
  1. Reppou, Dementia Care
  • Did most people in the focus group perceive the robot as a potential companion or as a tool?
  • Are you always going to use the same robot?
  • Do older people (sometimes) have the right to be excluded and left alone, not being bothered with all kinds of “news”?
  • What technology helps in caring about the real needs of isolated older people?
  • To what extent is the “friend” metaphor the right one?
  • Can one develop a definition that distinguishes care, therapeutic, and educational robots from toys? (This is also relevant to the Karagiannis presentation: I notice that the robot you are using looks like a toy. Is this purposeful? Is this consistent with your stated question of whether seniors and robots can be “friends”?)
  • Education on technology is a very important issue. The compilation of information from users is very valuable, but the authors could have gone further. What conclusions do we draw from all this?
  • How were participants selected for your study?
  • How do you plan to follow up on this study?
  1. Albo, Toy Robot vs Medical Device
  • What is the attitude of the medical robotics community towards considering social robots as medical devices?
  • Robot design could follow and address the needs of specific groups. Isn’t that feasible?
  • Why do medical devices appear as toy robots? Is it for commercial reasons or public policy issues?
  • Should designers and researchers actually give more consideration to the applications of the robots they develop/use?
  • What are the implications of each of the alternatives (robot toy/medical device)?
  • Can you explain the last sentence (“unethical behavior”) in more detail?
  • Can a toy be unethical?
  • Can one develop a definition that distinguishes care, therapeutic, and educational robots from toys?
  • What do you know about the consequences when harm occurs? If a “robot toy” causes harm (either on the physical or the cognitive level), would the regulations regarding toy robots be enough? If so, what is the sense in having both certifications?
  • How do you think this problem should be addressed? In the US, should the Federal Trade Commission (FTC) prosecute manufacturers of these “toys”, or should there be a clarification of regulatory fit from the FDA? Or…?
  1. Gallego, Obstacles in HRI studies
  • Apart from the obvious obstacles to HRI with elderly participants, should we also take into account their unwillingness to substitute HRI for human interaction (HI)?
  • How valid can data from small samples be, given all the difficulties that you describe?
  • “Impaired or slower cognitive abilities” Do terms like “human” and “machine” mean the same for the elderly as for children and adults?
  • “Impaired or slower cognitive abilities” What does this imply for the way they perceive a robot and how they value their interaction with it?
  • Please say more about informed consent in this context.
  • How could new technologies help HRI research? Could big data be applied to this domain? Could 24/7 monitoring enrich the study? How invasive would that be?
  • How do you think underrepresentation of elderly participants in HRI studies is impacting the design of therapeutic robots?
  1. Carnevale/Pirni
  • Laurel D. Riek and Don Howard, in “A Code of Ethics for the HRI Profession” (WeRobot 2014), wonder whether we need protocols for touching. They also say that HRI practitioners should consider whether these robots should be designed to encourage/discourage the formation of emotional bonds [while realizing some bonding will be inevitable regardless of the morphology of the platform].
  • Can you elaborate on the ethical sustainability test?
  • Whose responsibility are these considerations? Governments? Manufacturers? Healthcare providers? Users?
  • Should our laws regulate how person-like our robotic caregivers should be?
  • If a robotic caregiver has become someone’s object of affection, should it be treated by the law as more than just an object/possession/tool?