Comments and Feedback on the winning proposals of the Guardians' Challenge
By Ceyhun Karasu, AI Ethics Advisor to EthicsNet
This document is a compilation of my comments and suggestions on the winning proposals of the Guardians' Challenge. First things first, EthicsNet's initiative motivates me and shows that important people in the field of AI do care about ethical issues and their technical difficulties. As we all know, it is crucial to think carefully about the implications of AI-driven systems in society. The challenge invites people from different backgrounds to come up with proposals and share their views, which helps people like me (who research independently) better understand the community revolving around AI ethics.
It is very important to have a cross-disciplinary and interdisciplinary understanding of AI R&D in general. This can help us mitigate possible existential risks, but also help us globally design a better future in which AI will play a crucial role. My feedback on the proposals is a compilation of suggestions, critical arguments and questions that I consider important when thinking about AI ethics. The feedback is written with the vision of creating a better database that makes AI systems much safer and morally aligned with human values. The timescale of ethical behavior is an issue that I expected to be discussed in the proposals, but few talk about it. Proposal 4 briefly touches on it by referring to datasets that evolve over time.
Ethics is time dependent, meaning that an action is taken with a specific timeframe in mind, which is voluntarily or involuntarily embedded in the causality of the action. An action that might seem ethical in a given timeframe might not be ethical at all in another timeframe; the debate between long-term and short-term decision-making, relative to an individual's or entity's moral state, should be considered. Keeping this point in mind, we understand that ethics cannot be reduced to simply taking right or wrong actions and doing the good or the bad thing. It is not an issue of good/correct versus bad/wrong action, a reasoning mechanism that still lingers from humanity's feudal and religious history.
Overall, I think the approaches are going in the right direction to create a database for ethically aligned AI systems. The component of interactivity between users and AI is also a recurrent feature, and a positive one, since it keeps knowledge sharing as diverse as possible. Still, certain things could have been defined a little more precisely.
PROPOSAL 1: A developmentally-situated approach to teaching normative behavior to AI (G. Gordon Worley)
1.) “…philosophy the study of ethics quickly turns to metaethics…” (p. 1)
a) Philosophy deals with ethical questions that can indeed lead to metaethics, but that is only a small portion of the whole picture. There is also work where ethics is analyzed more systematically, which falls in the domain of formal ethics (see Gensler 1996). Formal reasoning systems such as logic allow philosophers to build complex models by introducing ethical axioms and to resolve ethical dilemmas based on those axioms. On the other hand, philosophers studying applied ethics analyze real-world case studies (e.g. abortion, autonomy in healthcare, human impact on global warming) and utilize ethical frameworks to assess moral values and principles, and to study how moral values and principles are formed or evolve over time.
Evolutionary ethics is also an example where philosophers and researchers from the sciences, such as biology, behavioral psychology and sociobiology, work together. There are also interesting, more exotic approaches that deal with certain ethical problems by using Bayesian epistemology, a domain inspired by a combination of game theory, analytical philosophy, statistics and ethics. These models can provide insight into how ethical values are learned and how these values can be both generalized to unknown cases and personalized to specific known cases. Overall, philosophy has grown into a vast interdisciplinary domain, like most disciplines, and experimental philosophers are trying to answer deep philosophical questions with both scientific methodology and philosophical reasoning. Philosophical studies of moral relativism and cultural relativism are examples showing that philosophers also study cultural and personal development in the formation of ethical values and principles.
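To make the Bayesian point more concrete, here is a minimal sketch in Python of the kind of belief updating such models rely on. It assumes (my assumption, not a claim from any proposal) that human ethical judgments arrive as binary approve/disapprove signals about one type of action; all names are hypothetical.

    # Minimal Beta-Bernoulli sketch of Bayesian value learning.
    # Each observed human judgment is a binary approve (1) /
    # disapprove (0) signal about one type of action.

    class NormBelief:
        def __init__(self, prior_approve=1.0, prior_disapprove=1.0):
            # Beta(a, b) prior; a uniform prior encodes initial ignorance.
            self.a = prior_approve
            self.b = prior_disapprove

        def update(self, judgment):
            # Conjugate update: approvals raise a, disapprovals raise b.
            if judgment:
                self.a += 1
            else:
                self.b += 1

        def acceptability(self):
            # Posterior mean probability that the action is judged acceptable.
            return self.a / (self.a + self.b)

    # Generalization vs. personalization: one population-wide posterior
    # and one per-community posterior, learned from different streams.
    shared = NormBelief()
    community = NormBelief()
    for j in [1, 1, 0, 1]:   # judgments sampled from the whole population
        shared.update(j)
    for j in [0, 0, 1]:      # judgments from one specific community
        community.update(j)
    print(shared.acceptability(), community.acceptability())

The same mechanism both generalizes (the shared posterior) and personalizes (the community posterior), which is exactly the dual movement described above.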
2.) “Rather than developing a coherent ethical framework from which to respond, humans learn ethics by first learning how to resolve particular ethical questions in particular ways, often without realizing they are engaged in ethical reasoning, and then generalizing until they come to ask question about what is universally ethical (Kohlberg, Levine, & Hewer, 1983).” (p. 1)
a) The top-down approach is very interesting as it looks at the psychology and personal development of cognitive beings in order to deduce ethical principles or learn about the formation of moral values. The vision is interesting, and developmental psychology can be used as a methodological inspiration rather than a direct implementation. b) That said, AI is not human (yet), meaning that developmental methods that help us understand how cognitive beings develop moral values might not work for AI systems because of biological differences. On the other hand, the important goal of EthicsNet is to create a database for AI systems that can prevent, or at least mitigate, human error-prone reasoning strategies, so that these systems act and behave ethically. Using developmental psychology to understand how these errors are learned and developed is a good way to create a database that takes human bias and human error into account. However, there is still an epistemic gap between the learning mechanisms of cognitive beings (e.g. humans, primates) and artificial cognitive systems (e.g. AI, AGI, robots), since the two do not share the same evolutionary path.
c) That said, it may be a better idea to create a robust framework of ethical values, understand the mechanisms by which these ethical values are formed, and build a broad ontological picture of them. Over time, allow the framework to adapt based on previous experiences (here developmental psychology can give us some insight into the learning mechanisms). Consequently, embed these frameworks into AI systems and, to stay coherent with Kohlberg's theory, push AI systems to reason at stage 6. Preferably, do not (yet) allow these systems to take higher-order actions before consensus is reached between the AI system's predictions and human supervision, or let the AI systems take actions in a simulated environment before taking higher-order actions (see the sketch after this list).
d) The following questions arise if we focus solely on the works of developmental psychologists such as Commons, Piaget and Kohlberg: Should the AI system mimic the same developmental stages as human beings to ultimately reason ethically about given situations? Or should, or can, it skip some developmental stages? And what tells us that these evolutionary developmental stages are aligned with the future of humanity and the universe? Before answering the last question, we as humans have to globally set milestones and goals that we can reach within the next century. As long as humanity cannot come to consensus and propose global policies, the space of future possibilities will remain unknown, and every step we take will rest on overconfident assumptions.
e) Another argument against relying solely on developmental psychology to create "kind" AI is that it is not yet clear whether humans are the best evolutionary example of acting kindly. Directly mimicking the evolutionary stages of human development is, in my opinion, not the right way to create AIs that are "kind". Although several studies in developmental psychology are very well documented, we are left with data that is noisy or not diversified enough. Moral values and learned ethical principles are not only personal but also shared, and can change according to the social context in which a human being finds themselves. More research on socially learned moral values can definitely help make the data less noisy, or at least add depth.
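As announced in point c), here is a minimal sketch of what gating higher-order actions behind human-AI consensus could look like. The function and class names are hypothetical illustrations of my own, not part of the proposal; real ethical evaluation and guardian feedback would of course be far more involved than these placeholders.

    # Hypothetical gating sketch: a higher-order action is executed only
    # if the AI's own ethical judgment and human supervision agree;
    # otherwise it is routed to a simulated environment for testing.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        higher_order: bool   # actions with broad real-world impact

    def ai_judgment(action):
        # Placeholder for the system's own ethical evaluation.
        return True

    def human_approval(action):
        # Placeholder for guardian/supervisor feedback.
        return False

    def dispatch(action):
        if not action.higher_order:
            return "execute"      # low-stakes actions can run directly
        if ai_judgment(action) and human_approval(action):
            return "execute"      # consensus reached between AI and human
        return "simulate"         # no consensus: test in simulation first

    print(dispatch(Action("reallocate public resources", higher_order=True)))
    # -> "simulate", since the human guardian has not approved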
3.) “Commons has argued that we can generalize developmental theories to other beings, and there is no special reason to think AI will be exceptional with regards to development of ontological and behavioral complexity, thus we should expect AI to experience psychological development (or something functionally analogous to it) and thus will develop in their moral reasoning as they learn and grow in complexity.” (p. 2)
a) It would be good if we could find more studies on whether developmental theories can be generalized to AI systems and on how to create databases that will lead AI systems to act ethically. Developmental psychology can offer deeper insight into the first crucial steps a cognitive being takes to start reasoning ethically. This could indeed be used as a conceptual learning framework to develop a rough database for AI systems. On the other hand, the complexities of the brain are not yet directly comparable to the algorithms or neuromorphic hardware we have today. Current AI systems use very simplified cognitive models to learn and usually need a lot of data to give approximate answers, yet they perform impressively well at very specific tasks. It is still an open question whether AI systems will be able to reason ethically.
4.) “Assuming AI systems are not exceptional in this regard, we consider a formulation of EthicsNet compatible with developmentally situated learning of ethical behavior.” (p. 2)
a) AI systems can become exceptional, as they might have the capacity to "learn" ethical reasoning mechanisms much faster and to make more accurate predictions in parallel than human beings. This procedural advantage can be based on the quantity and quality of available data, or simply on advances in translating cognitive models into computational models.
b) Another factor that makes AI systems exceptional is that they can be atemporal and are not constrained by the same physical limitations as human beings. The following questions come up when I think about the compatibility problem. Would an AI take action once it reaches a level of ethical reasoning that is more "advanced" and forward-thinking than that of all human beings combined? In other words, what is the spectrum of ethical values that an AI system will consider, and how relevant are these values to the future of human civilization?
5.) “The analogous cases for AI systems will of course be different, but the general point of presenting developmentally appropriate information holds, such as eliding nuances of norms for children that adults would normally consider.” (p. 3)
a) How should we proceed if the developmentally appropriate information surpasses even the most ethically conscious human being acting as a guardian?
6.) “… clues to humans about the AI system’s degree of development …” (p. 3)
a) How can we categorize, qualify or quantify the "AI's degree of development" if it surpasses the degrees of development described by models of developmental psychology?
7.) “Given this context in which EthicsNet will be deployed, it will also be important to make sure to choose partners that enable AI systems being trained through EthicsNet to learn from humans from multiple cultures since different cultures have differing behavioral norms.” (p. 4)
a) This is indeed vital for creating AI systems that are cooperative and culturally robust. The challenge here is, of course, reaching consensus between opposing views and moral values that are embedded in a given culture (e.g. abortion is not welcomed in cultures where religion has the upper hand).
8.) “…framework in which AI systems can interact with humans who serve as guardians and provide feedback on behavioral norms…” (p. 4)
a) In general, teaching an AI human values the way we teach children values is not necessarily the best option. No matter what you teach a child, it will find ways to circumvent these norms, as evolution pushed us in that direction so that we do not fall prey to predators. An AI system needs to learn to form a moral compass on its own and to weigh the values that are taught, with human interaction minimized to prevent embedding human bias. An AI has to be able to reason about ethical dilemmas and think neutrally without taking sides, while human beings should solve moral dilemmas on their side and occasionally interact with AI systems to reach consensus. Only after consensus is reached, based on the interactions between humans, between the AI system's reasoning processes, and between humans and AI systems, can we vouch for AI systems to act on that specific dilemma. Finding mechanisms to interpret and contextualize information is key to developing "safer" and "ethically conscious" AI systems. Supervision and interleaved interaction between AI and humans can solve the problem of contextualization, but there is a problem. What are the conditions and requirements to supervise? To what extent does a person have to understand complex ethical dilemmas in order to teach them to the AI or embed them in the algorithm? And what about more abstract concepts such as belief systems and behavioral patterns?
Conclusion for Proposal 1
The idea of supervised learning that enables interleaved human-to-AI interaction is a good dynamic approach, even though it has its complexities. Overall, the constructivist approach of this proposal is very interesting and, if guided in the right direction, can be a good first step toward introducing ethical reasoning mechanisms to AI systems. If we stick with the developmental psychology approach, we also have to consider that humans are involved in different ecosystems (Bronfenbrenner 1979), which can dramatically influence the moral compass of a human being. AI systems have the capacity not to be directly influenced by these different ecological systems. If we want AI systems that are culturally robust and that understand different interpersonal and social relations, we must find a way to embed, or at least teach, the different ecological systems and how they interact. The precise interaction is still unclear in today's scientific literature because of the ongoing nature versus nurture debate. However, there is good scientific work providing mathematical models of the internal cognitive processes of behavioral development, which can in principle be converted into computational models.

One crucial difference between humans and AI systems is, simply put, the difference in physical limitations and constraints. Humans are constrained by time and space. AI systems can bypass this, making them less vulnerable to physical constraints but at the same time less able to "understand" what it feels like to have a body in time and space. Physical properties and sensorimotor experiences influence our ways of living and thinking, and should therefore also influence how we learn moral values, the way we think about ethics and how we act morally. One simple example: it is not a good idea to jump from a roof to save a person being harassed on the street right below the building. AI systems need to understand the physical limitations of humans in order to make decisions or act morally.
PROPOSAL 2: Crowd Ethics (Ovidiu Dobre)
I agree that an elite group of people with moral consciousness is not able to perceive all possible ethical motivations, and that a collective approach to image classification with an ethical attitude can be an alternative option. However, collective image labeling does not provide enough information to decide whether a situation is ethical, right or wrong. Ethical reasoning needs causality, interpretability, contextualization and associative thinking with many counterexamples. A video or a cluster of images depicting a scenario would be epistemically more valuable for ethical reasoning and labeling.
This proposal might be a little too simplistic for a very complex problem. The examples of images labeled as ethical or not are also very narrow. For example, Image 3, depicting a child doodling on the bathroom wall, is labeled as "not ethical". This is a good example of where contextualization will be extremely important, because a kid doodling on a bathroom wall might be a good thing for the child's creativity and imagination. The intention of the proposal might be genuine, but its simplicity can lead to very noisy datasets if context is not provided (a richer annotation record is sketched below). The idea of the platform and the collectivistic approach is interesting, but the approach to defining what is ethical and what is not is again rather simplistic.
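To illustrate the contextualization point, here is a sketch of what a context-rich annotation record could look like, as opposed to a bare ethical/unethical label. The schema and field names are hypothetical, chosen by me for illustration only.

    # Hypothetical annotation schema: each record carries the context
    # needed for ethical interpretation instead of a single binary label.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EthicalAnnotation:
        media_id: str              # image, image cluster, or video clip
        label: str                 # e.g. "acceptable", "unacceptable", "ambiguous"
        context: str               # free-text description of the situation
        actors: List[str] = field(default_factory=list)
        timeframe: str = ""        # short-term vs. long-term framing
        presumed_intent: str = ""  # annotator's reading of the intention
        counterexamples: List[str] = field(default_factory=list)

    # The doodling child from Image 3, annotated with context rather
    # than a bare "not ethical" label:
    record = EthicalAnnotation(
        media_id="image_3",
        label="ambiguous",
        context="child doodles on the bathroom wall at home",
        actors=["child"],
        timeframe="short-term mess vs. long-term creativity",
        presumed_intent="play, not vandalism",
    )
    print(record.label)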
The gamification of a platform as a driving force is also very tricky in the context of ethics; I do not agree that gamification motivates people to act ethically or kindly. Gamification can promote competitiveness (a bias) that might unintentionally influence the user's perception of whether an image depicting an event is ethical or not. The psychological factor of participation, and how to minimize biased decisions, is also an important issue that should be researched.
PROPOSAL 3: Built-in Artificial Empathy as Baseline in AI (Wu Steven)
The conceptual model proposed is interesting and seems to be a combination of the two previous proposals, but with video material as the data source. Empathy indeed plays a crucial role in moral judgement and in being able to feel another person's situation. However, we are still stuck with the dilemma of physical constraints and context that I discussed earlier. Several points I raised for the previous proposals also apply to Proposal 3, such as the problems involved with human annotation and crowd ethics (e.g. conditions and requirements to annotate, the moral capacity of the annotator, supervised vs. unsupervised learning to prevent human error). The systematic approach of Proposal 3 is, in essence, headed in the right direction. However, difficulties might arise when thinking about the bigger picture. I will point out some difficulties presented in section 3 of the proposal in the following paragraphs.
1.) “Materials and methods” (p. 2)
a) Sourcing video materials and extracting event clips through private or public platforms with proper licensing can work for certain video materials, but would be more difficult for sensitive ones. Sensitive video materials can help in understanding more complicated situations, such as case studies from criminal justice departments. One example is access to data recorded during law enforcement and police interventions (e.g. Axon). These cases can be valuable for creating empathy maps, as they reflect some of the reality of the justice system, while videos on YouTube or Vimeo can be fictional and not always realistic. Other sources might also raise privacy and public policy issues, for instance if video recordings from public surveillance cameras need to be accessed.
b) The difficulties that might arise with empathy maps lie at the stage of contextualization. Video sources do not give us the full picture of the act and the causalities in which a person is involved. We all know that empathy is something very dynamic, with different layers and dimensions. An act that is empathic in a given timeframe for a given situation can turn out to be devastating for other situations or people. This is of course known as a moral dilemma, which should be discussed. Let us say that a man is in financial difficulty. The only way to provide and survive is by robbing a bank or committing some other crime. That person is ready to risk his life to protect the future of his family and community. From the viewpoint of the person committing the crime, the act is altruistic, in the sense that it will provide for and protect the lives of his family or community. However, from the viewpoint of the legal system, the person is acting immorally by endangering the lives of other people (outside or inside the community) and going against his civic duty as a citizen. So, from one side we can conclude that the person committing the crime has a level of empathy, and from another perspective we can conclude that the person has no empathy for people outside his own community.
We can complicate the situation even further by jumping to the future and presuming that the child of the person committing the crime was an excellent student. It turns out that the child has the potential to become the person who cures cancer and other pathological diseases, saving millions of lives. We can see that the space of future probabilities can grow exponentially, and that even the most sophisticated statistical analysis will not be able to predict how that person's criminal act will affect future lives.
This is one core reason why humanity has difficulty coming up with overarching policies and global agreements: every moral act is situation and context dependent. In other words, an empathy map can become very useful for less complex situations and acts where enough historical information is available, i.e. for simple environments or systems. It can also be useful for situations where there is universal consensus based on shared principles and natural laws (e.g. a video showing an entity, individual or group of people polluting the environment). However, it will not be useful for mapping out more complicated "stories" or situations where not enough historical information is available. Historical information can give insight into the intention behind the act in which a person is involved.
c) The implementation of automated event detection algorithms will again depend on how much historical information we have about a given situation, how accurately the algorithm can interpret that information, and whether we have a clear understanding of the moral implications of the event in order to generalize. The source and format of the data will also influence this procedure (a small routing sketch follows below).
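The routing sketch mentioned above: a minimal, hypothetical rule for deciding when an automatically detected event can be labeled without human help. The thresholds are invented for illustration; in practice they would have to be calibrated empirically.

    # Hypothetical sketch: an automatically detected event is labeled
    # ethically only when both the detector's confidence and the amount
    # of comparable historical material pass thresholds; otherwise it
    # is deferred to human annotators.

    def route_event(confidence, historical_cases,
                    conf_threshold=0.9, case_threshold=50):
        if confidence >= conf_threshold and historical_cases >= case_threshold:
            return "auto-label"      # well-understood, well-supported event
        return "human-review"        # ambiguous or historically thin event

    print(route_event(confidence=0.95, historical_cases=200))  # auto-label
    print(route_event(confidence=0.95, historical_cases=3))    # human-review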
2.) “The fact that we were born with empathy makes us different from other species and artificial products.” (p. 3)
a) It would have been better to formulate this differently, for example: "the fact that humans have the ability to empathize in a more advanced and complex way than other living beings". It has been shown that animals such as chimpanzees and dolphins do have a degree of empathy towards other living beings. This is attributed to mirror neurons (Frans de Waal, Jean Decety and Vittorio Gallese), which can play an important role in driving a sentient being to unintentionally mimic the behavior of another living being. Although the scientific literature and opinions are diverse, this does show us an alternative scientific view: that empathy is not unique to humans and that it has a biological foundation.
b) The question of whether different forms of empathy are innate or develop through experience should also be considered. The concept of mirror neurons is vague in the cognitive science literature and still a hypothesis (Hickok 2014), but nevertheless a plausible one.
PROPOSAL 4: We're involved in AI ethics with the right concept! (Adam Bostock)
a) The underlying ethical approach of the proposal is interesting and relevant. However, the method to implement this approach is not well defined and too vague. The proposal suggests implementing it with a technology that is easy to understand and easy to access: "simple to understand, and accessible via a wide range of devices and platforms." It refers to the internet and online social networks. However, we know that making datasets (where moral values are relevant) too accessible to the public can also lead to very noisy or biased datasets. It is perfectly plausible that groups of people might poison or negatively bias the dataset (e.g. alt-right or extremist groups), and this might jeopardize the dataset if such a majority gets the upper hand. Subsequently, the proposal provides an explanation of xIntelligence (xI) as a method of sharing knowledge across different forms of intelligence, which works by providing concepts that can be shared or distributed. It then indicates that not only humans but also AI systems can learn from it, and thus that by adding concepts about ethics we will make the AI system learn these concepts under the proposed ethical approach.
The whole point of creating a dataset to make AI systems behave ethically is finding out how AI systems can interpret datasets linked to shared concepts and attributes that have ethical relevance. Introducing concepts of ethics with attributes to AI systems does not mean that the AI system will act ethically or be able to interpret that information correctly. The nine ethical approaches are overall still relevant today, but again the proposal does not explain how to deal with complex problems, such as the trolley problem, for core safeguards like "no harm".
b) The second point refers to training datasets that evolve over time. Would there be a metric to map out or follow this evolution in order to guarantee that the attributes and concepts linked to the dataset have not been compromised? c) The third point defends the idea of adopting a probabilistic approach in favor of binary approaches. Could Bayesian networks, for instance, help us with this (a minimal sketch follows below)? The remaining bullet points are genuine but very general, which can lead to inconsistencies or circular reasoning when very specific cases are presented. For instance, taking point nine into account, would it not be against the idea of distributed knowledge and value diversification if a central authority decides whether certain datasets are deemed risky? How open should the database be, so as to involve people from different backgrounds while still giving authoritative powers the ability to pause the adoption of risky datasets?
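The minimal sketch announced in point c): a tiny hand-coded Bayesian network in Python showing how a judgment about an act can be expressed as a probability rather than a binary ethical/unethical verdict. The two variables, their structure (Intent -> Harm) and all the numbers are my own hypothetical choices, not taken from the proposal.

    # Two-node Bayesian network, Intent -> Harm, with hand-coded tables.

    P_intent = {"benign": 0.8, "malicious": 0.2}     # prior over intent
    P_harm_given_intent = {                          # conditional table
        "benign":    {"harm": 0.1, "no_harm": 0.9},
        "malicious": {"harm": 0.7, "no_harm": 0.3},
    }

    def posterior_intent(observed):
        # Bayes' rule by enumeration: P(intent | harm observation).
        joint = {i: P_intent[i] * P_harm_given_intent[i][observed]
                 for i in P_intent}
        z = sum(joint.values())
        return {i: p / z for i, p in joint.items()}

    # Observing harm shifts, but does not flip, the judgment: the output
    # stays graded instead of collapsing into a binary verdict.
    print(posterior_intent("harm"))  # {'benign': ~0.36, 'malicious': ~0.64}

This graded output is what a probabilistic approach buys over binary labels, though a realistic network would of course need many more variables (context, timeframe, stakeholders).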