Xiaofei Liu is currently an Associate Professor of Philosophy at Xiamen University. He studied at Peking University, the University of St Andrews, and the University of Missouri. His research interests include moral responsibility, applied ethics, and the experimental approach to morality. He is currently working on three research projects: moral issues concerning the use of statistical discrimination, moral issues concerning interaction with artificial objects, and the understanding of agency in different cultures.
A common objection to statistical discrimination is that it violates the duty to respect individuality. A serious challenge to this objection is that even individualized consideration inevitably relies on some kind of generalization. This leads some theorists to conclude that there is no morally significant difference between discrimination based on individual information and discrimination based on some group identity. This talk considers a possible way to draw a morally significant distinction between two types of generalization in statistical discrimination: individualized generalization and over-generalization. It argues that reliability is not the only dimension for evaluating statistical generalization; there is a second, and morally relevant, dimension – identity – which helps to explain why there is a morally significant difference between individualized generalization and over-generalization. The talk ends by extending this distinction to two types of inductive inference in everyday causal reasoning, explaining why the standard causal reasoning in scientific research does not amount to over-generalization.
Recent advances in artificial intelligence and related technologies have significantly changed many aspects of human life, and are destined to bring even more radical changes to the structure of human society. One potential area subject to such changes is what the British philosopher Peter F. Strawson called reactive attitudes – our emotional and attitudinal reactions to the intelligent beings with which we interact. Should we praise AlphaGo, the artificial intelligence program that beat the best human Go players, for its computational power or creativity? Should we adore a robot companion for its heart-warming partnership? This project addresses both the normative and the empirical questions of reactive attitudes: (1) Should we have reactive attitudes toward intelligent machines? If so, under what conditions? (2) Do people from different cultures in fact have reactive attitudes toward artificial objects? If so, under what conditions?