References

  1. AGAMBEN, Giorgio. O uso dos corpos: Homo sacer, IV, 2. Boitempo Editorial, 2017.
  2. ALON-BARKAT, S.; BUSUIOC, M. Human-AI Interactions in Public Sector Decision-Making: 'Automation Bias' and 'Selective Adherence' to Algorithmic Advice. 2022. Available at SSRN.
  3. ARNOLD, K. C.; CHAUNCEY, K.; GAJOS, K. Z. Predictive text encourages predictable writing. In: IUI '20: Proceedings of the 25th International Conference on Intelligent User Interfaces, 2020, p. 128–138.
  4. BAK-COLEMAN, J. B., ALFANO, M., BARFUSS, W., BERGSTROM, C. T., CENTENO, M. A., COUZIN, I. D., & WEBER, E. U. (2021). Stewardship of global collective behavior, Proceedings of the National Academy of Sciences, 118(27).
  5. BENJAMIN, R. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, Cambridge, UK
  6. BENJAMIN, R. Informed refusal: Toward a justice-based bioethics. Science, Technology, & Human Values, v. 41, n. 6, p. 967-990, 2016.
  7. BLACK, Edwin. IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America's Most Powerful Corporation. Expanded edition. Dialog Press, 2012.
  8. BORODITSKY, Lera. How language shapes thought. Scientific American, v. 304, n. 2, p. 62-65, 2011.
  9. BRAIDOTTI, Rosi. Posthuman knowledge. Cambridge: Polity Press, 2019.
  10. BRUNS, Axel. Are filter bubbles real? Cambridge: Polity Press, 2019.
  11. CHUDE-SOKEI, L. (2015). The sound of culture: Diaspora and black technopoetics. Wesleyan University Press.
  12. CORBETT-DAVIES, Sam; GOEL, Sharad. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023, 2018.
  13. DE SOUSA SANTOS, Boaventura. Epistemologias do Sul. Revista Crítica de Ciências Sociais, v. 80, p. 5-10, 2008.
  14. EUBANKS, Virginia. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.
  15. FEENBERG, Andrew. Transforming technology: A critical theory revisited. Oxford University Press, 2002.
  16. FLUSSER, Vilém; CARDOSO, Rafael; ABI-SÂMARA, Raquel. O mundo codificado: por uma filosofia do design e da comunicação. Editora Cosac Naify, 2007.
  17. FREEMAN, S.; GIBBS, M.; NANSEN, B. 'Don't mess with my algorithm': Exploring the relationship between listeners and automated curation and recommendation on music streaming services. First Monday, 27(1), 2022.
  18. GRANOVETTER, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443.
  19. HARAWAY, Donna. Simians, cyborgs, and women: The reinvention of nature. Routledge, 2013.
  20. HEIDEGGER, Martin. (2007) The question of technics. Scientiae Studia, v. 5, n. 3, p. 375–398.
  21. JOSELLI, M. 2014. Ethics and games: ‘morals, immorals, or amorals?’ A study on game ethics according to Aristotle, St. Augustine, and Kant. In SBGames, 448–56.
  22. KAPLAN, Andreas; HAENLEIN, Michael. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, v. 62, n. 1, p. 15-25, 2019.
  23. KRAMER, Adam DI; GUILLORY, Jamie E.; HANCOCK, Jeffrey T. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, v. 111, n. 24, p. 8788-8790, 2014.
  24. LEAHU, Lucian. Ontological surprises: A relational perspective on machine learning. In: Proceedings of the 2016 ACM Conference on Designing Interactive Systems. 2016. p. 182-186.
  25. LARUS, James et al. When computers decide: European recommendations on machine-learned automated decision making. 2018.
  26. MBEMBE, Achille. Necropolítica. n-1 edições, 2021.
  27. MULLAINATHAN, S. (2019). Biased algorithms are easier to fix than biased people. The New York Times.
  28. NICOLACI-DA-COSTA, A. M. (2002). Revoluções tecnológicas e transformações subjetivas. Psicologia: teoria e pesquisa, 18(2), 193-202.
  29. NOBLE, Safiya; DAMORIM, F. Algoritmos da opressão: como os mecanismos de busca reforçam o racismo. Editora Rua do Sabão, 2022.
  30. OBERMEYER, Z.; POWERS, B.; VOGELI, C.; MULLAINATHAN, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science, v. 366, n. 6464, p. 447-453, 2019. doi:10.1126/science.aax2342.
  31. O’NEIL, Cathy. Algoritmos de destruição em massa. Editora Rua do Sabão, 2021.
  32. PARISER, Eli. The filter bubble: What the Internet is hiding from you. Penguin UK, 2011.
  33. PASQUALE, F. 2020. New Laws of Robotics: Defending Human Expertise in the Age of AI, Harvard University Press, Cambridge.
  34. PRIMO, Alex Fernando Teixeira. Explorando o conceito de interatividade: definições e taxonomias. Informática na educação: teoria & prática. Vol. 2, n. 2 (out. 1999), p. 65-80, 1999.
  35. RAHWAN, I., CEBRIAN, M., OBRADOVICH, N. et al. Machine behaviour. Nature 568, 477–486 (2019).
  36. REID, T.; GILBERT, J. Inclusion in human-machine interactions. Science, v. 375, n. 6577, p. 149-150, 2022. doi:10.1126/science.abf2618.
  37. RICHARDSON, K. An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge, 2015.
  38. SALGANIK, M. J., DODDS, P. S., & WATTS, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762), 854–856.
  39. SIBILIA, P. (2014). O homem pós-orgânico: A alquimia dos corpos e das almas à luz das tecnologias digitais.
  40. SIMONDON, Gilbert. On the mode of existence of technical objects. Minneapolis: Univocal Publishing, 2017.
  41. SHNEIDERMAN, B. 2022. Human-Centered AI. Oxford University Press, Oxford, UK.
  42. SHIN, D. How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior, v. 109, 2020. ISSN 0747-5632.
  43. SHIVA, V. et al. Indigenous knowledges in global contexts: Multiple readings of our world. University of Toronto Press, 2000.
  44. SPARROW, R. 2017. Robots, rape and representation. International Journal of Social Robotics: 1–13.
  45. TEIXEIRA, J. de F. O cérebro e o robô: inteligência artificial, biotecnologia e a nova ética. São Paulo: Paulus, 2016.
  46. TUFEKCI, Z. YouTube, the great radicalizer. The New York Times, March 10, 2018.
  47. TURING, Alan M. Computing Machinery and Intelligence. Creative Computing, v. 6, n. 1, p. 44-53, 1980.
  48. TURKLE, S. Alone together: Why we expect more from technology and less from each other. Basic Books, 2011.
  49. VERBEEK, P. P. Beyond interaction: a short introduction to mediation theory. Interactions, 22(3), 26–31, 2015.
  50. WANG, Yilun; KOSINSKI, Michal. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, v. 114, n. 2, p. 246, 2018.
  51. WEISER, Mark. The computer for the 21st century. Scientific American, v. 265, n. 3, p. 94-104, 1991.
  52. WOLF, C., BLOMBERG, J. 2019. Evaluating the promise of human-algorithm collaborations in everyday work practices. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW (2019), 1–23
  53. WONDERLY, M. 2008. A Human approach to assessing the moral significance of ultra-violent video games. Ethics and Information Technology 10 (1): 1–10.
  54. ZHANG, S.; MEHTA, N.; SINGH, P. V.; SRINIVASAN, K. Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb. Rotman School of Management Working Paper No. 3770371, 2021. Available at SSRN.