agenda

My research agenda and ideas that excite me


ongoing projects

projects I am leading or co-leading

Agonistic Image Generation. with Andrew Shaw, Ranjay Krishna, and Amy Zhang. UW CSE. We seek to address cases of offense and harm in image generation of people by centering clarification of user intentionality, inspired by philosophical accounts of speech acts and autonomy.

Discursive Datasets. with Matt Wallingford, Amy Zhang, and Ranjay Krishna. UW CSE. Using socially produced visual data to build more robust and socially intelligent visual representations. Supported by the Mary Gates Scholarship.

Ready-/Present-at-hand in Language Models. with Ari Holtzman (U Chicago). We identify the utility of Heidegger’s ready-/present-at-hand distinction over the prevailing System 1/System 2 framing for understanding important aspects of language models.

Political Autonomy on Social Platforms. with Katie Yurechko (Wash. & Lee, Oxford). We set forth a framework for understanding and analyzing the pre-/political experience and autonomy of users on social platforms.

Emergence in Language Models, a Philosophical Perspective. An analysis of what it really means to call things in language models “emergent”, and what kinds of things can even be said to “emerge” in the first place – not abilities, I claim. Essay forthcoming.

Benchmarking LLM Creativity. with Tuhin Chakrabarty (Stony Brook) and Roger Beaty (Penn State).


exciting ideas and directions

Kernels of research ideas I’m excited about. If any of these excite you too, please shoot me an email at andreye [at] uw [dot] edu!

AI Tools for Thought / Textual Social Sciences

  • Proactively asking great questions is a core part of thinking. Being asked a challenging question is how humans become conscious of what they don’t know they don’t know – we’re intellectually “caught off guard”. But it’s very difficult to ask great questions. How can AI systems do it?
  • Critical learning often takes the practical form of figuring out what words mean. (Think philosophy 101: figuring out what “metaphysics”, “contingency”, and “normative” mean.) Formal definitions are only a scaffold; real conceptual grasp of a term comes from reading a multitude of texts that cross-reference and build it up. Can LMs introduce “new” words that develop “new” concepts, and thus contribute toward human “intelligence augmentation”?

Digital Tools for Metaphilosophy

  • Expanding the modalities in which we do philosophy beyond the text document
  • Data sheets are now a commonplace practice for machine learning datasets, contextualizing where the data comes from, the methodology behind it, and its limitations. Can we extract and deploy “metaphilosophy data sheets”?
  • Can intelligent tools and interfaces help bridge intellectual divides in philosophy (e.g. analytic-continental, canon-periphery)?

Philosophical meditations on AI

  • An exploration of what “selfhood” means for AI – what does it mean when models say “As an AI language model…”? What might it mean to negate the sycophantic, servile, mirror-like nature many current language models have been aligned to?
  • Critique of the utilitarian priority of “preferences” in alignment, perhaps borrowing from the Frankfurt School.
  • The kind of thing Borges and AI does, but with someone like Baudrillard, Nietzsche, or Foucault.
  • Developing Vilém Flusser’s notion of technical images for computer vision. See: Into the Universe of Technical Images.
  • Theorizing whether computer vision (and/or language modeling) is guilty of what Donna Haraway calls the ‘god trick’, and building information systems that reflect Haraway’s maxim that objectivity is partial perspective. See: Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective and A Cyborg Manifesto. “Situated Cameras, Situated Knowledges” is a great start.
  • What happens if we take Iris Murdoch’s notion of ‘moral vision’ literally? Murdoch says that “moral differences are differences in vision” – what we need is not a “renewed attempt to specify the facts but rather a fresh vision”. What does this mean for computer vision?