HAI Workshop: Uncertainty in AI Situations

Date
Tue December 10th 2019, 10:00am - 4:00pm
Event Sponsor
Stanford Institute for Human-Centered Artificial Intelligence
Location
CESTA (back workspace)
Wallenberg Hall (Bldg 160)

Led by Elaine Treharne and Mark Algee-Hewitt

Please join us for a collegial Human-centered Artificial Intelligence workshop to brainstorm research ideas about “Uncertainty in AI Situations”. Joining us will be Professor Mohamed Cheriet, Director of Synchromedia Laboratory at the University of Quebec’s École de technologie supérieure (ÉTS).

Our workshop’s focus on “Uncertainty in AI Situations” asks researchers to consider what AI can do when faced with uncertainty. Machine-learning algorithms that classify by posterior probabilities of class membership often produce ambiguous results: whether because of sparse training data or genuinely ambiguous cases, the likelihoods of the possible outcomes are approximately even. In such situations, the human programmers must decide how the machine handles ambiguity: whether it makes a “best-fit” classification or reports the potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.
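The choice described above, between forcing a “best-fit” label and reporting the uncertainty, is sometimes implemented as a classifier with a reject option. A minimal sketch in plain Python, using hypothetical posterior probabilities and a hypothetical confidence threshold (both chosen for illustration, not drawn from any particular system):

```python
# Sketch of a "reject option": when posterior probabilities of class
# membership are approximately even, abstain rather than force a label.
# The threshold value and the class labels here are illustrative only.

def classify_with_reject(posteriors, threshold=0.7):
    """Return the best-fit label if its posterior clears the threshold;
    otherwise report the ambiguity instead of guessing."""
    label = max(posteriors, key=posteriors.get)  # best-fit class
    if posteriors[label] >= threshold:
        return label
    return "uncertain"  # surface the ambiguity to a human reader

# Clear-cut case: one class dominates, so the best-fit label is returned.
print(classify_with_reject({"irony": 0.92, "sincere": 0.08}))  # irony

# Ambiguous case: the likelihoods are approximately even, so abstain.
print(classify_with_reject({"irony": 0.52, "sincere": 0.48}))  # uncertain
```

Where the threshold is set, and what happens to the “uncertain” cases, are exactly the kinds of human design decisions the workshop aims to examine.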

Humanists, in particular, are adept and professionally skilled at working with interpretative paradigms that are neither right nor wrong, so our insights into uncertainty, ambiguity, and indecision are crucial to AI research and development. Moreover, through in-depth work on irony and satire (genres that are deliberately ambiguous and indeterminate) and on color description (one of the most ambiguous of all linguistic and conceptual categories), the workshop will discuss the attributes of unclarity at multiple levels, helping us frame and advance more effective questions that take uncertainty into account.

We shall ask:

  • How do researchers create training sets that engage with uncertainty, particularly when deciding between reflecting real-world data and curating data sets to avoid bias?
  • How can we frame ontologies, typologies, and epistemologies that can account for, and help solve, ambiguity in data and indecision in AI?

Our workshop will begin the process of advancing AI to a new intellectual understanding of one of the trickiest problems in the machine-learning environment.

No prior knowledge of AI is assumed for this workshop. Please do come and join in!