Part 0: Young Claude as an Autistic Human
I hate to anthropomorphize Claude, but I had an interesting sentiment analysis problem today that I’m thinking through. LLMs that accept human input are going to see a lot of variation, and much of it will be conversational. I’m working with an instructor at Yale who is developing LLMs for suicide prevention and therapy, and in cases where people aren’t in the right psychological state, how do we determine their emotional state at a given moment?
Anthropic says that AI should be treated “as a brilliant but very new employee (with amnesia) who needs explicit instructions” [1]. This makes sense, but in trying it out (Claude Haiku, not Sonnet) I can see why people hate dealing with ASD folks like myself. The AI is genuinely on the spectrum, and teaching it without giving direct instructions about tone or what to do is impossible, leading to really long prompts that waste precious tokens (as a corollary, people tend to get exasperated socially with ASD folks for similar reasons).
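To make the “explicit instructions” point concrete, here is a minimal sketch of the kind of spelled-out prompt I mean, using the Anthropic Python SDK to ask Claude Haiku to label the emotional state of a message. The model ID, the label set, and the prompt wording are my own assumptions for illustration, not anything recommended by Anthropic.

```python
# Minimal sketch: asking Claude Haiku to label the emotional state of a message.
# The model ID, label set, and prompt wording are assumptions for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Note how much has to be spelled out explicitly: the allowed labels, the output
# format, even the instruction not to explain itself.
SYSTEM_PROMPT = (
    "You are a sentiment classifier. Read the user's message and respond with "
    "exactly one word from this list: calm, anxious, sad, angry, hopeless, mixed. "
    "Do not add explanations, punctuation, or any other text."
)

def classify_emotion(message: str) -> str:
    """Return a single-word emotional-state label for the given message."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # assumed model ID; use whichever Haiku version you have access to
        max_tokens=10,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": message}],
    )
    return response.content[0].text.strip().lower()

if __name__ == "__main__":
    print(classify_emotion("I don't really see the point of getting up tomorrow."))
```

Even this toy version needs every expectation stated outright; leave any of it implicit and the model fills the gap in ways you didn’t intend, which is exactly how the prompts balloon.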
- https://arxiv.org/html/2407.12725v1 is a pretty good look at how some LLMs can read emotions. It also has a pretty cool experimental setup.
This is the start of a multi-part look into how LLMs can read emotional content, mainly because when I read fiction I don’t necessarily see it that well myself. I think I have some emotional deficits of my own, and I think AI could be very valuable in helping with “emotional blindness” for folks on the spectrum (or really, really bad if used badly).