- Experimental setup
- visual displays with two stimuli
- referential expressions as auditory cues
- reaction time measurement starts at modifier onset
- stimuli were not color-diagnostic
- reaction time was measured at the button press indicating which of the two objects was the target (see the sketch below)
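A minimal sketch of the intended RT computation (all names and numbers are illustrative, not actual experiment code): RT is taken relative to the onset of the modifier within the auditory cue, not relative to trial or utterance onset.

```python
# Hypothetical helper illustrating the RT definition above.
def reaction_time(press_time: float, utterance_onset: float,
                  modifier_offset_in_audio: float) -> float:
    """RT relative to modifier onset.

    press_time               -- timestamp of the target-selection button press
    utterance_onset          -- timestamp at which audio playback started
    modifier_offset_in_audio -- time from audio start to modifier onset
                                (e.g., from a forced alignment)
    """
    modifier_onset = utterance_onset + modifier_offset_in_audio
    return press_time - modifier_onset

# Example trial: audio starts at 2.10 s, the modifier begins 0.35 s into the
# audio, and the button is pressed at 3.05 s -> RT = 0.60 s.
print(reaction_time(press_time=3.05, utterance_onset=2.10,
                    modifier_offset_in_audio=0.35))
```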
- Result
- reaction times were higher when the modifier was used overinformatively than when it was used informatively (see the sketch below)
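A hedged sketch of how this comparison could be run on per-trial data; the values below are placeholders, not the actual measurements, and the real analysis would likely use a mixed-effects model rather than a plain t-test.

```python
# Placeholder data for illustration only; not the experiment's results.
from scipy import stats

rt_informative = [0.61, 0.58, 0.66, 0.59, 0.63]      # seconds, made up
rt_overinformative = [0.72, 0.69, 0.75, 0.70, 0.74]  # seconds, made up

t, p = stats.ttest_ind(rt_overinformative, rt_informative)
print(f"mean informative:     {sum(rt_informative) / len(rt_informative):.3f} s")
print(f"mean overinformative: {sum(rt_overinformative) / len(rt_overinformative):.3f} s")
print(f"t = {t:.2f}, p = {p:.4f}")
```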
- Potential problems
- didn't use color-diagnostic stimuli (the more atypical an object's color, the more often speakers use a modifier overinformatively, perhaps to correct the listener's visual search; a listener has no strong expectation about what color a circle might be, which could make them look for a contrast set)
- very low scene variation
- pilots exist for displays with 3 and 6 stimuli
- utterances are chosen probabilistically, inspired by the typicality study (see the sketch below)
- only visually presented utterances so far
- Improve stimuli (e.g., improve orange pear, get rid of avocado)
- Have written and auditory cues (in separate experiments)
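A sketch of the planned probabilistic utterance choice, assuming (loosely following the typicality study) that the chance of adding a color modifier grows as the object's color becomes less typical; the items, typicality scores, and linking function are assumptions for illustration only.

```python
import random

# Assumed color-typicality scores in [0, 1]; 1 = fully typical color.
typicality = {"yellow banana": 0.95, "orange pear": 0.30, "blue pepper": 0.05}

def choose_utterance(noun: str, color: str, typ: float) -> str:
    """Return the bare noun or 'color noun', sampled probabilistically."""
    p_modifier = 1.0 - typ  # assumed linking function; could also be a softmax over utilities
    return f"{color} {noun}" if random.random() < p_modifier else noun

random.seed(0)
for item, typ in typicality.items():
    color, noun = item.split()
    print(item, "->", choose_utterance(noun, color, typ))
```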
- Open questions
- how to encode selection of an item (which keys to press)
- when to set the reaction-time onset?
- idea: take utterances from the BDA to get a "human-like" refExp distribution (maybe also tell participants that the refExps come from a previous experiment so they expect human-like, rational utterances); sketched below
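A sketch of the BDA idea: sample each trial's referring expression from the production probabilities inferred in the Bayesian data analysis of the earlier experiment, so the distribution of cues looks human-like. The probability table below is a made-up stand-in for the real posterior predictive values.

```python
import random

# Assumed posterior predictive P(utterance | target) for one display.
refexp_dist = {
    "pear": 0.55,         # bare noun
    "orange pear": 0.40,  # color-modified
    "orange one": 0.05,
}

def sample_refexp(dist: dict[str, float]) -> str:
    """Sample one referring expression according to the BDA-derived weights."""
    utterances = list(dist)
    weights = list(dist.values())
    return random.choices(utterances, weights=weights, k=1)[0]

random.seed(1)
print([sample_refexp(refexp_dist) for _ in range(5)])
```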