Object-based encoding constrains storage in visual working memory
Published in Journal of Experimental Psychology: General, 2023
Recommended citation: Ngiam, W. X. Q., Loetscher, K., & Awh, E. (2023). Object-based encoding constrains storage in visual working memory. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0001479
Abstract
The fundamental unit of visual working memory (WM) has been debated for decades. WM could be object-based, such that capacity is set by the number of individuated objects, or feature-based, such that capacity is determined by the total number of feature values stored. The goal of the present work was to examine whether object-based or feature-based models better explain how multi-feature objects (i.e., color/orientation or color/shape) are encoded into visual WM. If maximum capacity is limited by the number of individuated objects, then above-chance performance should be restricted to the same number of items as in a single-feature condition. By contrast, if capacity is determined by independent storage resources for distinct features, without respect to the objects that contain those features, then successful storage of feature values could be distributed across a larger number of objects than when only a single feature is relevant. We conducted a whole-report task in which subjects reported both features of every item in a six-item array. As in past work, we observed an object-based benefit: substantially more feature values were stored for multi-feature objects than for single-feature objects. In addition, observers showed a strong tendency to report items in descending order of memory quality, replicating Adam et al. (2017). Finally, the crucial finding was that above-chance reports for multi-feature objects were concentrated within the first three responses, while the final three responses were best modeled as guesses. Thus, whole-report procedures reveal object-based encoding into visual working memory.