I haven't yet had a chance to dig into this pre-print, but I know from @elmahdi it's been in the works for a while now, so I'm looking forward to reading it. Although I wouldn't call them foundation models. I'd call them multimodal pretrained models, or anything that doesn't further confuse people into thinking they're close to foundational.
https://arxiv.org/pdf/2209.15259.pdf
@timnitGebru @elmahdi fascinating read, thanks for sharing!
@timnitGebru @elmahdi
The necessary title change is done, along with some other updates (I wasn't totally OK with "foundation models" either in the earlier version). https://arxiv.org/abs/2209.15259