fxtentacle
7 hours ago
The title is not wrong, but it doesn't feel quite right either. What they do here is use a pre-trained model to guide the training of a second model. Of course that massively speeds up training of the second model, but it's not as if you can now train a diffusion model from scratch 20x faster. Rather, this is a technique for transplanting an existing model onto a different architecture so that you don't have to start training from zero.
pedrovhb
3 hours ago
It does feel right to me, because the second model isn't being distilled in the usual sense; in fact, the second model is not an image generation model at all, but a visual encoder. That is, it's a more "general purpose" model that specializes in extracting semantic information from images.
In hindsight it makes total sense: generative image models don't automatically start out with a notion of semantic meaning or of the world, so they have to learn one implicitly during training. That's a hard task by itself, and the network isn't specifically trained for it; it picks it up on the go while learning to create images. The idea of the paper, then, is to give the diffusion model a preexisting concept of the world by nudging its internal representations to be similar to the visual encoder's. As I understand it, DINO isn't even used during inference once the model is ready; it only shapes the representations during training.
I wouldn't at all describe it as "a technique for transplanting an existing model onto a different architecture". It's different from distillation because again, DINO isn't an image generation model at all. It's more like (very roughly simplifying for the sake of analogy) instead of teaching someone to cook from scratch, we're starting with a chef who already knows all about ingredients, flavors, and cooking techniques, but hasn't yet learned to create dishes. This chef would likely learn to create new recipes much faster and more effectively than someone starting from zero knowledge about food. It's different from telling them to just copy another chef's recipes.
psb217
a few seconds ago
The technique in this paper would still rightly be described as distillation; in this case it's distillation of "internal" representations rather than of the final prediction, which is a reasonably common form of distillation. The interesting observation in this paper is that including an auxiliary distillation loss based on features from a non-generative model can be beneficial when training a generative model. This observation leads to interesting questions, e.g. which parts of the overall task of generating images (diffusionly) are learned faster/better thanks to this auxiliary distillation loss.
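To make the idea concrete, here is a minimal numpy sketch of what such an auxiliary representation-alignment loss might look like. Everything here is illustrative, not the paper's exact formulation: the function names and the weight `lam` are my own, and a real setup would first project the diffusion model's hidden states to the frozen encoder's feature dimensionality.

```python
import numpy as np

def cosine_alignment_loss(h_gen, h_enc):
    """Mean (1 - cosine similarity) between the generator's hidden
    states and the frozen encoder's features, one score per token.
    Both inputs have shape (batch, tokens, dim)."""
    h_gen = h_gen / np.linalg.norm(h_gen, axis=-1, keepdims=True)
    h_enc = h_enc / np.linalg.norm(h_enc, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(h_gen * h_enc, axis=-1)))

def total_loss(diffusion_loss, h_gen, h_enc, lam=0.5):
    # Combined training objective: the usual denoising loss plus a
    # weighted term pulling internal features toward the encoder's.
    return diffusion_loss + lam * cosine_alignment_loss(h_gen, h_enc)
```

The encoder stays frozen and only supplies targets for the hidden states, so at inference time it can be dropped entirely, which matches the point above that DINO isn't needed once training is done.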
byyoung3
6 hours ago
Yes, it seems obvious now, but before this it wasn't clear that this could speed things up, since the pretrained model was trained on a different objective. It's a brilliant idea that works amazingly well.
zaptrem
7 hours ago
Yeah, I wonder whether this still saves compute once you include the compute used to train DINOv2 (or whatever representation model you'd like to use)?
cubefox
3 hours ago
That's the question. More precisely, how does the new method compare to the classical one in terms of training compute and inference compute?