I noticed a similar phenomenon in my work on JoyCaption when I began teaching it VQA. JoyCaption was trained on about 800k image-caption pairs, and built from so400m and Llama 3.1 8B Instruct. There's no VQA data in its training.
As an experiment, I hand-built a VQA dataset of ~600 examples, a vanishingly small number compared to even rudimentary VQA datasets (which tend to run 10k examples or more). However, I ensured that the dataset was broad and highly varied, and that the queries aggressively exercised both visual and textual understanding.
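For concreteness, a hand-built dataset like this can be stored as one JSON record per line. The field names and file layout below are my own illustration, not JoyCaption's actual format:

```python
import json

# Hypothetical record layout for one hand-built VQA example; the real
# dataset format is an assumption here, chosen for illustration only.
def make_example(image_path: str, query: str, answer: str) -> str:
    """Serialize one image/query/answer triple as a JSONL line."""
    record = {"image": image_path, "query": query, "answer": answer}
    return json.dumps(record)

examples = [
    make_example("imgs/0001.jpg",
                 "How many people are visible, and what are they doing?",
                 "Two people are visible; both are seated and reading."),
    make_example("imgs/0002.jpg",
                 "Transcribe any text on the sign in the background.",
                 "The sign reads 'NO PARKING'."),
]

# Write the dataset as JSONL, one example per line.
with open("vqa_train.jsonl", "w") as f:
    f.write("\n".join(examples) + "\n")
```

The point of the format is less the schema than the content: each query should probe a different skill (counting, OCR, spatial reasoning, etc.) so no single task dominates.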
With only 600 training examples, I finetuned the base JoyCaption model in a handful of minutes, and to my surprise it not only gained VQA abilities but also generalized quite far outside its training set, even to concepts absent from the original 800k caption data.
My hypothesis is that if the training data is varied enough, it forces the model to generalize. It isn't given enough examples of any given type of task to learn specialized circuitry for them, so its only option is to learn a broadly generalized set of circuitry. The data keeps it on its toes, so to speak.
Of course, this leans heavily on Llama's existing (text-based) instruction tuning, so the model starts on good footing there. The surprising bit is being able to generalize so well to a new domain (vision) with so little data.
One caveat is that this model is highly unstable, and the accuracy of its responses is much worse than the base model's. It's able to handle all of the tasks I've tested on it, but often requires a few retries to get it right.
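In practice, the retry pattern amounts to resampling until an answer passes some check. A minimal sketch, where `generate` and `is_valid` are hypothetical stand-ins for a real model call and a real answer check:

```python
# Hypothetical retry wrapper around an unstable model: resample the answer
# until it passes a validity check, up to a fixed number of attempts.
def ask_with_retries(generate, is_valid, query, max_tries=3):
    """Call `generate` up to `max_tries` times; return the first answer
    that passes `is_valid`, or the last attempt if none do."""
    answer = None
    for _ in range(max_tries):
        answer = generate(query)
        if is_valid(answer):
            return answer
    return answer

# Toy deterministic stand-in: fails twice, then succeeds, to mimic a model
# that needs a few retries before producing a correct response.
attempts = []
def flaky_generate(query):
    attempts.append(query)
    return "wrong" if len(attempts) < 3 else "right"

answer = ask_with_retries(flaky_generate, lambda a: a == "right",
                          "What color is the car?")
```

The harder problem, of course, is that `is_valid` usually means a human eyeballing the output; there's no cheap automatic check for answer correctness.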
Building these datasets is also tedious and labor-intensive. I've yet to successfully train existing AIs to generate useful user queries/instructions/questions, either through prompting or finetuning, so it all has to be done by hand. And every answer was either written by me, or generated by an existing VLM and then edited by me to ensure perfect accuracy and adherence to the request. Since the queries are complex and challenging, writing the answers is similarly challenging and time-consuming.
As an aside: this training also seems to have broken Llama's alignment. I've had it be remarkably sassy in its responses, and it's much better at simulating more normal human responses.