FastVLM: Efficient Vision Encoding for Vision Language Models


This is the official repository of
FastVLM: Efficient Vision Encoding for Vision Language Models (CVPR 2025).

Accuracy vs latency figure.

  • We introduce FastViTHD, a novel hybrid vision encoder designed to output fewer tokens and significantly reduce encoding time for high-resolution images.
  • Our smallest variant outperforms LLaVA-OneVision-0.5B while being 85x faster in Time-to-First-Token (TTFT) and using a 3.4x smaller vision encoder.
  • Our larger variants, using the Qwen2-7B LLM, outperform recent works like Cambrian-1-8B while using only a single image encoder and achieving 7.9x faster TTFT.
  • Demo iOS app to demonstrate the performance of our model on a mobile device.

We use the LLaVA codebase to train FastVLM variants. To train or finetune your own variants,
please follow the instructions provided in the LLaVA codebase.
Below, we provide instructions for running inference with our models. First, set up the environment:

git clone https://github.com/apple/ml-fastvlm.git   # clone this repository
cd ml-fastvlm
conda create -n fastvlm python=3.10
conda activate fastvlm
pip install -e .
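
To quickly verify the editable install, you can try importing the package (the llava module name is an assumption based on the LLaVA codebase this repository builds on):

python -c "import llava; print('install ok')"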

For detailed information on various evaluations, please refer to our paper.

To download all the pretrained checkpoints, run the command below (note that this might take some time depending on your connection, so it might be a good time to grab ☕️ while you wait).

bash get_models.sh   # Files will be downloaded to `checkpoints` directory.

To run inference with a PyTorch checkpoint, follow the instructions below:

python predict.py --model-path /path/to/checkpoint-dir \
                  --image-file /path/to/image.png \
                  --prompt "Describe the image."
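
Under the hood, predict.py relies on the LLaVA codebase to load the checkpoint and run generation. The sketch below illustrates roughly the same flow programmatically; it is a minimal example that assumes the standard LLaVA APIs (load_pretrained_model, conv_templates, process_images, tokenizer_image_token) and the "qwen_2" conversation template, not a documented interface of this repository.

# Minimal sketch, assuming the LLaVA codebase APIs; not the repository's exact interface.
import torch
from PIL import Image

from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.model.builder import load_pretrained_model

model_path = "/path/to/checkpoint-dir"
# Load tokenizer, model, and image processor from the checkpoint directory.
tokenizer, model, image_processor, _ = load_pretrained_model(
    model_path, None, get_model_name_from_path(model_path))

# Build a chat prompt containing the image placeholder token.
conv = conv_templates["qwen_2"].copy()  # template name is an assumption
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nDescribe the image.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt (the image token is handled separately).
image = Image.open("/path/to/image.png").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(model.device, dtype=model.dtype)
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX,
                                  return_tensors="pt").unsqueeze(0).to(model.device)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor,
                                image_sizes=[image.size], max_new_tokens=256)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip())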

Inference on Apple Silicon

To run inference on Apple Silicon, PyTorch checkpoints have to be exported to a format
suitable for running on Apple Silicon. Detailed instructions and code can be found in the model_export subfolder;
please see the README there for more details.

For convenience, we provide three models already exported to an Apple Silicon compatible format: fastvlm_0.5b_stage3,
fastvlm_1.5b_stage3, and fastvlm_7b_stage3.
We encourage developers to export the model of their choice with the appropriate quantization levels by following
the instructions in model_export.

Inference on Apple Devices

To run inference on Apple devices like iPhone, iPad, or Mac, see the app subfolder for more details.

If you found this code useful, please cite the following paper:

@InProceedings{fastvlm2025,
  author = {Pavan Kumar Anasosalu Vasu and Fartash Faghri and Chun-Liang Li and Cem Koc and Nate True and Albert Antony and Gokul Santhanam and James Gabriel and Peter Grasch and Oncel Tuzel and Hadi Pouransari},
  title = {FastVLM: Efficient Vision Encoding for Vision Language Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2025},
}

Our codebase is built using multiple open-source contributions; please see ACKNOWLEDGEMENTS for more details.

Please check out the repository LICENSE before using the provided code and
LICENSE_MODEL for the released models.


