VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications
Author(s)
Zhang, Chenhui
Advisor
Wang, Sherrie
Abstract
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks that combine visual input with natural language instructions. However, it remains unclear to what extent their capabilities on natural images transfer to Earth observation (EO) data, which consist predominantly of satellite and aerial images that are less common in VLM training data. In this work, we propose VLEO-Bench, a comprehensive evaluation framework that quantifies the progress of VLMs toward becoming useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our framework covers scenarios such as urban monitoring, disaster relief, land use, and conservation. We find that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that yields strong performance on open-ended tasks such as location understanding and image captioning, their poor spatial reasoning limits their usefulness on object localization and counting tasks.
Date issued
2025-05
Department
Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Publisher
Massachusetts Institute of Technology