# GeoVistaBench
**GeoVistaBench is the first benchmark to evaluate agentic models’ general geolocalization ability.**
GeoVistaBench is a collection of real-world photos with rich metadata for evaluating geolocation models. Each sample corresponds to one picture identified by its `uid` and includes both the original high-resolution imagery and a lightweight preview for rapid inspection.
## Dataset Structure
- `id`: unique identifier.
- `raw_image_path`: relative path (within this repo) to the source picture under `raw_image/<uid>/`.
- `preview`: compressed JPEG preview (≤1 megapixel) under `preview_image/<uid>/`. This is the image shown by the Hugging Face Dataset Viewer.
- `metadata`: structured record that downstream users can parse to obtain latitude/longitude, city names, multi-level location tags, and related information.
- `data_type`: string describing the imagery type.
All samples are stored in a Hugging Face-compatible parquet file.
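As a minimal sketch, assuming the `metadata` field is a JSON-encoded string (the exact key names such as `"lat"`, `"lng"`, and `"city"` are illustrative and may differ in the actual dataset), parsing a sample's location could look like:

```python
import json

def parse_location(metadata_str):
    """Parse a sample's `metadata` field and return (lat, lng, city).

    Assumes `metadata` is a JSON-encoded string; key names here are
    hypothetical and should be checked against the real schema.
    """
    meta = json.loads(metadata_str)
    return meta.get("lat"), meta.get("lng"), meta.get("city")

# Hypothetical metadata payload for illustration only.
example = '{"lat": 48.8584, "lng": 2.2945, "city": "Paris"}'
lat, lng, city = parse_location(example)
```

Inspect one real sample's `metadata` first to confirm its encoding and keys before relying on any particular field names.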
## Working with GeoVistaBench
1. Clone/download this folder (or pull it via `huggingface_hub`).
2. Load the parquet file using Python:
```python
from datasets import load_dataset
ds = load_dataset('path/to/this/folder', split='test')
sample = ds[0]
```
`sample["raw_image_path"]` points to the higher-quality file for inference.
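Since `raw_image_path` is relative to the dataset folder, a small sketch for resolving it to an absolute path (the repo root and filename below are hypothetical placeholders):

```python
from pathlib import Path

def resolve_raw_path(repo_root, raw_image_path):
    """Join the local dataset root with a sample's relative path.

    `raw_image_path` is relative to the repo root, e.g.
    "raw_image/<uid>/<file>" -- the concrete filename is dataset-specific.
    """
    return Path(repo_root) / raw_image_path

# Hypothetical local checkout and sample path, for illustration only.
p = resolve_raw_path("/data/GeoVistaBench", "raw_image/abc123/img.jpg")
```

The resolved path can then be opened with any image library (e.g. Pillow's `Image.open(p)`) for inference on the full-resolution file.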
## Related Resources
- GeoVista Technical Report
https://huggingface.co/papers/2511.15705
- GeoVista-Bench (previewable variant):
A companion dataset with resized JPEG previews intended to make image preview easier in the Hugging Face dataset viewer:
https://huggingface.co/datasets/LibraTree/GeoVistaBench
(Same underlying benchmark; different packaging / image formats.)
## Citation
```
@misc{wang2025geovistawebaugmentedagenticvisual,
title = {GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization},
author = {Yikun Wang and Zuyan Liu and Ziyi Wang and Pengfei Liu and Han Hu and Yongming Rao},
year = {2025},
eprint = {2511.15705},
archivePrefix= {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2511.15705},
}
```