zensvi.cv.ClassifierPerceptionViT¶
- class zensvi.cv.ClassifierPerceptionViT(perception_study, device=None, verbosity=1)¶
Bases:
zensvi.cv.classification.base.BaseClassifier
A classifier for evaluating the perception of streetscapes based on a given perception study.
- Parameters:
perception_study (str) – The perception study for which the model is trained. Options are “safer”, “livelier”, “wealthier”, “more beautiful”, “more boring”, and “more depressing”. This determines the checkpoint file used.
device (str, optional) – The device to load the model onto. Options are “cpu”, “cuda”, and “mps”. If None, the model tries to use a GPU if one is available; otherwise it falls back to CPU.
verbosity (int, optional) – Level of verbosity for progress bars. Defaults to 1. 0 = no progress bars, 1 = outer loops only, 2 = all loops.
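The device fallback described above can be sketched as follows. Note that `pick_device` is a hypothetical helper written for illustration, not part of the zensvi API:

```python
def pick_device(device=None):
    """Hypothetical helper mirroring the documented fallback:
    an explicit device wins; otherwise use a GPU if available, else CPU."""
    if device is not None:
        return device  # caller chose "cpu", "cuda", or "mps" explicitly
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    except ImportError:
        pass  # torch not installed; fall back to CPU
    return "cpu"
```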
- classify(dir_input: str | pathlib.Path, dir_summary_output: str | pathlib.Path, batch_size=1, save_format='json csv', verbosity: int = None) List[str] ¶
Classifies images based on human perception of streetscapes from the specified perception study.
- Parameters:
dir_input (Union[str, Path]) – Directory containing input images.
dir_summary_output (Union[str, Path]) – Directory to save summary output. If None, output is not saved.
batch_size (int, optional) – Batch size for inference. Defaults to 1.
save_format (str, optional) – Save format(s) for the output. Options are “json” and “csv”; to save in both formats, separate them with a space. Defaults to “json csv”.
verbosity (int, optional) – Level of verbosity for progress bars. If None, uses the instance’s verbosity level. 0 = no progress bars, 1 = outer loops only, 2 = all loops.
- Returns:
List of dictionaries containing perception scores for each image.
- Return type:
List[dict]
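A minimal usage sketch of the method above. The directory paths are hypothetical placeholders; adjust them to your own data layout:

```python
from pathlib import Path

# Hypothetical paths for illustration; point these at your own data.
dir_input = Path("data/images")
dir_summary_output = Path("results/perception")

try:
    from zensvi.cv import ClassifierPerceptionViT

    classifier = ClassifierPerceptionViT(perception_study="safer", verbosity=1)
    scores = classifier.classify(
        dir_input=dir_input,
        dir_summary_output=dir_summary_output,
        batch_size=8,
        save_format="json csv",  # space-separated: write both JSON and CSV
    )
    print(scores[:3])  # perception scores for the first three images
except ImportError:
    print("zensvi is not installed; install it to run this example")
```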
- property verbosity¶
The verbosity level for progress bars.
- Returns:
verbosity level (0=no progress, 1=outer loops only, 2=all loops)
- Return type:
int