zensvi.cv.ClassifierGlare
=========================

.. py:class:: zensvi.cv.ClassifierGlare(device=None, verbosity=1)

   Bases: :py:obj:`zensvi.cv.classification.base.BaseClassifier`

   A classifier for identifying glare in images using the model from Hou et al. (2024)
   (https://github.com/ualsg/global-streetscapes).

   :param device: The device that the model should be loaded onto. Options are "cpu", "cuda", or "mps".
       If `None`, the model tries to use a GPU if available; otherwise, it falls back to the CPU.
   :type device: str, optional
   :param verbosity: Level of verbosity for progress bars. Defaults to 1.
       0 = no progress bars, 1 = outer loops only, 2 = all loops.
   :type verbosity: int, optional

   .. py:method:: classify(dir_input: Union[str, pathlib.Path], dir_summary_output: Union[str, pathlib.Path], batch_size=1, save_format='json csv', verbosity: int = None) -> List[str]

      Classifies images based on the presence of glare.

      Processes images from the input directory and classifies each as having glare ("True")
      or not having glare ("False"). Results can be saved in JSON and/or CSV format.

      :param dir_input: Directory containing input images or path to a single image.
      :param dir_summary_output: Directory to save classification results.
      :param batch_size: Number of images to process simultaneously. Defaults to 1.
      :param save_format: Space-separated string of output formats. Options are "json" and "csv".
          Defaults to "json csv".
      :param verbosity: Level of verbosity for progress bars. If None, uses the instance's
          verbosity level. 0 = no progress bars, 1 = outer loops only, 2 = all loops.
      :type verbosity: int, optional
      :returns: List of glare classifications ("True" or "False") for each image.

   .. py:property:: verbosity

      Property for the verbosity level of progress bars.

      :returns: Verbosity level (0 = no progress bars, 1 = outer loops only, 2 = all loops).
      :rtype: int
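
The following is a minimal usage sketch based on the signatures documented above. The input and
output paths are placeholders to be replaced with real directories, and the batch size of 4 is
only an illustrative choice.

.. code-block:: python

   from zensvi.cv import ClassifierGlare

   # Instantiate the classifier; with device=None the model picks a GPU if
   # available and otherwise falls back to the CPU.
   classifier = ClassifierGlare(device=None, verbosity=1)

   # Classify a directory of images and write JSON and CSV summaries.
   # "path/to/images" and "path/to/output" are placeholder paths.
   results = classifier.classify(
       dir_input="path/to/images",
       dir_summary_output="path/to/output",
       batch_size=4,
       save_format="json csv",
   )

   # Each entry is the string "True" or "False" for the corresponding image.
   print(results[:5])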