Rangel, José Carlos; Cruz, Edmanuel; Cazorla, Miguel
Automatic Understanding and Mapping of Regions in Cities Using Google Street View Images

Citation: Rangel JC, Cruz E, Cazorla M. Automatic Understanding and Mapping of Regions in Cities Using Google Street View Images. Applied Sciences. 2022; 12(6):2971. https://doi.org/10.3390/app12062971

URI: http://hdl.handle.net/10045/122169
DOI: 10.3390/app12062971
ISSN: 2076-3417
Publisher: MDPI

Abstract: The use of semantic representations to achieve place understanding has been widely studied using indoor information. This kind of data can then be used for navigation, localization, and place identification with mobile devices. Nevertheless, applying this approach to outdoor data involves certain non-trivial procedures, such as gathering the information. This problem can be solved by using map APIs that give access to the street-level images originally captured to build the map of a city. In this paper, we seek to leverage such APIs, which collect images of city streets, to generate a semantic representation of the city, built using a clustering algorithm and semantic descriptors. The main contribution of this work is a new approach to generating a map with semantic information for each area of the city. The proposed method can automatically assign a semantic label to each cluster on the map. This method can be useful for smart-city and autonomous-driving applications, since it categorizes the zones of a city. The results show the robustness of the proposed pipeline and the advantages of using Google Street View images, semantic descriptors, and machine learning algorithms to generate semantic maps of outdoor places. These maps properly encode the existing zones of the selected city and can also reveal new zones between the current ones.

Keywords: Semantic maps; Automatic map; Outdoor understanding; Deep learning
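As a rough illustration of the kind of pipeline the abstract describes, and not the authors' actual implementation, the sketch below clusters street-view images by GPS position and a per-image semantic descriptor, then labels each cluster with its dominant scene category. The category names, coordinates, and descriptors are hypothetical placeholders; in the paper's setting the descriptors would come from a deep scene classifier applied to Google Street View images.

```python
# Hypothetical sketch: cluster street-view images by location plus semantic
# descriptor, then label each cluster with its dominant scene category.
# Descriptors here are synthetic stand-ins for scene-classifier scores.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

CATEGORIES = ["residential", "commercial", "park", "industrial"]  # assumed label set
rng = np.random.default_rng(0)

n_images = 200
coords = rng.uniform([38.34, -0.50], [38.37, -0.47], size=(n_images, 2))  # arbitrary lat/lon
descriptors = rng.dirichlet(np.ones(len(CATEGORIES)), size=n_images)      # per-image scene scores

# Joint feature vector: normalized coordinates concatenated with the descriptor.
norm_coords = (coords - coords.mean(axis=0)) / coords.std(axis=0)
features = np.hstack([norm_coords, descriptors])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Assign each cluster the scene category that dominates its member images.
for c in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    top_categories = [CATEGORIES[int(np.argmax(descriptors[i]))] for i in members]
    label, votes = Counter(top_categories).most_common(1)[0]
    print(f"cluster {c}: {len(members)} images, label='{label}' ({votes} votes)")
```

The printed cluster labels correspond, in spirit, to the semantically annotated zones of the city map; the number of clusters and the choice of k-means are illustrative assumptions only.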