Author(s):
Pereira, Américo ; Carvalho, Pedro ; Pereira, Nuno ; Viana, Paula ; Côrte-Real, Luís
Date: 2023
Persistent Identifier: http://hdl.handle.net/10400.22/24734
Source: Repositório Científico do Instituto Politécnico do Porto
Subject(s): Computer vision; datasets; scene analysis; scene reconstruction; visual scene understanding
Description
The widespread use of smartphones and other low-cost equipment as recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities have made visual data essential in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in a visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or addressed specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyzes the problem from an end-to-end perspective, i.e., from visual scene analysis to the representation of information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to provide a structure for discussing challenges and opportunities at different steps of the entire process, allowing current gaps in the literature to be identified. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for the relevant tasks.