Abstract
The widespread use of smartphones and other low-cost recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities have made visual data essential in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in a visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or on specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyzes the problem from an end-to-end perspective, i.e., from visual scene analysis to the representation of information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to provide a structure for discussing the challenges and opportunities at each step of the process, allowing current gaps in the literature to be identified. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for relevant tasks.
Keywords
Computer vision; datasets; scene analysis; scene reconstruction; visual scene understanding
Citation
A. Pereira, P. Carvalho, N. Pereira, P. Viana and L. Côrte-Real, "From a Visual Scene to a Virtual Representation: A Cross-Domain Review," in IEEE Access, vol. 11, pp. 57916-57933, 2023, doi: 10.1109/ACCESS.2023.3283495.