Author(s):
Santos, Bruno M. ; Pais, Pedro ; Ribeiro, Francisco M. ; Lima, José ; Goncalves, Gil ; Pinto, Vítor H.
Date: 2023
Persistent Identifier: http://hdl.handle.net/10198/21876
Source: Biblioteca Digital do IPB
Subject(s): Hand keypoints estimation; Convolutional neural network; VGG; FreiHAND
Description
Accurate estimation of hand shape and position is an important task in various applications, such as human-computer interaction, human-robot interaction, and virtual and augmented reality. This paper proposes a method to estimate hand keypoints from single color images using the pre-trained deep convolutional neural networks VGG-16 and VGG-19. The method is evaluated on the FreiHAND dataset, and the performance of the two networks is compared. The best results were achieved with VGG-19, with average estimation errors of 7.40 pixels and 11.36 millimeters in the best cases of two-dimensional and three-dimensional hand keypoint estimation, respectively.
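The record does not include the authors' implementation; the sketch below is only an illustration of the general idea described in the abstract, namely adapting a pre-trained VGG-19 backbone to regress the 21 hand keypoints annotated in FreiHAND from a single RGB image. All layer sizes, the class name VGG19Keypoints, and the regression head are assumptions, not the paper's architecture.

```python
# Hypothetical sketch (not the authors' code): VGG-19 backbone with a
# regression head that outputs 2D pixel coordinates for 21 hand keypoints.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

NUM_KEYPOINTS = 21  # FreiHAND annotates 21 keypoints per hand


class VGG19Keypoints(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg19(weights=VGG19_Weights.IMAGENET1K_V1)
        self.features = backbone.features  # pre-trained convolutional layers
        self.avgpool = backbone.avgpool
        # Replace the ImageNet classifier with a head that regresses
        # (u, v) pixel coordinates for each keypoint.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, NUM_KEYPOINTS * 2),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        return self.head(x).view(-1, NUM_KEYPOINTS, 2)


if __name__ == "__main__":
    model = VGG19Keypoints()
    dummy = torch.randn(1, 3, 224, 224)  # one RGB image at ImageNet input size
    print(model(dummy).shape)            # torch.Size([1, 21, 2])
```

A VGG-16 variant would differ only in the backbone call, and a 3D version would regress three coordinates per keypoint instead of two; such a model is typically trained with an L2 or smooth-L1 loss against the dataset's annotated keypoints.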