Document Details

IViHumans Platform - The Graphical Processing Layer

Author(s): Abreu, Ricardo

Date: 2008

Persistent Identifier: http://hdl.handle.net/10451/14022

Source: Repositório da Universidade de Lisboa

Subject(s): Virtual Humans; Virtual Environments; Emotional Expression; Synthetic Perception; Steering; Locomotion


Description

Virtual environments that are inhabited by agents with a human-like embodiment have many practical applications nowadays, in areas such as entertainment, education, psychotherapy, industrial training, or the reconstruction of historical environments. These are examples of areas that may benefit from a flexible platform that supports the generation and rendering of animated scenes with intelligent virtual humans. The IViHumans platform is currently being built with this perspective in mind. The platform is divided into two layers: one for graphical processing and another for artificial intelligence computation. It was designed to provide a set of features that automatically take care of many issues common to applications integrating virtual humans and virtual environments. This document focuses on the conception and development of the Graphical Processing layer, which constitutes the ground for the Artificial Intelligence layer. The connection between the two layers is also addressed. The layers were designed to run in different processes, communicating by means of a simple, yet effective and extensible client/server protocol that we devised and implemented. The tasks of the graphical processing layer rely, first of all, on graphical representations. To that end, we highlight the techniques used in 3D object modeling. We also focus on our design and implementation and on how we applied the principles of object-oriented design to give the platform flexibility. Reynolds' conception of movement is applied according to our own view, to make virtual humans and other objects steer autonomously in the world, while displaying consistent animations that are automatically chosen according to character-specific rules. We present our solution for facial expressions, which can be mixed to convey complex emotions and are subject to automatic smooth transitions. We show how virtual objects can be characterized with default and custom properties. Finally, we discuss the integration of perception through synthetic vision, including how it is coupled with distinct kinds of automatic memory that recall any attributes of the objects that inhabit the virtual world.
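The abstract states that Reynolds' conception of movement is used to make virtual humans and other objects steer autonomously. Purely as an illustration of that general idea, and not of the IViHumans implementation itself, the following C++ sketch shows the classic Reynolds "seek" behaviour; the Vec3 and Character types and all numeric parameters are hypothetical.

#include <cmath>
#include <iostream>

// Minimal 3D vector with just the operations needed for steering.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 truncated(float max) const {            // clamp magnitude to max
        float len = length();
        return (len > max && len > 0.0f) ? (*this) * (max / len) : *this;
    }
};

// A steerable entity with a capped speed and steering force.
struct Character {
    Vec3 position;
    Vec3 velocity;
    float maxSpeed;   // maximum speed (illustrative value)
    float maxForce;   // cap on the steering force (illustrative value)

    // Reynolds "seek": steering = desired velocity minus current velocity.
    Vec3 seek(const Vec3& target) const {
        Vec3 desired = (target - position).truncated(maxSpeed);
        return (desired - velocity).truncated(maxForce);
    }

    // One simulation step: apply the steering force, then move.
    void update(const Vec3& target, float dt) {
        velocity = (velocity + seek(target) * dt).truncated(maxSpeed);
        position = position + velocity * dt;
    }
};

int main() {
    // Arbitrary example parameters: start at the origin, head for the target.
    Character walker{{0, 0, 0}, {0, 0, 0}, 1.5f, 0.5f};
    Vec3 target{10, 0, 5};

    for (int i = 0; i < 200; ++i) walker.update(target, 0.05f);  // 10 s at 20 Hz

    std::cout << "final position: " << walker.position.x << ", "
              << walker.position.y << ", " << walker.position.z << "\n";
    return 0;
}

The key point of the technique is that the steering force is the difference between a desired velocity (pointing at the target at maximum speed) and the current velocity, capped by a maximum force; integrating it each frame turns the character smoothly toward its goal.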

Document Type: Master's dissertation
Language: Portuguese
Advisor(s): Cláudio, Ana Paula
Contributor(s): Repositório da Universidade de Lisboa
