View-invariant action recognition based on Artificial Neural Networks.
In this paper, a novel view-invariant action recognition method based on neural network representation and recognition is proposed. The novel representation of action videos is based on learning spatially related posture prototypes using Self-Organizing Maps (SOMs). Fuzzy distances from these prototypes are used to produce a time-invariant action representation. Multilayer perceptrons are used for action classification. The algorithm is trained using data from a multi-view camera setup, and an arbitrary number of cameras can be used to recognize actions within a Bayesian framework. The proposed method can also be applied, without any modification, to videos depicting interactions between persons. The use of information captured from different viewing angles leads to high classification performance. The proposed method is the first to be tested in such challenging experimental setups, which demonstrates its effectiveness in dealing with most of the open issues in action recognition.
|IEEE paper year||2012|
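The pipeline the abstract describes (SOM-learned posture prototypes, fuzzy memberships averaged over time into a duration-invariant vector, classification, and Bayesian multi-camera fusion) can be sketched on synthetic data. Everything below is an illustrative assumption rather than the authors' implementation: the toy SOM trainer, the 4-dimensional "frame" vectors, and a nearest-centroid classifier standing in for the paper's multilayer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(frames, grid=(3, 3), iters=400, lr0=0.5, sigma0=1.0):
    # Learn a small grid of posture prototypes (toy stand-in for the
    # paper's Self-Organizing Map training).
    n_units = grid[0] * grid[1]
    w = frames[rng.integers(len(frames), size=n_units)].copy()
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for t in range(iters):
        x = frames[rng.integers(len(frames))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        decay = np.exp(-t / iters)                    # shrink lr and radius
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                   / (2 * (sigma0 * decay) ** 2))
        w += lr0 * decay * h[:, None] * (x - w)       # neighborhood update
    return w

def fuzzy_action_vector(frames, prototypes, m=2.0):
    # Fuzzy memberships of each frame to each prototype, averaged over time:
    # a fixed-length representation independent of the sequence's duration.
    d = np.linalg.norm(frames[:, None, :] - prototypes[None, :, :], axis=2) + 1e-9
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1
    return u.mean(axis=0)

def bayes_fuse(posteriors):
    # Naive product-rule fusion of per-camera class posteriors (an assumed
    # simplification of the paper's Bayesian multi-view framework).
    p = np.prod(np.asarray(posteriors, dtype=float), axis=0)
    return p / p.sum()

# Two synthetic "actions": frame sequences jittering around distinct postures.
def make_seq(center, n_frames=20):
    return center + 0.1 * rng.standard_normal((n_frames, 4))

a_center, b_center = np.full(4, 2.0), np.full(4, -2.0)
train = [make_seq(a_center) for _ in range(5)] + [make_seq(b_center) for _ in range(5)]

prototypes = train_som(np.vstack(train))
reps = np.array([fuzzy_action_vector(s, prototypes) for s in train])
centroids = np.array([reps[:5].mean(axis=0), reps[5:].mean(axis=0)])

# Classify a new sequence (nearest centroid stands in for the paper's MLP).
test_rep = fuzzy_action_vector(make_seq(a_center), prototypes)
pred = int(np.argmin(np.linalg.norm(centroids - test_rep, axis=1)))
```

Because each frame's memberships sum to one, the averaged vector does too, so sequences of any length map to the same 9-dimensional space; `bayes_fuse` shows how per-camera class posteriors could be combined before the final decision.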