Thursday, 16 April 2009


By Adam Csapo

In this post, we introduce one of the research topics in our lab.

When a remotely teleoperated robot touches a surface, the generally widespread solution today is to use haptic feedback devices to convey the haptic sensations to the user. However, even state-of-the-art haptic feedback devices have limitations in terms of portability and in terms of levels of operation. This is in contrast with the human auditory system, which is highly sensitive to even minute variations in sound, and which can be put to use through simple, commercially available headsets. The question arises: to what extent can we substitute haptic sensations with audio? If such substitution were possible, it would be useful not only because tactile experience is important in our daily lives when we manipulate objects, but also because the ideas can hopefully be generalized and extended to other types of feedback parameters, not just those derived from surface textures.

The long-term goal of this research is to find out more about the limits of conveying tactile feedback information using sound. To this end, we intend to create a hardware-software framework, referred to as HaptaSone, which will allow users to experiment with different kinds of sounds that are meaningful to them, and to test whether or not these sounds can provide percepts comparable to those perceived when touching surfaces. In a way similar to Bach-y-Rita's findings in the haptic feedback of visual information [1], such percepts can hopefully be achieved thanks to the extremely high plasticity of the brain, once users perceive the audio output of HaptaSone as a direct consequence of their own actions (i.e., touching different surfaces).


HaptaSone is intended to be an interactive framework which will allow users to attach measuring devices such as laser profilometers and infrared temperature sensors to their hands and listen to audio feedback generated based on the texture and temperature properties of the surface they are touching. The block diagram of HaptaSone can be seen in Figure 1. As the diagram shows, after the acquisition of the sensory data, soft computing methods such as artificial neural networks will be used to infer the perception that the user would get when directly touching the given surface. Ideally, this inference will be based on training data acquired from actual psychophysical experiments in which users will be asked to compare different surfaces in terms of hardness, roughness, sharpness, and temperature.
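To make the inference step concrete, here is a minimal sketch of the kind of mapping a trained neural network could perform: raw sensor readings in, perceptual descriptors out. The weights, the two-input/two-output layout, and the variable names are all hypothetical placeholders for illustration; in the actual framework they would be learned from the psychophysical comparison data described above.

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer network.
    Sigmoid outputs keep each perceptual descriptor in [0, 1]."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [1.0 / (1.0 + math.exp(-(sum(w * hi for w, hi in zip(row, h)) + b)))
            for row, b in zip(w2, b2)]

# Hypothetical weights: 2 inputs (surface-profile variance from the laser
# profilometer, infrared temperature reading, both pre-normalized to [0, 1])
# -> 3 hidden units -> 2 descriptors (roughness, warmth).
W1 = [[1.5, -0.2], [-0.7, 0.9], [0.3, 0.3]]
B1 = [0.1, 0.0, -0.1]
W2 = [[1.2, -0.4, 0.6], [0.2, 1.1, -0.3]]
B2 = [0.0, 0.1]

descriptors = mlp_forward([0.8, 0.3], W1, B1, W2, B2)
```

In practice the network would have more inputs (one per sensor channel) and one output per perceptual dimension (hardness, roughness, sharpness, temperature), but the structure of the computation is the same.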

Based on the acquired surface descriptors, the Matching and Pairing module will select a set of sounds from the sound database (which contains predefined sounds as well as labeled sounds that can be supplied by the user) and pair each surface parameter with a certain auditory cognitive communication channel. This pairing is not arbitrary and should be supported by psychophysical models of human auditory perception.
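The pairing idea can be sketched in a few lines of code. The particular channel assignments below (roughness to modulation depth, temperature to pitch, and so on), the parameter ranges, and the function names are illustrative assumptions, not the framework's actual design; the real assignments would follow the psychophysical models mentioned above.

```python
# Hypothetical pairing of surface descriptors (normalized to [0, 1])
# with auditory communication channels and their parameter ranges.
CHANNEL_RANGES = {
    "roughness":   ("am_depth",   0.0,   1.0),    # amplitude-modulation depth
    "hardness":    ("attack_ms",  200.0, 5.0),    # harder -> sharper attack
    "sharpness":   ("brightness", 0.1,   1.0),    # spectral brightness weight
    "temperature": ("pitch_hz",   220.0, 880.0),  # warmer -> higher pitch
}

def pair_descriptors(descriptors):
    """Map each surface descriptor onto its auditory channel by
    linear interpolation within that channel's parameter range."""
    params = {}
    for name, value in descriptors.items():
        channel, lo, hi = CHANNEL_RANGES[name]
        v = min(max(value, 0.0), 1.0)  # clamp to the valid range
        params[channel] = lo + v * (hi - lo)
    return params

params = pair_descriptors({"roughness": 0.5, "temperature": 1.0})
```

A fixed lookup table like this is only the simplest case; since the sound database also holds user-supplied labeled sounds, the module would in general choose among candidate sounds as well as scale their parameters.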


