Image-to-Friction Generation

An electrovibration tactile display can render the tactile feeling of different textured surfaces by modulating the frictional force through applied voltage. When a user slides a finger across the display surface, they feel the frictional texture. However, preparing and fine-tuning appropriate frictional signals for haptic design and texture simulation is not trivial. In this paper, we present a deep-learning-based framework that generates frictional signals from texture images of fabric materials. The generated signals can then be used for tactile rendering on an electrovibration tactile display. Leveraging Generative Adversarial Networks (GANs), our system generates displacement-based friction-coefficient data for the tactile display to simulate the tactile feedback of different fabric materials. Our experimental results show that the proposed generative model produces friction-coefficient signals that are visually and statistically close to the ground-truth signals. Subsequent user studies on fabric-texture simulation show that users could not discriminate between the generated and ground-truth frictional signals rendered on the electrovibration tactile display, suggesting the effectiveness of our deep friction-signal-generation model.
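As a rough illustration of the image-to-signal idea, here is a minimal PyTorch sketch of a generator that maps a fabric-texture image to a 1-D friction-coefficient sequence. This is not the paper's actual architecture: the 64x64 input resolution, the layer sizes, the 256-sample signal length, and the `FrictionGenerator` name are all illustrative assumptions; see the linked repository for the authors' implementation.

```python
# Minimal sketch of an image-to-friction generator in PyTorch.
# NOT the authors' FrictGAN architecture: input resolution, layer
# sizes, and the 256-sample signal length are assumptions made for
# this example only.
import torch
import torch.nn as nn

SIGNAL_LEN = 256  # assumed length of the friction-coefficient sequence


class FrictionGenerator(nn.Module):
    """Encode a fabric-texture image, decode a 1-D friction signal."""

    def __init__(self, signal_len: int = SIGNAL_LEN):
        super().__init__()
        # Convolutional encoder: 3x64x64 image -> 256x4x4 feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 8 -> 4
            nn.LeakyReLU(0.2),
        )
        # Fully connected decoder: flattened features -> friction sequence.
        self.decoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 1024),
            nn.ReLU(),
            nn.Linear(1024, signal_len),
            nn.Sigmoid(),  # friction coefficients scaled to [0, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))


if __name__ == "__main__":
    gen = FrictionGenerator()
    textures = torch.randn(8, 3, 64, 64)  # a batch of texture images
    signals = gen(textures)
    print(signals.shape)  # torch.Size([8, 256])
```

In the full adversarial setup, a discriminator would additionally score generated versus measured friction signals (optionally conditioned on the texture image), and the two networks would be trained against each other in the usual GAN fashion.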

Invited Journal Paper submission

Authors: Shaoyu Cai, Lu Zhao, Yuki Ban, Takuji Narumi, Yue Liu and Kening Zhu

Source code and dataset: https://github.com/shaoyuca/FrictGAN-Image-to-Friction-Generation