We may now immerse ourselves in a world of sights and sounds from the comfort of our own homes, yet something is missing: touch.
Tactile sense is a crucial component of how individuals perceive their surroundings. Haptics, or devices that produce incredibly exact vibrations that replicate the sensation of touch, are one method of bringing the third sense to life. However, for all the progress haptics has made, humans are quite picky about whether or not something feels “correct,” and virtual textures don’t always deliver.
Now, researchers at the USC Viterbi School of Engineering have developed a new method for computers to achieve that true texture — with the help of human beings. Called a preference-driven model, the framework uses our ability to distinguish between the details of certain textures as a tool in order to give these virtual counterparts a tune-up.
The research was published in IEEE Transactions on Haptics by three USC Viterbi Ph.D. students in computer science, Shihan Lu, Mianlun Zheng and Matthew Fontaine, along with Stefanos Nikolaidis, USC Viterbi assistant professor in computer science, and Heather Culbertson, USC Viterbi WiSE Gabilan Assistant Professor in Computer Science.
“We want users to compare their feelings about the real texture and the virtual texture,” said Lu, the paper’s first author. “The model then iteratively updates a virtual texture so that it eventually matches the real one.”
According to Fontaine, the idea came up during a Haptic Interfaces and Virtual Environments seminar taught by Culbertson in the fall of 2019. They were inspired by the art application Picbreeder, which iteratively generates images based on a user’s preferences until it achieves the desired result.
“We thought, what if we could do that for textures?” Fontaine recalled.
The user is first given a real texture, and the model randomly generates three virtual textures, each defined by dozens of variables, from which the user picks the one that feels closest to the real thing. Round by round, the search adjusts the distribution of those variables, homing in on what the user wants. According to Fontaine, this method offers an advantage over directly capturing and “playing back” textures, because there is always a gap between what the computer records and what we feel.
“Rather than just imitating what we can record, you’re measuring parameters of exactly how they perceive it,” Fontaine explained. “There will be some inaccuracy in how you captured that material, as well as how you play it back.”

All the user has to do is select the best texture and adjust the level of friction with a simple slider. Friction is critical to how we perceive textures, and it can differ from person to person. “It’s extremely simple,” Lu added.
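The paper’s exact search procedure isn’t reproduced here, but the interaction described above, three sampled candidates, a user choice, and a distribution that tightens over rounds, maps onto a simple preference-driven search loop. The sketch below is a hypothetical Python illustration only: the parameter count, the update constants, and the simulated user (which stands in for a person comparing feels on a haptic device) are all assumptions for demonstration, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 12   # stand-in for the "dozens of variables"; the real count is an assumption
N_ROUNDS = 20
TARGET = rng.uniform(-1, 1, N_PARAMS)  # hidden "real texture" used to simulate the user
friction = 0.5  # in the real interface, the user also tunes friction with a simple slider

def simulated_user_choice(candidates):
    """Stand-in for the human judge: picks the candidate closest to the real texture.
    In the actual system, a person makes this comparison by touch."""
    dists = np.linalg.norm(candidates - TARGET, axis=1)
    return int(np.argmin(dists))

# Search distribution over texture parameters: a mean vector plus a per-parameter spread.
mean = np.zeros(N_PARAMS)
std = np.ones(N_PARAMS)

for _ in range(N_ROUNDS):
    # Sample three candidate virtual textures around the current estimate.
    candidates = mean + std * rng.standard_normal((3, N_PARAMS))
    best = candidates[simulated_user_choice(candidates)]

    # Pull the distribution toward the preferred candidate and narrow the search,
    # so later rounds concentrate on textures the user judged closer to the real one.
    mean = 0.7 * mean + 0.3 * best
    std *= 0.9

print("distance from real texture after search:", np.linalg.norm(mean - TARGET))
```

Running this, the estimate drifts toward the hidden target over the twenty rounds, mirroring how the virtual texture in the study converges on the real one as the user keeps choosing.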
Their work arrives amid growing demand for precise, application-specific virtual textures. Haptic technology is finding its way into everything from video games to fashion design, and existing databases of virtual textures can be refined using this preference-driven method.
“The haptic gadget is becoming increasingly popular in video games, fashion design, and surgery simulation,” Lu said. “Even at home, we’ve begun to see customers with such (haptic) devices that are becoming as popular as laptops. For example, first-person video games will give them the impression that they are truly participating with their surroundings.”
Lu has also worked on immersive technology with sound: making virtual textures even more immersive by adding matching sounds when a tool interacts with them.
“When we interact with the environment through a tool, tactile feedback is merely one modality, one type of sensory feedback,” Lu explained. “Audio is another type of sensory feedback, and both are highly significant.”
The texture-search method also lets someone take a virtual texture from a database, such as the University of Pennsylvania’s Haptic Texture Toolkit, and refine it until they achieve the desired result. “You can take the prior virtual textures searched by others, and then continue adjusting it based on those,” Lu explained. “You don’t have to start from scratch every time.”
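In terms of the sketch above, warm-starting from a database texture would just mean initializing the search distribution at that texture’s parameters rather than at zero. The loader function and texture ID below are hypothetical, introduced only to illustrate the idea.

```python
import numpy as np

N_PARAMS = 12  # must match the search sketch above

def load_texture_params(texture_id):
    """Hypothetical loader for a stored virtual texture's parameter vector,
    e.g., one modeled on an entry in the Haptic Texture Toolkit."""
    return np.zeros(N_PARAMS)  # stub; a real loader would read from the database

# Warm start: begin the search at an existing texture instead of from scratch.
mean = load_texture_params("cork")  # "cork" is an illustrative ID, not a real entry name
std = 0.3 * np.ones(N_PARAMS)       # smaller initial spread: the starting point is trusted
```

Starting near a curated texture should require fewer comparison rounds than starting from a random guess, which is the point of Lu’s remark about not starting from scratch.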
According to Lu, this is especially useful for virtual textures used in training for dentistry or surgery, which require extreme accuracy. “Surgical training is definitely a huge area that requires very realistic textures and tactile feedback,” Lu said. “Fashion design also necessitates a great deal of precision in texture development before they go and construct it.”
In the future, the model may not even require a real reference texture, according to Lu. The feel of familiar objects is so intuitive that we could fine-tune a virtual texture from memory, using only a photo, without the real surface in front of us.
“When we see a table, we can imagine how it will feel when we touch it,” Lu explained. “Using our prior knowledge of the surface, you can simply provide visual feedback to users, allowing them to choose what matches.”