Unsupervised Learning of Disentangled and Interpretable Representations of Material Appearance
DOI: https://doi.org/10.26754/jjii3a.202410583

Abstract
As humans, we have learned through experience how to interpret the visual appearance of the materials in our environment, which allows us to predict the properties of an object just by looking at it for a few seconds. Although this seems like a straightforward task, the final appearance exhibited by a material is the result of a non-trivial interaction between confounding factors such as geometry, illumination, and viewing angle, an interaction we do not yet fully understand. An algorithm able to disentangle the perceptual factors of material appearance present in an image, just as humans do, would be a breakthrough in several fields. In architecture, for example, it could help create prototypes that accurately reproduce how a design will be perceived in real life; following the inverse path, from perceptual factors to images, it could be used in product design to intuitively specify the perceptual features of a desired material instead of delving into its mathematical definition. Here, we propose a learning-based algorithm capable of effectively disentangling certain perceptual features of images in an unsupervised way.
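The abstract does not detail the model, but unsupervised disentanglement of image factors is commonly approached with variational-autoencoder objectives. Below is a minimal sketch of one such approach, a beta-VAE; it is only an illustration of this class of technique, not the authors' method, and the image size, latent dimensionality, and beta value are illustrative assumptions.

# Minimal beta-VAE sketch (Higgins et al., 2017) for unsupervised
# disentanglement. NOT the authors' method; architecture and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, latent_dim: int = 10):
        super().__init__()
        # Encoder: 64x64 RGB image -> mean and log-variance of the latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent code -> reconstructed image
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.decoder(self.fc_dec(z).view(-1, 128, 8, 8))
        return x_hat, mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta: float = 4.0):
    # Reconstruction term plus beta-weighted KL divergence to a unit Gaussian.
    # beta > 1 pressures the latent dimensions toward independence, which
    # encourages (but does not guarantee) a disentangled representation.
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl

In such a model, each latent dimension would ideally capture one perceptual factor (e.g., glossiness or lightness), so that traversing a single dimension while decoding edits that factor in isolation; the inverse path from perceptual factors to images mentioned in the abstract corresponds to decoding a hand-specified latent code.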
License
Copyright (c) 2024 Santiago Jiménez, Julia Guerrero Viu, Belén Masiá Corcoy
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.