Unsupervised Learning of Disentangled and Interpretable Representations of Material Appearance

Authors

Jiménez, S., Guerrero Viu, J., & Masiá Corcoy, B.
DOI:

https://doi.org/10.26754/jjii3a.202410583

Abstract

As humans, we have learned through experience how to interpret the visual appearance of the materials in our environment, enabling us to predict the properties of an object just by looking at it for a few seconds. Although this seems like a straightforward task, the final appearance exhibited by a material is the result of a non-trivial interaction between confounding factors such as geometry, illumination, or viewing angle, which we do not completely understand. An algorithm able to disentangle the perceptual factors of material appearance present in an image, just as humans do, would be a breakthrough in several fields. In architecture, for example, it could help create prototypes that accurately reproduce how a design will be perceived in real life; following the inverse path, from perceptual factors to images, it could be used in product design to define the perceptual features of a desired material intuitively, instead of delving into its mathematical definition. Here, we propose a learning-based algorithm capable of effectively disentangling certain perceptual features of images in an unsupervised way.
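The abstract does not specify the model used, so the following is purely an illustrative sketch of a common family of approaches to unsupervised disentanglement: a β-VAE-style objective, where a β-weighted KL term pressures each latent dimension toward an independent Gaussian prior, encouraging factorized (and hence more interpretable) latent codes. All names, shapes, and the value of β here are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative beta-VAE objective (NOT the paper's actual model):
    mean reconstruction error plus a beta-weighted KL divergence that
    pushes the approximate posterior toward an isotropic Gaussian,
    encouraging statistically independent latent factors."""
    # Squared reconstruction error, summed over pixels, averaged over the batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
    return recon + beta * kl

# Toy batch: 8 "images" flattened to 16 dimensions, a 4-dimensional latent code.
x = rng.normal(size=(8, 16))
x_recon = x + 0.1 * rng.normal(size=(8, 16))   # hypothetical decoder output
mu = rng.normal(scale=0.1, size=(8, 4))        # hypothetical encoder means
log_var = np.full((8, 4), -1.0)                # hypothetical encoder log-variances

loss = beta_vae_loss(x, x_recon, mu, log_var)
```

With β > 1 the KL penalty dominates, trading some reconstruction quality for better-separated latent factors; β = 1 recovers a standard VAE objective.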

Published

2024-07-17

How to Cite

Jiménez, S., Guerrero Viu, J., & Masiá Corcoy, B. (2024). Unsupervised Learning of Disentangled and Interpretable Representations of Material Appearance. Jornada De Jóvenes Investigadores Del I3A, 12. https://doi.org/10.26754/jjii3a.202410583

Section

Articles (Information and Communication Technologies)