Abstract
Existing devices for measuring material appearance in spatially-varying samples are limited to a single scale, either microscopic or mesoscopic. This is a practical limitation when the material has a complex multi-scale structure. In this paper, we present a system and methods to digitize materials at two scales, designed to include high-resolution data in spatially-varying representations at larger scales. We design and build a hemispherical light dome able to digitize flat material samples up to 11x11 cm. We estimate geometric properties, anisotropic reflectance, and transmittance at the microscopic level using polarized directional lighting with a single orthogonal camera. Then, we propagate this structured information to the mesoscale using image-to-image translation methods and a neural network trained on data acquired by the device. To maximize the compatibility of our digitization, we leverage standard BSDF models commonly adopted in the industry. Through extensive experiments, we demonstrate the precision of our device and the quality of our digitization process on a set of challenging real-world material samples and validation scenes. Further, we demonstrate the optical resolution and potential of our device for acquiring more complex material representations by capturing microscopic attributes that affect the global appearance: we characterize properties of textile materials such as the yarn twist or the shape of individual fly-out fibers. We also release the SEDDIDOME dataset of materials, including the raw data captured by the machine and the optimized parameters.
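As a rough illustration of the mesoscale propagation step mentioned above, the following minimal PyTorch sketch shows an image-to-image network that maps a mesoscale photograph of the sample to per-pixel material parameter maps. This is an assumption-laden toy example, not the authors' released code: the network name, the choice of a plain encoder-decoder, and the seven output channels (albedo, normal, roughness) are all illustrative stand-ins.

import torch
import torch.nn as nn

class MicroToMesoNet(nn.Module):
    # Hypothetical encoder-decoder translating RGB photos into SVBRDF-style
    # parameter maps; the real system's architecture is not specified here.
    def __init__(self, in_ch=3, out_ch=7):  # 7 = 3 albedo + 3 normal + 1 roughness
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step on a (photo, parameter-map) pair; in the paper such pairs
# come from the dome's micro-scale measurements, here they are random tensors.
net = MicroToMesoNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
photo = torch.rand(1, 3, 256, 256)        # stand-in for a captured photo
target_maps = torch.rand(1, 7, 256, 256)  # stand-in for measured maps
opt.zero_grad()
loss = nn.functional.l1_loss(net(photo), target_maps)
loss.backward()
opt.step()

In spirit, training on pairs of device-captured micro-scale parameters and larger-scale photographs lets the network extend the structured microscopic measurements across the whole mesoscale sample.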
Resources
Bibtex
@article{garces2023seddidome,
  author  = {Garces, Elena and Arellano, Victor and Rodriguez-Pardo, Carlos and Pascual, David and Suja, Sergio and Lopez-Moreno, Jorge},
  title   = {{Towards Material Digitization with a Dual-scale Optical System}},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume  = {42},
  number  = {4},
  year    = {2023}
}
Acknowledgements
We wish to thank the reviewers for their helpful comments. We also thank Javier Fabre for his help with the renders, Luis Romero for overall support with the 3D scenes and the validation setups, and Sofia Dominguez for her help capturing data. We would like to thank Carlos Heras, Iñigo Salinas, Raul Alcain, Carlos Aliaga, and Enrique Pellejer for their help with hardware prototyping. Elena Garces was partially supported by a Juan de la Cierva - Incorporacion Fellowship (IJC2020-044192-I). This publication is part of the project TaiLOR, CPP2021-008842, funded by MCIN/AEI/10.13039/501100011033 and the NextGenerationEU/PRTR programs.