Introduction: appearance
Appearance is an optical phenomenon seen by a user or captured by a sensor after the interaction of light with an object (fig. 1). The result depends on the incoming light (e.g. spectrum, direction, energy), the physicochemical composition of the object, which can modify the properties of the incoming light (e.g. absorption, scattering), and naturally on the surface quality (e.g. roughness). Modelling and acquiring appearance is thus an intrinsically multidimensional problem: state-of-the-art formulations count from 12 (fig. 2 and Dong 20131) to 14 dimensions in non-free space2.
In the heritage field, when it comes to acquiring visual appearance beyond color, the benchmark remains RTI (Reflectance Transformation Imaging3). It is limited compared to a full BRDF (Bidirectional Reflectance Distribution Function4) acquisition, since it captures only a single viewpoint. Its PTM (Polynomial Texture Maps)5 approximation of reflectance by a low-degree polynomial leads to an easy but smoothed estimate of the normal field.
In addition, illumination is approximated by a directional source6, further reducing the quality of the approximation.
Finally, only one point of view is acquired, which is not suitable for estimating spatial variations in reflectance (SVBRDF–Spatially Varying BRDF).
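To make the PTM limitation concrete, here is a minimal sketch of the classic six-coefficient biquadratic PTM fit and the normal estimate it yields (an illustration of the general technique, not the pipeline used in this work; the sampling and coefficients are assumed):

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Least-squares fit of the 6-term PTM biquadratic for one pixel.
    light_dirs: (N, 2) array of projected light directions (lu, lv);
    intensities: (N,) observed pixel values."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def ptm_normal(c):
    """Light direction maximizing the polynomial, used as a (smoothed)
    surface-normal estimate: solve grad = 0 for (lu, lv)."""
    a0, a1, a2, a3, a4, a5 = c
    M = np.array([[2 * a0, a2], [a2, 2 * a1]])
    lu0, lv0 = np.linalg.solve(M, [-a3, -a4])
    lw = np.sqrt(max(0.0, 1.0 - lu0**2 - lv0**2))  # lift back to the hemisphere
    return np.array([lu0, lv0, lw])
```

The quadratic maximum is what smooths the normal field: any sharper angular variation of the true reflectance is averaged out by the low-degree fit.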
Context
The context of this presentation is the restitution of the Gaston de Saint-Maurice’s residence in Cairo. This building is an excellent example of the reuse of ornaments during the construction of modern Cairo7. Built between 1875 and 1879, using decor from Cairo’s monuments, it was demolished in 1937. Some of the decor was re-used for the construction of the new French Embassy in Cairo, which still exists today. For a restitution8, we need to acquire the appearance of its decorations. Reproducing the complex appearance allows for greater fidelity of the 3D restitution compared to observations in real conditions.

As the conventional use of a lighting dome (as in RTI) is not possible on site for large, immobile objects, the standard approach relies on the use of black reflective spheres to estimate the light direction9. Finally, learning methods (e.g. Deschaintre et al. 201910) currently offer no metrological guarantees.
For greater flexibility, we adopted an approach based on a simple camera and a light source (a photographic spotlight). This also makes it possible to adapt to on-site acquisition constraints: the equipment is portable, and the use of standard photographic gear poses fewer administrative problems when deployed on site.
Methodology
In this paper, we present an outline of the approach, which is detailed in Corentin Cou’s thesis manuscript11. The setup is detailed in fig. 3.
The light source tracking setup
To find the position of the spot and its direction (fig. 3-5), we place a checkerboard (fig. 3-3) close to the object to be measured, as well as two mirror spheres above it (fig. 3-4). A camera, noted CL (fig. 3-2), images the mirror spheres and remains fixed throughout the acquisition procedure.
Finally, a 3D mesh of the scene, containing the object under study and the checkerboard, is acquired using photogrammetry software (Metashape) with another camera, so that CL does not have to be moved.
This gives us not only the 3D model we will be working on, but also the relative positions of the mirror spheres, which help us find the positions of the spotlight. Each time the spotlight is moved, we take a photo of the mirror spheres with CL. Thanks to the highlights on the two reflective spheres and an approach12 inspired by Corsini et al.13, the position of the spotlight can be estimated, enabling free positioning of the light.
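The geometric idea behind this estimation can be sketched as follows (an illustrative reconstruction under simplifying assumptions, not the implementation of the thesis or of Corsini et al.: each viewing ray toward a highlight is mirrored at the sphere surface, and the light position is taken as the near-intersection of the two reflected rays):

```python
import numpy as np

def reflected_ray(cam_pos, sphere_center, radius, highlight_dir):
    """Ray leaving a mirror-sphere highlight toward the light.
    highlight_dir: unit direction from the camera to the observed highlight.
    Assumes the viewing ray actually hits the sphere."""
    oc = cam_pos - sphere_center
    b = np.dot(highlight_dir, oc)
    disc = b * b - (np.dot(oc, oc) - radius**2)
    t = -b - np.sqrt(disc)                      # nearest sphere intersection
    p = cam_pos + t * highlight_dir             # highlight point on the sphere
    n = (p - sphere_center) / radius            # outward surface normal
    d = highlight_dir - 2 * np.dot(highlight_dir, n) * n  # mirror reflection
    return p, d / np.linalg.norm(d)

def closest_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two (noisy) reflected rays."""
    A = np.array([[d1 @ d1, -(d1 @ d2)], [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With noisy highlight detections, the two reflected rays are skew; the midpoint of their common perpendicular is the usual least-squares light-position estimate.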
Acquisition
We use a second camera (CO–fig. 3-1) to image the object and acquire its reflective properties. We acquire as many light positions and camera viewpoints as possible14.
Numerical optimization to recover appearance parameters
It would now be possible to approximate these measurements with a BRDF model. Unfortunately, the density of observation directions and spot positions is not sufficient for every point on the mesh of the object under study. To densify this information, we group together all pixels corresponding to the same material15.
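The grouping step can be sketched as follows (a hypothetical illustration, not the thesis implementation: here pixels are clustered on a per-pixel feature such as mean color, and the light/view observations of a cluster are pooled into one shared sample set):

```python
import numpy as np

def kmeans_materials(features, k, iters=20):
    """Tiny k-means grouping pixels into k candidate materials.
    features: (N, F) per-pixel descriptors (e.g. mean RGB)."""
    # Deterministic init on evenly spaced samples (k-means++ would be more robust).
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        dist = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def pool_observations(labels, samples):
    """Pool per-pixel (light, view, value) tuples so that all pixels of one
    material share a single, much denser set of BRDF observations."""
    pooled = {}
    for lab, obs in zip(labels, samples):
        pooled.setdefault(int(lab), []).extend(obs)
    return pooled
```

Each cluster then has far more angular samples than any single surface point, which is what makes the subsequent BRDF fit well-posed.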
Once densified, it is possible to retrieve all the BRDF parameters that make up our appearance. We use the GGX BRDF model16 with a normal map for each viewpoint. All these parameters are retrieved through an iterative optimization process17.
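For reference, the specular lobe of the GGX model cited above can be evaluated as follows (a minimal single-sample sketch of the standard microfacet formulation of Walter et al. 2007 with Schlick Fresnel and separable Smith masking; the parameter names and the f0 default are assumptions, not values from this work):

```python
import numpy as np

def ggx_specular(n, v, l, alpha, f0=0.04):
    """GGX (Trowbridge-Reitz) microfacet specular BRDF for unit vectors
    n (normal), v (view), l (light); alpha is the roughness parameter."""
    nv, nl = n @ v, n @ l
    if nv <= 0 or nl <= 0:
        return 0.0  # light or view below the surface: no reflection
    h = v + l
    h = h / np.linalg.norm(h)                  # half-vector
    nh, vh = n @ h, v @ h
    # GGX normal distribution function
    d = alpha**2 / (np.pi * (nh**2 * (alpha**2 - 1) + 1)**2)
    # Separable Smith masking-shadowing
    g1 = lambda x: 2 * x / (x + np.sqrt(alpha**2 + (1 - alpha**2) * x**2))
    # Schlick Fresnel approximation
    f = f0 + (1 - f0) * (1 - vh)**5
    return d * g1(nv) * g1(nl) * f / (4 * nv * nl)
```

In the actual pipeline, parameters like alpha, the diffuse albedo and the per-viewpoint normal maps are the unknowns of the iterative optimization, with this evaluation inside the residual.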
Results
The measurement campaign was carried out in November 2019 at the French Embassy in Cairo (Egypt). Thanks to the use of standard equipment, we only carried the checkerboard and mirror spheres, as well as a camera, a Canon EOS 5D Mark II (definition of 5616×3744 pixels, focal length–50 mm, aperture–F/6.7, sensitivity–ISO 1600, exposure time–1/6 s). The rest of the equipment belonged to the photographer Matjaž Kačičnik, who was in Cairo during our assignment.

The ceiling shown in figure 4 was acquired during the afternoon, including appearance and photogrammetry.
We used a Nikon D850 (definition of 8256×5504 pixels, focal length–60 mm, aperture–F/7.1, sensitivity–ISO 400, sequence of exposure times–1/3 s, 0.62 s, 1.3 s, 2.5 s, 5 s, 10 s).
The data processing algorithm was implemented in 2023. In order to best calibrate the light spot used during the campaign, the same spotlight model (fig. 3-5–Profoto B10) was acquired, notably to validate its approximation by a Lambertian disk. This validation was based on in-laboratory radiometric measurements18.
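The rationale behind the Lambertian-disk approximation can be illustrated with the closed-form on-axis irradiance of such a source (a textbook radiometry sketch with assumed symbols, not the laboratory validation itself):

```python
import numpy as np

def disk_irradiance(L, R, d):
    """On-axis irradiance from a uniform Lambertian disk of radiance L
    and radius R, at distance d along its axis."""
    return np.pi * L * R**2 / (R**2 + d**2)

def point_irradiance(I, d):
    """Inverse-square law for the equivalent point source of intensity
    I = pi * L * R**2 (the disk's total on-axis intensity)."""
    return I / d**2
```

For d much larger than R the two expressions converge, which is why a small spotlight head measured at working distance behaves like a point source, while up close the finite disk must be modelled.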
Conclusion
We have demonstrated that it is possible, using a simple setup, to capture the appearance of monumental decor, more precisely an SVBRDF. The setup uses only common, off-the-shelf hardware, thus reducing deployment difficulties.
We have also proposed a new methodology to find the position of a projector-type light source.
This method frees us from the constraints of positioning such a source, and contributes to its ease of deployment.
After numerical optimization, the visual results are consistent with the original images. However, we need to go beyond simple visual validation to offer an approach tending towards metrology. The points for improvement and research are as follows. From a model point of view, we do not obtain a single normal map, but one per viewpoint; we need to improve the optimization process to resolve this limitation. Still on the optimization method, we need to make it more robust where brightness is high. Finally, to aid the study of such decors, it would be useful to relate the retrieved BRDF parameters to actual physical parameters.
Acknowledgements
This work was funded by the SMART 3D project (from MITI CNRS 80 prime call) and supported by the project
µChaOS (Tremplin CNRS grant). It was carried out as part of the thesis of Corentin Cou19, to whom this communication is dedicated.
Bibliography
Baillet, V., Mora, P., Cou, C., Tournon-Valiente, S., Volait, M., Granier, X., Pacanowski, R. and Guennebaud, G. (2021): « 3D for Studying Reuse in 19th Century Cairo: the Case of Saint-Maurice Residence », Eurographics Workshop on Graphics and Cultural Heritage, 117-120. [https://doi.org/10.2312/gch.20211414]
Corsini, M., Callieri, M. and Cignoni, P. (2008): « Stereo Light Probe », Computer Graphics Forum, 27, 291-300. [https://doi.org/10.1111/j.1467-8659.2008.01126.x]
Cou, C. (2023): New approach of 3D for monumental heritage, PhD thesis, University of Bordeaux, Bordeaux. [https://hal.science/tel-04364363]
Deschaintre, V., Aittala, M., Durand, F., Drettakis, G. and Bousseau, A. (2019): « Flexible SVBRDF Capture with a Multi-Image Deep Network », Computer Graphics Forum, 38, 1-13. [https://doi.org/10.1111/cgf.13765]
Dong, Y., Lin, S. and Guo, B. (2013): « Introduction » in Material Appearance Modeling: A Data-Coherent Approach, Heidelberg, 1-17. [https://doi.org/10.1007/978-3-642-35777-0_1]
Duffy, S.M., Kennedy, H., Goskar, T. and Backhouse, P. (2018): Multi-light Imaging Highlight-Reflectance Transformation Imaging (H-RTI) for Cultural Heritage, Barnet. [https://doi.org/10.5284/1110911]
Judd, D.B. (1967): « Terms, Definitions, and Symbols in Reflectometry », Journal of the Optical Society of America, 57, 445-452. [https://doi.org/10.1364/JOSA.57.000445]
MacDonald, L. and Robson, S. (2010): « Polynomial Texture Mapping and 3D Representations », International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII, 422-427, Newcastle upon Tyne. [https://www.isprs.org/proceedings/XXXVIII/part5/papers/152.pdf]
McGuigan, M. and Christmas, J. (2020): « Automating RTI: Automatic light direction detection and correcting non-uniform lighting for more accurate surface normal », Computer Vision and Image Understanding, 192, 102911. [https://doi.org/10.1016/j.cviu.2019.102880]
McGuire, M., Dorsey, J., Haines, E., Hughes, J.F., Marschner, S., Pharr, M. and Shirley, P. (2020): « A Taxonomy of Bidirectional Scattering Distribution Function Lobes for Rendering Engineers », Workshop on Material Appearance Modeling. [https://doi.org/10.2312/mam.20201143]
Mytum, H. and Peterson, J.R. (2018): « The Application of Reflectance Transformation Imaging (RTI) », Historical Archaeology, 52, 489-503. [https://doi.org/10.1007/s41636-018-0107-x]
Nicodemus, F.E. (1965): « Directional Reflectance and Emissivity of an Opaque Surface », Applied Optics, 4, 767-775. [https://doi.org/10.1364/AO.4.000767]
Volait, M. (2013): « Le remploi de grands décors historiques dans l’architecture moderne : l’hôtel particulier Saint-Maurice au Caire (1875-79) », 5èmes rencontres internationales du patrimoine architectural méditerranéen, 109-112, Marseille. [https://shs.hal.science/halshs-00918821v1]
Walter, B., Marschner, S.R., Li, H. and Torrance, K.E. (2007): « Microfacet Models for Refraction through Rough Surfaces », Eurographics Symposium on Rendering, 195-206. [https://doi.org/10.2312/EGWR/EGSR07/195-206]
Weyrich, T., Lawrence, J., Lensch, H., Rusinkiewicz, S. and Zickler, T. (2009): « Principles of Appearance Acquisition and Representation », Foundations and Trends® in Computer Graphics and Vision, 4, 75-191. [http://dx.doi.org/10.1561/0600000022]
Notes
1. Dong et al. 2013, 1-17.
2. Weyrich et al. 2009, 75-191.
3. Mytum & Peterson 2018, 489-503.
4. Nicodemus 1965, 767-775; Judd 1967, 445-452.
5. MacDonald & Robson 2010, 422-427.
6. McGuigan & Christmas 2020, 102911.
7. Volait 2013, 109-112.
8. Baillet et al. 2021, 117-120.
9. Duffy et al. 2018.
10. Deschaintre et al. 2019, 1-13.
11. Cou 2023.
12. Cou 2023.
13. Corsini et al. 2008, 291-300.
14. Cou 2023.
15. Cou 2023.
16. Walter et al. 2007, 195-206.
17. Cou 2023.
18. Cou 2023, 115-116.
19. McGuigan & Christmas 2020, 102911.