Annual luminance maps provide meaningful evaluations for occupants’ visual comfort, preferences, and perception. However, acquiring annual luminance maps requires labor-intensive, time-consuming simulations or impractically long field measurements. In this Research Committee talk, we will present a novel data-driven machine learning approach that makes annual luminance-based evaluations more efficient and accessible. The methodology predicts annual panoramic luminance maps from a limited number of point-in-time high dynamic range (HDR) images using a deep neural network (DNN). Unlike the fixed camera viewpoints of the perspective or fisheye projections commonly used in daylighting evaluations, panoramas (with a 360° horizontal and 180° vertical field of view) allow full freedom in camera roll, pitch, and yaw, thus providing a robust source of information about an occupant’s visual experience in a given environment. The high-quality panoramas predicted by the DNN are validated against Radiance (RPICT) renderings using a series of quantitative and qualitative metrics. The most efficient predictions are achieved with 9 days of hourly data collected around the spring equinox and the summer and winter solstices (2.5% of the year) to predict the luminance maps for the rest of the year. The results show that practitioners and researchers can efficiently incorporate long-term luminance-based metrics over multiple view directions into design and research processes using the proposed DNN workflow. We share a public dataset of annual HDR panoramic luminance maps and the machine learning codebase to enable reproducibility and future explorations (https://github.com/yueAUW/neural-daylighting.git).
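As a rough illustration of the sampling scheme described above, the Python sketch below assembles the 9 training days as 3-day windows centered on the spring equinox and the two solstices, and checks the "2.5% of the year" figure (9/365 ≈ 2.5%). The anchor dates and the window width are illustrative assumptions, not taken from the released codebase.

```python
# A minimal sketch (not the released codebase) of selecting the 9
# hourly-sampled training days around the equinox and solstices.
from datetime import date, timedelta

# Approximate 2021 dates; exact equinox/solstice days vary by year.
ANCHORS = [
    date(2021, 3, 20),   # spring equinox
    date(2021, 6, 21),   # summer solstice
    date(2021, 12, 21),  # winter solstice
]

def training_days(anchors=ANCHORS, half_window=1):
    """Return each anchor day plus/minus half_window days (9 days total)."""
    days = []
    for anchor in anchors:
        for offset in range(-half_window, half_window + 1):
            days.append(anchor + timedelta(days=offset))
    return days

days = training_days()
assert len(days) == 9
# 9 of 365 days of hourly captures: the "2.5% of the year" figure.
print(f"training fraction: {9 / 365:.1%}")  # -> 2.5%
```

In this sketch, the remaining 356 days would form the prediction target, with the DNN mapping the sparse point-in-time HDR panoramas onto the rest of the annual hourly series.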
