Satellite imagery is widely used to assess wildfire burn severity within the scientific community and by fire management agencies. Multiple indices have been proposed for this purpose, among which the differenced Normalized Burn Ratio (dNBR) is arguably the most common, and it is expected to provide an objective and consistent assessment. However, although many studies have shown evidence that image pair selection drives variability in dNBR-based assessments of burn severity, a comprehensive examination of the extent of the bias resulting from image selection has been lacking. In this study, we focus on three factors of the image selection process encountered by most Landsat-derived dNBR applications: the sensor combination, and the differences in the year and the season of acquisition of the pre- and post-fire image pairs. Through separate analyses, each targeting a single factor, we show that the Landsat sensor combination between the pre- and post-fire images has a limited impact on dNBR values. The difference in the year of acquisition between the images in a pair is shown to influence the dNBR assessment, with a noticeable increase in mean dNBR (>0.1) for a single-year difference between images compared to multi-year differences. However, differences in the image acquisition seasons, and the resulting phenological differences, are shown to impact dNBR values most considerably. Based on our results, we warn against calculating dNBR from images acquired in different seasons. We believe that despite the existence of multiple derivatives of dNBR, there remains a need for an improved version; one that is less susceptible to the phenological impacts introduced by the selected images.
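For readers unfamiliar with the index, the standard NBR/dNBR computation can be sketched as follows. This is a minimal illustration, not the study's processing chain: the band names, reflectance values, and function names are assumptions for demonstration, based on the common definition NBR = (NIR − SWIR2) / (NIR + SWIR2) and dNBR = NBR_prefire − NBR_postfire.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    nir = np.asarray(nir, dtype=float)
    swir2 = np.asarray(swir2, dtype=float)
    return (nir - swir2) / (nir + swir2)

def dnbr(nbr_prefire, nbr_postfire):
    """Differenced NBR: positive values indicate a post-fire drop in NBR."""
    return nbr_prefire - nbr_postfire

# Illustrative surface reflectances (hypothetical values): a vegetated
# pixel before the fire and the same pixel after burning, where NIR
# reflectance drops and SWIR2 reflectance rises.
pre = nbr(nir=[0.40], swir2=[0.10])   # NBR = 0.6
post = nbr(nir=[0.15], swir2=[0.25])  # NBR = -0.25
severity = dnbr(pre, post)            # dNBR = 0.85
```

Because dNBR differences two scene-dependent quantities, any pre-/post-fire change in the NIR and SWIR2 signals that is unrelated to the fire, such as the phenological differences examined in this study, propagates directly into the severity estimate.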