This question is probably best answered by nate, since he's the technical expert and has been around long enough to have the historical knowledge I'm most curious about. But any insight is welcome.
SDA has long prided itself on its video quality standards. Runs are encoded in multiple quality options, with high quality (or "insane quality" in some cases) being the best available. Use of higher-quality inputs like S-Video or Component has been encouraged. Something I noticed - and this is mentioned on the AviSynth page, for self-encoders - is that SDA videos will never have higher resolution than the actual game: if a game runs at 240p, even the high-quality video will be encoded at 320x240. This is because, supposedly, there's no advantage to encoding a 240p game at any higher resolution than 240p, since you get no extra information out of it and it would just waste server space.
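To convince myself of that point, here's a toy round trip in Python with Pillow (my own sketch, nothing to do with SDA's actual encoding chain): upscale a fake 320x240 frame 2x with nearest neighbor, sample it back down, and you get the exact same bytes back - the bigger version carried no new information.

```python
from PIL import Image

# Stand-in for a 320x240 game frame (Gaussian noise, grayscale).
frame = Image.effect_noise((320, 240), 64)

# "Encode" at double resolution with nearest neighbor, then sample back down.
up = frame.resize((640, 480), resample=Image.NEAREST)
down = up.resize((320, 240), resample=Image.NEAREST)

# Byte-identical round trip: the 640x480 version held no extra information.
print(list(frame.getdata()) == list(down.getdata()))  # True
```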
But something I've learned about over the past few years is the idea of interpolation during upscaling: when a low-resolution input is displayed on a high-resolution screen (such as feeding an SD console into an HDTV, or expanding a low-resolution video or image to full-screen on a computer), it needs to be upscaled. In other words, the 320x240 image needs to be scaled up to, say, 1440x1080 to be displayed (obviously this is done automatically by the HDTV or computer, and the user may not be aware of it). But there are many different upscaling methods, and some result in a very different look than others. In particular, HDTVs and media players on computers tend to use a bilinear or bicubic method of upscaling, which typically results in a blurry image. Even a 640x480 video (which is what high-quality SDA videos use) would end up blurred if upscaled to full-screen in a media player on a computer (which, even in the mid-2000s, would have run at a sharp 1024x768 at the lowest).
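Here's a quick sketch of the difference I mean (again Python/Pillow, just my own illustration): the same tiny checkerboard upscaled with bilinear interpolation picks up in-between gray values, while nearest neighbor only replicates pixels and stays hard-edged.

```python
from PIL import Image

# A 4x4 black/white checkerboard standing in for crisp pixel art.
src = Image.new("L", (4, 4))
src.putdata([255 if (x + y) % 2 else 0 for y in range(4) for x in range(4)])

# Upscale 8x two ways.
blurry = src.resize((32, 32), resample=Image.BILINEAR)  # interpolated
sharp = src.resize((32, 32), resample=Image.NEAREST)    # pixel-replicated

# A pixel that falls between two source pixels: bilinear invents an
# in-between gray, nearest neighbor stays at a pure source value.
print(blurry.getpixel((4, 0)), sharp.getpixel((4, 0)))  # e.g. 16 0
```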
Now, I know that depending on taste, some amount of blur can be advantageous (I learned about all this in the context of learning about consumer-grade CRT TVs, whose natural Gaussian blur pixel artists often took into account when drawing their sprites, leading to more realistic, rounded-off images). But based on my understanding of SDA's mindset in the mid-2000s - and the lack of any mention of bilinear/bicubic upscaling on the SDA pages I'm aware of - it seems like such blurring of SDA's videos (during upscaling when watching full-screen on a computer) would have been considered undesirable, especially for 2D sprite games (which even emulators would render at full pixel sharpness). Which makes me wonder: did nate (and the rest of the SDA staff) understand, during that time period, that videos were being blurred during upscaling when viewed full-screen on a computer? Was this considered desirable for some reason? Was bilinear/bicubic upscaling even common in media players of the period? Was the whole concept of upscaling/interpolation just not on the SDA team's radar at the time (I certainly had no idea about it myself; I assumed "low resolution" inherently looked blurry)? And if such blurring is not considered desirable, is there a recommended way to circumvent it (perhaps enabling nearest-neighbor upscaling in VLC)? I've sketched the idea I have in mind below.
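Some players do let you pick the scaler (mpv has a --scale=nearest option, for instance; I'm less sure what VLC's scaling settings actually affect when the GPU does the stretch). The workaround I'm picturing, as a hypothetical helper I made up rather than any real player setting, is integer pre-scaling: nearest-neighbor upscale by the largest whole-number factor that fits the screen, so every source pixel becomes an exact NxN block before any final (blurry) stretch.

```python
from PIL import Image

def integer_prescale(frame, screen_w, screen_h):
    """Hypothetical helper: nearest-neighbor upscale by the largest
    whole-number factor that fits the screen, so every source pixel
    becomes an exact NxN block before any final (blurry) stretch."""
    factor = max(1, min(screen_w // frame.width, screen_h // frame.height))
    return frame.resize((frame.width * factor, frame.height * factor),
                        resample=Image.NEAREST)

# A 320x240 frame on a 1920x1080 screen gets a clean 4x -> 1280x960.
frame = Image.new("RGB", (320, 240))
print(integer_prescale(frame, 1920, 1080).size)  # (1280, 960)
```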