Since media queries are the declarative mechanism for matching resources to device capabilities on the HTML Media Element when using static resources, it seems natural to also use them for the Media Source Extensions case.
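For reference, the static-resource pattern looks roughly like the following (illustrative sketch only; the filenames and breakpoints are made up):

```html
<!-- Declarative resource <-> capability matching with static sources:
     the media attribute lets the UA pick the stream variant whose media
     query matches the device, e.g. a UHD file only on large displays. -->
<video controls>
  <source src="movie-2160p.mp4" media="(min-width: 2160px)">
  <source src="movie-1080p.mp4" media="(min-width: 1080px)">
  <source src="movie-480p.mp4">
</video>
```

The question is whether an equivalent capability signal could be made available when the media is instead fed through Media Source Extensions.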
At the time, the response was that resource selection should be performed by the UA, not the author. For adaptive streaming, this option has been on the table for a long time: various manifest formats exist which can describe available equivalent media streams (DASH, HLS, Smooth Streaming) and UAs *could* support these, giving complete control of the stream selection to the UA.
However, in practice, only one desktop UA has supported any such format, and it is a proprietary one (Safari and HLS). All desktop UAs are working on - and in some cases have deployed - the alternative Media Source Extensions model. That specification is close to CR. At least the two largest video sites in the US have adopted this approach and have deployed large-scale services.
Whilst this approach allows the site to adapt the streamed media to available bandwidth and to certain device capabilities exposed by canPlayType and media queries, we are still missing the ability to adapt to display resolution and audio output capability.
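To make the gap concrete, here is a sketch of the kind of variant selection a script-driven player performs today. The variant list and pickVariant() helper are hypothetical, not part of any spec; the point is that the script needs a reliable display-capability signal to feed into this choice.

```javascript
// Sketch only: how an MSE-based player might pick a stream variant from
// its manifest based on display width. pickVariant() and the variant
// list are illustrative assumptions, not a standardized API.
function pickVariant(variants, displayWidth) {
  // Prefer the largest variant that does not exceed the display width,
  // falling back to the smallest available one.
  const sorted = variants.slice().sort((a, b) => a.width - b.width);
  let chosen = sorted[0];
  for (const v of sorted) {
    if (v.width <= displayWidth) chosen = v;
  }
  return chosen;
}

// In a browser the display width could come from media queries, e.g.
//   window.matchMedia('(min-width: 3840px)').matches
// or screen.width - but as noted below, the video element may occupy
// only a small window on a high-resolution display.
const variants = [
  { width: 3840, url: 'uhd.mp4' },
  { width: 1920, url: 'hd.mp4'  },
  { width: 854,  url: 'sd.mp4'  },
];
console.log(pickVariant(variants, 1920).url); // 'hd.mp4'
```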
These are optimizations and cannot be done perfectly. For example, a display may be HD but the video may be in a small window with less than SD resolution. Or a device may have a digital audio output connected, but it's connected to an amplifier that performs down-mixing to stereo.
Nevertheless, the optimizations which can be performed become quite valuable when considering UHD and high-quality multi-channel audio. Optimizing for display capability not only saves bandwidth, but may also avoid the need for downscaling, saving battery life.
In discussions in the html-media group, it's been suggested that this topic should be the subject of a general solution, not something specific to MSE.
Is there any interest in revisiting this topic here?