There’s an overabundance of competent-ish frontend developers. You most likely get to pay the devs less than someone writing the same thing in, say, C++, and finding people with relevant experience takes less time. You also get things like a ready-made sandbox and the ability to reuse UI components from other web services, which simplifies application development. So my guess is that this is done to save money.
Also, the more things run in an embedded browser, the more reasons M$ has to bake Edge into the OS without raising eyebrows as to why they’re shipping it as a default (look, it’s a system tool as well, not just a browser).
And this is because audiophiles don’t understand why the audio master is 96 kHz, or more often 192 kHz. You can actually hear the difference between 48, 96 and 192 kHz signals quite easily, but not in the way people usually think, and not after the audio has been recorded: the main difference is latency while recording and editing. Digital signal processing works in terms of samples, and a certain number of them has to be buffered before a block can be processed, for example to transform the signal between the time and frequency domains. At a higher sample rate the same buffer covers less time, and if there’s one thing humans are good at hearing (relatively speaking), it’s latency.
Digital instruments start being usable at 96 kHz and above, because the latency of a 256-sample buffer gets short enough that there’s no distracting delay from key press to sound. 192 kHz gives you more headroom to add effects and otherwise lengthen the processing pipeline. A higher sample rate also makes frequency manipulation, like pitching a signal down, simpler, since there’s more data to work with.
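To put rough numbers on that, here’s a quick back-of-the-envelope sketch (plain Python; nothing here comes from the thread except the 256-sample buffer, and the round-trip figure just doubles the one-way value as a crude input-plus-output estimate):

    # Latency of one audio buffer = buffer size in samples / sample rate.
    BUFFER_SAMPLES = 256

    for rate_hz in (44_100, 48_000, 96_000, 192_000):
        one_way_ms = BUFFER_SAMPLES / rate_hz * 1000
        # A real input -> process -> output chain buffers at least twice,
        # so doubling gives a rough round-trip estimate.
        round_trip_ms = 2 * one_way_ms
        print(f"{rate_hz:>6} Hz: {one_way_ms:5.2f} ms per buffer, "
              f"~{round_trip_ms:5.2f} ms round trip")

At 48 kHz a 256-sample buffer is about 5.3 ms, at 96 kHz about 2.7 ms, and at 192 kHz about 1.3 ms, which is roughly where the key-to-sound delay stops being distracting.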
But after the editing is done, there’s absolutely no reason not to downsample the published recording to 48 or 44.1 kHz. Human ears can’t hear the difference, and whatever equipment you’re playing it on will probably refuse to reproduce anything much above 25 kHz anyway, since e.g. the speaker coils aren’t designed to pass higher-frequency signals. It’s not like visual media, where equipment still can’t match the dynamic range of the eye and we’re only just reaching pixel densities beyond which we can’t see a difference.
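For what it’s worth, downsampling the finished master is a one-liner in most tools. Here’s a minimal sketch with SciPy’s polyphase resampler (the 1 kHz sine is just a stand-in for the finished master, and a real mastering chain would also apply dither when reducing bit depth):

    import numpy as np
    from scipy.signal import resample_poly

    SRC_RATE = 192_000   # assumed master sample rate
    DST_RATE = 48_000    # distribution sample rate

    # Placeholder signal: one second of a 1 kHz sine at the source rate.
    t = np.arange(SRC_RATE) / SRC_RATE
    audio = np.sin(2 * np.pi * 1000 * t)

    # resample_poly low-pass filters and resamples by the rational factor
    # up/down, which for 192 kHz -> 48 kHz is simply 1/4.
    downsampled = resample_poly(audio, up=1, down=SRC_RATE // DST_RATE)

    print(len(audio), "->", len(downsampled), "samples")  # 192000 -> 48000

The anti-aliasing filter built into the resampler discards everything above 24 kHz, which, per the argument above, nobody can hear anyway.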