Such error handling is easier, because it’s a normal part of the flow rather than a separate, untested path. If submission fails to load the page, you get the normal browser error page, and you can reload to submit again. And that’s what you want, unless you can actively do something else in the meantime (which is still a good deal more complex and not typically supported by web apps).

By contrast, my experience says that scripting-based submission is very flimsy in the presence of an unreliable network. It’s crazy just how many things enter broken, unrecoverable states because it didn’t occur to the authors that maybe that fetch could fail, and maybe you might want to try again. Or worse, and practically never handled properly: network conditions lead to a stuck request, where the client waits for a response that will never come. In the browser, you just press F5/Ctrl+R/⌘R when you recognise this situation; but when submission is scripting-powered, the only fix may be briefly toggling the network adapter to kill the request, and hoping that leads the page to let you submit again. (You can’t just reload the page, because you’d lose in-page form state, another of those things that the browser gives you for free but scripting plays havoc with, though Firefox is best at compensating.)
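To make that concrete, here’s a minimal sketch of roughly the least a scripted submission has to do just to avoid those two failure modes: a timeout so a stuck request can’t hang forever, and an error path that re-enables the form so the user can actually try again. The helper names and the 15-second timeout are illustrative assumptions, not anything a real app necessarily uses.

```typescript
// A sketch of what scripted submission must reimplement just to approach
// the browser's built-in behaviour for a plain form post.

async function submitWithRecovery(
  url: string,
  body: FormData,
  timeoutMs = 15_000, // arbitrary; the browser's own navigation has its own limits
): Promise<Response> {
  const controller = new AbortController();
  // Without this abort, a stuck request waits forever; the browser's own
  // navigation would at least show a loading indicator you could cancel.
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, {
      method: "POST",
      body,
      signal: controller.signal,
    });
    if (!response.ok) {
      throw new Error(`Server responded ${response.status}`);
    }
    return response;
  } finally {
    clearTimeout(timer);
  }
}

// Usage: every caller must remember to re-enable the submit control on
// failure, or the page enters exactly the broken state described above.
async function onSubmit(form: HTMLFormElement, button: HTMLButtonElement) {
  button.disabled = true;
  try {
    await submitWithRecovery(form.action, new FormData(form));
    // navigate, show success, &c.
  } catch (error) {
    // The part so often forgotten: surface the failure and allow a retry.
    button.disabled = false;
    alert(`Submission failed, please try again: ${error}`);
  }
}
```

And that’s still only two of the failure modes; it does nothing about preserving form state across a reload, duplicate submissions, or the tab-level loading indicator, all of which plain form submission handles for you.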
Seriously, scripting-based submission, navigation, &c. reimplement a lot of what the browser gives you for free, and it can’t all be done (e.g. the tab’s loading indicator), and what is reimplemented is never as flawless as the original. Yes, you get some interesting advantages, but you lose a lot too, more than developers tend to admit.