I think I now understand what is happening in my case: with old connections sticking around, subsequent requests are actively refused by the WiFi module before any of my code has a chance to see them. So we have two options: the first is to close every connection immediately after the response, since we can only handle one at a time; the other is to keep that one connection open and reuse it sequentially for multiple transfers (but then we need to terminate each response with an explicit size, via Content-Length or chunked transfer encoding).
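To illustrate the first option, here is a minimal sketch (plain Python, not the DigiFi API) of a response that carries both an explicit Content-Length and Connection: close, so the client knows exactly where the body ends and that the socket can be torn down right away:

```python
def build_response(body: bytes) -> bytes:
    # Hypothetical helper, not part of DigiFi: frame a one-shot HTTP/1.1
    # response. Content-Length marks the end of the body, and
    # Connection: close tells the client not to reuse the socket.
    headers = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

# After sending this, the server closes the socket immediately,
# freeing the single connection slot for the next client.
response = build_response(b"hello")
```

The same framing would apply whatever language the server side is written in; the point is just that both headers go out before the body.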
Has anyone had success implementing a DigiX server with keep-alive connections and chunked web transfers, using the new chunked support in DigiFi? This seems like a reasonable approach that could be fairly efficient, as long as the clients on the other side (browsers) don't try to get too smart. These are all guidelines, so a browser could honor keep-alive and reuse the connection, but it could also open multiple connections and pipeline requests heavily, assuming a beefy server on the other side. That would be bad. The only really safe mechanism is to ensure every connection can be closed immediately after the response (Connection: close and explicit sizes?)
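For reference, the chunked framing itself is simple; each chunk is a hex length, CRLF, the payload, CRLF, and a zero-length chunk ends the body. A quick sketch (again generic Python, not DigiFi):

```python
def chunk(data: bytes) -> bytes:
    # One chunk: hex-encoded length, CRLF, payload, CRLF.
    return f"{len(data):x}\r\n".encode("ascii") + data + b"\r\n"

def end_chunks() -> bytes:
    # A zero-length chunk followed by a blank line terminates the body,
    # so the connection can stay open for the next request.
    return b"0\r\n\r\n"

# A body streamed in two pieces; the client knows it is complete
# without the server having to close the connection.
body = chunk(b"hello ") + chunk(b"world") + end_chunks()
```

This is what lets keep-alive work without knowing the total size up front; the terminator chunk plays the role that Content-Length plays in the one-shot case.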