
Conversation

xal-0 commented Dec 16, 2025

On Unix, if a uv_stream_t has UV_HANDLE_BLOCKING_WRITES set, either because we were unable to put the file in O_NONBLOCK mode when the stream was initialized, or because the stream has been shared with a subprocess via UV_INHERIT_STREAM, uv_write can block.

As discussed in #3602, the nicest solution would be to use platform-specific features to do non-blocking writes to files regardless of the O_NONBLOCK state, but a fallback is needed in any case. This change implements the fallback by doing all writes to blocking streams in the thread pool.

There are a handful of specific changes I'd like to get feedback on:

  • This tweaks uv_spawn slightly: we clear O_NONBLOCK on libuv-owned streams shared using UV_INHERIT_STREAM, but not on files shared with UV_INHERIT_FD. We set UV_HANDLE_BLOCKING_WRITES only when we put the file in blocking mode, since uv_stream_set_blocking is documented to make uv_write block.
  • The UV_HANDLE_* enum is getting crowded. I put UV_HANDLE_WRITE_PENDING on top of a Windows-only flag, but the name is generic enough that we might one day want to use it for Windows too. Should it go somewhere else?
  • uv_fs_t* blocked_write is a new field on UV_STREAM_PRIVATE_FIELDS for now, breaking ABI.

Handling uv_close is more complicated when we dispatch writes to the thread pool. We can try to uv_cancel the write request, but there are few portable things we can do to cancel an in-progress write once it has been picked up from the work queue, save for reserving a signal for libuv's use only. I implemented a compromise: we try uv_cancel, but keep the FD open and defer calling the write callbacks until the write(2) call completes (the write request itself doesn't need to be completed fully). This seems acceptable because until #4966 the callback can't know how many bytes were written if status == UV_ECANCELED.

The added test reproduces a problem we encounter frequently in Julia, where spawning a process puts our stream into blocking mode, then a later write fills the pipe/TTY buffer and deadlocks.

xal-0 force-pushed the ss/stream-write-threadpool branch 2 times, most recently from 323863a to 562101f on December 16, 2025 03:50
On Unix, if a uv_stream_t has UV_HANDLE_BLOCKING_WRITES set, either
because we were unable to put the file in O_NONBLOCK mode when the
stream was initialized, or because the stream has been shared with a
subprocess via UV_INHERIT_STREAM, uv_write can block.

The nicest solution would be to use platform-specific features to do
non-blocking writes to files regardless of the O_NONBLOCK state, but a
fallback is needed regardless.  This change implements the fallback by
doing all writes to blocking streams in the threadpool.

Note that this also tweaks uv_spawn slightly: we clear O_NONBLOCK on
libuv-owned streams shared using UV_INHERIT_STREAM, but not on files
shared with UV_INHERIT_FD.
xal-0 force-pushed the ss/stream-write-threadpool branch from 562101f to a3c9a98 on December 16, 2025 04:06
saghul (Member) commented Dec 16, 2025

IIRC, way back when, this was added so that console logs in Node wouldn't be lost (or maybe also interleaved?) if the process exits abruptly.

Wouldn't this change that?

Just an observation, generally I think this is a good change.

Keno (Contributor) commented Dec 16, 2025

We can try to uv_cancel the write request, but there are few portable things we can do to cancel an in-progress write once it has been picked up from the work queue, save for reserving a signal for libuv's use only

I do think we need a way to kick the thread out of the syscall for cancellation purposes. We already have UV_LOOP_BLOCK_SIGNAL - no reason we can't have UV_LOOP_CANCEL_SIGNAL have the embedder designate a signal that has been appropriately configured. Needs to properly handle the sigmask of course.

