int server = ...; // like before

struct pollfd pfds[1024] = {{ .events = POLLIN, .fd = server }};

while (poll(pfds)) { // This is the "event loop"
    foreach (pfd in pfds) {
        if (pfd.revents & POLLIN) {
            if (pfd.fd == server) {
                // Server socket has a connection!
                int connection = accept(server);
                push(pfds, { .events = POLLIN, .fd = connection });
            } else {
                // Connection socket has data!
                char buf[4096];
                int size = read(pfd.fd, buf, sizeof buf);
                write(pfd.fd, buf, size);
            }
        }
    }
}
Scale problem 2: linear scan of file descriptors
With thousands of fds, passing the entire list back and forth to the kernel
becomes a bottleneck when most of them will remain unready in any single loop.
int server = ...; // like before

int epollfd = epoll_create1(0);

struct epoll_event events[10];
struct epoll_event ev = { .events = EPOLLIN, .data.fd = server };
epoll_ctl(epollfd, EPOLL_CTL_ADD, server, &ev);

// This *is* the "event loop", every pass is a "tick"
while ((int max = epoll_wait(epollfd, events, 10, -1))) {
    for (int n = 0; n < max; n++) {
        if (events[n].data.fd == server) {
            // Server socket has a connection!
            int connection = accept(server);
            ev.events = EPOLLIN;
            ev.data.fd = connection;
            epoll_ctl(epollfd, EPOLL_CTL_ADD, connection, &ev);
        } else {
            // Connection socket has data!
            char buf[4096];
            int size = read(events[n].data.fd, buf, sizeof buf);
            write(events[n].data.fd, buf, size);
        }
    }
}
What is the node event loop?
A semi-infinite loop, polling and blocking on the O/S until one or more of a
set of file descriptors are ready.
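In the same pseudo "C" as the examples above (the helper names here are made
up for illustration, they are not real node or libuv functions), the overall
shape is:

// Not truly infinite: the loop ends when nothing "refs" it any more.
while (something_still_refs_the_loop()) {
    epoll_wait(epollfd, events, 10, timeout); // block until fds are ready (or timeout)
    run_callbacks_for_ready_fds();
}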
When does node exit?
It exits when it no longer has any events to epoll_wait() for, so it will
never have any more events to process. At that point the epoll loop must complete.
Note: .unref() marks handles that are being waited on in the loop as "not
counting" towards keeping node alive.
Can we poll for all Node.js events?
Yes and no.
"file" descriptors: yes, but not actual disk files (sorry)
Pollable: timeouts
The timeout resolution is milliseconds (a timespec would be nanoseconds), but
either way it is rounded up to the system clock granularity.
Only one timeout can be set at a time, but Node.js keeps all timeouts sorted,
and sets the timeout value to the next (earliest) one.
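In the talk's pseudo "C", with a made-up sorted timer list and a made-up
now_ms() helper, that scheduling looks something like:

int timeout = -1;                               // no timers: block forever
if (!empty(timers))                             // timers are sorted by due time
    timeout = max(0, timers[0].due - now_ms()); // only the earliest one matters

int nready = epoll_wait(epollfd, events, 10, timeout);

if (nready == 0) {
    // Timed out rather than woke up: run every timer whose due time has passed.
    run_expired_timers(timers, now_ms());
}
// ... then handle any ready fds as in the loop above ...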
Not pollable: file system
fs.* use the uv thread pool (unless they are sync).
The blocking call is made by a thread, and when it completes, readiness is
signalled back to the epoll loop using either an eventfd or a self-pipe, as
sketched below.
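Roughly like this, in pseudo "C" (reusing the epollfd from earlier; the thread
function is illustrative, not libuv's actual code):

int wakeup = eventfd(0, EFD_NONBLOCK);          // a pollable "doorbell"

struct epoll_event ev = { .events = EPOLLIN, .data.fd = wakeup };
epoll_ctl(epollfd, EPOLL_CTL_ADD, wakeup, &ev); // the loop polls it like any other fd

// Runs on a uv thread pool thread, not on the event loop:
void *do_blocking_read(void *arg) {
    char buf[4096];
    int fd = open("/some/file", O_RDONLY);
    read(fd, buf, sizeof buf);                  // may block on disk: fine, we're off-loop
    close(fd);

    uint64_t one = 1;
    write(wakeup, &one, sizeof one);            // wakes epoll_wait() in the event loop
    return NULL;
}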
Aside: self-pipe
A pipe, where one end is written to by a thread or signal handler, and the
other end is polled in the epoll loop.
Traditional way to "wake up" a polling loop when the event to wait for is
directly representable as a file descriptor.
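For example (pseudo "C" again, epollfd as before):

int selfpipe[2];
pipe(selfpipe);                                 // [0] is the read end, [1] the write end

struct epoll_event ev = { .events = EPOLLIN, .data.fd = selfpipe[0] };
epoll_ctl(epollfd, EPOLL_CTL_ADD, selfpipe[0], &ev); // poll the read end

// In a signal handler (or a worker thread), wake the loop up:
void on_signal(int signo) {
    char byte = signo;
    write(selfpipe[1], &byte, 1);               // write() is async-signal-safe
}

// When epoll reports selfpipe[0] readable, the loop drains it and dispatches.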
Sometimes pollable: dns
dns.lookup() calls getaddrinfo(), a function in the system
resolver library that makes blocking socket calls and cannot be integrated
into a polling loop.
dns.<everything else> uses non-blocking I/O, and integrates with the epoll
loop
Docs bend over backwards to explain this, but once you know how the event loop
works, and how blocking library calls must be shunted off to the thread pool,
this will always make sense.
Important notes about the UV thread pool
It is shared by:
fs,
dns,
http.request() (when given a host name, dns.lookup() is used to resolve it), and
any C++ addons that use it.
The default number of threads is 4; significantly parallel users of the above
should increase the size.
Hints:
Resolve DNS names yourself, using the direct dns.resolve*() APIs, to avoid
dns.lookup().
Increase the thread pool size with UV_THREADPOOL_SIZE.
Pollable: signals
The ultimate async... uses the self-pipe pattern to communicate with the epoll loop.
Note that attaching callbacks for signals doesn't "ref" the event loop, which
is consistent with their usage as a "probably won't happen" IPC mechanism.
Pollable: child processes
Unix signals child process termination with SIGCHLD
Pipes between the parent and child are pollable.
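A sketch of the pipe half (pseudo "C", epollfd as before; "ls" is just an
example command):

int io[2];
pipe(io);

pid_t child = fork();
if (child == 0) {                               // in the child
    close(io[0]);
    dup2(io[1], STDOUT_FILENO);                 // child's stdout feeds the pipe
    execvp("ls", (char *[]){ "ls", NULL });
}
close(io[1]);                                   // parent keeps only the read end

struct epoll_event ev = { .events = EPOLLIN, .data.fd = io[0] };
epoll_ctl(epollfd, EPOLL_CTL_ADD, io[0], &ev);  // the child's output is just another fd

// read() returning 0 on io[0] means the child closed its end; the exit
// status itself still arrives via SIGCHLD (see the self-pipe aside) plus waitpid().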
Sometimes pollable: C++ addons
Addons should use the UV thread pool, but can do anything, including making
blocking calls which will block the loop (perhaps unintentionally).
Hints:
Review their code
Track loop metrics
You should now be able to describe:
What is the event loop
When is node multi-threaded
Why it "scales well"
End
This talk, including a compilable version of the pseudo "C" for playing with: