
fix MPM_MODULES typo (in check-conf)
mpm_winnt: Advertise support for preshutdown notifications in the service,
and perform shutdown in response to SERVICE_CONTROL_PRESHUTDOWN.

The plain shutdown notification leaves only a small amount of time for the
service to finish (and the allowed amount of time has been shrinking with
every new version of Windows), so handling only that notification increases
the chance of the process being killed by the SCM instead of gracefully
shutting down. Handling the preshutdown control code extends this period
and increases the chances of finishing everything properly when the machine
is rebooted or shut down.

Please note that although preshutdown notifications are available only
starting from Windows Vista, the code is compatible with previous versions
of Windows, since the SCM ignores unknown SERVICE_ACCEPT codes and will
still send an ordinary SERVICE_CONTROL_SHUTDOWN under older Windows.
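
For illustration, here is a minimal sketch of how a Win32 service can
advertise and handle the preshutdown control code. This is not the actual
mpm_winnt code: identifiers such as report_status(), service_ctrl() and
shutdown_pending are illustrative, and the fallback defines are only needed
with older SDK headers.

    #include <windows.h>

    #ifndef SERVICE_CONTROL_PRESHUTDOWN     /* older SDKs may not define these */
    #define SERVICE_CONTROL_PRESHUTDOWN 0x0000000F
    #define SERVICE_ACCEPT_PRESHUTDOWN  0x00000100
    #endif

    static SERVICE_STATUS_HANDLE status_handle;
    static volatile LONG shutdown_pending = 0;

    static void report_status(DWORD state, DWORD accepted)
    {
        SERVICE_STATUS ss;
        ZeroMemory(&ss, sizeof(ss));
        ss.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
        ss.dwCurrentState     = state;
        ss.dwControlsAccepted = accepted;
        SetServiceStatus(status_handle, &ss);
    }

    static DWORD WINAPI service_ctrl(DWORD ctrl, DWORD event_type,
                                     LPVOID event_data, LPVOID context)
    {
        (void)event_type; (void)event_data; (void)context;

        switch (ctrl) {
        case SERVICE_CONTROL_PRESHUTDOWN:   /* Vista+: longer grace period */
        case SERVICE_CONTROL_SHUTDOWN:      /* pre-Vista: short grace period */
        case SERVICE_CONTROL_STOP:
            InterlockedExchange(&shutdown_pending, 1);
            report_status(SERVICE_STOP_PENDING, 0);
            return NO_ERROR;
        case SERVICE_CONTROL_INTERROGATE:
            return NO_ERROR;
        default:
            return ERROR_CALL_NOT_IMPLEMENTED;
        }
    }

The handler would be registered via RegisterServiceCtrlHandlerEx(), and
SERVICE_ACCEPT_PRESHUTDOWN (together with SERVICE_ACCEPT_SHUTDOWN and
SERVICE_ACCEPT_STOP) would be included in the accepted mask when reporting
SERVICE_RUNNING; an SCM that does not know the preshutdown flag simply
ignores it and keeps sending the plain shutdown control.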


mpm_winnt: Remove unused values of the io_state_e enum.

Submitted By: Ivan Zhakov <ivan {at}>

mpm_winnt: Remove a duplicated comment in the child_main() function.

mpm_winnt: Use a LIFO stack instead of a FIFO queue to hold unused
completion contexts, as that may significantly reduce the memory usage.

This simple change can have a noticeable impact on the amount of memory
consumed by the child process in various cases. Every completion context
in the queue has an associated allocator, and every allocator has its
ap_max_mem_free memory limit which is not given back to the operating
system. Once the queue grows, it cannot shrink back, and every allocator
in each of the queued completion contexts keeps up to its max_free amount
of memory. The queue can only grow when the server has to serve multiple
concurrent connections at once.

With that in mind, consider a server that doesn't encounter many concurrent
connections most of the time, but has occasional spikes when it has to serve
multiple concurrent connections. During such spikes, the size of the
completion context queue grows.

The actual difference between using LIFO and FIFO order shows up after such
spikes, when the server is back to light load and doesn't see a lot of
concurrency. With FIFO order, every completion context in the queue will be
used in a round-robin manner, thus touching *every* available allocator one
by one and ultimately claiming up to (N * ap_max_mem_free) bytes from the
OS. With LIFO order, only the completion contexts that are close to the top
of the stack will be used and reused for subsequent connections. Hence, only
a small subset of the allocators will be used, which prevents all the other
allocators from unnecessarily acquiring memory from the OS (and keeping it),
and this reduces the overall memory footprint.

Please note that this change doesn't affect the worst-case behavior, which
is still (N * ap_max_mem_free) bytes, but it tends to behave better in
practice, for the reasons described above.

Another thing worth considering is the new behavior when the OS decides to
swap out pages of the child process, for example in a close-to-OOM
condition. Handling every new connection after the swap requires the OS to
load the memory pages of the allocator from the completion context that is
used for this connection. With FIFO order, the completion contexts are used
one by one, and this would cause page loads for every new connection. With
LIFO order, there will be almost no swapping, since the same completion
contexts are reused for subsequent new connections.
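
As a rough illustration of why the order matters, here is a minimal,
hypothetical sketch of a LIFO free list of completion contexts. The type
and function names are stand-ins, not mpm_winnt's real structures, and
synchronization is omitted; the point is only that pops always return the
most recently pushed context.

    #include <stddef.h>

    /* Hypothetical stand-ins for the real mpm_winnt structures. */
    typedef struct completion_ctx {
        struct completion_ctx *next;
        void *allocator;     /* per-context allocator that retains up to
                                ap_max_mem_free bytes of free memory */
    } completion_ctx;

    static completion_ctx *free_list;    /* top of the LIFO stack */

    static void release_ctx(completion_ctx *ctx)
    {
        ctx->next = free_list;           /* most recently used context on top */
        free_list = ctx;
    }

    static completion_ctx *acquire_ctx(void)
    {
        completion_ctx *ctx = free_list;
        if (ctx)
            free_list = ctx->next;       /* reuse the "hottest" context first */
        return ctx;
    }

With a FIFO queue, a released context is handed out again only after every
other queued context has been cycled through, so all N allocators stay warm;
with the stack above, the entries deep in the list (and the memory their
allocators retain) simply stay untouched once the spike is over.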

mpm_winnt: Drop the APLOG_DEBUG diagnostic saying how many threads are
blocked on the I/O completion port during the shutdown.

Prior to r1801635, the shutdown code needed to know the number of blocked
threads, as it dispatched the same number of completion packets. But this
no longer holds, and the only reason we still maintain the corresponding
g_blocked_threads variable is this debug diagnostic message.

Drop it in order to reduce the complexity of the quite critical code in the
winnt_get_connection() function and to reduce the amount of global state.


mpm_winnt: Remove an unnecessary Sleep() in the winnt_accept() function.

This sleep occurred in a situation when:
- We don't have a free completion context in the queue
- We can't add one, as doing so would exceed the max_num_completion_contexts
  limit (all worker threads are busy)
- We have exceeded a 1 second timeout while waiting for one

In this case, the Sleep() call is unnecessary: there is no intermittent
failure that can be waited out; rather, it is an ordinary situation with all
workers being busy. Presumably, calling Sleep() here could even be
considered harmful, as it affects the fairness between the listeners that
are blocked waiting for a completion context.

So, instead of calling Sleep(), just check for a possible shutdown and
immediately retry acquiring a completion context. If all worker threads are
still busy, the retry will block in the same WaitForSingleObject() call,
which is fine.
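
A minimal sketch of the resulting accept-side loop, with assumed names
(shutdown_event, ctx_available_event, try_get_completion_context()); it is
not the actual winnt_accept() code, but it shows the shape of the change:
no Sleep(), just a shutdown check followed by another wait.

    #include <windows.h>

    extern HANDLE shutdown_event;        /* signalled when the child must exit */
    extern HANDLE ctx_available_event;   /* signalled when a worker frees a context */
    /* Returns NULL when no context is free and the limit has been reached. */
    extern void *try_get_completion_context(void);

    static void *get_completion_context(void)
    {
        void *ctx;

        while ((ctx = try_get_completion_context()) == NULL) {
            /* All worker threads are busy.  There is no transient failure
             * to wait out with Sleep(); just check for shutdown and block
             * until a worker hands back a context. */
            if (WaitForSingleObject(shutdown_event, 0) == WAIT_OBJECT_0)
                return NULL;             /* shutting down, stop accepting */

            WaitForSingleObject(ctx_available_event, 1000 /* ms */);
            /* On timeout or wakeup, retry immediately; if the workers are
             * still busy we block in the same wait again, which is fine. */
        }
        return ctx;
    }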

mpm_winnt: Simplify the shutdown code that was waiting for multiple worker
thread handles in batches.

Starting from r1801636, there is no difference between ending the wait with
one or multiple remaining threads, because we terminate the process if at
least one thread is still active when we hit the timeout. Therefore, instead
of making an effort to evenly distribute and batch the handles with
WaitForMultipleObjects(), we can just start from one end and wait for one
thread handle at a time.
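
Sketched below with illustrative names: rather than grouping the handles
into WaitForMultipleObjects() batches, walk the handle array and wait for
one thread at a time against a single overall deadline.

    #include <windows.h>

    /* Returns 1 if every thread exited before the deadline, 0 otherwise. */
    static int wait_for_workers(HANDLE *threads, DWORD count, DWORD timeout_ms)
    {
        DWORD deadline = GetTickCount() + timeout_ms;
        DWORD i;

        for (i = 0; i < count; ++i) {
            DWORD now = GetTickCount();
            DWORD remaining = ((LONG)(deadline - now) > 0) ? (deadline - now) : 0;

            if (WaitForSingleObject(threads[i], remaining) != WAIT_OBJECT_0)
                return 0;    /* at least one thread is still running */
        }
        return 1;
    }

Once any single wait times out, the caller already knows at least one thread
is stuck, which (per the change above) is enough to decide the outcome.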

mpm_winnt: Avoid using TerminateThread() in case the shutdown routine hits
a timeout while waiting for the worker threads to exit.

Using TerminateThread() can have dangerous consequences such as deadlocks:
for instance, if a thread is terminated while holding a lock, or the heap
lock in the middle of HeapAlloc(), those locks would never be released. It
can also corrupt the application state and cause a crash.

Rework the code to call TerminateProcess() in the described circumstances
and leave the cleanup to the operating system.
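
A hedged sketch of that fallback, with illustrative names: if a worker is
still alive when the timeout expires, the process exits as a whole instead
of terminating the individual thread.

    #include <windows.h>

    static void reap_worker(HANDLE thread, DWORD timeout_ms)
    {
        if (WaitForSingleObject(thread, timeout_ms) == WAIT_OBJECT_0)
            return;  /* the thread exited on its own */

        /* Don't TerminateThread(): the victim may hold the process heap lock
         * (e.g. in the middle of HeapAlloc) or another critical section, and
         * that lock would never be released.  Exit the whole process instead
         * and let the OS reclaim memory, handles and locks in one sweep. */
        TerminateProcess(GetCurrentProcess(), 1);  /* does not return */
    }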

mpm_winnt: Make the shutdown faster by avoiding unnecessary Sleep()'s when
shutting down the worker threads.

Previously, the shutdown code posted a number of I/O completion packets
equal to the number of threads blocked on the I/O completion port. Then it
would Sleep() until all these threads "acknowledged" the completion packets
by decrementing the global count of blocked threads.

A better way is to post a number of IOCP_SHUTDOWN completion packets equal
to the total number of threads and immediately proceed to the next step.
There is no need to block until the threads actually receive the completion,
as the shutdown process includes a separate step that waits until the
threads exit, and the new approach avoids an unnecessary delay.
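
A minimal sketch of the new sequence, using assumed names
(ThreadDispatchIOCP, threads_per_child) and a placeholder value for the
IOCP_SHUTDOWN completion key mentioned above: post one packet per thread
and return immediately; the later join step is what actually waits for the
threads to exit.

    #include <windows.h>

    extern HANDLE ThreadDispatchIOCP;   /* the worker I/O completion port */
    extern DWORD threads_per_child;     /* total number of worker threads */

    #define IOCP_SHUTDOWN 0             /* placeholder for mpm_winnt's enum value */

    static void post_shutdown_packets(void)
    {
        DWORD i;

        /* One packet per thread; no Sleep() afterwards, since a separate
         * step waits on the thread handles themselves. */
        for (i = 0; i < threads_per_child; ++i) {
            PostQueuedCompletionStatus(ThreadDispatchIOCP, 0,
                                       IOCP_SHUTDOWN, NULL);
        }
    }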

Split another entry that has long been missing from the website for 2.2
Cleaner split of 2.4 from 2.2 in vulnerability table, tie 2.2 to .34 release
Announce 2.2.34, close a chapter
Fix release date
note URL describing PROXY

clarify and typo fixes

prep for site refresh

Propose fix for PR 61142

Add logic to read the Upgrade header and use it in the response.

Useful when you are proxying to a server that has multiple upgrade protocols on the same IP/Port.

PR 61142

Point to 2.4.28-dev version of patch
not yet in 2.4.x

add comment

Simple TCPv4 binary protocol test

possible CPAN lib to help make this easier


Start of binary...

Add in TCP6 test as well...

mpm_winnt: Following up on r1801144, use the new accept_filter_e enum
values in a couple of missed places in winnt_accept().