httpd

mpm_winnt: Do not redefine the standard CONTAINING_RECORD() macro
in child.c.

This definition was added in https://svn.apache.org/r88498, perhaps
because not every version of the SDK contained it at that time. Since
then, the macro has been available starting with Windows 2000
(https://msdn.microsoft.com/en-us/library/windows/hardware/ff542043),
and any current version of the Windows SDK should contain it.
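
For reference, a minimal sketch of what the standard macro from <winnt.h>
does; the example_ctx structure here is hypothetical, purely for
illustration:

    #include <windows.h>
    #include <stdio.h>

    /* example_ctx is a hypothetical stand-in for a structure that embeds
     * a member handed out to other code (e.g. an OVERLAPPED for async I/O). */
    typedef struct example_ctx {
        int id;
        OVERLAPPED overlapped;
    } example_ctx;

    int main(void)
    {
        example_ctx ctx;
        OVERLAPPED *po = &ctx.overlapped;

        ctx.id = 42;
        /* CONTAINING_RECORD() recovers a pointer to the enclosing
         * structure from a pointer to one of its members. */
        example_ctx *recovered = CONTAINING_RECORD(po, example_ctx, overlapped);
        printf("id = %d\n", recovered->id); /* prints 42 */
        return 0;
    }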

mpm_winnt: Remove an obsolete comment in child.c explaining why the
declarations of the structures and functions to access the completion
contexts reside in a header file.

This no longer holds, as all the necessary functions and structures are
located in a single .c file (child.c).

mpm_winnt: Tweak the names of the variables in child.c which are used to
represent a queue of the completion contexts.

Starting with r1801655, the "queue" isn't really a queue, since all
access happens in LIFO order. So call it a "pool of completion contexts"
instead, adjust the names of all relevant variables, and tweak the
comments.

This patch renames
- qlock to ctxpool_lock,
- qhead to ctxpool_head, and
- qwait_event to ctxpool_wait_event.

mpm_winnt: Tweak the listener shutdown code to use a separate event
instead of the global variable (shutdown_in_progress).

This change has two purposes. First, it makes the listener threads that
are blocked waiting for a completion context exit immediately during
shutdown. Previously, such threads would only check for exit once per
second.

Second, it puts the child_main() function in charge of controlling the
listeners' life cycle. Previously, this relationship was obscured by the
fact that the listeners were also waiting for the global child exit_event.
With the new separate listener_shutdown_event, only the child_main()
function is responsible for shutting down the listeners, and I think that
this makes the code a bit clearer.

All the original behavior, including the special APLOG_DEBUG diagnostic
message when we fail to acquire a free completion context in 1 second,
is kept unchanged.
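
A minimal sketch of the waiting side under this scheme, reusing the event
names from the commit messages above; the helper itself is illustrative,
not the actual child.c code:

    #include <windows.h>

    static HANDLE listener_shutdown_event; /* set by child_main() at shutdown */
    static HANDLE ctxpool_wait_event;      /* signaled when a context is freed */

    /* Returns 1 when a completion context may be available, 0 on shutdown.
     * (The real code keeps a 1-second timeout so it can emit the APLOG_DEBUG
     * diagnostic mentioned above; that is omitted here.) */
    static int wait_for_free_context(void)
    {
        HANDLE events[2] = { listener_shutdown_event, ctxpool_wait_event };
        DWORD rv = WaitForMultipleObjects(2, events, FALSE, INFINITE);

        if (rv == WAIT_OBJECT_0)   /* listener_shutdown_event: exit now */
            return 0;
        return rv == WAIT_OBJECT_0 + 1;
    }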

Fix 2.2.34 release date
Note SHA256
Bumped to 2.2.34
mpm_winnt: Following up on r1801655, add a comment that explains the
reason to choose the LIFO processing order for completion contexts.

It would be better to keep this important information in the code,
instead of just having it in the log message.

Propose.
Makefile.in: merge typo fix from test-integration branch
Makefile.in: fix MPM_MODULES typo (in check-conf)
Makefile.in: fix MPM_MODULES typo
mpm_winnt: Advertise support for preshutdown notifications in the service,
and perform shutdown in response to SERVICE_CONTROL_PRESHUTDOWN.

The plain shutdown notification leaves only a small amount of time for the
service to finish (and the allowed amount of time has been shrinking with
every new version of Windows), so handling only it increases the chance of
the process being killed by the SCM instead of shutting down gracefully.
Handling the preshutdown control code extends this period and increases
the chances of finishing everything properly when the machine is rebooted
or shut down.
(See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683241)

Please note that although preshutdown notifications are only available
starting with Windows Vista, the code is compatible with previous versions
of Windows: the SCM ignores unknown SERVICE_ACCEPT codes and will still
send an ordinary SERVICE_CONTROL_SHUTDOWN on older Windows versions.
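
A hedged sketch of what advertising and handling preshutdown looks like in
a generic Windows service. The handler and helper names are hypothetical;
only the SERVICE_ACCEPT_PRESHUTDOWN and SERVICE_CONTROL_PRESHUTDOWN
constants are the Windows API pieces referenced above:

    #include <windows.h>

    static SERVICE_STATUS_HANDLE status_handle; /* from RegisterServiceCtrlHandlerEx() */
    static SERVICE_STATUS status;

    /* Hypothetical helper: kicks off the same graceful shutdown path that
     * SERVICE_CONTROL_SHUTDOWN uses (e.g. signals the child's exit event). */
    static void begin_shutdown(void)
    {
    }

    static DWORD WINAPI service_ctrl(DWORD code, DWORD type, LPVOID data, LPVOID ctx)
    {
        (void)type; (void)data; (void)ctx;

        switch (code) {
        case SERVICE_CONTROL_PRESHUTDOWN: /* Vista+: longer grace period */
        case SERVICE_CONTROL_SHUTDOWN:    /* pre-Vista SCMs still send this */
        case SERVICE_CONTROL_STOP:
            status.dwCurrentState = SERVICE_STOP_PENDING;
            SetServiceStatus(status_handle, &status);
            begin_shutdown();
            return NO_ERROR;
        default:
            return ERROR_CALL_NOT_IMPLEMENTED;
        }
    }

    static void advertise_controls(void)
    {
        /* The SCM ignores unknown SERVICE_ACCEPT_* flags, so setting
         * SERVICE_ACCEPT_PRESHUTDOWN is harmless on older Windows. */
        status.dwControlsAccepted = SERVICE_ACCEPT_STOP
                                  | SERVICE_ACCEPT_SHUTDOWN
                                  | SERVICE_ACCEPT_PRESHUTDOWN;
        SetServiceStatus(status_handle, &status);
    }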

mpm_winnt: Remove unused values of the io_state_e enum.

Submitted By: Ivan Zhakov <ivan {at} visualsvn.com>

mpm_winnt: Remove a duplicated comment in the child_main() function.

mpm_winnt: Use a LIFO stack instead of a FIFO queue to hold unused
completion contexts, as that may significantly reduce the memory usage.

This simple change can have a noticeable impact on the amount of memory
consumed by the child process in various cases. Every completion context
in the queue has an associated allocator, and every allocator keeps up to
its ap_max_mem_free limit of memory that is not given back to the
operating system. Once the queue grows, it cannot shrink back, and every
allocator in each of the queued completion contexts holds on to up to its
max_free amount of memory. The queue can only grow when the server has to
serve multiple connections concurrently.

With that in mind, consider a server that doesn't encounter much
concurrency most of the time, but has occasional spikes when it has to
serve multiple concurrent connections. During such spikes, the size of
the completion context queue grows.

The actual difference between LIFO and FIFO order shows up after such
spikes, when the server is back to light load and doesn't see a lot of
concurrency. With FIFO order, every completion context in the queue is
used in a round-robin manner, thus touching *every* available allocator
one by one and ultimately claiming up to (N * ap_max_mem_free) memory
from the OS. With LIFO order, only the completion contexts close to the
top of the stack are used and reused for subsequent connections. Hence,
only a small portion of the allocators is used, which prevents the other
allocators from unnecessarily acquiring memory from the OS (and keeping
it), and this reduces the overall memory footprint.

Please note that this change doesn't affect the worst-case behavior,
which is still (N * ap_max_mem_free) memory, but it tends to behave
better in practice, for the reasons described above.

Another thing worth considering is the new behavior when the OS decides
to swap out pages of the child process, for example in a close-to-OOM
condition. Handling every new connection after the swap requires the OS
to load the memory pages of the allocator from the completion context
used for that connection. With FIFO order, the completion contexts are
used one by one, and this causes page loads for every new connection.
With LIFO order, there is almost no swapping, since the same completion
context is reused for subsequent new connections.
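
A minimal sketch of such a LIFO pool, using the ctxpool_* names introduced
above; conn_ctx is a stand-in for the real completion context structure:

    #include <windows.h>

    /* conn_ctx stands in for mpm_winnt's completion context. */
    typedef struct conn_ctx {
        struct conn_ctx *next;
        /* ... per-connection allocator, OVERLAPPED, accept buffers ... */
    } conn_ctx;

    static CRITICAL_SECTION ctxpool_lock; /* InitializeCriticalSection() at startup */
    static conn_ctx *ctxpool_head;        /* top of the LIFO stack */

    /* A context released by a worker goes on top of the stack... */
    static void ctxpool_push(conn_ctx *ctx)
    {
        EnterCriticalSection(&ctxpool_lock);
        ctx->next = ctxpool_head;
        ctxpool_head = ctx;
        LeaveCriticalSection(&ctxpool_lock);
    }

    /* ...and the next connection reuses that same "hot" context, so the
     * allocators deeper in the stack stay cold and never grow. */
    static conn_ctx *ctxpool_pop(void)
    {
        conn_ctx *ctx;
        EnterCriticalSection(&ctxpool_lock);
        ctx = ctxpool_head;
        if (ctx)
            ctxpool_head = ctx->next;
        LeaveCriticalSection(&ctxpool_lock);
        return ctx;
    }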

mpm_winnt: Drop the APLOG_DEBUG diagnostic saying how many threads
are blocked on the I/O completion port during the shutdown.

Prior to r1801635, the shutdown code needed to know the number of blocked
threads, as it dispatched the same number of completion packets. But this
no longer holds, and the only reason we maintain the corresponding
g_blocked_threads variable is this debug diagnostic message.

Drop it in order to reduce the complexity of the quite critical code in
the winnt_get_connection() function and to reduce the number of global
variables.

mpm_winnt: Remove an unnecessary Sleep() in the winnt_accept() function.

This sleep occurred in a situation when:
- We don't have a free completion context in the queue
- We can't add one, as doing so would exceed the max_num_completion_contexts
  limit (all worker threads are busy)
- We have exceeded a 1 second timeout while waiting for one

In this case, the Sleep() call is unnecessary, as there is no intermittent
failure to wait out; rather, it is an ordinary situation with all workers
being busy. Calling Sleep() here can arguably even be considered harmful,
as it affects the fairness between the listeners that are blocked waiting
for a completion context.

So, instead of calling Sleep(), just check for a possible shutdown and
immediately retry acquiring a completion context. If all worker threads
are still busy, the retry will block in the same WaitForSingleObject()
call, which is fine.
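
A hedged sketch of the resulting acquire loop; ctxpool_pop(),
shutdown_pending() and the surrounding names are illustrative stand-ins
for the real winnt_accept() logic:

    #include <windows.h>

    static HANDLE ctxpool_wait_event;   /* signaled when a context is pushed */

    static void *ctxpool_pop(void);     /* LIFO pop, as sketched earlier */
    static int shutdown_pending(void);  /* hypothetical shutdown check */

    static void *acquire_completion_context(void)
    {
        for (;;) {
            void *ctx = ctxpool_pop();
            if (ctx)
                return ctx;

            /* All workers are busy: wait up to a second for a context to be
             * released. On timeout there is nothing to "wait out" with a
             * Sleep(); just check for shutdown and re-enter the wait. */
            WaitForSingleObject(ctxpool_wait_event, 1000);
            if (shutdown_pending())
                return NULL;
        }
    }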

mpm_winnt: Simplify the shutdown code that was waiting for multiple worker
thread handles in batches.

Starting with r1801636, there is no difference between ending the wait with
one or with multiple remaining threads, because we terminate the process if
at least one thread is still active when we hit the timeout.

Therefore, instead of making an effort to evenly distribute and batch the
handles with WaitForMultipleObjects(), we can just start from one end and
wait for one thread handle at a time.
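
A minimal sketch of the simplified join, assuming a plain array of thread
handles; the real code's timeout bookkeeping is omitted:

    #include <windows.h>

    /* Waits for the workers one handle at a time. Returns 1 if all exited,
     * 0 on the first timeout. (The real code would also account for time
     * already spent waiting, which is omitted here for brevity.) */
    static int join_workers(HANDLE *threads, DWORD count, DWORD timeout_ms)
    {
        DWORD i;
        for (i = 0; i < count; i++) {
            if (WaitForSingleObject(threads[i], timeout_ms) == WAIT_TIMEOUT)
                return 0; /* at least one thread is still running */
            CloseHandle(threads[i]);
        }
        return 1;
    }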

mpm_winnt: Avoid using TerminateThread() in case the shutdown routine
hits a timeout while waiting for the worker threads to exit.

Using TerminateThread() can have dangerous consequences such as deadlocks,
for example if the thread is terminated while holding a lock, or holding
the heap lock in the middle of HeapAlloc(), as these locks would not be
released. It can also corrupt the application state and cause a crash.
(See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686717)

Rework the code to call TerminateProcess() in the described circumstances
and leave the cleanup to the operating system.
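
A sketch of the fallback, building on the hypothetical join_workers()
helper above:

    #include <windows.h>

    static int join_workers(HANDLE *threads, DWORD count, DWORD timeout_ms);

    static void stop_workers_or_die(HANDLE *threads, DWORD count, DWORD timeout_ms)
    {
        if (!join_workers(threads, count, timeout_ms)) {
            /* Never TerminateThread(): a thread killed while holding, say,
             * the process heap lock would leave it locked forever. Kill the
             * whole process and let the OS reclaim locks, heaps and handles. */
            TerminateProcess(GetCurrentProcess(), 1);
        }
    }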

mpm_winnt: Make the shutdown faster by avoiding unnecessary Sleep()'s
when shutting down the worker threads.

Previously, the shutdown code posted a number of I/O completion packets
equal to the number of threads blocked on the I/O completion port. It
would then Sleep() until all these threads "acknowledged" the completion
packets by decrementing the global count of blocked threads.

A better way is to post a number of IOCP_SHUTDOWN completion packets equal
to the total number of threads and immediately proceed to the next step.
There is no need to block until the threads actually receive the
completion, as the shutdown process includes a separate step that waits
until the threads exit, and the new approach avoids an unnecessary delay.
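
A minimal sketch of the new wake-up step; IOCP_SHUTDOWN's actual value and
the management of the port handle are illustrative:

    #include <windows.h>

    #define IOCP_SHUTDOWN 0 /* stand-in for the real completion-key value */

    /* Post one shutdown packet per worker thread and return immediately;
     * a later shutdown step joins the threads, so no Sleep() handshake
     * with a blocked-thread counter is needed. */
    static void wake_all_workers(HANDLE completion_port, DWORD thread_count)
    {
        DWORD i;
        for (i = 0; i < thread_count; i++)
            PostQueuedCompletionStatus(completion_port, 0, IOCP_SHUTDOWN, NULL);
    }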

Split another entry that has long been missing from the website for 2.2
Cleaner split of 2.4 from 2.2 in vulnerability table, tie 2.2 to .34 release
Announce 2.2.34, close a chapter
Fix release date
note URL describing PROXY

clarify and typo fixes

prep for site refresh

Propose fix for PR 61142

Add logic to read the Upgrade header and use it in the response.

Use when you are proxying to a server that has multiple upgrade protocols
on the same IP/Port.

PR 61142