util_filter: both directions for setaside/reinstate/adopt logging.

ap_filter_{setaside,reinstate,adopt}() can be called by both input and output
filters, so adapt the confusing (always "out") logging.

Follow up to r1877785: ap_filter_should_yield() is not NULL safe :p

ap_filter_output_pending(): test whether each filter should yield after running.

Since running a filter may pass data to the next one, ap_filter_output_pending()
should test ap_filter_should_yield(f->next) after each f call, otherwise
it won't take into account new pending data in filters it just walked.
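As a rough sketch of the fixed walk (toy types, not the actual httpd API; run_filter() and should_yield() are illustrative stand-ins), re-checking f->next after each call catches data that was just handed downstream:

```c
/* Minimal stub model (not the httpd API): each filter may hold pending
 * data, and running it hands that data to the next filter in the chain. */
typedef struct filter {
    struct filter *next;
    int pending;          /* bytes (or buckets) still buffered here */
    int yield;            /* would this filter rather yield than block? */
} filter;

static void run_filter(filter *f)
{
    if (f->pending && f->next) {
        f->next->pending += f->pending;  /* data moves down the chain */
        f->pending = 0;
    }
}

/* NULL-safe, cf. the follow-up to r1877785 above. */
static int should_yield(filter *f)
{
    return f && f->yield;
}

/* Walk the chain like the fixed ap_filter_output_pending(): after running
 * each filter, re-check the *next* one, which may have just received data. */
static int output_pending(filter *chain)
{
    for (filter *f = chain; f; f = f->next) {
        run_filter(f);
        if (should_yield(f->next)) {
            return 1;  /* new pending data downstream: report it */
        }
    }
    return 0;
}
```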

util_filter: export ap_filter_adopt_brigade() since mod_ssl uses it.
util_filter: axe misleading AP_BUCKET_IS_MORPHING() macro and fix comments.

Morphing buckets are not only those with ->length == -1, so the macro is
misleading. Modify comments to talk about opaque buckets when length == -1,
and about morphing buckets (once) for opaque and FILE buckets.

core: add r->flushed flag and set it when the response is sent.

By setting EOR->r->flushed in the core output filter, allow one to determine at
log_transaction hook time whether the request has been fully flushed to the
network, or not (network issue, filter error, n-th pipelined response...).

Introduce the ap_bucket_eor_request() helper to get the request bound to an EOR
bucket, and use it in ap_core_output_filter() to mark the EOR's request just
before destroying it, after all the previous buckets have been sent.

While at it, rename the request_rec* member of struct ap_bucket_eor from "data"
to "r", which makes the code clearer (not to be confused with b->data).

Finally, add CustomLog format %F, showing "F" or "-" depending on r->flushed,
so admins can tell for each request whether it was fully flushed.
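As a hedged illustration (the "flushedlog" nickname and the surrounding Common Log Format items are just a typical example, not from the commit), the new %F item could be used like:

```apacheconf
# %F logs "F" if the response was fully flushed to the network, "-" otherwise.
LogFormat "%h %l %u %t \"%r\" %>s %b %F" flushedlog
CustomLog "logs/access_log" flushedlog
```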

core: handle morphing buckets setaside/reinstate and kill request core filter.

The purpose of ap_request_core_filter() is not clear; it seems to prevent
potential morphing buckets from going through AP_FTYPE_CONNECTION filters, which
would fail to set them aside (ENOTIMPL) and read them (unbounded) into memory.

This patch allows ap_filter_setaside_brigade() to set morphing buckets aside
by simply moving them, assuming they have the correct lifetime (either until
some further EOR, or the connection lifetime, or whatever). IOW, the module is
responsible for sending morphing buckets whose lifetime need not be changed
by the connection filters.

Now, since morphing buckets consume no memory until (apr_bucket_)read, like FILE
buckets, we don't account for them in flush_max_threshold either. This changes
ap_filter_reinstate_brigade() to only account for in-memory and EOR buckets.

Also, since the EOR bucket is sent only to c->output_filters once the request
is processed, when all the filters < AP_FTYPE_CONNECTION have done their job
and stopped retaining data (after the EOS bucket, if ever), we prevent misuse
of ap_filter_{setaside,reinstate}_brigade() outside connection filters by
returning ENOTIMPL. This is not the right API for request filters as of now.

Finally, ap_request_core_filter() and co can be removed.

Follow up to r1840265: really privatize ap_filter_{recycle,adopt_brigade}().

Move ap_filter_adopt_brigade()'s declaration to "server/core.h" (private).

For ap_filter_recycle(), make it static/internal to util_filter (renamed to
recycle_dead_filters(), which better fits what it does). It's now also called
unconditionally from ap_filter_input_pending(), which itself is always called
after request processing and from MPM event (as the input_pending hook).

Follow up to r1840149: core input filter pending data.

Since r1840149 ap_core_input_filter() can't use f->[priv->]bb directly, so
ap_filter_input_pending() stopped accounting for its pending data.

But ap_core_input_filter() can't (and doesn't need to) set aside its socket
bucket, so ap_filter_setaside_brigade() is not an option. This commit adds
ap_filter_adopt_brigade(), which simply moves the given buckets (brigade) into
f->priv->bb, and since this is not something to be done blindly (the buckets
need to have c->pool/bucket_alloc lifetime, which is the case in the core
filter) the function is not AP_DECLAREd/exported, so it can be used in core
only.

With ap_filter_adopt_brigade() and ap_filter_reinstate_brigade(), the core
input filter is now ap_filter_input_pending() friendly.

Also, ap_filter_recycle() is no longer part of the API (AP_DECLARE removed
too); there really is no point in calling it outside core code. MAJOR bumped
once again because of this.

util_filter: protect ap_filter_t private fields from external (ab)use.

Introduce the opaque struct ap_filter_private to move the ap_filter_t
"pending", "bb" and "deferred_pool" fields to the "priv" side of things.

This allows trusting values set internally (only!) in util_filter code, and
making useful assertions across the different function calls, along with the
usual nice extensibility property.

Likewise, the private struct ap_filter_conn_ctx in conn_rec (from r1839997)
now allows implementing the new ap_acquire_brigade() and ap_release_brigade()
functions, useful to get a brigade with c->pool's lifetime. They obsolete
ap_reuse_brigade_from_pool(), which is replaced where previously used.

Some comments added in ap_request_core_filter() regarding the lifetime of the
data it plays with, up to EOR...

MAJOR bumped (once again).

core: follow up to r1839997: some runtime optimizations.

We don't mind about cleaning up a connection filter when its pool is being
cleaned up already. For request filters, let pending_filter_cleanup() do
nothing if the given filter is not pending (anymore), which saves a
cleanup kill when the filter is removed.

Clear (zero) the reused filters (ap_filter_t) on reuse rather than on cleanup,
so that a single APR_RING_CONCAT() can be used to recycle dead_filters in one
go.

Always call ap_filter_recycle() in ap_filter_output_pending(), even if no
filter is pending, and while at it fix the silly
s/ap_filter_recyle/ap_filter_recycle/ typo.

core: follow up to r1839997: recycle request filters to a delayed ring first.

We want not only ap_filter_output_pending() to be able to access each pending
filter's *f after the EOR is destroyed, but also each request filter to do
the same until it returns.

So request filters are now always cleaned up into a dead_filters ring, which is
merged into spare_filters only when ap_filter_recycle() is called explicitly,
that is in ap_process_request_after_handler() and ap_filter_output_pending().
The former takes care of recycling at the end of the request, with any MPM,
while the latter keeps recycling during MPM event's write completion.
Axe spurious comment (added and addressed in r1839997).

core: always allocate filters (ap_filter_t) on f->c->pool.

When filters are allocated on f->r->pool, they may be destroyed at any time
underneath themselves, which makes it hard for them to be passed the EOR and
forward it (*f can't be dereferenced anymore once the EOR is destroyed, that
is before request filters return).

On the util_filter side, it also makes it impossible to flush pending request
filters when they have set aside the EOR, since f->bb can't be accessed after
it's passed to f->next.

So we always use f->c->pool to allocate filters and pending brigades, and to
avoid leaks with keepalive requests (long-lived connections handling multiple
requests), filters and brigades are recycled with a cleanup on f->r->pool.

Recycling is done (generically) with a spare data ring (void pointers), and a
filter(s) context struct is associated with the conn_rec to maintain the rings
per connection, that is:

struct ap_filter_conn_ctx {
    struct ap_filter_ring *pending_input_filters;
    struct ap_filter_ring *pending_output_filters;
    struct ap_filter_spare_ring *spare_containers,
                                /* ... */;
    int flushing;
};

MMN major bumped (again).

util_filter: split pending filters ring in two: input and output ones.

Pending input and output filters are now maintained separately in respectively
c->pending_input_filters and c->pending_output_filters, which improves
both performance and debuggability.

Also, struct ap_filter_ring is made opaque; it's only used by util_filter,
and this will allow us to later change it, e.g. to a dual ring+apr_hash, to
avoid quadratic search in ap_filter_prepare_brigade().

MMN major bumped due to the change in conn_rec (this is trunk-only code
anyway for now).

Axe some redundant conditions. PR 62549.

Follow up to r1837822: typo.
core: ap_filter_output_pending() to flush outermost filters first.

Since previous output filters may use ap_filter_should_yield() to determine
whether they should send more data (e.g. ap_request_core_filter), we need
to flush pending data from the core output filter first, and so on up the
chain.

Otherwise we may enter an infinite loop where ap_request_core_filter() does
nothing on ap_filter_output_pending() called from MPM event.

core: axe data_in_in/output_filter from conn_rec.

They were superseded by ap_filter_should_yield() and ap_run_in/output_pending()
in r1706669 and have had poor semantics since then (we can't maintain pending
semantics both per filter and for the whole connection).

Register ap_filter_input_pending() as the default input_pending hook (which
seems to have been forgotten in the first place).

On the MPM event side, we don't need to flush pending output data when the
connection has just been processed; ap_filter_should_yield() is lightweight and
enough to determine whether we should really enter write-completion state or go
straight to reading. ap_run_output_pending() is used only when write completion
is in place and needs to be completed before more processing.

util_filter: axe loglevel explicit checks already done by ap_log_cerror().

core: avoid double loop in ap_filter_output_pending().

Since ap_filter_output_pending() is looping on the relevant filters already,
we don't need to use ap_filter_should_yield() to check upstream filters.

core: integrate data_in_{in,out}put_filter to ap_filter_{in,out}put_pending().

Straightforward for ap_filter_input_pending(), since c->data_in_input_filter is
always checked wherever ap_run_input_pending(c) is.

For ap_filter_output_pending(), this allows setting c->data_in_output_filter in
ap_process_request_after_handler() and avoiding a useless flush from mpm_event.

core: core output filter optimizations.

The core output filter used to determine first if it needed to block before
trying to send its data (including set-aside data), and if so it did call
send_brigade_blocking().

This can be avoided by making send_brigade_nonblocking() send as much data as
possible (nonblocking), and only if data remain check whether they should be
flushed (blocking), according to the same ap_filter_reinstate_brigade()
heuristics, but afterward.

This allows both to simplify the code (axe send_brigade_blocking() and some
duplicated logic) and to optimize sends, since send_brigade_nonblocking() is
now given all the buckets and so can make use of scatter/gather (iovec) or the
NOPUSH option with the whole picture.

When sendfile is available and/or with fine tuning of FlushMaxThreshold (and
ReadBufferSize) from r1836032, one can now take advantage of modern network
speeds and bandwidth.

This commit also adds some APLOG_TRACE6 messages for output bytes (including
at mod_ssl level, since splitting happens there when it's active).

core: Add ReadBufferSize, FlushMaxThreshold and FlushMaxPipelined directives.

ReadBufferSize allows configuring the size of read buffers; for now it's
mainly used for file bucket reads (apr_bucket_file_set_buf_size), but it could
be used to replace AP_IOBUFSIZE in multiple places.

FlushMaxThreshold and FlushMaxPipelined allow configuring limits that were
previously hardcoded.

The former sets the maximum size above which pending data are forcibly flushed
to the network (blocking eventually), and the latter sets the number of
pipelined/pending responses above which they are flushed regardless of whether
a pipelined request is immediately available (zero disables pipelining).

Larger ReadBufferSize and FlushMaxThreshold can trade memory consumption for
performance with the capacity of today's networks.
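A minimal model of the two limits (names are illustrative, not the httpd internals): flush forcibly when either threshold is exceeded, and note how a FlushMaxPipelined of zero makes any pending pipelined response trigger a flush:

```c
/* Sketch (illustrative names, not the httpd internals): decide whether
 * pending output must be flushed now, blocking if need be. */
typedef struct {
    long flush_max_threshold;   /* max bytes retained in memory */
    int  flush_max_pipelined;   /* max pending pipelined responses */
} flush_conf;

static int should_block_flush(const flush_conf *conf,
                              long in_memory_bytes, int pipelined)
{
    if (in_memory_bytes > conf->flush_max_threshold)
        return 1;   /* too much memory retained: flush to the network */
    if (pipelined > conf->flush_max_pipelined)
        return 1;   /* too many responses pending: flush regardless */
    return 0;
}
```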

util_filter: Axe conn_rec->empty brigade.

Since it's for internal util_filter use, we shouldn't expose it in conn_rec and
can replace it with a pooled brigade provided by ap_reuse_brigade_from_pool().

util_filter: follow up to r1835640: pending_filter_cleanup() precedence.

Register pending_filter_cleanup() as a normal cleanup (not pre_cleanup) so
that the pending filters are still there on pool cleanup, and f->bb is set
to NULL where needed.

Then the is_pending_filter() check is moved where relevant.

util_filter: keep filters with set-aside buckets in order.

Reads or writes of a filter's pending data must happen in the same order as the
filter chain, so we can't use an apr_hash_t to maintain the pending filters
since it provides no guarantee on this matter.

Instead use an APR_RING maintained in c->pending_filters, and since both the
name (was c->filters) and the type changed, MAJOR is bumped (trunk-only code
anyway so far).

Guess at fixing win32 build regression on trunk introduced by r1734656

core: fix ap_request_core_filter()'s brigade lifetime.

The filter should pass everything up to and including EOR, then bail out.
For EOR it can't use a brigade created on r->pool, so retain one created
on c->pool in c->notes (this avoids leaking a brigade for each request
on the same connection).

util_filter: better ap_pass_brigade() vs empty brigades.