| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Any request from a client being relayed to an uplink server will have
its size extended to this value. It will also be applied to background
replication requests, if the BGR mode is FULL.
As request coalescing is currently very primitive, this setting should
usually be left disabled, and bgrWindowSize used instead, if appropriate.
If you enable this, set it to something large (1M+), or it might have
adverse effects.
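As a rough illustration (function and parameter names are hypothetical, not dnbd3's actual code), the effective request size could be computed like this:

```c
#include <stdint.h>

/* Hypothetical sketch of the setting described above: pad a relayed
 * request up to the configured minimum size. 0 means disabled. */
uint64_t effective_request_size(uint64_t requested, uint64_t configuredMin)
{
	if (configuredMin == 0 || requested >= configuredMin)
		return requested; /* setting disabled, or request already large */
	return configuredMin; /* extend the request to the configured size */
}
```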
|
| |
Incoming requests from clients might actually be prefetch jobs from
another downstream proxy. Don't do prefetching for those, as this would
cascade upwards in the proxy chain (a prefetch for a prefetch of a prefetch).
Incoming requests might also be background replication. Don't relay
those if we're not configured for background replication as well.
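The two checks could look roughly like this; the flag bits and names are illustrative, not the actual dnbd3 protocol encoding:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative request flags, not the real dnbd3 wire format. */
#define REQ_FLAG_PREFETCH 0x01 /* request was generated by a prefetch */
#define REQ_FLAG_BGR      0x02 /* request is background replication */

/* Don't prefetch on behalf of a request that is itself a prefetch or
 * replication job, or prefetches would cascade up the proxy chain. */
bool may_prefetch(uint8_t flags)
{
	return (flags & (REQ_FLAG_PREFETCH | REQ_FLAG_BGR)) == 0;
}

/* Only relay background replication requests if we replicate too. */
bool may_relay(uint8_t flags, bool bgrEnabled)
{
	return (flags & REQ_FLAG_BGR) == 0 || bgrEnabled;
}
```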
|
| |
There is a race condition where we process the next request from the
same client faster than the OS will schedule the async prefetch job,
rendering it a NOOP in the best case (request ranges match) or fetching
redundant data from the upstream server (prefetch range is larger than
actual request by client). Make prefetching synchronous to prevent this
race condition.
|
| |
This change restructures the source code directories, separates shared
from non-shared application code and adds CMake dependencies. These
dependencies allow tracking of changes and trigger a rebuild of
those build targets where changed files are involved.
WARNING: The DNBD3_SERVER_AFL build option is not supported yet and
should never be turned on.
|
| |
Still needs some cleanup and optimization: variable naming,
comments, etc.
|
| |
- Now uses linked lists instead of huge array
- Does prefetch data on client requests
- Can have multiple replication requests in-flight
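A minimal sketch of the linked-list request queue (structure and names are illustrative, not dnbd3's actual types):

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative singly linked queue replacing a fixed-size array. */
typedef struct uplink_req {
	uint64_t from, to;         /* requested byte range */
	struct uplink_req *next;
} uplink_req;

/* Push at head; several entries may be in flight at once. */
uplink_req *queue_push(uplink_req *head, uint64_t from, uint64_t to)
{
	uplink_req *e = malloc(sizeof(*e));
	e->from = from;
	e->to = to;
	e->next = head;
	return e;
}

int queue_length(const uplink_req *head)
{
	int n = 0;
	for (; head != NULL; head = head->next)
		n++;
	return n;
}
```

Unlike a fixed array, this has no hard cap on in-flight requests, at the cost of a malloc per entry.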
|
| |
If an image is incomplete, but has no upstream server that can be used
for replication, reload the cache map from disk periodically, in case
some other server instance is writing to the image.
|
| |
Cache maps will now be saved periodically, but only if either they have
a "dirty" bit set, which happens if any bits in the map get cleared
again (due to corruption), or if new data has been replicated from an
uplink server. This either means at least one byte received and 5
minutes have passed, or at least 500MB have been downloaded. The timer
currently runs every 20 seconds.
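The save condition can be sketched as below; the thresholds come from the message above, while function and parameter names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define SAVE_SECONDS 300                    /* 5 minutes */
#define SAVE_BYTES   (500ull * 1024 * 1024) /* 500MB */

/* Evaluated by the periodic timer (every 20 seconds). */
bool cache_map_needs_save(bool dirty, uint64_t bytesSinceSave,
		uint64_t secondsSinceSave)
{
	if (dirty)
		return true;  /* bits in the map were cleared again */
	if (bytesSinceSave == 0)
		return false; /* nothing new replicated from the uplink */
	return secondsSinceSave >= SAVE_SECONDS || bytesSinceSave >= SAVE_BYTES;
}
```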
|
| |
Tracking the "working" state of images with a single boolean is
insufficient given the different ways in which providing an image can
fail.
Introduce separate flags for different conditions, like "file not
readable", "file not writable", "no uplink server available", "file
content has changed".
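A sketch of such per-condition flags; the names follow the conditions listed above, but the actual dnbd3 identifiers may differ:

```c
#include <stdbool.h>

/* One bit per failure condition instead of a single boolean. */
enum image_problem {
	PROBLEM_UNREADABLE = 1 << 0, /* file not readable */
	PROBLEM_UNWRITABLE = 1 << 1, /* file not writable */
	PROBLEM_NO_UPLINK  = 1 << 2, /* no uplink server available */
	PROBLEM_CHANGED    = 1 << 3, /* file content has changed */
};

/* The old single boolean becomes "no flag is set". */
bool image_is_working(unsigned problems)
{
	return problems == 0;
}
```

The advantage is that callers can react to each condition specifically instead of only seeing an opaque "not working".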
|
| |
Initializing the signal in the thread led to a race
where we would init the uplink and queue a request for it
before the thread actually initialized the signal. This was not
harmful, but led to spurious warnings in the server's log.
|
| |
Keeping the uplink thread around forever even though we
disconnected from the upstream server seems wasteful. Get
rid of this and tear down the uplink entirely.
|
| |
Gets rid of a bunch of locking, especially the hot path in net.c where
clients are requesting data. Many clients using the same incomplete
image previously created a bottleneck here.
|
| |
First step towards less locking for proxy mode
|
| |
Alt-Server checks are now run using the threadpool, so we don't need a
queue and a dedicated thread anymore. The rtt history is now kept per
uplink, so that many uplinks can't overwhelm a shared history and make
its time window very short.
Also, the fail counter is now split in two: a global one for when the
server actually isn't reachable, and a local (per-uplink) one for when
the server is reachable but doesn't serve the requested image.
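The per-uplink state could be sketched like this; the history length and all names are illustrative:

```c
#include <stdint.h>

#define RTT_HISTORY_LEN 10

/* Per-uplink RTT history as a ring buffer, plus the local fail
 * counter. The "global" counter for actual unreachability would live
 * with the shared alt-server entry in the real code. */
typedef struct {
	uint32_t rttUs[RTT_HISTORY_LEN]; /* measured RTTs, microseconds */
	int rttIndex;
	int localFails; /* reachable, but doesn't serve this image */
} alt_rtt_t;

void rtt_record(alt_rtt_t *alt, uint32_t rtt)
{
	alt->rttUs[alt->rttIndex] = rtt;
	alt->rttIndex = (alt->rttIndex + 1) % RTT_HISTORY_LEN;
}
```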
|
| |
* Change link to uplink everywhere
* dnbd3_connection_t -> dnbd3_uplink_t
|
| |
Lock order is predefined in locks.h. Immediately bail out if a lock with
lower priority is obtained while the same thread already holds one with
higher priority.
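The check can be sketched as below; it is simplified to a single "highest held priority" per thread, whereas the real code tracks each held lock, and the function names are illustrative:

```c
#include <stdbool.h>

/* Highest lock priority currently held by this thread. */
static _Thread_local int highestHeld = 0;

/* Every lock has a fixed priority (as in locks.h). Acquiring a lock
 * whose priority is not higher than one already held inverts the
 * predefined order; the real code bails out (aborts) immediately. */
bool lock_order_ok(int priority)
{
	if (priority <= highestHeld)
		return false; /* order violation */
	highestHeld = priority;
	return true;
}

void lock_order_release_all(void)
{
	highestHeld = 0; /* simplified: all locks released */
}
```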
|
| |
Allow attaching in ULR_PROCESSING state; leave lower slots empty
to increase the chances of attaching in ULR_PROCESSING.
|
| |
Fix a race condition where the client thread tears down the client
struct, including the sendMutex, while the uplink thread is currently
holding the lock, trying to send data to the client.
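One way to sketch the fix (struct layout and names are illustrative): the client thread takes the sendMutex itself before invalidating the client, so any uplink thread still inside a send must have finished first.

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
	pthread_mutex_t sendMutex; /* guards sends to this client */
	int sock;
} client_t;

/* Called by the client thread during teardown, before freeing. */
void client_detach_socket(client_t *client)
{
	pthread_mutex_lock(&client->sendMutex); /* wait for in-flight send */
	client->sock = -1;                      /* uplink now sees it's gone */
	pthread_mutex_unlock(&client->sendMutex);
}
```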
|
| |
Just assume sane platforms offer smart mutexes
that have a fast-path with spinlocks internally
for locks that have little to no congestion.
In all other cases, mutexes should perform better
anyway.
|
| |
Early benchmarking shows that this is faster, since we don't
require another thread to wake up just to send out the request.
|
| |
In case we don't use background replication, a connection to an uplink
server can potentially stay around forever. This in turn would prevent
the uplink server from freeing the image as it appears to be in use.
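The shutdown decision can be sketched as follows; the timeout constant and all names are illustrative:

```c
#include <stdbool.h>

#define UPLINK_IDLE_TIMEOUT 60 /* seconds without queued requests */

/* Disconnect an uplink that is no longer doing useful work, so the
 * upstream server doesn't consider the image in use forever. */
bool uplink_is_idle(bool bgrEnabled, int pendingRequests, int idleSeconds)
{
	if (bgrEnabled || pendingRequests > 0)
		return false; /* replication or client requests still active */
	return idleSeconds >= UPLINK_IDLE_TIMEOUT;
}
```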
|