Initializing the signal in the thread led to a race where we would
init the uplink and queue a request for it before the thread had
actually initialized it. This was not harmful, but led to spurious
warnings in the server's log.
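A minimal sketch of the resulting init order (function and field names
are illustrative, not the actual dnbd3 code): set up the signal before
the thread that will use it exists.

    #include <pthread.h>

    static int uplink_init( dnbd3_uplink_t *uplink )
    {
        // Create the wake-up signal first, so a request queued right
        // after this call always finds a valid signal.
        uplink->signal = signal_new();
        if ( uplink->signal == NULL )
            return -1;
        // Only now start the worker thread; it can rely on the signal.
        return pthread_create( &uplink->thread, NULL, &uplink_mainloop, uplink );
    }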
|
Keeping the uplink thread around forever even though we disconnected
from the upstream server seems wasteful. Get rid of this and tear down
the uplink entirely.
|
Gets rid of a bunch of locking, especially the hot path in net.c where
clients are requesting data. Many clients using the same incomplete
image previously created a bottleneck here.
|
First step towards less locking for proxy mode
|
Alt-server checks are now run using the threadpool, so we don't need a
queue and dedicated thread anymore. The rtt history is now kept per
uplink, so many uplinks won't overwhelm the history and shrink its time
window. Also, the fail counter is now split up: a global one for when
the server actually isn't reachable, and a local (per-uplink) one for
when the server is reachable but doesn't serve the requested image.
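Roughly, the per-uplink bookkeeping described here could look like the
following (a sketch; field and constant names are made up, not the real
structures):

    #include <stdint.h>

    #define RTT_HISTORY 20

    typedef struct {
        // One rtt ring buffer per uplink, so a proxy with many uplinks
        // still keeps a useful measurement window per image.
        uint32_t rttHistory[RTT_HISTORY];
        int rttIndex;
        // Local fail count: server reachable, but doesn't have this image.
        int imageFails;
    } alt_local_state_t;

    typedef struct {
        // Global fail count: server not reachable at all; shared by all
        // uplinks talking to this alt-server.
        int globalFails;
    } alt_global_state_t;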
|
* Change link to uplink everywhere
* dnbd3_connection_t -> dnbd3_uplink_t
|
Lock order is predefined in locks.h. Immediately bail out if a lock with
lower priority is obtained while the same thread already holds one with
higher priority.
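Conceptually the check boils down to a thread-local record of the
highest-ranked lock currently held (a simplified sketch, not the actual
locks.h implementation; unlock bookkeeping omitted):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Every lock gets a fixed rank; locks must be taken in ascending order.
    #define LOCK_IMAGE_LIST   10
    #define LOCK_UPLINK_QUEUE 20
    #define LOCK_CLIENT_SEND  30

    static __thread int highestHeld = 0;

    static void mutex_lock_ordered( pthread_mutex_t *lock, int rank )
    {
        // Taking a lower-ranked lock while already holding a higher-ranked
        // one violates the predefined order -- bail out immediately.
        if ( highestHeld != 0 && rank <= highestHeld ) {
            fprintf( stderr, "Lock order violation: %d after %d\n", rank, highestHeld );
            abort();
        }
        pthread_mutex_lock( lock );
        highestHeld = rank;
    }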
|
Allow attaching in ULR_PROCESSING state; leave lower slots empty to
increase the chances of attaching while in ULR_PROCESSING.
|
Fix a race condition where the client thread tears down the client
struct including the sendMutex while the uplink thread is currently
holding the lock, trying to send data to the client.
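The gist of the fix, in simplified form (not the exact dnbd3 code): the
sender only touches the socket while holding sendMutex, and the client
thread invalidates the socket under that same mutex before destroying
it, after the client has been removed from the global client list so no
new senders can show up.

    // Uplink thread: send a reply under the client's sendMutex.
    pthread_mutex_lock( &client->sendMutex );
    if ( client->sock != -1 ) {
        send( client->sock, &reply, sizeof(reply), MSG_NOSIGNAL );
    }
    pthread_mutex_unlock( &client->sendMutex );

    // Client thread, during teardown (client already unlisted):
    pthread_mutex_lock( &client->sendMutex );
    close( client->sock );
    client->sock = -1;
    pthread_mutex_unlock( &client->sendMutex );
    pthread_mutex_destroy( &client->sendMutex );  // safe: no one can grab it anymore
    free( client );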
|
Just assume sane platforms offer smart mutexes that have a fast path
using spinlocks internally for locks with little to no contention. In
all other cases, mutexes should perform better anyway.
|
Early benchmarking shows that this is faster, since we don't
require another thread to wake up just to send out the request.
|
If we don't use background replication, a connection to an uplink
server can potentially stay around forever. This in turn would prevent
the uplink server from freeing the image, as it appears to be in use.
|
It didn't make too much sense that we only checked _maxPayload when the
reply arrived; simply don't forward a request where we already know we
won't handle the reply.
|
Gets rid of the lastBytesSent field as well as the stats lock per
client. Cleaned and split up the messy net_clientsToJson function while
at it.
|
This is a compromise: if you want to validate replicated data fairly
quickly, this option makes background replication kick in only when
there's a "dirty" 16M block, i.e. some blocks within a 16M block are
cached locally, but not all. Completing the block makes it possible
to validate its CRC32 checksum.
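With a cache map of one bit per 4 KiB block and CRC32 sums covering
16 MiB hash blocks, "dirty" simply means the cache-map bytes of a hash
block are neither all zeros nor all ones. A rough sketch of such a test
(constants and names assumed for illustration; the shorter last block
of an image is ignored):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define HASH_BLOCK_SIZE    (16ull * 1024 * 1024)
    #define MAP_BYTES_PER_HASH (HASH_BLOCK_SIZE / 4096 / 8)   // 512 bytes

    // True if hash block 'idx' is partially cached, i.e. worth completing
    // via background replication so its CRC32 can be verified.
    static bool hash_block_is_dirty( const uint8_t *cacheMap, int idx )
    {
        const uint8_t *map = cacheMap + (size_t)idx * MAP_BYTES_PER_HASH;
        bool any = false, all = true;
        for ( size_t i = 0; i < MAP_BYTES_PER_HASH; ++i ) {
            if ( map[i] != 0 )    any = true;
            if ( map[i] != 0xff ) all = false;
        }
        return any && !all;
    }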
|
Further improving cache handling, don't keep blocks
in cache that have been requested via background replication.
It's likely these aren't needed in the near future.
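One way to achieve this (an assumption about the mechanism, not taken
from the commit itself) is to hint the kernel that freshly written
replication data won't be read back soon:

    #include <fcntl.h>

    // After writing a background-replication range to the cache file,
    // tell the kernel the pages won't be needed soon. Offset and length
    // should be page aligned for the hint to be effective.
    posix_fadvise( cacheFd, offset, length, POSIX_FADV_DONTNEED );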
|
Now that we support sparse files, using just fdatasync isn't safe
anymore. Instead of handling both cases differently, just drop
fdatasync; the difference has probably been marginal all along anyway.
|
The cacheFd is now moved to the uplink data structure and will
only be handled by the uplink thread.
The integrity checker now supports checking all blocks of an
image. This will be triggered automatically whenever a check for
a single block fails.
Also, if a CRC check on startup fails, the image won't be discarded
anymore; instead, a full check will be initiated.
Furthermore, when calling image_updateCacheMap() on an image that
was previously complete, the cache map will now be re-initialized,
and a new uplink connection created.
|
In scenarios where the proxy is using an NFS server as
storage (for whatever crazy reason) or when the cacheFd
goes bad through e.g. a switchroot, try to re-open it
instead of just disabling caching forever.
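A sketch of that idea (error codes and helper names chosen for
illustration): if a cache write fails in a way that suggests the
descriptor itself is broken, re-open the file once and retry rather
than disabling caching.

    ssize_t ret = pwrite( uplink->cacheFd, buf, len, offset );
    if ( ret == -1 && ( errno == EBADF || errno == EIO ) ) {
        // cacheFd may have gone stale (NFS hiccup, switchroot, ...).
        int newFd = open( image->path, O_WRONLY );
        if ( newFd != -1 ) {
            close( uplink->cacheFd );
            uplink->cacheFd = newFd;
            ret = pwrite( uplink->cacheFd, buf, len, offset );
        }
    }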
|
Background replication will not kick in if there aren't at least
that many clients connected.
|
Will not preallocate images in this mode. Old images are only
deleted if the disk is full, determined by write() calls to
the cache file yielding ENOSPC or EDQUOT. In such a case,
the least recently used image(s) will be deleted to free
up at least 256MiB, and then the write() call will be repeated.
This *should* work somewhat reliably unless the cache partition
is ridiculously small. Performance might suffer a little, and
disk fragmentation might occur much faster than in prealloc
mode. Testing is needed.
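The write path described above would look roughly like this (a sketch
of the stated behavior; image_deleteLeastRecentlyUsed() is a made-up
helper name):

    #define FREE_TARGET (256ull * 1024 * 1024)   // free at least 256 MiB

    ssize_t ret = pwrite( cacheFd, buf, len, offset );
    if ( ret == -1 && ( errno == ENOSPC || errno == EDQUOT ) ) {
        // Disk or quota full: evict least recently used images until
        // enough space is freed, then repeat the exact same write.
        uint64_t freed = 0;
        while ( freed < FREE_TARGET ) {
            int64_t gain = image_deleteLeastRecentlyUsed();
            if ( gain <= 0 ) break;   // nothing left to delete
            freed += (uint64_t)gain;
        }
        ret = pwrite( cacheFd, buf, len, offset );
    }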
|
Rounding to 4k so caching works efficiently
This should now close #3231
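Expanding a requested range outward to 4k boundaries can be written as
(a generic sketch, not the exact code):

    #define BLOCK 4096ull

    uint64_t start = offset & ~(BLOCK - 1);                       // round down
    uint64_t end   = (offset + size + BLOCK - 1) & ~(BLOCK - 1);  // round up
    uint32_t reqSize = (uint32_t)( end - start );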
|
maxClients, maxImages, maxPayload, maxReplicationSize
Refs #3231
|
Just as in the fuse client, this will speed things up if
we have several alt-servers in our list which are not reachable.
|
Incremental updating of the global byte counter would only work when
background replication is disabled. Fix this.
|
Fewer writes to variables, more up-to-date values for uplinks.
|
We only used it for CRC-32, so the source tree now includes a
stripped-down version of the crc32 code from the zlib project.
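Assuming the in-tree copy keeps zlib's usual interface, computing the
checksum of one block looks like this:

    #include "crc32.h"   // stripped-down copy; header name assumed here

    uint32_t crc = (uint32_t)crc32( 0, NULL, 0 );            // initial value
    crc = (uint32_t)crc32( crc, buffer, (unsigned)length );  // feed one block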
|
conversion problems
|
Introduces new shared source unit timing.[ch]
Closes #3214