The idea is that for full image checks, we don't want to
pollute the fs cache with gigabytes of data that won't be
needed again soon. This would certainly hurt performance
on servers that don't have hundreds of GiB of RAM.
For single-block checks during replication, this has the
advantage that we don't check the block in memory before
it has ever hit the disk, but actually flush the data to disk,
drop it from the page cache, and only then read it back
from disk.
TODO: Might be worth making this a config option
The cacheFd is now moved to the uplink data structure and will
only be handled by the uplink thread.
The integrity checker now supports checking all blocks of an
image. This will be triggered automatically whenever a check of
a single block fails.
Also, if a CRC check on startup fails, the image won't be discarded
anymore; instead, a full check will be initiated.
Furthermore, when calling image_updateCacheMap() on an image that
was previously complete, the cache map will now be re-initialized,
and a new uplink connection created.
In scenarios where the proxy uses an NFS server as
storage (for whatever crazy reason), or when the cacheFd
goes bad through e.g. a switchroot, try to re-open it
instead of just disabling caching forever.
imageListLock was acquired twice in the same call stack, which
deadlocks if you're using non-recursive locks.
uplink if existent
Will not preallocate images in this mode. Old images are only
deleted once the disk is full, which is detected by write() calls
to the cache file failing with ENOSPC or EDQUOT. In that case,
the least recently used image(s) will be deleted to free
up at least 256 MiB, and then the write() call will be repeated.
This *should* work somewhat reliably unless the cache partition
is ridiculously small. Performance might suffer a little, and
disk fragmentation might occur much faster than in prealloc
mode. Testing is needed.
maxClients, maxImages, maxPayload, maxReplicationSize
Refs #3231
Refs #3231
Just as in the fuse client, this will speed things up if
we have several alt-servers in our list which are not reachable.
Introduce new flag in "select image" message to tell the uplink server
whether we have background replication enabled or not. Also, reject
a connecting proxy if it uses BGR but we don't, as this
would effectively force the image to be replicated locally too.
...there were quite a few format string errors as it turns out :/
We only used it for CRC-32, so now the source
tree includes a stripped-down version of the crc32
code from the zlib project.
conversion problems
Introduces new shared source unit timing.[ch]
Closes #3214
false positives
We steal 8 bits from the request offset to count hops when requests
get relayed by proxies. This still leaves plenty of bits for the
offset (56 bits, supporting images of up to 72 petabytes).
This is used to detect proxy cycles. The algorithm is not perfect,
but should prevent endless relaying of the same request.
This is backwards compatible with old clients and servers: the server
only ever sets the hop count in relayed requests if the upstream server
is using protocol version 3 or newer, and clients are automatically
upwards compatible since practically no image is larger than 72 PB,
so the newly introduced hop count field is always 0 even in requests
from old clients.
This will close the readFd of images that have no active clients
after some idle period (1 hour currently).
Prevents deleted images from taking up space until the server
is shut down.
This was a wrong decision made a long time ago, and it's broken in
certain scenarios (e.g. two servers serving from the same NFS mount).
It's also of limited use anyway, since it only supported ASCII and
would ignore umlauts, so blöd and BLÖD would still be considered
two different images.
So if you relied on this "feature" in any way, be careful when
updating.
A run with gprof revealed that background replication is a huge CPU hog.
The block selection was very slow and has been improved a lot.
Minor improvements were made to other functions that scan the cache map
of an image and are thus relatively slow.
So you can cancel image loading on startup via Ctrl-C
Using uint32_t for fileSize is not too clever :(
Now that we can automatically load unknown images from disk on request,
it makes sense to remove non-working images from the image list. On
future requests, we will look for them on disk again, which is nice
in case of temporary storage hiccups.
Also, some more or less related locking has been refined (loading images,
replicating images)
This will prevent hidden files from being exported to clients and also
prevents directory traversal attacks (e.g. ../../image.img)