If we cat the stats file right after starting the fuse client, its
contents will be cached forever. The exact cause is unknown, since the
timeout was specified as one second, but setting it to 0 seems to fix
this issue.
|
Our main signal handler sends SIGHUP to the receiver and background
threads, so if they block in some recv() or poll() they will get EINTR
and can check keepRunning.
|
arguments fixed, auto_cache in lowlevel activated; multi- and single-threaded modes are now supported
|
If we lost the connection and go check all known alt
servers, see if we have some pending request queued, and
if so, use its offset and length for the alt server probe.
This ensures that the server being tested is able to
satisfy at least the next request we'll send.
|
read() calls are supposed to return 0 when reading at EOF,
so properly mimic that behavior.
|
There might be more than one pending connect, but each call to
multiConnect() can return at most one fd, so we could be ignoring
some successful connections.
|
This speeds up initialization with a long list of servers
where the first entries in the list don't work, as the delay
between servers is now lowered to 100ms.
|
5 servers are considered "active", that is, their RTT is
measured regularly. If we have more than 5 servers and
one of the active ones is repeatedly unreachable, it will
swap position with one of the remaining servers.
|
...there were quite a few format string errors as it turns out :/
|
AF_INET luckily was "2" on all platforms checked, so no
interoperation problems there, but AF_INET6 differs between
Linux, BSD, Windows and possibly others, so map back and forth
between AF_INET/AF_INET6 and HOST_IP4/HOST_IP6 to fix this.
|
Previously, a fresh signalfd was created and destroyed for every read
request. This caused a lot of syscalls when reading. Now
there's a simple cache of currently up to six signalfds.
|
The server is always backwards compatible, and so should be the
client. If support for an older version is not kept up,
MIN_SUPPORTED_{CLIENT,SERVER} will be increased accordingly
so that the connection is dropped.
|
We steal 8 bits from the request offset to count hops when requests
get relayed by proxies. This still leaves plenty of bits for the
offset (56 bits, supporting images of up to 72 petabytes).
This is used to detect proxy cycles. The algorithm is not perfect
but should prevent endless relays of the same request.
This is backwards compatible with old clients and servers, as the server
only ever sets the hop count in relayed requests if the upstream server
is using protocol version 3 or newer, and clients are automatically
upwards compatible as there is practically no image larger than 72 PB,
so the newly introduced hop count field is always 0 even in requests
from old clients.
|
The new mechanism is supposed to complement the existing
RTT-based balancing, which remains better suited to reacting
to sudden, drastic changes in server latency.
The new approach doesn't directly consider RTT, but keeps
track of how many consecutive times each server was the best
server when measuring the RTTs. The higher that counter
rises, the more likely it becomes that the connection
switches over to that server.
E.g.: Server 1 measures 600µs each time,
server 2 measures 599µs each time.
After a while, in case server 1 is currently used, the
connection will eventually switch over to server 2. The
RTT-based mechanism would not switch over in this case,
since the threshold that prevents constant switching between
servers is never reached.
The new approach is meant to handle scenarios where the
network is generally fast, but it would still be beneficial
from a network topology point of view if clients switched
to the slightly faster server, assuming it is closer to the
client and thus fewer network segments are burdened.
|
Replace epoll with poll.
We no longer assume that a signal equals a single fd (eventfd on Linux).
The next step would be to create a version of signal.c that uses a pipe
internally, so it can be used on other platforms, like *BSD.
This is also the reason epoll was replaced with poll in uplink.c.
|
Before, we would wait endlessly if there is a pending
read request that doesn't get answered (e.g. because
the server went down). That meant you couldn't exit
the client in that case. Now we use a signal handler
to set a flag which causes the read to bail out and
return EIO.
|
entering fuse_main