| Commit message | Author | Age | Files | Lines |

This speeds up initialization with a long list of servers
where the first ones in the list don't work, as the delay
between connection attempts to consecutive servers is now
lowered to 100ms.
|
5 servers are considered "active", that is, they are being
measured for their RTT regularly. If we have more than 5
servers and one of the active ones is repeatedly unreachable,
it will swap positions with one of the inactive servers.
|
The server is always backwards compatible, and so should be the
client. If support for an older version is no longer kept up,
MIN_SUPPORTED_{CLIENT,SERVER} will be increased accordingly
so that the connection is dropped.
|
We steal 8 bits from the request offset to count hops when requests
get relayed by proxies. This still leaves plenty of bits for the
offset (56 bits, supporting images of up to 72 petabytes).
The hop count is used to detect proxy cycles. The algorithm is not
perfect, but should prevent endless relaying of the same request.
This is backwards compatible with old clients and servers, as the
server only ever sets the hop count in relayed requests if the
upstream server is using protocol version 3 or newer, and clients
are automatically upwards compatible as there is practically no
image larger than 72PB, so the newly introduced hop count field is
always 0 even in requests from old clients.
|
The new mechanism is supposed to complement the existing
RTT-based balancing: the RTT-averaging approach is better
suited to react to sudden, drastic changes in server latency,
while the new approach doesn't directly consider RTT, but
keeps track of how many consecutive times each server was the
best server when measuring the RTTs. The higher that counter
rises, the more likely it becomes that the connection
switches over to that server.
E.g.: server 1 measures 600µs each time, server 2 measures
599µs each time. After a while, if server 1 is currently
used, the connection will eventually switch over to server 2.
The RTT-based mechanism would not switch over in this case,
since the threshold that prevents constant switching between
servers is never reached.
The new approach is meant to handle scenarios where the
network is generally fast, but it would still be beneficial
from a network topology point of view if the clients switched
to the slightly faster server, assuming it is closer to the
client and thus fewer network segments are burdened.
|
Replace epoll with poll.
We no longer assume that a signal equals a single fd (eventfd on Linux).
The next step would be to create a version of signal.c that uses a pipe
internally, so it can be used on other platforms, like *BSD.
This is also the reason epoll was replaced with poll in uplink.c.
|
Before, we would wait endlessly if there was a pending
read request that didn't get answered (e.g. because
the server went down), which meant you couldn't exit
the client in that case. Now we use a signal handler
to set a flag, which causes the read to bail out and
return EIO.
|
entering fuse_main