Commit message | Author | Age | Files | Lines
|
Since we process everything sequentially, by the time we receive any
task management function, the task referenced in the request has already
been rejected (or processed), so for now we just reply OK so the
sequence numbers (SNs) don't get messed up.
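Under this sequential model a task-management handler collapses to an unconditional success reply. A minimal sketch of the idea (`handle_tmf` and the response constant are illustrative names, not the project's actual identifiers):

```c
#include <stdint.h>

/* Illustrative response code, loosely following the iSCSI TMF
 * "Function Complete" response (RFC 7143). */
#define TMF_RESPONSE_FUNCTION_COMPLETE 0x00

/* Since all PDUs are handled sequentially, any task referenced by an
 * incoming TMF request has already finished (or was rejected) by the
 * time we see the request, so the only sensible reply is "complete".
 * Replying (instead of silently dropping the PDU) keeps the status
 * sequence numbers in step with what the initiator expects. */
static uint8_t handle_tmf(uint32_t referenced_task_tag)
{
    (void)referenced_task_tag; /* task is long gone; nothing to abort */
    return TMF_RESPONSE_FUNCTION_COMPLETE;
}
```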
|
In functions that can handle multiple different structs, instead of
picking an arbitrary one as the pointer type in the function signature,
pass a uint8_t pointer and cast to the appropriate struct in the
sub-cases in the function body.
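The pattern looks roughly like this (the struct and opcode names are invented for illustration; they are not the project's real types):

```c
#include <stdint.h>

/* Hypothetical wire structs that share a leading opcode byte. */
struct msg_login  { uint8_t opcode; uint8_t flags; };
struct msg_logout { uint8_t opcode; uint8_t reason; };

#define OP_LOGIN  0x01
#define OP_LOGOUT 0x02

/* Instead of arbitrarily typing the parameter as one of the structs,
 * take a raw byte pointer and cast per sub-case. */
static int handle_pdu(const uint8_t *buf)
{
    switch (buf[0]) {
    case OP_LOGIN: {
        const struct msg_login *m = (const struct msg_login *)buf;
        return m->flags;
    }
    case OP_LOGOUT: {
        const struct msg_logout *m = (const struct msg_logout *)buf;
        return m->reason;
    }
    default:
        return -1; /* unknown opcode */
    }
}
```

Note that casting a raw buffer to a struct assumes compatible alignment and packing; the strictly portable variant is to memcpy into a local struct instead.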
|
... The other one is already named PHYSICAL
|
There were a lot of similarly named and redundant variables in various
structs named pos, len, xfer_len, des_xfer_pos, etc. It could be very
challenging to keep track of what information is stored where when
working with the code.
Attempt to minimize this by relying only on a single "len" variable in
the scsi_task struct.
This refactoring uncovered a few inconsistencies in how allocation
length limitations were handled, which were addressed here too.
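One consistent way to handle this, sketched with a made-up minimal struct (not the project's actual scsi_task layout): keep a single length on the task and clamp it once against the CDB's ALLOCATION LENGTH, rather than tracking pos/len/xfer_len variants separately.

```c
#include <stdint.h>

/* Hypothetical minimal task struct: one length field only. */
struct scsi_task_sketch {
    uint32_t len; /* bytes to transfer for this task */
};

/* Clamp the data the target wants to return against the initiator's
 * ALLOCATION LENGTH; a device must never return more than the
 * initiator asked for. */
static void task_set_len(struct scsi_task_sketch *t,
                         uint32_t data_len, uint32_t alloc_len)
{
    t->len = data_len < alloc_len ? data_len : alloc_len;
}
```

With a single clamp point like this, every handler that fills a response buffer reads the same `len`, which is what makes the inconsistencies visible.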
|
- Fold header/data handling into one function
This uncovered a few redundant checks
and makes it easier to reason about control flow
- Make all iscsi_pdu stack-allocated
This greatly reduces the number of malloc and free calls
during normal operation, lowers the risk of memory management
bugs, and potentially increases performance in high concurrency
scenarios.
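The stack-allocation point can be sketched as follows (struct and function names are illustrative, not the real iscsi_pdu definition):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical PDU with a fixed-size basic header segment. */
struct iscsi_pdu_sketch {
    uint8_t  bhs[48];  /* basic header segment */
    uint32_t data_len;
};

/* With heap allocation, every PDU costs a malloc/free pair plus an
 * allocation-failure path. A stack PDU's lifetime is simply the
 * enclosing scope: no free(), no leak, no use-after-free. */
static uint32_t process_one_pdu(const uint8_t *wire, uint32_t data_len)
{
    struct iscsi_pdu_sketch pdu;            /* stack-allocated */
    memcpy(pdu.bhs, wire, sizeof(pdu.bhs)); /* parse header in place */
    pdu.data_len = data_len;
    /* ...real handling would dispatch on the opcode here... */
    return pdu.data_len;
}
```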
|
This broke when sending the data segment (ds) payload was refactored to
avoid copying the buffer into the PDU's buffer before sending.
iscsi_connection_pdu_create took care of this before, but now that we
send the source buffer directly, pad the packet manually after sending
the buffer contents, if required.
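iSCSI data segments are padded to 4-byte boundaries, so after the unpadded source buffer goes out, up to three zero bytes must follow. The pad computation is just:

```c
#include <stddef.h>

/* Number of zero bytes needed to pad len up to a 4-byte boundary.
 * Yields 0 for already-aligned lengths, otherwise 1..3. */
static size_t pad_bytes(size_t len)
{
    return (4 - (len & 3)) & 3;
}
```

After writing `len` payload bytes to the socket, the sender would then write `pad_bytes(len)` bytes from a small zeroed buffer.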
|
Makes using the kernel's iscsi module simpler
|
Work towards simplifying the iscsi implementation has begun. Goals are:
- Simpler and easier to understand resource/lifecycle management of
allocations
- Single-threaded architecture, making locking unnecessary
- Moving as many allocations as possible to the stack
- Making the call-stack more shallow for easier tracking of code flow
|
- R2T handling
- Portal groups
- Fixes to login phase handling
- Code refactoring
- Remove obsolete PDU fields
- SCSI INQUIRY handler
- Persistent Reservation support
- Implement SCSI block based operations
- Implement other needed SCSI ops
- Disks are now reported as read-only
- Doxygen tags
- Bugfixes for crashes, memleaks, etc.
|
Also a couple of bug fixes and other minor improvements
|
- globals, portal groups, portals, ports, etc.
- Finally, fixed some bugs.
|
Bringing up a proxy that has been offline for some time will trigger
lots of loads and replication on other proxies when booting up again.
Just wait until a client actually needs an image before establishing
an uplink connection.
|
Any request from a client being relayed to an uplink server will have
its size extended to this value. It will also be applied to background
replication requests if the BGR mode is FULL.
As request coalescing is currently very primitive, this setting should
usually be left disabled, and bgrWindowSize used instead, if appropriate.
If you enable this, set it to something large (1M+), or it might have
adverse effects.
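The effect on a single relayed request can be sketched like this (function and parameter names are invented; clamping to the image size is assumed so the extended request never reads past the end):

```c
#include <stdint.h>

/* Extend a client request to at least min_size bytes, without reading
 * past the end of the image. Returns the (possibly grown) size. */
static uint32_t extend_request(uint64_t offset, uint32_t size,
                               uint32_t min_size, uint64_t image_size)
{
    if (size < min_size)
        size = min_size;                    /* grow to configured minimum */
    if (offset + size > image_size)
        size = (uint32_t)(image_size - offset); /* clamp at image end */
    return size;
}
```

This also shows why a small minimum is harmful: every tiny client read balloons into a `min_size` upstream transfer, so the value should be large enough that the extra bytes are likely to be useful soon.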
|
Incoming requests from clients might actually be prefetch jobs from
another downstream proxy. Don't do prefetching for those, as this would
cascade upwards in the proxy chain (a prefetch for a prefetch of a
prefetch).
Incoming requests might also be background replication. Don't relay
those if we're not configured for background replication as well.
|
There is a race condition where we process the next request from the
same client faster than the OS will schedule the async prefetch job,
rendering it a NOOP in the best case (request ranges match) or fetching
redundant data from the upstream server (prefetch range is larger than
actual request by client). Make prefetching synchronous to prevent this
race condition.
|
This change links the dnbd3-server with 'libatomic' to add support for
atomic operations not supported by hardware (especially 32-bit hardware
architectures, such as ARM). Thus the dnbd3-server can also run on a
Raspberry Pi 1 running Raspberry Pi OS.
Note that the dnbd3-server is still linked against libatomic even if the
hardware supports atomic operations; in this case, the compiler resolves
atomic operations and replaces them automatically with specific built-in
functions. This unnecessary linkage can be removed in the future once
GCC supports an upcoming option for automatic linking of libatomic
(--enable-autolink-libatomic).
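The operations that typically pull in libatomic on 32-bit targets are wide atomics, e.g. 64-bit counters via C11 `<stdatomic.h>`. A small example of the kind of code involved (the counter name is illustrative; on armv6 this needs `-latomic` at link time, while on 64-bit hardware it lowers to a single lock-free instruction):

```c
#include <stdatomic.h>
#include <stdint.h>

/* A 64-bit statistics counter. A 32-bit ARM core has no native 64-bit
 * atomic instruction, so the compiler emits a call into libatomic;
 * on 64-bit hardware the same code becomes a built-in instruction. */
static _Atomic uint64_t bytes_sent;

/* Atomically add n and return the new running total. */
static uint64_t account_bytes(uint64_t n)
{
    return atomic_fetch_add(&bytes_sent, n) + n;
}
```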
|
saveLoadAllCacheMaps() is called frequently, and a 'full' run can take
some time. If we only update the nextSave timestamp when we're done, we
might already have a concurrent call to the function, which will also do
a 'full' run, since the timestamp is not updated yet. This doesn't break
anything, but leads to even more disk activity, which is probably
already high, given that the previous run is not done yet.
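The fix amounts to publishing the next-run deadline before starting the slow work instead of after it. A condensed sketch with invented names (real concurrent code would update the timestamp with an atomic compare-and-swap):

```c
#include <stdbool.h>
#include <time.h>

#define SAVE_INTERVAL 90 /* seconds; illustrative value */

static time_t nextSave; /* deadline for the next full run */

/* Returns true if the caller should perform a full run. The deadline
 * is advanced *before* the long-running work begins, so a concurrent
 * caller arriving mid-run sees the updated timestamp and skips its
 * own redundant full run. */
static bool begin_full_run(time_t now)
{
    if (now < nextSave)
        return false;              /* too early, nothing to do */
    nextSave = now + SAVE_INTERVAL; /* publish deadline first */
    /* ...the long-running save/load work happens after this point... */
    return true;
}
```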