<feed xmlns='http://www.w3.org/2005/Atom'>
<title>openslx-ng/ipxe.git/src/arch/riscv/include, branch openslx</title>
<subtitle>Fork of ipxe; additional commands and features</subtitle>
<id>https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/riscv/include?h=openslx</id>
<link rel='self' href='https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/riscv/include?h=openslx'/>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/'/>
<updated>2025-07-11T11:23:51+00:00</updated>
<entry>
<title>[riscv] Create coherent DMA mapping of 32-bit address space on demand</title>
<updated>2025-07-11T11:23:51+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-11T11:00:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=2aacb346ca2e5325921871a114ee691880000e47'/>
<id>urn:sha1:2aacb346ca2e5325921871a114ee691880000e47</id>
<content type='text'>
Reuse the code that creates I/O device page mappings to create the
coherent DMA mapping of the 32-bit address space on demand, instead of
constructing this mapping as part of the initial page table.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Invalidate data cache on completed RX DMA buffers</title>
<updated>2025-07-10T13:39:07+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-10T13:33:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=bbabde8ff8182fcef738893c29a698783758a489'/>
<id>urn:sha1:bbabde8ff8182fcef738893c29a698783758a489</id>
<content type='text'>
The data cache must be invalidated twice for RX DMA buffers: once
before passing ownership to the DMA device (in case the cache happens
to contain dirty data that will be written back at an undefined future
point), and once after receiving ownership from the DMA device (in
case the CPU happens to have speculatively accessed data in the buffer
while it was owned by the hardware).

Only the used portion of the buffer needs to be invalidated after
completion, since we do not care about data within the unused portion.

Update the DMA API to include the used length as an additional
parameter to dma_unmap(), and add the necessary second cache
invalidation pass to the RISC-V DMA API implementation.
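
As an illustration only (not code from this commit), the two-pass
invalidation sequence might be sketched in C as follows, with
cache_invalidate() standing in for the real architecture-specific
cache operation and the function names chosen purely for this sketch:

```c
/* Stand-in for the architecture-specific cache invalidate operation */
static unsigned int invalidations;

static void cache_invalidate ( void *start, unsigned long len ) {
	/* Real implementation would issue per-cacheline invalidates */
	( void ) start;
	( void ) len;
	invalidations++;
}

/* Before handing the buffer to the device: discard any dirty lines
 * that might otherwise be written back over DMA-received data.
 */
void rx_dma_map ( void *buf, unsigned long len ) {
	cache_invalidate ( buf, len );
	/* ... transfer ownership of the buffer to the device ... */
}

/* After the device completes: discard any speculatively fetched
 * stale lines, covering only the used portion of the buffer.
 */
void rx_dma_unmap ( void *buf, unsigned long used ) {
	/* ... ownership has been returned by the device ... */
	cache_invalidate ( buf, used );
}
```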

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Add optimised TCP/IP checksumming</title>
<updated>2025-07-10T12:32:45+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-10T11:50:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=634d9abefbb896e0c563ede4b6a7df40d0948501'/>
<id>urn:sha1:634d9abefbb896e0c563ede4b6a7df40d0948501</id>
<content type='text'>
Add a RISC-V assembly language implementation of TCP/IP checksumming,
which is around 50x faster than the generic algorithm.  The main loop
checksums aligned xlen-bit words, using almost entirely compressible
instructions and accumulating carries in a separate register to allow
folding to be deferred until after all loops have completed.

Experimentation on a C910 CPU suggests that this achieves around four
bytes per clock cycle, which is comparable to the x86 implementation.
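
For illustration only (the actual implementation is RISC-V assembly):
the deferred-folding idea can be shown in C by using an accumulator
wider than 16 bits, analogous to accumulating carries in a separate
register, so that folding happens once after the summation loop rather
than on every addition.  The function name here is invented for the
sketch:

```c
/* Sum 16-bit words into a wide accumulator, deferring carry folding
 * until after the loop has completed.
 */
unsigned int chksum_sketch ( const unsigned short *data,
			     unsigned long count ) {
	unsigned long long sum = 0;

	/* Main loop: no per-addition carry handling needed */
	while ( count-- )
		sum += *data++;

	/* Fold all deferred carries down to 16 bits */
	while ( sum / 65536 )
		sum = ( sum % 65536 ) + ( sum / 65536 );

	return ( unsigned int ) sum;
}
```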

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Provide a DMA API implementation for RISC-V bare-metal systems</title>
<updated>2025-07-09T10:07:37+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-08T13:56:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=101ef74a6e7ed29af42f9d5432504b437e75374d'/>
<id>urn:sha1:101ef74a6e7ed29af42f9d5432504b437e75374d</id>
<content type='text'>
Provide an implementation of dma_map() that performs cache clean or
invalidation as required, and an implementation of dma_alloc() that
returns virtual addresses within the coherent mapping of the 32-bit
physical address space.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Support explicit cache management operations on I/O buffers</title>
<updated>2025-07-07T15:38:23+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-07T12:11:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=e223b325113670a29205c62b0e7cbdd75b36b934'/>
<id>urn:sha1:e223b325113670a29205c62b0e7cbdd75b36b934</id>
<content type='text'>
On platforms where DMA devices are not in the same coherency domain as
the CPU cache, it is necessary to be able to explicitly clean the
cache (i.e. force data to be written back to main memory) and
invalidate the cache (i.e. discard any cached data and force a
subsequent read from main memory).

Add support for cache management via the standard Zicbom extension or
the T-Head cache management operations extension, with the supported
extension detected on first use.

Support cache management operations only on I/O buffers, since these
are guaranteed to not share cachelines with other data.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Add support for detecting T-Head vendor extensions</title>
<updated>2025-07-07T15:38:23+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-07T12:03:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=6a75115a7496ffbeeb6e70019639813edfe3a9c3'/>
<id>urn:sha1:6a75115a7496ffbeeb6e70019639813edfe3a9c3</id>
<content type='text'>
Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Serialise MMIO accesses with respect to each other</title>
<updated>2025-06-22T08:45:09+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-06-22T08:26:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=25fa01822b7864ace7abe792e81662ea291e87fa'/>
<id>urn:sha1:25fa01822b7864ace7abe792e81662ea291e87fa</id>
<content type='text'>
iPXE drivers have been written with the implicit assumption that MMIO
writes are allowed to be posted but that an MMIO register read or
write after another MMIO register write will always observe the
effects of the first write.

For example: after having written a byte to the transmit holding
register (THR) of a 16550 UART, it is expected that any subsequent
read of the line status register (LSR) will observe a value consistent
with the occurrence of the write.

RISC-V does not seem to provide any ordering guarantees between
accesses to different registers within the same MMIO device.  Add
fences as part of the MMIO accessors to provide the assumed
guarantees.

Use "fence io, io" before each MMIO read or write to enforce full
serialisation of MMIO accesses with respect to each other.  This is
almost certainly more conservative than is strictly necessary.
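
As a rough illustration (names and shapes are invented for this
sketch, and the non-RISC-V branch exists only so the sketch compiles
elsewhere), fenced MMIO accessors might look like:

```c
#ifdef __riscv
/* Serialise against all other MMIO accesses before each access */
#define io_fence() __asm__ __volatile__ ( "fence io, io" : : : "memory" )
#else
#define io_fence() __sync_synchronize() /* stand-in off RISC-V */
#endif

static inline unsigned int readl_sketch ( volatile void *addr ) {
	io_fence();
	return *( ( volatile unsigned int * ) addr );
}

static inline void writel_sketch ( unsigned int value,
				   volatile void *addr ) {
	io_fence();
	*( ( volatile unsigned int * ) addr ) = value;
}
```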

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Maximise barrier effects of memory fences</title>
<updated>2025-06-12T11:33:46+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-06-12T11:26:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=41e65df19d4dc3a5f5621ce0e9d74f270d4efb3f'/>
<id>urn:sha1:41e65df19d4dc3a5f5621ce0e9d74f270d4efb3f</id>
<content type='text'>
The RISC-V "fence" instruction encoding includes bits for predecessor
and successor input and output operations, separate from read and
write operations.  It is up to the CPU implementation to decide what
counts as I/O space rather than memory space for the purposes of this
instruction.

Since we do not expect fencing to be performance-critical, keep
everything as simple and reliable as possible by using the unadorned
"fence" instruction (equivalent to "fence iorw, iorw").

Add a memory clobber to ensure that the compiler does not reorder the
barrier.  (The volatile qualifier seems to already prevent reordering
in practice, but this is not guaranteed according to the compiler
documentation.)
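
The pattern described above amounts to something like the following
sketch (the macro name is illustrative, and the non-RISC-V branch
exists only so the sketch compiles elsewhere):

```c
#ifdef __riscv
/* Unadorned "fence" is equivalent to "fence iorw, iorw"; the "memory"
 * clobber is what guarantees that the compiler itself will not
 * reorder memory accesses across the barrier.
 */
#define mb_sketch() __asm__ __volatile__ ( "fence" : : : "memory" )
#else
#define mb_sketch() __sync_synchronize() /* stand-in off RISC-V */
#endif
```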

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Support mapping I/O devices outside of the identity map</title>
<updated>2025-05-26T16:56:27+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-05-26T14:45:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=eae9a27542b020c4192c0435718f7c7d71251a4a'/>
<id>urn:sha1:eae9a27542b020c4192c0435718f7c7d71251a4a</id>
<content type='text'>
With the 64-bit paging schemes (Sv39, Sv48, and Sv57), we identity-map
as much of the physical address space as is possible.  Experimentation
shows that this is not sufficient to provide access to all I/O
devices.  For example: the Sipeed Lichee Pi 4A includes a CPU that
supports only Sv39, but places I/O devices at the top of a 40-bit
address space.

Add support for creating I/O page table entries on demand to map I/O
devices, based on the existing design used for x86_64 BIOS.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Support older SBI implementations</title>
<updated>2025-05-25T09:43:39+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-05-25T08:28:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=8d88870da59e93202d1cff4504f2fd2d2fa139cd'/>
<id>urn:sha1:8d88870da59e93202d1cff4504f2fd2d2fa139cd</id>
<content type='text'>
Fall back to attempting the legacy SBI console and shutdown calls if
the standard calls fail.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
</feed>
