<feed xmlns='http://www.w3.org/2005/Atom'>
<title>openslx-ng/ipxe.git/src/arch/riscv/core, branch openslx</title>
<subtitle>Fork of ipxe; additional commands and features</subtitle>
<id>https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/riscv/core?h=openslx</id>
<link rel='self' href='https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/riscv/core?h=openslx'/>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/'/>
<updated>2025-11-05T19:33:53+00:00</updated>
<entry>
<title>[ioapi] Allow iounmap() to be called for port I/O addresses</title>
<updated>2025-11-05T19:33:53+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-11-05T17:29:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=bd3982b63064590497d39d63e96d0a3f63149b73'/>
<id>urn:sha1:bd3982b63064590497d39d63e96d0a3f63149b73</id>
<content type='text'>
Allow code using the combined MMIO and port I/O accessors to safely
call iounmap() to unmap the MMIO or port I/O region.

In the virtual offset I/O mapping API as used for UEFI, 32-bit BIOS,
and 32-bit RISC-V SBI, iounmap() is a no-op anyway.  In 64-bit RISC-V
SBI, we have no concept of port I/O and so the issue is moot.

This leaves only 64-bit BIOS, where it suffices to simply do nothing
for any pages outside of the chosen MMIO virtual address range.

For symmetry, we implement the equivalent change in the very closely
related RISC-V page management code.
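
As a rough illustration (a Python sketch with hypothetical window constants; the real base and size live in the page table code):

```python
# Sketch: iounmap() ignores addresses outside the MMIO virtual window,
# so port I/O "addresses" may safely be passed to it.
PAGE_SIZE = 4096
MMIO_BASE = 0xFFFF_8000_0000_0000   # hypothetical virtual window base
MMIO_LEN = 0x4000_0000              # hypothetical window size

mapped_pages = set()

def iomap(phys):
    """Record a (hypothetical) MMIO mapping and return its virtual address."""
    virt = MMIO_BASE + phys % MMIO_LEN
    mapped_pages.add(virt - virt % PAGE_SIZE)
    return virt

def iounmap(addr):
    """Unmap an MMIO region; do nothing for port I/O addresses."""
    offset = (addr - MMIO_BASE) % 2**64
    if offset >= MMIO_LEN:
        return   # outside the MMIO window: a port I/O (or other) address
    mapped_pages.discard(addr - addr % PAGE_SIZE)
```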

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Correct page table stride calculation</title>
<updated>2025-10-27T14:22:16+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-10-27T14:04:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=0ddd83069363f411881442857d5971654638a986'/>
<id>urn:sha1:0ddd83069363f411881442857d5971654638a986</id>
<content type='text'>
Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Ensure coherent DMA allocations do not cross cacheline boundaries</title>
<updated>2025-07-11T12:50:41+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-11T12:50:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=434462a93ebe462cbd334cbf3431ff9c5b5a1aae'/>
<id>urn:sha1:434462a93ebe462cbd334cbf3431ff9c5b5a1aae</id>
<content type='text'>
Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Support the standard Svpbmt extension for page-based memory types</title>
<updated>2025-07-11T11:24:02+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-11T11:24:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=d539a420df919b3f9b808703b5f8d62bc485fe15'/>
<id>urn:sha1:d539a420df919b3f9b808703b5f8d62bc485fe15</id>
<content type='text'>
Set the appropriate Svpbmt type bits within page table entries if the
extension is supported.  Tested only in QEMU so far, since real
hardware supporting Svpbmt is not yet available.
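
The field manipulation amounts to the following (a Python sketch; the bit positions are from the Svpbmt specification, the helper name is hypothetical):

```python
# Svpbmt encodes the page-based memory type in PTE bits 62:61.
PBMT_SHIFT = 61
PBMT_PMA = 0    # use the platform's physical memory attributes (default)
PBMT_NC = 1     # non-cacheable, idempotent main memory
PBMT_IO = 2     # non-cacheable, non-idempotent I/O memory

def pte_set_pbmt(pte, pbmt):
    """Set the Svpbmt memory-type field (bits 62:61) in a page table entry."""
    low = pte % 2**PBMT_SHIFT          # bits 60:0 (PPN, permissions, etc.)
    top = pte // 2**63 * 2**63         # preserve bit 63 (the Svnapot N bit)
    return top + pbmt * 2**PBMT_SHIFT + low
```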

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Create coherent DMA mapping of 32-bit address space on demand</title>
<updated>2025-07-11T11:23:51+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-11T11:00:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=2aacb346ca2e5325921871a114ee691880000e47'/>
<id>urn:sha1:2aacb346ca2e5325921871a114ee691880000e47</id>
<content type='text'>
Reuse the code that creates I/O device page mappings to create the
coherent DMA mapping of the 32-bit address space on demand, instead of
constructing this mapping as part of the initial page table.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Use 1GB pages for I/O device mappings</title>
<updated>2025-07-11T11:05:52+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-11T10:30:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=0611ddbd1235be79e224e97ac30231801e58deda'/>
<id>urn:sha1:0611ddbd1235be79e224e97ac30231801e58deda</id>
<content type='text'>
All 64-bit paging schemes support at least 1GB "gigapages".  Use these
to map I/O devices instead of 2MB "megapages".  This reduces the
number of consumed page table entries, increases the visual similarity
of I/O remapped addresses to the underlying physical addresses, and
opens up the possibility of reusing the code to create the coherent
DMA map of the 32-bit address space.
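
The arithmetic behind both claims (a Python sketch with a hypothetical remap window base):

```python
# One gigapage PTE covers what would otherwise need 512 megapage PTEs.
GIGAPAGE = 2**30
MEGAPAGE = 2**21
assert GIGAPAGE // MEGAPAGE == 512

# Hypothetical I/O remap window base, gigapage-aligned.
IOMAP_BASE = 0xFFFF_FFC0_0000_0000

def iomap_virt(phys):
    """With gigapage granularity the low 30 bits of the physical address
    carry straight through, so the remapped address visually resembles it."""
    return IOMAP_BASE + phys

assert iomap_virt(0x1001_2000) % GIGAPAGE == 0x1001_2000 % GIGAPAGE
```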

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Invalidate data cache on completed RX DMA buffers</title>
<updated>2025-07-10T13:39:07+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-10T13:33:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=bbabde8ff8182fcef738893c29a698783758a489'/>
<id>urn:sha1:bbabde8ff8182fcef738893c29a698783758a489</id>
<content type='text'>
The data cache must be invalidated twice for RX DMA buffers: once
before passing ownership to the DMA device (in case the cache happens
to contain dirty data that will be written back at an undefined future
point), and once after receiving ownership from the DMA device (in
case the CPU happens to have speculatively accessed data in the buffer
while it was owned by the hardware).

Only the used portion of the buffer needs to be invalidated after
completion, since we do not care about data within the unused portion.

Update the DMA API to include the used length as an additional
parameter to dma_unmap(), and add the necessary second cache
invalidation pass to the RISC-V DMA API implementation.
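
The hazard can be modelled with a toy write-back cache (illustrative Python, not the iPXE code):

```python
# Toy write-back cache showing why an RX buffer must be invalidated both
# before and after device ownership (structure is illustrative only).
class ToyCache:
    def __init__(self, mem):
        self.mem = mem      # "main memory": a bytearray
        self.lines = {}     # addr mapped to (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = (self.mem[addr], False)  # fill, or speculate
        return self.lines[addr][0]

    def invalidate(self, addr, length):
        # Discard cached lines; dirty data is lost, never written back
        for a in range(addr, addr + length):
            self.lines.pop(a, None)

mem = bytearray(8)
cache = ToyCache(mem)
cache.write(0, 0x55)        # dirty data left over in the RX buffer
cache.invalidate(0, 8)      # pass 1: dirty line can no longer be written
                            # back over the device's incoming data
cache.read(1)               # CPU speculatively fills a line of the buffer
mem[0], mem[1] = 0xAA, 0xBB # device DMAs a two-byte frame into memory
cache.invalidate(0, 2)      # pass 2: invalidate only the used length
assert cache.read(0) == 0xAA
assert cache.read(1) == 0xBB   # the stale speculative fill was discarded
```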

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Add optimised TCP/IP checksumming</title>
<updated>2025-07-10T12:32:45+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-10T11:50:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=634d9abefbb896e0c563ede4b6a7df40d0948501'/>
<id>urn:sha1:634d9abefbb896e0c563ede4b6a7df40d0948501</id>
<content type='text'>
Add a RISC-V assembly language implementation of TCP/IP checksumming,
which is around 50x faster than the generic algorithm.  The main loop
checksums aligned xlen-bit words, using almost entirely compressible
instructions and accumulating carries in a separate register to allow
folding to be deferred until after all loops have completed.

Experimentation on a C910 CPU suggests that this achieves around four
bytes per clock cycle, which is comparable to the x86 implementation.
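
The deferred-folding strategy can be modelled in a few lines (a Python sketch of the algorithm, not the assembly itself):

```python
def tcpip_chksum(data):
    """Ones-complement TCP/IP checksum with carry folding deferred until
    after the main loop, mirroring the assembly implementation's strategy."""
    if len(data) % 2:
        data = data + b"\x00"            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += data[i] * 256 + data[i + 1]
    while total // 65536:                # fold the accumulated carries
        total = total % 65536 + total // 65536
    return total ^ 0xFFFF                # ones-complement of the sum
```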

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Provide a DMA API implementation for RISC-V bare-metal systems</title>
<updated>2025-07-09T10:07:37+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-08T13:56:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=101ef74a6e7ed29af42f9d5432504b437e75374d'/>
<id>urn:sha1:101ef74a6e7ed29af42f9d5432504b437e75374d</id>
<content type='text'>
Provide an implementation of dma_map() that performs cache clean or
invalidation as required, and an implementation of dma_alloc() that
returns virtual addresses within the coherent mapping of the 32-bit
physical address space.
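
The direction rule that dma_map() applies can be summarised as (illustrative Python; names are not iPXE's):

```python
# Sketch of the cache operation dma_map() must perform per transfer
# direction (constant and function names are illustrative).
DMA_TX = "to-device"
DMA_RX = "from-device"

def dma_map_cache_op(direction):
    """Before a device reads the buffer, dirty data must be cleaned
    (written back); before a device writes it, stale lines must be
    invalidated so that the CPU later rereads main memory."""
    if direction == DMA_TX:
        return "clean"
    return "invalidate"
```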

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[riscv] Support explicit cache management operations on I/O buffers</title>
<updated>2025-07-07T15:38:23+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-07-07T12:11:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=e223b325113670a29205c62b0e7cbdd75b36b934'/>
<id>urn:sha1:e223b325113670a29205c62b0e7cbdd75b36b934</id>
<content type='text'>
On platforms where DMA devices are not in the same coherency domain as
the CPU cache, it is necessary to be able to explicitly clean the
cache (i.e. force data to be written back to main memory) and
invalidate the cache (i.e. discard any cached data and force a
subsequent read from main memory).

Add support for cache management via the standard Zicbom extension or
the T-Head cache management operations extension, with the supported
extension detected on first use.

Support cache management operations only on I/O buffers, since these
are guaranteed not to share cachelines with other data.
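
The detect-on-first-use dispatch can be sketched as follows (Python; the probe itself is stubbed out, and the mnemonics are taken from the public extension specifications):

```python
# Sketch of detect-on-first-use dispatch between the standard Zicbom and
# T-Head cache management instructions.
_insns = {}

def _detect_extension():
    """Stand-in for probing which cache management extension the CPU has."""
    return "zicbom"   # assumption for this sketch

def cache_insn(op):
    """Return the mnemonic for 'clean' (force write-back to main memory)
    or 'inval' (discard cached data), detecting the extension on first use."""
    if not _insns:
        if _detect_extension() == "zicbom":
            _insns.update(clean="cbo.clean", inval="cbo.inval")
        else:   # T-Head vendor cache management operations
            _insns.update(clean="th.dcache.cpa", inval="th.dcache.ipa")
    return _insns[op]
```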

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
</feed>
