<feed xmlns='http://www.w3.org/2005/Atom'>
<title>openslx-ng/ipxe.git/src/arch/arm64, branch openslx</title>
<subtitle>Fork of ipxe; additional commands and features</subtitle>
<id>https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/arm64?h=openslx</id>
<link rel='self' href='https://git.openslx.org/openslx-ng/ipxe.git/atom/src/arch/arm64?h=openslx'/>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/'/>
<updated>2025-11-19T22:20:38+00:00</updated>
<entry>
<title>[arm] Avoid unaligned accesses for memcpy() and memset()</title>
<updated>2025-11-19T22:20:38+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-11-19T22:17:14+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=81496315f22f5ab90eddf8788fc8526eab1852f9'/>
<id>urn:sha1:81496315f22f5ab90eddf8788fc8526eab1852f9</id>
<content type='text'>
iPXE runs only in environments that support unaligned accesses to RAM.
However, memcpy() and memset() are also used to write to graphical
framebuffer memory, which may support only aligned accesses on some
CPU architectures such as ARM.

Restructure the 64-bit ARM memcpy() and memset() routines along the
lines of the RISC-V implementations, which split the region into
pre-aligned, aligned, and post-aligned sections.
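
The splitting arithmetic can be modelled as follows; the helper name and
the Python rendering are illustrative only, since the real routines are
written in AArch64 assembly:

```python
def split_aligned(dest, length, align):
    """Split a region into pre-aligned, aligned, and post-aligned
    byte counts, mirroring the RISC-V memcpy()/memset() structure.
    """
    # Bytes needed to bring the destination up to the next boundary
    pre = min((align - dest % align) % align, length)
    # Whole aligned units in the middle of the region
    aligned = ((length - pre) // align) * align
    # Remaining bytes after the last aligned unit
    post = length - pre - aligned
    return pre, aligned, post
```

The middle section can then be written using naturally aligned stores,
with the head and tail falling back to byte accesses.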

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[lkrn] Add basic support for the RISC-V Linux kernel image format</title>
<updated>2025-05-20T12:08:38+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-05-19T23:26:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=ecac4a34c7be8d1d81d21fa662460bf162d6a434'/>
<id>urn:sha1:ecac4a34c7be8d1d81d21fa662460bf162d6a434</id>
<content type='text'>
The RISC-V and AArch64 bare-metal kernel images share a common header
format, and require essentially the same execution environment: loaded
close to the start of RAM, entered with paging disabled, and passed a
pointer to a flattened device tree that describes the hardware and any
boot arguments.
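
The shared layout can be sketched in a few lines; the function and
variable names are hypothetical, with field offsets taken from the Linux
kernel's published boot protocol documentation for each architecture:

```python
ARM64_MAGIC = 0x644d5241      # "ARM\x64", at header offset 0x38
RISCV_MAGIC = 0x05435352      # "RSC\x05", at the same offset

def parse_lkrn_header(header):
    """Parse the fields common to the AArch64 and RISC-V image headers."""
    text_offset = int.from_bytes(header[8:16], "little")   # load offset from start of RAM
    image_size = int.from_bytes(header[16:24], "little")   # kernel memory footprint
    magic = int.from_bytes(header[56:60], "little")        # architecture magic number
    return text_offset, image_size, magic
```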

Implement basic support for executing bare-metal RISC-V and AArch64
kernel images.  The (trivial) AArch64-specific code path is untested
since we do not yet have the ability to build for any bare-metal
AArch64 platforms.  Constructing and passing an initramfs image is not
yet supported.

Rename the IMAGE_BZIMAGE build configuration option to IMAGE_LKRN,
since "bzImage" is specific to x86.  To retain backwards compatibility
with existing local build configurations, we leave IMAGE_BZIMAGE as
the enabled option in config/default/pcbios.h and treat IMAGE_LKRN as
a synonym for IMAGE_BZIMAGE when building for x86 BIOS.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[build] Allow for 32-bit and 64-bit versions of util/zbin</title>
<updated>2025-05-06T11:11:02+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-05-06T11:07:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=98646b9f016d9bff91a5c89f402aeb452ee7f84b'/>
<id>urn:sha1:98646b9f016d9bff91a5c89f402aeb452ee7f84b</id>
<content type='text'>
Parsing ELF data is simpler if we don't have to build a single binary
to handle both 32-bit and 64-bit ELF formats.

Allow for separate 32-bit and 64-bit binaries built from util/zbin.c
(as is already done for util/elf2efi.c).

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Expose shifted out bit from big integer shifts</title>
<updated>2025-02-13T15:25:35+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2025-02-13T14:18:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=5056e8ad936742ba410031cff14c0f72d87805fc'/>
<id>urn:sha1:5056e8ad936742ba410031cff14c0f72d87805fc</id>
<content type='text'>
Expose the bit shifted out as a result of shifting a big integer left
or right.
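
As a model of the interface (element width and names illustrative), a
one-bit right shift over a little-endian element array returns the bit
that falls off the least significant end:

```python
def bigint_shr(value, width):
    """Shift a little-endian array of width-bit elements right by one
    bit in place, returning the bit shifted out.
    """
    carry = 0
    # Walk from the most significant element downwards, feeding each
    # element's low bit into the element below it
    for k in reversed(range(len(value))):
        low = value[k] % 2
        value[k] = value[k] // 2 + carry * 2**(width - 1)
        carry = low
    return carry
```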

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Expose carry flag from big integer addition and subtraction</title>
<updated>2024-11-26T12:55:13+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-11-26T12:53:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=167a08f08928c7e469f50d5d364287abb784e99c'/>
<id>urn:sha1:167a08f08928c7e469f50d5d364287abb784e99c</id>
<content type='text'>
Expose the effective carry (or borrow) out flag from big integer
addition and subtraction, and use this to elide an explicit bit test
when performing x25519 reduction.
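
A model of the addition half of the interface (names and element width
illustrative; subtraction is analogous, with a borrow):

```python
def bigint_add(addend, value, width):
    """Add one little-endian array of width-bit elements to another in
    place, returning the carry out of the most significant element.
    """
    base = 2**width
    carry = 0
    for k in range(len(value)):
        total = value[k] + addend[k] + carry
        value[k] = total % base
        carry = total // base
    return carry
```

A caller such as the x25519 reduction can then branch on the returned
flag directly rather than re-testing a bit of the result.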

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Use architecture-independent bigint_is_set()</title>
<updated>2024-10-10T14:35:16+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-10-08T10:52:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=f78c5a763cc7bb2e2b7b437e7cc74a3efb876960'/>
<id>urn:sha1:f78c5a763cc7bb2e2b7b437e7cc74a3efb876960</id>
<content type='text'>
Every architecture uses the same implementation for bigint_is_set(),
and there is no reason to suspect that a future CPU architecture will
provide a more efficient way to implement this operation.

Simplify the code by providing a single architecture-independent
implementation of bigint_is_set().
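
The shared implementation amounts to an index split plus a single bit
test; in Python terms (element width illustrative):

```python
def bigint_is_set(value, bit, width):
    """Test whether a given bit of a little-endian array of width-bit
    elements is set.
    """
    index = bit // width          # which element holds the bit
    subbit = bit % width          # position of the bit within it
    return (value[index] // 2**subbit) % 2
```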

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Rename bigint_rol()/bigint_ror() to bigint_shl()/bigint_shr()</title>
<updated>2024-10-07T12:13:43+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-10-07T11:13:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=7e0bf4ec5cb3dd608d97735575e3f62252455878'/>
<id>urn:sha1:7e0bf4ec5cb3dd608d97735575e3f62252455878</id>
<content type='text'>
The big integer shift operations are misleadingly described as
rotations since the original x86 implementations are essentially
trivial loops around the relevant rotate-through-carry instruction.

The overall operation performed is a shift rather than a rotation.
Update the function names and descriptions to reflect this.

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Eliminate temporary carry space for big integer multiplication</title>
<updated>2024-09-27T12:51:24+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-09-26T15:24:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=3f4f843920afdc1d808a8b20354cf3eca481401a'/>
<id>urn:sha1:3f4f843920afdc1d808a8b20354cf3eca481401a</id>
<content type='text'>
An n-bit multiplication product may be added to up to two n-bit
integers without exceeding the range of a (2n)-bit integer:

  (2^n - 1)*(2^n - 1) + (2^n - 1) + (2^n - 1) = 2^(2n) - 1

Exploit this to perform big integer multiplication in constant time
without requiring the caller to provide temporary carry space.
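
The identity can be checked mechanically for any plausible element
width:

```python
# An n-bit product plus two n-bit addends exactly fills, and never
# exceeds, the range of a (2n)-bit integer
for n in (8, 16, 32, 64):
    max_n = 2**n - 1
    assert max_n * max_n + max_n + max_n == 2**(2 * n) - 1
```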

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[profile] Standardise return type of profile_timestamp()</title>
<updated>2024-09-24T14:40:45+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-09-24T13:49:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=5f7c6bd95bd6089473db3ba4f033584f5de0ee8a'/>
<id>urn:sha1:5f7c6bd95bd6089473db3ba4f033584f5de0ee8a</id>
<content type='text'>
All consumers of profile_timestamp() currently treat the value as an
unsigned long.  Only the elapsed number of ticks is ever relevant: the
absolute value of the timestamp is not used.  Profiling is used to
measure short durations that are generally fewer than a million CPU
cycles, for which an unsigned long is easily large enough.

Standardise the return type of profile_timestamp() as unsigned long
across all CPU architectures.  This allows 32-bit architectures such
as i386 and riscv32 to omit all logic associated with retrieving the
upper 32 bits of the 64-bit hardware counter, which simplifies the
code and allows riscv32 and riscv64 to share the same implementation.
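
Why discarding the upper 32 bits is harmless can be seen from modular
arithmetic; the helper below is illustrative only:

```python
def profile_elapsed(started, stopped, bits=32):
    """Elapsed ticks between two truncated timestamps.

    Correct for any duration below 2**bits ticks, even if the
    underlying 64-bit hardware counter wrapped between the samples.
    """
    return (stopped - started) % 2**bits

# Counter wraps past the top of the 32-bit range during the measurement
assert profile_elapsed(0xffffff00, 0x00000100) == 0x200
```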

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
<entry>
<title>[crypto] Use constant-time big integer multiplication</title>
<updated>2024-09-23T12:19:58+00:00</updated>
<author>
<name>Michael Brown</name>
</author>
<published>2024-09-19T15:23:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.openslx.org/openslx-ng/ipxe.git/commit/?id=3def13265d9475c861eed1a101584b761e97ae33'/>
<id>urn:sha1:3def13265d9475c861eed1a101584b761e97ae33</id>
<content type='text'>
Big integer multiplication currently performs immediate carry
propagation from each step of the long multiplication, relying on the
fact that the overall result has a known maximum value to minimise the
number of carries performed without ever needing to explicitly check
against the result buffer size.

This is not a constant-time algorithm, since the number of carries
performed will be a function of the input values.  We could make it
constant-time by always continuing to propagate the carry until
reaching the end of the result buffer, but this would introduce a
large number of redundant zero carries.

Require callers of bigint_multiply() to provide a temporary carry
storage buffer, of the same size as the result buffer.  This allows
the carry-out from the accumulation of each double-element product to
be accumulated in the temporary carry space, and then added in via a
single call to bigint_add() after the multiplication is complete.

Since the structure of big integer multiplication is identical across
all current CPU architectures, provide a single shared implementation
of bigint_multiply().  The architecture-specific operation then
becomes the multiplication of two big integer elements and the
accumulation of the double-element product.

Note that any intermediate carry arising from accumulating the lower
half of the double-element product may be added to the upper half of
the double-element product without risk of overflow, since the result
of multiplying two n-bit integers can never have all n bits set in its
upper half.  This simplifies the carry calculations for architectures
such as RISC-V and LoongArch64 that do not have a carry flag.
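
The structure can be modelled in Python (element width and names
illustrative; the real element width is the native word size, and the
inner multiply-accumulate is the architecture-specific part):

```python
def bigint_multiply(a, b, width):
    """Multiply two little-endian arrays of width-bit elements,
    deferring carry propagation to a single pass at the end.
    """
    n, base = len(a), 2**width
    result = [0] * (2 * n)
    carry = [0] * (2 * n)       # caller-provided temporary carry space
    for i in range(n):
        for j in range(n):
            product = a[i] * b[j]               # double-element product
            low, high = product % base, product // base
            acc = result[i + j] + low
            result[i + j] = acc % base
            # The intermediate carry is safely absorbed by the upper
            # half, which is at most base - 2
            acc = result[i + j + 1] + high + acc // base
            result[i + j + 1] = acc % base
            if acc // base:
                # Record the carry-out instead of propagating it now;
                # the topmost product never carries out, so this index
                # stays within the carry space
                carry[i + j + 2] += 1
    # Single deferred propagation, standing in for the final bigint_add()
    spill = 0
    for k in range(2 * n):
        acc = result[k] + carry[k] + spill
        result[k] = acc % base
        spill = acc // base
    return result
```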

Signed-off-by: Michael Brown &lt;mcb30@ipxe.org&gt;
</content>
</entry>
</feed>
