path: root/src/arch/x86/transitions
Commit message · Author · Date · Files · Lines (-/+)
* [librm] Ensure that inline code symbols are unique (Michael Brown, 2018-03-21; 1 file, -1/+1)

  Commit 6149e0a ("[librm] Provide symbols for inline code placed into other sections") may cause build failures due to duplicate label names if the compiler chooses to duplicate inline assembly code. Fix by using the "%=" special format string to include a guaranteed-unique number within the label name. The "%=" will be expanded only if constraints exist for the inline assembly. This fix therefore requires that all REAL_CODE() fragments use a (possibly empty) constraint list.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Add facility to provide register and stack dump for CPU exceptions (Michael Brown, 2018-03-18; 2 files, -7/+118)

  When DEBUG=librm_mgmt is enabled, intercept CPU exceptions and provide a register and stack dump, then drop to an emergency shell. Exiting from the shell will almost certainly not work, but this provides an opportunity to view the register and stack dump and carry out some basic debugging. Note that we can intercept only the first 8 CPU exceptions, since a PXE ROM is not permitted to rebase the PIC.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Fail gracefully if asked to ioremap() a zero length (Michael Brown, 2017-03-21; 1 file, -1/+2)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Conditionalize the workaround for the Tivoli VMM's SSE garbling (Laszlo Ersek, 2016-11-08; 1 file, -3/+11)

  Commit 71560d1 ("[librm] Preserve FPU, MMX and SSE state across calls to virt_call()") added FXSAVE and FXRSTOR instructions to iPXE. In KVM virtual machines, these instructions execute fine as long as the host CPU supports the "unrestricted_guest" feature (that is, it can virtualize big real mode natively). On older host CPUs however, KVM has to emulate big real mode, and it currently doesn't implement FXSAVE emulation.

  Upstream QEMU rebuilt iPXE at commit 0418631 ("[thunderx] Fix compilation with older versions of gcc"), which is a descendant of commit 71560d1 (see above). This was done in QEMU commit ffdc5a2 ("ipxe: update submodule from 4e03af8ec to 041863191"). The resultant binaries were bundled with the QEMU v2.7.0 release; see QEMU commit c52125a ("ipxe: update prebuilt binaries"). This distributed the iPXE workaround for the Tivoli VMM bug to a number of KVM users with old host CPUs, causing KVM emulation failures (guest crashes) for them while netbooting.

  Make the FXSAVE and FXRSTOR instructions conditional on a new feature test macro called TIVOLI_VMM_WORKAROUND. Define the macro by default. There is prior art for an assembly file including config/general.h: see arch/x86/prefix/romprefix.S. Also, TIVOLI_VMM_WORKAROUND seems to be a good fit for the "Obscure configuration options" section in config/general.h.

  Cc: Bandan Das <bsd@redhat.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Greg <rollenwiese@yahoo.com>
  Cc: Michael Brown <mcb30@ipxe.org>
  Cc: Michael Prokop <launchpad@michael-prokop.at>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Peter Pickford <arch@netremedies.ca>
  Cc: Radim Krčmář <rkrcmar@redhat.com>
  Ref: https://bugs.archlinux.org/task/50778
  Ref: https://bugs.launchpad.net/qemu/+bug/1623276
  Ref: https://bugzilla.proxmox.com/show_bug.cgi?id=1182
  Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1356762
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
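The shape of the change is roughly the following (a sketch only, with illustrative label names; the real librm.S diff differs in detail):

```asm
	/* Sketch of the conditionalisation (illustrative, not the
	 * literal librm.S code).  config/general.h defines
	 * TIVOLI_VMM_WORKAROUND by default. */
#include <config/general.h>

save_state:
#ifdef TIVOLI_VMM_WORKAROUND
	fxsave	(%esp)		/* save FPU/MMX/SSE state */
#endif
	/* ... mode transition and real-mode call ... */
restore_state:
#ifdef TIVOLI_VMM_WORKAROUND
	fxrstor	(%esp)		/* restore FPU/MMX/SSE state */
#endif
```

Users whose guests crash under big-real-mode emulation can undefine the macro in config/general.h to omit both instructions.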
* [librm] Preserve FPU, MMX and SSE state across calls to virt_call() (Michael Brown, 2016-05-02; 1 file, -1/+8)

  The IBM Tivoli Provisioning Manager for OS Deployment (also known as TPMfOSD, Rembo-ia32, or Rembo Auto-Deploy) has a serious bug in some older versions (observed with v5.1.1.0, apparently fixed by v7.1.1.0) which can lead to arbitrary data corruption.

  As mentioned in commit 87723a0 ("[libflat] Test A20 gate without switching to flat real mode"), Tivoli's NBP sets up a VMM and makes calls to the PXE stack in VM86 mode. This appears to be some kind of attempt to run PXE API calls inside a sandbox. The VMM is fairly sophisticated: for example, it handles our attempts to switch into protected mode and patches our GDT so that our protected-mode code runs in ring 1 instead of ring 0. However, it neglects to apply any memory protections. In particular, it does not enable paging and leaves us with 4GB segment limits. We can therefore trivially break out of the sandbox by simply overwriting the GDT (or by modifying any of Tivoli's VMM code or data structures).

  When we attempt to execute privileged instructions (such as "lidt"), the CPU raises an exception and control is passed to the Tivoli VMM. This may result in a call to Tivoli's memcpy() function. Tivoli's memcpy() function includes optimisations which use the SSE registers %xmm0-%xmm3 to speed up aligned memory copies. Unfortunately, the Tivoli VMM's exception handler does not save or restore %xmm0-%xmm3.

  The net effect of this bug in the Tivoli VMM is that any privileged instruction (such as "lidt") issued by iPXE may result in unexpected corruption of the %xmm0-%xmm3 registers. Even more unfortunately, this problem affects the code path taken in response to a hardware interrupt from the NIC, since that code path will call PXENV_UNDI_ISR. The net effect therefore becomes that any NIC hardware interrupt (e.g. due to a received packet) may result in unexpected corruption of the %xmm0-%xmm3 registers. If a packet arrives while Tivoli is in the middle of using its memcpy() function, then the unexpected corruption of the %xmm0-%xmm3 registers will result in unexpected corruption in the destination buffer. The net effect therefore becomes that any received packet may result in a 16-byte block of corruption somewhere in any data that Tivoli copied using its memcpy() function.

  We can work around this bug in the Tivoli VMM by saving and restoring the %xmm0-%xmm3 registers across calls to virt_call(). To work around the problem, we need to save registers before attempting to execute any privileged instructions, and ensure that we attempt no further privileged instructions after restoring the registers. This is less simple than it may sound. We can use the "movups" instruction to save and restore individual registers, but this will itself generate an undefined opcode exception if SSE is not currently enabled according to the flags in %cr0 and %cr4. We can't access %cr0 or %cr4 before attempting the "movups" instruction, because accessing a control register is itself a privileged instruction (which may therefore trigger corruption of the registers that we're trying to save).

  The best solution seems to be to use the "fxsave" and "fxrstor" instructions. If SSE is not enabled, then these instructions may fail to save and restore the SSE register contents, but will not generate an undefined opcode exception. (If SSE is not enabled, then we don't really care about preserving the SSE register contents anyway.)

  The use of "fxsave" and "fxrstor" introduces an implicit assumption that the CPU supports SSE instructions (even though we make no assumption about whether or not SSE is currently enabled). SSE was introduced in 1999 with the Pentium III (and added by AMD in 2001), and is an architectural requirement for x86_64. Experimentation with current versions of gcc suggests that it may generate SSE instructions even when using "-m32", unless an explicit "-march=i386" or "-mno-sse" is used to inhibit this. It therefore seems reasonable to assume that SSE will be supported on any hardware that might realistically be used with new iPXE builds.

  As a side benefit of this change, the MMX register %mm0 will now be preserved across virt_call() even in an i386 build of iPXE using a driver that requires readq()/writeq(), and the SSE registers %xmm0-%xmm5 will now be preserved across virt_call() even in an x86_64 build of iPXE using the Hyper-V netvsc driver.

  Experimentation suggests that this change adds around 10% to the number of cycles required for a do-nothing virt_call(), most of which are due to the extra bytes copied using "rep movsb". Since the number of bytes copied is a compile-time constant local to librm.S, we could potentially reduce this impact by ensuring that we always copy a whole number of dwords and so can use "rep movsl" instead of "rep movsb".

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
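The key property of "fxsave" can be sketched as follows (illustrative fragment, not the literal librm.S code): it needs a 512-byte save area aligned to 16 bytes, and unlike "movups" it does not raise #UD when the CR0/CR4 flags leave SSE disabled, so it is safe to execute before any control register can be inspected.

```asm
	/* Illustrative only: bracket the privileged section with
	 * fxsave/fxrstor.  The save area must be 512 bytes and
	 * 16-byte aligned; no #UD is raised even if SSE is currently
	 * disabled in CR0/CR4 (in which case we would not care about
	 * the SSE contents anyway). */
	subl	$512, %esp
	andl	$~15, %esp	/* align save area (stack restored later) */
	fxsave	(%esp)
	/* ... privileged instructions, mode transition, real-mode call ... */
	fxrstor	(%esp)
```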
* [librm] Reduce real-mode stack consumption in virt_call() (Michael Brown, 2016-04-29; 1 file, -59/+103)

  Some PXE NBPs are known to make PXE API calls with very little space available on the real-mode stack. For example, the Rembo-ia32 NBP from some versions of IBM's Tivoli Provisioning Manager for Operating System Deployment (TPMfOSD) will issue calls with the real-mode stack placed at 0000:03d2; this is at the end of the interrupt vector table and leaves only 498 bytes of stack space available before overwriting the hardware IRQ vectors. This limits the amount of state that we can preserve before transitioning to protected mode.

  Work around these challenging conditions by preserving everything other than the initial register dump in a temporary static buffer within our real-mode data segment, and copying the contents of this buffer to the protected-mode stack.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
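The pattern can be sketched as follows (names and lengths are invented for illustration; the real librm.S layout differs):

```asm
	.section ".data16", "aw", @progbits
vc_tmp:	.space	256		/* temporary buffer for saved state
				 * (hypothetical length) */

	/* In virt_call (sketch): while still in real mode, record
	 * state into the static buffer rather than pushing it onto
	 * the caller's possibly tiny stack ... */
	sidt	vc_tmp		/* for example */

	/* ... then, once in protected mode with a known-good stack,
	 * copy the buffer contents onto the protected-mode stack */
	movl	$256, %ecx
	rep movsb
```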
* [comboot] Support COMBOOT in 64-bit builds (Michael Brown, 2016-04-15; 1 file, -2/+64)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Do not unconditionally preserve flags across virt_call() (Michael Brown, 2016-03-12; 1 file, -2/+2)

  Commit 196f0f2 ("[librm] Convert prot_call() to a real-mode near call") introduced a regression in which any deliberate modification to the low 16 bits of the CPU flags (in struct i386_all_regs) would be overwritten with the original flags value at the time of entry to prot_call().

  The regression arose because the alignment requirements of the protected-mode stack necessitated the insertion of two bytes of padding immediately below the prot_call() return address. The solution chosen was to extend the existing "pushfl / popfl" pair to "pushfw;pushfl / popfl;popfw". The extra "pushfw / popfw" appears at first glance to be a no-op, but fails to take into account the fact that the flags restored by popfl may have been deliberately modified by the protected-mode function.

  Fix by replacing "pushfw / popfw" with "pushw %ss / popw %ss". While %ss does appear within struct i386_all_regs, any modification to the stored value has always been ignored by prot_call() anyway.

  The most visible symptom of this regression was that SAN booting would fail, since every INT 13 call would be chained to the original INT 13 vector.

  Reported-by: Vishvananda Ishaya <vishvananda@gmail.com>
  Reported-by: Jamie Thompson <forum.ipxe@jamie-thompson.co.uk>
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
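The broken and fixed padding sequences can be put side by side (a sketch, not the literal librm.S code):

```asm
	/* Regression (sketch): the extra popfw re-clobbers flags that
	 * the protected-mode function may have deliberately modified
	 * in struct i386_all_regs */
	pushfw			/* two bytes of stack-alignment padding */
	pushfl
	/* ... call protected-mode function, which may modify flags ... */
	popfl			/* restores the (possibly modified) flags */
	popfw			/* overwrites them with the entry-time value */

	/* Fix (sketch): pad with %ss instead; modifications to the
	 * stored %ss value are ignored by prot_call() anyway */
	pushw	%ss
	pushfl
	/* ... */
	popfl
	popw	%ss
```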
* [librm] Support ioremap() for addresses above 4GB in a 64-bit build (Michael Brown, 2016-02-26; 2 files, -0/+139)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Add support for running in 64-bit long mode (Michael Brown, 2016-02-24; 2 files, -52/+456)

  Add support for running the BIOS version of iPXE in 64-bit long mode. A 64-bit BIOS version of iPXE can be built using e.g.

      make bin-x86_64-pcbios/ipxe.usb
      make bin-x86_64-pcbios/8086100e.mrom

  The 64-bit BIOS version should appear to function identically to the normal 32-bit BIOS version. The physical memory layout is unaltered: iPXE is still relocated to the top of the available 32-bit address space. The code is linked to a virtual address of 0xffffffffeb000000 (in the negative 2GB as required by -mcmodel=kernel), with 4kB pages created to cover the whole of .textdata. 2MB pages are created to cover the whole of the 32-bit address space.

  The 32-bit portions of the code run with VIRTUAL_CS and VIRTUAL_DS configured such that truncating a 64-bit virtual address gives a 32-bit virtual address pointing to the same physical location.

  The stack pointer remains as a physical address when running in long mode (although the .stack section is accessible via the negative 2GB virtual address); this is done in order to simplify the handling of interrupts occurring while executing a portion of 32-bit code with flat physical addressing via PHYS_CODE().

  Interrupts may be enabled in either 64-bit long mode, 32-bit protected mode with virtual addresses, 32-bit protected mode with physical addresses, or 16-bit real mode. Interrupts occurring in any mode other than real mode will be reflected down to real mode and handled by whichever ISR is hooked into the BIOS interrupt vector table.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Rename prot_call() to virt_call() (Michael Brown, 2016-02-22; 2 files, -36/+36)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Do not preserve flags unnecessarily (Michael Brown, 2016-02-21; 1 file, -17/+11)

  No callers of prot_to_phys, phys_to_prot, or intr_to_prot require the flags to be preserved. Remove the unnecessary pushfl/popfl pairs.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Add phys_call() wrapper for calling code with physical addressing (Michael Brown, 2016-02-21; 1 file, -5/+204)

  Add a phys_call() wrapper function (analogous to the existing real_call() wrapper function) for calling code with flat physical addressing, and use this wrapper within the PHYS_CODE() macro. Move the relevant functionality inside librm.S, where it more naturally belongs.

  The COMBOOT code currently uses explicit calls to _virt_to_phys and _phys_to_virt. These will need to be rewritten if our COMBOOT support is ever generalised to be able to run in a 64-bit build. Specifically:

  - com32_exec_loop() should be restructured to use PHYS_CODE()
  - com32_wrapper.S should be restructured to use an equivalent of prot_call(), passing parameters via a struct i386_all_regs
  - there appears to be no need for com32_wrapper.S to switch between external and internal stacks; this could be omitted to simplify the design.

  For now, librm.S continues to expose _virt_to_phys and _phys_to_virt for use by com32.c and com32_wrapper.S. Similarly, librm.S continues to expose _intr_to_virt for use by gdbidt.S.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Generate page tables for 64-bit builds (Michael Brown, 2016-02-19; 1 file, -2/+181)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Prepare for long-mode memory map (Michael Brown, 2016-02-19; 1 file, -23/+42)

  The bulk of the iPXE binary (the .textdata section) is physically relocated at runtime to the top of the 32-bit address space in order to allow space for an OS to be loaded. The relocation is achieved with the assistance of segmentation: we adjust the code and data segment bases so that the link-time addresses remain valid.

  Segmentation is not available (for normal code and data segments) in long mode. We choose to compile the C code with -mcmodel=kernel and use a link-time address of 0xffffffffeb000000. This choice allows us to identity-map the entirety of the 32-bit address space, and to alias our chosen link-time address to the physical location of our .textdata section. (This requires the .textdata section to always be aligned to a page boundary.)

  We simultaneously choose to set the 32-bit virtual address segment bases such that the link-time addresses may simply be truncated to 32 bits in order to generate a valid 32-bit virtual address. This allows symbols in .textdata to be trivially accessed by both 32-bit and 64-bit code.

  There is no (sensible) way in 32-bit assembly code to generate the required R_X86_64_32S relocation records for these truncated symbols. However, subtracting the fixed constant 0xffffffff00000000 has the same effect as truncation, and can be represented in a standard R_X86_64_32 relocation record. We define the VIRTUAL() macro to abstract away this truncation operation, and apply it to all references by 32-bit (or 16-bit) assembly code to any symbols within the .textdata section.

  We define "virt_offset" for a 64-bit build as "the value to be added to an address within .textdata in order to obtain its physical address". With this definition, the low 32 bits of "virt_offset" can be treated by 32-bit code as functionally equivalent to "virt_offset" in a 32-bit build.

  We define "text16" and "data16" for a 64-bit build as the physical addresses of the .text16 and .data16 sections. Since a physical address within the 32-bit address space may be used directly as a 64-bit virtual address (thanks to the identity map), this definition provides the most natural access to variables in .text16 and .data16. Note that this requires a minor adjustment in prot_to_real(), which accesses .text16 using 32-bit virtual addresses.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Transition to protected mode within init_librm() (Michael Brown, 2016-02-19; 1 file, -103/+126)

  Long-mode operation will require page tables, which are too large to sensibly fit in our .data16 segment in base memory. Add a portion of init_librm() running in 32-bit protected mode to provide access to high memory. Use this portion of init_librm() to initialise the .textdata variables "virt_offset", "text16", and "data16", eliminating the redundant (re)initialisation currently performed on every mode transition as part of real_to_prot().

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Provide an abstraction wrapper for prot_call (Michael Brown, 2016-02-19; 2 files, -11/+7)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Convert prot_call() to a real-mode near call (Michael Brown, 2016-02-18; 2 files, -7/+6)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Simplify definitions for prot_call() and real_call() stack frames (Michael Brown, 2016-02-18; 1 file, -14/+17)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Use garbage-collectable section names (Michael Brown, 2016-02-18; 1 file, -22/+26)

  Allow unused sections of librm.o to be removed via --gc-sections.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Discard argument as part of return from real_call() (Michael Brown, 2016-02-17; 1 file, -1/+1)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [librm] Discard argument as part of return from prot_call() (Michael Brown, 2016-02-17; 2 files, -5/+2)

  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [bios] Add bin-x86_64-pcbios build platform (Michael Brown, 2016-02-16; 6 files, -0/+1526)

  Move most arch/i386 files to arch/x86, and adjust the contents of the Makefiles and the include/bits/*.h headers to reflect the new locations. This patch makes no substantive code changes, as can be seen using a rename-aware diff (e.g. "git show -M5").

  This patch does not make the pcbios platform functional for x86_64; it merely allows it to compile without errors.

  Signed-off-by: Michael Brown <mcb30@ipxe.org>