path: root/src/drivers
* [build] Mark known reviewed files as permitted for UEFI Secure Boot (Michael Brown, 2026-01-14, 55 files, -0/+55)
  Some past security reviews carried out for UEFI Secure Boot signing submissions have covered specific drivers or functional areas of iPXE. Mark all of the files comprising these areas as permitted for UEFI Secure Boot.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [build] Mark core files as permitted for UEFI Secure Boot (Michael Brown, 2026-01-14, 12 files, -0/+12)
  Mark all files used in a standard build of bin-x86_64-efi/snponly.efi as permitted for UEFI Secure Boot.

  These files represent the core functionality of iPXE that is guaranteed to have been included in every binary that was previously subject to a security review and signed by Microsoft. It is therefore legitimate to assume that at least these files have already been reviewed to the required standard multiple times.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [build] Mark existing files as explicitly forbidden for Secure Boot (Michael Brown, 2026-01-13, 79 files, -0/+115)
  The third-party 802.11 stack and NFS protocol code are known to include multiple potential vulnerabilities and are explicitly forbidden from being included in Secure Boot signed builds. This is currently handled at the per-directory level by defining a list of source directories (SRCDIRS_INSEC) that are to be excluded from Secure Boot builds.

  Annotate all files in these directories with FILE_SECBOOT() to convey this information to the new per-file Secure Boot permissibility check, and remove the old separation between SRCDIRS and SRCDIRS_INSEC.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [intel] Add PCI ID for I219-V and -LM 24 (Christian I. Nilsson, 2025-12-15, 1 file, -0/+2)
  Signed-off-by: Christian I. Nilsson <nikize@gmail.com>
* [crypto] Construct signatures using ASN.1 builders (Michael Brown, 2025-12-01, 1 file, -14/+4)
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [intel] Add PCI IDs for I225 and I226 chipsets (Bert Ezendam, 2025-11-26, 1 file, -0/+7)
  Identifiers are taken from the pci.ids database.
  Signed-off-by: Bert Ezendam <bert.ezendam@alliander.com>
* [pci] Allow probing permission to vary by range (Michael Brown, 2025-11-25, 2 files, -5/+17)
  Make pci_can_probe() part of the runtime selectable PCI I/O API, and defer this check to the per-range API.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [pci] Use linker tables for runtime selectable PCI APIs (Michael Brown, 2025-11-24, 2 files, -2/+258)
  Use the linker table mechanism to enumerate the underlying PCI I/O APIs, to allow PCIAPI_CLOUD to become architecture-independent code.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [uart] Make baud rate a property of the UART (Michael Brown, 2025-11-05, 1 file, -4/+3)
  Make the current baud rate (if specified) a property of the UART, to allow the default_serial_console() function to specify the default baud rate as well as the default UART device.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [pci] Disable decoding while setting a BAR value (Michael Brown, 2025-10-30, 1 file, -0/+10)
  Setting the base address for a 64-bit BAR requires two separate 32-bit writes to configuration space, and so will necessarily result in the BAR temporarily holding an invalid partially written address.

  Some hypervisors (observed on an AWS EC2 c7a.medium instance in eu-west-2) will assume that guests will write BAR values only while decoding is disabled, and may not rebuild MMIO mappings for the guest if the BAR registers are written while decoding is enabled. The effect of this is that MMIO accesses are not routed through to the device even though inspection from within the guest shows that every single PCI configuration register has the correct value. Writes to the device will be ignored, and reads will return the all-ones pattern that typically indicates a nonexistent device.

  With the ENA network driver now using low latency transmit queues, this results in the transmit descriptors being lost (since the MMIO writes to BAR2 never reach the device), which in turn causes the device to lock up as soon as the transmit doorbell is rung for the first time.

  Fix by disabling decoding of memory and I/O cycles while setting a BAR address (as we already do while sizing a BAR), so that the invalid partial address can never be decoded and so that hypervisors will rebuild MMIO mappings as expected.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Leave queue base address empty when creating a low latency queue (Michael Brown, 2025-10-28, 1 file, -1/+4)
  The queue base address is meaningless for a low latency queue, since the queue entries are written directly to the on-device memory.

  Any non-zero queue base address will be safely ignored by the hardware, but leaves open the possibility that future revisions could treat it as an error. Leave this field as zero, to match the behaviour of the Linux driver.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Limit receive queue size to work around hardware bugs (Michael Brown, 2025-10-17, 2 files, -12/+5)
  Commit a801244 ("[ena] Increase receive ring size to 128 entries") increased the receive ring size to 128 entries (while leaving the fill level at 16), since using a smaller receive ring caused unexplained failures on some instance types.

  The original hardware bug that resulted in that commit seems to have been fixed: experiments suggest that the original failure (observed on a c6i.large instance in eu-west-2) will no longer reproduce when using a receive ring containing only 16 entries (as was the case prior to that commit).

  Newer generations of the ENA hardware (observed on an m8i.large instance in eu-south-2) seem to have a new and exciting hardware bug: these instance types appear to use a hash of the received packet header to determine which portion of the (out-of-order) receive ring to use. If that portion of the ring happens to be empty (e.g. because only 32 entries of the 128-entry ring are filled at any one time), then the packet will be silently dropped.

  Work around this new hardware bug by reducing the receive ring size down to the current fill level of 32 entries. This appears to work on all current instance types (but has not been exhaustively tested).
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Increase transmit queue size to match receive fill level (Michael Brown, 2025-10-17, 1 file, -1/+1)
  Avoid running out of transmit descriptors when sending TCP ACKs by increasing the transmit queue size to match the increased receive fill level.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Add memory barrier after writing to on-device memory (Michael Brown, 2025-10-17, 1 file, -0/+1)
  Ensure that writes to on-device memory have taken place before writing to the doorbell register.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Increase receive fill level (Michael Brown, 2025-10-16, 1 file, -1/+1)
  Experiments suggest that at least some instance types (observed with c6i.large in eu-west-2) experience high packet drop rates with only 16 receive buffers allocated. Increase the fill level to 32 buffers.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Add support for low latency transmit queues (Michael Brown, 2025-10-16, 2 files, -17/+263)
  Newer generations of the ENA hardware require the use of low latency transmit queues, where the submission queues and the initial portion of the transmitted packet are written to on-device memory via BAR2 instead of being read from host memory.

  Detect support for low latency queues and set the placement policy appropriately. We attempt the use of low latency queues only if the device reports that it supports inline headers, 128-byte entries, and two descriptors prior to the inlined header, on the basis that we don't care about using low latency queues on older versions of the hardware since those versions will support normal host memory submission queues anyway.

  We reuse the redundant memory allocated for the submission queue as the bounce buffer for constructing the descriptors and inlined packet data, since this avoids needing a separate allocation just for the bounce buffer.

  We construct a metadata submission queue entry prior to the actual submission queue entry, since experimentation suggests that newer generations of the hardware require this to be present even though it conveys no information beyond its own existence.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Record supported device features (Michael Brown, 2025-10-16, 2 files, -2/+6)
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Cancel uncompleted transmit buffers on close (Michael Brown, 2025-10-16, 1 file, -2/+25)
  Avoid spurious assertion failures by ensuring that references to uncompleted transmit buffers are not retained after the device has been closed.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Map the on-device memory, if present (Michael Brown, 2025-10-15, 2 files, -17/+70)
  Newer generations of the ENA hardware require the use of low latency transmit queues, where the submission queues and the initial portion of the transmitted packet are written to on-device memory via BAR2 instead of being read from host memory.

  Prepare for this by mapping the on-device memory BAR. As with the register BAR, we may need to steal a base address from the upstream PCI bridge since the BIOS on some instance types (observed with an m8i.metal-48xl instance in eu-south-2) will fail to assign an address to the device.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Add descriptive messages for any admin queue command failures (Michael Brown, 2025-10-15, 1 file, -9/+31)
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [pci] Record prefetchable memory window for PCI bridges (Michael Brown, 2025-10-14, 1 file, -5/+23)
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [ena] Use pci_bar_set() to place device within bridge memory window (Michael Brown, 2025-10-14, 1 file, -1/+1)
  Use pci_bar_set() when we need to set a device base address (on instance types such as c6i.metal where the BIOS fails to do so), so that 64-bit BARs will be handled automatically.

  This particular issue has so far been observed only on 6th generation instances. These use 32-bit BARs, and so the lack of support for handling 64-bit BARs has not caused any observable issue.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [pci] Handle sizing of 64-bit BARs (Michael Brown, 2025-10-14, 2 files, -36/+76)
  Provide pci_bar_set() to handle setting the base address for a potentially 64-bit BAR, and rewrite pci_bar_size() to correctly handle sizing of 64-bit BARs.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Rearm interrupts unconditionally on every poll (Michael Brown, 2025-10-10, 1 file, -6/+7)
  Experimentation suggests that rearming the interrupt once per observed completion is not sufficient: we still see occasional delays during which the hardware fails to write out completions.

  As described in commit d2e1e59 ("[gve] Use dummy interrupt to trigger completion writeback in DQO mode"), there is no documentation around the precise semantics of the interrupt rearming mechanism, and so experimentation is the only available guide.

  Switch to rearming both TX and RX interrupts unconditionally on every poll, since this produces better experimental results.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Use raw DMA addresses in descriptors in DQO-QPL mode (Michael Brown, 2025-10-10, 2 files, -4/+5)
  The DQO-QPL operating mode uses registered queue page lists but still requires the raw DMA address (rather than the linear offset within the QPL) to be provided in transmit and receive descriptors. Set the queue page list base device address appropriately.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Report only packet completions for the transmit ring (Michael Brown, 2025-10-09, 2 files, -12/+3)
  The hardware reports descriptor and packet completions separately for the transmit ring. We currently ignore descriptor completions (since we cannot free up the transmit buffers in the queue page list and advance the consumer counter until the packet has also completed).

  Now that transmit completions are written out immediately (instead of being delayed until 128 bytes of completions are available), there is no value in retaining the descriptor completions. Omit descriptor completions entirely, and reduce the transmit fill level back down to its original value.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Use dummy interrupt to trigger completion writeback in DQO mode (Michael Brown, 2025-10-09, 2 files, -3/+37)
  When operating in the DQO operating mode, the device will defer writing transmit and receive completions until an entire internal cacheline (128 bytes) is full, or until an associated interrupt is asserted. Since each receive descriptor is 32 bytes, this will cause received packets to be effectively delayed until up to three further packets have arrived. When network traffic volumes are very low (such as during DHCP, DNS lookups, or TCP handshakes), this typically induces delays of up to 30 seconds and results in a very poor user experience.

  Work around this hardware problem in the same way as for the Intel 40GbE and 100GbE NICs: by enabling dummy MSI-X interrupts to trick the hardware into believing that it needs to write out completions to host memory.

  There is no documentation around the interrupt rearming mechanism. The value written to the interrupt doorbell does not include a consumer counter value, and so must be relying on some undocumented ordering constraints. Comments in the Linux driver source suggest that the authors believe that the device will automatically and atomically mask an MSI-X interrupt at the point of asserting it, that any further interrupts arriving before the doorbell is written will be recorded in the pending bit array, and that writing the doorbell will therefore immediately assert a new interrupt if needed.

  In the absence of any documentation, choose to rearm the interrupt once per observed completion. This is overkill, but is less impactful than the alternative of rearming the interrupt unconditionally on every poll.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Add missing memory barriers (Michael Brown, 2025-10-09, 1 file, -0/+3)
  Ensure that remainder of completion records are read only after verifying the generation bit (or sequence number).
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [intelxl] Use default dummy MSI-X target address (Michael Brown, 2025-10-09, 2 files, -34/+6)
  Use the default dummy MSI-X target address that is now allocated and configured automatically by pci_msix_enable().
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [pci] Map all MSI-X interrupts to a dummy target address by default (Michael Brown, 2025-10-09, 1 file, -0/+52)
  Interrupts as such are not used in iPXE, which operates in polling mode. However, some network cards (such as the Intel 40GbE and 100GbE NICs) will defer writing out completions until the point of asserting an MSI-X interrupt.

  From the point of view of the PCI device, asserting an MSI-X interrupt is just a 32-bit DMA write of an opaque value to an opaque target address. The PCI device has no way to know whether or not the target address corresponds to a real APIC.

  We can therefore trick the PCI device into believing that it is asserting an MSI-X interrupt, by configuring it to write an opaque 32-bit value to a dummy target address in host memory. This is sufficient to trigger the associated write of the completions to host memory.

  Allocate a dummy target address when enabling MSI-X on a PCI device, and map all interrupts to this target address by default.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Select preferred operating mode (Michael Brown, 2025-10-06, 1 file, -1/+16)
  Select a preferred operating mode from those advertised as supported by the device, falling back to the oldest known mode (GQI-QPL) if no modes are advertised.

  Since there are devices in existence that support only QPL addressing, and since we want to minimise code size, we choose to always use a single fixed ring buffer even when using raw DMA addressing. Having paid this penalty, we therefore choose to prefer QPL over RDA since this allows the (virtual) hardware to minimise the number of page table manipulations required.

  We similarly prefer GQI over DQO since this minimises the amount of work we have to do: in particular, the RX descriptor ring contents can remain untouched for the lifetime of the device and refills require only a doorbell write.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Add support for out-of-order queues (Michael Brown, 2025-10-06, 2 files, -95/+398)
  Add support for the "DQO" out-of-order transmit and receive queue formats. These are almost entirely different in format and usage (and even endianness) from the original "GQI" in-order transmit and receive queues, and arguably should belong to a completely different device with a different PCI ID. However, Google chose to essentially crowbar two unrelated device models into the same virtual hardware, and so we must handle both of these device models within the same driver.

  Most of the new code exists solely to handle the differences in descriptor sizes and formats. Out-of-order completions are handled via a buffer ID ring (as with other devices supporting out-of-order completions, such as the Xen, Hyper-V, and Amazon virtual NICs). A slight twist is that on the transmit datapath (but not the receive datapath) the Google NIC provides only one completion per packet instead of one completion per descriptor, and so we must record the list of chained buffer IDs in a separate array at the time of transmission.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Cancel pending transmissions when closing device (Michael Brown, 2025-10-06, 1 file, -6/+20)
  We cancel any pending transmissions when (re)starting the device since any transmissions that were initiated before the admin queue reset will not complete. The network device core will also cancel any pending transmissions after the device is closed. If the device is closed with some transmissions still pending and is then reopened, this will therefore result in a stale I/O buffer being passed to netdev_tx_complete_err() when the device is restarted.

  This error has not been observed in practice since transmissions generally complete almost immediately and it is therefore unlikely that the device will ever be closed with transmissions still pending.

  With out-of-order queues, the device seems to delay transmit completions (with no upper time limit) until a complete batch is available to be written out as a block of 128 bytes. It is therefore very likely that the device will be closed with transmissions still pending.

  Fix by ensuring that we have dropped all references to transmit I/O buffers before returning from gve_close().
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [bnxt] Handle link related async events (Joseph Wong, 2025-10-01, 3 files, -30/+105)
  Handle async events related to link speed change, link speed config change, and port phy config changes.
  Signed-off-by: Joseph Wong <joseph.wong@broadcom.com>
* [gve] Allow for descriptor and completion lengths to vary by mode (Michael Brown, 2025-09-30, 2 files, -17/+44)
  The descriptors and completions in the DQO operating mode are not the same sizes as the equivalent structures in the GQI operating mode. Allow the queue stride size to vary by operating mode (and therefore to be known only after reading the device descriptor and selecting the operating mode).
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Rename GQI-specific data structures and constants (Michael Brown, 2025-09-30, 2 files, -39/+48)
  Rename data structures and constants that are specific to the GQI operating mode, to allow for a cleaner separation from other operating modes.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Allow for out-of-order buffer consumption (Michael Brown, 2025-09-30, 2 files, -71/+106)
  We currently assume that the buffer index is equal to the descriptor ring index, which is correct only for in-order queues. Out-of-order queues will include a buffer tag value that is copied from the descriptor to the completion.

  Redefine the data buffers as being indexed by this tag value (rather than by the descriptor ring index), and add a circular ring buffer to allow for tags to be reused in whatever order they are released by the hardware.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Add support for raw DMA addressing (Michael Brown, 2025-09-29, 2 files, -6/+24)
  Raw DMA addressing allows the transmit and receive descriptors to provide the DMA address of the data buffer directly, without requiring the use of a pre-registered queue page list. It is modelled in the device as a magic "raw DMA" queue page list (with QPL ID 0xffffffff) covering the whole of the DMA address space.

  When using raw DMA addressing, the transmit and receive datapaths could use the normal pattern of mapping I/O buffers directly, and avoid copying packet data into and out of the fixed queue page list ring buffer. However, since we must retain support for queue page list addressing (which requires this additional copying), we choose to minimise code size by continuing to use the fixed ring buffer even when using raw DMA addressing.

  Add support for using raw DMA addressing by setting the queue page list base device address appropriately, omitting the commands to register and unregister the queue page lists, and specifying the raw DMA QPL ID when creating the TX and RX queues.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Add concept of a queue page list base device address (Michael Brown, 2025-09-29, 2 files, -6/+27)
  Allow for the existence of a queue page list where the base device address is non-zero, as will be the case for the raw DMA addressing (RDA) operating mode.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Set descriptor and completion ring sizes when creating queues (Michael Brown, 2025-09-29, 2 files, -5/+25)
  The "create TX queue" and "create RX queue" commands have fields for the descriptor and completion ring sizes, which are currently left unpopulated since they are not required for the original GQI-QPL operating mode.

  Populate these fields, and allow for the possibility that a transmit completion ring exists (which will be the case when using the DQO operating mode).
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Add concept of operating mode (Michael Brown, 2025-09-29, 2 files, -0/+42)
  The GVE family supports two incompatible descriptor queue formats:

  * GQI: in-order descriptor queues
  * DQO: out-of-order descriptor queues

  and two addressing modes:

  * QPL: pre-registered queue page list addressing
  * RDA: raw DMA addressing

  All four combinations (GQI-QPL, GQI-RDA, DQO-QPL, and DQO-RDA) are theoretically supported by the Linux driver, which is essentially the only public reference provided by Google.

  The original versions of the GVE NIC supported only GQI-QPL mode, and so the iPXE driver is written to target this mode, on the assumption that it would continue to be supported by all models of the GVE NIC. This assumption turns out to be incorrect: Google does not deem it necessary to retain backwards compatibility. Some newer machine types (such as a4-highgpu-8g) support only the DQO-RDA operating mode.

  Add a definition of operating mode, and pass this as an explicit parameter to the "configure device resources" admin queue command. We choose a representation that subtracts one from the value passed in this command, since this happens to allow us to decompose the mode into two independent bits (one representing the use of DQO descriptor format, one representing the use of QPL addressing).
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Remove separate concept of "packet descriptor" (Michael Brown, 2025-09-29, 2 files, -37/+25)
  The Linux driver occasionally uses the terminology "packet descriptor" to refer to the portion of the descriptor excluding the buffer address. This is not a helpful separation, and merely adds complexity. Simplify the code by removing this artificial separation.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [gve] Parse option list returned in device descriptor (Michael Brown, 2025-09-26, 2 files, -1/+79)
  Provide space for the device to return its list of supported options. Parse the option list and record the existence of each option in a support bitmask.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [bnxt] Add error recovery support (Joseph Wong, 2025-09-18, 3 files, -60/+662)
  Add support to advertise adapter error recovery support to the firmware. Implement error recovery operations if adapter fault is detected. Refactor memory allocation to better align with probe and open functions.
  Signed-off-by: Joseph Wong <joseph.wong@broadcom.com>
* [efi] Drag in MNP driver whenever SNP driver is present (Michael Brown, 2025-08-27, 1 file, -0/+4)
  The chainloaded-device-only "snponly" driver already drags in support for driving SNP, NII, and MNP devices, on the basis that the user generally doesn't care which UEFI API is used and just wants to boot from the same network device that was used to load iPXE. The multi-device "snp" driver already drags in support for driving SNP and NII devices, but does not drag in support for MNP devices.

  There is essentially zero code size overhead to dragging in support for MNP devices, since this support is always present in any iPXE application build anyway (as part of the code to download "autoexec.ipxe" prior to installing our own drivers).

  Minimise surprise by dragging in support for MNP devices whenever using the "snp" driver, following the same reasoning used for the "snponly" driver.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [bnxt] Update CQ doorbell type (Joseph Wong, 2025-08-13, 1 file, -2/+2)
  Update completion queue doorbell to a non-arming type, since polling is used.
  Signed-off-by: Joseph Wong <joseph.wong@broadcom.com>
* [dwgpio] Use fdt_reg() to get GPIO port numbers (Michael Brown, 2025-08-07, 2 files, -15/+9)
  DesignWare GPIO port numbers are represented as unsized single-entry regions. Use fdt_reg() to obtain the GPIO port number, rather than requiring access to a region cell size specification stored in the port group structure.

  This allows the field name "regs" in the port group structure to be repurposed to hold the I/O register base address, which then matches the common usage in other drivers.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [fdt] Provide fdt_reg() for unsized single-entry regions (Michael Brown, 2025-08-07, 1 file, -7/+3)
  Many region types (e.g. I2C bus addresses) can only ever contain a single region with no size cells specified. Provide fdt_reg() to reduce boilerplate in this common use case.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [dwgpio] Add driver for the DesignWare GPIO controller (Michael Brown, 2025-08-05, 3 files, -5/+420)
  Signed-off-by: Michael Brown <mcb30@ipxe.org>
* [fdt] Use phandle as device location (Michael Brown, 2025-08-04, 1 file, -0/+1)
  Consumption of phandles will be in the form of locating a functional device (e.g. a GPIO device, or an I2C device, or a reset controller) by phandle, rather than locating the device tree node to which the phandle refers.

  Repurpose fdt_phandle() to obtain the phandle value (instead of searching by phandle), and record this value as the bus location within the generic device structure.
  Signed-off-by: Michael Brown <mcb30@ipxe.org>